Sample records for sensing image processing

  1. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds an inexpensive, efficient computer cluster that implements MeanShift segmentation of remote sensing images in parallel on the MapReduce model. The approach not only maintains segmentation quality but also improves segmentation speed and better meets real-time requirements. The MapReduce-based parallel MeanShift segmentation algorithm therefore has clear practical significance and value.
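
    Below is a minimal, single-machine sketch of the idea, assuming scikit-learn is available: mean-shift segmentation is run independently on image tiles, much as a MapReduce mapper would before a merge step. The tile size and the joint spatial-color features are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from sklearn.cluster import MeanShift, estimate_bandwidth

      def segment_tile(tile):
          """Mean-shift segmentation of one RGB tile; returns a label image."""
          h, w, _ = tile.shape
          ys, xs = np.mgrid[0:h, 0:w]
          # Joint spatial-range features, as is usual for mean-shift segmentation.
          feats = np.column_stack([xs.ravel(), ys.ravel(),
                                   tile.reshape(-1, 3).astype(float)])
          bw = estimate_bandwidth(feats, quantile=0.1, n_samples=500)
          labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(feats)
          return labels.reshape(h, w)

      def map_phase(image, tile_size=64):
          """Emulates the map step: emit a (tile_key, label_image) pair per tile."""
          for r in range(0, image.shape[0], tile_size):
              for c in range(0, image.shape[1], tile_size):
                  yield (r, c), segment_tile(image[r:r + tile_size, c:c + tile_size])

      if __name__ == "__main__":
          img = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
          for key, lab in map_phase(img):
              print(key, int(lab.max()) + 1, "segments")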

  2. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

    Remote sensing images are widely used in AutoCAD, but AutoCAD lacks remote sensing image processing functions. In this paper, ObjectARX is used as the secondary development tool and combined with the Image Engine SDK to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  3. Research on assessment and improvement method of remote sensing image reconstruction

    NASA Astrophysics Data System (ADS)

    Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping

    2018-01-01

    Remote sensing image quality assessment and improvement is an important part of image processing. Generally, applying compressive sampling theory in a remote sensing imaging system compresses images while sampling, which improves efficiency. In this paper, a two-dimensional principal component analysis (2DPCA) method is proposed to reconstruct compressed remote sensing images and improve their quality; the reconstruction retains the useful information of the image and suppresses noise. The factors that influence remote sensing image quality are then analyzed, and parameters for quantitative evaluation are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analyzed, providing meaningful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation results fit human visual perception, and the proposed method has good application value in the field of remote sensing image processing.
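
    A minimal sketch of 2DPCA-based reconstruction under the usual formulation (the abstract does not give the authors' exact variant): row-space eigenvectors are learned from a set of images, and each image is projected onto the leading d components and reconstructed, which discards part of the noise.

      import numpy as np

      def fit_2dpca(images, d):
          """images: (n, h, w) array. Returns the top-d eigenvectors, shape (w, d)."""
          mean = images.mean(axis=0)
          G = np.zeros((images.shape[2], images.shape[2]))
          for A in images:
              D = A - mean
              G += D.T @ D                      # image covariance (row space)
          G /= len(images)
          vals, vecs = np.linalg.eigh(G)        # eigenvalues in ascending order
          return vecs[:, ::-1][:, :d]           # keep the top-d eigenvectors

      def reconstruct(image, X):
          """Project onto the 2DPCA basis and back: A_hat = (A X) X^T."""
          return (image @ X) @ X.T

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          clean = rng.random((20, 64, 64))
          noisy = clean + 0.1 * rng.standard_normal(clean.shape)
          X = fit_2dpca(noisy, d=16)
          rec = reconstruct(noisy[0], X)
          print("relative residual:", np.linalg.norm(rec - noisy[0]) / np.linalg.norm(noisy[0]))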

  4. A Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.

    2015-07-01

    Various sensors on airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other applications. However, it is challenging to efficiently store, query and process such big data because it is both data- and computing-intensive. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox and MapReduce, these remote sensing images can be processed directly and in parallel in a scalable computing environment. The experimental results show that the proposed framework can efficiently manage and process such big remote sensing data.
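
    A hedged sketch of how such a pipeline could look as a Hadoop Streaming mapper, assuming each input line carries a local path to one image chunk and that the Orfeo Toolbox command-line application otbcli_BandMath (with its -il/-out/-exp options) is installed; the line format and the NDVI-like band expression are assumptions, not the authors' configuration.

      #!/usr/bin/env python3
      import subprocess
      import sys

      def process(path):
          """Run one OTB application on a local image chunk and return the output path."""
          out = path.replace(".tif", "_ndvi.tif")
          # Band indices are an assumption; adjust to the actual sensor layout.
          cmd = ["otbcli_BandMath", "-il", path, "-out", out,
                 "-exp", "(im1b4-im1b3)/(im1b4+im1b3+0.000001)"]
          subprocess.run(cmd, check=True)
          return out

      if __name__ == "__main__":
          for line in sys.stdin:
              path = line.strip()
              if path:
                  # Emit key<TAB>value pairs, as Hadoop Streaming expects.
                  print(f"{path}\t{process(path)}")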

  5. Geometric correction of synchronous scanned Operational Modular Imaging Spectrometer II hyperspectral remote sensing images using spatial positioning data of an inertial navigation system

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaohu; Neubauer, Franz; Zhao, Dong; Xu, Shichao

    2015-01-01

    High-precision geometric correction of airborne hyperspectral remote sensing images is a difficult problem, and conventional correction methods based on selecting ground control points are not well suited to airborne hyperspectral imagery. An inertial measurement unit combined with a differential global positioning system (IMU/DGPS) is therefore introduced into the optical scanning system to correct synchronously scanned Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing images. Attitude parameters synchronized with OMIS II were first obtained from the IMU/DGPS. Second, coordinate conversion and flight attitude parameter calculations were conducted. Third, according to the imaging principle of OMIS II, a mathematical correction was applied and the corrected image pixels were resampled. Better image processing results were then achieved.
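
    A simplified direct-georeferencing sketch of the correction principle: build a rotation matrix from the IMU/DGPS attitude angles, rotate each pixel's line of sight into the mapping frame, and intersect it with the terrain. The rotation order, scanner geometry and flat-terrain assumption are simplifications, not the authors' full OMIS II model.

      import numpy as np

      def rot_matrix(roll, pitch, yaw):
          """Rotation from the sensor frame to the local mapping frame (assumed order)."""
          cr, sr = np.cos(roll), np.sin(roll)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
          Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
          Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
          return Rz @ Ry @ Rx

      def ground_point(pos, roll, pitch, yaw, scan_angle, terrain_z=0.0):
          """Intersect one scan pixel's look vector with a horizontal terrain plane."""
          look_sensor = np.array([0.0, np.sin(scan_angle), -np.cos(scan_angle)])
          look_map = rot_matrix(roll, pitch, yaw) @ look_sensor
          t = (terrain_z - pos[2]) / look_map[2]
          return pos + t * look_map

      if __name__ == "__main__":
          pos = np.array([0.0, 0.0, 3000.0])          # platform position in metres
          print(ground_point(pos, np.radians(1.2), np.radians(-0.8),
                             np.radians(5.0), np.radians(10.0)))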

  6. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    PubMed

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. As remote sensing images grow in hyperspectral, spatial and temporal resolution, using FFT technology to efficiently process huge remote sensing images has become a critical step and a research hot spot in current image processing technology. The FFT, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in remote sensing image processing. CUFFT is a GPU-based FFT library, whereas FFTW is an FFT library developed for CPUs on the PC platform and is currently the fastest CPU-based FFT library. However, both share a common problem: once the available memory is smaller than the image, an out-of-memory error or memory overflow occurs when using either method to compute the image FFT. To address this problem, a Huge Remote Fast Fourier Transform (HRFFT) algorithm based on the GPU and partitioning technology is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the out-of-memory and memory-overflow problems are solved. Moreover, the method is validated by experiments with CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the processing results and speeds up the processing, saving computation time and achieving sound results.
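
    The partitioning idea can be illustrated with NumPy standing in for the CUFFT/GPU implementation: a 2-D FFT of an image too large for device memory can be computed exactly as 1-D FFTs over blocks of rows, followed by 1-D FFTs over blocks of columns.

      import numpy as np

      def blocked_fft2(image, block=1024):
          out = image.astype(np.complex128)
          # Pass 1: FFT along rows, a block of rows at a time.
          for r in range(0, out.shape[0], block):
              out[r:r + block, :] = np.fft.fft(out[r:r + block, :], axis=1)
          # Pass 2: FFT along columns, a block of columns at a time.
          for c in range(0, out.shape[1], block):
              out[:, c:c + block] = np.fft.fft(out[:, c:c + block], axis=0)
          return out

      if __name__ == "__main__":
          img = np.random.rand(2048, 2048)
          assert np.allclose(blocked_fft2(img, block=512), np.fft.fft2(img))
          print("blocked FFT matches the full 2-D FFT")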

  7. System design and implementation of digital-image processing using computational grids

    NASA Astrophysics Data System (ADS)

    Shen, Zhanfeng; Luo, Jiancheng; Zhou, Chenghu; Huang, Guangyu; Ma, Weifeng; Ming, Dongping

    2005-06-01

    As a special type of digital image, remotely sensed images are playing increasingly important roles in our daily lives. Because of the enormous amounts of data involved and the difficulties of data processing and transfer, an important issue for current computer and geo-science experts is developing internet technology to implement rapid remotely sensed image processing. Computational grids are able to solve this problem effectively. These networks of computer workstations enable the sharing of data and resources, and are used by computer experts to correct imbalances in network resources and lopsided usage. In China, computational grids combined with spatial-information-processing technology have formed a new technology: namely, spatial-information grids. In the field of remotely sensed images, spatial-information grids work more effectively for network computing, data processing, resource sharing, task cooperation and so on. This paper focuses mainly on the application of computational grids to digital-image processing. Firstly, we describe the architecture of digital-image processing on the basis of computational grids; its implementation is then discussed in detail with respect to middleware technology. The whole network-based intelligent image-processing system is evaluated on the basis of the experimental analysis of remotely sensed image-processing tasks; the results confirm the feasibility of applying computational grids to digital-image processing.

  8. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    NASA Astrophysics Data System (ADS)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method to collect data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images for solving real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses still remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS Open Source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are transparently accessible and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed from any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University. The course uses the textbook "Introductory Digital Image Processing, A Remote Sensing Perspective" authored by John Jensen. The textbook is widely adopted in geography departments around the world for training students in digital processing of remote sensing images. In the traditional teaching setting for the course, the instructor prepares a set of sample remote sensing images to be used for the course. Commercial desktop remote sensing software, such as ERDAS, is used by students for the lab exercises. The students have to do the exercises in the lab and can use only the sample images. For this specific course at GMU, we developed GeoBrain-based lab exercises. With GeoBrain, students can now explore petabytes of remote sensing images in the NASA, NOAA, and USGS data archives instead of dealing only with sample images. Students have a much more powerful computing facility available for their lab exercises. They can explore the data and do the exercises at any time and place, as long as they can access the Internet through a Web browser. The feedback from students on the learning experience in digital image processing with the help of GeoBrain web processing services has been very positive. The teaching/lab materials and GeoBrain services are freely available to anyone at http://www.laits.gmu.edu.

  9. Compressive sensing method for recognizing cat-eye effect targets.

    PubMed

    Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo

    2013-10-01

    This paper proposes a cat-eye effect target recognition method based on compressive sensing (CS) and presents a recognition method (sample processing before reconstruction based on compressed sensing, or SPCS) for image processing. In this method, linear projections of the original image sequences are used to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, the corresponding imaging mechanism for acquiring active and passive image sequences is put forward. This method uses fewer images to recognize cat-eye effect targets, reduces data storage, and translates traditional target identification based on original image processing into the processing of measurement vectors. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual criteria method.
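
    A toy illustration of why processing before reconstruction works, under assumed sizes and a random Gaussian measurement matrix: because compressive measurements are linear, subtracting the measurement vectors of two frames equals measuring their difference image, so background can be suppressed without reconstructing anything.

      import numpy as np

      rng = np.random.default_rng(1)
      n, m = 64 * 64, 800                      # pixels per frame, number of measurements
      Phi = rng.standard_normal((m, n)) / np.sqrt(m)

      background = rng.random(n)
      target = background.copy()
      target[2000:2010] += 5.0                 # bright "cat-eye" return added to a few pixels

      y_bg, y_tg = Phi @ background, Phi @ target
      y_diff = y_tg - y_bg                     # measurement-domain subtraction
      # Equal (up to numerics) to measuring the difference image directly:
      print(np.allclose(y_diff, Phi @ (target - background)))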

  10. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of remote sensing satellite sensors available for Earth observation have grown significantly. The amount of available multi-sensor images, along with their increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would gain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. Meanwhile, the user has access to sophisticated commercial image fusion techniques plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for the available images, application parameters and desired information, and process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques and provide an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.

  11. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Low-altitude remotely sensed imagery from unmanned air vehicles has the advantages of higher resolution, easy acquisition, real-time access, etc. It has been widely used in mapping, target identification, and other fields in recent years. However, because of these operating conditions, the video images are unstable, the targets move fast, and the imaging background is complex, which makes such video difficult to process. In other fields, especially computer vision, research on video imagery is more extensive, and it is very helpful for processing low-altitude remotely sensed imagery. Based on this, this paper analyzes and summarizes a large body of video image processing work in different fields, including research purposes, data sources, and the pros and cons of the technology. This paper then explores the methods most suitable for low-altitude remote sensing video image processing.

  12. Linear- and Repetitive Feature Detection Within Remotely Sensed Imagery

    DTIC Science & Technology

    2017-04-01

    applicable to Python or other programming languages with image-processing capabilities. 4.1 Classification machine learning: The first methodology uses ... remotely sensed images that are in panchromatic or true-color formats. Image-processing techniques, including Hough transforms, machine learning, and ... data fusion ... 6.3 Context-based processing
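
    The snippets above mention Hough transforms for linear-feature detection in panchromatic imagery; a minimal OpenCV sketch of that step is shown below (the synthetic image and thresholds are illustrative assumptions).

      import cv2
      import numpy as np

      # Synthetic panchromatic scene with one linear feature.
      img = np.zeros((256, 256), dtype=np.uint8)
      cv2.line(img, (20, 30), (230, 200), 255, 2)

      edges = cv2.Canny(img, 50, 150)
      lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                              minLineLength=40, maxLineGap=5)
      print("detected line segments:", 0 if lines is None else len(lines))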

  13. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In remote sensing image processing, image segmentation is a preliminary step for later analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing has become the prevailing approach; its core is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on researching and improving that algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm as the optimal initialization. The algorithm is then modified by adjusting an area parameter and further combining it with a heterogeneity parameter. Several experiments show that the modified FNEA algorithm achieves better segmentation results than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
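
    A sketch of the watershed initialization step using scikit-image (the FNEA region merging that follows, and the modified area/heterogeneity parameters, are not shown); the regular marker grid and gradient choice are illustrative assumptions.

      import numpy as np
      from skimage.filters import sobel
      from skimage.segmentation import watershed

      def initial_segments(band, marker_step=15):
          """Over-segment one image band with markers placed on a regular grid."""
          gradient = sobel(band.astype(float))
          markers = np.zeros_like(band, dtype=int)
          region_id = 1
          for r in range(marker_step // 2, band.shape[0], marker_step):
              for c in range(marker_step // 2, band.shape[1], marker_step):
                  markers[r, c] = region_id
                  region_id += 1
          return watershed(gradient, markers)

      if __name__ == "__main__":
          band = np.random.rand(128, 128)
          labels = initial_segments(band)
          print("initial regions:", labels.max())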

  14. Machine processing of remotely sensed data - quantifying global process: Models, sensor systems, and analytical methods; Proceedings of the Eleventh International Symposium, Purdue University, West Lafayette, IN, June 25-27, 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mengel, S.K.; Morrison, D.B.

    1985-01-01

    Consideration is given to global biogeochemical issues, image processing, remote sensing of tropical environments, global processes, geology, landcover hydrology, and ecosystems modeling. Topics discussed include multisensor remote sensing strategies, geographic information systems, radars, and agricultural remote sensing. Papers are presented on fast feature extraction; a computational approach for adjusting TM imagery terrain distortions; the segmentation of a textured image by a maximum likelihood classifier; analysis of MSS Landsat data; sun angle and background effects on spectral response of simulated forest canopies; an integrated approach for vegetation/landcover mapping with digital Landsat images; geological and geomorphological studies using an image processing technique; and wavelength intensity indices in relation to tree conditions and leaf-nutrient content.

  15. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, which has area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they are fused using a hyperspectral-multispectral fusion algorithm in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image carries most of the complexity. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results corroborate the benefits of the proposed methodology.
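
    A toy sketch of the proposed on-board split, under assumptions: the cube is stored as (bands, rows, cols), spatial degradation is block averaging, and spectral degradation uses a uniform band-grouping response matrix in place of real spectral response functions.

      import numpy as np

      def spatial_degrade(cube, factor=4):
          """Block-average each band to produce the low spatial resolution cube."""
          b, h, w = cube.shape
          return cube.reshape(b, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

      def spectral_degrade(cube, n_ms_bands=6):
          """Group bands with a simple response matrix to produce the multispectral image."""
          b = cube.shape[0]
          srf = np.zeros((n_ms_bands, b))
          for i, chunk in enumerate(np.array_split(np.arange(b), n_ms_bands)):
              srf[i, chunk] = 1.0 / len(chunk)
          return np.tensordot(srf, cube, axes=(1, 0))

      if __name__ == "__main__":
          hsi = np.random.rand(120, 256, 256)            # synthetic hyperspectral cube
          low_res_hsi = spatial_degrade(hsi)             # downlinked: low spatial resolution
          high_res_msi = spectral_degrade(hsi)           # downlinked: low spectral resolution
          print(low_res_hsi.shape, high_res_msi.shape)   # (120, 64, 64) (6, 256, 256)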

  16. PI2GIS: processing image to geographical information systems, a learning tool for QGIS

    NASA Astrophysics Data System (ADS)

    Correia, R.; Teodoro, A.; Duarte, L.

    2017-10-01

    To perform an accurate interpretation of remote sensing images, it is necessary to extract information using different image processing techniques. Nowadays, it has become usual to use image processing plugins to add new capabilities/functionalities integrated in Geographical Information System (GIS) software. The aim of this work was to develop an open source application to automatically process and classify remote sensing images from a set of satellite input data. The application was integrated in a GIS software package (QGIS), automating several image processing steps. The use of QGIS for this purpose is justified since it is easy and quick to develop new plugins using the Python language. This plugin is inspired by the Semi-Automatic Classification Plugin (SCP) developed by Luca Congedo. SCP allows the supervised classification of remote sensing images, the calculation of vegetation indices such as NDVI (Normalized Difference Vegetation Index) and EVI (Enhanced Vegetation Index), and other image processing operations. When analysing SCP, it was realized that it lacks a set of operations that are very useful in teaching remote sensing and image processing, such as the visualization of histograms, the application of filters, different image corrections, unsupervised classification and the computation of several environmental indices. The new set of operations included in the PI2GIS plugin can be divided into three groups: pre-processing, processing, and classification procedures. The application was tested using a Landsat 8 OLI image of a northern area of Portugal.
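
    Two of the indices the plugin computes, sketched directly with NumPy (the plugin itself operates on QGIS raster layers); the EVI coefficients are the commonly used ones and the band arrays are placeholders.

      import numpy as np

      def ndvi(nir, red):
          return (nir - red) / (nir + red + 1e-6)

      def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
          return G * (nir - red) / (nir + C1 * red - C2 * blue + L + 1e-6)

      if __name__ == "__main__":
          nir, red, blue = (np.random.rand(100, 100) for _ in range(3))
          print("NDVI range:", ndvi(nir, red).min(), ndvi(nir, red).max())
          print("EVI mean:", evi(nir, red, blue).mean())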

  17. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new processing approach based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it invokes many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and meeting the need for concurrent multi-user high-speed access to remotely sensed data. By building an actual Hadoop service system, testing the storage efficiency for different image data and multiple users, and analyzing the distributed storage architecture, the rationality, reliability and superiority of the system design were verified, improving the application efficiency of remote sensing images.

  18. Restoration of color in a remote sensing image and its quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper focuses on the restoration of color remote sensing images (including airborne photographs). A complete approach is recommended, proposing that two main aspects should be considered in restoring a remote sensing image: restoration of spatial information and restoration of photometric information. In this proposal, the restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring an edge curve of the original image. The restoration of photometric information is performed by an improved local maximum entropy algorithm. Furthermore, a valid approach for processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible-light bands, which are processed separately under a psychological color vision constraint and then synthesized. Finally, three novel evaluation variables based on image restoration are introduced to evaluate restoration quality in terms of both spatial and photometric restoration, and an evaluation is provided at the end.
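
    A sketch of the spatial-information restoration step with the MTF as the degradation function, using a simple Wiener-type inverse filter in the frequency domain; the Gaussian MTF model and noise-to-signal constant are assumptions, and the paper's local-maximum-entropy photometric step is not shown.

      import numpy as np

      def gaussian_mtf(shape, sigma=0.25):
          """Assumed Gaussian MTF model over the discrete frequency grid."""
          fy = np.fft.fftfreq(shape[0])[:, None]
          fx = np.fft.fftfreq(shape[1])[None, :]
          return np.exp(-(fx**2 + fy**2) / (2 * sigma**2))

      def restore_with_mtf(band, mtf, nsr=1e-2):
          """Wiener-type restoration: divide by the MTF, regularized by a noise term."""
          F = np.fft.fft2(band)
          wiener = np.conj(mtf) / (np.abs(mtf) ** 2 + nsr)
          return np.real(np.fft.ifft2(F * wiener))

      if __name__ == "__main__":
          sharp = np.random.rand(256, 256)
          H = gaussian_mtf(sharp.shape)
          blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))
          restored = restore_with_mtf(blurred, H)
          print("mean error before:", np.abs(blurred - sharp).mean(),
                "after:", np.abs(restored - sharp).mean())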

  19. The integrated design and archive of space-borne signal processing and compression coding

    NASA Astrophysics Data System (ADS)

    He, Qiang-min; Su, Hao-hang; Wu, Wen-bo

    2017-10-01

    With the increasing demand from users for the extraction of remote sensing image information, it is urgent to significantly enhance the whole system's imaging quality and imaging capability by using an integrated design that achieves a compact structure, light weight and higher attitude maneuvering ability. At the present stage, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices. The volume, weight and power consumption of these two units are relatively large, which cannot meet the requirements of a high-mobility remote sensing camera. According to the technical requirements of the high-mobility remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit by investigating a variety of technologies, such as high-speed, high-density mixed analog-digital PCB design, embedded DSP technology and image compression based on special-purpose chips. This circuit lays a solid foundation for the research of the high-mobility remote sensing camera.

  20. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    NASA Astrophysics Data System (ADS)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    It is still a very challenging task to efficiently produce planetary mapping products from orbital remote sensing images. Photogrammetric processing of planetary stereo images faces many difficulties, such as the lack of ground control information and of informative features; among them, image matching is the most difficult job in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM and orthophoto scheme is adopted in the DTM generation process, which helps reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results for planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.
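
    A heavily simplified sketch of back-projecting a ground point into a linear pushbroom image: the scan line is found where the along-track camera coordinate of the point vanishes, and the column follows from the cross-track angle. The straight, level trajectory and nadir attitude are toy assumptions standing in for the paper's fast back-projection algorithm.

      import numpy as np

      F = 700.0                  # focal length in pixels (assumed)
      SPEED = 2.0                # platform ground motion in metres per scan line (assumed)
      ALTITUDE = 400000.0        # orbit height in metres (assumed)

      def camera_position(line):
          """Toy trajectory: flight along +X at constant altitude."""
          return np.array([SPEED * line, 0.0, ALTITUDE])

      def along_track_residual(line, ground):
          v = ground - camera_position(line)       # look vector, nadir-pointing camera
          return F * v[0] / (-v[2])                # along-track image coordinate

      def back_project(ground, line_lo=0.0, line_hi=100000.0, iters=60):
          """Bisection on the scan line until the along-track residual vanishes."""
          for _ in range(iters):
              mid = 0.5 * (line_lo + line_hi)
              if along_track_residual(line_lo, ground) * along_track_residual(mid, ground) <= 0:
                  line_hi = mid
              else:
                  line_lo = mid
          line = 0.5 * (line_lo + line_hi)
          v = ground - camera_position(line)
          column = F * v[1] / (-v[2])              # cross-track image coordinate
          return line, column

      if __name__ == "__main__":
          print(back_project(np.array([50000.0, 1200.0, 0.0])))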

  1. The Hico Image Processing System: A Web-Accessible Hyperspectral Remote Sensing Toolbox

    NASA Astrophysics Data System (ADS)

    Harris, A. T., III; Goodman, J.; Justice, B.

    2014-12-01

    As the quantity of Earth-observation data increases, the use-case for hosting analytical tools in geospatial data centers becomes increasingly attractive. To address this need, HySpeed Computing and Exelis VIS have developed the HICO Image Processing System, a prototype cloud computing system that provides online, on-demand, scalable remote sensing image processing capabilities. The system provides a mechanism for delivering sophisticated image processing analytics and data visualization tools into the hands of a global user community, who will only need a browser and internet connection to perform analysis. Functionality of the HICO Image Processing System is demonstrated using imagery from the Hyperspectral Imager for the Coastal Ocean (HICO), an imaging spectrometer located on the International Space Station (ISS) that is optimized for acquisition of aquatic targets. Example applications include a collection of coastal remote sensing algorithms that are directed at deriving critical information on water and habitat characteristics of our vulnerable coastal environment. The project leverages the ENVI Services Engine as the framework for all image processing tasks, and can readily accommodate the rapid integration of new algorithms, datasets and processing tools.

  2. Theory on data processing and instrumentation. [remote sensing

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1978-01-01

    A selection of NASA Earth observations programs are reviewed, emphasizing hardware capabilities. Sampling theory, noise and detection considerations, and image evaluation are discussed for remote sensor imagery. Vision and perception are considered, leading to numerical image processing. The use of multispectral scanners and of multispectral data processing systems, including digital image processing, is depicted. Multispectral sensing and analysis in application with land use and geographical data systems are also covered.

  3. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  4. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTF compensation (MTFC) algorithm for a space remote-sensing camera based on FPGA was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera operates on orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function (ESF), the line spread function (LSF), the ESF difference operation, the normalized MTF and the MTFC parameters. The MTFC filtering and noise suppression module implements the filtering algorithm and effectively suppresses noise. System Generator was used to design the image processing algorithms, simplifying the system structure and the redesign process. Image gray gradient, point sharpness, edge contrast and mid-to-high frequency content were enhanced, while the SNR of the recovered image decreased by less than 1 dB compared with the original image. The image restoration system can be widely used in various fields.
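
    A sketch of the MTF measurement chain described above, assuming a synthetic blurred edge: sample the edge spread function (ESF), differentiate it to the line spread function (LSF), and take the normalized magnitude of its Fourier transform as the MTF.

      import numpy as np
      from scipy.special import erf

      def mtf_from_esf(esf):
          lsf = np.diff(esf)                  # LSF is the derivative of the ESF
          lsf = lsf / lsf.sum()               # normalize the LSF area to 1
          mtf = np.abs(np.fft.rfft(lsf))
          return mtf / mtf[0]                 # enforce MTF(0) = 1

      if __name__ == "__main__":
          x = np.arange(-32, 32)
          # Synthetic ESF: an ideal step blurred by a Gaussian system response.
          esf = 0.5 * (1 + erf(x / (np.sqrt(2) * 1.5)))
          mtf = mtf_from_esf(esf)
          print("MTF near Nyquist:", mtf[-1])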

  5. Cybernetic Basis and System Practice of Remote Sensing and Spatial Information Science

    NASA Astrophysics Data System (ADS)

    Tan, X.; Jing, X.; Chen, R.; Ming, Z.; He, L.; Sun, Y.; Sun, X.; Yan, L.

    2017-09-01

    Cybernetics provides a new set of ideas and methods for the study of modern science, and it has been widely applied in many areas. However, few researchers have introduced cybernetics into the field of remote sensing. Based on the imaging process of a remote sensing system, this paper introduces cybernetics into remote sensing and establishes a space-time closed-loop control theory for its actual operation. The paper makes the spatial information process coherent and improves the overall efficiency of spatial information from acquisition and processing through transformation to application. We describe the application of cybernetics not only in remote sensing platform control, sensor control and data processing control, but also in control of the whole remote sensing imaging process. The output information is fed back to the input to control the efficient operation of the entire system. This combination of cybernetics and remote sensing science will raise remote sensing science to a higher level.

  6. NDSI products system based on Hadoop platform

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Jiang, He; Yang, Xiaoxia; Geng, Erhui

    2015-12-01

    Snow is the solid state of water resources on Earth and plays an important role in human life. Satellite remote sensing is valuable for snow extraction thanks to its periodicity, wide coverage, comprehensiveness, objectivity and timeliness. With the continuous development of remote sensing technology, remote sensing data are trending toward multiple platforms, multiple sensors and multiple viewing perspectives, and the demand for compute-intensive processing of remote sensing data is increasing. However, current remote sensing product generation systems operate in a serial mode, are used mostly by professional remote sensing researchers, and rarely achieve automatic or semi-automatic production. Faced with massive remote sensing data, such low-efficiency serial systems struggle to process the data in a timely and efficient manner. To improve the production efficiency of NDSI products and meet the demand for timely, efficient processing of large-scale remote sensing data, this paper builds an NDSI product production system based on the Hadoop platform. The system consists of a remote sensing image management module, an NDSI production module and a system service module. The main research contents and results are: (1) The remote sensing image management module includes image import and image metadata management. Base IRS images and NDSI product images (the output of the system's production tasks) are imported into the HDFS file system; at the same time, the corresponding orbit row/column numbers, maximum/minimum longitude and latitude, product date, HDFS storage path, Hadoop task ID (for NDSI products) and other metadata are read, thumbnails are created, a unique ID is assigned to each record, and the records are imported into the base/product image metadata database. (2) The NDSI production module includes index calculation and production task submission and monitoring. HDF images related to a production task are read as byte streams and parsed into Product objects using the Beam library; the MapReduce distributed framework performs the production tasks while their status is monitored; when a production task completes, the image management module is called to store the NDSI products. (3) The system service module provides image search and NDSI product download. Given image metadata attributes described in JSON format, it returns the IDs of matching images in the HDFS file system; for a given MapReduce task ID, it packages the task's output NDSI products into a ZIP file and returns a download link. (4) System evaluation: massive remote sensing data were downloaded and processed into NDSI products to test performance, and the results show that the system has high extensibility, strong fault tolerance and fast production speed, with highly accurate image processing results.
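
    The index at the heart of the production module, sketched with NumPy standing in for the MapReduce job; the band roles are the standard ones for NDSI and the snow threshold is a commonly used value, not necessarily the system's.

      import numpy as np

      def ndsi(green, swir):
          return (green - swir) / (green + swir + 1e-6)

      def snow_mask(green, swir, threshold=0.4):
          """Commonly used NDSI threshold for snow; the value is an assumption."""
          return ndsi(green, swir) > threshold

      if __name__ == "__main__":
          green = np.random.rand(64, 64)
          swir = np.random.rand(64, 64)
          print("snow pixels:", int(snow_mask(green, swir).sum()))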

  7. Vision-aided Monitoring and Control of Thermal Spray, Spray Forming, and Welding Processes

    NASA Technical Reports Server (NTRS)

    Agapakis, John E.; Bolstad, Jon

    1993-01-01

    Vision is one of the most powerful forms of non-contact sensing for monitoring and control of manufacturing processes. However, processes involving an arc plasma or flame such as welding or thermal spraying pose particularly challenging problems to conventional vision sensing and processing techniques. The arc or plasma is not typically limited to a single spectral region and thus cannot be easily filtered out optically. This paper presents an innovative vision sensing system that uses intense stroboscopic illumination to overpower the arc light and produce a video image that is free of arc light or glare and dedicated image processing and analysis schemes that can enhance the video images or extract features of interest and produce quantitative process measures which can be used for process monitoring and control. Results of two SBIR programs sponsored by NASA and DOE and focusing on the application of this innovative vision sensing and processing technology to thermal spraying and welding process monitoring and control are discussed.

  8. Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing

    DTIC Science & Technology

    2013-04-01

    Signal Process., vol. 57, no. 6, pp. 2275-2284, 2009. [20] A. Gurbuz, J. McClellan, and W. Scott, Jr., "Compressive sensing for subsurface imaging using ... SciTech Publishing, 2010, pp. 922-938. [45] A. C. Gurbuz, J. H. McClellan, and W. R. Scott, Jr., "Compressive sensing for subsurface imaging using

  9. Secure distribution for high resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Sun, Jing; Xu, Zheng Q.

    2010-09-01

    The use of remote sensing images collected by space platforms is becoming more and more widespread. The increasing value of space data and its use in critical scenarios call for the adoption of proper security measures to protect these data against unauthorized access and fraudulent use. In this paper, based on the characteristics of remote sensing image data and application requirements for secure distribution, a secure distribution method is proposed, including user and region classification, hierarchical key generation and control, and region-based multi-level encryption. The combination of the three parts allows the same multi-level-encrypted remote sensing image to be distributed to users with different permissions through multicast, while each user obtains a different level of information after decryption with their own keys. This meets access control and security needs in the distribution of high-resolution remote sensing images. The experimental results prove the effectiveness of the proposed method, which is suitable for practical use in the secure transmission over the internet of remote sensing images containing confidential information.
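
    A toy sketch of the three ingredients named above, under assumptions: a hash-chain key hierarchy in which a higher-level key can derive all lower-level keys, region classification by sensitivity, and per-region encryption. Fernet and the hash-chain construction are stand-ins, not the authors' scheme.

      import base64
      import hashlib
      from cryptography.fernet import Fernet

      def key_chain(master: bytes, levels: int):
          """keys[0] is the most privileged; each next key is the hash of the previous one."""
          keys, k = [], master
          for _ in range(levels):
              keys.append(base64.urlsafe_b64encode(k[:32]))
              k = hashlib.sha256(k).digest()
          return keys

      def encrypt_regions(regions, keys):
          """regions: list of (sensitivity_level, raw_bytes) tiles, each encrypted with its level key."""
          return [(lvl, Fernet(keys[lvl]).encrypt(data)) for lvl, data in regions]

      if __name__ == "__main__":
          keys = key_chain(hashlib.sha256(b"master secret").digest(), levels=3)
          tiles = [(0, b"restricted region tile"), (2, b"public region tile")]
          for lvl, blob in encrypt_regions(tiles, keys):
              # A user holding the level-1 key can derive the level-2 key but not level 0.
              print(lvl, len(blob), "bytes")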

  10. Visual Image Sensor Organ Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensors (i.e., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.

  11. Image processing methods in two and three dimensions used to animate remotely sensed data. [cloud cover

    NASA Technical Reports Server (NTRS)

    Hussey, K. J.; Hall, J. R.; Mortensen, R. A.

    1986-01-01

    Image processing methods and software used to animate nonimaging remotely sensed data on cloud cover are described. Three FORTRAN programs were written in the VICAR2/TAE image processing domain to perform 3D perspective rendering, to interactively select parameters controlling the projection, and to interpolate parameter sets for animation images between key frames. Operation of the 3D programs and transferring the images to film is automated using executive control language and custom hardware to link the computer and camera.

  12. HPT: A High Spatial Resolution Multispectral Sensor for Microsatellite Remote Sensing

    PubMed Central

    Takahashi, Yukihiro; Sakamoto, Yuji; Kuwahara, Toshinori

    2018-01-01

    Although nano/microsatellites have great potential as remote sensing platforms, the spatial and spectral resolutions of an optical payload instrument are limited. In this study, a high spatial resolution multispectral sensor, the High-Precision Telescope (HPT), was developed for the RISING-2 microsatellite. The HPT has four image sensors: three in the visible region of the spectrum used for the composition of true color images, and a fourth in the near-infrared region, which employs liquid crystal tunable filter (LCTF) technology for wavelength scanning. Band-to-band image registration methods have also been developed for the HPT and implemented in the image processing procedure. The processed images were compared with other satellite images, and proven to be useful in various remote sensing applications. Thus, LCTF technology can be considered an innovative tool that is suitable for future multi/hyperspectral remote sensing by nano/microsatellites. PMID:29463022

  13. Spatial information technologies for remote sensing today and tomorrow; Proceedings of the Ninth Pecora Symposium, Sioux Falls, SD, October 2-4, 1984

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Topics discussed at the symposium include hardware, geographic information system (GIS) implementation, processing remotely sensed data, spatial data structures, and NASA programs in remote sensing information systems. Attention is also given to GIS applications, advanced techniques, artificial intelligence, graphics, spatial navigation, and classification. Papers are included on the design of computer software for geographic image processing, concepts for a global resource information system, algorithm development for spatial operators, and an application of expert systems technology to remotely sensed image analysis.

  14. Multispectral remote sensing from unmanned aircraft: image processing workflows and applications for rangeland environments

    USDA-ARS?s Scientific Manuscript database

    Using unmanned aircraft systems (UAS) as remote sensing platforms offers the unique ability for repeated deployment for acquisition of high temporal resolution data at very high spatial resolution. Most image acquisitions from UAS have been in the visible bands, while multispectral remote sensing ap...

  15. Remote sensing, imaging, and signal engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brase, J.M.

    1993-03-01

    This report discusses the Remote Sensing, Imaging, and Signal Engineering (RISE) trust area which has been very active in working to define new directions. Signal and image processing have always been important support for existing programs at Lawrence Livermore National Laboratory (LLNL), but now these technologies are becoming central to the formation of new programs. Exciting new applications such as high-resolution telescopes, radar remote sensing, and advanced medical imaging are allowing us to participate in the development of new programs.

  16. Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2007-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing in natural human sensing. The current implementation of the device translates the output of visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases image resolution perception, which is obtained via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system. The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense and focus the information (e.g., Fourier transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to losing relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work demonstrates some basic information processing for optimal information capture for head-mounted systems.
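
    A toy version of the image-to-sound mapping, under assumed parameters: the image is scanned column by column over time, each row maps to a sine frequency, and pixel brightness sets that sine's amplitude.

      import numpy as np

      def image_to_audio(img, fs=22050, col_dur=0.05, f_lo=200.0, f_hi=5000.0):
          """Map a grayscale image to an audio waveform: time = columns, pitch = rows."""
          rows, cols = img.shape
          freqs = np.linspace(f_hi, f_lo, rows)[:, None]   # top of the image = high pitch
          t = np.arange(int(fs * col_dur)) / fs
          audio = []
          for c in range(cols):
              tones = np.sin(2 * np.pi * freqs * t)        # one sine per row
              audio.append((img[:, c:c + 1] * tones).sum(axis=0))
          audio = np.concatenate(audio)
          return audio / (np.abs(audio).max() + 1e-9)

      if __name__ == "__main__":
          img = np.random.rand(64, 64)
          samples = image_to_audio(img)
          print(len(samples), "audio samples for one image scan")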

  17. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

    The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates the output of visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases image resolution perception, which is obtained via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.

  18. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, and aiming at an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  19. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, and aiming at an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  20. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    PubMed

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that does not employ the projected Landweber (PL) step and with other existing CS-based fusion approaches, it is observed that even with fewer samples the proposed method outperforms the other approaches.
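
    A pared-down sketch of block compressed sensing with a projected-Landweber-style recovery, assuming a DCT sparsifying basis in place of the paper's wavelet transforms and Wiener smoothing: each block is measured with a row-orthonormal random matrix, and reconstruction alternates a per-block projection step with soft thresholding.

      import numpy as np
      from scipy.fft import dctn, idctn

      B, RATE = 16, 0.4
      rng = np.random.default_rng(0)
      # Random measurement matrix with orthonormal rows (keeps the iteration stable).
      Phi = np.linalg.qr(rng.standard_normal((B * B, int(RATE * B * B))))[0].T

      def measure(img):
          blocks = img.reshape(img.shape[0] // B, B, img.shape[1] // B, B)
          blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, B * B)
          return blocks @ Phi.T

      def reconstruct(y, shape, iters=50, lam=0.02):
          nby, nbx = shape[0] // B, shape[1] // B
          x = np.zeros((nby * nbx, B * B))
          for _ in range(iters):
              x += (y - x @ Phi.T) @ Phi                    # projection onto the block constraints
              img = x.reshape(nby, nbx, B, B).transpose(0, 2, 1, 3).reshape(shape)
              coeffs = dctn(img, norm="ortho")
              coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)
              img = idctn(coeffs, norm="ortho")             # soft-threshold smoothing step
              x = img.reshape(nby, B, nbx, B).transpose(0, 2, 1, 3).reshape(-1, B * B)
          return img

      if __name__ == "__main__":
          img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0  # simple synthetic test image
          rec = reconstruct(measure(img), img.shape)
          print("mean absolute error:", np.abs(rec - img).mean())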

  1. Remote sensing image segmentation based on Hadoop cloud platform

    NASA Astrophysics Data System (ADS)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To solve the problem that remote sensing image segmentation is slow and its real-time performance is poor, this paper studies a method of remote sensing image segmentation based on the Hadoop platform. On the basis of analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method that combines OpenCV with the Hadoop cloud platform. Firstly, the MapReduce image processing model of the Hadoop cloud platform is designed: the image input and output formats are customized and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is conducted on a remote sensing image, and the same experiment is repeated with a MATLAB implementation of the Mean Shift algorithm for comparison. The experimental results show that, while maintaining good segmentation quality, the segmentation speed of remote sensing image segmentation based on the Hadoop cloud platform is greatly improved compared with the single-machine MATLAB implementation, and the effectiveness of image segmentation is also improved.
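
    A minimal sketch of the per-tile Mean Shift step that a Hadoop Streaming mapper could apply is given below, using OpenCV's pyramid mean shift filtering. The tile paths and the spatial/range bandwidths (sp, sr) are illustrative assumptions, and the MapReduce wiring itself is omitted.

      import cv2

      def segment_tile(tile_path, out_path, sp=21, sr=30):
          """Apply Mean Shift filtering to one image tile (the mapper's work unit)."""
          tile = cv2.imread(tile_path)                       # 8-bit BGR tile
          if tile is None:
              raise IOError("cannot read " + tile_path)
          shifted = cv2.pyrMeanShiftFiltering(tile, sp, sr)  # spatial / range bandwidths
          cv2.imwrite(out_path, shifted)

      # A Hadoop Streaming mapper would read tile paths from stdin, call
      # segment_tile() on each, and emit the output path as its key/value.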

  2. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds, in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with QuickBird, SPOT5, SPOT4 and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
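
    The sketch below covers only the coarse pre-registration stage described above: SIFT keypoints, ratio-test matching and a RANSAC affine estimate with OpenCV. The Harris/TIN fine registration is not shown, and the 0.75 ratio threshold is an illustrative assumption.

      import cv2
      import numpy as np

      def coarse_register(input_img, ref_img):
          """Coarsely align input_img to ref_img (both 8-bit grayscale arrays)."""
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(input_img, None)
          k2, d2 = sift.detectAndCompute(ref_img, None)
          matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
          good = [m for m, n in (p for p in matches if len(p) == 2)
                  if m.distance < 0.75 * n.distance]          # Lowe ratio test
          src = np.float32([k1[m.queryIdx].pt for m in good])
          dst = np.float32([k2[m.trainIdx].pt for m in good])
          A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
          h, w = ref_img.shape[:2]
          return cv2.warpAffine(input_img, A, (w, h))         # coarsely aligned image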

  3. Model for mapping settlements

    DOEpatents

    Vatsavai, Ranga Raju; Graesser, Jordan B.; Bhaduri, Budhendra L.

    2016-07-05

    A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches.

  4. Methods of training the graduate level and professional geologist in remote sensing technology

    NASA Technical Reports Server (NTRS)

    Kolm, K. E.

    1981-01-01

    Requirements for a basic course in remote sensing to accommodate the needs of the graduate level and professional geologist are described. The course should stress the general topics of basic remote sensing theory, the theory and data types relating to different remote sensing systems, an introduction to the basic concepts of computer image processing and analysis, the characteristics of different data types, the development of methods for geological interpretations, the integration of all scales and data types of remote sensing in a given study, the integration of other data bases (geophysical and geochemical) into a remote sensing study, and geological remote sensing applications. The laboratories should stress hands-on experience to reinforce the concepts and procedures presented in the lecture. The geologist should then be encouraged to pursue a second course in computer image processing and analysis of remotely sensed data.

  5. Luminescent sensing and imaging of oxygen: Fierce competition to the Clark electrode

    PubMed Central

    2015-01-01

    Luminescence‐based sensing schemes for oxygen have experienced a fast growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements via fiber optic microsensors, but includes additional features such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid‐state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle‐based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are produced nowadays in large quantities in industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology. PMID:26113255

  6. Standardizing Quality Assessment of Fused Remotely Sensed Images

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Moellmann, J.; Fries, K.

    2017-09-01

    The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment depends on the chosen criteria. Depending on the criteria and indices, the result varies. Therefore it is necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective image fusion quality evaluation. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.
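
    As an example of the kind of quantitative index such a standardized protocol would aggregate, the sketch below computes ERGAS (relative dimensionless global synthesis error) between a fused image and a reference; the 1:4 resolution ratio is an illustrative assumption.

      import numpy as np

      def ergas(fused, reference, ratio=0.25):
          """fused, reference: (bands, rows, cols); ratio = pan / MS pixel size."""
          terms = []
          for f, r in zip(fused.astype(float), reference.astype(float)):
              rmse = np.sqrt(np.mean((f - r) ** 2))
              terms.append((rmse / r.mean()) ** 2)   # per-band relative error
          return 100.0 * ratio * np.sqrt(np.mean(terms))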

  7. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.

    PubMed

    Li, Linyi; Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. And a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the others according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  8. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    PubMed Central

    Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. And a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the others according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images. PMID:28761440

  9. Three-dimensional imaging and remote sensing imaging; Proceedings of the Meeting, Los Angeles, CA, Jan. 14, 15, 1988

    NASA Astrophysics Data System (ADS)

    Robbins, Woodrow E.

    1988-01-01

    The present conference discusses topics in novel technologies and techniques of three-dimensional imaging, human factors-related issues in three-dimensional display system design, three-dimensional imaging applications, and image processing for remote sensing. Attention is given to a 19-inch parallactiscope, a chromostereoscopic CRT-based display, the 'SpaceGraph' true three-dimensional peripheral, advantages of three-dimensional displays, holographic stereograms generated with a liquid crystal spatial light modulator, algorithms and display techniques for four-dimensional Cartesian graphics, an image processing system for automatic retina diagnosis, the automatic frequency control of a pulsed CO2 laser, and a three-dimensional display of magnetic resonance imaging of the spine.

  10. Remote Sensing Image Fusion Method Based on Nonsubsampled Shearlet Transform and Sparse Representation

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao

    2015-12-01

    The remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. Firstly, the low resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high and low frequency coefficients. The low frequency coefficients of the PAN image and the intensity component are fused by SR with the learned dictionary. The high frequency coefficients of the intensity component and the PAN image are fused by a local energy based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. The experimental results on IKONOS and QuickBird satellite images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods in both visual effect and objective evaluation.
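
    The sketch below shows only the local-energy rule for the high-frequency coefficients mentioned above: for two detail subbands, the coefficient with the larger local energy is kept. The 3x3 window is an illustrative assumption, and the NSST decomposition itself is assumed to be available from elsewhere.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fuse_highpass(c_pan, c_intensity, win=3):
          """Keep, per pixel, the detail coefficient with the larger local energy."""
          e_pan = uniform_filter(c_pan ** 2, size=win)        # local energy maps
          e_int = uniform_filter(c_intensity ** 2, size=win)
          return np.where(e_pan >= e_int, c_pan, c_intensity)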

  11. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by hundreds of times. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows multiple speakers to be localized in both stationary and dynamic auditory scenes, and mixed conversations from independent sources to be distinguished with a high audio recognition rate.

  12. Contribution of non-negative matrix factorization to the classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

    Remote sensing has become an unavoidable tool for better managing our environment, generally by producing maps of land cover using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to perform the recognition of the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new images (which are at least partly separated images) from HRV SPOT images of the Oran area, Algeria. These images are then used as inputs of a supervised classifier integrating textural information. The results of classifications of these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of initial (i.e. non separated) images. These results show the contribution of NMF as an attractive pre-processing for classification of multispectral remote sensing imagery.
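
    A minimal sketch of the NMF pre-processing step is given below: pixel spectra are factorized into non-negative abundance-like maps that can then be fed to a supervised classifier. The number of components and the use of scikit-learn's NMF are illustrative assumptions; the sparse-coding variant used in the paper is not reproduced.

      import numpy as np
      from sklearn.decomposition import NMF

      def nmf_abundances(cube, n_components=4):
          """cube: (rows, cols, bands) non-negative multispectral image."""
          r, c, b = cube.shape
          X = cube.reshape(-1, b)                     # one row per pixel spectrum
          model = NMF(n_components=n_components, init="nndsvd", max_iter=500)
          W = model.fit_transform(X)                  # per-pixel abundances
          return W.reshape(r, c, n_components)        # new "separated" images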

  13. Development and implementation of software systems for imaging spectroscopy

    USGS Publications Warehouse

    Boardman, J.W.; Clark, R.N.; Mazer, A.S.; Biehl, L.L.; Kruse, F.A.; Torson, J.; Staenz, K.

    2006-01-01

    Specialized software systems have played a crucial role throughout the twenty-five year course of the development of the new technology of imaging spectroscopy, or hyperspectral remote sensing. By their very nature, hyperspectral data place unique and demanding requirements on the computer software used to visualize, analyze, process and interpret them. Often described as a marriage of the two technologies of reflectance spectroscopy and airborne/spaceborne remote sensing, imaging spectroscopy, in fact, produces data sets with unique qualities, unlike previous remote sensing or spectrometer data. Because of these unique spatial and spectral properties hyperspectral data are not readily processed or exploited with legacy software systems inherited from either of the two parent fields of study. This paper provides brief reviews of seven important software systems developed specifically for imaging spectroscopy.

  14. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    NASA Astrophysics Data System (ADS)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missing points, so the final classification accuracy is not high. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. Three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) are compared, and the best fused image is selected for the classification experiment. In the classification process, four image classification algorithms (Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) are used for contrast experiments. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial for improving the accuracy and stability of remote sensing image classification.
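
    For reference, the sketch below implements the Brovey transform, one of the three feature-level fusion methods compared above. It assumes the multispectral bands have already been resampled to the high-resolution grid, and the small epsilon guard is an illustrative addition.

      import numpy as np

      def brovey_fuse(ms, pan, eps=1e-6):
          """ms: (bands, rows, cols) multispectral; pan: (rows, cols) high-resolution band."""
          ms = ms.astype(float)
          total = ms.sum(axis=0) + eps                # avoid division by zero
          return ms * (pan.astype(float) / total)     # each band scaled by pan / band sum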

  15. A REMOTE SENSING AND GIS-ENABLED HIGHWAY ASSET MANAGEMENT SYSTEM PHASE 2

    DOT National Transportation Integrated Search

    2018-02-02

    The objective of this project is to validate the use of commercial remote sensing and spatial information (CRS&SI) technologies, including emerging 3D line laser imaging technology, mobile light detection and ranging (LiDAR), image processing algorit...

  16. A remote sensing and GIS-enabled highway asset management system : final report.

    DOT National Transportation Integrated Search

    2016-04-01

    The objective of this project is to validate the use of commercial remote sensing and spatial information : (CRS&SI) technologies, including emerging 3D line laser imaging technology, mobile LiDAR, image : processing algorithms, and GPS/GIS technolog...

  17. A change detection method for remote sensing image based on LBP and SURF feature

    NASA Astrophysics Data System (ADS)

    Hu, Lei; Yang, Hao; Li, Jin; Zhang, Yun

    2018-04-01

    Finding changes in multi-temporal remote sensing images is important in many image applications. Because of the influence of climate and illumination, the texture of ground objects is more stable than the gray level in high-resolution remote sensing images, and the Local Binary Patterns (LBP) and Speeded Up Robust Features (SURF) texture features offer fast extraction and illumination invariance. A change detection method for a matched remote sensing image pair is presented: the images are divided into blocks, and LBP and SURF similarity is compared to decide whether each block is changed or unchanged. Region growing is then adopted to process the block edge zones. The experimental results show that the method can tolerate some illumination change and slight texture change of the ground objects.
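
    The sketch below illustrates only the LBP half of the block comparison: uniform LBP histograms are computed per block for the two dates and compared with histogram intersection, and low-similarity blocks are flagged as changed. Block size, LBP parameters and the similarity threshold are illustrative assumptions; the SURF comparison and region growing are not shown.

      import numpy as np
      from skimage.feature import local_binary_pattern

      def lbp_hist(block, P=8, R=1):
          lbp = local_binary_pattern(block, P, R, method="uniform")
          hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
          return hist

      def changed_blocks(img1, img2, block=32, thresh=0.7):
          """Return a boolean grid marking blocks whose LBP similarity is below thresh."""
          h, w = img1.shape
          change = np.zeros((h // block, w // block), dtype=bool)
          for bi, i in enumerate(range(0, h - block + 1, block)):
              for bj, j in enumerate(range(0, w - block + 1, block)):
                  h1 = lbp_hist(img1[i:i+block, j:j+block])
                  h2 = lbp_hist(img2[i:i+block, j:j+block])
                  sim = np.minimum(h1, h2).sum()    # histogram intersection
                  change[bi, bj] = sim < thresh
          return change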

  18. Research Issues in Image Registration for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Eastman, Roger D.; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    Image registration is an important element in data processing for remote sensing with many applications and a wide range of solutions. Despite considerable investigation the field has not settled on a definitive solution for most applications and a number of questions remain open. This article looks at selected research issues by surveying the experience of operational satellite teams, application-specific requirements for Earth science, and our experiments in the evaluation of image registration algorithms with emphasis on the comparison of algorithms for subpixel accuracy. We conclude that remote sensing applications put particular demands on image registration algorithms to take into account domain-specific knowledge of geometric transformations and image content.

  19. Proceedings of the Eleventh International Symposium on Remote Sensing of Environment, volume 2. [application and processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Application and processing of remotely sensed data are discussed. Areas of application include: pollution monitoring, water quality, land use, marine resources, ocean surface properties, and agriculture. Image processing and scene analysis are described along with automated photointerpretation and classification techniques. Data from infrared and multispectral band scanners onboard LANDSAT satellites are emphasized.

  20. Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite

    NASA Astrophysics Data System (ADS)

    Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi

    2018-05-01

    The LAPAN-IPB satellite is a microsatellite-class spacecraft with a remote sensing experiment mission. The satellite carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. Radiometric image quality is an important factor for object classification in the remote sensing process. Before launch (pre-flight), the line imager was tested with a monochromator and an integrating sphere to obtain its spectral response and per-pixel radiometric response characteristics. Pre-flight test data, acquired with a variety of line imager settings, were used to examine the correlation between input radiance and the digital numbers of the output images. This input-output correlation is described by a radiance conversion model that accounts for the imager settings and radiometric characteristics. The modelling process, from the hardware level to the normalized radiance formula, is presented and discussed in this paper.
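
    In its simplest form, such a radiance conversion model can be a per-pixel linear fit between integrating-sphere radiance and recorded digital numbers, as sketched below. The actual model in the paper also folds in imager settings, which are not modelled here, so this is only an assumed illustrative form.

      import numpy as np

      def fit_linear_radiometric_model(dn, radiance):
          """dn: (n_levels, n_pixels) digital numbers; radiance: (n_levels,) sphere radiance."""
          gains, offsets = [], []
          for p in range(dn.shape[1]):
              g, o = np.polyfit(dn[:, p], radiance, 1)   # per-pixel linear fit L = g*DN + o
              gains.append(g)
              offsets.append(o)
          return np.array(gains), np.array(offsets)

      def dn_to_radiance(dn_row, gains, offsets):
          """Convert one image line of digital numbers to radiance."""
          return gains * dn_row + offsets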

  1. Compressive Sensing Image Sensors-Hardware Implementation

    PubMed Central

    Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram

    2013-01-01

    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123

  2. SUPERFUND REMOTE SENSING SUPPORT

    EPA Science Inventory

    This task provides remote sensing technical support to the Superfund program. Support includes the collection, processing, and analysis of remote sensing data to characterize hazardous waste disposal sites and their history. Image analysis reports, aerial photographs, and assoc...

  3. Application research on land use remote sensing dynamic monitoring: A case study of Anning district, Lanzhou

    NASA Astrophysics Data System (ADS)

    Zhu, Yunqiang; Zhu, Huazhong; Lu, Heli; Ni, Jianguang; Zhu, Shaoxia

    2005-10-01

    Remote sensing dynamic monitoring of land use can detect changes in land use and update the current land use map, which is important for the rational utilization and scientific management of land resources. This paper discusses the technological procedure of remote sensing dynamic monitoring of land use, including the processing of remote sensing images, the extraction of annual land use change information, field survey, indoor post-processing and accuracy assessment. In particular, we emphasize comparative research on the choice of remote sensing rectification models, image fusion algorithms and accuracy assessment methods. Taking the Anning district of Lanzhou as an example, we extract the land use change information of the district during 2002-2003, assess the monitoring accuracy and analyze the reasons for the land use change.

  4. Remote Sensing as a Demonstration of Applied Physics.

    ERIC Educational Resources Information Center

    Colwell, Robert N.

    1980-01-01

    Provides information about the field of remote sensing, including discussions of geo-synchronous and sun-synchronous remote-sensing platforms, the actual physical processes and equipment involved in sensing, the analysis of images by humans and machines, and inexpensive, small scale methods, including aerial photography. (CS)

  5. Luminescent sensing and imaging of oxygen: fierce competition to the Clark electrode.

    PubMed

    Wolfbeis, Otto S

    2015-08-01

    Luminescence-based sensing schemes for oxygen have experienced a fast growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements via fiber optic microsensors, but includes additional features such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid-state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle-based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are produced nowadays in large quantities in industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology. © 2015 The Author. Bioessays published by WILEY Periodicals, Inc.

  6. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used for specific event capture. This event capture is realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement and joint reconstruction algorithm for these image signals is proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used to evaluate the reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compression rate than the independent reconstruction algorithm.

  7. Information Processing of Remote-Sensing Data.

    ERIC Educational Resources Information Center

    Berry, P. A. M.; Meadows, A. J.

    1987-01-01

    Reviews the current status of satellite remote sensing data, including problems with efficient storage and rapid retrieval of the data, and appropriate computer graphics to process images. Areas of research concerned with overcoming these problems are described. (16 references) (CLB)

  8. Instructional image processing on a university mainframe: The Kansas system

    NASA Technical Reports Server (NTRS)

    Williams, T. H. L.; Siebert, J.; Gunn, C.

    1981-01-01

    An interactive digital image processing program package was developed that runs on the University of Kansas central computer, a Honeywell Level 66 multi-processor system. The module form of the package allows easy and rapid upgrades and extensions of the system and is used in remote sensing courses in the Department of Geography, in regional five-day short courses for academics and professionals, and also in remote sensing projects and research. The package comprises three self-contained modules of processing functions: Subimage extraction and rectification; image enhancement, preprocessing and data reduction; and classification. Its use in a typical course setting is described. Availability and costs are considered.

  9. Research in remote sensing of agriculture, earth resources, and man's environment

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1975-01-01

    Progress is reported for several projects involving the utilization of LANDSAT remote sensing capabilities. Areas under study include crop inventory, crop identification, crop yield prediction, forest resources evaluation, land resources evaluation and soil classification. Numerical methods for image processing are discussed, particularly those for image enhancement and analysis.

  10. Vision-sensing image analysis for GTAW process control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  11. Textbooks and technical references for remote sensing

    NASA Technical Reports Server (NTRS)

    Rudd, R. D.; Bowden, L. W.; Colwell, R. N.; Estes, J. E.

    1980-01-01

    A selective bibliography is presented which cites 89 textbooks, monographs, and articles covering introductory and advanced remote sensing techniques, photointerpretation, photogrammetry, and image processing.

  12. Postprocessing classification images

    NASA Technical Reports Server (NTRS)

    Kan, E. P.

    1979-01-01

    Program cleans up remote-sensing maps. It can be used with existing image-processing software. Remapped images closely resemble familiar resource information maps and can replace or supplement classification images not postprocessed by this program.

  13. Overall design of imaging spectrometer on-board light aircraft

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhongqi, H.; Zhengkui, C.; Changhua, C.

    1996-11-01

    Aerial remote sensing is the earliest remote sensing technical system and has developed rapidly in recent years. Its development was dominated by high- to medium-altitude platforms in the past, and it is now characterized by a diversity of platforms, including planes with high, medium and low flying altitudes, helicopters, airships, remotely controlled airplanes, gliders, and balloons. The most widely used and rapidly developed platform recently is the light aircraft. In the late 1970s, the Beijing Research Institute of Uranium Geology began aerial photography and geophysical surveys using light aircraft, and in the 1990s it put forward the overall design scheme of a light aircraft imaging spectral application system (LAISAS). LAISAS comprises four subsystems: the measuring platform, the data acquiring subsystem, the ground testing subsystem and the data processing subsystem. The principal instruments of LAISAS include a measuring platform controlled by an inertial gyroscope, an aerial spectrometer with high spectral resolution, an imaging spectrometer, a 3-channel scanner, a 128-channel imaging spectrometer, GPS, an illuminance meter, and devices for atmospheric parameter measurement, ground testing, data correction and processing. LAISAS features integrity from data acquisition to data processing and application; stability, which guarantees image quality and rests on the measuring and ground testing devices and an indoor data correction system; exemplariness in integrating GIS, GPS, and image processing technologies; and practicality, which gives LAISAS flexibility and a high performance-to-cost ratio. It can therefore be used for fundamental remote sensing research and for large-scale mapping in resource exploration, environmental monitoring, disaster prediction, and military purposes.

  14. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  15. Multispectral image enhancement processing for microsat-borne imager

    NASA Astrophysics Data System (ADS)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

    With the rapid development of remote sensing imaging technology, the micro satellite, a kind of tiny spacecraft, has appeared during the past few years. Many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, micro satellites weigh less than 100 kilograms, and sometimes less than 50 kilograms, making them only slightly larger or smaller than a common miniature refrigerator. However, the optical system design can hardly be perfect because of the limited room and weight budget of the satellite. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet application needs. Spatial resolution is the key problem: for remote sensing applications, the higher the spatial resolution of the images, the wider the range of fields in which they can be applied. Consequently, how to utilize super resolution (SR) and image fusion to enhance the quality of imagery deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper presents a multispectral image enhancement framework for space-borne imagery that combines pan-sharpening and super resolution techniques to address the spatial resolution shortcoming of microsatellites. We test the framework on remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.

  16. Thematic Conference on Remote Sensing for Exploration Geology, 6th, Houston, TX, May 16-19, 1988, Proceedings. Volumes 1 & 2

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Papers concerning remote sensing applications for exploration geology are presented, covering topics such as remote sensing technology, data availability, frontier exploration, and exploration in mature basins. Other topics include offshore applications, geobotany, mineral exploration, engineering and environmental applications, image processing, and prospects for future developments in remote sensing for exploration geology. Consideration is given to the use of data from Landsat, MSS, TM, SAR, short wavelength IR, the Geophysical Environmental Research Airborne Scanner, gas chromatography, sonar imaging, the Airborne Visible-IR Imaging Spectrometer, field spectrometry, airborne thermal IR scanners, SPOT, AVHRR, SIR, the Large Format camera, and multi-temporal satellite photographs.

  17. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, James L.; Wiczer, James J.

    1995-01-01

    A system and a method is provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.

  18. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, James L.; Wiczer, James J.

    1994-01-01

    A system and a method for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.

  19. A new framework for UAV-based remote sensing data processing and its application in almond water stress quantification

    USDA-ARS?s Scientific Manuscript database

    With the rapid development of small imaging sensors and unmanned aerial vehicles (UAVs), remote sensing is undergoing a revolution with greatly increased spatial and temporal resolutions. While more relevant detail becomes available, it is a challenge to analyze the large number of images to extract...

  20. Central Processing of the Chemical Senses: An Overview

    PubMed Central

    2010-01-01

    Our knowledge regarding the neural processing of the three chemical senses has been considerably lagging behind that of our other senses. It is only during the last 25 years that significant advances have been made in our understanding of where in the human brain odors, tastants, and trigeminal stimuli are processed. Here, we provide an overview of the current knowledge of how the human brain processes chemical stimuli based on findings in neuroimaging studies using positron emission tomography and functional magnetic resonance imaging. Additionally, we provide new insights from recent meta-analyses, on the basis of all published neuroimaging studies of the chemical senses, of where the chemical senses converge in the brain. PMID:21503268

  1. Thirty years of use and improvement of remote sensing, applied to epidemiology: from early promises to lasting frustration.

    PubMed

    Herbreteau, Vincent; Salem, Gérard; Souris, Marc; Hugot, Jean-Pierre; Gonzalez, Jean-Paul

    2007-06-01

    Remote sensing, referring to the remote study of objects, was originally developed for Earth observation, through the use of sensors on board planes or satellites. Improvements in the use and accessibility of multi-temporal satellite-derived environmental data have, for 30 years, contributed to a growing use in epidemiology. Despite the potential of remote-sensed images and processing techniques for a better knowledge of disease dynamics, an exhaustive analysis of the bibliography shows a generalized use of pre-processed spatial data and low-cost images, resulting in a limited adaptability when addressing biological questions.

  2. A scene-analysis approach to remote sensing. [San Francisco, California

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M. (Principal Investigator); Fischler, M. A.; Wolf, H. C.

    1978-01-01

    The author has identified the following significant results. Geometric correspondence between a sensed image and a symbolic map is established in an initial stage of processing by adjusting parameters of a sensed model so that the image features predicted from the map optimally match corresponding features extracted from the sensed image. Information in the map is then used to constrain where to look in an image, what to look for, and how to interpret what is seen. For simple monitoring tasks involving multispectral classification, these constraints significantly reduce computation, simplify interpretation, and improve the utility of the resulting information. Previously intractable tasks requiring spatial and textural analysis may become straightforward in the context established by the map knowledge. The use of map-guided image analysis in monitoring the volume of water in a reservoir, the number of boxcars in a railyard, and the number of ships in a harbor is demonstrated.

  3. On the Satisfaction of Modulus and Ambiguity Function Constraints in Radar Waveform Optimization for Detection

    DTIC Science & Technology

    2010-06-01

    sense that the two waveforms are as close as possible in a Euclidean sense. Li et al. [33] later devised an algorithm that provides the optimal waveform...respectively), and the SWORD algorithm in [33]. These algorithms were designed for the problem of detecting a known signal in the presence of wide-sense...sensing, astronomy, crystallography, signal processing, and image processing. (See references in the works cited below for examples.) In the general

  4. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    NASA Astrophysics Data System (ADS)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric details can be observed in high resolution remote sensing images, and ground objects display rich texture, structure, shape and hierarchical semantic characteristics, with more landscape elements represented by small groups of pixels. In recent years, the object-based remote sensing analysis methodology has been widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) a hierarchical ground object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground object semantics and the over-segmentation regions are defined within a conditional random fields framework; (4) hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remote sensing image data (GeoEye) are used to test the performance of the presented method. The experimental results show the superiority of this method to the eCognition method in both effectiveness and accuracy, which implies that it is suitable for the classification of high resolution remote sensing images.

  5. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    PubMed Central

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585

  6. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    PubMed

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.

  7. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement

    PubMed Central

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-01-01

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, the spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of the information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893
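
    The reference-selection step described above reduces to computing the Shannon entropy of each candidate image and keeping the one with the maximum entropy; a minimal sketch of that step, with a 256-bin histogram as an illustrative assumption, is shown below.

      import numpy as np

      def image_entropy(img, bins=256):
          """Shannon entropy (bits) of the gray-level histogram of one image."""
          hist, _ = np.histogram(img, bins=bins)
          p = hist / hist.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      def pick_reference(images):
          """images: list of 2-D arrays of the same scene at different times."""
          return int(np.argmax([image_entropy(im) for im in images]))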

  8. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement.

    PubMed

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-02-07

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, the spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of the information. The multi-scale image information is then decomposed using the L₀ gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual and accuracy improvements.

  9. Higher resolution satellite remote sensing and the impact on image mapping

    USGS Publications Warehouse

    Watkins, Allen H.; Thormodsgard, June M.

    1987-01-01

    Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. Availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, present new cartographic and image processing challenges.The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.

  10. High efficient optical remote sensing images acquisition for nano-satellite: reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Feng; Xin, Lei; Fu, Jie; Huang, Puming

    2017-10-01

    Large amounts of data are one of the most obvious features of satellite-based remote sensing systems, and they are a burden for data processing and transmission. The theory of compressive sensing (CS) has been proposed for almost a decade, and massive experiments show that CS has favorable performance in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, the construction of a classical sensing matrix for all sparse signals has to satisfy the Restricted Isometry Property (RIP) strictly, which limits the practical application of CS to image compression. For remote sensing images, however, some inherent characteristics such as non-negativity and smoothness are known in advance. Therefore, the goal of this paper is to present a novel measurement matrix that does not rely on the RIP. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix for thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth in about 90 minutes and the revisit cycle is also short, lots of previously captured remote sensing images of the same place are available in advance. This drives us to reconstruct remote sensing images through a deep learning approach with measurements from the new framework. Therefore, we propose a novel deep convolutional neural network (CNN) architecture which takes undersampling measurements as input and outputs an intermediate reconstruction image. It is well known that the training procedure of the network takes a long time; luckily, the training step needs to be done only once, which makes the approach attractive for a host of sparse recovery problems.
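
    The sketch below illustrates one plausible form of the combined measurement matrix described above: block-averaging rows that produce a Nyquist-sampled thumbnail, stacked with random Gaussian rows for the conventional CS part. Block length and the number of Gaussian rows are illustrative assumptions; the CNN reconstruction stage is not shown.

      import numpy as np

      def combined_measurement_matrix(n, thumb_block=8, m_cs=64, seed=0):
          """n: length of the vectorized signal block to be measured."""
          rng = np.random.default_rng(seed)
          n_thumb = n // thumb_block
          thumb_rows = np.zeros((n_thumb, n))
          for i in range(n_thumb):                    # each row averages one block
              thumb_rows[i, i * thumb_block:(i + 1) * thumb_block] = 1.0 / thumb_block
          cs_rows = rng.standard_normal((m_cs, n)) / np.sqrt(m_cs)
          return np.vstack([thumb_rows, cs_rows])     # thumbnail part on top, CS part below

      phi = combined_measurement_matrix(n=256)
      y = phi @ np.random.rand(256)                   # measurements to downlink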

  11. Developing an undergraduate geography course on digital image processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    Baumann, P. R.

    1981-01-01

Problems relating to the development of a digital image processing course in an undergraduate geography environment are discussed. Computer resource requirements, course prerequisites, and the size of the study area are addressed.

  12. CORSE-81: The 1981 Conference on Remote Sensing Education

    NASA Technical Reports Server (NTRS)

    Davis, S. M. (Compiler)

    1981-01-01

    Summaries of the presentations and tutorial workshops addressing various strategies in remote sensing education are presented. Course design from different discipline perspectives, equipment requirements for image interpretation and processing, and the role of universities, private industry, and government agencies in the education process are covered.

  13. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

Conventional compressive sensing relies on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction often suffers. Firstly, block-based compressed sensing (BCS) with the conventional choice of compressive measurements is reviewed. Then an estimation method for image sparsity is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained as the proportion of dominant coefficients. Finally, simulation results show that the method can estimate image sparsity effectively and provides a practical basis for choosing the number of compressive observations. The results also show that, since the number of observations is chosen according to the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
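    The sparsity-estimation step described above can be sketched directly: take the 2D DCT, normalize and sort the coefficient energies, and report the fraction of dominant coefficients needed to reach a given energy threshold. The threshold value and helper name below are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dctn

def estimate_sparsity(image, energy_threshold=0.99):
    """Estimate image sparsity from the 2D DCT: normalize coefficient
    energies, sort them in descending order, and return the fraction of
    coefficients needed to reach the given energy threshold.
    """
    coeffs = dctn(image.astype(np.float64), norm='ortho')
    energy = coeffs.ravel() ** 2
    energy /= energy.sum()                     # energy normalization
    sorted_energy = np.sort(energy)[::-1]      # descending order
    cumulative = np.cumsum(sorted_energy)
    k = np.searchsorted(cumulative, energy_threshold) + 1  # dominant coefficients
    return k / energy.size                     # sparsity ratio

# A smooth synthetic "image" should be highly compressible (small ratio)
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
print(estimate_sparsity(img))
```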

  14. The microcomputer workstation - An alternate hardware architecture for remotely sensed image analysis

    NASA Technical Reports Server (NTRS)

    Erickson, W. K.; Hofman, L. B.; Donovan, W. E.

    1984-01-01

    Difficulties regarding the digital image analysis of remotely sensed imagery can arise in connection with the extensive calculations required. In the past, an expensive large to medium mainframe computer system was needed for performing these calculations. For image-processing applications smaller minicomputer-based systems are now used by many organizations. The costs for such systems are still in the range from $100K to $300K. Recently, as a result of new developments, the use of low-cost microcomputers for image processing and display systems appeared to have become feasible. These developments are related to the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed by Stanford University, and the design and implementation of a workstation network.

  15. Translational Imaging Spectroscopy for Proximal Sensing

    PubMed Central

    Rogass, Christian; Koerting, Friederike M.; Mielke, Christian; Brell, Maximilian; Boesche, Nina K.; Bade, Maria; Hohmann, Christian

    2017-01-01

    Proximal sensing as the near field counterpart of remote sensing offers a broad variety of applications. Imaging spectroscopy in general and translational laboratory imaging spectroscopy in particular can be utilized for a variety of different research topics. Geoscientific applications require a precise pre-processing of hyperspectral data cubes to retrieve at-surface reflectance in order to conduct spectral feature-based comparison of unknown sample spectra to known library spectra. A new pre-processing chain called GeoMAP-Trans for at-surface reflectance retrieval is proposed here as an analogue to other algorithms published by the team of authors. It consists of a radiometric, a geometric and a spectral module. Each module consists of several processing steps that are described in detail. The processing chain was adapted to the broadly used HySPEX VNIR/SWIR imaging spectrometer system and tested using geological mineral samples. The performance was subjectively and objectively evaluated using standard artificial image quality metrics and comparative measurements of mineral and Lambertian diffuser standards with standard field and laboratory spectrometers. The proposed algorithm provides highly qualitative results, offers broad applicability through its generic design and might be the first one of its kind to be published. A high radiometric accuracy is achieved by the incorporation of the Reduction of Miscalibration Effects (ROME) framework. The geometric accuracy is higher than 1 μpixel. The critical spectral accuracy was relatively estimated by comparing spectra of standard field spectrometers to those from HySPEX for a Lambertian diffuser. The achieved spectral accuracy is better than 0.02% for the full spectrum and better than 98% for the absorption features. It was empirically shown that point and imaging spectrometers provide different results for non-Lambertian samples due to their different sensing principles, adjacency scattering impacts on the signal and anisotropic surface reflection properties. PMID:28800111

  16. Image processing methods used to simulate flight over remotely sensed data

    NASA Technical Reports Server (NTRS)

    Mortensen, H. B.; Hussey, K. J.; Mortensen, R. A.

    1988-01-01

It has been demonstrated that image processing techniques can provide an effective means of simulating flight over remotely sensed data (Hussey et al. 1986). This paper explains the methods used to simulate and animate three-dimensional surfaces from two-dimensional imagery. The preprocessing techniques used on the input data, the selection of the animation sequence, the generation of the animation frames, and the recording of the animation are covered. The software used for all steps is discussed.

  17. Difet: Distributed Feature Extraction Tool for High Spatial Resolution Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Eken, S.; Aydın, E.; Sayar, A.

    2017-11-01

In this paper, we propose a distributed feature extraction tool for high spatial resolution remote sensing images. The tool is based on the Apache Hadoop framework and the Hadoop Image Processing Interface. Two corner detection algorithms (Harris and Shi-Tomasi) and five feature descriptors (SIFT, SURF, FAST, BRIEF, and ORB) are considered. Robustness of the tool in the task of feature extraction from Landsat-8 imagery is evaluated in terms of horizontal scalability.
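    As a rough illustration of the per-tile work such a tool distributes (the Hadoop/HIPI wiring is omitted), the sketch below runs Shi-Tomasi corner detection and ORB description on a single image chip with OpenCV; parameter values are assumptions.

```python
import cv2
import numpy as np

def extract_tile_features(tile_gray):
    """Per-tile feature extraction step (corner detection plus descriptors)
    of the kind a distributed tool would run inside each map task; the
    Hadoop/HIPI plumbing is not shown here.
    """
    # Shi-Tomasi corners
    corners = cv2.goodFeaturesToTrack(tile_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=5)
    # ORB keypoints and binary descriptors
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(tile_gray, None)
    return corners, keypoints, descriptors

# Synthetic 8-bit tile standing in for a Landsat-8 image chip
tile = (np.random.rand(512, 512) * 255).astype(np.uint8)
corners, kps, desc = extract_tile_features(tile)
print(len(kps), None if desc is None else desc.shape)
```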

  18. Focal-Plane Sensing-Processing: A Power-Efficient Approach for the Implementation of Privacy-Aware Networked Visual Sensors

    PubMed Central

    Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel

    2014-01-01

    The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849

  19. Focal-plane sensing-processing: a power-efficient approach for the implementation of privacy-aware networked visual sensors.

    PubMed

    Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel

    2014-08-19

    The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects.

  20. Optimized computational imaging methods for small-target sensing in lens-free holographic microscopy

    NASA Astrophysics Data System (ADS)

    Xiong, Zhen; Engle, Isaiah; Garan, Jacob; Melzer, Jeffrey E.; McLeod, Euan

    2018-02-01

Lens-free holographic microscopy is a promising diagnostic approach because it is cost-effective, compact, and suitable for point-of-care applications, while providing high resolution together with an ultra-large field-of-view. It has been applied to biomedical sensing, where larger targets like eukaryotic cells, bacteria, or viruses can be directly imaged without labels, and smaller targets like proteins or DNA strands can be detected via scattering labels like micro- or nano-spheres. Automated image processing routines can count objects and infer target concentrations. In these sensing applications, sensitivity and specificity are critically affected by image resolution and signal-to-noise ratio (SNR). Pixel super-resolution approaches have been shown to boost resolution and SNR by synthesizing a high-resolution image from multiple, partially redundant, low-resolution images. However, there are several computational methods that can be used to synthesize the high-resolution image, and previously, it has been unclear which methods work best for the particular case of small-particle sensing. Here, we quantify the SNR achieved in small-particle sensing using regularized gradient-descent optimization methods, where the regularization is based on cardinal-neighbor differences, Bayer-pattern noise reduction, or sparsity in the image. In particular, we find that gradient-descent with sparsity-based regularization works best for small-particle sensing. These computational approaches were evaluated on images acquired using a lens-free microscope that we assembled from an off-the-shelf LED array and color image sensor. Compared to other lens-free imaging systems, our hardware integration, calibration, and sample preparation are particularly simple. We believe our results will help to enable the best performance in lens-free holographic sensing.
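    A minimal sketch of gradient-descent pixel super-resolution with sparsity-based regularization, in the spirit of the approach the abstract favors. The forward model (integer shifts plus decimation), step size, and regularization weight are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_pixel_superres(observations, shape, factor, lam=1e-3,
                          step=0.2, iters=200):
    """Proximal gradient descent (ISTA-style) for pixel super-resolution:
    the data term sums squared errors against shifted, decimated copies of
    the high-resolution estimate; the prox step enforces sparsity.
    observations: list of (dy, dx, lowres_image) tuples.
    """
    x = np.zeros(shape)
    for _ in range(iters):
        grad = np.zeros(shape)
        for dy, dx, y in observations:
            shifted = np.roll(np.roll(x, dy, axis=0), dx, axis=1)
            residual = shifted[::factor, ::factor] - y
            up = np.zeros(shape)
            up[::factor, ::factor] = residual          # adjoint of decimation
            grad += np.roll(np.roll(up, -dy, axis=0), -dx, axis=1)
        x = soft_threshold(x - step * grad, step * lam)  # sparsity prox
    return x

# Toy example: a few bright particles observed through shifted decimation
truth = np.zeros((64, 64)); truth[10, 12] = truth[40, 33] = 1.0
obs = [(dy, dx, np.roll(np.roll(truth, dy, 0), dx, 1)[::4, ::4])
       for dy, dx in [(0, 0), (1, 2), (2, 1), (3, 3)]]
recon = sparse_pixel_superres(obs, truth.shape, 4)
print(float(recon.max()))
```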

  1. City of Flagstaff Project: Ground Water Resource Evaluation, Remote Sensing Component

    USGS Publications Warehouse

    Chavez, Pat S.; Velasco, Miguel G.; Bowell, Jo-Ann; Sides, Stuart C.; Gonzalez, Rosendo R.; Soltesz, Deborah L.

    1996-01-01

Many regions, cities, and towns in the Western United States need new or expanded water resources because of both population growth and increased development. Any tools or data that can help in the evaluation of an area's potential water resources must be considered for this increasingly critical need. Remotely sensed satellite images and subsequent digital image processing have been under-utilized in ground water resource evaluation and exploration. Satellite images can be helpful in detecting and mapping an area's regional structural patterns, including major fracture and fault systems, two important geologic settings for an area's surface-to-ground-water relations. Within the United States Geological Survey's (USGS) Flagstaff Field Center, expertise and capabilities in remote sensing and digital image processing have been developed over the past 25 years through various programs. For the City of Flagstaff project, this expertise and these capabilities were combined with traditional geologic field mapping to help evaluate ground water resources in the Flagstaff area. Various enhancement and manipulation procedures were applied to the digital satellite images; the results, in both digital and hardcopy format, were used for field mapping and analyzing the regional structure. Relative to surface sampling, remotely sensed satellite and airborne images have improved spatial coverage that can help study, map, and monitor the earth surface at local and/or regional scales. Advantages offered by remotely sensed satellite image data include: 1. a synoptic/regional view compared to both aerial photographs and ground sampling, 2. cost effectiveness, 3. high spatial resolution and coverage compared to ground sampling, and 4. relatively high temporal coverage on a long term basis. Remotely sensed images contain both spectral and spatial information. The spectral information provides various properties and characteristics about the surface cover at a given location or pixel (that is, vegetation and/or soil type). The spatial information gives the distribution, variation, and topographic relief of the cover types from pixel to pixel. Therefore, the main characteristics that determine a pixel's brightness/reflectance and, consequently, the digital number (DN) assigned to the pixel, are the physical properties of the surface and near surface, the cover type, and the topographic slope. In this application, the ability to detect and map lineaments, especially those related to fractures and faults, is critical. Therefore, the extraction of spatial information from the digital images was of prime interest in this project. The spatial information varies among the different spectral bands available; in particular, a near infrared spectral band is better than a visible band when extracting spatial information in highly vegetated areas. In this study, both visible and near infrared bands were analyzed and used to extract the desired spatial information from the images. The wide swath coverage of remotely sensed satellite digital images makes them ideal for regional analysis and mapping. Since locating and mapping highly fractured and faulted areas is a major requirement for ground water resource evaluation and exploration, this aspect of satellite images was considered critical; it allowed us to stand back (actually up about 440 miles), look at, and map the regional structural setting of the area.
The main focus of the remote sensing and digital image processing component of this project was to use both remotely sensed digital satellite images and a Digital Elevation Model (DEM) to extract spatial information related to the structural and topographic patterns in the area. The data types used were digital satellite images collected by the United States' Landsat Thematic Mapper (TM) and French Systeme Probatoire d'Observation de laTerre (SPOT) imaging systems, along with a DEM of the Flagstaff region. The USGS Mini Image Processing Sy

  2. Enhancement of Capabilities in Hyperspectral and Radar Remote Sensing for Environmental Assessment and Monitoring

    NASA Technical Reports Server (NTRS)

    Hepner, George F.

    1999-01-01

The University of Utah, Department of Geography has developed a research and instructional program in satellite remote sensing and image processing. The University requested funds for the purchase of software licenses, mass storage for massive hyperspectral imager data sets, upgrades for the central data server to handle the additional storage capacity, and a spectroradiometer for field data collection. These purchases have been made. This equipment will support research in one of the newest and most rapidly expanding areas of remote sensing.

  3. ROLES OF REMOTE SENSING AND CARTOGRAPHY IN THE USGS NATIONAL MAPPING DIVISION.

    USGS Publications Warehouse

    Southard, Rupert B.; Salisbury, John W.

    1983-01-01

    The inseparable roles of remote sensing and photogrammetry have been recognized to be consistent with the aims and interests of the American Society of Photogrammetry. In particular, spatial data storage, data merging and manipulation methods and other techniques originally developed for remote sensing applications also have applications for digital cartography. Also, with the introduction of much improved digital processing techniques, even relatively low resolution (80 m) traditional Landsat images can now be digitally mosaicked into excellent quality 1:250,000-scale image maps.

  4. Land Use and Change

    NASA Technical Reports Server (NTRS)

    Irwin, Daniel E.

    2004-01-01

    The overall purpose of this training session is to familiarize Central American project cooperators with the remote sensing and image processing research that is being conducted by the NASA research team and to acquaint them with the data products being produced in the areas of Land Cover and Land Use Change and carbon modeling under the NASA SERVIR project. The training session, therefore, will be both informative and practical in nature. Specifically, the course will focus on the physics of remote sensing, various satellite and airborne sensors (Landsat, MODIS, IKONOS, Star-3i), processing techniques, and commercial off the shelf image processing software.

  5. A Remote Sensing Image Fusion Method based on adaptive dictionary learning

    NASA Astrophysics Data System (ADS)

    He, Tongdi; Che, Zongxi

    2018-01-01

This paper discusses a remote sensing image fusion method based on adaptive sparse representation (ASP), intended to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused; the dictionary is then constructed from the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterative processing at each step. Second, a self-adaptive weighted-coefficient rule based on regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused image. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
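    The regional-energy weighting rule can be sketched on its own, leaving out the dictionary-learning and sparse-coding stages; the patch size and the specific weighting formula below are assumptions for illustration.

```python
import numpy as np

def regional_energy_fusion(img_a, img_b, patch=8):
    """Patch-wise fusion using a self-adaptive regional-energy weighting
    idea: each patch of the fused image is a weighted mix of the source
    patches, with weights proportional to local energy. The dictionary
    learning / sparse coding stage of the paper is omitted.
    """
    fused = np.zeros_like(img_a, dtype=np.float64)
    h, w = img_a.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            pa = img_a[i:i+patch, j:j+patch].astype(np.float64)
            pb = img_b[i:i+patch, j:j+patch].astype(np.float64)
            ea, eb = (pa**2).sum(), (pb**2).sum()
            wa = ea / (ea + eb) if (ea + eb) > 0 else 0.5
            fused[i:i+patch, j:j+patch] = wa * pa + (1 - wa) * pb
    return fused

a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
print(regional_energy_fusion(a, b).shape)
```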

  6. Accommodating Student Diversity in Remote Sensing Instruction.

    ERIC Educational Resources Information Center

    Hammen, John L., III.

    1992-01-01

    Discusses the difficulty of teaching computer-based remote sensing to students of varying levels of computer literacy. Suggests an instructional method that accommodates all levels of technical expertise through the use of microcomputers. Presents a curriculum that includes an introduction to remote sensing, digital image processing, and…

  7. Advanced image based methods for structural integrity monitoring: Review and prospects

    NASA Astrophysics Data System (ADS)

    Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.

    2018-02-01

    There is a growing trend in engineering to develop methods for structural integrity monitoring and characterization of in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics, brought about a paradigm change in phenomena sensing. Hence, several widely applicable optical approaches are playing a significant role in support of experiment. The current review manuscript describes advanced image based methods for structural integrity monitoring, and focuses on methods such as Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact full-field techniques rely on intensive image processing methods to measure mechanical behaviour, and evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.

  8. Optical correlators for automated rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1991-01-01

The paper begins with a description of optical correlation. In this process, the propagation physics of coherent light is used to process images and extract information. The processed image is operated on as an area, rather than as a collection of points. An essentially instantaneous convolution is performed on that image to provide the sensory data. In this process, an image is sensed and encoded onto a coherent wavefront, and the propagation is arranged to create a bright spot where the image matches a model of the desired object. The brightness of the spot provides an indication of the degree of resemblance of the viewed image to the model, and the location of the bright spot provides pointing information. The process can be utilized for AR&C to achieve the capability to identify objects among known reference types, estimate the object's location and orientation, and interact with the control system. System characteristics (speed, robustness, accuracy, small form factors) are adequate to meet most requirements. The correlator exploits the fact that bosons and fermions pass through each other. Since the image source is input as an electronic data set, conventional imagers can be used. In systems where the image is input directly, the correlating element must be at the sensing location.
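    The correlation idea described here has a simple digital analogue: FFT-based cross-correlation of a scene with a reference template, where the brightness of the resulting peak measures resemblance and its location gives pointing information. The sketch below is that digital analogue, not the optical implementation.

```python
import numpy as np

def correlation_peak(scene, template):
    """Digital analogue of the optical correlator: cross-correlate a scene
    with a reference template via FFTs and report the peak brightness and
    location (the 'bright spot' that encodes resemblance and pointing).
    """
    F_scene = np.fft.fft2(scene)
    F_templ = np.fft.fft2(template, s=scene.shape)
    corr = np.real(np.fft.ifft2(F_scene * np.conj(F_templ)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr.max(), peak

scene = np.zeros((128, 128)); scene[40:48, 60:68] = 1.0   # object in the scene
template = np.ones((8, 8))                                # reference model
value, location = correlation_peak(scene, template)
print(value, location)   # peak near the object's upper-left corner, (40, 60)
```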

  9. Northern Everglades, Florida, satellite image map

    USGS Publications Warehouse

    Thomas, Jean-Claude; Jones, John W.

    2002-01-01

    These satellite image maps are one product of the USGS Land Characteristics from Remote Sensing project, funded through the USGS Place-Based Studies Program with support from the Everglades National Park. The objective of this project is to develop and apply innovative remote sensing and geographic information system techniques to map the distribution of vegetation, vegetation characteristics, and related hydrologic variables through space and over time. The mapping and description of vegetation characteristics and their variations are necessary to accurately simulate surface hydrology and other surface processes in South Florida and to monitor land surface changes. As part of this research, data from many airborne and satellite imaging systems have been georeferenced and processed to facilitate data fusion and analysis. These image maps were created using image fusion techniques developed as part of this project.

  10. Automated seamline detection along skeleton for remote sensing image mosaicking

    NASA Astrophysics Data System (ADS)

    Zhang, Hansong; Chen, Jianyu; Liu, Xin

    2015-08-01

The automatic generation of a seamline along the overlap-region skeleton is a key problem in the mosaicking of remote sensing (RS) images. With the improvement of RS image resolution, it is necessary to ensure rapid and accurate processing under complex conditions. An automated seamline detection method for RS image mosaicking, based on image objects and overlap-region contour contraction, is therefore introduced; by this means, both the universality and the efficiency of mosaicking can be ensured. The experiments also show that this method can select seamlines in RS images with great speed and high accuracy over arbitrary overlap regions, and realize rapid RS image mosaicking in surveying and mapping production.

  11. Symposium on Machine Processing of Remotely Sensed Data, Purdue University, West Lafayette, Ind., June 29-July 1, 1976, Proceedings

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Papers are presented on the applicability of Landsat data to water management and control needs, IBIS, a geographic information system based on digital image processing and image raster datatype, and the Image Data Access Method (IDAM) for the Earth Resources Interactive Processing System. Attention is also given to the Prototype Classification and Mensuration System (PROCAMS) applied to agricultural data, the use of Landsat for water quality monitoring in North Carolina, and the analysis of geophysical remote sensing data using multivariate pattern recognition. The Illinois crop-acreage estimation experiment, the Pacific Northwest Resources Inventory Demonstration, and the effects of spatial misregistration on multispectral recognition are also considered. Individual items are announced in this issue.

  12. Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot.

    PubMed

    Shen, Yajing; Wan, Wenfeng; Zhang, Lijun; Yong, Li; Lu, Haojian; Ding, Weili

    2015-12-15

Image sensing at a small scale is critically important in many fields, including microsample observation, defect inspection, material characterization and so on. However, multi-directional micro object imaging is still very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multi-directional image sensing in microscopes by developing a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically based on the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot within one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical microscopy and scanning electron microscopy, and panoramic images of the samples are processed as well. The proposed method paves a new way for microscopy image sensing, and we believe it could have significant impact in many fields, especially for sample detection, manipulation and characterization at a small scale.

  13. Filtering method of star control points for geometric correction of remote sensing image based on RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Xiangli; Yang, Jungang; Deng, Xinpu

    2018-04-01

In the process of geometric correction of remote sensing images, a large number of redundant control points can occasionally result in low correction accuracy. To solve this problem, a control-point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest data set possible to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional geometric correction methods that use Ground Control Points (GCPs), simulation experiments are carried out in which remote sensing images are corrected using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
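    A minimal sketch of RANSAC-based control-point filtering under an assumed affine correction model: minimal samples of three point pairs estimate the transform, and the largest consensus set is kept. Tolerances, iteration counts, and the affine choice are assumptions, not the paper's exact settings.

```python
import numpy as np

def ransac_filter_gcps(src, dst, iters=1000, tol=2.0, seed=0):
    """Filter control points with RANSAC: repeatedly fit an affine
    transform to a minimal sample (3 point pairs), then keep the largest
    consensus set of points whose residual is below `tol` pixels.
    src, dst: (N, 2) arrays of matched control point coordinates.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    A_full = np.hstack([src, np.ones((n, 1))])      # model: dst ~= [x y 1] @ M
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)
        M, *_ = np.linalg.lstsq(A_full[idx], dst[idx], rcond=None)
        residual = np.linalg.norm(A_full @ M - dst, axis=1)
        inliers = residual < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the transform on the consensus set
    M, *_ = np.linalg.lstsq(A_full[best_inliers], dst[best_inliers], rcond=None)
    return best_inliers, M

# Synthetic control points: a known affine map plus a few gross outliers
rng = np.random.default_rng(1)
src = rng.uniform(0, 1000, size=(60, 2))
M_true = np.array([[1.01, 0.02], [-0.015, 0.99], [5.0, -3.0]])
dst = np.hstack([src, np.ones((60, 1))]) @ M_true + rng.normal(0, 0.3, (60, 2))
dst[:8] += rng.uniform(50, 100, size=(8, 2))        # corrupted points
inliers, M_hat = ransac_filter_gcps(src, dst)
print(inliers.sum(), "inliers kept")
```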

  14. Research Status and Development Trend of Remote Sensing in China Using Bibliometric Analysis

    NASA Astrophysics Data System (ADS)

    Zeng, Y.; Zhang, J.; Niu, R.

    2015-06-01

Remote sensing was introduced into China in the 1970s and then began to flourish. At present, China has developed into a major remote sensing country, and remote sensing plays an increasingly important role in various fields of national economic construction and social development. Based on the China Academic Journals Full-text Database and the China Citation Database published by China National Knowledge Infrastructure, this paper analyzed the academic characteristics of 963 highly cited papers published by 16 professional and academic journals in the field of surveying and mapping in China from January 2010 to December 2014, including hot topics, authors, research institutions, and funding sources. At the same time, it studied a total of 51,149 keywords published by these 16 journals during the same period. First through keyword selection, keyword normalization, keyword consistency checking and keyword incorporation, and then through analysis of high-frequency keywords, the progress and prospects of China's remote sensing technology in data acquisition, data processing and applications over the past five years were further explored and revealed. It can be seen that highly cited paper analysis and word frequency analysis are complementary for analyzing subject progress; in the data acquisition phase, the research focus is new civilian remote sensing satellite systems and UAV remote sensing systems; the research focus of data processing and analysis is multi-source information extraction and classification, laser point cloud data processing, object-oriented high resolution image analysis, SAR data and hyper-spectral image processing, etc.; the development trend of remote sensing data processing is toward quantitative, intelligent, automated, and real-time methods, with the breadth and depth of remote sensing applications gradually increasing; and parallel computing, cloud computing, and geographic conditions monitoring and census are new research focuses that deserve attention.

  15. Conference of Remote Sensing Educators (CORSE-78)

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Ways of improving the teaching of remote sensing students at colleges and universities are discussed. Formal papers and workshops on various Earth resources disciplines, image interpretation, and data processing concepts are presented. An inventory of existing remote sensing and related subject courses being given in western regional universities is included.

  16. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, J.L.; Wiczer, J.J.

    1994-01-25

A system and a method for imaging desired surfaces of a workpiece are described. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.

  17. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, J.L.; Wiczer, J.J.

    1995-01-03

A system and a method are provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.

  18. Uniform competency-based local feature extraction for remote sensing images

    NASA Astrophysics Data System (ADS)

    Sedaghat, Amin; Mohammadi, Nazila

    2018-01-01

Local feature detectors are widely used in many photogrammetry and remote sensing applications. The quantity and distribution of the local features play a critical role in the quality of the image matching process, particularly for multi-sensor high resolution remote sensing image registration. However, conventional local feature detectors cannot extract desirable matched features either in terms of the number of correct matches or the spatial and scale distribution in multi-sensor remote sensing images. To address this problem, this paper proposes a novel method for uniform and robust local feature extraction for remote sensing images, which is based on a novel competency criterion and scale and location distribution constraints. The proposed method, called uniform competency (UC) local feature extraction, can be easily applied to any local feature detector for various kinds of applications. The proposed competency criterion is based on a weighted ranking process using three quality measures, including robustness, spatial saliency and scale parameters, which is performed in a multi-layer gridding schema. For evaluation, five state-of-the-art local feature detector approaches, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), scale-invariant feature operator (SFOP), maximally stable extremal region (MSER) and Hessian-affine, are used. The proposed UC-based feature extraction algorithms were successfully applied to match various synthetic and real satellite image pairs, and the results demonstrate their capability to increase matching performance and to improve the spatial distribution. The code to carry out the UC feature extraction is available from https://www.researchgate.net/publication/317956777_UC-Feature_Extraction.
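    The weighted-ranking idea can be sketched as follows, assuming each feature carries robustness, saliency and scale measures and that selection is done per grid cell; the weights, grid size and per-cell quota are illustrative assumptions rather than the paper's competency criterion.

```python
import numpy as np

def uniform_competency_select(keypoints, grid=(4, 4), per_cell=50,
                              weights=(0.5, 0.3, 0.2)):
    """Illustrative weighted-ranking selection: score each feature by a
    weighted sum of robustness, spatial saliency and scale measures, then
    keep the top-ranked features per grid cell to enforce a uniform
    spatial distribution. Field names and weights are assumptions.
    keypoints: array of rows (x, y, robustness, saliency, scale).
    """
    kp = np.asarray(keypoints, dtype=np.float64)
    x, y = kp[:, 0], kp[:, 1]
    score = kp[:, 2:5] @ np.asarray(weights)
    gx = np.clip((x / (x.max() + 1e-9) * grid[0]).astype(int), 0, grid[0] - 1)
    gy = np.clip((y / (y.max() + 1e-9) * grid[1]).astype(int), 0, grid[1] - 1)
    keep = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = np.where((gx == i) & (gy == j))[0]
            ranked = cell[np.argsort(score[cell])[::-1]]
            keep.extend(ranked[:per_cell].tolist())
    return kp[keep]

pts = np.random.rand(2000, 5) * [1000, 1000, 1, 1, 1]
selected = uniform_competency_select(pts)
print(selected.shape)
```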

  19. An Image Retrieval and Processing Expert System for the World Wide Web

    NASA Technical Reports Server (NTRS)

    Rodriguez, Ricardo; Rondon, Angelica; Bruno, Maria I.; Vasquez, Ramon

    1998-01-01

    This paper presents a system that is being developed in the Laboratory of Applied Remote Sensing and Image Processing at the University of P.R. at Mayaguez. It describes the components that constitute its architecture. The main elements are: a Data Warehouse, an Image Processing Engine, and an Expert System. Together, they provide a complete solution to researchers from different fields that make use of images in their investigations. Also, since it is available to the World Wide Web, it provides remote access and processing of images.

  20. Remote-Sensing Time Series Analysis, a Vegetation Monitoring Tool

    NASA Technical Reports Server (NTRS)

    McKellip, Rodney; Prados, Donald; Ryan, Robert; Ross, Kenton; Spruce, Joseph; Gasser, Gerald; Greer, Randall

    2008-01-01

The Time Series Product Tool (TSPT) is software, developed in MATLAB, which creates and displays high signal-to-noise Vegetation Indices imagery and other higher-level products derived from remotely sensed data. This tool enables automated, rapid, large-scale regional surveillance of crops, forests, and other vegetation. TSPT temporally processes high-revisit-rate satellite imagery produced by the Moderate Resolution Imaging Spectroradiometer (MODIS) and by other remote-sensing systems. Although MODIS imagery is acquired daily, cloudiness and other sources of noise can greatly reduce the effective temporal resolution. To improve cloud statistics, the TSPT combines MODIS data from multiple satellites (Aqua and Terra). The TSPT produces MODIS products as single time-frame and multitemporal change images, as time-series plots at a selected location, or as temporally processed image videos. The TSPT program uses MODIS metadata to remove and/or correct bad and suspect data. Bad pixel removal, multiple satellite data fusion, and temporal processing techniques create high-quality plots and animated image video sequences that depict changes in vegetation greenness. This tool provides several temporal processing options not found in other comparable imaging software tools. Because the framework to generate and use other algorithms is established, small modifications to this tool will enable the use of a large range of remotely sensed data types. An effective remote-sensing crop monitoring system must be able to detect subtle changes in plant health in the earliest stages, before the effects of a disease outbreak or other adverse environmental conditions can become widespread and devastating. The integration of the time series analysis tool with ground-based information, soil types, crop types, meteorological data, and crop growth models in a Geographic Information System could provide the foundation for a large-area crop-surveillance system that could identify a variety of plant phenomena and improve monitoring capabilities.
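    A toy sketch of the temporal-processing idea (QA-based masking followed by a maximum-value composite through time); the array shapes and flag convention are assumptions and do not reflect TSPT's actual MODIS metadata handling.

```python
import numpy as np

def temporal_composite(ndvi_stack, qa_stack):
    """Toy version of the temporal processing idea: mask bad/suspect
    pixels using per-date QA flags, then take a maximum-value composite
    through time to suppress clouds and noise. Flag conventions here are
    assumptions, not MODIS metadata semantics.
    ndvi_stack: (T, H, W) float array of per-date NDVI.
    qa_stack:   (T, H, W) bool array, True where the pixel is usable.
    """
    masked = np.where(qa_stack, ndvi_stack, -np.inf)
    composite = masked.max(axis=0)
    composite[np.isneginf(composite)] = np.nan    # no valid observation
    return composite

T, H, W = 16, 200, 200
ndvi = np.random.uniform(-0.1, 0.9, size=(T, H, W))
qa = np.random.rand(T, H, W) > 0.3                # roughly 70% usable observations
print(np.nanmean(temporal_composite(ndvi, qa)))
```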

  1. The Real-Time Monitoring Service Platform for Land Supervision Based on Cloud Integration

    NASA Astrophysics Data System (ADS)

    Sun, J.; Mao, M.; Xiang, H.; Wang, G.; Liang, Y.

    2018-04-01

Remote sensing monitoring has become an important means for land and resources departments to strengthen supervision. Aiming at the problems of low monitoring frequency and poor data currency in current remote sensing monitoring, this paper researched and developed a cloud-integrated real-time monitoring service platform for land supervision, which increases the monitoring frequency by comprehensively acquiring domestic satellite image data and accelerates remote sensing image data processing by exploiting intelligent dynamic processing technology for multi-source images. Through a pilot application in the Jinan Bureau of State Land Supervision, the real-time monitoring method for land supervision has been shown to be feasible. In addition, real-time monitoring and early-warning functions are provided for illegal land use, permanent basic farmland protection and boundary breakthroughs in urban development. The application has achieved remarkable results.

  2. Looking back to inform the future: The role of cognition in forest disturbance characterization from remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel Anne

    Remotely sensed images have become a ubiquitous part of our daily lives. From novice users, aiding in search and rescue missions using tools such as TomNod, to trained analysts, synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with identification of land cover and land use change. Analysts participating in this research are currently working as part of a national level analysis of land use change, and are well versed with the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts as it improves their awareness of their mental processes used during the image interpretation process. The study also can be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes, but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis process was directly related to their amount of image analysis experience. Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.

  3. Precise calibration of pupil images in pyramid wavefront sensor.

    PubMed

    Liu, Yong; Mu, Quanquan; Cao, Zhaoliang; Hu, Lifa; Yang, Chengliang; Xuan, Li

    2017-04-20

    The pyramid wavefront sensor (PWFS) is a novel wavefront sensor with several inspiring advantages compared with Shack-Hartmann wavefront sensors. The PWFS uses four pupil images to calculate the local tilt of the incoming wavefront. Pupil images are conjugated with a telescope pupil so that each pixel in the pupil image is diffraction-limited by the telescope pupil diameter, thus the sensing error of the PWFS is much lower than that of the Shack-Hartmann sensor and is related to the extraction and alignment accuracy of pupil images. However, precise extraction of these images is difficult to conduct in practice. Aiming at improving the sensing accuracy, we analyzed the physical model of calibration of a PWFS and put forward an extraction algorithm. The process was verified via a closed-loop correction experiment. The results showed that the sensing accuracy of the PWFS increased after applying the calibration and extraction method.

  4. Thematic Conference on Geologic Remote Sensing, 8th, Denver, CO, Apr. 29-May 2, 1991, Proceedings. Vols. 1 & 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

The proceedings contain papers discussing the state-of-the-art exploration, engineering, and environmental applications of geologic remote sensing, along with the research and development activities aimed at increasing the future capabilities of this technology. The following topics are addressed: spectral geology, U.S. and international hydrocarbon exploration, radar and thermal infrared remote sensing, engineering geology and hydrogeology, mineral exploration, remote sensing for marine and environmental applications, image processing and analysis, geobotanical remote sensing, and data integration and geographic information systems. Particular attention is given to spectral alteration mapping with imaging spectrometers, mapping the coastal plain of the Congo with airborne digital radar, applications of remote sensing techniques to the assessment of dam safety, remote sensing of ferric iron minerals as guides for gold exploration, principal component analysis for alteration mapping, and the application of remote sensing techniques for gold prospecting in the north Fujian province.

  5. Remote Sensing of Landscapes with Spectral Images

    NASA Astrophysics Data System (ADS)

    Adams, John B.; Gillespie, Alan R.

    2006-05-01

Remote Sensing of Landscapes with Spectral Images describes how to process and interpret spectral images using physical models to bridge the gap between the engineering and theoretical sides of remote sensing and the world that we encounter when we venture outdoors. The emphasis is on the practical use of images rather than on theory and mathematical derivations. Examples are drawn from a variety of landscapes and interpretations are tested against the reality seen on the ground. The reader is led through analysis of real images (using figures and explanations); the examples are chosen to illustrate important aspects of the analytic framework. This textbook will form a valuable reference for graduate students and professionals in a variety of disciplines including ecology, forestry, geology, geography, urban planning, archeology and civil engineering. It is supplemented by a website hosting digital color versions of figures in the book as well as ancillary images (www.cambridge.org/9780521662214). The book presents a coherent view of practical remote sensing, leading from imaging and field work to the generation of useful thematic maps, and explains how to apply physical models to help interpret spectral images.

  6. Exploitation of commercial remote sensing images: reality ignored?

    NASA Astrophysics Data System (ADS)

    Allen, Paul C.

    1999-12-01

The remote sensing market is on the verge of being awash in commercial high-resolution images. Market estimates are based on the growing numbers of planned commercial remote sensing electro-optical, radar, and hyperspectral satellites and aircraft. EarthWatch, Space Imaging, SPOT, and RDL among others are all working towards launch and service of one to five meter panchromatic or radar-imaging satellites. Additionally, new advances in digital air surveillance and reconnaissance systems, both manned and unmanned, are also expected to expand the geospatial customer base. Regardless of platform, image type, or location, each system promises images with some combination of increased resolution, greater spectral coverage, reduced turn-around time (request-to-delivery), and/or reduced image cost. For the most part, however, market estimates for these new sources focus on the raw digital images (from collection to the ground station) while ignoring the requirements for a processing and exploitation infrastructure comprised of exploitation tools, exploitation training, library systems, and image management systems. From this it would appear the commercial imaging community has failed to learn the hard lessons of national government experience, choosing instead to ignore reality and replicate the bias of collection over processing and exploitation. While this trend may not impact the small number of users that exist today, it will certainly adversely affect the mid- to large-sized users of the future.

  7. The Land-Use and Land-Cover Change Analysis in Beijing Huairou in Last Ten Years

    NASA Astrophysics Data System (ADS)

    Zhao, Q.; Liu, G.; Tu, J.; Wang, Z.

    2018-04-01

Using eCognition software, a sample-based object-oriented classification method was applied to remote sensing images of the Huairou district of Beijing acquired over the last ten years. Based on the classification results, land-use types in the Huairou district were analyzed for the past decade, the changes in land-use types were obtained, and the reasons for these changes were analyzed.

  8. South Florida Everglades: satellite image map

    USGS Publications Warehouse

    Jones, John W.; Thomas, Jean-Claude; Desmond, G.B.

    2001-01-01

    These satellite image maps are one product of the USGS Land Characteristics from Remote Sensing project, funded through the USGS Place-Based Studies Program (http://access.usgs.gov/) with support from the Everglades National Park (http://www.nps.gov/ever/). The objective of this project is to develop and apply innovative remote sensing and geographic information system techniques to map the distribution of vegetation, vegetation characteristics, and related hydrologic variables through space and over time. The mapping and description of vegetation characteristics and their variations are necessary to accurately simulate surface hydrology and other surface processes in South Florida and to monitor land surface changes. As part of this research, data from many airborne and satellite imaging systems have been georeferenced and processed to facilitate data fusion and analysis. These image maps were created using image fusion techniques developed as part of this project.

  9. High-resolution imaging of cellular processes across textured surfaces using an indexed-matched elastomer.

    PubMed

    Ravasio, Andrea; Vaishnavi, Sree; Ladoux, Benoit; Viasnoff, Virgile

    2015-03-01

    Understanding and controlling how cells interact with the microenvironment has emerged as a prominent field in bioengineering, stem cell research and in the development of the next generation of in vitro assays as well as organs on a chip. Changing the local rheology or the nanotextured surface of substrates has proved an efficient approach to improve cell lineage differentiation, to control cell migration properties and to understand environmental sensing processes. However, introducing substrate surface textures often alters the ability to image cells with high precision, compromising our understanding of molecular mechanisms at stake in environmental sensing. In this paper, we demonstrate how nano/microstructured surfaces can be molded from an elastomeric material with a refractive index matched to the cell culture medium. Once made biocompatible, contrast imaging (differential interference contrast, phase contrast) and high-resolution fluorescence imaging of subcellular structures can be implemented through the textured surface using an inverted microscope. Simultaneous traction force measurements by micropost deflection were also performed, demonstrating the potential of our approach to study cell-environment interactions, sensing processes and cellular force generation with unprecedented resolution. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  10. Remote sensing in marine environment - acquiring, processing, and interpreting GLORIA sidescan sonor images of deep sea floor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, D.W.

    1989-03-01

The US Geological Survey's remote sensing instrument for regional imaging of the deep sea floor (> 400 m water depth) is the GLORIA (Geologic Long-Range Inclined Asdic) sidescan sonar system, designed and operated by the British Institute of Oceanographic Sciences. A 30-sec sweep rate provides for a swath width of approximately 45 km, depending on water depth. The return signal is digitally recorded as 8 bit data to provide a cross-range pixel dimension of 50 m. Postcruise image processing is carried out by using USGS software. Processing includes precision water-column removal, geometric and radiometric corrections, and contrast enhancement. Mosaicking includes map grid fitting, concatenation, and tone matching. Seismic reflection profiles, acquired along track during the survey, are image correlative and provide a subsurface dimension unique to marine remote sensing. Generally GLORIA image interpretation is based on brightness variations which are largely a function of (1) surface roughness at a scale of approximately 1 m and (2) slope changes of more than about 4 degrees over distances of at least 50 m. Broader, low-frequency changes in slope that cannot be detected from the GLORIA data can be determined from seismic profiles. Digital files of bathymetry derived from echo-sounder data can be merged with GLORIA image data to create relief models of the sea floor for geomorphic interpretation of regional slope effects.

  11. Laser Covariance Vibrometry for Unsymmetrical Mode Detection

    DTIC Science & Technology

    2006-09-01

... surface roughness. Results show that the remote sensing spectra adequately match the structural vibration, including non-imaging spatially ... the speckle profile (cross-section), is an air turbulence effect ignored in this work that will affect both the sensed vibration phase change and ... like spike impulse. ... Chapter three describes optical processing issues. This chapter delineates the image propagation algorithms used for the work.

  12. A 'user friendly' geographic information system in a color interactive digital image processing system environment

    NASA Technical Reports Server (NTRS)

    Campbell, W. J.; Goldberg, M.

    1982-01-01

    NASA's Eastern Regional Remote Sensing Applications Center (ERRSAC) has recognized the need to accommodate spatial analysis techniques in its remote sensing technology transfer program. A computerized Geographic Information System to incorporate remotely sensed data, specifically Landsat, with other relevant data was considered a realistic approach to address a given resource problem. Questions arose concerning the selection of a suitable available software system to demonstrate, train, and undertake demonstration projects with ERRSAC's user community. The very specific requirements for such a system are discussed. The solution found involved the addition of geographic information processing functions to the Interactive Digital Image Manipulation System (IDIMS). Details regarding the functions of the new integrated system are examined along with the characteristics of the software.

  13. Rapid Target Detection in High Resolution Remote Sensing Images Using Yolo Model

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Chen, X.; Gao, Y.; Li, Y.

    2018-04-01

Object detection in high resolution remote sensing images is a fundamental and challenging problem in the field of remote sensing imagery analysis for civil and military applications, because complex neighboring environments can cause recognition algorithms to mistake irrelevant ground objects for target objects. Deep Convolutional Neural Networks (DCNNs) are the current focus in object detection owing to their powerful feature extraction ability, and they have achieved state-of-the-art results in computer vision. A common DCNN-based object detection pipeline consists of region proposal, CNN feature extraction, region classification and post-processing. The YOLO model instead frames object detection as a regression problem: a single CNN predicts bounding boxes and class probabilities in an end-to-end way, which makes prediction faster. In this paper, a YOLO-based model is used for object detection in high resolution remote sensing images. Experiments on the NWPU VHR-10 dataset and our airport/airplane dataset gathered from Google Earth show that, compared with the common pipeline, the proposed model speeds up the detection process while maintaining good accuracy.
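    To make the regression formulation concrete, the sketch below decodes a YOLO-style grid of box predictions and applies greedy non-maximum suppression; the output layout, thresholds, and single-class simplification are assumptions, not the paper's network head.

```python
import numpy as np

def decode_grid_predictions(pred, conf_thresh=0.5, img_size=512):
    """Decode a YOLO-style regression output into boxes. `pred` has shape
    (S, S, 5): per-cell (tx, ty, w, h, confidence) with tx, ty relative to
    the cell and w, h relative to the image. This layout is a
    simplification of the single-CNN regression idea.
    """
    S = pred.shape[0]
    boxes = []
    for i in range(S):
        for j in range(S):
            tx, ty, w, h, conf = pred[i, j]
            if conf < conf_thresh:
                continue
            cx = (j + tx) / S * img_size
            cy = (i + ty) / S * img_size
            boxes.append([cx - w * img_size / 2, cy - h * img_size / 2,
                          cx + w * img_size / 2, cy + h * img_size / 2, conf])
    return np.array(boxes)

def nms(boxes, iou_thresh=0.5):
    """Greedy non-maximum suppression on (x1, y1, x2, y2, score) rows."""
    if len(boxes) == 0:
        return boxes
    order = np.argsort(boxes[:, 4])[::-1]
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        xx1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_b = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_b + area_r - inter + 1e-9)
        order = rest[iou < iou_thresh]
    return boxes[keep]

pred = np.random.rand(7, 7, 5)          # stand-in for a network's output tensor
print(nms(decode_grid_predictions(pred)).shape)
```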

  14. Terahertz imaging with compressed sensing and phase retrieval.

    PubMed

    Chan, Wai Lam; Moravec, Matthew L; Baraniuk, Richard G; Mittleman, Daniel M

    2008-05-01

    We describe a novel, high-speed pulsed terahertz (THz) Fourier imaging system based on compressed sensing (CS), a new signal processing theory, which allows image reconstruction with fewer samples than traditionally required. Using CS, we successfully reconstruct a 64 x 64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels, which defines the image in the Fourier plane, and observe improved reconstruction quality when we apply phase correction. For our chosen image, only about 12% of the pixels are required for reassembling the image. In combination with phase retrieval, our system has the capability to reconstruct images with only a small subset of Fourier amplitude measurements and thus has potential application in THz imaging with cw sources.
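
    A toy sketch of the underlying idea, not the authors' reconstruction code: recover an image from a random subset (about 12%) of its 2-D Fourier coefficients by alternating a data-consistency step with soft-thresholding. The synthetic scene, sampling rate, and threshold are assumed values.

```python
import numpy as np

def cs_fourier_recover(meas, mask, n_iter=200, lam=0.02):
    """meas: masked 2-D FFT of the object; mask: boolean sampling pattern."""
    x = np.zeros(mask.shape)
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X[mask] = meas[mask]                                 # enforce measured Fourier samples
        x = np.real(np.fft.ifft2(X))
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)    # sparsity-promoting shrinkage
    return x

# Synthetic 64 x 64 sparse scene sampled at roughly 12% of the Fourier plane
rng = np.random.default_rng(0)
obj = np.zeros((64, 64)); obj[20:25, 30:40] = 1.0
mask = rng.random(obj.shape) < 0.12
recovered = cs_fourier_recover(np.fft.fft2(obj) * mask, mask)
```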

  15. Information mining in remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Li, Jiang

    The volume of remotely sensed imagery continues to grow at an enormous rate due to advances in sensor technology, and our capability for collecting and storing images has greatly outpaced our ability to analyze and retrieve information from them. This motivates the development of image information mining techniques, an interdisciplinary endeavor drawing upon expertise in image processing, databases, information retrieval, machine learning, and software design. This dissertation proposes and implements an extensive remote sensing image information mining (ReSIM) system prototype for mining useful information implicitly stored in remote sensing imagery. The system consists of three modules: an image processing subsystem, a database subsystem, and a visualization and graphical user interface (GUI) subsystem. Land cover and land use (LCLU) information corresponding to spectral characteristics is identified by supervised classification based on support vector machines (SVM) with automatic model selection, while textural features that characterize spatial information are extracted using Gabor wavelet coefficients. Within LCLU categories, textural features are clustered using an optimized k-means clustering approach to obtain a search-efficient space. The clusters are stored in an object-oriented database (OODB) with associated images indexed in an image database (IDB). A k-nearest neighbor search is performed using a query-by-example (QBE) approach. Furthermore, an automatic parametric contour tracing algorithm and an O(n) time piecewise linear polygonal approximation (PLPA) algorithm are developed for shape information mining of interesting objects within the image. A fuzzy object-oriented database based on the fuzzy object-oriented data (FOOD) model is developed to handle fuzziness and uncertainty. Three specific applications are presented: integrated land cover and texture pattern mining, shape information mining for change detection of lakes, and fuzzy normalized difference vegetation index (NDVI) pattern mining. The study results show the effectiveness of the proposed system prototype and its potential for other applications in remote sensing.
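
    A brief, hedged illustration of the "SVM with automatic model selection" step only (none of this is the ReSIM code): cross-validated grid search over the RBF kernel parameters, shown here on a synthetic feature matrix standing in for spectral features.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 6))                    # placeholder: 6 spectral features per sample
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic two-class labels

# Automatic model selection: pick (C, gamma) by 5-fold cross-validation
param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.1, 0.01]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print("selected parameters:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```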

  16. Remote Sensing: The View from Above. Know Your Environment.

    ERIC Educational Resources Information Center

    Academy of Natural Sciences, Philadelphia, PA.

    This publication identifies some of the general concepts of remote sensing and explains the image collection process and computer-generated reconstruction of the data. Monitoring the ecological collapse in coral reefs, weather phenomena like El Nino/La Nina, and U.S. Space Shuttle-based sensing projects are some of the areas for which remote…

  17. Mapping of Polar Areas Based on High-Resolution Satellite Images: The Example of the Henryk Arctowski Polish Antarctic Station

    NASA Astrophysics Data System (ADS)

    Kurczyński, Zdzisław; Różycki, Sebastian; Bylina, Paweł

    2017-12-01

    To produce orthophotomaps or digital elevation models, the most commonly used method is photogrammetric measurement. However, the use of aerial images is not easy in polar regions for logistical reasons. In these areas, remote sensing data acquired from satellite systems are much more useful. This paper presents the basic technical requirements of different products that can be obtained (in particular orthoimages and digital elevation models (DEM)) using Very-High-Resolution Satellite (VHRS) images. The study area was situated in the vicinity of the Henryk Arctowski Polish Antarctic Station on the Western Shore of Admiralty Bay, King George Island, Western Antarctic. Image processing was applied to two triplets of images acquired by Pléiades 1A and 1B in March 2013. The results of generating orthoimages from the Pléiades systems without control points showed that the proposed method can achieve a Root Mean Squared Error (RMSE) of 3-9 m. The presented Pléiades images are useful for thematic remote sensing analysis and processing of measurements. Using satellite images to produce remote sensing products for polar regions is highly beneficial and reliable and compares well with more expensive airborne photographs or field surveys.

  18. Parallel algorithm of real-time infrared image restoration based on total variation theory

    NASA Astrophysics Data System (ADS)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but penalize too heavily the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. This converts the restoration process into an optimization problem involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model makes full use of the acquired remote sensing data and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can easily be parallelized. Therefore, a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive computation of image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis measuring the restored image quality relative to the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably and that its performance can meet the requirements of real-time image processing.
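
    A minimal numpy sketch of TV-L1 style restoration by smoothed gradient descent, for illustration only; the paper's multicore implementation is not reproduced, and the step size, weight, and smoothing constant below are assumptions. Each iteration consists of independent per-pixel stencil operations, which is why the scheme parallelizes naturally across threads.

```python
import numpy as np

def tv_l1_denoise(f, lam=1.0, dt=0.2, n_iter=100, eps=1e-3):
    """Gradient descent on a smoothed TV regularizer plus an L1 fidelity term."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                   # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        fid = (u - f) / np.sqrt((u - f) ** 2 + eps ** 2)  # smoothed sign(u - f)
        u += dt * (div - lam * fid)
    return u
```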

  19. Brazil's remote sensing activities in the Eighties

    NASA Technical Reports Server (NTRS)

    Raupp, M. A.; Pereiradacunha, R.; Novaes, R. A.

    1985-01-01

    Most of the remote sensing activities in Brazil have been conducted by the Institute for Space Research (INPE). This report briefly describes INPE's activities in remote sensing in recent years. INPE has been engaged in research (e.g., radiance studies), development (e.g., CCD scanners, image processing devices) and applications (e.g., crop survey, land use, mineral resources, etc.) of remote sensing. INPE is also responsible for the operation (data reception and processing) of the LANDSATs and meteorological satellites. Data acquisition activities include the development of a CCD camera to be deployed on board the space shuttle and the construction of a remote sensing satellite.

  20. Construction of an unmanned aerial vehicle remote sensing system for crop monitoring

    NASA Astrophysics Data System (ADS)

    Jeong, Seungtaek; Ko, Jonghan; Kim, Mijeong; Kim, Jongkwon

    2016-04-01

    We constructed a lightweight unmanned aerial vehicle (UAV) remote sensing system and determined the ideal method for equipment setup, image acquisition, and image processing. Fields of rice paddy (Oryza sativa cv. Unkwang) grown under three different nitrogen (N) treatments of 0, 50, or 115 kg/ha were monitored at Chonnam National University, Gwangju, Republic of Korea, in 2013. A multispectral camera was used to acquire UAV images from the study site. Atmospheric correction of these images was completed using the empirical line method, and three-point (black, gray, and white) calibration boards were used as pseudo references. Evaluation of our corrected UAV-based remote sensing data revealed that correction efficiency and root mean square errors ranged from 0.77 to 0.95 and 0.01 to 0.05, respectively. The time series maps of simulated normalized difference vegetation index (NDVI) produced using the UAV images reproduced field variations of NDVI reasonably well, both within and between the different N treatments. We concluded that the UAV-based remote sensing technology utilized in this study is potentially an easy and simple way to quantitatively obtain reliable two-dimensional remote sensing information on crop growth.
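
    A sketch of the empirical line correction and NDVI mapping described above, under stated assumptions: per-band gain and offset are fitted to the black/grey/white reference panels, then NDVI is computed from the corrected red and near-infrared bands. The panel reflectances and digital numbers are made-up illustrative values, not data from the study.

```python
import numpy as np

panel_reflectance = np.array([0.03, 0.30, 0.85])   # black, grey, white targets (assumed)
panel_dn_red = np.array([45.0, 410.0, 1105.0])     # observed DNs in the red band (assumed)
panel_dn_nir = np.array([60.0, 455.0, 1180.0])     # observed DNs in the NIR band (assumed)

# Empirical line method: linear fit of reflectance against digital number, per band
gain_r, offset_r = np.polyfit(panel_dn_red, panel_reflectance, 1)
gain_n, offset_n = np.polyfit(panel_dn_nir, panel_reflectance, 1)

def ndvi_map(dn_red, dn_nir):
    """NDVI from atmospherically corrected red and NIR digital-number arrays."""
    red = gain_r * dn_red + offset_r
    nir = gain_n * dn_nir + offset_n
    return (nir - red) / (nir + red + 1e-9)
```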

  1. Reliable clarity automatic-evaluation method for optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

    Image clarity, which reflects the degree of sharpness at the edges of objects in images, is an important quality evaluation index for optical remote sensing images. Much work has been done on the estimation of image clarity. At present, common clarity-estimation methods for digital images mainly include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge acutance methods. The frequency-domain function method is an accurate clarity measure, but its calculation process is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by image complexity. The edge acutance method is an effective approach for clarity estimation, but it requires the edges to be picked out manually. Due to these limits in accuracy, consistency, or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method based on the principle of the edge acutance algorithm is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically search for object edges in images, and the calculation of edge sharpness has been improved. The new method has been tested on several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity evaluation method for optical remote sensing images.
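
    A hedged sketch of an edge-acutance style clarity score, not the authors' exact algorithm: strong edges are located automatically with a Sobel operator, and the mean gradient magnitude across them, normalized by the image's dynamic range, serves as the clarity index. The percentile threshold is an assumption.

```python
import numpy as np
from scipy import ndimage

def clarity_score(img, edge_percentile=95):
    """Mean normalized gradient magnitude over automatically selected strong edges."""
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    grad = np.hypot(gx, gy)
    edges = grad >= np.percentile(grad, edge_percentile)  # keep only strong edge pixels
    return grad[edges].mean() / (img.max() - img.min() + 1e-9)
```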

  2. NASA Computational Case Study SAR Data Processing: Ground-Range Projection

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Rincon, Rafael

    2013-01-01

    Radar technology is used extensively by NASA for remote sensing of the Earth and other Planetary bodies. In this case study, we learn about different computational concepts for processing radar data. In particular, we learn how to correct a slanted radar image by projecting it on the surface that was sensed by a radar instrument.
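
    A simple flat-Earth version of that slant-to-ground projection (the platform height, near range, and pixel spacing are assumed inputs, and the near range must exceed the platform height for the geometry to be valid):

```python
import numpy as np

def slant_to_ground(slant_image, platform_height_m, slant_res_m, near_range_m):
    """Resample each range line from slant range onto a uniform ground-range grid."""
    n_rows, n_cols = slant_image.shape
    slant = near_range_m + np.arange(n_cols) * slant_res_m
    ground = np.sqrt(np.maximum(slant ** 2 - platform_height_m ** 2, 0.0))
    ground_uniform = np.linspace(ground[0], ground[-1], n_cols)
    out = np.empty_like(slant_image, dtype=float)
    for r in range(n_rows):
        out[r] = np.interp(ground_uniform, ground, slant_image[r].astype(float))
    return out
```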

  3. Self-Referenced Smartphone-Based Nanoplasmonic Imaging Platform for Colorimetric Biochemical Sensing.

    PubMed

    Wang, Xinhao; Chang, Te-Wei; Lin, Guohong; Gartia, Manas Ranjan; Liu, Gang Logan

    2017-01-03

    Colorimetric sensors usually suffer due to errors from variation in light source intensity, the type of light source, the Bayer filter algorithm, and the sensitivity of the camera to incoming light. Here, we demonstrate a self-referenced portable smartphone-based plasmonic sensing platform integrated with an internal reference sample along with an image processing method to perform colorimetric sensing. Two sensing principles based on unique nanoplasmonics enabled phenomena from a nanostructured plasmonic sensor, named as nanoLCA (nano Lycurgus cup array), were demonstrated here for colorimetric biochemical sensing: liquid refractive index sensing and optical absorbance enhancement sensing. Refractive indices of colorless liquids were measured by simple smartphone imaging and color analysis. Optical absorbance enhancement in the colorimetric biochemical assay was achieved by matching the plasmon resonance wavelength with the chromophore's absorbance peak wavelength. Such a sensing mechanism improved the limit of detection (LoD) by 100 times in a microplate reader format. Compared with a traditional colorimetric assay such as urine testing strips, a smartphone plasmon enhanced colorimetric sensing system provided 30 times improvement in the LoD. The platform was applied for simulated urine testing to precisely identify the samples with higher protein concentration, which showed potential point-of-care and early detection of kidney disease with the smartphone plasmonic resonance sensing system.

  4. Integrated solution for the complete remote sensing process - Earth Observation Mission Control Centre (EOMC2)

    NASA Astrophysics Data System (ADS)

    Czapski, Paweł

    2016-07-01

    We present the latest achievements of the Remote Sensing Division of the Institute of Aviation in the area of remote sensing, i.e. the project of an integrated solution for the whole remote sensing process, ranging from data acquisition to providing the end user with the required information. Currently, these tasks are partially performed by several centers in Poland, but there is no leader providing an integrated solution. Motivated by this fact, the Earth Observation Mission Control Centre (EOMC2) was established in the Remote Sensing Division of the Institute of Aviation to provide such a comprehensive approach. The establishment of EOMC2 can be compared with the creation of the Aerial and Satellite Data Centre (OPOLIS) at the Institute of Geodesy and Cartography in the mid-70s in Poland. OPOLIS was responsible for broadly defined data processing; it was a breakthrough innovation that initiated the use of aerial image analysis in Poland. An operations center is part of the project that will be created, which in comparison with competitors will provide better solutions, i.e.: • centralization of the acquisition, processing, publishing and archiving of data, • implementation of elements of the INSPIRE directive recommendations on spatial data management, • provision of information to the end user in near real time, • the ability to supply the system with images of various origins (aerial, satellite, e.g. EUMETCast, Sentinel, Landsat) and diverse telemetry data, data aggregation, and the application of the same algorithms to images obtained from different sources, • system reconfiguration and batch processing of large data sets at any time, • a wide range of potential applications: precision agriculture, environmental protection, crisis management and national security, and monitoring of aerial, small satellite and sounding rocket missions.

  5. Environmental analysis using integrated GIS and remotely sensed data - Some research needs and priorities

    NASA Technical Reports Server (NTRS)

    Davis, Frank W.; Quattrochi, Dale A.; Ridd, Merrill K.; Lam, Nina S.-N.; Walsh, Stephen J.

    1991-01-01

    This paper discusses some basic scientific issues and research needs in the joint processing of remotely sensed and GIS data for environmental analysis. Two general topics are treated in detail: (1) scale dependence of geographic data and the analysis of multiscale remotely sensed and GIS data, and (2) data transformations and information flow during data processing. The discussion of scale dependence focuses on the theory and applications of spatial autocorrelation, geostatistics, and fractals for characterizing and modeling spatial variation. Data transformations during processing are described within the larger framework of geographical analysis, encompassing sampling, cartography, remote sensing, and GIS. Development of better user interfaces between image processing, GIS, database management, and statistical software is needed to expedite research on these and other impediments to integrated analysis of remotely sensed and GIS data.

  6. Introduction to Remote Sensing Image Registration

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline

    2017-01-01

    For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration is defined by several steps, and each step can be approached by various methods which all present diverse advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data, and the accuracy that is required. This paper first presents a general overview of remote sensing image registration and then goes over a few specific methods and their applications.

  7. Geomorphic Processes and Remote Sensing Signatures of Alluvial Fans in the Kun Lun Mountains, China

    NASA Technical Reports Server (NTRS)

    Farr, Tom G.; Chadwick, Oliver A.

    1996-01-01

    The timing of alluvial deposition in arid and semiarid areas is tied to land-surface instability caused by regional climate changes. The distribution pattern of dated deposits provides maps of regional land-surface response to past climate change. Sensitivity to differences in surface roughness and composition makes remote sensing techniques useful for regional mapping of alluvial deposits. Radar images from the Spaceborne Radar Laboratory and visible wavelength images from the French SPOT satellite were used to determine remote sensing signatures of alluvial fan units for an area in the Kun Lun Mountains of northwestern China. These data were combined with field observations to compare surface processes and their effects on remote sensing signatures in northwestern China and the southwestern United States. Geomorphic processes affecting alluvial fans in the two areas include aeolian deposition, desert varnish, and fluvial dissection. However, salt weathering is a much more important process in the Kun Lun than in the southwestern United States. This slows the formation of desert varnish and prevents desert pavement from forming. Thus the Kun Lun signatures are characteristic of the dominance of salt weathering, while signatures from the southwestern United States are characteristic of the dominance of desert varnish and pavement processes. Remote sensing signatures are consistent enough in these two regions to be used for mapping fan units over large areas.

  8. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems typically acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with different characteristics, is present in all components. If the noise is intense, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has proven more efficient when there is substantial correlation between the multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images combined into a 3D data array influences filtering efficiency, and whether the observed tendencies can be exploited in processing images with a rather large number of channels.
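
    A simplified sketch of 3-D DCT-domain filtering of a multichannel cube (block-wise hard thresholding), meant only to illustrate the idea rather than reproduce the authors' filter; the block size, threshold factor, and known noise level are assumptions, and the image dimensions are assumed to be multiples of the block size.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct3_denoise(cube, sigma, block=8, k=2.7):
    """cube: (rows, cols, channels) array; sigma: assumed additive noise standard deviation."""
    out = np.empty_like(cube, dtype=float)
    thr = k * sigma
    for y in range(0, cube.shape[0], block):
        for x in range(0, cube.shape[1], block):
            patch = cube[y:y + block, x:x + block, :].astype(float)
            coef = dctn(patch, norm="ortho")
            coef[np.abs(coef) < thr] = 0.0          # hard-threshold the 3-D spectrum
            out[y:y + block, x:x + block, :] = idctn(coef, norm="ortho")
    return out
```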

  9. [Investigation on remote measurement of air pollution by a method of infrared passive scanning imaging].

    PubMed

    Jiao, Yang; Xu, Liang; Gao, Min-Guang; Feng, Ming-Chun; Jin, Ling; Tong, Jing-Jing; Li, Sheng

    2012-07-01

    Passive remote sensing by Fourier-transform infrared (FTIR) spectrometry allows detection of air pollution. However, for the localization of a leak and a complete assessment of the situation in the case of the release of a hazardous cloud, information about the position and distribution of the cloud is essential. Therefore, an imaging passive remote sensing system comprising an interferometer, data acquisition and processing software, a scanning system, a video system, and a personal computer has been developed. Remote sensing of SF6 was performed. The column densities for all directions in which a target compound has been identified can be retrieved by a nonlinear least squares fitting algorithm and a radiative transfer algorithm, and a false color image is displayed. The results were visualized as a video image overlaid with a false color concentration distribution image. The system has high selectivity and allows visualization and quantification of pollutant clouds.

  10. Realizing parameterless automatic classification of remote sensing imagery using ontology engineering and cyberinfrastructure techniques

    NASA Astrophysics Data System (ADS)

    Sun, Ziheng; Fang, Hui; Di, Liping; Yue, Peng

    2016-09-01

    Fully automatic image classification without any input parameter values has long been an unattainable goal for remote sensing experts, who usually spend hours tuning the input parameters of classification algorithms in order to obtain the best results. With the rapid development of knowledge engineering and cyberinfrastructure, many data processing and knowledge reasoning capabilities have become accessible online, shareable, and interoperable. Based on these recent improvements, this paper presents the idea of parameterless automatic classification, which only requires an image and automatically outputs a labeled vector; no parameters or operations are needed from end consumers. An approach is proposed to realize the idea. It adopts an ontology database to store the experience of tuning values for classifiers. A sample database is used to record training samples of image segments. Geoprocessing Web services are used as functionality blocks to perform basic classification steps. Workflow technology is used to turn the overall image classification into a fully automatic process. A Web-based prototype system named PACS (Parameterless Automatic Classification System) is implemented. A number of images were fed into the system for evaluation. The results show that the approach can automatically classify remote sensing images with fairly good average accuracy, and indicate that the classified results will be more accurate if the two databases are of higher quality. Once the databases accumulate as much experience and as many samples as an expert has, the approach should produce results of similar quality to those of a human expert. Since the approach is fully automatic and parameterless, it not only relieves remote sensing workers of heavy and time-consuming parameter tuning, but also significantly shortens the waiting time for consumers and makes it easier for them to engage in image classification activities. Currently, the approach is used only on high resolution optical three-band remote sensing imagery. The feasibility of using the approach on other kinds of remote sensing images, or of involving additional bands in classification, will be studied in future work.

  11. Flood mapping from Sentinel-1 and Landsat-8 data: a case study from river Evros, Greece

    NASA Astrophysics Data System (ADS)

    Kyriou, Aggeliki; Nikolakopoulos, Konstantinos

    2015-10-01

    Floods are sudden and temporary natural events affecting areas that are not normally covered by water. Floods significantly influence both society and the natural environment, so flood mapping is crucial. Remote sensing data can be used to develop flood maps in an efficient and effective way. This work focuses on the expansion of water bodies overtopping the natural levees of the river Evros and flooding the surrounding areas. Different flood-mapping techniques were applied to data from active and passive remote sensing sensors, namely Sentinel-1 and Landsat-8 respectively. Spaceborne image pairs obtained from Sentinel-1 were processed in this study. Each pair included an image acquired during the flood, called the "crisis image", and one acquired before the event, called the "archive image". Both images, covering the same area, were processed to produce a map showing the spread of the flood. Multispectral data from Landsat-8 were also processed in order to detect and map the flooded areas. Different image processing techniques were applied, and the results were compared with the respective results of the radar data processing.
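
    One common way to derive such a flood map from a co-registered SAR pair is log-ratio change detection, sketched below under assumed inputs and threshold (calm open water backscatters weakly, so newly flooded pixels show a strong drop); this is an illustration, not the study's exact workflow.

```python
import numpy as np

def flood_mask(crisis_amplitude, archive_amplitude, ratio_db_thresh=-3.0):
    """Flag pixels whose backscatter dropped sharply between the two acquisitions."""
    ratio_db = 10.0 * np.log10((crisis_amplitude + 1e-6) / (archive_amplitude + 1e-6))
    return ratio_db < ratio_db_thresh
```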

  12. Maritime Domain Awareness: C4I for the 1000 Ship Navy

    DTIC Science & Technology

    2009-12-04

    unit action, provide unit sensed contacts, coordinate unit operations, process unit information, release image, and release contact report (Figure 33)... Intelligence Tasking Request, Intelligence Summary, Release Unit Person Incident, Release Unit Vessel Incident, Process Intelligence Tasking, Release Image... List of figures: Figure 1, Functional Problem Sequence Process Flow; Figure 2, United

  13. Information Acquisition, Analysis and Integration

    DTIC Science & Technology

    2016-08-03

    of sensing and processing, theory, applications, signal processing, image and video processing, machine learning, technology transfer... learning. Solved elegantly old problems like image and video deblurring, introducing new revolutionary approaches... Polatkan, G. Sapiro, D. Blei, D. B. Dunson, and L. Carin, "Deep learning with hierarchical convolution factor analysis," IEEE

  14. Characterizing canopy biochemistry from imaging spectroscopy and its application to ecosystem studies

    USGS Publications Warehouse

    Kokaly, R.F.; Asner, Gregory P.; Ollinger, S.V.; Martin, M.E.; Wessman, C.A.

    2009-01-01

    For two decades, remotely sensed data from imaging spectrometers have been used to estimate non-pigment biochemical constituents of vegetation, including water, nitrogen, cellulose, and lignin. This interest has been motivated by the important role that these substances play in physiological processes such as photosynthesis, their relationships with ecosystem processes such as litter decomposition and nutrient cycling, and their use in identifying key plant species and functional groups. This paper reviews three areas of research to improve the application of imaging spectrometers to quantify non-pigment biochemical constituents of plants. First, we examine recent empirical and modeling studies that have advanced our understanding of leaf and canopy reflectance spectra in relation to plant biochemistry. Next, we present recent examples of how spectroscopic remote sensing methods are applied to characterize vegetation canopies, communities and ecosystems. Third, we highlight the latest developments in using imaging spectrometer data to quantify net primary production (NPP) over large geographic areas. Finally, we discuss the major challenges in quantifying non-pigment biochemical constituents of plant canopies from remotely sensed spectra.

  15. Remote sensing for grassland management in the arid Southwest

    USGS Publications Warehouse

    Marsett, R.C.; Qi, J.; Heilman, P.; Biedenbender, S.H.; Watson, M.C.; Amer, S.; Weltz, M.; Goodrich, D.; Marsett, R.

    2006-01-01

    We surveyed a group of rangeland managers in the Southwest about vegetation monitoring needs on grassland. Based on their responses, the objective of the RANGES (Rangeland Analysis Utilizing Geospatial Information Science) project was defined to be the accurate conversion of remotely sensed data (satellite imagery) to quantitative estimates of total (green and senescent) standing cover and biomass on grasslands and semidesert grasslands. Although remote sensing has been used to estimate green vegetation cover, in arid grasslands herbaceous vegetation is senescent much of the year and is not detected by current remote sensing techniques. We developed a ground truth protocol compatible with both range management requirements and Landsat's 30 m resolution imagery. The resulting ground-truth data were then used to develop image processing algorithms that quantified total herbaceous vegetation cover, height, and biomass. Cover was calculated based on a newly developed Soil Adjusted Total Vegetation Index (SATVI), and height and biomass were estimated based on reflectance in the near infrared (NIR) band. Comparison of the remotely sensed estimates with independent ground measurements produced r2 values of 0.80, 0.85, and 0.77 and Nash Sutcliffe values of 0.78, 0.70, and 0.77 for the cover, plant height, and biomass, respectively. The approach for estimating plant height and biomass did not work for sites where forbs comprised more than 30% of total vegetative cover. The ground reconnaissance protocol and image processing techniques together offer land managers accurate and timely methods for monitoring extensive grasslands. The time-consuming requirement to collect concurrent data in the field for each image implies a need to share the high fixed costs of processing an image across multiple users to reduce the costs for individual rangeland managers.
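
    For reference, the Soil Adjusted Total Vegetation Index in its commonly cited form uses the red and two shortwave-infrared reflectances (Landsat TM bands 3, 5, and 7) with a soil adjustment factor L; treat the exact expression below as an assumption to verify against the paper rather than a definitive statement of it.

```python
def satvi(red, swir1, swir2, L=0.5):
    """Soil Adjusted Total Vegetation Index (commonly cited form, reflectance inputs)."""
    return (swir1 - red) / (swir1 + red + L) * (1.0 + L) - swir2 / 2.0
```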

  16. Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth

    NASA Astrophysics Data System (ADS)

    Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana

    2017-10-01

    In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. An important role in the solution of this problem is played by classes of functions that were introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas; these classes contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of the generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth makes it possible to receive information of medium and high spatial resolution from spacecraft and to conduct hyperspectral measurements; spacecraft carry tens or hundreds of spectral channels. To process the images, discrete orthogonal transforms, namely wavelet transforms, were used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular generalized Haar wavelet transforms. General methods of research: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images are used. Scientific novelty: the task of processing archival satellite images, in particular signal filtering, was investigated from the point of view of an ill-posed problem, and the regularization parameters for discrete orthogonal transformations were determined.

  17. Using deep learning in image hyper spectral segmentation, classification, and detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Su, Zhenyu

    2018-02-01

    Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land cover classification, vehicle detection in satellite images, and hyperspectral image classification. This paper addresses the use of deep artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. Remote sensing images often exhibit large hue differences, which results in poor display of the images in a VR environment. Image segmentation is a preprocessing technique applied to the original images that splits an image into many parts of different hue so that the color can be unified. Several computational models based on supervised, unsupervised, parametric, probabilistic, and region-based image segmentation techniques have been proposed. Recently, the machine learning technique known as deep learning with convolutional neural networks has been widely used for the development of efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than traditional image segmentation strategies.

  18. Ship detection using STFT sea background statistical modeling for large-scale oceansat remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan

    2018-03-01

    Large-scale oceansat remote sensing images cover a large sea surface area whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the large-scale image is divided into small sub-blocks, and the 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and a clear difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on this 2-D STFT spectrum modeling is proposed. Experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low miss rate.
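
    A hedged sketch of the general approach, not the paper's detector: compute a local 2-D spectrum per sub-block, suppress the low-frequency sea-swell content, and flag blocks whose residual spectral energy deviates strongly from the scene-wide background statistic. The block size and deviation factor are assumptions.

```python
import numpy as np

def block_highfreq_energy(img, block=64):
    """High-frequency spectral energy of each non-overlapping sub-block."""
    rows, cols = img.shape[0] // block, img.shape[1] // block
    energy = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            tile = img[i * block:(i + 1) * block, j * block:(j + 1) * block].astype(float)
            spec = np.abs(np.fft.fftshift(np.fft.fft2(tile - tile.mean())))
            c = block // 2
            spec[c - 4:c + 4, c - 4:c + 4] = 0.0   # suppress low-frequency sea background
            energy[i, j] = spec.sum()
    return energy

def candidate_ship_blocks(img, block=64, k=3.0):
    e = block_highfreq_energy(img, block)
    return np.argwhere(e > np.median(e) + k * e.std())   # (row, col) block indices
```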

  19. An efficient cloud detection method for high resolution remote sensing panchromatic imagery

    NASA Astrophysics Data System (ADS)

    Li, Chaowei; Lin, Zaiping; Deng, Xinpu

    2018-04-01

    In order to increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. The method includes three main steps. First, an adaptive intensity threshold combined with a median filter is adopted to extract coarse cloud regions. Second, guided filtering is conducted to strengthen differences in textural features, and texture detection is then performed via a gray-level co-occurrence matrix computed on the resulting texture detail image. Finally, candidate cloud regions are extracted as the intersection of the two coarse cloud regions above, and an adaptive morphological dilation is further applied to refine them for thin clouds at boundaries. The experimental results demonstrate the effectiveness of the proposed method.
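
    A partial, illustrative sketch covering only the coarse-detection and morphological refinement steps (the guided-filter and gray-level co-occurrence matrix texture stage is omitted); the threshold factor and structuring sizes are assumptions.

```python
import numpy as np
from scipy import ndimage

def coarse_cloud_mask(pan, k=1.0, median_size=5, dilate_iter=2):
    """Adaptive intensity threshold on a median-filtered panchromatic band, then dilation."""
    img = ndimage.median_filter(pan.astype(float), size=median_size)
    mask = img > img.mean() + k * img.std()
    return ndimage.binary_dilation(mask, iterations=dilate_iter)
```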

  20. Remote Sensing Image Quality Assessment Experiment with Post-Processing

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.

    2018-04-01

    This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image processing input are produced by this imaging system with those parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processing steps: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just noticeable difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of the different imaging parameters and post-processing on image quality can be determined. The six JND subjective assessment experiments validate each other. The main conclusions are: image post-processing can improve image quality, even with lossy compression, although image quality at higher compression ratios improves less than at lower ratios; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.

  1. Advanced Image Processing for NASA Applications

    NASA Technical Reports Server (NTRS)

    LeMoign, Jacqueline

    2007-01-01

    The future of space exploration will involve cooperating fleets of spacecraft or sensor webs geared towards coordinated and optimal observation of Earth Science phenomena. The main advantage of such systems is to utilize multiple viewing angles as well as multiple spatial and spectral resolutions of sensors carried on multiple spacecraft but acting collaboratively as a single system. Within this framework, our research focuses on all areas related to sensing in collaborative environments, which means systems utilizing intracommunicating spatially distributed sensor pods or crafts being deployed to monitor or explore different environments. This talk will describe the general concept of sensing in collaborative environments, will give a brief overview of several technologies developed at NASA Goddard Space Flight Center in this area, and then will concentrate on specific image processing research related to that domain, specifically image registration and image fusion.

  2. Satellite land remote sensing advancements for the eighties; Proceedings of the Eighth Pecora Symposium, Sioux Falls, SD, October 4-7, 1983

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Among the topics discussed are NASA's land remote sensing plans for the 1980s, the evolution of Landsat 4 and the performance of its sensors, the Landsat 4 thematic mapper image processing system radiometric and geometric characteristics, data quality, image data radiometric analysis and spectral/stratigraphic analysis, and thematic mapper agricultural, forest resource and geological applications. Also covered are geologic applications of side-looking airborne radar, digital image processing, the large format camera, the RADARSAT program, the SPOT 1 system's program status, distribution plans, and simulation program, Space Shuttle multispectral linear array studies of the optical and biological properties of terrestrial land cover, orbital surveys of solar-stimulated luminescence, the Space Shuttle imaging radar research facility, and Space Shuttle-based polar ice sounding altimetry.

  3. Carbon nanotube thin film strain sensor models assembled using nano- and micro-scale imaging

    NASA Astrophysics Data System (ADS)

    Lee, Bo Mi; Loh, Kenneth J.; Yang, Yuan-Sen

    2017-07-01

    Nanomaterial-based thin films, particularly those based on carbon nanotubes (CNT), have brought forth tremendous opportunities for designing next-generation strain sensors. However, their strain sensing properties can vary depending on fabrication method, post-processing treatment, and types of CNTs and polymers employed. The objective of this study was to derive a CNT-based thin film strain sensor model using inputs from nano-/micro-scale experimental measurements of nanotube physical properties. This study began with fabricating ultra-low-concentration CNT-polymer thin films, followed by imaging them using atomic force microscopy. Image processing was employed for characterizing CNT dispersed shapes, lengths, and other physical attributes, and results were used for building five different types of thin film percolation-based models. Numerical simulations were conducted to assess how the morphology of dispersed CNTs in its 2D matrix affected bulk film electrical and electromechanical (strain sensing) properties. The simulation results showed that CNT morphology had a significant impact on strain sensing performance.

  4. Combining remote sensing image with DEM to identify ancient Minqin Oasis, northwest of China

    NASA Astrophysics Data System (ADS)

    Xie, Yaowen

    2008-10-01

    The development and desertification of the Minqin oasis is representative of the whole arid area of northwest China. Combining remote sensing imagery with a Digital Elevation Model (DEM) can produce a three-dimensional image of the research area that highlights the spatial context of historical-geographical phenomena, providing the conditions for extracting and analyzing historical geographical information thoroughly. This research reconstructs the ancient artificial oasis based on three-dimensional images produced from TM digital remote sensing imagery and a DEM created from 1:100000 topographic maps. The results indicate that the total area of the ancient artificial oasis in the Minqin Basin over the whole historical period reaches 321 km2, in the form of discontinuous patches separated on the two banks of the ancient Shiyang River and its branches, namely the Xishawo area west of the modern Minqin Basin and the Zhongshawo area in the center of the oasis. Except for a small part of the ancient oasis continuously used by later people, most of it became desert. The combination of digital remote sensing imagery and DEM integrates the advantages of both in identifying the ancient oasis and greatly improves interpretation accuracy.

  5. Integrated analysis of remote sensing products from basic geological surveys. [Brazil

    NASA Technical Reports Server (NTRS)

    Dasilvafagundesfilho, E. (Principal Investigator)

    1984-01-01

    Recent advances in remote sensing have led to the development of several techniques for obtaining image information. These techniques are analyzed as effective tools in geological mapping. A strategy for optimizing the images in basic geological surveying is presented. It embraces an integrated analysis of spatial, spectral, and temporal data through photoptic (color additive viewer) and computer processing at different scales, allowing large areas to be surveyed in a fast, precise, and low-cost manner.

  6. In-flight edge response measurements for high-spatial-resolution remote sensing systems

    NASA Astrophysics Data System (ADS)

    Blonski, Slawomir; Pagnutti, Mary A.; Ryan, Robert; Zanoni, Vickie

    2002-09-01

    In-flight measurements of spatial resolution were conducted as part of the NASA Scientific Data Purchase Verification and Validation process. Characterization included remote sensing image products with ground sample distance of 1 meter or less, such as those acquired with the panchromatic imager onboard the IKONOS satellite and the airborne ADAR System 5500 multispectral instrument. Final image products were used to evaluate the effects of both the image acquisition system and image post-processing. Spatial resolution was characterized by full width at half maximum of an edge-response-derived line spread function. The edge responses were analyzed using the tilted-edge technique that overcomes the spatial sampling limitations of the digital imaging systems. As an enhancement to existing algorithms, the slope of the edge response and the orientation of the edge target were determined by a single computational process. Adjacent black and white square panels, either painted on a flat surface or deployed as tarps, formed the ground-based edge targets used in the tests. Orientation of the deployable tarps was optimized beforehand, based on simulations of the imaging system. The effects of such factors as acquisition geometry, temporal variability, Modulation Transfer Function compensation, and ground sample distance on spatial resolution were investigated.
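
    A coarse sketch of the final measurement, assuming the tilted-edge resampling has already produced an oversampled edge profile: differentiate the edge response to get the line spread function and count the samples above half maximum.

```python
import numpy as np

def fwhm_from_edge(edge_profile, sample_spacing=1.0):
    """Approximate FWHM of the line spread function derived from an edge response."""
    lsf = np.abs(np.gradient(edge_profile.astype(float), sample_spacing))
    lsf /= lsf.max()
    # crude discrete estimate; a finer-sampled edge profile gives a finer estimate
    return np.count_nonzero(lsf >= 0.5) * sample_spacing
```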

  7. Architectures and algorithms for digital image processing; Proceedings of the Meeting, Cannes, France, December 5, 6, 1985

    NASA Technical Reports Server (NTRS)

    Duff, Michael J. B. (Editor); Siegel, Howard J. (Editor); Corbett, Francis J. (Editor)

    1986-01-01

    The conference presents papers on the architectures, algorithms, and applications of image processing. Particular attention is given to a very large scale integration system for image reconstruction from projections, a prebuffer algorithm for instant display of volume data, and an adaptive image sequence filtering scheme based on motion detection. Papers are also presented on a simple, direct practical method of sensing local motion and analyzing local optical flow, image matching techniques, and an automated biological dosimetry system.

  8. Real-time image mosaicing for medical applications.

    PubMed

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.

  9. Understanding Appearance-Enhancing Drug Use in Sport Using an Enactive Approach to Body Image

    PubMed Central

    Hauw, Denis; Bilard, Jean

    2017-01-01

    From an enactive approach to human activity, we suggest that the use of appearance-enhancing drugs is better explained by the sense-making related to body image rather than the cognitive evaluation of social norms about appearance and consequent psychopathology-oriented approach. After reviewing the main psychological disorders thought to link body image issues to the use of appearance-enhancing substances, we sketch a flexible, dynamic and embedded account of body image defined as the individual’s propensity to act and experience in specific situations. We show how this enacted body image is a complex process of sense-making that people engage in when they are trying to adapt to specific situations. These adaptations of the enacted body image require effort, perseverance and time, and therefore any substance that accelerates this process appears to be an easy and attractive solution. In this enactive account of body image, we underline that the link between the enacted body image and substance use is also anchored in the history of the body’s previous interactions with the world. This emerges during periods of upheaval and hardship, especially in a context where athletes experience weak participatory sense-making in a sport community. We conclude by suggesting prevention and intervention designs that would promote a safe instrumental use of the body in sports and psychological helping procedures for athletes experiencing difficulties with substances use and body image. PMID:29238320

  10. Image processing developments and applications for water quality monitoring and trophic state determination

    NASA Technical Reports Server (NTRS)

    Blackwell, R. J.

    1982-01-01

    Remote sensing data analysis for water quality monitoring is evaluated. Data analysis and image processing techniques are applied to LANDSAT remote sensing data to produce an effective operational tool for lake water quality surveying and monitoring. Digital image processing and analysis techniques were designed, developed, tested, and applied to LANDSAT multispectral scanner (MSS) data and conventional surface-acquired data. Utilization of these techniques facilitates the surveying and monitoring of large numbers of lakes in an operational manner. Supervised multispectral classification, when used in conjunction with surface-acquired water quality indicators, is used to characterize water body trophic status. Unsupervised multispectral classification, when interpreted by lake scientists familiar with a specific water body, yields classifications of equal validity with supervised methods and in a more cost-effective manner. Image data base technology is used to great advantage in characterizing other effects contributing to water quality, including drainage basin configuration, terrain slope, soil, precipitation, and land cover characteristics.

  11. Visual Sensing for Urban Flood Monitoring

    PubMed Central

    Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han

    2015-01-01

    With the increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras have the ability to sense and analyze the local situation of flood events. This can solve the current problem that image-based flood monitoring heavily relies on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way for determining the water fluctuation and measuring its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status, and therefore, it can serve as an active flood warning system. PMID:26287201

  12. Mapping and modeling the urban landscape in Bangkok, Thailand: Physical-spectral-spatial relations of population-environmental interactions

    NASA Astrophysics Data System (ADS)

    Shao, Yang

    This research focuses on the application of remote sensing, geographic information systems, statistical modeling, and spatial analysis to examine the dynamics of urban land cover, urban structure, and population-environment interactions in Bangkok, Thailand, with an emphasis on rural-to-urban migration from rural Nang Rong District, Northeast Thailand to the primate city of Bangkok. The dissertation consists of four main sections: (1) development of remote sensing image classification and change-detection methods for characterizing imperviousness for Bangkok, Thailand from 1993-2002; (2) development of 3-D urban mapping methods, using high spatial resolution IKONOS satellite images, to assess high-rises and other urban structures; (3) assessment of urban spatial structure from 2-D and 3-D perspectives; and (4) an analysis of the spatial clustering of migrants from Nang Rong District in Bangkok and the neighborhood environments of migrants' locations. Techniques are developed to improve the accuracy of the neural network classification approach for the analysis of remote sensing data, with an emphasis on the spectral unmixing problem. The 3-D building heights are derived using the shadow information on the high-resolution IKONOS image. The results from the 2-D and 3-D mapping are further examined to assess urban structure and urban feature identification. This research contributes to image processing of remotely-sensed images and urban studies. The rural-urban migration process and migrants' settlement patterns are examined using spatial statistics, GIS, and remote sensing perspectives. The results show that migrants' spatial clustering in urban space is associated with the source village and a number of socio-demographic variables. In addition, the migrants' neighborhood environments in urban setting are modeled using a set of geographic and socio-demographic variables, and the results are scale-dependent.

  13. An infrared-visible image fusion scheme based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2015-05-01

    Image fusion, a current research hotspot in the field of infrared computer vision, has been developed using a variety of methods. Traditional image fusion algorithms tend to introduce problems such as data storage shortages and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and still reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, an adaptive regional energy weighting rule is utilized, so only the high-frequency coefficients are specially measured. Sparse representation and random projection are used to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block are fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Finally, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets, and extracts more useful information.

  14. Basic research planning in mathematical pattern recognition and image analysis

    NASA Technical Reports Server (NTRS)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene interference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.

  15. A Unified Approach to Passive and Active Ocean Acoustic Waveguide Remote Sensing

    DTIC Science & Technology

    2012-09-30

    acoustic sensing reveals humpback whale behavior synchronous with herring spawning processes and sonar had no effect on humpback song," submitted to...source and receiver arrays to enable instantaneous continental-shelf scale imaging and continuous monitoring of fish and whale populations. Acoustic...Preliminary analysis shows that humpback whale behavior is synchronous with peak annual Atlantic herring spawning processes in the Gulf of

  16. Fabricating a hybrid imaging device having non-destructive sense nodes

    NASA Technical Reports Server (NTRS)

    Wadsworth, Mark (Inventor); Atlas, Gene (Inventor)

    2001-01-01

    A hybrid detector or imager includes two substrates fabricated under incompatible processes. An array of detectors, such as charge-coupled devices, is formed on the first substrate using a CCD fabrication process, such as a buried channel or peristaltic process. One or more charge-converting amplifiers are formed on a second substrate using a CMOS fabrication process. The two substrates are then bonded together to form a hybrid detector.

  17. Single-Image Super Resolution for Multispectral Remote Sensing Data Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Liebel, L.; Körner, M.

    2016-06-01

    In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations such as segmentation or feature extraction can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques such as convolutional neural networks (CNN), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable with conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In our experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
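
    As a rough illustration of the kind of network the abstract refers to, the following sketch defines a three-layer SRCNN-style model in PyTorch; the layer sizes, kernel sizes and the 13-band input are illustrative assumptions, not the authors' configuration.

        import torch
        import torch.nn as nn

        class SRCNN(nn.Module):
            """Three-layer SRCNN-style network: feature extraction, non-linear mapping, reconstruction."""
            def __init__(self, bands=13):  # 13 bands chosen to mirror Sentinel-2; purely illustrative
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(bands, 64, kernel_size=9, padding=4), nn.ReLU(),
                    nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
                    nn.Conv2d(32, bands, kernel_size=5, padding=2),
                )

            def forward(self, x):
                # x is the bicubically upsampled low-resolution image; the network refines it.
                return self.net(x)

        model = SRCNN()
        refined = model(torch.rand(1, 13, 128, 128))  # placeholder input tensor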

  18. Modeling, simulation, and analysis of optical remote sensing systems

    NASA Technical Reports Server (NTRS)

    Kerekes, John Paul; Landgrebe, David A.

    1989-01-01

    Remote sensing of the Earth's resources from space-based sensors has evolved in the past 20 years from a scientific experiment to a commonly used technological tool. The scientific applications and engineering aspects of remote sensing systems have been studied extensively. However, most of these studies have been aimed at understanding individual aspects of the remote sensing process, while relatively few have studied their interrelations. A motivation for studying these interrelationships has arisen with the advent of highly sophisticated configurable sensors as part of the Earth Observing System (EOS) proposed by NASA for the 1990s. Two approaches to investigating remote sensing systems are developed. In one approach, detailed models of the scene, the sensor, and the processing aspects of the system are implemented in a discrete simulation. This approach is useful in creating simulated images with desired characteristics for use in sensor or processing algorithm development. A less complete, but computationally simpler, method based on a parametric model of the system is also developed. In this analytical model the various informational classes are parameterized by their spectral mean vector and covariance matrix. These class statistics are modified by models for the atmosphere, the sensor, and the processing algorithms, and an estimate is made of the resulting classification accuracy among the informational classes. Application of these models is made to the study of the proposed High Resolution Imaging Spectrometer (HRIS). The interrelationships among observational conditions, sensor effects, and processing choices are investigated with several interesting results.

  19. Design and fabrication of a 1-DOF drive mode and 2-DOF sense mode micro-gyroscope using SU-8 based UV-LIGA process

    NASA Astrophysics Data System (ADS)

    Verma, Payal; Juneja, Sucheta; Savelyev, Dmitry A.; Khonina, Svetlana N.; Gopal, Ram

    2016-04-01

    This paper presents the design and fabrication of a 1-DOF (degree-of-freedom) drive mode and 2-DOF sense mode micro-gyroscope. It is an inherently robust structure and offers a high sense-frequency bandwidth. The proposed design utilizes resonance of the 1-DOF drive mode oscillator and employs the dynamic amplification concept in the sense modes to increase the sensitivity while maintaining robustness. The 2-DOF sense direction renders the device immune to process imperfections and environmental effects. The design is simulated using FEA software (CoventorWare®). The device is designed for process compatibility with the SU-8 based UV-LIGA process, which is an economical fabrication technique. The complete fabrication process is presented along with SEM images of the fabricated device. The device has a 9 µm thick nickel layer as the key structural layer, with a compact overall key structure size of 2.2 mm by 2.1 mm.

  20. Sea-land segmentation for infrared remote sensing images based on superpixels and multi-scale features

    NASA Astrophysics Data System (ADS)

    Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei

    2018-06-01

    Sea-land segmentation is a key step in the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images, based on superpixels and multi-scale features, to tackle this problem. Considering the connectivity and local similarity of sea and land, we interpret the sea-land segmentation task in terms of superpixels rather than pixels, so that similar pixels are clustered and the local similarity is exploited. Moreover, the multi-scale features are elaborately designed, comprising a gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method obtains more accurate and more robust sea-land segmentation results than traditional algorithms.
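
    A minimal sketch of superpixel-based sea-land labelling is given below; it assumes a single-band infrared image scaled to [0, 1], uses SLIC superpixels from scikit-image, and lets the mean gray level of each superpixel stand in for the paper's multi-scale features, with an illustrative threshold.

        import numpy as np
        from skimage.segmentation import slic

        def sea_land_mask(infrared, n_segments=800, threshold=0.35):
            """Cluster pixels into superpixels, then label each superpixel from a simple gray-level feature."""
            segments = slic(infrared.astype(float), n_segments=n_segments,
                            compactness=0.1, channel_axis=None)  # channel_axis=None marks a single-band image
            mask = np.zeros(infrared.shape, dtype=bool)
            for label in np.unique(segments):
                region = segments == label
                mask[region] = infrared[region].mean() > threshold  # True = land, False = sea
            return mask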

  1. Digital holographic image fusion for a larger size object using compressive sensing

    NASA Astrophysics Data System (ADS)

    Tian, Qiuhong; Yan, Liping; Chen, Benyong; Yao, Jiabao; Zhang, Shihua

    2017-05-01

    Digital holographic image fusion for a larger size object using compressive sensing is proposed. In this method, the high frequency component of the digital hologram under the discrete wavelet transform is represented sparsely using compressive sensing, so that the data redundancy of digital holographic recording can be reduced effectively; the low frequency component is retained in full to ensure the image quality; and multiple reconstructed images with different clear parts, corresponding to the laser spot size, are fused to realize a high quality reconstructed image of a larger size object. In addition, a filter combining high-pass and low-pass filters is designed to remove the zero-order term from the digital hologram effectively. A digital holographic experimental setup based on off-axis Fresnel digital holography was constructed, and feasibility and comparison experiments were carried out. The fused image was evaluated using the Tamura texture features. The experimental results demonstrate that the proposed method can improve the processing efficiency and visual characteristics of the fused image and effectively enlarge the size of the measured object.

  2. A data mining based approach to predict spatiotemporal changes in satellite images

    NASA Astrophysics Data System (ADS)

    Boulila, W.; Farah, I. R.; Ettabaa, K. Saheb; Solaiman, B.; Ghézala, H. Ben

    2011-06-01

    The interpretation of remotely sensed images in a spatiotemporal context is becoming a valuable research topic. However, the constant growth of data volume in remote sensing imaging makes reaching conclusions based on the collected data a challenging task. Recently, data mining has emerged as a promising research field, leading to several interesting discoveries in areas such as marketing, surveillance, fraud detection and scientific discovery. By integrating data mining and image interpretation techniques, accurate and relevant information (i.e. the functional relation between observed parcels and a set of informational contents) can be automatically elicited. This study presents a new approach to predict spatiotemporal changes in satellite image databases. The proposed method exploits fuzzy sets and data mining concepts to build predictions and decisions for several remote sensing fields. It takes into account imperfections related to the spatiotemporal mining process in order to provide more accurate and reliable information about land cover changes in satellite images. The proposed approach is validated using SPOT images representing the Saint-Denis region, capital of Reunion Island. Results show good performance of the proposed framework in predicting change in the urban zone.

  3. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems II. Extension to the thermal infrared: equations and methods

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Lomheim, Terrence S.; Florio, Christopher J.; Harbold, Jeffrey M.; Muto, B. Michael; Schoolar, Richard B.; Wintz, Daniel T.; Keller, Robert A.

    2011-10-01

    In a previous paper in this series, we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) tool may be used to model space and airborne imaging systems operating in the visible to near-infrared (VISNIR). PICASSO is a systems-level tool, representative of a class of such tools used throughout the remote sensing community. It is capable of modeling systems over a wide range of fidelity, anywhere from the conceptual design level (where it can serve as an integral part of the systems engineering process) to as-built hardware (where it can serve as part of the verification process). In the present paper, we extend the discussion of PICASSO to the modeling of Thermal Infrared (TIR) remote sensing systems, presenting the equations and methods necessary for modeling in that regime.

  4. Improving the analysis of biogeochemical patterns associated with internal waves in the strait of Gibraltar using remote sensing images

    NASA Astrophysics Data System (ADS)

    Navarro, Gabriel; Vicent, Jorge; Caballero, Isabel; Gómez-Enri, Jesús; Morris, Edward P.; Sabater, Neus; Macías, Diego; Bolado-Penagos, Marina; Gomiz, Juan Jesús; Bruno, Miguel; Caldeira, Rui; Vázquez, Águeda

    2018-05-01

    High Amplitude Internal Waves (HAIWs) are physical processes observed in the Strait of Gibraltar (the narrow channel between the Atlantic Ocean and the Mediterranean Sea). These internal waves are generated over the Camarinal Sill (western side of the strait) during the tidal outflow (toward the Atlantic Ocean) when critical hydraulic conditions are established. HAIWs remain over the sill for up to 4 h until the outflow slackens and are then released (mostly) towards the Mediterranean Sea. They have previously been observed using Synthetic Aperture Radar (SAR), which captures variations in surface water roughness. However, in this work we use high resolution optical remote sensing, with the aim of examining the influence of HAIWs on biogeochemical processes. We used hyperspectral images from the Hyperspectral Imager for the Coastal Ocean (HICO) and high spatial resolution (10 m) images from the MultiSpectral Instrument (MSI) onboard the Sentinel-2A satellite. This work represents the first attempt to examine the relation between internal wave generation and the water constituents over the Camarinal Sill using hyperspectral and high spatial resolution remote sensing images. This enhanced spatial and spectral resolution revealed detailed biogeochemical patterns associated with the internal waves and suggests local enhancements of productivity associated with internal wave trains.

  5. Applying high resolution remote sensing image and DEM to falling boulder hazard assessment

    NASA Astrophysics Data System (ADS)

    Huang, Changqing; Shi, Wenzhong; Ng, K. C.

    2005-10-01

    Boulder fall hazard assessment generally requires obtaining boulder information. The conventional approach of extensive mapping and surveying fieldwork is time-consuming, laborious and dangerous. This paper therefore proposes applying image processing technology to extract boulders and assess boulder fall hazard from high resolution remote sensing images. The method can replace the conventional approach and extract boulder information with high accuracy, including boulder size, shape and height, as well as the slope and aspect of the boulder's position. With this boulder information, boulder fall hazards can be assessed, prevented and mitigated.

  6. Earth remote sensing - 1970-1995

    NASA Technical Reports Server (NTRS)

    Thome, P. G.

    1984-01-01

    The past achievements, current status, and future prospects of the Landsat terrestrial remote-sensing satellite program are surveyed. Topics examined include the early history of space flight; the development of analysis techniques to interpret the multispectral images obtained by Landsats 1, 2, and 3; the characteristics of the advanced Landsat-4 Thematic Mapper; microwave scanning by Seasat and the Shuttle Imaging Radar; the usefulness of low-resolution AVHRR data from the NOAA satellites; improvements in Landsats 4 and 5 to permit tailoring of information to user needs; expansion and internationalization of the remote-sensing market in the late 1980s; and technological advances in both instrumentation and data processing predicted for the 1990s.

  7. Environmental and Landscape Remote Sensing Using Free and Open Source Image Processing Tools

    EPA Science Inventory

    As global climate change and human activities impact the environment, there is a growing need for scientific tools to monitor and measure environmental conditions that support human and ecological health. Remotely sensed imagery from satellite and airborne platforms provides a g...

  8. Feasibility Analysis and Demonstration of High-Speed Digital Imaging Using Micro-Arrays of Vertical Cavity Surface-Emitting Lasers

    DTIC Science & Technology

    2011-04-01

    Proceedings, Bristol, UK (2006). 5. M. A. Mentzer, Applied Optics Fundamentals and Device Applications: Nano, MOEMS, and Biotechnology, CRC Taylor...ballistic sensing, flash x-ray cineradiography, digital image correlation, image processing algorithms, and applications of MOEMS to nano- and

  9. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques to detect tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image with commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
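
    For reference, the Kappa coefficient used in the accuracy assessment can be computed from a square error (confusion) matrix as in the generic sketch below; the 2x2 matrix shown is hypothetical and not taken from the study.

        import numpy as np

        def kappa(confusion):
            """Cohen's Kappa from a square error (confusion) matrix of pixel counts."""
            confusion = np.asarray(confusion, dtype=float)
            n = confusion.sum()
            observed = np.trace(confusion) / n                               # overall agreement
            expected = (confusion.sum(0) * confusion.sum(1)).sum() / n ** 2  # chance agreement
            return (observed - expected) / (1.0 - expected)

        print(kappa([[120, 15], [10, 255]]))  # hypothetical damage / no-damage counts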

  10. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques to detect tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image with commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757

  11. The research of road and vehicle information extraction algorithm based on high resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong

    2016-09-01

    With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery have also increased greatly. Meanwhile, high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent transportation. Compared with traditional traffic information collection methods, vehicle information extraction using high-resolution remote sensing images has the advantages of high resolution and wide coverage. This has great guiding significance for urban planning, transportation management, travel route choice and so on. Firstly, the acquired high-resolution multi-spectral and panchromatic remote sensing images were preprocessed. After that, on the one hand, histogram equalization and linear enhancement were applied to the preprocessing results in order to obtain the optimal threshold for image segmentation. On the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress vegetation and water information in the preprocessing results. The two processing results were then combined. Finally, geometric characteristics were used to complete the road information extraction. The extracted road vector was used to limit the target vehicle area. Target vehicle extraction was divided into bright vehicle extraction and dark vehicle extraction, and the extraction results of the two kinds of vehicles were combined to obtain the final results. The experimental results demonstrate that the proposed algorithm achieves high precision in vehicle information extraction for different high resolution remote sensing images: the average false detection rate was about 5.36%, the average residual (missed detection) rate was about 13.60% and the average accuracy was approximately 91.26%.
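
    The NDVI/NDWI suppression step can be sketched as follows; the band names, thresholds and masking strategy are illustrative assumptions rather than the exact procedure of the paper.

        import numpy as np

        def ndvi(nir, red):
            return (nir - red) / (nir + red + 1e-9)

        def ndwi(green, nir):
            # McFeeters NDWI; positive values typically indicate open water.
            return (green - nir) / (green + nir + 1e-9)

        def suppress_vegetation_and_water(pan, nir, red, green,
                                          ndvi_thresh=0.3, ndwi_thresh=0.0):
            """Zero out likely vegetation and water pixels before road extraction (thresholds illustrative)."""
            keep = (ndvi(nir, red) < ndvi_thresh) & (ndwi(green, nir) < ndwi_thresh)
            return np.where(keep, pan, 0.0)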

  12. Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging

    PubMed Central

    Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528

  13. Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.

    PubMed

    Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-05-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.

  14. Technology study of quantum remote sensing imaging

    NASA Astrophysics Data System (ADS)

    Bi, Siwen; Lin, Xuling; Yang, Song; Wu, Zhiqiang

    2016-02-01

    Quantum remote sensing is proposed in response to the development of remote sensing science and technology and its application requirements. First, the background of quantum remote sensing is briefly introduced, covering quantum remote sensing theory, the information mechanism, imaging experiments, the state of principle-prototype research, and related work at home and abroad. We then expound the compression operator of the quantum remote sensing radiation field and the basic principles of the single-mode compression operator, the preparation of the compressed quantum light field for remote sensing image compression experiments and optical imaging, and the quantum remote sensing imaging principle prototype. Quantum remote sensing spaceborne active imaging technology is then put forward, mainly including the composition and working principle of the spaceborne active imaging system, the preparation and injection of compressed light in the active imaging device, and the quantum noise amplification device. Finally, a summary of quantum remote sensing research over the past 15 years and its future development is given.

  15. A tool for NDVI time series extraction from wide-swath remotely sensed images

    NASA Astrophysics Data System (ADS)

    Li, Zhishan; Shi, Runhe; Zhou, Cong

    2015-09-01

    Normalized Difference Vegetation Index (NDVI) is one of the most widely used indicators for monitoring vegetation coverage on the land surface. The time series features of NDVI are capable of reflecting dynamic changes of various ecosystems. Calculating NDVI from the Moderate Resolution Imaging Spectroradiometer (MODIS) and other wide-swath remotely sensed images provides an important way to monitor the spatial and temporal characteristics of large-scale NDVI. However, it is still difficult for ecologists to extract such information correctly and efficiently, because of the specialized processing steps required on the original remote sensing images, including radiometric calibration, geometric correction, multi-temporal data compositing and curve smoothing. In this study, we developed an efficient and convenient online toolbox with a friendly graphical user interface for non-remote-sensing professionals who want to extract NDVI time series. Technically, it is based on Java Web and Web GIS. Moreover, the Struts, Spring and Hibernate frameworks (SSH) are integrated into the system for easy maintenance and expansion. Latitude, longitude and time period are the key inputs that users need to provide, and the NDVI time series are calculated automatically.
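
    The core computation behind such a tool can be sketched in a few lines; the arrays, window size and the moving-average smoother below are illustrative assumptions, standing in for the tool's compositing and curve-smoothing steps.

        import numpy as np

        def ndvi_time_series(red_stack, nir_stack, row, col):
            """NDVI time series at one pixel from co-registered (time, row, col) band stacks."""
            red = red_stack[:, row, col].astype(float)
            nir = nir_stack[:, row, col].astype(float)
            return (nir - red) / (nir + red + 1e-9)

        def smooth(series, window=3):
            """Simple moving-average smoothing of the extracted series."""
            kernel = np.ones(window) / window
            return np.convolve(series, kernel, mode="same")

        # Hypothetical usage with 12 monthly composites of a 500 x 500 tile.
        series = smooth(ndvi_time_series(np.random.rand(12, 500, 500),
                                         np.random.rand(12, 500, 500), row=250, col=250))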

  16. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

    Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on the simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: an average-slope measurement model based on bivariate simplex splines is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; then, the distorted wave-front is uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution. Simulated experiments at different turbulence strengths show that our method achieves superior image restoration and noise rejection capability, especially when extracting the multidirectional phase derivatives.

  17. Geometric registration of remotely sensed data with SAMIR

    NASA Astrophysics Data System (ADS)

    Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto

    2015-06-01

    The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), able to extend the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow, without initial approximations, user interaction or limitations on spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.

  18. Kingfisher: a system for remote sensing image database management

    NASA Astrophysics Data System (ADS)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

    At present, retrieval methods for remote sensing image databases are mainly based on spatial-temporal information. The increasing amount of images collected by the ground stations of Earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application; many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe remote sensing image content. The target database is an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without loss independently of image resolution. The latter property allows the DBMS (Database Management System) to process a small amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results seem to be very encouraging.

  19. The development of machine technology processing for earth resource survey

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1970-01-01

    The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.

  20. Refocusing Space Technology

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This video presents two examples of NASA Technology Transfer. The first is a Downhole Video Logger, which uses remote sensing technology to help in mining. The second example is the use of satellite image processing technology to enhance ultrasound images taken during pregnancy.

  1. Live cell imaging of cytoskeletal and organelle dynamics in gravity-sensing cells in plant gravitropism.

    PubMed

    Nakamura, Moritaka; Toyota, Masatsugu; Tasaka, Masao; Morita, Miyo Terao

    2015-01-01

    Plants sense gravity and change their morphology/growth direction accordingly (gravitropism). The early process of gravitropism, gravity sensing, is supposed to be triggered by sedimentation of starch-filled plastids (amyloplasts) in statocytes such as root columella cells and shoot endodermal cells. For several decades, many scientists have focused on characterizing the role of the amyloplasts and observed their intracellular sedimentation in various plants. Recently, it has been discovered that the complex sedimentary movements of the amyloplasts are created not only by gravity but also by cytoskeletal/organelle dynamics, such as those of actin filaments and the vacuolar membrane. Thus, to understand how plants sense gravity, we need to analyze both amyloplast movements and their regulatory systems in statocytes. We have developed a vertical-stage confocal microscope that allows multicolor fluorescence imaging of amyloplasts, actin filaments and vacuolar membranes in vertically oriented plant tissues. We also developed a centrifuge microscope that allows bright-field imaging of amyloplasts during centrifugation. These microscope systems provide new insights into gravity-sensing mechanisms in Arabidopsis.

  2. Activities of the Remote Sensing Information Sciences Research Group

    NASA Technical Reports Server (NTRS)

    Estes, J. E.; Botkin, D.; Peuquet, D.; Smith, T.; Star, J. L. (Principal Investigator)

    1984-01-01

    Topics on the analysis and processing of remotely sensed data in the areas of vegetation analysis and modelling, georeferenced information systems, machine assisted information extraction from image data, and artificial intelligence are investigated. Discussions on support field data and specific applications of the proposed technologies are also included.

  3. Remote sensing. [land use mapping

    NASA Technical Reports Server (NTRS)

    Jinich, A.

    1979-01-01

    Various imaging techniques are outlined for use in mapping, land use, and land management in Mexico. Among the techniques discussed are pattern recognition and photographic processing. The utilization of information from remote sensing devices on satellites is studied. Multispectral band scanners are examined, and software, hardware, and other program requirements are surveyed.

  4. BP fusion model for the detection of oil spills on the sea by remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Weiwei; An, Jubai; Zhang, Hande; Lin, Bin

    2003-06-01

    Oil spills are a very serious form of marine pollution in many countries. In order to detect and identify oil spilled on the sea by remote sensors, scientists have to carry out research on the remote sensing images. For the detection of oil spills on the sea, edge detection is an important technology in image processing. Many edge detection algorithms have been developed for image processing, each with its own advantages and disadvantages. Based on the primary requirements of edge detection for oil spill images on the sea, namely computation time and detection accuracy, we developed a fusion model. The model employs a BP neural net to fuse the detection results of simple operators. We selected a BP neural net as the fusion technology because the relation between the edge gray levels produced by simple operators and the image's true edge gray levels is nonlinear, and a BP neural net is good at solving nonlinear identification problems. We therefore trained a BP neural net on some oil spill images, then applied the BP fusion model to the edge detection of other oil spill images and obtained good results. The detection results of some gradient operators and the Laplacian operator are also compared with the result of the BP fusion model to analyze the fusion effect. Finally, the paper points out that the fusion model achieves higher accuracy and higher speed in the edge detection of oil spill images.
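
    The fusion idea can be sketched as follows: per-pixel responses of simple operators are fed to a small back-propagation network trained against a reference edge map. The choice of operators, the network size and the reference edge map are illustrative assumptions, not the configuration of the paper.

        import numpy as np
        from scipy import ndimage
        from sklearn.neural_network import MLPRegressor

        def operator_responses(image):
            """Stack per-pixel responses of simple edge operators as features."""
            gx = ndimage.sobel(image, axis=1)
            gy = ndimage.sobel(image, axis=0)
            grad = np.hypot(gx, gy)               # gradient-operator response
            lap = np.abs(ndimage.laplace(image))  # Laplacian-operator response
            return np.stack([grad.ravel(), lap.ravel()], axis=1)

        def train_fusion(train_image, reference_edges):
            """Fit a small BP network mapping operator responses to reference edge strength."""
            mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
            mlp.fit(operator_responses(train_image), reference_edges.ravel())
            return mlp

        def fuse_edges(mlp, image):
            return mlp.predict(operator_responses(image)).reshape(image.shape)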

  5. On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration

    PubMed Central

    Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun

    2016-01-01

    Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem that the accuracy of the Yaogan-24 remote sensing satellite's on-board attitude data processing is not high enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification against the digital orthophoto (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach comprises three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, ensuring the quality of the observation data and the convergence and stability of the parameter estimation model. In addition, both Euler angles and quaternions could be used to build a mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared to the image geometric processing results based on on-board attitude data, the accuracy of uncontrolled and relative geometric positioning can be increased by about 50%. PMID:27483287

  6. High-speed imaging using compressed sensing and wavelength-dependent scattering (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shin, Jaewook; Bosworth, Bryan T.; Foster, Mark A.

    2017-02-01

    The process of multiple scattering has inherent characteristics that are attractive for high-speed imaging with high spatial resolution and a wide field-of-view. A coherent source passing through a multiple-scattering medium naturally generates speckle patterns with diffraction-limited features over an arbitrarily large field-of-view. In addition, the process of multiple scattering is deterministic allowing a given speckle pattern to be reliably reproduced with identical illumination conditions. Here, by exploiting wavelength dependent multiple scattering and compressed sensing, we develop a high-speed 2D time-stretch microscope. Highly chirped pulses from a 90-MHz mode-locked laser are sent through a 2D grating and a ground-glass diffuser to produce 2D speckle patterns that rapidly evolve with the instantaneous frequency of the chirped pulse. To image a scene, we first characterize the high-speed evolution of the generated speckle patterns. Subsequently we project the patterns onto the microscopic region of interest and collect the total light from the scene using a single high-speed photodetector. Thus the wavelength dependent speckle patterns serve as high-speed pseudorandom structured illumination of the scene. An image sequence is then recovered using the time-dependent signal received by the photodetector, the known speckle pattern evolution, and compressed sensing algorithms. Notably, the use of compressed sensing allows for reconstruction of a time-dependent scene using a highly sub-Nyquist number of measurements, which both increases the speed of the imager and reduces the amount of data that must be collected and stored. We will discuss our experimental demonstration of this approach and the theoretical limits on imaging speed.
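
    The compressed sensing recovery step can be illustrated generically: a sparse scene observed through a small number of pseudorandom patterns is recovered by l1-regularized least squares. The sketch below uses a Lasso solver as a stand-in for the reconstruction algorithms mentioned above; the sizes and regularization weight are illustrative.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)

        # Sparse "scene" of length n observed through m << n pseudorandom patterns (rows of A),
        # mimicking single-photodetector measurements under structured speckle illumination.
        n, m, k = 256, 64, 8
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        y = A @ x

        lasso = Lasso(alpha=1e-3, max_iter=10000)  # l1-regularized least squares
        lasso.fit(A, y)
        x_hat = lasso.coef_                        # recovered sparse scene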

  7. [Object-oriented segmentation and classification of forest gap based on QuickBird remote sensing image].

    PubMed

    Mao, Xue Gang; Du, Zi Han; Liu, Jia Qian; Chen, Shu Xin; Hou, Ji Yu

    2018-01-01

    Traditional field investigation and artificial interpretation cannot satisfy the need for forest gap extraction at the regional scale. High spatial resolution remote sensing images make regional forest gap extraction possible. In this study, we used an object-oriented classification method to segment and classify forest gaps based on a QuickBird high resolution optical remote sensing image of the Jiangle National Forestry Farm of Fujian Province. In the object-oriented classification, 10 scales (10-100, with a step length of 10) were adopted to segment the QuickBird remote sensing image, and the intersection area of the reference object (RA_or) and the intersection area of the segmented object (RA_os) were adopted to evaluate the segmentation result at each scale. For the segmentation result at each scale, 16 spectral characteristics and a support vector machine (SVM) classifier were further used to classify forest gaps, non-forest gaps and others. The results showed that the optimal segmentation scale was 40, where RA_or was equal to RA_os. The accuracy difference between the maximum and minimum at different segmentation scales was 22%. At the optimal scale, the overall classification accuracy was 88% (Kappa=0.82) based on the SVM classifier. Combining high resolution remote sensing image data with an object-oriented classification method can replace the traditional field investigation and artificial interpretation method to identify and classify forest gaps at the regional scale.

  8. Variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Wang, Hui; Wang, Liyong; Yu, Xiangzhou; Yang, Le

    2018-01-01

    Uneven illumination reduces the quality of remote sensing images and interferes with subsequent processing and applications. A variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction is proposed. The L1 norm and the L2 norm are adopted to constrain the textures and details of the reflectance image and the smoothness of the illumination image, respectively. The problem of separating the illumination image from the reflectance image is thus transformed into solving the variational model optimally. To accelerate the solution, the split Bregman method is used to decompose the variational model into three subproblems, which are solved by alternating iteration. Two groups of experiments are carried out on two synthetic images and three real remote sensing images. Compared with the variational Retinex method with a single-norm constraint and the Mask method, the proposed method performs better in both visual evaluation and quantitative measurements. It can effectively eliminate the uneven illumination while maintaining the textures and details of the remote sensing image. Moreover, the proposed method solved with the split Bregman method is more than 10 times faster than the same model solved with the steepest descent method.
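
    Based on the description above, the variational model plausibly takes a form such as the following (a reconstruction under stated assumptions, not the exact functional of the paper), where S is the log-domain observed image, L the illumination, R = S - L the reflectance, and lambda a regularization weight:

        \min_{L} \; \| \nabla (S - L) \|_{1} \; + \; \lambda \, \| \nabla L \|_{2}^{2}, \qquad R = S - L

    The L1 term preserves the textures and details of the reflectance, while the L2 term enforces smoothness of the illumination, matching the double-norm hybrid constraint described above.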

  9. Downscaling remotely sensed imagery using area-to-point cokriging and multiple-point geostatistical simulation

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Atkinson, Peter M.; Zhang, Jingxiong

    2015-03-01

    A cross-scale data integration method was developed and tested based on the theory of geostatistics and multiple-point geostatistics (MPG). The goal was to downscale remotely sensed images while retaining spatial structure by integrating images at different spatial resolutions. During the downscaling process, a rich spatial correlation model in the form of a training image was incorporated to facilitate reproduction of similar local patterns in the simulated images. Area-to-point cokriging (ATPCK) was used as a locally varying mean (LVM) (i.e., soft data) to deal with the change of support problem (COSP) for cross-scale integration, which MPG cannot achieve alone. Several pairs of spectral bands of remotely sensed images were tested for integration within different cross-scale case studies. The experiments show that MPG can restore the spatial structure of the image at a fine spatial resolution given the training image and conditioning data. The super-resolution image can be predicted using the proposed method, which cannot be realised using most data integration methods. The results show that the ATPCK-MPG approach can achieve greater accuracy than methods which do not account for the change of support issue.

  10. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    PubMed Central

    Wang, Guizhou; Liu, Jianbo; He, Guojin

    2013-01-01

    This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
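
    The spatial mapping step with the area dominant principle can be sketched as follows; the threshold value is illustrative, and the final spectral reclassification of unclassified regions (minimum distance to mean) is omitted.

        import numpy as np

        def map_classes_to_segments(pixel_classes, segments, threshold=0.5, unclassified=0):
            """Assign each segment its dominant pixel class if that class's area share exceeds `threshold`."""
            out = np.full_like(pixel_classes, unclassified)
            for seg_id in np.unique(segments):
                region = segments == seg_id
                labels, counts = np.unique(pixel_classes[region], return_counts=True)
                if counts.max() / counts.sum() >= threshold:
                    out[region] = labels[counts.argmax()]
            return out  # segments below the threshold stay unclassified for later reclassification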

  11. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  12. Mapping wildland fuels for fire management across multiple scales: integrating remote sensing, GIS, and biophysical modeling

    USGS Publications Warehouse

    Keane, Robert E.; Burgan, Robert E.; Van Wagtendonk, Jan W.

    2001-01-01

    Fuel maps are essential for computing spatial fire hazard and risk and simulating fire growth and intensity across a landscape. However, fuel mapping is an extremely difficult and complex process requiring expertise in remotely sensed image classification, fire behavior, fuels modeling, ecology, and geographical information systems (GIS). This paper first presents the challenges of mapping fuels: canopy concealment, fuelbed complexity, fuel type diversity, fuel variability, and fuel model generalization. Then, four approaches to mapping fuels are discussed with examples provided from the literature: (1) field reconnaissance; (2) direct mapping methods; (3) indirect mapping methods; and (4) gradient modeling. A fuel mapping method is proposed that uses current remote sensing and image processing technology. Future fuel mapping needs are also discussed which include better field data and fuel models, accurate GIS reference layers, improved satellite imagery, and comprehensive ecosystem models.

  13. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  14. Exploring Models and Data for Remote Sensing Image Caption Generation

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoqiang; Wang, Binqiang; Zheng, Xiangtao; Li, Xuelong

    2018-04-01

    Inspired by the recent development of artificial satellites, remote sensing images have attracted extensive attention. Recently, noticeable progress has been made in scene classification and target detection. However, it is still not clear how to describe remote sensing image content with accurate and concise sentences. In this paper, we investigate how to describe remote sensing images with accurate and flexible sentences. First, some annotation instructions are presented to better describe the remote sensing images, considering their special characteristics. Second, in order to exhaustively exploit the contents of remote sensing images, a large-scale aerial image data set is constructed for remote sensing image captioning. Finally, a comprehensive review of the proposed data set is presented to fully advance the task of remote sensing captioning. Extensive experiments on the proposed data set demonstrate that the content of a remote sensing image can be completely described by generating language descriptions. The data set is available at https://github.com/201528014227051/RSICD_optimal

  15. Fast perceptual image hash based on cascade algorithm

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly; Yavtushenko, Evgeniya

    2017-09-01

    In this paper, we propose a perceptual image hash algorithm based on a cascade algorithm, which can be applied to image authentication, retrieval, and indexing. A perceptual image hash is used for image retrieval in the sense of human perception and is robust against distortions caused by compression, noise, common signal processing and geometric modifications. The main disadvantage of perceptual hashing is its high time expense. The proposed cascade algorithm of image retrieval starts with short hashes, and then a full hash is applied to the filtered results. Computer simulation results show that the proposed hash algorithm yields good performance in terms of robustness, discriminability, and time expense.
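
    For orientation, a generic perceptual hash (average hash) and its Hamming-distance comparison look like the sketch below; this is not the cascade algorithm of the paper, but the kind of fingerprint a cascade could compute in short and full-length variants.

        import numpy as np
        from PIL import Image

        def average_hash(path, hash_size=8):
            """Downscale, convert to gray, threshold at the mean: a 64-bit perceptual fingerprint."""
            img = Image.open(path).convert("L").resize((hash_size, hash_size), Image.LANCZOS)
            pixels = np.asarray(img, dtype=float)
            return (pixels > pixels.mean()).flatten()

        def hamming_distance(h1, h2):
            """Small distances indicate perceptually similar images despite compression or noise."""
            return int(np.count_nonzero(h1 != h2))

    A cascade in this spirit could first compare very short hashes (for example hash_size=4) to discard clearly dissimilar images, and apply the full hash only to the remaining candidates.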

  16. Change detection from remotely sensed images: From pixel-based to object-based approaches

    NASA Astrophysics Data System (ADS)

    Hussain, Masroor; Chen, Dongmei; Cheng, Angela; Wei, Hui; Stanley, David

    2013-06-01

    The appetite for up-to-date information about earth's surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection.

  17. Near Real-Time Georeference of Unmanned Aerial Vehicle Images for Post-Earthquake Response

    NASA Astrophysics Data System (ADS)

    Wang, S.; Wang, X.; Dou, A.; Yuan, X.; Ding, L.; Ding, X.

    2018-04-01

    The rapid collection of Unmanned Aerial Vehicle (UAV) remote sensing images plays an important role in quickly submitting disaster information and monitoring seriously damaged objects after an earthquake. However, for the hundreds of UAV images collected in one flight sortie, the traditional data processing methods are image stitching and three-dimensional reconstruction, which take one to several hours and slow down disaster response. If images are instead searched manually, much more time is spent selecting them, and the selected images have no spatial reference. Therefore, a near-real-time rapid georeference method for UAV remote sensing disaster data is proposed in this paper. The UAV images are georeferenced using the position and attitude data collected by the UAV flight control system, and the georeferenced data are organized by means of world files, a format developed by ESRI. The C# language, combined with the Geospatial Data Abstraction Library (GDAL), is adopted to implement the rapid UAV image georeference software. The results show that the method can rapidly georeference up to one thousand UAV disaster images within one minute, meeting the demand of rapid disaster response, which is of great value in disaster emergency applications.
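
    The world file organization mentioned above can be illustrated with a plain-Python sketch (the paper itself uses C# with GDAL); the six lines of an ESRI world file are the x pixel size, two rotation terms, the negative y pixel size, and the map coordinates of the center of the upper-left pixel. The file name and values below are hypothetical.

        def write_world_file(path, pixel_size_x, pixel_size_y, upper_left_x, upper_left_y,
                             rot_x=0.0, rot_y=0.0):
            """Write an ESRI world file (e.g. .jgw for a JPEG) with the six standard lines."""
            lines = [
                pixel_size_x,        # A: pixel size in the x-direction
                rot_y,               # D: rotation term
                rot_x,               # B: rotation term
                -abs(pixel_size_y),  # E: pixel size in the y-direction (negative: rows increase downward)
                upper_left_x,        # C: x map coordinate of the center of the upper-left pixel
                upper_left_y,        # F: y map coordinate of the center of the upper-left pixel
            ]
            with open(path, "w") as f:
                f.write("\n".join(f"{v:.10f}" for v in lines) + "\n")

        write_world_file("frame_0001.jgw", 0.05, 0.05, 435120.25, 3629881.75)  # hypothetical 5 cm GSD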

  18. Building Development Monitoring in Multitemporal Remotely Sensed Image Pairs with Stochastic Birth-Death Dynamics.

    PubMed

    Benedek, C; Descombes, X; Zerubia, J

    2012-01-01

    In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computation complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.

  19. Evaluating the effect of remote sensing image spatial resolution on soil exchangeable potassium prediction models in smallholder farm settings.

    PubMed

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P

    2017-09-15

    Major end users of Digital Soil Mapping (DSM) such as policy makers and agricultural extension workers are faced with choosing the appropriate remote sensing data. The objective of this research is to analyze the effects of the spatial resolution of different remote sensing images on soil prediction models at two smallholder farms in Southern India, Kothapally (Telangana State) and Masuti (Karnataka State), and to provide empirical guidelines for choosing the appropriate remote sensing images in DSM. Bayesian kriging (BK) was utilized to characterize the spatial pattern of exchangeable potassium (Kex) in the topsoil (0-15 cm) at different spatial resolutions by incorporating spectral indices from Landsat 8 (30 m), RapidEye (5 m), and WorldView-2/GeoEye-1/Pleiades-1A images (2 m). Some spectral indices, such as band reflectances, band ratios, the Crust Index and the Atmospherically Resistant Vegetation Index from multiple images, showed relatively strong correlations with soil Kex in the two study areas. The research also suggested that fine spatial resolution WorldView-2/GeoEye-1/Pleiades-1A-based and RapidEye-based soil prediction models would not necessarily have higher prediction performance than coarse spatial resolution Landsat 8-based soil prediction models. End users of DSM in smallholder farm settings need to select the appropriate spectral indices and consider different factors such as the spatial resolution, band width, spectral resolution, temporal frequency, cost, and processing time of different remote sensing images. Overall, remote sensing-based Digital Soil Mapping has the potential to be promoted to smallholder farm settings all over the world and to help smallholder farmers implement sustainable and field-specific soil nutrient management schemes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Software Suite to Support In-Flight Characterization of Remote Sensing Systems

    NASA Technical Reports Server (NTRS)

    Stanley, Thomas; Holekamp, Kara; Gasser, Gerald; Tabor, Wes; Vaughan, Ronald; Ryan, Robert; Pagnutti, Mary; Blonski, Slawomir; Kenton, Ross

    2014-01-01

    A characterization software suite was developed to facilitate NASA's in-flight characterization of commercial remote sensing systems. Characterization of aerial and satellite systems requires knowledge of ground characteristics, or ground truth. This information is typically obtained with instruments taking measurements prior to or during a remote sensing system overpass. Acquired ground-truth data, which can consist of hundreds of measurements with different data formats, must be processed before it can be used in the characterization. Accurate in-flight characterization of remote sensing systems relies on multiple field data acquisitions that are efficiently processed, with minimal error. To address the need for timely, reproducible ground-truth data, a characterization software suite was developed to automate the data processing methods. The characterization software suite is engineering code, requiring some prior knowledge and expertise to run. The suite consists of component scripts for each of the three main in-flight characterization types: radiometric, geometric, and spatial. The component scripts for the radiometric characterization operate primarily by reading the raw data acquired by the field instruments, combining it with other applicable information, and then reducing it to a format that is appropriate for input into MODTRAN (MODerate resolution atmospheric TRANsmission), an Air Force Research Laboratory-developed radiative transport code used to predict at-sensor measurements. The geometric scripts operate by comparing identified target locations from the remote sensing image to known target locations, producing circular error statistics defined by the Federal Geographic Data Committee Standards. The spatial scripts analyze a target edge within the image, and produce estimates of Relative Edge Response and the value of the Modulation Transfer Function at the Nyquist frequency. The software suite enables rapid, efficient, automated processing of ground truth data, which has been used to provide reproducible characterizations on a number of commercial remote sensing systems. Overall, this characterization software suite improves the reliability of ground-truth data processing techniques that are required for remote sensing system in-flight characterizations.
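
    As a small illustration of the geometric characterization step described above, the Python sketch below compares image-measured target locations with surveyed locations and summarizes the radial error; the percentile-based CE90 summary is a generic formulation and not necessarily the exact FGDC statistic computed by the suite.
```python
# Generic circular-error summary from measured vs. surveyed target coordinates.
import numpy as np

def circular_error(measured_xy, known_xy, percentile=90):
    """Radial (horizontal) error statistics for identified targets, in map units."""
    measured_xy, known_xy = np.asarray(measured_xy, float), np.asarray(known_xy, float)
    radial = np.hypot(*(measured_xy - known_xy).T)          # radial error per target
    return {
        "rmse_r": float(np.sqrt(np.mean(radial ** 2))),     # radial root-mean-square error
        f"ce{percentile}": float(np.percentile(radial, percentile)),
    }

print(circular_error([(10.2, 4.9), (3.8, 7.1), (6.1, 1.8)],
                     [(10.0, 5.0), (4.0, 7.0), (6.0, 2.0)]))
```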

  1. Development of plenoptic infrared camera using low dimensional material based photodetectors

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns and have been widely used for military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rates, low resolution, temperature dependence, and high cost, while carbon nanotube (CNT) based low-dimensional nanotechnology has made considerable progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems to address the sensitivity, speed, and cooling difficulties of state-of-the-art IR imagers. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for the fundamental understanding of CNT photoresponse-induced processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, the sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate isolated the sensor from background noise, and the parylene top packaging blocked environmental humidity. At the same time, the fabrication process was optimized by electrically monitored real-time dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized with digital microscopy and a precision linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to realize the nano-sensor IR camera. To explore more of the infrared light field, compressive sensing algorithms were applied to light field sampling, 3-D imaging, and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera, and temporal information of the video streams, is extracted and expressed in a compressive manner. Computational algorithms are then applied to reconstruct images beyond static 2-D information. Super-resolution signal processing was then used to enhance the spatial resolution of the images. The complete camera system delivers richly detailed content for infrared spectrum sensing.
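
    The following Python sketch illustrates the kind of compressive-sensing reconstruction referred to above: a signal that is sparse in the DCT domain is sampled with a random measurement matrix and recovered with ISTA. It is purely illustrative and is not the camera's actual light-field or video pipeline.
```python
# ISTA recovery of a DCT-sparse signal from random compressive measurements (toy example).
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n, m = 256, 96                                    # signal length, number of measurements
coeffs = np.zeros(n); coeffs[[3, 17, 40]] = [1.0, -0.7, 0.5]
x_true = idct(coeffs, norm="ortho")               # signal sparse in the DCT domain
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
y = Phi @ x_true                                  # compressive measurements

A = Phi @ idct(np.eye(n), axis=0, norm="ortho")   # sensing matrix expressed in DCT coefficients
alpha, lam = 1.0 / np.linalg.norm(A, 2) ** 2, 0.01
c = np.zeros(n)
for _ in range(300):                              # ISTA: gradient step + soft threshold
    c = c - alpha * A.T @ (A @ c - y)
    c = np.sign(c) * np.maximum(np.abs(c) - alpha * lam, 0.0)

x_hat = idct(c, norm="ortho")
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```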

  2. Short cavity active mode locking fiber laser for optical sensing and imaging

    NASA Astrophysics Data System (ADS)

    Lee, Hwi Don; Han, Ga Hee; Jeong, Syung Won; Jeong, Myung Yung; Kim, Chang-Seok; Shin, Jun Geun; Lee, Byeong Ha; Eom, Tae Joong

    2014-05-01

    We demonstrate a highly linear wavenumber-swept active mode locking (AML) fiber laser for optical sensing and imaging without any wavenumber-space resampling process. In this all-electric AML wavenumber-swept mechanism, a conventional wavelength selection filter is eliminated and, instead, a suitably programmed electric modulation signal is applied directly to the gain medium. Various types of wavenumber (or wavelength) tuning can be implemented because of the filter-less cavity configuration. Therefore, we successfully demonstrate a linearly wavenumber-swept AML fiber laser with 26.5 mW of output power to obtain an in-vivo OCT image at a 100 kHz sweep rate.

  3. Towards a framework for agent-based image analysis of remote-sensing data

    PubMed Central

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-01-01

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916

  4. Towards a framework for agent-based image analysis of remote-sensing data.

    PubMed

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  5. Comprehensive UAV agricultural remote-sensing research at Texas A&M University

    NASA Astrophysics Data System (ADS)

    Thomasson, J. Alex; Shi, Yeyin; Olsenholler, Jeffrey; Valasek, John; Murray, Seth C.; Bishop, Michael P.

    2016-05-01

    Unmanned aerial vehicles (UAVs) have advantages over manned vehicles for agricultural remote sensing. Flying UAVs is less expensive, is more flexible in scheduling, enables lower altitudes, uses lower speeds, and provides better spatial resolution for imaging. The main disadvantage is that, at lower altitudes and speeds, only small areas can be imaged. However, on large farms with contiguous fields, high-quality images can be collected regularly by using UAVs with appropriate sensing technologies that enable high-quality image mosaics to be created with sufficient metadata and ground-control points. In the United States, rules governing the use of aircraft are promulgated and enforced by the Federal Aviation Administration (FAA), and rules governing UAVs are currently in flux. Operators must apply for appropriate permissions to fly UAVs. In the summer of 2015 Texas A&M University's agricultural research agency, Texas A&M AgriLife Research, embarked on a comprehensive program of remote sensing with UAVs at its 568-ha Brazos Bottom Research Farm. This farm is made up of numerous fields where various crops are grown in plots or complete fields. The crops include cotton, corn, sorghum, and wheat. After gaining FAA permission to fly at the farm, the research team used multiple fixed-wing and rotary-wing UAVs along with various sensors to collect images over all parts of the farm at least once per week. This article reports on details of flight operations and sensing and analysis protocols, and it includes some lessons learned in the process of developing a UAV remote-sensing effort of this sort.

  6. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies

    NASA Astrophysics Data System (ADS)

    Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad

    2018-06-01

    Unwanted contrast in high resolution satellite images, such as shadowed areas, directly affects the results of further processing of urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method, a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. The pixel-based results are then further processed by an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate the new approach. The results show the superiority of the proposed method over several state-of-the-art shadow detection methods, with an average F-measure of 96%.
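
    The Python sketch below illustrates the two-stage structure described above: a pixel-level dark-area mask followed by an object-based majority vote over image segments. A plain intensity threshold stands in for the paper's C4 index, and SLIC superpixels stand in for its image objects, so this is an assumption-laden illustration rather than the published method.
```python
# Pixel-level thresholding followed by object-based majority voting (illustrative only).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import slic

def object_based_shadow_mask(image_rgb):
    """image_rgb: float RGB array in [0, 1]."""
    intensity = image_rgb.mean(axis=2)
    pixel_mask = intensity < threshold_otsu(intensity)         # pixel-level shadow candidates
    segments = slic(image_rgb, n_segments=500, start_label=1)  # image objects (superpixels)
    object_mask = np.zeros_like(pixel_mask)
    for seg_id in np.unique(segments):
        inside = segments == seg_id
        # Majority analysis: label the whole object as shadow if most of its pixels are dark
        object_mask[inside] = pixel_mask[inside].mean() > 0.5
    return object_mask
```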

  7. Imaging spectroscopy links aspen genotype with below-ground processes at landscape scales

    PubMed Central

    Madritch, Michael D.; Kingdon, Clayton C.; Singh, Aditya; Mock, Karen E.; Lindroth, Richard L.; Townsend, Philip A.

    2014-01-01

    Fine-scale biodiversity is increasingly recognized as important to ecosystem-level processes. Remote sensing technologies have great potential to estimate both biodiversity and ecosystem function over large spatial scales. Here, we demonstrate the capacity of imaging spectroscopy to discriminate among genotypes of Populus tremuloides (trembling aspen), one of the most genetically diverse and widespread forest species in North America. We combine imaging spectroscopy (AVIRIS) data with genetic, phytochemical, microbial and biogeochemical data to determine how intraspecific plant genetic variation influences below-ground processes at landscape scales. We demonstrate that both canopy chemistry and below-ground processes vary over large spatial scales (continental) according to aspen genotype. Imaging spectrometer data distinguish aspen genotypes through variation in canopy spectral signature. In addition, foliar spectral variation correlates well with variation in canopy chemistry, especially condensed tannins. Variation in aspen canopy chemistry, in turn, is correlated with variation in below-ground processes. Variation in spectra also correlates well with variation in soil traits. These findings indicate that forest tree species can create spatial mosaics of ecosystem functioning across large spatial scales and that these patterns can be quantified via remote sensing techniques. Moreover, they demonstrate the utility of using optical properties as proxies for fine-scale measurements of biodiversity over large spatial scales. PMID:24733949

  8. Radar image enhancement and simulation as an aid to interpretation and training

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Stiles, J. A.; Holtzman, J. C.; Dellwig, L. F.; Held, D. N.

    1980-01-01

    Greatly increased activity in the field of radar image applications in the coming years demands that techniques of radar image analysis, enhancement, and simulation be developed now. Since the statistical nature of radar imagery differs from that of photographic imagery, one finds that the required digital image processing algorithms (e.g., for improved viewing and feature extraction) differ from those currently existing. This paper addresses these problems and discusses work at the Remote Sensing Laboratory in image simulation and processing, especially for systems comparable to the formerly operational SEASAT synthetic aperture radar.

  9. Comparative performance evaluation of transform coding in image pre-processing

    NASA Astrophysics Data System (ADS)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation that drives both the development and the dissemination of pioneering communication systems with ever-increasing fidelity and resolution. Considerable research has been devoted to image processing techniques, driven by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers highlight techniques that can be used at the transmitter end to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, comparing their effectiveness, necessary and sufficient conditions, properties and implementation complexity. Motivated by prior advancements in image processing techniques, the researchers compare the performance of several contemporary image pre-processing frameworks: compressed sensing, singular value decomposition, and the integer wavelet transform. The paper demonstrates the potential of the integer wavelet transform as an efficient pre-processing scheme.
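
    As a small example of one of the pre-processing schemes compared above, the Python sketch below performs rank-k truncation of an image via the singular value decomposition and reports the retained signal energy; it is a generic illustration, not the paper's evaluation protocol.
```python
# Rank-k SVD truncation of an image with retained-energy reporting.
import numpy as np

def svd_truncate(image, k):
    """Return the rank-k approximation of a grayscale image and the retained energy fraction."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    approx = U[:, :k] * s[:k] @ Vt[:k]              # rank-k reconstruction
    energy = (s[:k] ** 2).sum() / (s ** 2).sum()    # fraction of signal energy retained
    return approx, energy

img = np.random.default_rng(1).random((128, 128))   # stand-in for a grayscale image
approx, energy = svd_truncate(img, k=20)
print(f"retained energy with 20 of 128 singular values: {energy:.3f}")
```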

  10. Removing sun glint from optical remote sensing images of shallow rivers

    USGS Publications Warehouse

    Overstreet, Brandon T.; Legleiter, Carl

    2017-01-01

    Sun glint is the specular reflection of light from the water surface, which often causes unusually bright pixel values that can dominate fluvial remote sensing imagery and obscure the water-leaving radiance signal of interest for mapping bathymetry, bottom type, or water column optical characteristics. Although sun glint is ubiquitous in fluvial remote sensing imagery, river-specific methods for removing sun glint are not yet available. We show that existing sun glint-removal methods developed for multispectral images of marine shallow water environments over-correct shallow portions of fluvial remote sensing imagery, resulting in regions of unreliable data along channel margins. We build on existing marine glint-removal methods to develop a river-specific technique that removes sun glint from shallow areas of the channel without overcorrection by accounting for non-negligible water-leaving near-infrared radiance. This new sun glint-removal method can improve the accuracy of spectrally-based depth retrieval in cases where sun glint dominates the at-sensor radiance. For an example image of the gravel-bed Snake River, Wyoming, USA, observed-vs.-predicted R2 values for depth retrieval improved from 0.66 to 0.76 following sun glint removal. The methodology presented here is straightforward to implement and could be incorporated into image processing workflows for multispectral images that include a near-infrared band.
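
    For orientation, the Python sketch below shows a standard marine-style deglinting step of the kind the authors build on: each visible band is regressed against the NIR band over a sample of optically deep pixels, and the glint component is subtracted. The river-specific correction in the paper additionally accounts for non-negligible water-leaving NIR radiance, which this generic sketch does not model.
```python
# Generic NIR-regression glint removal for one visible band (illustrative only).
import numpy as np

def deglint(visible_band, nir_band, sample_mask):
    """visible_band, nir_band: 2-D reflectance arrays; sample_mask: pixels used to fit the slope."""
    slope = np.polyfit(nir_band[sample_mask], visible_band[sample_mask], 1)[0]
    nir_ambient = nir_band[sample_mask].min()       # glint-free NIR level over the sample
    return visible_band - slope * (nir_band - nir_ambient)
```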

  11. Education and the Living Image: Reflections on Imagery, Fantasy, and the Art of Recognition.

    ERIC Educational Resources Information Center

    Abbs, Peter

    1981-01-01

    The educational role of the artist is close to that of the dreamer in the sense that they are active collaborators in the extraordinary process through which instinct and bodily function are converted into image and fantasy. The development of an image can release powerful flows of intellectual energy. (JN)

  12. Using hyperspectral remote sensing for land cover classification

    NASA Astrophysics Data System (ADS)

    Zhang, Wendy W.; Sriharan, Shobha

    2005-01-01

    This project used a hyperspectral data set to classify land cover using remote sensing techniques. Many different earth-sensing satellites, with diverse sensors mounted on sophisticated platforms, are currently in earth orbit. These sensors are designed to cover a wide range of the electromagnetic spectrum and are generating enormous amounts of data that must be processed, stored, and made available to the user community. The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) collects data in 224 contiguous bands, each approximately 9.6 nm wide, between 0.40 and 2.45 µm. Hyperspectral sensors acquire images in many very narrow, contiguous spectral bands throughout the visible, near-IR, and thermal IR portions of the spectrum. The unsupervised image classification procedure automatically categorizes the pixels in an image into land cover classes or themes. Experiments on using hyperspectral remote sensing for land cover classification were conducted during the 2003 and 2004 NASA Summer Faculty Fellowship Program at Stennis Space Center. Research Systems Inc.'s (RSI) ENVI software package was used in this application framework. In this application, emphasis was placed on: (1) spectrally oriented classification procedures for land cover mapping, particularly supervised surface classification using AVIRIS data; and (2) identifying data endmembers.
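
    The Python sketch below shows an unsupervised classification of a hyperspectral cube with k-means, in the spirit of the procedure described above; the band count, class count, and use of scikit-learn (rather than ENVI's own classification tools) are illustrative assumptions.
```python
# Unsupervised land-cover classification of a hyperspectral cube via k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_classify(cube, n_classes=8):
    """cube: reflectance array of shape (rows, cols, bands); returns a thematic label map."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(rows, cols)

theme_map = unsupervised_classify(np.random.rand(50, 50, 224), n_classes=6)
```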

  13. Geospatial approach towards enumerative analysis of suspended sediment concentration for Ganges-Brahmaputra Bay

    NASA Astrophysics Data System (ADS)

    Pandey, Palak; Kunte, Pravin D.

    2016-10-01

    This study presents an easy, modular, user-friendly, and flexible software package for processing Landsat 7 ETM+ and Landsat 8 OLI-TIRS data to estimate suspended particulate matter concentrations in coastal waters. The package includes: 1) algorithms developed using the freely downloadable SCILAB package; 2) ERDAS models for iterative processing of Landsat images; and 3) an ArcMap tool for plotting and map making. Using the SCILAB package, a module is written for geometric corrections, radiometric corrections and the derivation of normalized water-leaving reflectance from Landsat 8 OLI-TIRS and Landsat 7 ETM+ data. Using ERDAS models, a sequence of modules is developed for iterative processing of Landsat images and estimation of suspended particulate matter concentrations. The processed images are used for preparing suspended sediment concentration maps. The applicability of this software package is demonstrated by estimating and plotting seasonal suspended sediment concentration maps off the Bengal delta. The software is flexible enough to accommodate other remotely sensed data, such as Ocean Colour Monitor (OCM), Indian Remote Sensing (IRS), and MODIS data, by replacing a few parameters in the algorithm for estimating suspended sediment concentration in coastal waters.
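
    As a hedged illustration of the final estimation step, the Python sketch below applies a generic empirical band-ratio model to normalized water-leaving reflectance to map suspended particulate matter; the coefficients and band choice are placeholders rather than the calibrated model used for the Ganges-Brahmaputra bay.
```python
# Generic empirical band-ratio SPM estimate (placeholder coefficients, illustrative only).
import numpy as np

def spm_from_reflectance(rho_red, rho_green, a=10.0, b=1.5):
    """Empirical band-ratio estimate of suspended particulate matter concentration."""
    ratio = np.divide(rho_red, rho_green, out=np.zeros_like(rho_red), where=rho_green > 0)
    return a * ratio ** b                           # e.g. mg/L, after site-specific calibration

spm_map = spm_from_reflectance(np.random.rand(100, 100) * 0.05,
                               np.random.rand(100, 100) * 0.08)
```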

  14. Distributed Sensing and Processing for Multi-Camera Networks

    NASA Astrophysics Data System (ADS)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  15. Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints

    NASA Astrophysics Data System (ADS)

    Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena

    2012-04-01

    Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an effective way for us to enhance the image quality at the matched regions between the prior and current images compared to the existing PICCS algorithm. Compared to the current CBCT imaging protocols, the APICCS algorithm allows an imaging dose reduction of 10-40 times due to the greatly reduced number of projections and lower x-ray tube current level coming from the low-dose protocol.
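
    The Python sketch below illustrates the adaptive relaxation-map idea described above: a mismatch mask between the classified prior and current images is converted, via a distance transform, into a voxel-wise weight so that changed-anatomy regions are updated more strongly by new projection data. The exponential weighting law and the numerical constants are illustrative assumptions, not the APICCS implementation.
```python
# Voxel-dependent relaxation map built from a prior/current mismatch mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

def relaxation_map(prior_classes, current_classes, falloff_voxels=10.0):
    """prior_classes, current_classes: integer label volumes (e.g. air / soft tissue / bone)."""
    mismatch = prior_classes != current_classes            # changed-anatomy voxels
    dist_to_change = distance_transform_edt(~mismatch)     # distance of each voxel to a change
    # Weights near 1 in and around mismatched regions (trust the new projections),
    # decaying toward a small floor in matched regions (lean on the prior image).
    return 0.1 + 0.9 * np.exp(-dist_to_change / falloff_voxels)

weights = relaxation_map(np.random.randint(0, 3, (32, 32, 32)),
                         np.random.randint(0, 3, (32, 32, 32)))
```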

  16. Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS

    NASA Astrophysics Data System (ADS)

    Sofina, N.; Ehlers, M.

    2012-08-01

    High resolution remotely sensed images provide current, detailed, and accurate information for large areas of the Earth's surface which can be used for change detection analyses. Conventional methods of image processing permit the detection of changes by comparing remotely sensed multitemporal images. However, for a successful analysis it is desirable to use images from the same sensor, acquired at the same time of the season, at the same time of day, and - for electro-optical sensors - in cloudless conditions. Thus, a change detection analysis can be problematic, especially for sudden catastrophic events. A promising alternative is the use of vector-based maps containing information about the original urban layout, which can be related to a single image obtained after the catastrophe. The paper describes a methodology for an object-based search for destroyed buildings as a consequence of a natural or man-made catastrophe (e.g., earthquakes, flooding, civil war). The analysis is based on remotely sensed and vector GIS data. It includes three main steps: (i) generation of features describing the state of buildings; (ii) classification of building conditions; and (iii) data import into a GIS. One of the proposed features is a newly developed 'Detected Part of Contour' (DPC). Additionally, several features based on the analysis of textural information corresponding to the investigated vector objects are calculated. The method is applied to remotely sensed images of areas that have been subjected to an earthquake. The results show the high reliability of the DPC feature as an indicator for change.
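
    As a hedged sketch of a DPC-like feature, the Python fragment below measures the fraction of a building's GIS contour that coincides with edges detected in the post-event image; the Canny parameters, dilation tolerance, and the exact formulation may differ from the authors' DPC feature.
```python
# Fraction of a vector building outline supported by detected image edges (illustrative only).
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.draw import polygon_perimeter
from skimage.feature import canny

def detected_part_of_contour(gray_image, contour_rows, contour_cols, tolerance_px=2):
    """Share of the building outline lying on image edges (near 0 = destroyed, near 1 = intact)."""
    edges = canny(gray_image, sigma=2.0)
    edges = binary_dilation(edges, iterations=tolerance_px)    # tolerate small misregistration
    rr, cc = polygon_perimeter(contour_rows, contour_cols, shape=gray_image.shape)
    return float(edges[rr, cc].mean())

score = detected_part_of_contour(np.random.rand(200, 200),
                                 [50, 50, 120, 120], [40, 110, 110, 40])
```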

  17. Satellite remote sensing facility for oceanographic applications

    NASA Technical Reports Server (NTRS)

    Evans, R. H.; Kent, S. S.; Seidman, J. B.

    1980-01-01

    The project organization, design process, and construction of a Remote Sensing Facility at Scripps Institution of Oceanography at LaJolla, California are described. The facility is capable of receiving, processing, and displaying oceanographic data received from satellites. Data are primarily imaging data representing the multispectral ocean emissions and reflectances, and are accumulated during 8 to 10 minute satellite passes over the California coast. The most important feature of the facility is the reception and processing of satellite data in real time, allowing investigators to direct ships to areas of interest for on-site verifications and experiments.

  18. Coastline detection with time series of SAR images

    NASA Astrophysics Data System (ADS)

    Ao, Dongyang; Dumitru, Octavian; Schwarz, Gottfried; Datcu, Mihai

    2017-10-01

    For maritime remote sensing, coastline detection is a vital task. With continuous coastline detection results from satellite image time series, the actual shoreline, the sea level, and environmental parameters can be observed to support coastal management and disaster warning. Established coastline detection methods are often based on SAR images and well-known image processing approaches. These methods involve a considerable amount of complicated data processing, which is a challenge for remote sensing time series. Additionally, a number of SAR satellites operating with polarimetric capabilities have been launched in recent years, and many investigations of target characteristics in radar polarization have been performed. In this paper, a fast and efficient coastline detection method is proposed which comprises three steps. First, we calculate a modified correlation coefficient of two SAR images of different polarization. This coefficient differs from the traditional computation, where normalization is needed. Through this modified approach, the separation between sea and land becomes more prominent. Second, we set a histogram-based threshold to distinguish between sea and land within the given image. The histogram is derived from the statistical distribution of the polarized SAR image pixel amplitudes. Third, we extract continuous coastlines using a Canny image edge detector, which is rather immune to speckle noise. Finally, the individual coastlines derived from the time series of SAR images can be checked for changes.
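
    The Python sketch below follows the three-step structure described above: a local cross-channel statistic of the two polarimetric amplitudes, a histogram-based threshold, and a Canny detector to trace the coastline. The window size and the specific use of Otsu's threshold are assumptions, not the authors' exact settings.
```python
# Three-step coastline extraction from dual-polarization SAR amplitudes (illustrative only).
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.feature import canny
from skimage.filters import threshold_otsu

def detect_coastline(amp_pol1, amp_pol2, window=9):
    # Step 1: local (unnormalized) cross-channel statistic of the two polarization amplitudes
    corr = uniform_filter(amp_pol1 * amp_pol2, size=window)
    # Step 2: histogram-based threshold separating sea and land
    land_mask = corr > threshold_otsu(corr)
    # Step 3: edge detection on the binary mask yields coastline pixels
    return canny(land_mask.astype(float), sigma=1.0)

coast = detect_coastline(np.random.rand(256, 256), np.random.rand(256, 256))
```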

  19. Wavefront Sensing With Switched Lenses for Defocus Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    In an alternative hardware design for an apparatus used in image-based wavefront sensing, defocus diversity is introduced by means of fixed lenses that are mounted in a filter wheel (see figure) so that they can be alternately switched into a position in front of the focal plane of an electronic camera recording the image formed by the optical system under test. [The terms image-based, wavefront sensing, and defocus diversity are defined in the first of the three immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] Each lens in the filter wheel is designed so that the optical effect of placing it at the assigned position is equivalent to the optical effect of translating the camera a specified defocus distance along the optical axis. Heretofore, defocus diversity has been obtained by translating the imaging camera along the optical axis to various defocus positions. Because data must be taken at multiple, accurately measured defocus positions, it is necessary to mount the camera on a precise translation stage that must be calibrated for each defocus position and/or to use an optical encoder for measurement and feedback control of the defocus positions. Additional latency is introduced into the wavefront sensing process as the camera is translated to the various defocus positions. Moreover, if the optical system under test has a large focal length, the required defocus values are large, making it necessary to use a correspondingly bulky translation stage. By eliminating the need for translation of the camera, the alternative design simplifies and accelerates the wavefront-sensing process. This design is cost-effective in that the filterwheel/lens mechanism can be built from commercial catalog components. After initial calibration of the defocus value of each lens, a selected defocus value is introduced by simply rotating the filter wheel to place the corresponding lens in front of the camera. The rotation of the wheel can be automated by use of a motor drive, and further calibration is not necessary. Because a camera-translation stage is no longer needed, the size of the overall apparatus can be correspondingly reduced.

  20. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  1. Research on active imaging information transmission technology of satellite borne quantum remote sensing

    NASA Astrophysics Data System (ADS)

    Bi, Siwen; Zhen, Ming; Yang, Song; Lin, Xuling; Wu, Zhiqiang

    2017-08-01

    According to the development and application needs of remote sensing science and technology, Prof. Siwen Bi proposed quantum remote sensing. First, the paper gives a brief introduction to the background of quantum remote sensing, the research status at home and abroad concerning its theory, information mechanism and imaging experiments, and the production of a principle prototype. Then, the quantization of the pure remote sensing radiation field and the state function and squeezing effect of the quantum remote sensing radiation field are emphasized. The paper also describes the squeezing optical operator of the quantum light field in active imaging information transmission and imaging experiments, achieving 2-3 times higher resolution than coherent light detection imaging and completing the production of a quantum remote sensing imaging prototype. The application of quantum remote sensing technology can significantly improve both the signal-to-noise ratio of information transmission imaging and the spatial resolution of quantum remote sensing. On this basis, Prof. Bi proposed a technical solution for active imaging information transmission technology of satellite borne quantum remote sensing, and launched research on its system composition and operating principle and on quantum noiseless amplifying devices, providing solutions and a technical basis for implementing active imaging information technology for satellite borne quantum remote sensing.

  2. Optical to optical interface device

    NASA Technical Reports Server (NTRS)

    Oliver, D. S.; Vohl, P.; Nisenson, P.

    1972-01-01

    The development, fabrication, and testing of a preliminary model of an optical-to-optical (noncoherent-to-coherent) interface device for use in coherent optical parallel processing systems are described. The developed device demonstrates a capability for accepting as an input a scene illuminated by a noncoherent radiation source and providing as an output a coherent light beam spatially modulated to represent the original noncoherent scene. The converter device developed under this contract employs a Pockels readout optical modulator (PROM). This is a photosensitive electro-optic element which can sense and electrostatically store optical images. The stored images can be simultaneously or subsequently readout optically by utilizing the electrostatic storage pattern to control an electro-optic light modulating property of the PROM. The readout process is parallel as no scanning mechanism is required. The PROM provides the functions of optical image sensing, modulation, and storage in a single active material.

  3. Direct-Solve Image-Based Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.

    2009-01-01

    A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a digitized single image from the image sensor. The image is displayed on a computer monitor. The software directly solves for the wavefront in a fraction of a second, and a picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. It has been reported that some measure of the wavefront is decomposed into modes of the optical system under test, but not whether this decomposition is a postprocessing step or part of the solution process.

  4. A Robust False Matching Points Detection Method for Remote Sensing Image Registration

    NASA Astrophysics Data System (ADS)

    Shan, X. J.; Tang, P.

    2015-04-01

    Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching point detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points. However, the RANSAC method cannot detect all false matching points in some remote sensing images. Therefore, a robust false matching point detection method based on the K-nearest-neighbour (K-NN) graph, called KGD, is proposed to obtain robust and highly accurate results. The KGD method starts with the construction of the K-NN graph in one image: a K-NN graph is first generated for each matching point and its K nearest matching points. A local transformation model for each matching point is then obtained by using its K nearest matching points. The error of each matching point is computed using its transformation model. Last, the L matching points with the largest errors are identified as false matching points and removed. This process is repeated until all errors are smaller than a given threshold. In addition, the KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiments. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
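
    The Python sketch below illustrates the KGD idea in a simplified form: each match is scored by its residual under a local affine transform fitted from its K nearest matches, and the worst matches are discarded iteratively. The choice of an affine local model, K, the number dropped per iteration, and the tolerance are assumptions for illustration.
```python
# Iterative removal of false matches using local K-NN affine models (simplified sketch).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kgd_filter(src_pts, dst_pts, k=8, drop_per_iter=2, tol=3.0):
    """Return indices of matches kept after iteratively removing high-residual matches."""
    src, dst = np.asarray(src_pts, float), np.asarray(dst_pts, float)
    keep = np.arange(len(src))
    while True:
        nn = NearestNeighbors(n_neighbors=k + 1).fit(src[keep])
        _, idx = nn.kneighbors(src[keep])                       # idx[:, 0] is the point itself
        errors = np.empty(len(keep))
        for i, neigh in enumerate(idx[:, 1:]):
            A = np.hstack([src[keep][neigh], np.ones((k, 1))])  # local affine fit [x y 1] -> (x', y')
            T, *_ = np.linalg.lstsq(A, dst[keep][neigh], rcond=None)
            pred = np.append(src[keep][i], 1.0) @ T
            errors[i] = np.linalg.norm(pred - dst[keep][i])
        if errors.max() <= tol:
            return keep
        keep = np.delete(keep, np.argsort(errors)[-drop_per_iter:])
```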

  5. Machine processing of remotely sensed data; Proceedings of the Conference, Purdue University, West Lafayette, Ind., October 16-18, 1973

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Topics discussed include the management and processing of earth resources information, special-purpose processors for the machine processing of remotely sensed data, digital image registration by a mathematical programming technique, the use of remote-sensor data in land classification (in particular, the use of ERTS-1 multispectral scanning data), the use of remote-sensor data in geometrical transformations and mapping, earth resource measurement with the aid of ERTS-1 multispectral scanning data, the use of remote-sensor data in the classification of turbidity levels in coastal zones and in the identification of ecological anomalies, the problem of feature selection and the classification of objects in multispectral images, the estimation of proportions of certain categories of objects, and a number of special systems and techniques. Individual items are announced in this issue.

  6. Improved Target Detection in Urban Structures Using Distributed Sensing and Fast Data Acquisition Techniques

    DTIC Science & Technology

    2013-04-01

    Trans. Signal Process., vol. 57, no. 6, pp. 2275-2284, 2009. [83] A. Gurbuz, J. McClellan, and W. Scott, "Compressive sensing for subsurface ... imaging using ground penetrating radar," Signal Process., vol. 89, no. 10, pp. 1959-1972, 2009. [84] A. Gurbuz, J. McClellan, and W. Scott, "A

  7. Resource Management

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Summit Envirosolutions of Minneapolis, Minnesota, used remote sensing images as a source for groundwater resource management. Summit is a full-service environmental consulting service specializing in hydrogeologic, environmental management, engineering and remediation services. CRSP collected, processed and analyzed multispectral/thermal imagery and aerial photography to compare remote sensing and Geographic Information System approaches to more traditional methods of environmental impact assessments and monitoring.

  8. Stomatal conductance, canopy temperature, and leaf area index estimation using remote sensing and OBIA techniques

    Treesearch

    S. Panda; D.M. Amatya; G. Hoogenboom

    2014-01-01

    Remotely sensed images including LANDSAT, SPOT, NAIP orthoimagery, and LiDAR and relevant processing tools can be used to predict plant stomatal conductance (gs), leaf area index (LAI), and canopy temperature, vegetation density, albedo, and soil moisture using vegetation indices like normalized difference vegetation index (NDVI) or soil adjusted...

  9. Automated image processing of LANDSAT 2 digital data for watershed runoff prediction

    NASA Technical Reports Server (NTRS)

    Sasso, R. R.; Jensen, J. R.; Estes, J. E.

    1977-01-01

    The U.S. Soil Conservation Service (SCS) model for watershed runoff prediction uses soil and land cover information as its major drivers. The Kern County Water Agency (KCWA) is implementing the SCS model to predict runoff for 10,400 sq km of mountainous watershed in Kern County, California. The Remote Sensing Unit, University of California, Santa Barbara, was commissioned by KCWA to conduct a 230 sq km feasibility study in the Lake Isabella, California region to evaluate remote sensing methodologies that could ultimately be extrapolated to the entire 10,400 sq km Kern County watershed. Results indicate that digital image processing of Landsat 2 data will provide the usable land cover information required by KCWA as input to the SCS runoff model.

  10. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    NASA Astrophysics Data System (ADS)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has proven to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution, it has become extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from an order of magnitude fewer spectral scanning shots than would be required using conventional systems.

  11. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder.

    PubMed

    August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-23

    Spectroscopic imaging has proven to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution, it has become extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from an order of magnitude fewer spectral scanning shots than would be required using conventional systems.

  12. Cooperative studies between the United States of America and the People's Republic of China on applications of remote sensing to surveying and mapping

    USGS Publications Warehouse

    Lauer, Donald T.; Chu, Liangcai

    1992-01-01

    A Protocol established between the National Bureau of Surveying and Mapping, People's Republic of China (PRC) and the U.S. Geological Survey, United States of America (US), resulted in the exchange of scientific personnel, technical training, and exploration of the processing of remotely sensed data. These activities were directed toward the application of remotely sensed data to surveying and mapping. Data were processed and various products were generated for the Black Hills area in the US and the Ningxiang area of the PRC. The results of these investigations defined applicable processes in the creation of satellite image maps, land use maps, and the use of ancillary data for further map enhancements.

  13. Machine processing of remotely sensed data; Proceedings of the Fifth Annual Symposium, Purdue University, West Lafayette, Ind., June 27-29, 1979

    NASA Technical Reports Server (NTRS)

    Tendam, I. M. (Editor); Morrison, D. B.

    1979-01-01

    Papers are presented on techniques and applications for the machine processing of remotely sensed data. Specific topics include the Landsat-D mission and thematic mapper, data preprocessing to account for atmospheric and solar illumination effects, sampling in crop area estimation, the LACIE program, the assessment of revegetation on surface mine land using color infrared aerial photography, the identification of surface-disturbed features through a nonparametric analysis of Landsat MSS data, the extraction of soil data in vegetated areas, and the transfer of remote sensing computer technology to developing nations. Attention is also given to the classification of multispectral remote sensing data using context, the use of guided clustering techniques for Landsat data analysis in forest land cover mapping, crop classification using an interactive color display, and future trends in image processing software and hardware.

  14. Information surfing with the JHU/APL coherent imager

    NASA Astrophysics Data System (ADS)

    Ratto, Christopher R.; Shipley, Kara R.; Beagley, Nathaniel; Wolfe, Kevin C.

    2015-05-01

    The ability to perform remote forensics in situ is an important application of autonomous undersea vehicles (AUVs). Forensics objectives may include remediation of mines and/or unexploded ordnance, as well as monitoring of seafloor infrastructure. At JHU/APL, digital holography is being explored for the potential application to underwater imaging and integration with an AUV. In previous work, a feature-based approach was developed for processing the holographic imagery and performing object recognition. In this work, the results of the image processing method were incorporated into a Bayesian framework for autonomous path planning referred to as information surfing. The framework was derived assuming that the location of the object of interest is known a priori, but the type of object and its pose are unknown. The path-planning algorithm adaptively modifies the trajectory of the sensing platform based on historical performance of object and pose classification. The algorithm is called information surfing because the direction of motion is governed by the local information gradient. Simulation experiments were carried out using holographic imagery collected from submerged objects. The autonomous sensing algorithm was compared to a deterministic sensing CONOPS, and demonstrated improved accuracy and faster convergence in several cases.

  15. Low-cost digital image processing at the University of Oklahoma

    NASA Technical Reports Server (NTRS)

    Harrington, J. A., Jr.

    1981-01-01

    Computer assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and is dependent upon initial preprocessing of a LANDSAT computer compatible tape using software developed for an IBM 370/158 computer. In-house preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and for image analysis using either of the two approaches to low-cost LANDSAT data processing are described.

  16. Utilizing remote sensing of thematic mapper data to improve our understanding of estuarine processes and their influence on the productivity of estuarine-dependent fisheries

    NASA Technical Reports Server (NTRS)

    Browder, Joan A.; May, L. Nelson, Jr.; Rosenthal, Alan; Baumann, Robert H.; Gosselink, James G.

    1987-01-01

    A stochastic spatial computer model addressing coastal resource problems in Louisiana is being refined and validated using thematic mapper (TM) imagery. The TM images of brackish marsh sites were processed, and data on spatial parameters were tabulated from TM images of the salt marsh sites. The Fisheries Image Processing System (FIPS) was used to analyze the TM scene. Activities were concentrated on improving the structure of the model and on developing a structure and methodology for calibrating the model with spatial-pattern data from the TM imagery.

  17. Integration of geological remote-sensing techniques in subsurface analysis

    USGS Publications Warehouse

    Taranik, James V.; Trautwein, Charles M.

    1976-01-01

    Geological remote sensing is defined as the study of the Earth utilizing electromagnetic radiation which is either reflected or emitted from its surface in wavelengths ranging from 0.3 micrometre to 3 metres. The natural surface of the Earth is composed of a diversified combination of surface cover types, and geologists must understand the characteristics of surface cover types to successfully evaluate remotely-sensed data. In some areas landscape surface cover changes throughout the year, and analysis of imagery acquired at different times of year can yield additional geological information. Integration of different scales of analysis allows landscape features to be effectively interpreted. Interpretation of the static elements displayed on imagery is referred to as an image interpretation. Image interpretation is dependent upon: (1) the geologist's understanding of the fundamental aspects of image formation, and (2) his ability to detect, delineate, and classify image radiometric data; recognize radiometric patterns; and identify landscape surface characteristics as expressed on imagery. A geologic interpretation integrates surface characteristics of the landscape with subsurface geologic relationships. Development of a geologic interpretation from imagery is dependent upon: (1) the geologist's ability to interpret geomorphic processes from their static surface expression as landscape characteristics on imagery, (2) his ability to conceptualize the dynamic processes responsible for the evolution of interpreted geologic relationships (his ability to develop geologic models). The integration of geologic remote-sensing techniques in subsurface analysis is illustrated by development of an exploration model for ground water in the Tucson area of Arizona, and by the development of an exploration model for mineralization in southwest Idaho.

  18. Translation-aware semantic segmentation via conditional least-square generative adversarial networks

    NASA Astrophysics Data System (ADS)

    Zhang, Mi; Hu, Xiangyun; Zhao, Like; Pang, Shiyan; Gong, Jinqi; Luo, Min

    2017-10-01

    Semantic segmentation has recently made rapid progress in the fields of remote sensing and computer vision. However, many leading approaches cannot simultaneously translate label maps to possible source images with a limited number of training images. The core issues are insufficient adversarial information to interpret the inverse process and the lack of a proper objective loss function to overcome the vanishing gradient problem. We propose the use of conditional least squares generative adversarial networks (CLS-GAN) to delineate visual objects and solve these problems. We trained the CLS-GAN network for semantic segmentation to discriminate dense prediction information coming either from training images or from generative networks. We show that the optimal objective function of CLS-GAN is a special class of f-divergence and yields a generator that lies on the decision boundary of the discriminator, which reduces the risk of vanishing gradients. We also demonstrate the effectiveness of the proposed architecture at translating images from label maps in the learning process. Experiments on a limited number of high resolution images, including close-range and remote sensing datasets, indicate that the proposed method leads to improved semantic segmentation accuracy and can simultaneously generate high quality images from label maps.
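
    The PyTorch-style Python sketch below shows a conditional least-squares GAN objective of the kind named above: the discriminator scores (image, label-map) pairs, and both networks are trained with squared-error targets instead of the usual log loss. The network definitions are omitted, and G, D, and the tensor layout are illustrative assumptions rather than the paper's architecture.
```python
# Conditional least-squares GAN losses for label-map-to-image translation (illustrative only).
import torch

def cls_gan_losses(G, D, image, label_map):
    """G: label map -> image; D: concatenated (image, label map) -> realism score."""
    fake_image = G(label_map)
    real_score = D(torch.cat([image, label_map], dim=1))
    fake_score = D(torch.cat([fake_image.detach(), label_map], dim=1))
    # Least-squares discriminator loss: push real scores toward 1 and fake scores toward 0
    d_loss = 0.5 * ((real_score - 1.0) ** 2).mean() + 0.5 * (fake_score ** 2).mean()
    # Least-squares generator loss: push scores of generated images toward 1
    g_loss = 0.5 * ((D(torch.cat([G(label_map), label_map], dim=1)) - 1.0) ** 2).mean()
    return d_loss, g_loss
```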

  19. Contesting nonfiction: Fourth graders making sense of words and images in science information book discussions

    NASA Astrophysics Data System (ADS)

    Belfatti, Monica A.

    Recently developed common core standards echo calls by educators for ensuring that upper elementary students become proficient readers of informational texts. Informational texts have been theorized as causing difficulty for students because they contain linguistic and visual features different from more familiar narrative genres (Lemke, 2004). It has been argued that learning to read informational texts, particularly those with science subject matter, requires making sense of words, images, and the relationships among them (Pappas, 2006). Yet, conspicuously absent in the research are empirical studies documenting ways students make use of textual resources to build textual and conceptual understandings during classroom literacy instruction. This 10-month practitioner research study was designed to investigate the ways a group of ethnically and linguistically diverse fourth graders in one metropolitan school made sense of science information books during dialogically organized literature discussions. In this nontraditional instructional context, I wondered whether and how young students might make use of science informational text features, both words and images, in the midst of collaborative textual and conceptual inquiry. Drawing on methods of constructivist grounded theory and classroom discourse analysis, I analyzed student and teacher talk in 25 discussions of earth and life science books. Digital voice recordings and transcriptions served as the main data sources for this study. I found that, without teacher prompts or mandates to do so, fourth graders raised a wide range of textual and conceptual inquiries about words, images, scientific figures, and phenomena. In addition, my analysis yielded a typology of ways students constructed relationships between words and images within and across page openings of the information books read for their sense-making endeavors. The diversity of constructed word-image relationships aided students in raising, exploring, and contesting textual and conceptual ideas. Moreover, through their joint inquiries, students marshaled and evaluated a rich array of resources. Students' sense-making of information books was not contained by the words and images alone; it involved a situated, complex process of making sense of multiple texts, discourses, and epistemologies. These findings suggest educators, theorists, and policy makers reconsider acontextual, linear, hierarchical models for developing elementary students as sense-makers of nonfiction.

  20. Separating vegetation and soil temperature using airborne multiangular remote sensing image data

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Yan, Chunyan; Xiao, Qing; Yan, Guangjian; Fang, Li

    2012-07-01

    Land surface temperature (LST) is a key parameter in land process research. Many research efforts have been devoted to increasing the accuracy of LST retrieval from remote sensing. However, because the natural land surface is non-isothermal, component temperatures are also required in applications such as evapo-transpiration (ET) modeling. This paper proposes a new algorithm to separately retrieve vegetation temperature and soil background temperature from multiangular thermal infrared (TIR) remote sensing data. The algorithm is based on the localized correlation between the visible/near-infrared (VNIR) bands and the TIR band. This method was tested on the airborne image data acquired during the Watershed Allied Telemetry Experimental Research (WATER) campaign. Preliminary validation indicates that the remote sensing-retrieved results can reflect the spatial and temporal trends of component temperatures. The accuracy is within three degrees, while the difference between vegetation and soil temperature can be as large as twenty degrees.
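
    The way several view angles can constrain two component temperatures is easy to illustrate with a generic linear mixing model. The sketch below is only an assumption-laden stand-in: the per-angle vegetation fractions and observed temperatures are invented, and the authors' actual algorithm relies on the localized VNIR-TIR correlation rather than this direct least-squares unmixing.

    ```python
    # Generic two-component unmixing from multiangular observations (illustration only).
    import numpy as np

    f_veg = np.array([0.30, 0.45, 0.60, 0.75])      # assumed vegetation fraction seen at each view angle
    T_obs = np.array([308.2, 306.0, 303.9, 301.7])  # observed radiometric temperatures (K), synthetic

    A = np.column_stack([f_veg, 1.0 - f_veg])       # linear mixing of component emissions in T^4
    coeffs, *_ = np.linalg.lstsq(A, T_obs ** 4, rcond=None)
    T_veg, T_soil = coeffs ** 0.25
    print(f"vegetation ~ {T_veg:.1f} K, soil ~ {T_soil:.1f} K")
    ```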

  1. Dynamic magnetic resonance imaging method based on golden-ratio cartesian sampling and compressed sensing.

    PubMed

    Li, Shuo; Zhu, Yanchun; Xie, Yaoqin; Gao, Song

    2018-01-01

    Dynamic magnetic resonance imaging (DMRI) is used to noninvasively trace the movements of organs and the process of drug delivery. The results can provide quantitative or semiquantitative pathology-related parameters, thus giving DMRI great potential for clinical applications. However, conventional DMRI techniques suffer from low temporal resolution and long scan time owing to the limitations of the k-space sampling scheme and image reconstruction algorithm. In this paper, we propose a novel DMRI sampling scheme based on a golden-ratio Cartesian trajectory in combination with a compressed sensing reconstruction algorithm. The results of two simulation experiments, designed according to the two major DMRI techniques, showed that the proposed method can improve the temporal resolution and shorten the scan time and provide high-quality reconstructed images.
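
    A golden-ratio-ordered Cartesian schedule can be sketched in a few lines. The snippet below is an illustrative assumption (a simple 1-D phase-encode ordering, not necessarily the exact trajectory of the paper): stepping through k-space by the golden-ratio fraction spreads any contiguous window of acquired lines quasi-uniformly, which is what makes retrospective window selection and compressed sensing reconstruction convenient.

    ```python
    # Golden-ratio-ordered Cartesian phase-encode schedule (illustrative sketch).
    import numpy as np

    def golden_ratio_cartesian_order(n_pe, n_shots):
        gr = (np.sqrt(5.0) - 1.0) / 2.0            # golden-ratio fraction, ~0.618
        pos = (np.arange(n_shots) * gr) % 1.0      # quasi-uniform positions in [0, 1)
        return np.floor(pos * n_pe).astype(int)    # phase-encode line index for each shot

    order = golden_ratio_cartesian_order(n_pe=256, n_shots=16)
    print(order)                                   # any contiguous window covers k-space fairly evenly
    ```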

  2. A framework for farmland parcels extraction based on image classification

    NASA Astrophysics Data System (ADS)

    Liu, Guoying; Ge, Wenying; Song, Xu; Zhao, Hongdan

    2018-03-01

    It is very important for the government to build an accurate national basic cultivated land database. In this work, farmland parcel extraction is one of the basic steps. However, in past years analysts had to spend much time determining whether or not an area is a farmland parcel, since they had to rely on mere visual interpretation of remote sensing images. In order to overcome this problem, in this study a method was proposed to extract farmland parcels by means of image classification. In the proposed method, farmland areas and ridge areas of the classification map are semantically processed independently, and the results are fused together to form the final farmland parcels. Experiments on high spatial resolution remote sensing images have shown the effectiveness of the proposed method.

  3. Optical scanning holography based on compressive sensing using a digital micro-mirror device

    NASA Astrophysics Data System (ADS)

    A-qian, Sun; Ding-fu, Zhou; Sheng, Yuan; You-jun, Hu; Peng, Zhang; Jian-ming, Yue; Xin, Zhou

    2017-02-01

    Optical scanning holography (OSH) is a distinct digital holography technique, which uses a single two-dimensional (2D) scanning process to record the hologram of a three-dimensional (3D) object. Usually, these 2D scanning processes take the form of mechanical scanning, and the quality of the recorded hologram may be degraded by the limited mechanical scanning accuracy and the unavoidable vibration of the stepper motor's start-stop motion. In this paper, we propose a new framework which replaces the 2D mechanical scanning mirrors with a Digital Micro-mirror Device (DMD) to modulate the scanning light field; we call it OSH based on Compressive Sensing (CS) using a digital micro-mirror device (CS-OSH). CS-OSH can reconstruct the hologram of an object through the use of compressive sensing theory, and then restore the image of the object itself. Numerical simulation results confirm that this new type of OSH can produce a reconstructed image with favorable visual quality even at a low sampling rate.
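
    The compressive-sensing recovery step can be illustrated independently of the optics. Below is a small sketch (my own generic example with random ±1 micro-mirror-style patterns as the sensing matrix, not the CS-OSH measurement model) that recovers a sparse signal from far fewer measurements than unknowns using ISTA, the iterative soft-thresholding algorithm.

    ```python
    # Generic sparse recovery with ISTA from binary-pattern measurements (illustration only).
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 8                               # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # DMD-like +/-1 patterns
    y = Phi @ x_true

    L = np.linalg.norm(Phi, 2) ** 2                    # Lipschitz constant of the data-fit gradient
    x, lam = np.zeros(n), 0.02
    for _ in range(400):                               # ISTA: gradient step + soft threshold
        g = x - (Phi.T @ (Phi @ x - y)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```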

  4. Environmental mapping and monitoring of Iceland by remote sensing (EMMIRS)

    NASA Astrophysics Data System (ADS)

    Pedersen, Gro B. M.; Vilmundardóttir, Olga K.; Falco, Nicola; Sigurmundsson, Friðþór S.; Rustowicz, Rose; Belart, Joaquin M.-C.; Gísladóttir, Gudrun; Benediktsson, Jón A.

    2016-04-01

    Iceland is exposed to rapid and dynamic landscape changes caused by natural processes and man-made activities, which impact and challenge the country. Fast and reliable mapping and monitoring techniques are needed on a large spatial scale. However, there is currently a lack of operational advanced information-processing techniques, which end-users need in order to incorporate remote sensing (RS) data from multiple data sources. Hence, the full potential of the recent RS data explosion is not being fully exploited. The project Environmental Mapping and Monitoring of Iceland by Remote Sensing (EMMIRS) bridges the gap between advanced information processing capabilities and end-user mapping of the Icelandic environment. This is done by a multidisciplinary assessment of two selected remote sensing super sites, Hekla and Öræfajökull, which encompass many of the rapid natural and man-made landscape changes that Iceland is exposed to. An open-access benchmark repository of the two remote sensing super sites is under construction, providing high-resolution LIDAR topography and hyperspectral data for land-cover and landform classification. Furthermore, a multi-temporal and multi-source archive stretching back to 1945 allows a decadal evaluation of landscape and ecological changes for the two remote sensing super sites by the development of automated change detection techniques. The development of innovative pattern recognition and machine learning-based approaches to image classification and change detection is one of the main tasks of the EMMIRS project, aiming to extract and compute earth observation variables as automatically as possible. Ground reference data collected through a field campaign will be used to validate the implemented methods, the outputs of which are then combined with geological and vegetation models. Here, preliminary results of an automatic land-cover classification based on hyperspectral image analysis are reported. Furthermore, the EMMIRS project investigates the complex landscape dynamics between geological and ecological processes. This is done through cross-correlation of mapping results and implementation of modelling techniques that simulate geological and ecological processes in order to extrapolate the landscape evolution.

  5. Application of AIS Technology to Forest Mapping

    NASA Technical Reports Server (NTRS)

    Yool, S. R.; Star, J. L.

    1985-01-01

    Concerns about environmental effects of large scale deforestation have prompted efforts to map forests over large areas using various remote sensing data and image processing techniques. Basic research on the spectral characteristics of forest vegetation is required to form a basis for development of new techniques, and for image interpretation. Examination of LANDSAT data and image processing algorithms over a portion of boreal forest has demonstrated the complexity of relations between the various expressions of forest canopies, environmental variability, and the relative capacities of different image processing algorithms to achieve high classification accuracies under these conditions. Airborne Imaging Spectrometer (AIS) data may in part provide the means to interpret the responses of standard data and techniques to the vegetation based on its relatively high spectral resolution.

  6. 1994 ASPRS/ACSM annual convention exposition. Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-01-01

    This report is Volume II of papers presented at the joint 1994 convention of the American Society for Photogrammetry and Remote Sensing and the American Congress on Surveying and Mapping. Topic areas covered include the following: database/GPS issues; survey management issues; surveying computations; surveying education; digital mapping; global change, EOS and NALC issues; GPS issues; Battelle research in remote sensing and in GIS; advanced image processing; GIS issues; surveying and geodesy issues; water resource issues; advanced applications of remote sensing; and Landsat Pathfinder I.

  7. Landsat data availability from the EROS Data Center and status of future plans

    USGS Publications Warehouse

    Pohl, Russell A.; Metz, G.G.

    1977-01-01

    The Department of the Interior's EROS Data Center, managed by the U.S. Geological Survey, was established in 1972 in Sioux Falls, South Dakota, to serve as a principal dissemination facility for Landsat and other remotely sensed data. Through the middle of 1977, the Center has supplied approximately 1.7 million copies of images from the more than 5 million images of the Earth's surface archived at the Center. Landsat accounted for half of these images; approximately 5,800 computer-compatible tapes of Landsat data were also supplied to users. New methods for processing data products to make them more useful are being developed, and new accession aids for determining data availability are being placed in operation. The Center also provides assistance and training to resource specialists and land managers in the use of Landsat and other remotely sensed data. A Data Analysis Laboratory is operated at the Center to provide both digital and analog multispectral/multitemporal image analysis capabilities in support of the training and assistance programs. In addition to conventionally processed data products, radiometrically enhanced Landsat imagery is now available from the Center in limited quantities. In mid-1978, the Center will convert to an all-digital processing system for Landsat data that will provide improved products for user analysis in production quantities. The Department of the Interior and NASA are currently studying concepts that use communication satellites to relay Landsat data between U.S. ground stations, Goddard Space Flight Center, and the EROS Data Center, which would improve the timeliness of data availability. The Data Center also works closely with the remote sensing programs and Landsat data receiving and processing facilities being developed in foreign countries.

  8. Image processing techniques and applications to the Earth Resources Technology Satellite program

    NASA Technical Reports Server (NTRS)

    Polge, R. J.; Bhagavan, B. K.; Callas, L.

    1973-01-01

    The Earth Resources Technology Satellite system is studied, with emphasis on sensors, data processing requirements, and image data compression using the Fast Fourier and Hadamard transforms. The ERTS-A system and the fundamentals of remote sensing are discussed. Three user applications (forestry, crops, and rangelands) are selected and their spectral signatures are described. It is shown that additional sensors are needed for rangeland management. An on-board information processing system is recommended to reduce the amount of data transmitted.
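
    Since this record highlights image data compression with the Fast Fourier and Hadamard transforms, a toy transform-coding example helps fix the idea. The snippet below is a modern illustration of Hadamard-domain truncation under my own assumptions (a random block, a 90% coefficient cut); it is not the original ERTS processing code.

    ```python
    # Toy Hadamard-transform compression of an image block (illustration only).
    import numpy as np
    from scipy.linalg import hadamard

    n = 32
    H = hadamard(n) / np.sqrt(n)                          # orthonormal Hadamard matrix
    block = np.random.default_rng(1).random((n, n))       # stand-in for an n x n image block

    coeffs = H @ block @ H.T                              # 2-D Hadamard transform
    mask = np.abs(coeffs) >= np.quantile(np.abs(coeffs), 0.90)   # keep the largest 10% of coefficients
    approx = H.T @ (coeffs * mask) @ H                    # inverse transform of the truncated spectrum
    print("relative error:", np.linalg.norm(approx - block) / np.linalg.norm(block))
    ```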

  9. Processing techniques for digital sonar images from GLORIA.

    USGS Publications Warehouse

    Chavez, P.S.

    1986-01-01

    Image processing techniques have been developed to handle data from one of the newest members of the remote sensing family of digital imaging systems. This paper discusses software to process data collected by the GLORIA (Geological Long Range Inclined Asdic) sonar imaging system, designed and built by the Institute of Oceanographic Sciences (IOS) in England, to correct for both geometric and radiometric distortions that exist in the original 'raw' data. Preprocessing algorithms that are GLORIA-specific include corrections for slant-range geometry, water column offset, aspect ratio distortion, changes in the ship's velocity, speckle noise, and shading problems caused by the power drop-off which occurs as a function of range.-from Author
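
    Of the GLORIA-specific corrections listed, the slant-range geometry step is the simplest to illustrate. The sketch below assumes a flat seafloor and a single known water depth, which is a simplification of the actual preprocessing: each across-track sample is mapped from slant distance to horizontal (ground) distance.

    ```python
    # Slant-range to ground-range conversion under a flat-seafloor assumption (sketch).
    import numpy as np

    def slant_to_ground(slant_range_m, depth_m):
        s = np.asarray(slant_range_m, dtype=float)
        # samples with slant range shorter than the water depth map to nadir (ground range 0)
        return np.sqrt(np.maximum(s ** 2 - depth_m ** 2, 0.0))

    slant = np.linspace(500.0, 15000.0, 5)        # example slant ranges in metres
    print(slant_to_ground(slant, depth_m=450.0))  # corresponding horizontal (ground) ranges
    ```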

  10. The SENSE-Isomorphism Theoretical Image Voxel Estimation (SENSE-ITIVE) Model for Reconstruction and Observing Statistical Properties of Reconstruction Operators

    PubMed Central

    Bruce, Iain P.; Karaman, M. Muge; Rowe, Daniel B.

    2012-01-01

    The acquisition of sub-sampled data from an array of receiver coils has become a common means of reducing data acquisition time in MRI. Of the various techniques used in parallel MRI, SENSitivity Encoding (SENSE) is one of the most common, making use of a complex-valued weighted least squares estimation to unfold the aliased images. It was recently shown in Bruce et al. [Magn. Reson. Imag. 29(2011):1267–1287] that when the SENSE model is represented in terms of a real-valued isomorphism, it assumes a skew-symmetric covariance between receiver coils, as well as an identity covariance structure between voxels. In this manuscript, we show that not only is the skew-symmetric coil covariance unlike that of real data, but the estimated covariance structure between voxels over a time series of experimental data is not an identity matrix. As such, a new model, entitled SENSE-ITIVE, is described with both revised coil and voxel covariance structures. Both the SENSE and SENSE-ITIVE models are represented in terms of real-valued isomorphisms, allowing a statistical analysis of the reconstructed voxel means, variances, and correlations resulting from the use of different coil and voxel covariance structures in the reconstruction process. It is shown through both theoretical and experimental illustrations that the misspecification of the coil and voxel covariance structures in the SENSE model results in a lower standard deviation in each voxel of the reconstructed images, and thus an artificial increase in SNR, compared to the standard deviation and SNR of the SENSE-ITIVE model, where both the coil and voxel covariances are appropriately accounted for. It is also shown that there are differences in the correlations induced by the reconstruction operations of both models, and consequently there are differences in the correlations estimated throughout the course of reconstructed time series. These differences in correlations could result in meaningful differences in interpretation of results. PMID:22617147
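
    The complex-valued weighted least squares unfolding at the heart of SENSE can be written in a few lines. The numbers below are synthetic (random sensitivities, an identity noise covariance), so this is only a sketch of the standard estimator rho = (S^H Psi^-1 S)^-1 S^H Psi^-1 a, not the SENSE-ITIVE model itself.

    ```python
    # Standard SENSE weighted-least-squares unfolding for reduction factor R (sketch).
    import numpy as np

    rng = np.random.default_rng(2)
    n_coils, R = 4, 2                                   # 4 coils, reduction factor 2
    S = rng.normal(size=(n_coils, R)) + 1j * rng.normal(size=(n_coils, R))   # sensitivities of the R folded voxels
    Psi = np.eye(n_coils, dtype=complex)                # assumed coil noise covariance (identity here)
    rho_true = np.array([1.5 + 0.2j, 0.7 - 0.1j])       # true voxel values
    a = S @ rho_true + 0.01 * (rng.normal(size=n_coils) + 1j * rng.normal(size=n_coils))  # aliased coil data

    W = np.linalg.inv(Psi)
    rho_hat = np.linalg.solve(S.conj().T @ W @ S, S.conj().T @ W @ a)   # (S^H Psi^-1 S)^-1 S^H Psi^-1 a
    print(np.round(rho_hat, 3))
    ```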

  11. BOREAS RSS-7 Landsat TM LAI Images of the SSA and NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Chen, Jing; Cihlar, Josef

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study Remote Sensing Science (BOREAS RSS-7) team used Landsat Thematic Mapper (TM) images processed at CCRS to produce images of Leaf Area Index (LAI) for the BOREAS study areas. Two images acquired on 06-Jun and 09-Aug-1991 were used for the SSA, and one image acquired on 09-Jun-1994 was used for the NSA. The LAI images are based on ground measurements and Landsat TM Reduced Simple Ratio (RSR) images. The data are stored in binary image-format files.
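
    The Reduced Simple Ratio mentioned here is commonly defined as RSR = (NIR/Red) x (SWIRmax - SWIR)/(SWIRmax - SWIRmin). The helper below sketches that commonly used formula under assumed reflectance values; the LAI line is a hypothetical placeholder scaling, not the BOREAS ground-calibrated regression.

    ```python
    # Reduced Simple Ratio as commonly defined; LAI scaling is a placeholder only.
    def reduced_simple_ratio(red, nir, swir, swir_min, swir_max):
        # RSR = (NIR / Red) * (SWIRmax - SWIR) / (SWIRmax - SWIRmin)
        return (nir / red) * (swir_max - swir) / (swir_max - swir_min)

    red, nir, swir = 0.04, 0.35, 0.12           # example TM band reflectances (hypothetical)
    rsr = reduced_simple_ratio(red, nir, swir, swir_min=0.05, swir_max=0.30)
    lai = 0.6 * rsr                             # placeholder linear scaling, NOT the BOREAS regression
    print(round(rsr, 2), round(lai, 2))
    ```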

  12. Present status and trends of image fusion

    NASA Astrophysics Data System (ADS)

    Xiang, Dachao; Fu, Sheng; Cai, Yiheng

    2009-10-01

    Image fusion extracts information from multiple images that is more accurate and reliable than information from any single image, since the various images contain different aspects of the measured scene and comprehensive information can be obtained by integrating them. Image fusion is a main branch of the application of data fusion technology. At present, it is widely used in computer vision, remote sensing, robot vision, medical image processing, and military fields. This paper mainly presents the contents and research methods of image fusion, surveys the state of the art domestically and abroad, and analyzes the development trend.

  13. Detecting Edges in Images by Use of Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2003-01-01

    A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords an ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations. It appears that, to perceive unfamiliar objects or to perceive familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.

  14. End-to-end remote sensing at the Science and Technology Laboratory of John C. Stennis Space Center

    NASA Technical Reports Server (NTRS)

    Kelly, Patrick; Rickman, Douglas; Smith, Eric

    1991-01-01

    The Science and Technology Laboratory (STL) of Stennis Space Center (SSC) has been developing expertise in remote sensing for more than a decade. Capabilities at SSC/STL include all major areas of the field. STL includes the Sensor Development Laboratory (SDL), an Image Processing Center, a Learjet 23 flight platform, and on-staff scientific investigators.

  15. Boundary Layer Remote Sensing with Combined Active and Passive Techniques: GPS Radio Occultation and High-Resolution Stereo Imaging (WindCam) Small Satellite Concept

    NASA Technical Reports Server (NTRS)

    Mannucci, A.J.; Wu, D.L.; Teixeira, J.; Ao, C.O.; Xie, F.; Diner, D.J.; Wood, R.; Turk, Joe

    2012-01-01

    Objective: significant progress in understanding low-cloud boundary layer processes, the single largest uncertainty in climate projections. Radio occultation has unique features suited to boundary layer remote sensing: (1) cloud penetration; (2) very high vertical resolution (approximately 50 m-100 m); (3) sensitivity to thermodynamic variables.

  16. Side information in coded aperture compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.

    2017-02-01

    Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on FPA detectors, SLMs, micromirror devices (DMDs), and dispersive elements. The use of DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD allows not only collecting a sufficient number of measurements for spectrally rich or spatially detailed scenes, but also designing the spatial structure of the coded apertures to maximize the information content of the compressive measurements. Although sparsity is the only signal characteristic usually assumed for reconstruction in compressive sensing, other forms of prior information, such as side information, have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. The use of RGB images as side information of the compressive sensing architecture has two main advantages: the RGB is not only used to improve the reconstruction quality but also to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, and thus the coded aperture structure exploits key features such as scene edges. Real reconstructions of noisy compressed measurements demonstrate the benefit of the designed coded apertures in addition to the improvement in the reconstruction quality obtained by the use of side information.

  17. Survey of the Pompeii (IT) archaeological Regions with the multispectral thermal airborne TASI data

    NASA Astrophysics Data System (ADS)

    Pignatti, Stefano; Palombo, Angelo; Pascucci, Simone; Santini, Federico; Laneve, Giovanni

    2017-04-01

    Thermal remote sensing, as a tool for analyzing environmental variables with regard to archaeological prospecting, has been growing mainly because airborne surveys make it possible to provide archaeologists with images at metre scale. The importance of this study lies in the evaluation of TIR imagery in view of the use of unmanned aerial vehicle (UAV) imagery for the conservation of cultural heritage, which should provide very high spatial resolution thermal imaging at low cost. The research aims at analyzing the potential of thermal imaging [1] on some selected areas of the Pompeii archaeological park. To this purpose, on December 7th, 2015, a TASI-600 [2], an airborne multispectral thermal imager (32 channels from 8 to 11.5 μm with a spectral resolution of 100 nm and a spatial resolution of 1 m/pixel), surveyed the archaeological Pompeii Regions. Thermal images have been corrected and calibrated in order to obtain land surface temperature (LST) and emissivity data sets to be applied in the further analysis. The thermal data pre-processing has included: i) radiometric calibration of the raw data and correction of blinking pixels; ii) atmospheric correction performed by using MODTRAN; iii) Temperature Emissivity Separation (TES) to obtain emissivity and LST maps [3]. Our objective is to show the major results of the IR survey and the pre-processing of the multispectral thermal imagery. LST and emissivity maps have been analysed to describe the thermal/emissivity pattern of the different Regions as a function of the presence, in the near subsurface, of archaeological features. The obtained preliminary results are encouraging, even though the vegetation cover over the different Pompeii Regions is one of the major issues affecting the usefulness of the TIR sensing. Of course, LST anomalies and emissivity maps need to be further integrated with classical geophysical investigation techniques for a complete validation and to better evaluate the usefulness of the IR sensing. References: 1. Pascucci S., Cavalli R. M., Palombo A. & Pignatti S. (2010), Suitability of CASI and ATM airborne remote sensing data for archaeological subsurface structure detection under different land cover: the Arpi case study (Italy). Journal of Geophysics and Engineering, Vol. 7 (2), pp. 183-189. 2. Pignatti, S.; Lapenna, V.; Palombo, A.; Pascucci, S.; Pergola, N.; Cuomo, V. 2011. An advanced tool of the CNR IMAA EO facilities: Overview of the TASI-600 hyperspectral thermal spectrometer. 3rd Hyperspectral Image and Signal Processing: Evolution in Remote Sensing Conference (WHISPERS), 2011; DOI 10.1109/WHISPERS.2011.6080890. 3. Z.L. Li, F. Becker, M.P. Stoll and Z. Wan. 1999. Evaluation of six methods for extracting relative emissivity spectra from thermal infrared images. Remote Sensing of Environment, vol. 69, 197-214.

  18. Geologic analyses of LANDSAT-1 multispectral imagery of a possible power plant site employing digital and analog image processing. [in Pennsylvania

    NASA Technical Reports Server (NTRS)

    Lovegreen, J. R.; Prosser, W. J.; Millet, R. A.

    1975-01-01

    A site in the Great Valley subsection of the Valley and Ridge physiographic province in eastern Pennsylvania was studied to evaluate the use of digital and analog image processing for geologic investigations. Ground truth at the site was obtained by a field mapping program, a subsurface exploration investigation, and a review of available published and unpublished literature. Remote sensing data were analyzed using standard manual techniques. LANDSAT-1 imagery was analyzed using digital image processing employing the multispectral Image 100 system and using analog color processing employing the VP-8 image analyzer. This study deals primarily with linears identified employing image processing and the correlation of these linears with known structural features and with linears identified by manual interpretation; and with the identification of rock outcrops in areas of extensive vegetative cover employing image processing. The results of this study indicate that image processing can be a cost-effective tool for evaluating geologic and linear features for regional studies encompassing large areas, such as for power plant siting. Digital image processing can be an effective tool for identifying rock outcrops in areas of heavy vegetative cover.

  19. An improved robust blind motion de-blurring algorithm for remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yulong; Liu, Jin; Liang, Yonghui

    2016-10-01

    Shift-invariant motion blur can be modeled as a convolution of the true latent image with the blur kernel plus additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm which proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adopt a multi-scale scheme to make sure that the edge map can be constructed accurately; second, an effective salient edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on a TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and the results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
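
    The convolution-plus-noise forward model in the first sentence also fixes what the non-blind deconvolution step must invert. As a simplified stand-in for the paper's adaptive TV-l2 deconvolution (an assumption of mine, not the authors' method), the sketch below applies a closed-form Tikhonov/Wiener-style deconvolution in the Fourier domain once a kernel is known.

    ```python
    # Simple L2-regularized (Tikhonov/Wiener-style) non-blind deconvolution (sketch).
    import numpy as np

    def l2_deconvolve(blurred, kernel, lam=1e-2):
        H = np.fft.fft2(kernel, s=blurred.shape)          # kernel spectrum on the image grid
        B = np.fft.fft2(blurred)
        X = np.conj(H) * B / (np.abs(H) ** 2 + lam)       # closed-form regularized inverse filter
        return np.real(np.fft.ifft2(X))

    rng = np.random.default_rng(3)
    img = rng.random((64, 64))
    kernel = np.zeros((64, 64)); kernel[0, :9] = 1.0 / 9.0          # horizontal motion blur
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
    restored = l2_deconvolve(blurred, kernel)
    print("relative residual:", np.linalg.norm(restored - img) / np.linalg.norm(img))
    ```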

  20. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSI) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain implies increasing image quality and improving the performance of all subsequent tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In the context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, conserve relevant information and saliency, and be fast given the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non-Local Bayes (NLB) that we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its favorable trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have a similar noise distribution and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operations and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.

  1. Building Shadow Detection from Ghost Imagery

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Sha, J.; Yue, T.; Wang, Q.; Liu, X.; Huang, S.; Pan, Q.; Wei, J.

    2018-05-01

    Shadow is one of the basic features of remote sensing images; it hides or distorts much of the information about the objects it covers, and shadow removal has always been a difficult problem in remote sensing image processing. This paper mainly analyzes the characteristics and properties of shadows in the ghost image (traditional orthorectified image). The DBM and the interior and exterior orientation elements of the image are used to calculate the solar zenith angle. The extent of building shadows, determined from the solar zenith angle, is then combined with a region-growing method to detect building shadow areas. This method lays a solid foundation for the later repair of shadows in the ghost image. It will greatly improve the accuracy of building shadow detection and make it more conducive to solving the shadow problem in urban large-scale aerial images.
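
    The region-growing step can be illustrated with a generic intensity-based version. The sketch below is an illustration only: 4-connected growth from a seed pixel with a fixed tolerance, without the solar-geometry constraint described in the record.

    ```python
    # Generic intensity-based region growing from a seed (illustration only).
    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol=0.05):
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        queue, ref = deque([seed]), img[seed]
        while queue:
            r, c = queue.popleft()
            if 0 <= r < h and 0 <= c < w and not mask[r, c] and abs(img[r, c] - ref) <= tol:
                mask[r, c] = True                                    # accept pixel and grow to 4-neighbours
                queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
        return mask

    scene = np.ones((50, 50)); scene[10:30, 15:35] = 0.2             # dark "shadow" block in a bright scene
    print(region_grow(scene, seed=(20, 20)).sum(), "shadow pixels grown")
    ```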

  2. A neuromorphic approach to satellite image understanding

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Perakakis, Manolis

    2014-05-01

    Remote sensing satellite imagery provides high altitude, top viewing aspects of large geographic regions, and as such the depicted features are not always easily recognizable. Nevertheless, geoscientists familiar with remote sensing data gradually gain experience and enhance their satellite image interpretation skills. The aim of this study is to devise a novel computational neuro-centered classification approach for feature extraction and image understanding. Object recognition through image processing practices is related to a series of known image/feature based attributes including size, shape, association, texture, etc. The objective of the study is to weight these attribute values towards the enhancement of feature recognition. The key cognitive experimentation concern is to define the point at which a user recognizes a feature as it varies in terms of the above mentioned attributes, and to relate it to the corresponding attribute values. Towards this end, we have set up an experimentation methodology that utilizes cognitive data from brain signals (EEG) and eye gaze data (eye tracking) of subjects watching satellite images of varying attributes; this allows the collection of rich real-time data that will be used for designing the image classifier. Since the data are already labeled by users (using an input device), a first step is to compare the performance of various machine-learning algorithms on the collected data. In the long run, the aim of this work would be to investigate the automatic classification of unlabeled images (unsupervised learning) based purely on image attributes. The outcome of this innovative process is twofold. First, in an abundance of remote sensing image datasets we may define the essential image specifications in order to collect the appropriate data for each application and improve processing and resource efficiency. For example, for a fault extraction application at a given scale, a medium resolution 4-band image may be more effective than costly, multispectral, very high resolution imagery. Second, we attempt to compare experienced and non-experienced users' understanding in order to indirectly assess the possible limits of purely computational systems; in other words, to obtain the conceptual limits of computation versus human cognition concerning feature recognition from satellite imagery. Preliminary results of this pilot study show relations between the collected data and differentiation of the image attributes, which indicates that our methodology can lead to important results.

  3. Landslide Life-Cycle Monitoring and Failure Prediction using Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Bouali, E. H. Y.; Oommen, T.; Escobar-Wolf, R. P.

    2017-12-01

    The consequences of slope instability are severe across the world: the US Geological Survey estimates that, each year, the United States spends $3.5B to repair damages caused by landslides, 25-50 deaths occur, real estate values in affected areas are reduced, productivity decreases, and natural environments are destroyed. A 2012 study by D.N. Petley found that loss of life is typically underestimated and that, between 2004 and 2010, 2,620 fatal landslides caused 32,322 deaths around the world. These statistics have led research into the study of landslide monitoring and forecasting. More specifically, this presentation focuses on assessing the potential for using satellite-based optical and radar imagery toward overall landslide life-cycle monitoring and prediction. Radar images from multiple satellites (ERS-1, ERS-2, ENVISAT, and COSMO-SkyMed) are processed using the Persistent Scatterer Interferometry (PSI) technique. Optical images, from the Worldview-2 satellite, are orthorectified and processed using the Co-registration of Optically Sensed Images and Correlation (COSI-Corr) algorithm. Both approaches process stacks of their respective images and yield ground displacement rates. Ground displacement information is used to generate `inverse-velocity vs time' plots, a proxy relationship that is used to estimate landslide occurrence (slope failure) and is derived from a relationship, quantified by T. Fukuzono in 1985 and B. Voight in 1988, between a material's time of failure and the strain rate applied to that material. Successful laboratory tests have demonstrated the usefulness of `inverse-velocity vs time' plots. This presentation will investigate the applicability of this approach with remote sensing on natural landslides in the western United States.
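
    The `inverse-velocity vs time' proxy can be made concrete with a few lines of fitting. The displacement rates below are synthetic (not the PSI or COSI-Corr results); the sketch simply fits a line to 1/velocity and extrapolates it to zero to estimate the failure time, in the spirit of the Fukuzono/Voight relationship.

    ```python
    # Inverse-velocity extrapolation to a predicted failure time (synthetic data).
    import numpy as np

    t = np.arange(0.0, 30.0)                   # observation times (days)
    v = 2.0 / (35.0 - t)                       # synthetic accelerating displacement rate (mm/day)
    inv_v = 1.0 / v                            # inverse velocity decreases roughly linearly toward failure

    slope, intercept = np.polyfit(t, inv_v, 1) # linear fit of 1/v against time
    t_failure = -intercept / slope             # extrapolated time where 1/v reaches zero
    print(f"predicted failure at day {t_failure:.1f}")
    ```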

  4. AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin

    2018-01-01

    In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.

  5. Sugarcane Crop Extraction Using Object-Oriented Method from ZY-3 High Resolution Satellite TLC Image

    NASA Astrophysics Data System (ADS)

    Luo, H.; Ling, Z. Y.; Shao, G. Z.; Huang, Y.; He, Y. Q.; Ning, W. Y.; Zhong, Z.

    2018-04-01

    Sugarcane is one of the most important crops in Guangxi, China. With the development of satellite remote sensing technology, more remotely sensed images can be used for monitoring the sugarcane crop. With its Three Line Camera (TLC) images, wide coverage, and stereoscopic mapping ability, the Chinese ZY-3 high resolution stereoscopic mapping satellite is useful for attaining more information for sugarcane crop monitoring, such as spectral, shape, and texture differences between the forward, nadir, and backward images. The digital surface model (DSM) derived from ZY-3 TLC images is also able to provide height information for the sugarcane crop. In this study, we attempt to extract sugarcane crop from ZY-3 images acquired in the harvest period. Ortho-rectified TLC images, the fused image, and the DSM are processed for our extraction. An object-oriented method is then used for image segmentation, sample collection, and feature extraction. The results of our study show that, with the help of ZY-3 TLC images, the extent of the sugarcane crop at harvest time can be automatically extracted, with an overall accuracy of about 85.3%.

  6. NDVI and Panchromatic Image Correlation Using Texture Analysis

    DTIC Science & Technology

    2010-03-01

    Figure 5. Spectral reflectance of vegetation and soil from 0.4 to 1.1 µm (from Perry & Lautenschlager); this spectral separation should help the classification methods classify kelp. Cited: Image processing software for imaging spectrometry analysis. Remote Sensing of Environment, 24: 201–210; Perry, C., & Lautenschlager, L. F. (1988).

  7. A fuzzy structural matching scheme for space robotics vision

    NASA Technical Reports Server (NTRS)

    Naka, Masao; Yamamoto, Hiromichi; Homma, Khozo; Iwata, Yoshitaka

    1994-01-01

    In this paper, we propose a new fuzzy structural matching scheme for space stereo vision which is based on the fuzzy properties of regions of images and effectively reduces the computational burden in the following low level matching process. Three dimensional distance images of a space truss structural model are estimated using this scheme from stereo images sensed by Charge Coupled Device (CCD) TV cameras.

  8. FEX: A Knowledge-Based System For Planimetric Feature Extraction

    NASA Astrophysics Data System (ADS)

    Zelek, John S.

    1988-10-01

    Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.

  9. Relative radiometric calibration for multispectral remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Hsuan

    2006-10-01

    Our environment is changed continuously by natural causes and human activities. In order to identify what has changed during a certain time period, enormous resources are needed to collect all kinds of data and analyze them. With remote sensing images, change detection has become an efficient and inexpensive technique. It has wide applications including disaster management, agriculture analysis, environmental monitoring, and military reconnaissance. To detect the changes between two remote sensing images collected at different times, radiometric calibration is one of the most important processes. Under different weather and atmospheric conditions, even the same material might yield distinct radiance spectra in the two images. In this case, it will be misclassified as change and the false alarm rate will increase. To achieve absolute calibration, i.e., to convert radiance to reflectance spectra, information about the atmospheric conditions or ground reference materials with known reflectance spectra is needed but rarely available. In this paper, we present relative radiometric calibration methods which transform the image pair to a similar atmospheric effect instead of removing it as in absolute calibration, so that information on the atmospheric conditions is not required. A SPOT image pair is used in experiments to demonstrate the performance.
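
    A common way to realize such a relative calibration is a per-band linear mapping estimated from overlapping (ideally pseudo-invariant) pixels. The sketch below is a generic least-squares version of that idea with synthetic digital numbers; it is not the specific method evaluated on the SPOT pair.

    ```python
    # Relative radiometric normalization by per-band linear regression (generic sketch).
    import numpy as np

    rng = np.random.default_rng(4)
    ref = rng.random(10_000) * 200.0 + 20.0                    # reference-date band (digital numbers)
    subj = 0.8 * ref + 15.0 + rng.normal(0.0, 2.0, ref.size)   # subject date with a gain/offset difference

    gain, offset = np.polyfit(subj, ref, 1)                    # least-squares gain and offset
    subj_norm = gain * subj + offset                           # subject image mapped onto the reference scale
    print(f"gain={gain:.2f}, offset={offset:.2f}, residual std={np.std(subj_norm - ref):.2f}")
    ```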

  10. Automated Formosat Image Processing System for Rapid Response to International Disasters

    NASA Astrophysics Data System (ADS)

    Cheng, M. C.; Chou, S. C.; Chen, Y. C.; Chen, B.; Liu, C.; Yu, S. J.

    2016-06-01

    FORMOSAT-2, Taiwan's first remote sensing satellite, was successfully launched in May 2004 into a Sun-synchronous orbit at 891 kilometers altitude. With its daily revisit feature, the 2-m panchromatic and 8-m multi-spectral resolution images captured have been used for research and operations in various societal benefit areas. This paper details the orchestration of various tasks conducted by different institutions in Taiwan in the efforts responding to international disasters. The institutions involved include Taiwan's space agency, the National Space Organization (NSPO), the Center for Satellite Remote Sensing Research of National Central University, the GIS Center of Feng-Chia University, and the National Center for High-performance Computing. Since each institution has its own mandate, the coordinated tasks range from receiving emergency observation requests, scheduling and tasking of satellite operations, and downlink to ground stations, through image processing including data injection and ortho-rectification, to delivery of image products. With the lessons learned from working with international partners, the FORMOSAT Image Processing System has been extensively automated and streamlined with the goal of shortening the time between request and delivery in an efficient manner. The integrated team has developed an Application Interface to its system platform that provides functions for searching the archive catalogue, requesting data services, mission planning, inquiring about service status, and image download. This automated system enables timely image acquisition and substantially increases the value of the data products. An example outcome of these efforts, in the recent response supporting Sentinel Asia during the Nepal earthquake, is demonstrated herein.

  11. Investigation related to multispectral imaging systems

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Erickson, J. D.

    1974-01-01

    A summary of technical progress made during a five year research program directed toward the development of operational information systems based on multispectral sensing and the use of these systems in earth-resource survey applications is presented. Efforts were undertaken during this program to: (1) improve the basic understanding of the many facets of multispectral remote sensing, (2) develop methods for improving the accuracy of information generated by remote sensing systems, (3) improve the efficiency of data processing and information extraction techniques to enhance the cost-effectiveness of remote sensing systems, (4) investigate additional problems having potential remote sensing solutions, and (5) apply the existing and developing technology for specific users and document and transfer that technology to the remote sensing community.

  12. Gender differences in autobiographical memory for everyday events: retrieval elicited by SenseCam images versus verbal cues.

    PubMed

    St Jacques, Peggy L; Conway, Martin A; Cabeza, Roberto

    2011-10-01

    Gender differences are frequently observed in autobiographical memory (AM). However, few studies have investigated the neural basis of potential gender differences in AM. In the present functional MRI (fMRI) study we investigated gender differences in AMs elicited using dynamic visual images vs verbal cues. We used a novel technology called a SenseCam, a wearable device that automatically takes thousands of photographs. SenseCam differs considerably from other prospective methods of generating retrieval cues because it does not disrupt the ongoing experience. This allowed us to control for potential gender differences in emotional processing and elaborative rehearsal, while manipulating how the AMs were elicited. We predicted that males would retrieve more richly experienced AMs elicited by the SenseCam images vs the verbal cues, whereas females would show equal sensitivity to both cues. The behavioural results indicated that there were no gender differences in subjective ratings of reliving, importance, vividness, emotion, and uniqueness, suggesting that gender differences in brain activity were not due to differences in these measures of phenomenological experience. Consistent with our predictions, the fMRI results revealed that males showed a greater difference in functional activity associated with the rich experience of SenseCam vs verbal cues, than did females.

  13. Electromagnetic Imaging Methods for Nondestructive Evaluation Applications

    PubMed Central

    Deng, Yiming; Liu, Xin

    2011-01-01

    Electromagnetic nondestructive tests are important and widely used within the field of nondestructive evaluation (NDE). Recent advances in sensing technology, in hardware and software dedicated to imaging and image processing, and in materials science have greatly expanded the application fields, made system designs more sophisticated, and made the potential of electromagnetic NDE imaging seemingly unlimited. This review provides a comprehensive summary of research work on electromagnetic imaging methods for NDE applications, followed by a summary and discussion of future directions. PMID:22247693

  14. TEM Video Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the framerate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental conditions. Figure 1 highlights the results from the Pd nanoparticle experiment. On the left, 10 frames are reconstructed from a single coded frame; the original frames are shown for comparison. On the right, a selection of three frames is shown from reconstructions at compression levels 10, 20, 30. The reconstructions, which are not post-processed, are true to the original and degrade in a straightforward manner. The final choice of compression level will obviously depend on both the temporal and spatial resolution required for a specific imaging task, but the results indicate that an increase in speed of better than an order of magnitude should be possible for all experiments. References: [1] P Llull, X Liao, X Yuan et al. Optics Express 21(9), (2013), p. 10526. [2] J Yang, X Yuan, X Liao et al. Image Processing, IEEE Trans 23(11), (2014), p. 4863. [3] X Yuan, J Yang, P Llull et al. In ICIP 2013 (IEEE), p. 14. [4] X Yuan, P Llull, X Liao et al. In CVPR 2014, p. 3318. [5] EJ Candès, J Romberg and T Tao. Information Theory, IEEE Trans 52(2), (2006), p. 489. [6] P Binev, W Dahmen, R DeVore et al. In Modeling Nanoscale Imaging in Electron Microscopy, eds. T Vogt, W Dahmen and P Binev (Springer US), Nanostructure Science and Technology (2012), p. 73. [7] A Stevens, H Yang, L Carin et al. Microscopy 63(1), (2014), pp. 41.
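
    The coded-aperture idea in this record reduces to a simple forward model: each sub-frame is modulated by its own binary mask and the modulated sub-frames are integrated on the detector. The sketch below is a toy version with random arrays standing in for TEM frames and masks; the compressive-sensing inversion that would recover the sub-frames is not shown.

    ```python
    # Toy forward model of coded-aperture temporal compressive sensing (illustration only).
    import numpy as np

    rng = np.random.default_rng(5)
    T, H, W = 10, 64, 64                          # sub-frames per coded exposure, frame size
    frames = rng.random((T, H, W))                # dynamic scene (random stand-in for TEM video)
    masks = rng.integers(0, 2, size=(T, H, W))    # per-sub-frame binary coded-aperture patterns

    coded = (masks * frames).sum(axis=0)          # one detector readout integrates all T coded sub-frames
    print(coded.shape, f"-> temporal compression factor {T}x")
    ```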

  15. Spatial dependence of predictions from image segmentation: a method to determine appropriate scales for producing land-management information

    USDA-ARS's Scientific Manuscript database

    A challenge in ecological studies is defining scales of observation that correspond to relevant ecological scales for organisms or processes. Image segmentation has been proposed as an alternative to pixel-based methods for scaling remotely-sensed data into ecologically-meaningful units. However, to...

  16. Investigation into image quality difference between total variation and nonlinear sparsifying transform based compressed sensing

    NASA Astrophysics Data System (ADS)

    Dong, Jian; Kudo, Hiroyuki

    2017-03-01

    Compressed sensing (CS) is attracting growing attention in sparse-view computed tomography (CT) image reconstruction. The most standard CS approach is total variation (TV) minimization. However, images reconstructed by TV usually suffer from distortions, especially in reconstruction of practical CT images, in the form of patchy artifacts, serrated edges, and loss of image textures. Most existing CS approaches, including TV, achieve image quality improvement by applying linear transforms to the object image, but linear transforms usually fail to take discontinuities such as edges and image textures into account, which is considered to be the key reason for the image distortions. Discussion of nonlinear filter based image processing has a long history, and it is clear that nonlinear filters yield better results than linear filters in image processing tasks such as denoising. The median root prior was first utilized by Alenius as a nonlinear transform in CT image reconstruction, with significant gains obtained. Subsequently, Zhang developed the application of nonlocal means-based CS. It is gradually becoming clear that nonlinear transform based CS is superior to linear transform based CS in improving image quality; however, to the best of our knowledge this has not been clearly concluded in any previous paper. In this work, we investigated the image quality differences between conventional TV minimization and nonlinear sparsifying transform based CS, as well as image quality differences among different nonlinear sparsifying transform based CS methods in sparse-view CT image reconstruction. Additionally, we accelerated the implementation of the nonlinear sparsifying transform based CS algorithm.
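
    As a point of reference for the linear-transform baseline discussed here, the sketch below implements a toy smoothed-TV denoiser by plain gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps). It is an illustration of the TV prior only, under my own parameter choices, not the sparse-view CT reconstruction compared in the paper.

    ```python
    # Toy smoothed total-variation denoising via gradient descent (illustration only).
    import numpy as np

    def tv_denoise(y, lam=0.1, eps=1e-3, iters=200, step=0.2):
        x = y.copy()
        for _ in range(iters):
            dx = np.diff(x, axis=1, append=x[:, -1:])      # forward differences (horizontal)
            dy = np.diff(x, axis=0, append=x[-1:, :])      # forward differences (vertical)
            mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
            px, py = dx / mag, dy / mag
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            x -= step * ((x - y) - lam * div)              # gradient step on the smoothed TV objective
        return x

    rng = np.random.default_rng(6)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = clean + 0.2 * rng.normal(size=clean.shape)
    denoised = tv_denoise(noisy)
    print("error std before/after:", np.std(noisy - clean).round(3), np.std(denoised - clean).round(3))
    ```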

  17. Using fuzzy fractal features of digital images for material surface analysis

    NASA Astrophysics Data System (ADS)

    Privezentsev, D. G.; Zhiznyakov, A. L.; Astafiev, A. V.; Pugin, E. V.

    2018-01-01

    Edge detection is an important task in image processing. There are many approaches in this area: the Sobel and Canny operators, among others. One of the promising techniques in image processing is the use of fuzzy logic and fuzzy set theory, which allow processing quality to be increased by representing information in fuzzy form. Most existing fuzzy image processing methods switch to fuzzy sets at very late stages, which leads to some loss of useful information. In this paper, a novel method of edge detection based on fuzzy image representation and fuzzy pixels is proposed. With this approach, we convert the image to fuzzy form in the first step. Different approaches to this conversion are described. Several membership functions for fuzzy pixel description, and requirements for their form and shape, are given. A novel approach to edge detection based on the Sobel operator and fuzzy image representation is proposed. Experimental testing of the developed method was performed on remote sensing images.
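
    One simple way to combine a Sobel operator with a fuzzy representation (a generic illustration, not the fuzzy-pixel formalism of the paper) is to map the normalized gradient magnitude through a sigmoid membership function, so each pixel receives a degree of "edgeness" rather than a hard 0/1 decision.

    ```python
    # Fuzzy edge membership from Sobel gradients (generic illustration).
    import numpy as np
    from scipy.ndimage import sobel

    def fuzzy_edge_membership(img, midpoint=0.2, steepness=25.0):
        gx, gy = sobel(img, axis=1), sobel(img, axis=0)
        grad = np.hypot(gx, gy)
        grad = grad / (grad.max() + 1e-12)                 # normalize gradient magnitude to [0, 1]
        return 1.0 / (1.0 + np.exp(-steepness * (grad - midpoint)))   # sigmoid membership in "edge"

    img = np.zeros((64, 64)); img[:, 32:] = 1.0            # vertical step edge
    mu = fuzzy_edge_membership(img)
    print(round(float(mu[:, 30:34].max()), 3), round(float(mu[:, :20].max()), 3))
    ```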

  18. Low dose reconstruction algorithm for differential phase contrast imaging.

    PubMed

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method for reconstructing the distribution of the refractive index, rather than the attenuation coefficient, in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT which benefits from compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem of DPCI can be transformed into a solved problem in transmission CT imaging. Our algorithm has the potential to reconstruct the refractive index distribution of the sample from highly undersampled projection data. Thus it can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.

  19. Comparison between various patch wise strategies for reconstruction of ultra-spectral cubes captured with a compressive sensing system

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian

    2016-05-01

    Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single liquid crystal (LC) cell and a parallel sensor array, where the liquid crystal cell performs spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes from only about 10% of the samples required by a conventional system. Despite the compression, the technique is computationally very demanding, because reconstruction of ultra-spectral images requires processing huge data cubes of gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstructions on patches. In this work, we consider processing on various patch shapes. We present an experimental comparison between patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out spatially pixel-wise; two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three-dimensional (3D).

  20. Visualizing Airborne and Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Bierwirth, Victoria A.

    2011-01-01

    Remote sensing provides information about Earth that helps us better understand its processes and monitor its resources. The Cloud Absorption Radiometer (CAR) is one remote sensing instrument dedicated to collecting data on anthropogenic influences on Earth and to helping scientists understand land-surface and atmospheric interactions. Landsat is a satellite program dedicated to collecting repetitive coverage of the continental Earth surfaces in seven regions of the electromagnetic spectrum. Combining these airborne and satellite remote sensing instruments provides a detailed and comprehensive data collection that can supply valuable information and improve predictions of future change. This project acquired, interpreted, and created composite images from satellite data acquired from the Landsat 4-5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+). Landsat images were processed for areas covered by CAR during the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS), Cloud and Land Surface Interaction Campaign (CLASIC), Intercontinental Chemical Transport Experiment-Phase B (INTEX-B), and Southern African Regional Science Initiative (SAFARI) 2000 missions. The acquisition of Landsat data provides supplemental information to assist in visualizing and interpreting airborne and satellite imagery.

  1. Parallel Algorithm for GPU Processing; for use in High Speed Machine Vision Sensing of Cotton Lint Trash.

    PubMed

    Pelletier, Mathew G

    2008-02-08

    One of the main hurdles standing in the way of optimal cleaning of cotton lint is the lack of sensing systems that can react fast enough to provide the control system with real-time information as to the level of trash contamination of the cotton lint. This research examines the use of programmable graphic processing units (GPU) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU, as an alternative computation platform, allowed the machine vision system to gain a significant improvement in processing time. By improving the processing time, this research seeks to address the lack of availability of rapid trash sensing systems and thus alleviate a situation in which the current systems view the cotton lint either well before, or after, the cotton is cleaned. This extended lag/lead time that is currently imposed on the cotton trash cleaning control systems is what is responsible for system operators utilizing a very large dead-band safety buffer in order to ensure that the cotton lint is not under-cleaned. Unfortunately, the utilization of a large dead-band buffer results in the majority of the cotton lint being over-cleaned, which in turn causes lint fiber damage as well as significant losses of the valuable lint due to the excessive use of cleaning machinery. This research estimates that upwards of a 30% reduction in lint loss could be gained through the use of a trash sensor tightly coupled to the cleaning machinery control systems. This research seeks to improve processing times through the development of a new algorithm for cotton trash sensing that allows for implementation on a highly parallel architecture. Additionally, by moving the new parallel algorithm onto an alternative computing platform, the graphic processing unit (GPU), for processing of the cotton trash images, a speed-up of over 6.5 times over optimized code running on the PC's central processing unit (CPU) was gained. The new parallel algorithm operating on the GPU was able to process a 1024x1024 image in less than 17 ms. At this improved speed, the image processing system's performance should now be sufficient to provide a system capable of real-time feedback control in tight cooperation with the cleaning equipment.
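
    The abstract does not spell out the trash-sensing algorithm itself, so the following is only a hypothetical stand-in illustrating why the task parallelizes well: classifying every pixel independently (here with a simple darkness threshold, an assumed value) is a data-parallel operation that maps directly onto GPU array libraries by swapping the array module.

```python
import numpy as np

def trash_fraction(gray, threshold=0.35):
    """Estimate the fraction of pixels darker than `threshold` (trash is
    typically darker than cotton lint). Every pixel is treated independently,
    so the same expression runs unchanged on GPU array libraries such as
    CuPy by replacing the `np` module."""
    g = gray.astype(float) / max(float(gray.max()), 1e-12)
    return float((g < threshold).mean())
```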

  2. The Role of Remote Sensing Displays in Earth Climate and Planetary Atmospheric Research

    NASA Technical Reports Server (NTRS)

    DelGenio, Anthony D.; Hansen, James E. (Technical Monitor)

    2001-01-01

    The communities of scientists who study the Earth's climate and the atmospheres of the other planets barely overlap, but the types of questions they pose and the resulting implications for the use and interpretation of remote sensing data sets have much in common. Both seek to determine the characteristic behavior of three-dimensional fluids that also evolve in time. Climate researchers want to know how and why the general patterns that define our climate today might be different in the next century. Planetary scientists try to understand why circulation patterns and clouds on Mars, Venus, or Jupiter are different from those on Earth. Both disciplines must aggregate large amounts of data covering long time periods and several altitudes to have a representative picture of the rapidly changing atmosphere they are studying. This emphasis separates climate scientists from weather forecasters, who focus at any one time on a limited number of images. Likewise, it separates planetary atmosphere researchers from planetary geologists, who rely primarily on single images (or mosaics of images covering the globe) to study two-dimensional planetary surfaces that are mostly static over the duration of a spacecraft mission yet reveal dynamic processes acting over thousands to millions of years. Remote sensing displays are usually two-dimensional projections that capture an atmosphere at an instant in time. How scientists manipulate and display such data, how they interpret what they see, and how they thereby understand the physical processes that cause what they see, are the challenges I discuss in this chapter. I begin by discussing differences in how novices and experts in the field relate displays of data to the real world. This leads to a discussion of the use and abuse of image enhancement and color in remote sensing displays. I then show some examples of techniques used by scientists in climate and planetary research to both convey information and design research strategies using remote sensing displays.

  3. GLCF: FAQ

    Science.gov Websites

    Fragments of the GLCF frequently asked questions page. The recoverable questions concern how to import GLCF imagery into an image processing program or into ArcView; the recoverable answer text notes that the GLCF team consists of remote sensing and computer science professionals who have designed a suite of in-house processing tools, and that the "L1G" file extension was added to indicate processing level, not format.

  4. Interactive object recognition assistance: an approach to recognition starting from target objects

    NASA Astrophysics Data System (ADS)

    Geisler, Juergen; Littfass, Michael

    1999-07-01

    Recognition of target objects in remotely sensed imagery requires detailed knowledge about the target object domain as well as about the mapping properties of the sensing system. The art of object recognition is to combine both worlds appropriately and to provide models of target appearance with respect to sensor characteristics. Common approaches to supporting interactive object recognition are either driven from the sensor point of view, addressing the problem of displaying images in a manner adequate to the sensing system, or they focus on target objects and provide exhaustive encyclopedic information about that domain. Our paper discusses an approach to assisting interactive object recognition that is based on knowledge about target objects and that takes into account the significance of object features with respect to the characteristics of the sensed imagery, e.g. spatial and spectral resolution. An 'interactive recognition assistant' takes the image analyst through the interpretation process by indicating, step by step, the most significant features of the objects in the current set of candidates. The significance of object features is expressed by pregenerated trees of significance and by the dynamic computation of decision relevance for every feature at each step of the recognition process. In the context of this approach we discuss the question of modeling and storing the multisensorial/multispectral appearances of target objects and object classes, as well as the problem of an adequate dynamic human-machine interface that takes into account the various mental models of human image interpretation.

  5. Airborne multidimensional integrated remote sensing system

    NASA Astrophysics Data System (ADS)

    Xu, Weiming; Wang, Jianyu; Shu, Rong; He, Zhiping; Ma, Yanhua

    2006-12-01

    In this paper, we present an airborne multidimensional integrated remote sensing system that consists of an imaging spectrometer, a three-line scanner, a laser ranger, a position and orientation subsystem and a PAV30 stabilizer. The imaging spectrometer is composed of two identical push-broom hyperspectral imagers, each with a field of view of 22°, which together provide a field of view of 42°. The spectral range of the imaging spectrometer is from 420 nm to 900 nm, and its spectral resolution is 5 nm. The three-line scanner is composed of two panchromatic CCDs and an RGB CCD, with a 20° stereo angle and a 10 cm GSD (Ground Sample Distance) at a 1000 m flying height. The laser ranger provides height data for three points every four scanning lines of the spectral imager, and those three points are calibrated to match the corresponding pixels of the spectral imager. The POS/AV 510, the aerial exterior-orientation measuring product of the Canadian Applanix Corporation, is used as the position and orientation subsystem; its post-processed attitude accuracy is 0.005° when combined with base-station data. The airborne multidimensional integrated remote sensing system was implemented successfully, performed its first flight experiment in April 2005, and obtained satisfactory data.

  6. Implication of relationship between natural impacts and land use/land cover (LULC) changes of urban area in Mongolia

    NASA Astrophysics Data System (ADS)

    Gantumur, Byambakhuu; Wu, Falin; Zhao, Yan; Vandansambuu, Battsengel; Dalaibaatar, Enkhjargal; Itiritiphan, Fareda; Shaimurat, Dauryenbyek

    2017-10-01

    Urban growth can profoundly alter the urban landscape structure, ecosystem processes, and local climate. Timely and accurate information on the status and trends of urban ecosystems is critical to develop strategies for sustainable development and to improve the urban residential environment and living quality. Ulaanbaatar has urbanized very rapidly; herders and farmers, many of them migrating from rural areas, have played a large role in this urban expansion (sprawl). Today, 1.3 million residents, about 40% of the total population, live in the Ulaanbaatar region, and these human activities have strongly affected the green environment. Therefore, the aim of this study is to detect changes in land use/land cover (LULC) and to estimate their areas and future trends using remote sensing and statistical methods. The analysis combines LULC change detection methods with remote sensing spectral indices, including the normalized difference vegetation index (NDVI), the normalized difference water index (NDWI) and the normalized difference built-up index (NDBI). In addition, the urban heat island (UHI), derived from land surface temperature (LST), can be related to local climate issues. Statistical methods for image processing are used to define the relations between these spectral indices and the change detection images, and regression analysis is applied to the time series to project future trends. The remote sensing data are Landsat (TM/ETM+/OLI) satellite images acquired every five years over the period between 1990 and 2016. This study demonstrates the usefulness of combining remote sensing approaches with statistical analysis for detecting LULC changes. The experimental results show that LULC changes can be mapped for the present and projected a few years ahead, and that relations with environmental conditions can be determined.
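
    A minimal sketch of the three spectral indices named above, assuming coregistered Landsat band arrays already converted to reflectance (the band variable names are illustrative):

```python
import numpy as np

def spectral_indices(green, red, nir, swir1):
    """NDVI, NDWI and NDBI from coregistered reflectance bands:
    NDVI = (NIR - Red)  / (NIR + Red)
    NDWI = (Green - NIR) / (Green + NIR)
    NDBI = (SWIR1 - NIR) / (SWIR1 + NIR)"""
    eps = 1e-12  # avoid division by zero over masked/empty pixels
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    ndbi = (swir1 - nir) / (swir1 + nir + eps)
    return ndvi, ndwi, ndbi
```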

  7. A remote sensing computer-assisted learning tool developed using the unified modeling language

    NASA Astrophysics Data System (ADS)

    Friedrich, J.; Karslioglu, M. O.

    The goal of this work has been to create an easy-to-use and simple-to-make learning tool for remote sensing at an introductory level. Many students struggle to grasp even basic concepts of digital images, image processing and image arithmetic, for example. Because professional programs are generally too complex and overwhelming for beginners, and often not tailored to the specific needs of a course regarding functionality, a computer-assisted learning (CAL) program was developed based on the unified modeling language (UML), the present standard for object-oriented (OO) system development. A major advantage of this approach is an easier transition from modeling to coding of such an application if modern UML tools are used. After introducing the constructed UML model, its implementation is briefly described, followed by a series of learning exercises. They illustrate how the resulting CAL tool supports students taking an introductory course in remote sensing at the author's institution.

  8. [Study on application of low altitude remote sensing to Chinese herb medicinal sustainable utilization].

    PubMed

    Zhou, Ying-Qun; Chen, Shi-Lin; Zhao, Run-Huai; Xie, Cai-Xiang; Li, Ying

    2008-04-01

    Sustainable utilization and biodiversity protection of traditional Chinese medicine (TCM) resources are currently hot topics in TCM research, and the choice of an appropriate method is one of the primary problems confronted. This paper describes the technical system, equipment and image processing of low-altitude remote sensing, and analyzes its future application in the sustainable utilization of Chinese herbal medicine.

  9. Compressive sensing using optimized sensing matrix for face verification

    NASA Astrophysics Data System (ADS)

    Oey, Endra; Jeffry; Wongso, Kelvin; Tommy

    2017-12-01

    Biometrics offers a solution to problems that arise from password-based data access, such as forgetting a password or having to recall many different passwords. With biometrics, the physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether a user has the authority to access the data. Facial biometrics is chosen for its low-cost implementation and reasonably accurate user identification. The face verification system adopted in this research uses the Compressive Sensing (CS) technique, which aims to reduce dimensionality as well as encrypt the facial test image, represented as a sparse signal. The encrypted data can be reconstructed using a sparse coding algorithm. Two types of sparse coding, Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares-ℓp (IRLS-ℓp), are compared in this face verification study. The reconstructed sparse signals are then compared, via the Euclidean norm, with the sparse signal of the user previously saved in the system to determine the validity of the facial test image. The system accuracies obtained in this research are 99% for IRLS with a face verification response time of 4.917 s and 96.33% for OMP with a response time of 0.4046 s using a non-optimized sensing matrix, and 99% for IRLS with a response time of 13.4791 s and 98.33% for OMP with a response time of 3.1571 s using the optimized sensing matrix.
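
    Orthogonal Matching Pursuit, one of the two sparse-coding algorithms compared above, is standard enough to sketch in a few lines (a generic formulation with a sensing matrix A, a measurement vector y and a target sparsity k; not the authors' exact implementation):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A that best
    explain y, re-fitting the coefficients by least squares at every step."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```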

  10. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400

  11. A robust nonlinear filter for image restoration.

    PubMed

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
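
    The paper's filters minimize a least trimmed squares criterion; as a rough, simplified stand-in in the same spirit (rejecting outlying samples in each window before estimating the pixel value), the sketch below implements an alpha-trimmed-mean window filter. This is not the authors' algorithm, and the window size and trim fraction are assumptions.

```python
import numpy as np

def trimmed_mean_filter(img, win=3, trim=0.25):
    """Slide a win x win window over the image and replace each pixel by the
    mean of the window samples after discarding the `trim` fraction of the
    smallest and largest values (simple outlier rejection)."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    n = win * win
    lo, hi = int(np.floor(trim * n)), n - int(np.floor(trim * n))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = np.sort(padded[i:i + win, j:j + win], axis=None)
            out[i, j] = window[lo:hi].mean()
    return out
```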

  12. Image sensor for testing refractive error of eyes

    NASA Astrophysics Data System (ADS)

    Li, Xiangning; Chen, Jiabi; Xu, Longyun

    2000-05-01

    It is difficult to detect ametropia and anisometropia for children. Image sensor for testing refractive error of eyes does not need the cooperation of children and can be used to do the general survey of ametropia and anisometropia for children. In our study, photographs are recorded by a CCD element in a digital form which can be directly processed by a computer. In order to process the image accurately by digital technique, formula considering the effect of extended light source and the size of lens aperture has been deduced, which is more reliable in practice. Computer simulation of the image sensing is made to verify the fineness of the results.

  13. Processing Digital Imagery to Enhance Perceptions of Realism

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur

    2003-01-01

    Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
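
    A minimal sketch of the multi-scale retinex core on a single channel (without the color-restoration step), assuming a non-negative float image; the surround scales shown are common textbook choices, not necessarily the parameters of the NASA implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(channel, sigmas=(15, 80, 250)):
    """MSR core: average of log(image) - log(Gaussian-smoothed image) over
    several surround scales, then rescaled to [0, 1] for display."""
    x = channel.astype(float) + 1.0            # offset avoids log(0)
    msr = np.zeros_like(x)
    for s in sigmas:
        msr += np.log(x) - np.log(gaussian_filter(x, sigma=s) + 1.0)
    msr /= len(sigmas)
    return (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
```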

  14. Suppressing the image smear of the vibration modulation transfer function for remote-sensing optical cameras.

    PubMed

    Li, Jin; Liu, Zilong; Liu, Si

    2017-02-20

    In the on-board photographing process of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of a camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid this degradation of the total MTF caused by vibrations, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM can transform platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiment shows that the M2052 manganese-copper alloy is sufficient to suppress image motion below 125 Hz, the vibration frequency of satellite platforms. The camera optical system has a higher MTF after vibration suppression with the M2052 material than before.

  15. Assessment of landscape change associated with tropical cyclone phenomena in Baja California Sur, Mexico, using satellite remote sensing

    NASA Astrophysics Data System (ADS)

    Martinez-Gutierrez, Genaro

    Baja California Sur (Mexico), as well as mainland Mexico, is affected by tropical cyclone storms, which originate in the eastern North Pacific. Historical records show that Baja has been damaged by intense summer storms. An arid to semiarid climate characterizes the study area, where precipitation mainly occurs during the summer and winter seasons. Natural and anthropogenic changes have impacted the landscape of southern Baja. The present research documents the effects of tropical storms over the southern region of Baja California for a period of approximately twenty-six years. The goal of the research is to demonstrate how remote sensing can be used to detect the important effects of tropical storms, including: (a) evaluation of change detection algorithms, and (b) delineation of changes to the landscape, including coastal modification, fluvial erosion and deposition, vegetation change, and river avulsion, using change detection algorithms. Digital image processing methods with temporal Landsat satellite remotely sensed data from the North America Landscape Characterization archive (NALC), Thematic Mapper (TM), and Enhanced Thematic Mapper (ETM) images were used to document the landscape change. Two image processing methods were tested: image differencing (ID) and principal component analysis (PCA). Landscape changes identified with the NALC archive and TM images showed that the major changes included a rapid change of land use in the towns of San Jose del Cabo and Cabo San Lucas between 1973 and 1986. The features detected using the algorithms included flood deposits within the channels of active streams, erosion banks, and new channels caused by channel avulsion. Despite the 19-year period covered by the NALC data and the approximately 10-year intervals between acquisition dates, changed features could still be identified in the images. The TM images showed that flooding from Hurricane Isis (1998) produced new large deposits within the stream channels. This research has shown that remote sensing based change detection can delineate the effects of flooding on the landscape at scales down to the nominal resolution of the sensor. These findings indicate that many other applications for change detection are both viable and important, including disaster response, flood hazard planning, geomorphic studies, and water supply management in deserts.
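
    Image differencing, the first of the two change detection methods tested, reduces to a simple thresholded band difference. The sketch below assumes two coregistered, radiometrically comparable band arrays and uses a mean ± k standard-deviation threshold; the value of k is an assumed parameter.

```python
import numpy as np

def difference_change_map(band_t1, band_t2, k=2.0):
    """Image differencing: subtract coregistered bands from two dates and
    flag pixels whose difference departs from the scene mean by more than
    k standard deviations."""
    d = band_t2.astype(float) - band_t1.astype(float)
    mu, sigma = d.mean(), d.std()
    return np.abs(d - mu) > k * sigma          # boolean change mask
```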

  16. Multispectral thermal airborne TASI-600 data to study the Pompeii (IT) archaeological area

    NASA Astrophysics Data System (ADS)

    Palombo, Angelo; Pascucci, Simone; Pergola, Nicola; Pignatti, Stefano; Santini, Federico; Soldovieri, Francesco

    2016-04-01

    The management of archaeological areas refers to the conservation of ruins and buildings and the eventual prospection of new areas with archaeological potential. In this framework, airborne remote sensing is a well-developed geophysical tool for supporting archaeological surveys of wide areas. The spectral regions applied in archaeological remote sensing span from the VNIR to the TIR. In particular, archaeological thermal imaging exploits the fact that materials absorb, emit, transmit, and reflect thermal infrared radiation at different rates according to their composition, density and moisture content. Despite its potential, applications of thermal imaging in archaeology are scarce. Among them, noteworthy are the ones related to the use of Landsat and ASTER [1] and airborne remote sensing [2, 3, 4 and 5]. In view of this potential in Cultural Heritage applications, the present study aims at analysing the usefulness of high spatial resolution thermal imaging over the Pompeii archaeological park. For this purpose, TASI-600 [6] airborne multispectral thermal imagery (32 channels from 8 to 11.5 μm, with a spectral resolution of 100 nm and a spatial resolution of 1 m/pixel) was acquired on December 7, 2015. The airborne survey was acquired to obtain useful information on the characteristics of the building materials (both ancient and used for consolidation) and, whenever possible, to retrieve quick indicators of their conservation status. The thermal images will, moreover, be processed to gain insight into the critical environmental issues affecting the structures (e.g. moisture). The proposed study shows the preliminary results of the airborne deployments, the pre-processing of the multispectral thermal imagery and the retrieval of accurate land surface temperatures (LST). The LST map will be analysed to describe the thermal pattern of the city of Pompeii and to detect any thermal anomalies. The ongoing TASI-600 pre-processing includes: (a) radiometric calibration of the raw data using the RADCORR software provided by ITRES (Canada) and the application of a new tool for blinking-pixel correction developed by CNR (Italy); (b) atmospheric compensation of the TIR data by applying the ISAC (In-Scene Atmospheric Compensation) algorithm [7]; (c) Temperature Emissivity Separation (TES) according to the methods described by [8] to obtain an LST map. The preliminary results obtained are encouraging, even though suitable approaches for integration with classical geophysical investigation techniques still have to be improved for a rapid and cost-effective assessment of the status of the buildings. The importance of this study is, moreover, related to the evaluation of the impact of unmanned aerial vehicle (UAV) imaging in the conservation of Cultural Heritage, which can provide: i) low-cost imaging; ii) very high spatial resolution thermal imaging. References: 1. Scollar, I., Tabbagh, A., Hesse, A., Herzog, A., 1990. Archaeological Prospecting and Remote Sensing. Cambridge University Press, Cambridge. Seitz, C., Altenbach, H., 2011. Project ARCHEYE: the quadrocopter as the archaeologist's eye. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 38. 2. Sever, T.L., Wagner, D.W., 1991. Analysis of prehistoric roadways in Chaco Canyon using remotely sensed data. In: Trombold, C.D. (Ed.), Ancient Road Networks and Settlement Hierarchies in the New World. Cambridge University Press, Cambridge, pp. 42. 3. Pascucci, S., Cavalli, R.M., Palombo, A. & Pignatti, S. (2010), Suitability of CASI and ATM airborne remote sensing data for archaeological subsurface structure detection under different land cover: the Arpi case study (Italy). Journal of Geophysics and Engineering, Vol. 7 (2), pp. 183-189. 4. Bassani, C., Cavalli, R.M., Goffredo, R., Palombo, A., Pascucci, S. & Pignatti, S. (2009), Specific spectral bands for different land cover contexts to improve the efficiency of remote sensing archaeological prospection: The Arpi case study. Journal of Cultural Heritage, Vol. 10, pp. 41-48. 5. Cavalli, R.M., Marino, C.M. & Pignatti, S. (2000), Environmental Studies Through Active and Passive Airborne Remote Sensing Systems. In Non-Destructive Techniques Applied to Landscape Archaeology, The Archaeology of Mediterranean Landscapes 4, Oxbow Books, Oxford, pp. 31-37, ISBN 1900188740. 6. Pignatti, S., Lapenna, V., Palombo, A., Pascucci, S., Pergola, N., Cuomo, V., 2011. An advanced tool of the CNR IMAA EO facilities: Overview of the TASI-600 hyperspectral thermal spectrometer. 3rd Hyperspectral Image and Signal Processing: Evolution in Remote Sensing Conference (WHISPERS), 2011; DOI 10.1109/WHISPERS.2011.6080890. 7. Johnson, B.R. and Young, S.J., 1998. In-Scene Atmospheric Compensation: Application to SEBASS Data Collected at the ARM Site. Technical Report, Space and Environment Technology Center, The Aerospace Corporation, May 1998. 8. Li, Z.L., Becker, F., Stoll, M.P. and Wan, Z., 1999. Evaluation of six methods for extracting relative emissivity spectra from thermal infrared images. Remote Sensing of Environment, vol. 69, 197-214.

  17. Fast variogram analysis of remotely sensed images in HPC environment

    NASA Astrophysics Data System (ADS)

    Pesquer, Lluís; Cortés, Anna; Masó, Joan; Pons, Xavier

    2013-04-01

    Exploring and describing the spatial variation of images is one of the main applications of geostatistics to remote sensing. The variogram is a very suitable tool to carry out this spatial pattern analysis. Variogram analysis is composed of two steps: empirical variogram generation and fitting of a variogram model. Empirical variogram generation is a very quick procedure for most analyses of irregularly distributed samples, but the computing time increases quite significantly for remotely sensed images, because the number of samples (pixels) involved is usually huge (more than 30 million for a Landsat TM scene), depending basically on the extent and spatial resolution of the images. In several remote sensing applications this type of analysis is repeated for each image, sometimes for hundreds of scenes and sometimes for each radiometric band (a large number in the case of hyperspectral images), so there is a need for a fast implementation. In order to reduce this high execution time, we developed a parallel solution for the variogram analysis. The solution adopted is the master/worker programming paradigm, in which the master process distributes and coordinates the tasks executed by the worker processes. The code is written in ANSI-C, using MPI (Message Passing Interface) as the message-passing library for communication between the master and the workers. This solution (ANSI-C + MPI) guarantees portability between different computer platforms. The High Performance Computing (HPC) environment is formed by 32 nodes, each with two dual-core Intel(R) Xeon(R) 3.0 GHz processors and 12 GB of RAM, connected with integrated dual gigabit Ethernet. This IBM cluster is located in the research laboratory of the Computer Architecture and Operating Systems Department of the Universitat Autònoma de Barcelona. The performance results for a 15 km x 15 km subscene of the 198-31 path-row Landsat TM image are shown in Table 1. The proximity between the empirical speedup behaviour and the theoretical linear speedup confirms that a suitable parallel design and implementation were applied.

    Table 1. Landsat TM subscene (15 km x 15 km, path-row 198-31)
    N workers   Time (s)   Speedup
    0           2975.03    -
    2           2112.33    1.41
    4           1067.45    2.79
    8            534.18    5.57
    12           357.54    8.32
    16           269.00    11.06
    20           216.24    13.76
    24           186.31    15.97

    Furthermore, very similar performance results are obtained for CASI images (hyperspectral and of finer spatial resolution than Landsat), shown in Table 2, demonstrating that the distributed-load design is not specifically defined and optimized for a single type of image, but is a flexible design that maintains good balance and scalability for a range of image dimensions.

    Table 2. CASI image
    N workers   Time (s)   Speedup
    0           5485.03    -
    2           3847.47    1.43
    4           1921.62    2.85
    8            965.55    5.68
    12           644.26    8.51
    16           483.40    11.35
    20           393.67    13.93
    24           347.15    15.80
    28           306.33    17.91
    32           304.39    18.02

    Finally, we conclude that this significant time reduction underlines the utility of distributed environments for processing large amounts of data such as remotely sensed images.
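
    For reference, a minimal (serial) sketch of the empirical semivariogram for a small set of sampled pixels, assuming `coords` holds their (row, col) positions and `values` their radiances; in the parallel design described above, the master would distribute blocks of such point pairs to the workers.

```python
import numpy as np

def empirical_variogram(coords, values, lags):
    """Empirical semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] over
    point pairs whose separation distance falls in each lag bin."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    dz2 = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # unique pairs only
    dist, dz2 = dist[iu], dz2[iu]
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        m = (dist >= lo) & (dist < hi)
        gamma.append(0.5 * dz2[m].mean() if m.any() else np.nan)
    return np.array(gamma)
```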

  18. Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2010-01-01

    Signal- and image-processing methods are commonly needed to extract information from the waves, improve resolution of, and highlight defects in an image. Since some similarity exists for all waveform-based nondestructive evaluation (NDE) methods, it would seem that a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint-time frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquake data.

  19. Automatic Geo-location Correction of Satellite Imagery

    DTIC Science & Technology

    2014-09-25

    Only reference fragments of this report are available. They indicate that the work uses the Rational Polynomial Coefficient (RPC) model to represent both the internal and external orientation of a satellite image, and they cite work on the orientation of large stereo satellite image blocks (Int. Arch. Photogrammetry and Remote Sensing Spatial Inf. Sci., vol. 39, pp. 209-214, 2012), Applications of Digital Image Processing VI (vol. 432, 1983), and Edward M. Mikhail, James S. Bethel, and J.C. McGlone, Introduction to Modern Photogrammetry.

  20. Mapping Foliar Traits Across Biomes Using Imaging Spectroscopy: A Synthesis

    NASA Astrophysics Data System (ADS)

    Townsend, P. A.; Singh, A.; Wang, Z.

    2016-12-01

    One of the great promises of imaging spectroscopy - also known as hyperspectral remote sensing - is the ability to map the spatial variation in foliar functional traits, such as nitrogen concentration, pigments, leaf structure, photosynthetic capacity and secondary biochemistry, that drive terrestrial ecosystem processes. A remote-sensing approach enables characterization of within- and between-biome variations that may be crucial to understanding ecosystem responses to pests, pathogens and environmental change. We provide a synthesis of the foliar traits that can be mapped from imaging spectroscopy, as well as an overview of both the major applications of trait maps derived from hyperspectral imagery and current gaps in our knowledge and capacity. Specifically, we make the case that a global imaging spectroscopy mission will provide unique and urgent measurements necessary to understand the response of agricultural and natural systems to rapid global changes. Finally, we present a quantitative framework to utilize imaging spectroscopy to characterize spatial and temporal variation in foliar traits within and between biomes. From this we can infer the dynamics of vegetation function across ecosystems, especially in transition zones and environmentally sensitive systems. Eventual launch of a global imaging spectroscopy mission will enable collection of narrowband VSWIR measurements that will help close major gaps in our understanding of biogeochemical cycles and improve representation of vegetated biomes in Earth system process models.

  1. Surveillance of Arthropod Vector-Borne Infectious Diseases Using Remote Sensing Techniques: A Review

    PubMed Central

    Kalluri, Satya; Gilruth, Peter; Rogers, David; Szczur, Martha

    2007-01-01

    Epidemiologists are adopting new remote sensing techniques to study a variety of vector-borne diseases. Associations between satellite-derived environmental variables such as temperature, humidity, and land cover type and vector density are used to identify and characterize vector habitats. The convergence of factors such as the availability of multi-temporal satellite data and georeferenced epidemiological data, collaboration between remote sensing scientists and biologists, and the availability of sophisticated, statistical geographic information system and image processing algorithms in a desktop environment creates a fertile research environment. The use of remote sensing techniques to map vector-borne diseases has evolved significantly over the past 25 years. In this paper, we review the status of remote sensing studies of arthropod vector-borne diseases due to mosquitoes, ticks, blackflies, tsetse flies, and sandflies, which are responsible for the majority of vector-borne diseases in the world. Examples of simple image classification techniques that associate land use and land cover types with vector habitats, as well as complex statistical models that link satellite-derived multi-temporal meteorological observations with vector biology and abundance, are discussed here. Future improvements in remote sensing applications in epidemiology are also discussed. PMID:17967056

  2. Object-oriented recognition of high-resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan

    2016-01-01

    With the development of remote sensing imaging technology and the improving resolution of multi-source satellite imagery in the visible, multispectral and hyperspectral domains, high-resolution remote sensing images have been widely used in various fields, such as the military, surveying and mapping, geophysical prospecting, and the environment. In remote sensing imagery, the segmentation of ground targets, feature extraction and automatic recognition are active and challenging research topics in modern information technology. This paper presents an object-oriented remote sensing image scene classification method. The method combines the generation of typical vehicle object classes, nonparametric density estimation, mean shift segmentation, a multi-scale corner detection algorithm, and template-based local shape matching. A remote sensing vehicle image classification software system is designed and implemented to meet these requirements.
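
    Mean shift, one of the components listed above, is generic enough to sketch: each feature vector is iteratively moved to the mean of the original points lying within a fixed bandwidth of it. This is a flat-kernel, mode-seeking toy version, not the paper's full segmentation pipeline, and the bandwidth is an assumed parameter.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=30):
    """Flat-kernel mean shift: repeatedly move each point to the mean of the
    original points within `bandwidth` of its current position, so points
    migrate toward local density modes (which can then be used as segments)."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i in range(len(shifted)):
            d = np.linalg.norm(points - shifted[i], axis=1)
            shifted[i] = points[d <= bandwidth].mean(axis=0)
    return shifted
```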

  3. Real-Time Digital Bright Field Technology for Rapid Antibiotic Susceptibility Testing.

    PubMed

    Canali, Chiara; Spillum, Erik; Valvik, Martin; Agersnap, Niels; Olesen, Tom

    2018-01-01

    Optical scanning through bacterial samples and image-based analysis may provide a robust method for bacterial identification, fast estimation of growth rates and their modulation due to the presence of antimicrobial agents. Here, we describe an automated digital, time-lapse, bright field imaging system (oCelloScope, BioSense Solutions ApS, Farum, Denmark) for rapid and higher throughput antibiotic susceptibility testing (AST) of up to 96 bacteria-antibiotic combinations at a time. The imaging system consists of a digital camera, an illumination unit and a lens where the optical axis is tilted 6.25° relative to the horizontal plane of the stage. Such tilting grants more freedom of operation at both high and low concentrations of microorganisms. When considering a bacterial suspension in a microwell, the oCelloScope acquires a sequence of 6.25°-tilted images to form an image Z-stack. The stack contains the best-focus image, as well as the adjacent out-of-focus images (which contain progressively more out-of-focus bacteria, the further the distance from the best-focus position). The acquisition process is repeated over time, so that the time-lapse sequence of best-focus images is used to generate a video. The setting of the experiment, image analysis and generation of time-lapse videos can be performed through a dedicated software (UniExplorer, BioSense Solutions ApS). The acquired images can be processed for online and offline quantification of several morphological parameters, microbial growth, and inhibition over time.

  4. Summaries of the thematic conferences on remote sensing for exploration geology

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The Thematic Conference series was initiated to address the need for concentrated discussion of particular remote sensing applications. The program is primarily concerned with the application of remote sensing to mineral and hydrocarbon exploration, with special emphasis on data integration, methodologies, and practical solutions for geologists. Some fifty invited papers are scheduled for eleven plenary sessions, formulated to address such important topics as basement tectonics and their surface expressions, spectral geology, applications for hydrocarbon exploration, and radar applications and future systems. Other invited presentations will discuss geobotanical remote sensing, mineral exploration, engineering and environmental applications, advanced image processing, and integration and mapping.

  5. A Software Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.

    2007-01-01

    Ultrasonic, microwave, and terahertz nondestructive evaluation imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and improve resolution of, and highlight, defects in the image. Since some similarity exists for all waveform-based NDE methods, it would seem a common software platform containing multiple signal and image processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach in developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at Lockheed Martin Michoud Assembly Facility for processing of pulsed terahertz and ultrasonic data. Highlights of the software operation will be given. A case study will be shown for use with terahertz data. The authors also request scientists and engineers who are interested in sharing customized signal and image processing algorithms to contribute to this effort by letting the authors code up and include these algorithms in future releases.

  6. Real-Time and Post-Processed Georeferencing for Hyperpspectral Drone Remote Sensing

    NASA Astrophysics Data System (ADS)

    Oliveira, R. A.; Khoramshahi, E.; Suomalainen, J.; Hakala, T.; Viljanen, N.; Honkavaara, E.

    2018-05-01

    The use of drones and photogrammetric technologies is increasing rapidly in different applications. Currently, the drone processing workflow is in most cases based on sequential image acquisition and post-processing, but there is great interest in real-time solutions. Fast and reliable real-time drone data processing can benefit, for instance, environmental monitoring tasks in precision agriculture and forestry. Recent developments in miniaturized and low-cost inertial measurement systems and GNSS sensors, together with real-time kinematic (RTK) position data, are offering new perspectives for comprehensive remote sensing applications. The combination of these sensors with light-weight and low-cost multi- or hyperspectral frame sensors on drones provides the opportunity of creating near real-time or real-time remote sensing data of target objects. We have developed an onboard direct-georeferencing system to be used in combination with hyperspectral frame cameras in real-time remote sensing applications. The objective of this study is to evaluate the real-time georeferencing compared with post-processing solutions. Experimental data sets were captured at agricultural and forested test sites using the system. The accuracy of the onboard georeferencing data was better than 0.5 m. The results showed that real-time remote sensing is promising and feasible at both test sites.

  7. Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor

    PubMed Central

    Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki

    2015-01-01

    This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of space-variant point-spread-function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented in the form of a combination of multiple FIR filters, which guarantees the fast image restoration without the need of iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in the sense of both objective and subjective performance measures. The proposed algorithm can be employed to a wide area of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760

  8. The pan-sharpening of satellite and UAV imagery for agricultural applications

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Woroszkiewicz, Malgorzata

    2016-10-01

    Remote sensing techniques are widely used in many different areas of interest, i.e. urban studies, environmental studies, agriculture, etc., due to the fact that they provide rapid and accurate information over large areas with suitable temporal, spatial and spectral resolutions. Agricultural management is one of the most common applications of remote sensing methods nowadays. Monitoring agricultural sites and creating information regarding the spatial distribution and characteristics of crops are important tasks that provide data for precision agriculture, crop management and registries of agricultural lands. For monitoring cultivated areas many different types of remote sensing data can be used; the most popular is multispectral satellite imagery. Such data allow for generating land use and land cover maps based on various image processing and remote sensing methods. This paper presents the fusion of satellite and unmanned aerial vehicle (UAV) imagery for agricultural applications, especially for distinguishing crop types. The authors present selected data fusion methods for satellite images and data obtained from low altitudes. Moreover, they describe pan-sharpening approaches and apply selected pan-sharpening methods for multiresolution fusion of satellite and UAV imagery. For this purpose, satellite images from the Landsat-8 OLI sensor and data collected during various UAV flights (with a mounted RGB camera) were used. In this article, the authors not only show the potential of fusing satellite and UAV images, but also present the application of pan-sharpening to crop identification and management.
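
    Among the classical pan-sharpening schemes that could be applied in this context, the Brovey transform is the simplest to sketch. The abstract does not name the specific methods used, so this is only an illustrative example, assuming coregistered multispectral bands already resampled to the panchromatic grid.

```python
import numpy as np

def brovey_pansharpen(r, g, b, pan, eps=1e-12):
    """Brovey transform: rescale each multispectral band so that the band sum
    matches the higher-resolution panchromatic intensity."""
    intensity = r + g + b
    ratio = pan / (intensity + eps)
    return r * ratio, g * ratio, b * ratio
```

    More elaborate schemes (e.g. component substitution or multiresolution analysis methods) follow the same pattern of injecting panchromatic spatial detail into the multispectral bands, at the cost of more involved spectral modeling.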

  9. Research of building information extraction and evaluation based on high-resolution remote-sensing imagery

    NASA Astrophysics Data System (ADS)

    Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang

    2016-09-01

    Building extraction is currently important in applications of high-resolution remote sensing imagery. At present, quite a few algorithms are available for detecting building information; however, most of them still have obvious disadvantages, such as ignoring spectral information and the trade-off between extraction rate and extraction accuracy. The purpose of this research is to develop an effective method to detect building information in Chinese GF-1 data. Firstly, image preprocessing is used to normalize the image and image enhancement is used to highlight the useful information in the image. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to obtain candidate building objects. Furthermore, in order to refine building objects and further remove false objects, post-processing (e.g., shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission error (OE), commission error (CE), overall accuracy (OA) and Kappa coefficient are used. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%; at the same time, the Kappa coefficient increases by 16.09%. In the experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracy and visual inspection.
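
    A much-simplified stand-in for the morphological building index idea (bright, compact structures respond strongly to white top-hat transforms over a range of window sizes) is sketched below. The actual IMBI proposed in the paper uses a more elaborate construction, and the window sizes here are assumptions.

```python
import numpy as np
from scipy.ndimage import white_tophat

def simple_building_index(bands, sizes=(5, 11, 17, 23)):
    """Brightness = per-pixel maximum over the input bands; the index is the
    mean white top-hat response over several square window sizes, so bright,
    compact (building-like) structures score highly."""
    brightness = np.max(np.stack(bands, axis=0), axis=0).astype(float)
    responses = [white_tophat(brightness, size=s) for s in sizes]
    return np.mean(responses, axis=0)
```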

  10. An Approach of Registration between Remote Sensing Image and Electronic Chart Based on Coastal Line

    NASA Astrophysics Data System (ADS)

    Li, Ying; Yu, Shuiming; Li, Chuanlong

    Remote sensing plays an important role in marine oil spill emergencies. In order to implement a timely and effective countermeasure, it is important to provide the exact position of oil spills. Therefore it is necessary to match remote sensing images and electronic charts properly. Discrepancies ordinarily exist between an oil spill image and the electronic chart, even after geometric correction is applied to the remote sensing image, and it is difficult to find stable control points at sea for exact rectification. An improved relaxation algorithm was developed for finding control points along the coastline, since oil spills generally occur near the coast. A conversion function is created by least squares, and the remote sensing image can be registered to the vector map based on this function. A SAR image was used as the remote sensing data and a shapefile map as the electronic chart data. The results show that this approach can guarantee the precision of the registration, which is essential for oil spill monitoring.
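
    The registration step ultimately reduces to fitting a conversion function to matched coastline control points by least squares. A minimal sketch with a 2D affine model is shown below; the paper's exact functional form is not stated, so the affine choice is an assumption.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping control points `src` (image
    row/col) to `dst` (chart coordinates); both are (n, 2) arrays with
    n >= 3 non-collinear points."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])        # design matrix [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params                                # shape (3, 2)

def apply_affine(params, pts):
    """Apply the fitted transform to an (m, 2) array of image points."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ params
```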

  11. Near-infrared fluorescence imaging with a mobile phone (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ghassemi, Pejhman; Wang, Bohan; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, T. Joshua

    2017-03-01

    Mobile phone cameras employ sensors with near-infrared (NIR) sensitivity, yet this capability has not been exploited for biomedical purposes. Removing the IR-blocking filter from a phone-based camera opens the door to a wide range of techniques and applications for inexpensive, point-of-care biophotonic imaging and sensing. This study provides proof of principle for one of these modalities - phone-based NIR fluorescence imaging. An imaging system was assembled using a 780 nm light source along with excitation and emission filters with 800 nm and 825 nm cut-off wavelengths, respectively. Indocyanine green (ICG) was used as an NIR fluorescence contrast agent in an ex vivo rodent model, a resolution test target and a 3D-printed, tissue-simulating vascular phantom. Raw and processed images for the red, green and blue pixel channels were analyzed for quantitative evaluation of fundamental performance characteristics including spectral sensitivity, detection linearity and spatial resolution. Mobile phone results were compared with those from a scientific CCD. The spatial resolution of the CCD system was consistently superior to that of the phone, and the green phone-camera pixels showed better resolution than the blue or red channels. The CCD exhibited similar sensitivity to the processed red and blue pixel channels, yet a greater degree of detection linearity. Raw phone pixel data showed lower sensitivity but greater linearity than processed data. Overall, both qualitative and quantitative results provided strong evidence of the potential of phone-based NIR imaging, which may lead to a wide range of applications from cancer detection to glucose sensing.

  12. A generic FPGA-based detector readout and real-time image processing board

    NASA Astrophysics Data System (ADS)

    Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant

    2016-07-01

    For space-based astronomical observations, it is important to have a mechanism to capture the digital output from the standard detector for further on-board analysis and storage. We have developed a generic (application-wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time on-chip image processor system which can be used for various image processing tasks. The FPGA board is used as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and a star sensor (StarSense) - instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in future balloon flights and possible space flights.

  13. Miniaturized optical wavelength sensors

    NASA Astrophysics Data System (ADS)

    Kung, Helen Ling-Ning

    Recently, semiconductor processing technology has been applied to the miniaturization of optical wavelength sensors. Compact sensors enable new applications such as integrated diode-laser wavelength monitors and frequency lockers, portable chemical and biological detection, and portable and adaptive hyperspectral imaging arrays. Small sensing systems have trade-offs between resolution, operating range, throughput, multiplexing and complexity. We have developed a new wavelength sensing architecture that balances these parameters for applications involving hyperspectral imaging spectrometer arrays. In this thesis we discuss and demonstrate two new wavelength-sensing architectures whose single-pixel designs can easily be extended into spectrometer arrays. The first class of devices is based on sampling a standing wave. These devices are based on measuring the wavelength-dependent period of optical standing waves formed by the interference of forward and reflected waves at a mirror. We fabricated two different devices based on this principle. The first device is a wavelength monitor, which measures the wavelength and power of a monochromatic source. The second device is a spectrometer that can also act as a selective spectral coherence sensor. The spectrometer contains a large-displacement piston-motion MEMS mirror and a thin GaAs photodiode flip-chip bonded to a quartz substrate. The performance of this spectrometer is similar to that of a Michelson interferometer in resolution, operating range, throughput and multiplexing, but with the added advantages of fewer components and a one-dimensional architecture. The second class of devices is based on the Talbot self-imaging effect. The Talbot effect occurs when a periodic object is illuminated with a spatially coherent wave: periodically spaced self-images are formed behind the object, and their spacing is proportional to the wavelength of the incident light. We discuss and demonstrate how this effect can be used for spectroscopy. In the conclusion we compare these two new miniaturized spectrometer architectures to existing miniaturized spectrometers. We believe that the combination of miniaturized wavelength sensors and smart processing should facilitate the development of real-time, adaptive and portable sensing systems.

  14. Oceanographic satellite remote sensing: Registration, rectification, and data integration requirements

    NASA Technical Reports Server (NTRS)

    Nichols, D. A.

    1982-01-01

    The problem of data integration in oceanography is discussed. Recommendations are made for technique development and evaluation, understanding requirements, and packaging techniques for speed, efficiency and ease of use. The primary satellite sensors of interest to oceanography are summarized. It is concluded that imaging-type sensors make image processing an important tool for oceanographic studies.

  15. An evaluation of remote sensing technologies for the detection of fugitive contamination at selected Superfund hazardous waste sites in Pennsylvania

    USGS Publications Warehouse

    Slonecker, E. Terrence; Fisher, Gary B.

    2014-01-01

    This evaluation was conducted to assess the potential for using both traditional remote sensing, such as aerial imagery, and emerging remote sensing technology, such as hyperspectral imaging, as tools for postclosure monitoring of selected hazardous waste sites. Sixteen deleted Superfund (SF) National Priorities List (NPL) sites in Pennsylvania were imaged with a Civil Air Patrol (CAP) Airborne Real-Time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor between 2009 and 2012. Deleted sites are those sites that have been remediated and removed from the NPL. The imagery was processed to radiance and atmospherically corrected to relative reflectance with standard software routines using the Environment for Visualizing Imagery (ENVI, ITT–VIS, Boulder, Colorado) software. Standard routines for anomaly detection, endmember collection, vegetation stress, and spectral analysis were applied.

  16. Utility of shallow-water ATRIS images in defining biogeologic processes and self-similarity in skeletal scleractinia, Florida reefs

    USGS Publications Warehouse

    Lidz, B.H.; Brock, J.C.; Nagle, D.B.

    2008-01-01

    A recently developed remote-sensing instrument acquires high-quality digital photographs in shallow-marine settings within water depths of 15 m. The technology, known as the Along-Track Reef-Imaging System, provides remarkably clear, georeferenced imagery that allows visual interpretation of benthic class (substrates, organisms) for mapping coral reef habitats, as intended. Unforeseen, however, are functions new to the initial technologic purpose: interpretable evidence for real-time biogeologic processes and for perception of scaled-up skeletal self-similarity of scleractinian microstructure. Florida reef sea trials lacked the grid structure required to map contiguous habitat and submarine topography. Thus, only general observations could be made relative to times and sites of imagery. Degradation of corals was nearly universal; absence of reef fish was profound. However, ~1% of more than 23,600 sea-trial images examined provided visual evidence for local environs and processes. Clarity in many images was so exceptional that small tracks left by organisms traversing fine-grained carbonate sand were visible. Other images revealed a compelling sense, not yet fully understood, of the microscopic wall structure characteristic of scleractinian corals. Conclusions drawn from classifiable images are that demersal marine animals, where imaged, are oblivious to the equipment and that the technology has strong capabilities beyond mapping habitat. Imagery acquired along predetermined transects that cross a variety of geomorphic features within depth limits will (1) facilitate construction of accurate contour maps of habitat and bathymetry without need for ground-truthing, (2) contain a strong geologic component of interpreted real-time processes as they relate to imaged topography and regional geomorphology, and (3) allow cost-effective monitoring of regional- and local-scale changes in an ecosystem by use of existing-image global-positioning system coordinates to re-image areas. Details revealed in the modern setting have taphonomic implications for what is often found in the geologic record.

  17. Exploiting passive polarimetric imagery for remote sensing applications

    NASA Astrophysics Data System (ADS)

    Vimal Thilak Krishna, Thilakam

    Polarization is a property of light or electromagnetic radiation that conveys information about the orientation of the transverse electric and magnetic fields. The polarization of reflected light complements other electromagnetic radiation attributes such as intensity, frequency, or spectral characteristics. A passive polarization based imaging system records the polarization state of light reflected by objects that are illuminated with an unpolarized and generally uncontrolled source. The polarization due to surface reflections from such objects contains information about the targets that can be exploited in remote sensing applications such as target detection, target classification, object recognition and shape extraction/recognition. In recent years, there has been renewed interest in the use of passive polarization information in remote sensing applications. The goal of our research is to design image processing algorithms for remote sensing applications by utilizing physics-based models that describe the polarization imparted by optical scattering from an object. In this dissertation, we present a method to estimate the complex index of refraction and reflection angle from multiple polarization measurements. This method employs a polarimetric bidirectional reflectance distribution function (pBRDF) that accounts for polarization due to specular scattering. The parameters of interest are derived by utilizing a nonlinear least squares estimation algorithm, and computer simulation results show that the estimation accuracy generally improves with an increasing number of source position measurements. Furthermore, laboratory results indicate that the proposed method is effective for recovering the reflection angle and that the estimated index of refraction provides a feature vector that is robust to the reflection angle. We also study the use of extracted index of refraction as a feature vector in designing two important image processing applications, namely image segmentation and material classification so that the resulting systems are largely invariant to illumination source location. This is in contrast to most passive polarization-based image processing algorithms proposed in the literature that employ quantities such as Stokes vectors and the degree of polarization and which are not robust to changes in illumination conditions. The estimated index of refraction, on the other hand, is invariant to illumination conditions and hence can be used as an input to image processing algorithms. The proposed estimation framework also is extended to the case where the position of the observer (camera) moves between measurements while that of the source remains fixed. Finally, we explore briefly the topic of parameter estimation for a generalized model that accounts for both specular and volumetric scattering. A combination of simulation and experimental results are provided to evaluate the effectiveness of the above methods.
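
    As a hedged sketch of the estimation step (not the dissertation's exact pBRDF), the example below fits a complex index of refraction to degree-of-linear-polarization measurements of specular reflection at several known incidence angles using nonlinear least squares; the Fresnel-based forward model and all parameter values are assumptions for illustration.

      # Minimal sketch: estimate (n, k) from polarimetric measurements via nonlinear least squares.
      import numpy as np
      from scipy.optimize import least_squares

      def fresnel_dolp(theta, n, k):
          """Degree of linear polarization of unpolarized light specularly reflected
          from a surface with complex refractive index m = n - i k (air incidence)."""
          m = n - 1j * k
          cos_i, sin_i = np.cos(theta), np.sin(theta)
          cos_t = np.sqrt(1.0 - (sin_i / m) ** 2)           # complex transmission angle
          rs = (cos_i - m * cos_t) / (cos_i + m * cos_t)    # s-polarized amplitude
          rp = (m * cos_i - cos_t) / (m * cos_i + cos_t)    # p-polarized amplitude
          Rs, Rp = np.abs(rs) ** 2, np.abs(rp) ** 2
          return (Rs - Rp) / (Rs + Rp)

      def estimate_index(thetas, dolp_measured):
          residual = lambda x: fresnel_dolp(thetas, x[0], x[1]) - dolp_measured
          fit = least_squares(residual, x0=[1.5, 0.1], bounds=([1.0, 0.0], [4.0, 5.0]))
          return fit.x  # estimated (n, k)

      # Synthetic example: measurements at several source geometries with small noise
      thetas = np.radians([30.0, 40.0, 50.0, 60.0, 70.0])
      dolp = fresnel_dolp(thetas, 1.6, 0.2) + np.random.normal(0.0, 0.005, thetas.size)
      print(estimate_index(thetas, dolp))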

  18. Science plan for the Alaska SAR facility program. Phase 1: Data from the first European sensing satellite, ERS-1

    NASA Technical Reports Server (NTRS)

    Carsey, Frank D.

    1989-01-01

    Science objectives, opportunities and requirements are discussed for the utilization of data from the Synthetic Aperture Radar (SAR) on the European First Remote Sensing Satellite, to be flown by the European Space Agency in the early 1990s. The principal applications of the imaging data are in studies of geophysical processes taking place within the direct-reception area of the Alaska SAR Facility in Fairbanks, Alaska, essentially the area within 2000 km of the receiver. The primary research that will be supported by these data include studies of the oceanography and sea ice phenomena of Alaskan and adjacent polar waters and the geology, glaciology, hydrology, and ecology of the region. These studies focus on the area within the reception mask of ASF, and numerous connections are made to global processes and thus to the observation and understanding of global change. Processes within the station reception area both affect and are affected by global phenomena, in some cases quite critically. Requirements for data processing and archiving systems, prelaunch research, and image processing for geophysical product generation are discussed.

  19. Optimization of spectral bands for hyperspectral remote sensing of forest vegetation

    NASA Astrophysics Data System (ADS)

    Dmitriev, Egor V.; Kozoderov, Vladimir V.

    2013-10-01

    Optimization principles for selecting the most informative spectral channels in hyperspectral remote sensing data processing serve to enhance the efficiency of the high-performance computers employed. The problem of pattern recognition of remotely sensed land surface objects, with an emphasis on forests, is outlined from the point of view of spectral channel optimization on the processed hyperspectral images. The relevant computational procedures are tested using images obtained by a hyperspectral camera produced in Russia, installed on a gyro-stabilized platform for airborne flight campaigns. A Bayesian classifier is used for the pattern recognition of forests with different tree species and ages. A probabilistically optimal algorithm constructed on the basis of the maximum likelihood principle is described to minimize the probability of misclassification given by this classifier. The classification error is the main metric used to estimate the accuracy of the applied algorithm by the well-known holdout cross-validation method. Details of the related techniques are presented. Results are shown for selecting the spectral channels of the camera while processing the images, taking into account radiometric distortions that diminish the classification accuracy. The spectral channels are selected for the subclasses obtained from the proposed validation techniques, and confusion matrices are constructed that characterize the age composition of the classified pine species as well as the broad age-class recognition for pine and birch species with fully illuminated crown parts.
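
    A minimal sketch of the band-selection idea, under the assumption that the Bayesian classifier can be approximated by a Gaussian maximum-likelihood (quadratic discriminant) classifier and that channels are chosen greedily by holdout classification accuracy; X and y are hypothetical pixel samples and class labels, and this is not the authors' implementation.

      # Greedy forward selection of spectral channels by holdout validation accuracy.
      import numpy as np
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
      from sklearn.model_selection import train_test_split

      def select_channels(X, y, n_select=10, seed=0):
          """X: (n_pixels, n_channels) hyperspectral samples; y: class labels."""
          X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=seed)
          selected, remaining = [], list(range(X.shape[1]))
          while len(selected) < n_select and remaining:
              scores = []
              for ch in remaining:
                  cols = selected + [ch]
                  clf = QuadraticDiscriminantAnalysis().fit(X_tr[:, cols], y_tr)
                  scores.append(clf.score(X_va[:, cols], y_va))  # 1 - misclassification rate
              best = remaining[int(np.argmax(scores))]
              selected.append(best)
              remaining.remove(best)
          return selected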

  20. Impact of remote sensing upon the planning, management, and development of water resources

    NASA Technical Reports Server (NTRS)

    Castruccio, P. A.; Loats, H. L.; Fowler, T. R.; Frech, S. L.

    1975-01-01

    Principal water resources users were surveyed to determine the impact of remote data streams on hydrologic computer models. Analysis of responses demonstrated that: most water resources effort suitable to remote sensing inputs is conducted through federal agencies or through federally stimulated research; and, most hydrologic models suitable to remote sensing data are federally developed. Computer usage by major water resources users was analyzed to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era.

  1. Hi-Tech for Archeology

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Remote sensing is the process of acquiring physical information from a distance, obtaining data on Earth features from a satellite or an airplane. Advanced remote sensing instruments detect radiations not visible to the ordinary camera or the human eye in several bands of the spectrum. These data are computer processed to produce multispectral images that can provide enormous amounts of information about Earth objects or phenomena. Since every object on Earth emits or reflects radiation in its own unique signature, remote sensing data can be interpreted to tell the difference between one type of vegetation and another, between densely populated urban areas and lightly populated farmland, between clear and polluted water or in the archeological application between rain forest and hidden man made structures.

  2. Low-cost compact thermal imaging sensors for body temperature measurement

    NASA Astrophysics Data System (ADS)

    Han, Myung-Soo; Han, Seok Man; Kim, Hyo Jin; Shin, Jae Chul; Ahn, Mi Sook; Kim, Hyung Won; Han, Yong Hee

    2013-06-01

    This paper presents a 32x32 microbolometer thermal imaging sensor for human body temperature measurement. Wafer-level vacuum packaging technology allows a low-cost and compact imaging sensor chip. The microbolometer uses a V-W-O film as the sensing material, and the ROIC has been designed in a 0.35-um CMOS process at UMC. Thermal images of a human face and a hand acquired using an f/1 lens demonstrate its potential for commercial human body temperature measurement.

  3. Using multi-level remote sensing and ground data to estimate forest biomass resources in remote regions: a case study in the boreal forests of interior Alaska

    Treesearch

    Hans-Erik Andersen; Strunk Jacob; Hailemariam Temesgen; Donald Atwood; Ken Winterberger

    2012-01-01

    The emergence of a new generation of remote sensing and geopositioning technologies, as well as increased capabilities in image processing, computing, and inferential techniques, have enabled the development and implementation of increasingly efficient and cost-effective multilevel sampling designs for forest inventory. In this paper, we (i) describe the conceptual...

  4. Direct Reflectance Measurements from Drones: Sensor Absolute Radiometric Calibration and System Tests for Forest Reflectance Characterization.

    PubMed

    Hakala, Teemu; Markelin, Lauri; Honkavaara, Eija; Scott, Barry; Theocharous, Theo; Nevalainen, Olli; Näsi, Roope; Suomalainen, Juha; Viljanen, Niko; Greenwell, Claire; Fox, Nigel

    2018-05-03

    Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common as they provide more abundant information about the object than traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for use in further processing. Absolute calibration of the sensor provides a possibility for physical modelling of the imaging process and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurements for drone-based remote sensing. It is based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications as it does not require in situ reflectance panels for converting the sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer (FPI) based hyperspectral camera at the National Physical Laboratory NPL (Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of different channels was within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels that were due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK).
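
    A hedged sketch of the underlying direct-reflectance principle (the paper's processing chain is more elaborate): with an absolutely calibrated at-sensor radiance L and a simultaneously measured downwelling irradiance E, the reflectance factor of a near-Lambertian target is approximately pi * L / E. The variable names and units below are assumptions.

      # Minimal sketch: per-band reflectance factor from radiance and irradiance.
      import numpy as np

      def reflectance_factor(radiance, irradiance):
          """radiance in W m^-2 sr^-1 nm^-1, irradiance in W m^-2 nm^-1, per spectral band."""
          radiance = np.asarray(radiance, dtype=float)
          irradiance = np.asarray(irradiance, dtype=float)
          return np.pi * radiance / irradiance

      # Example with made-up values for two bands
      print(reflectance_factor([0.012, 0.020], [0.95, 1.10]))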

  5. Direct Reflectance Measurements from Drones: Sensor Absolute Radiometric Calibration and System Tests for Forest Reflectance Characterization

    PubMed Central

    Hakala, Teemu; Scott, Barry; Theocharous, Theo; Näsi, Roope; Suomalainen, Juha; Greenwell, Claire; Fox, Nigel

    2018-01-01

    Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common as they provide more abundant information about the object than traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for use in further processing. Absolute calibration of the sensor provides a possibility for physical modelling of the imaging process and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurements for drone-based remote sensing. It is based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications as it does not require in situ reflectance panels for converting the sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer (FPI) based hyperspectral camera at the National Physical Laboratory NPL (Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of different channels was within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels that were due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK). PMID:29751560

  6. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.

  7. Study of the wide area of a lake with remote sensing

    NASA Astrophysics Data System (ADS)

    Lazaridou, Maria A.; Karagianni, Aikaterini C.

    2016-08-01

    Water bodies are particularly important for environment and development issues. Their study requires multiple sources of information, and remote sensing has proven useful in this regard. This paper concerns the wide area of Lake Orestiada in the region of Western Macedonia in Greece. The area is of particular interest because Lake Orestiada is included in the Natura 2000 network and is surrounded by diverse land covers such as built-up areas and agricultural land. Multispectral and thermal Landsat 5 satellite images from two time periods are used, and their processing is performed with Erdas Imagine software. The general physiognomy of the area and the lake shore are examined using image enhancement techniques and image interpretation. The study addresses geomorphological aspects, land cover, estimation of surface temperature, and changes through time.

  8. Extraction of Greenhouse Areas with Image Processing Methods in Karabuk Province

    NASA Astrophysics Data System (ADS)

    Yildirima, M. Z.; Ozcan, C.

    2017-11-01

    Greenhouses allow environmental conditions to be controlled and regulated as desired, so that agricultural products can be produced without being affected by external environmental conditions. High-quality and widely varied agricultural products can be produced throughout the year. In addition, mapping and detection of these areas is of great importance for factors such as yield analysis, natural resource management and environmental impact. Various remote sensing techniques are currently available for the extraction of greenhouse areas. These techniques are based on the automatic detection and interpretation of objects in remotely sensed images. In this study, greenhouse areas were determined from optical images obtained from Landsat. The study was carried out in the greenhouse areas of Karabuk province. The obtained results are presented with figures and tables.

  9. Ultrafast water sensing and thermal imaging by a metal-organic framework with switchable luminescence

    NASA Astrophysics Data System (ADS)

    Chen, Ling; Ye, Jia-Wen; Wang, Hai-Ping; Pan, Mei; Yin, Shao-Yun; Wei, Zhang-Wen; Zhang, Lu-Yin; Wu, Kai; Fan, Ya-Nan; Su, Cheng-Yong

    2017-06-01

    A convenient, fast and selective water analysis method is highly desirable in industrial and detection processes. Here a robust microporous Zn-MOF (metal-organic framework, Zn(hpi2cf)(DMF)(H2O)) is assembled from a dual-emissive H2hpi2cf (5-(2-(5-fluoro-2-hydroxyphenyl)-4,5-bis(4-fluorophenyl)-1H-imidazol-1-yl)isophthalic acid) ligand that exhibits characteristic excited state intramolecular proton transfer (ESIPT). This Zn-MOF contains amphipathic micropores (<3 Å) and undergoes extremely facile single-crystal-to-single-crystal transformation driven by reversible removal/uptake of coordinating water molecules simply stimulated by dry gas blowing or gentle heating at 70 °C, manifesting an excellent example of dynamic reversible coordination behaviour. The interconversion between the hydrated and dehydrated phases can turn the ligand ESIPT process on or off, resulting in sensitive two-colour photoluminescence switching over cycles. Therefore, this Zn-MOF represents an excellent PL water-sensing material, showing a fast (on the order of seconds) and highly selective response to water on a molecular level. Furthermore, paper or in situ grown ZnO-based sensing films have been fabricated and applied in humidity sensing (RH<1%), detection of traces of water (<0.05% v/v) in various organic solvents, thermal imaging and as a thermometer.

  10. Seamless contiguity method for parallel segmentation of remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Geng; Wang, Guanghui; Yu, Mei; Cui, Chengling

    2015-12-01

    Seamless contiguity is the key technology for parallel segmentation of large volumes of remote sensing data. It can effectively integrate the fragments produced by parallel processing into reasonable results for subsequent processes. Numerous methods for seamless contiguity have been reported in the literature, such as establishing buffers, merging area boundaries and data sewing. We propose a new method that is also based on building buffers. The seamless contiguity process we adopt is based on two principles: ensuring the accuracy of boundaries and ensuring the correctness of topology. Firstly, the number of blocks is computed based on the data processing capacity; unlike establishing buffers on both sides of the block line, a buffer is established only on the right side and underside of the line. Each block of data is segmented separately to obtain the segmentation objects and their label values. Secondly, one block (called the master block) is chosen and stitched to the adjacent blocks (called slave blocks), and the remaining blocks are processed in sequence. Through the above processing, the topological relationships and boundaries of the master block are guaranteed. Thirdly, if the master-block polygon boundaries intersect the buffer boundary and the slave-block polygon boundaries intersect the block line, certain rules are adopted to merge or trade off between them. Fourthly, the topology and boundaries in the buffer area are checked. Finally, a set of experiments was conducted to prove the feasibility of this method. This novel seamless contiguity algorithm provides an applicable and practical solution for efficient segmentation of massive remote sensing images.

  11. Handheld hyperspectral imager for standoff detection of chemical and biological aerosols

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Jensen, James O.; McAnally, Gerard

    2004-02-01

    Pacific Advanced Technology has developed a small handheld imaging spectrometer, Sherlock, for gas leak and aerosol detection and imaging. The system is based on a patented technique that uses diffractive optics and image processing algorithms to detect spectral information about objects in the scene of the camera (IMSS, Image Multi-Spectral Sensing). This camera has been tested at Dugway Proving Ground and the Dstl Porton Down facility looking at chemical and biological agent simulants. The camera has been used to investigate surfaces contaminated with chemical agent simulants. In addition to chemical and biological detection, the camera has been used for environmental monitoring of greenhouse gases and is currently undergoing extensive laboratory and field testing by the Gas Technology Institute, British Petroleum and Shell Oil for applications in gas leak detection and repair. The camera contains an embedded PowerPC and a real-time image processor for performing image processing algorithms to assist in the detection and identification of gas-phase species in real time. In this paper we present an overview of the technology and show how it has performed for different applications, such as gas leak detection, surface contamination, remote sensing and surveillance. In addition, a sampling of the results from TRE field testing at Dugway in July 2002 and at Dstl Porton Down in September 2002 is given.

  12. Quantitative optical imaging and sensing by joint design of point spread functions and estimation algorithms

    NASA Astrophysics Data System (ADS)

    Quirin, Sean Albert

    The joint application of tailored optical Point Spread Functions (PSF) and estimation methods is an important tool for designing quantitative imaging and sensing solutions. By enhancing the information transfer encoded by the optical waves into an image, matched post-processing algorithms are able to complete tasks with improved performance relative to conventional designs. In this thesis, new engineered PSF solutions with image processing algorithms are introduced and demonstrated for quantitative imaging using information-efficient signal processing tools and/or optical-efficient experimental implementations. The use of a 3D engineered PSF, the Double-Helix (DH-PSF), is applied as one solution for three-dimensional, super-resolution fluorescence microscopy. The DH-PSF is a tailored PSF which was engineered to have enhanced information transfer for the task of localizing point sources in three dimensions. Both an information- and optical-efficient implementation of the DH-PSF microscope are demonstrated here for the first time. This microscope is applied to image single-molecules and micro-tubules located within a biological sample. A joint imaging/axial-ranging modality is demonstrated for application to quantifying sources of extended transverse and axial extent. The proposed implementation has improved optical-efficiency relative to prior designs due to the use of serialized cycling through select engineered PSFs. This system is demonstrated for passive-ranging, extended Depth-of-Field imaging and digital refocusing of random objects under broadband illumination. Although the serialized engineered PSF solution is an improvement over prior designs for the joint imaging/passive-ranging modality, it requires the use of multiple PSFs---a potentially significant constraint. Therefore an alternative design is proposed, the Single-Helix PSF, where only one engineered PSF is necessary and the chromatic behavior of objects under broadband illumination provides the necessary information transfer. The matched estimation algorithms are introduced along with an optically-efficient experimental system to image and passively estimate the distance to a test object. An engineered PSF solution is proposed for improving the sensitivity of optical wave-front sensing using a Shack-Hartmann Wave-front Sensor (SHWFS). The performance limits of the classical SHWFS design are evaluated and the engineered PSF system design is demonstrated to enhance performance. This system is fabricated and the mechanism for additional information transfer is identified.

  13. Recent advances in remote sensing; Proceedings of the First International Geoscience and Remote Sensing Symposium, Washington, DC, June 8-10, 1981

    NASA Technical Reports Server (NTRS)

    Mcintosh, R.

    1982-01-01

    The state of the art in remote sensing of the earth and the planets was discussed in terms of sensor performance, signal processing, and data interpretation. Particular attention was given to lidar for characterizing atmospheric particulates, the modulation of short waves by long ocean gravity waves, and runoff modeling for snow-covered areas. The use of NOAA-6 spacecraft AVHRR data to explore hydrologic land surface features, the effects of soil moisture and vegetation canopies on microwave and thermal microwave emissions, and regional scale evapotranspiration rate determination through satellite IR data are examined. A Shuttle experiment to demonstrate high accuracy global time and frequency transfer is described, along with features of the proposed Gravsat, radar image processing for rock-type discrimination, and passive microwave sensing of temperature and salinity in coastal zones.

  14. Target detection method by airborne and spaceborne images fusion based on past images

    NASA Astrophysics Data System (ADS)

    Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng

    2017-11-01

    To address the low utilization of past remote sensing data of the target area and the inability of existing remote sensing target detection methods to recognize camouflaged targets accurately, a target detection method based on the fusion of airborne and spaceborne images and on past imagery is proposed in this paper. Past spaceborne remote sensing images of the target area are taken as the background. The airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne and spaceborne image registration, target change feature extraction, background noise suppression and artificial target feature extraction based on real-time aerial optical remote sensing imagery. Finally, a support vector machine is used to detect and recognize targets from the fused feature data. The experimental results show that the proposed method combines the target-area change features of airborne and spaceborne remote sensing images with a target detection algorithm, and obtains good detection and recognition performance on both camouflaged and non-camouflaged targets.
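
    A minimal sketch of the final classification step only (the fusion, registration and feature-extraction stages above are assumed to have produced the feature vectors; this is not the authors' implementation).

      # Support vector machine for target detection on fused feature vectors.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def train_target_detector(features, labels):
          """features: (n_samples, n_features) fused descriptors; labels: 1 = target, 0 = background."""
          clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
          return clf.fit(features, labels)

      def detect(clf, candidate_features):
          return clf.predict(candidate_features)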

  15. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise and improves the quality of the reconstructed image. This method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high-quality image signals under under-sampling conditions.

  16. Mathematical model of a DIC position sensing system within an optical trap

    NASA Astrophysics Data System (ADS)

    Wulff, Kurt D.; Cole, Daniel G.; Clark, Robert L.

    2005-08-01

    The quantitative study of displacements and forces of motor proteins and processes that occur at the microscopic level and below require a high level of sensitivity. For optical traps, two techniques for position sensing have been accepted and used quite extensively: quadrant photodiodes and an interferometric position sensing technique based on DIC imaging. While quadrant photodiodes have been studied in depth and mathematically characterized, a mathematical characterization of the interferometric position sensor has not been presented to the authors' knowledge. The interferometric position sensing method works off of the DIC imaging capabilities of a microscope. Circularly polarized light is sent into the microscope and the Wollaston prism used for DIC imaging splits the beam into its orthogonal components, displacing them by a set distance determined by the user. The distance between the axes of the beams is set so the beams overlap at the specimen plane and effectively share the trapped microsphere. A second prism then recombines the light beams and the exiting laser light's polarization is measured and related to position. In this paper we outline the mathematical characterization of a microsphere suspended in an optical trap using a DIC position sensing method. The sensitivity of this mathematical model is then compared to the QPD model. The mathematical model of a microsphere in an optical trap can serve as a calibration curve for an experimental setup.

  17. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm.

    PubMed

    Yang, Mengzhao; Song, Wei; Mei, Haibin

    2017-07-23

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved in not only storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval of massive ocean RS images via a Cloud-based mean-shift algorithm. Distributed construction method via the pyramid model is proposed based on the maximum hierarchical layer algorithm and used to realize efficient storage structure of RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method can achieve better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear changes with an increase of RS images, which proves that image retrieval using our method is efficient.
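
    A single-machine sketch of the retrieval idea (the paper runs a canopy-initialized mean-shift under Hadoop MapReduce over a pyramid HDFS layout, which is not reproduced here): cluster per-image feature vectors with mean shift, then answer a query by returning the images that fall in the same cluster. The descriptor choice is an assumption.

      # Mean-shift clustering of image descriptors for simple retrieval.
      import numpy as np
      from sklearn.cluster import MeanShift, estimate_bandwidth

      def build_index(features):
          """features: (n_images, n_dims) global descriptors, e.g. colour histograms."""
          bw = estimate_bandwidth(features, quantile=0.2)
          return MeanShift(bandwidth=bw).fit(features)

      def retrieve(ms, query_vector):
          """Return indices of indexed images in the same mean-shift cluster as the query."""
          label = ms.predict(np.asarray(query_vector, dtype=float).reshape(1, -1))[0]
          return np.where(ms.labels_ == label)[0]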

  18. Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm

    PubMed Central

    Song, Wei; Mei, Haibin

    2017-01-01

    The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved in not only storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval of massive ocean RS images via a Cloud-based mean-shift algorithm. Distributed construction method via the pyramid model is proposed based on the maximum hierarchical layer algorithm and used to realize efficient storage structure of RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method can achieve better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear changes with an increase of RS images, which proves that image retrieval using our method is efficient. PMID:28737699

  19. "Proximal Sensing" capabilities for snow cover monitoring

    NASA Astrophysics Data System (ADS)

    Valt, Mauro; Salvatori, Rosamaria; Plini, Paolo; Salzano, Roberto; Giusti, Marco; Montagnoli, Mauro; Sigismondi, Daniele; Cagnati, Anselmo

    2013-04-01

    The seasonal snow cover represents one of the most important land cover classes for environmental studies in mountain areas, especially considering its variation over time. Snow cover and its extent play a relevant role in studies of atmospheric dynamics and the evolution of climate. It is also important for the analysis and management of water resources and for the management of touristic activities in mountain areas. Recently, webcam images collected at daily or even hourly intervals are being used as tools to observe snow-covered areas; those images, properly processed, can be considered a very important environmental data source. Images captured by digital cameras become a useful tool at the local scale, providing images even when cloud cover makes observation by satellite sensors impossible. When suitably processed, these images can be used for scientific purposes, having a good resolution (at least 800x600 with 16 million colours) and a very good sampling frequency (hourly images taken throughout the whole year). Once stored in databases, those images therefore represent an important source of information for the study of recent climatic changes, for evaluating the available water resources and for analysing the daily surface evolution of the snow cover. The Snow-noSnow software has been specifically designed to automatically detect the extent of snow cover from webcam images with very limited human intervention. The software was tested on images collected in the Alps (ARPAV webcam network) and in the Apennines at a pilot station properly equipped for this project by CNR-IIA. The results obtained through the use of Snow-noSnow are comparable to those achieved by photo-interpretation and can be considered better than the ones obtained using the image segmentation routines implemented in commercial image processing software. Additionally, Snow-noSnow operates in a semi-automatic way and has a reduced processing time. The analysis of this kind of images can represent a useful element to support the interpretation of remote sensing images, especially those provided by high spatial resolution sensors. Keywords: snow cover monitoring, digital images, software, Alps, Apennines.
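
    The Snow-noSnow algorithm itself is not described in this abstract; as a hedged stand-in, the sketch below classifies webcam pixels as snow or no-snow with a simple brightness-and-low-saturation rule, a common starting point for this task, and reports the snow-covered fraction of an optional region of interest.

      # Simple snow / no-snow pixel classification for a webcam frame.
      import numpy as np

      def snow_mask(rgb, brightness_min=170, saturation_max=0.25):
          """rgb: (H, W, 3) uint8 frame; returns a boolean snow mask (thresholds are illustrative)."""
          rgb = rgb.astype(float)
          brightness = rgb.mean(axis=2)
          saturation = (rgb.max(axis=2) - rgb.min(axis=2)) / (rgb.max(axis=2) + 1e-6)
          return (brightness > brightness_min) & (saturation < saturation_max)

      def snow_cover_fraction(rgb, roi_mask=None):
          mask = snow_mask(rgb)
          return mask[roi_mask].mean() if roi_mask is not None else mask.mean()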

  20. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significant negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of the additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise suppression imaging technique will have great applications in remote sensing and security areas.
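
    A hedged sketch of the idea (the paper's exact solver is not given in the abstract): in ghost imaging the bucket measurements satisfy y = A x, where each row of A is a flattened speckle pattern; here bucket values below a noise threshold are zeroed before a sparsity-regularized, compressive-sensing-style reconstruction. The Lasso solver and parameter values are assumptions.

      # Thresholded compressive-sensing-style ghost imaging reconstruction.
      import numpy as np
      from sklearn.linear_model import Lasso

      def reconstruct(patterns, bucket, threshold, alpha=1e-3):
          """patterns: (n_meas, H, W) illumination speckles; bucket: (n_meas,) detector values."""
          n, h, w = patterns.shape
          A = patterns.reshape(n, h * w).astype(float)
          y = np.where(bucket > threshold, bucket, 0.0)   # suppress noise-dominated samples
          y = y - y.mean()                                # remove the DC background
          A = A - A.mean(axis=0)
          x = Lasso(alpha=alpha, positive=True, max_iter=5000).fit(A, y).coef_
          return x.reshape(h, w)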

  1. Region of interest extraction based on multiscale visual saliency analysis for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yinggang; Zhang, Libao; Yu, Xianchuan

    2015-01-01

    Region of interest (ROI) extraction is an important component of remote sensing image processing. However, traditional ROI extraction methods are usually prior-knowledge-based and depend on classification, segmentation, and a global search, which are time-consuming and computationally complex. We propose a more efficient ROI extraction model for remote sensing images based on multiscale visual saliency analysis (MVS), implemented in the CIE L*a*b* color space, which is similar to the visual perception of the human eye. We first extract the intensity, orientation, and color features of the image using different methods: a visual attention mechanism is used to extract the intensity feature using a difference-of-Gaussians template; the integer wavelet transform is used to extract the orientation feature; and color information content analysis is used to obtain the color feature. Then, a new feature-competition method that accounts for the different contributions of each feature map is proposed to calculate the weight of each feature image for combining them into the final saliency map. Qualitative and quantitative experimental results of the MVS model as compared with those of other models show that it is more effective and provides more accurate ROI extraction results with fewer holes inside the ROI.
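
    A minimal sketch of one ingredient only: an intensity conspicuity map built from a difference-of-Gaussians template at several scales. The wavelet-based orientation feature, the colour information content analysis and the feature-competition weighting of the full MVS model are omitted, and the scale pairs are assumptions.

      # Multiscale difference-of-Gaussians intensity saliency map.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dog_saliency(intensity, scales=((1, 4), (2, 8), (4, 16))):
          """intensity: 2-D float array (e.g. the L* channel); returns a map normalized to [0, 1]."""
          saliency = np.zeros_like(intensity, dtype=float)
          for sigma_center, sigma_surround in scales:
              center = gaussian_filter(intensity, sigma_center)
              surround = gaussian_filter(intensity, sigma_surround)
              saliency += np.abs(center - surround)
          saliency -= saliency.min()
          return saliency / (saliency.max() + 1e-9)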

  2. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  3. 1984 European Conference on Optics, Optical Systems and Applications, Amsterdam, Netherlands, October 9-12, 1984, Proceedings

    NASA Astrophysics Data System (ADS)

    Boelger, B.; Ferwerda, H. A.

    Various papers on optics, optical systems, and their applications are presented. The general topics addressed include: laser systems, optical and electrooptical materials and devices; novel spectroscopic techniques and applications; inspection, remote sensing, velocimetry, and gauging; optical design and image formation; holography, image processing, and storage; and integrated and fiber optics. Also discussed are: nonlinear optics; nonlinear photorefractive materials; scattering and diffractions applications in materials processing, deposition, and machining; medical and biological applications; and focus on industry.

  4. An Improved Image Matching Method Based on Surf Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.

    2018-04-01

    Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. These feature matching methods achieve high operating efficiency but suffer from low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory and a series of constraint conditions to increase feature point detection and matching accuracy. First, the color invariant transformation model is applied to the two matching images to retain more color information during the matching process, and information entropy theory is used to extract the most informative content of the two images. Then the SURF algorithm is applied to detect and describe points in the images. Finally, constraint conditions, including Delaunay triangulation construction, a similarity function and a projective invariant, are employed to eliminate mismatches and improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.
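
    A hedged sketch of a SURF-based matching pipeline (assuming OpenCV built with the contrib xfeatures2d module). The paper's specific constraints, i.e. the colour invariant transform, entropy weighting and Delaunay/projective-invariant checks, are approximated here by Lowe's ratio test plus a RANSAC homography check.

      # SURF detection, ratio-test matching and RANSAC-based mismatch elimination.
      import cv2
      import numpy as np

      def surf_match(img1_gray, img2_gray, ratio=0.7):
          surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
          kp1, des1 = surf.detectAndCompute(img1_gray, None)
          kp2, des2 = surf.detectAndCompute(img2_gray, None)
          knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
          good = [m for m, n in knn if m.distance < ratio * n.distance]   # Lowe's ratio test
          src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)      # geometric check
          return H, [g for g, keep in zip(good, inliers.ravel()) if keep]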

  5. A research of road centerline extraction algorithm from high resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Yushan; Xu, Tingfa

    2017-09-01

    Satellite remote sensing technology has become one of the most effective methods for land surface monitoring in recent years, due to its advantages such as short revisit period, large coverage and rich information. Meanwhile, road extraction is an important field in the applications of high resolution remote sensing images. An intelligent and automatic road extraction algorithm with high precision has great significance for transportation, road network updating and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used in road extraction, but the traditional algorithms did not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM) is proposed in this paper, which is shown to be effective for noisy image segmentation. Firstly, the image is segmented using the SFCM. Secondly, the segmentation result is processed by mathematical morphology to remove the joint regions. Thirdly, the road centerlines are extracted by morphological thinning and burr trimming. The average integrity of the centerline extraction algorithm is 97.98%, the average accuracy is 95.36% and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
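
    A minimal sketch of fuzzy c-means segmentation with a spatial term for a grayscale image. The spatial weighting follows the common scheme of averaging memberships over a 3x3 neighbourhood; the paper's SFCM may differ in detail, and all parameters are illustrative.

      # Fuzzy c-means clustering with spatial membership smoothing.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def spatial_fcm(image, c=2, m=2.0, p=1.0, q=1.0, n_iter=30, eps=1e-9):
          x = image.astype(float).ravel()
          h, w = image.shape
          u = np.random.default_rng(0).random((c, x.size))
          u /= u.sum(axis=0)
          for _ in range(n_iter):
              um = u ** m
              centers = (um @ x) / (um.sum(axis=1) + eps)              # cluster centres
              d = np.abs(x[None, :] - centers[:, None]) + eps          # distances to centres
              u = 1.0 / (d ** (2.0 / (m - 1.0)))                       # standard FCM memberships
              u /= u.sum(axis=0)
              spatial = np.stack([uniform_filter(ui.reshape(h, w), size=3).ravel() for ui in u])
              u = (u ** p) * (spatial ** q)                            # spatial reweighting
              u /= u.sum(axis=0) + eps
          return u.argmax(axis=0).reshape(h, w)                        # hard segmentation labels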

  6. Automated training site selection for large-area remote-sensing image analysis

    NASA Astrophysics Data System (ADS)

    McCaffrey, Thomas M.; Franklin, Steven E.

    1993-11-01

    A computer program is presented to select training sites automatically from remotely sensed digital imagery. The basic ideas are to guide the image analyst through the process of selecting typical and representative areas for large-area image classifications while minimizing bias, and to provide an initial list of potential classes for which training sites are required to develop a classification scheme or to verify classification accuracy. Reducing subjectivity in training site selection is achieved by using a purely statistical selection of homogeneous sites, which can then be compared to field knowledge, aerial photography, or other remote-sensing imagery and ancillary data to arrive at a final selection of sites to be used to train the classification decision rules. The selection of the homogeneous sites uses simple tests based on the coefficient of variation, the F-statistic, and the Student's t-statistic. Comparisons of site means are conducted with a linearly growing list of previously located homogeneous pixels. The program supports a common pixel-interleaved digital image format and has been tested on aerial and satellite optical imagery. The program is coded efficiently in the C programming language and was developed under AIX-Unix on an IBM RISC 6000 24-bit color workstation.
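
    The original program is written in C; the sketch below restates its statistical screening in Python under stated assumptions (single-band window statistics, illustrative thresholds, and Welch's t-test standing in for the full coefficient-of-variation / F-test / t-test sequence applied per band).

      # Screening candidate training windows for homogeneity and class membership.
      import numpy as np
      from scipy import stats

      def is_homogeneous(window, cv_max=0.10):
          """Accept a candidate window if its coefficient of variation is small."""
          values = np.asarray(window, dtype=float).ravel()
          cv = values.std(ddof=1) / (values.mean() + 1e-9)
          return cv <= cv_max

      def belongs_to_class(window, class_pixels, alpha=0.05):
          """t-test of means: does the candidate window match an existing homogeneous class?"""
          _, p = stats.ttest_ind(np.ravel(window), np.ravel(class_pixels), equal_var=False)
          return p > alpha   # failing to reject equal means suggests the same class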

  7. Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images

    PubMed Central

    Zhou, Mingyuan; Chen, Haojun; Paisley, John; Ren, Lu; Li, Lingbo; Xing, Zhengming; Dunson, David; Sapiro, Guillermo; Carin, Lawrence

    2013-01-01

    Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature. PMID:21693421

  8. A New Joint-Blade SENSE Reconstruction for Accelerated PROPELLER MRI

    PubMed Central

    Lyu, Mengye; Liu, Yilong; Xie, Victor B.; Feng, Yanqiu; Guo, Hua; Wu, Ed X.

    2017-01-01

    PROPELLER technique is widely used in MRI examinations for being motion insensitive, but it prolongs scan time and is restricted mainly to T2 contrast. Parallel imaging can accelerate PROPELLER and enable more flexible contrasts. Here, we propose a multi-step joint-blade (MJB) SENSE reconstruction to reduce the noise amplification in parallel imaging accelerated PROPELLER. MJB SENSE utilizes the fact that PROPELLER blades contain sharable information and blade-combined images can serve as regularization references. It consists of three steps. First, conventional blade-combined images are obtained using the conventional simple single-blade (SSB) SENSE, which reconstructs each blade separately. Second, the blade-combined images are employed as regularization for blade-wise noise reduction. Last, with virtual high-frequency data resampled from the previous step, all blades are jointly reconstructed to form the final images. Simulations were performed to evaluate the proposed MJB SENSE for noise reduction and motion correction. MJB SENSE was also applied to both T2-weighted and T1-weighted in vivo brain data. Compared to SSB SENSE, MJB SENSE greatly reduced the noise amplification at various acceleration factors, leading to increased image SNR in all simulation and in vivo experiments, including T1-weighted imaging with short echo trains. Furthermore, it preserved motion correction capability and was computationally efficient. PMID:28205602

  9. A New Joint-Blade SENSE Reconstruction for Accelerated PROPELLER MRI.

    PubMed

    Lyu, Mengye; Liu, Yilong; Xie, Victor B; Feng, Yanqiu; Guo, Hua; Wu, Ed X

    2017-02-16

    PROPELLER technique is widely used in MRI examinations for being motion insensitive, but it prolongs scan time and is restricted mainly to T2 contrast. Parallel imaging can accelerate PROPELLER and enable more flexible contrasts. Here, we propose a multi-step joint-blade (MJB) SENSE reconstruction to reduce the noise amplification in parallel imaging accelerated PROPELLER. MJB SENSE utilizes the fact that PROPELLER blades contain sharable information and blade-combined images can serve as regularization references. It consists of three steps. First, conventional blade-combined images are obtained using the conventional simple single-blade (SSB) SENSE, which reconstructs each blade separately. Second, the blade-combined images are employed as regularization for blade-wise noise reduction. Last, with virtual high-frequency data resampled from the previous step, all blades are jointly reconstructed to form the final images. Simulations were performed to evaluate the proposed MJB SENSE for noise reduction and motion correction. MJB SENSE was also applied to both T2-weighted and T1-weighted in vivo brain data. Compared to SSB SENSE, MJB SENSE greatly reduced the noise amplification at various acceleration factors, leading to increased image SNR in all simulation and in vivo experiments, including T1-weighted imaging with short echo trains. Furthermore, it preserved motion correction capability and was computationally efficient.

  10. Tools and Methods for the Registration and Fusion of Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Goshtasby, Arthur Ardeshir; LeMoigne, Jacqueline

    2010-01-01

    Tools and methods for image registration were reviewed. Methods for the registration of remotely sensed data at NASA were discussed. Image fusion techniques were reviewed. Challenges in registration of remotely sensed data were discussed. Examples of image registration and image fusion were given.

  11. [Study on the modeling of earth-atmosphere coupling over rugged scenes for hyperspectral remote sensing].

    PubMed

    Zhao, Hui-Jie; Jiang, Cheng; Jia, Guo-Rui

    2014-01-01

    Adjacency effects may introduce errors in quantitative applications of hyperspectral remote sensing, and their most significant component is the earth-atmosphere coupling radiance. Moreover, the surrounding relief and shadows induce strong changes in hyperspectral images acquired over rugged terrain, so that the spectral characteristics cannot be described accurately. Furthermore, the radiative coupling process between the earth and the atmosphere is more complex over rugged scenes. In order to meet the requirements of real-time processing in data simulation, an equivalent reflectance of the background was developed that takes into account the topography and the geometry between surroundings and targets, based on the radiative transfer process. The contributions of the coupling to the signal at sensor level were then evaluated. This approach was integrated into a sensor-level radiance simulation model and then validated by simulating a set of actual radiance data. The results show that the visual effect of the simulated images is consistent with that of the observed images. It was also shown that the spectral similarity is improved over rugged scenes. In addition, the model precision is maintained at the same level over flat scenes.

  12. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

    Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same reduction factor of 4, the net reduction factor of 4 for the proposed method was significantly higher than the factor of 2.29 achieved by k-t SENSE. The processing time is reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved, achieving both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computational times and the image quality of the proposed method were improved compared to the standard k-t SENSE method.

  13. Analysis On Land Cover In Municipality Of Malang With Landsat 8 Image Through Unsupervised Classification

    NASA Astrophysics Data System (ADS)

    Nahari, R. V.; Alfita, R.

    2018-01-01

    Remote sensing technology has been widely used in geographic information systems to obtain data more quickly, accurately and affordably. One of the advantages of using remote sensing (satellite) imagery is the ability to analyze land cover and land use. The satellite image data used in this study were Landsat 8 images, taken in July 2016, combined with data from the Municipality of Malang government, and the method used was unsupervised classification. Based on the analysis of the satellite images and field observations, 29% of the land in the Municipality of Malang was plantation, 22% was rice field, 12% was residential area, 10% was land with shrubs, and 2% was water (lake/reservoir). The shortcoming of the method was that the remaining 25% of the land was unidentified because it was covered by cloud. Future research is expected to involve cloud-removal processing to minimize the unidentified area.
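
    As an illustration of the unsupervised approach described above, the following sketch clusters a Landsat 8 band stack with k-means; the file name, band count and number of clusters are assumptions, not values from the study.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical Landsat 8 band stack (rows, cols, bands), e.g. exported beforehand
    # from the original GeoTIFF scenes; the file name is an assumption.
    stack = np.load("landsat8_malang_stack.npy").astype(float)
    rows, cols, bands = stack.shape

    # Unsupervised classification: cluster every pixel's spectral vector.
    pixels = stack.reshape(-1, bands)
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pixels)

    # Reshape back to an image; each cluster is later assigned a land-cover meaning
    # (plantation, rice field, residential, shrubs, water, cloud) by visual inspection.
    class_map = labels.reshape(rows, cols)
    ```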

  14. Acquisition of STEM Images by Adaptive Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash

    Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of “most informative” pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of Peak Signal-to-Noise (PSNR) using these three different active learning methods at different percentages of sampled pixels. At the 20% level, all three active learning methods underperform the original CS without active learning. However, they all beat the original CS as more of the “most informative” pixels are sampled. One can also argue that CS equipped with active learning requires fewer sampled pixels to achieve the same value of PSNR than CS with pixels randomly sampled, since all three PSNR curves with active learning grow at a faster pace than that without active learning. For this particular STEM image, by observing the reconstructed images and the sensing masks, we find that while the method based on the RBF kernel acquires samples more uniformly, the one based on entropy samples more areas of significant change, and thus less uniformly. The KL-divergence method performs the best in terms of reconstruction error (PSNR) for this example [8].
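
    A minimal sketch of the adaptive sampling idea, using the local-variance criterion (the first of the three metrics above); the window size, percentages and array names are assumptions, and the BPFA reconstruction itself is not shown.

    ```python
    import numpy as np

    def select_next_pixels(recon, measured_mask, n_new, window=5):
        """Rank unmeasured pixels by the local variance of the current partial
        reconstruction and return the indices of the n_new most informative ones."""
        pad = window // 2
        padded = np.pad(recon, pad, mode="reflect")
        score = np.empty_like(recon, dtype=float)
        for i in range(recon.shape[0]):
            for j in range(recon.shape[1]):
                score[i, j] = padded[i:i + window, j:j + window].var()
        score[measured_mask] = -np.inf          # never re-select already measured pixels
        order = np.argsort(score, axis=None)[::-1][:n_new]
        return np.unravel_index(order, recon.shape)

    # Usage: start from 10% random pixels, reconstruct, then repeatedly add the next 1%.
    # rows, cols = select_next_pixels(current_recon, mask, n_new=int(0.01 * mask.size))
    ```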

  15. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they utilize information and features from both the pixel and its neighborhood, and because the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion first divides all bands of the image into different groups and extracts features from every group according to the properties of each group; three levels of information fusion (data-level, feature-level and decision-level fusion) are then applied to HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification, so to advance the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.
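
    The BPNN classifier mentioned above can be sketched with a standard multilayer perceptron; the file names, hidden-layer size and train/test split are assumptions rather than the configuration used in the paper.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical labelled hyperspectral pixels: X is (n_pixels, n_bands), y is class labels.
    X = np.load("hyperspectral_pixels.npy")
    y = np.load("pixel_labels.npy")
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # A back-propagation-trained network with one hidden layer acts as the BPNN classifier.
    bpnn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    bpnn.fit(X_tr, y_tr)
    print("overall accuracy:", bpnn.score(X_te, y_te))
    ```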

  16. Wave-Optics Analysis of Pupil Imaging

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Bos, Brent J.

    2006-01-01

    Pupil imaging performance is analyzed from the perspective of physical optics. A multi-plane diffraction model is constructed by propagating the scalar electromagnetic field, surface by surface, along the optical path comprising the pupil imaging optical system. Modeling results are compared with pupil images collected in the laboratory. The experimental setup, although generic for pupil imaging systems in general, has application to the James Webb Space Telescope (JWST) optical system characterization, where the pupil images are used as a constraint in the wavefront sensing and control process. Practical design considerations that follow from the diffraction modeling are discussed in the context of the JWST Observatory.
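
    Plane-to-plane propagation of the scalar field, as used in multi-plane diffraction models of this kind, can be sketched with the angular spectrum method; the grid size, wavelength and propagation distance below are arbitrary assumptions, not parameters of the JWST analysis.

    ```python
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        """Propagate a sampled scalar field by distance z using the angular spectrum method."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        kz_sq = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
        # Transfer function; evanescent components (kz_sq <= 0) are discarded.
        prop = np.where(kz_sq > 0,
                        np.exp(2j * np.pi * z * np.sqrt(np.maximum(kz_sq, 0.0))),
                        0.0)
        return np.fft.ifft2(np.fft.fft2(field) * prop)

    # Toy circular pupil with assumed sampling and geometry.
    n, dx, wavelength = 512, 10e-6, 633e-9
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    pupil = (X ** 2 + Y ** 2 < (1.0e-3) ** 2).astype(complex)
    image_plane = angular_spectrum_propagate(pupil, wavelength, dx, z=0.05)
    ```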

  17. Image Processing

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The Computer Graphics Center of North Carolina State University uses LAS, a COSMIC program, to analyze and manipulate data from Landsat and SPOT providing information for government and commercial land resource application projects. LAS is used to interpret aircraft/satellite data and enables researchers to improve image-based classification accuracies. The system is easy to use and has proven to be a valuable remote sensing training tool.

  18. Report to the Congress on the Strategic Defense Initiative, 1991

    DTIC Science & Technology

    1991-05-01

    Indexed excerpts from the report mention ultraviolet and infrared radiation-hardened charge-coupled device imagers, step-stare sensor signal processing algorithms and processors, the Demonstration Experiment (LODE) that resolved central issues associated with wavefront sensing and control, and the 4-meter Large Advanced Mirror Program (LAMP). Figure captions referenced include Figure 4-16, "Firepond CO2 Imaging Radar Demonstration," and Figure 4-17, "IBSS and the Shuttle."

  19. The application of unmanned aerial vehicle remote sensing for monitoring secondary geological disasters after earthquakes

    NASA Astrophysics Data System (ADS)

    Lei, Tianjie; Zhang, Yazhen; Wang, Xingyong; Fu, Jun'e.; Li, Lin; Pang, Zhiguo; Zhang, Xiaolei; Kan, Guangyuan

    2017-07-01

    A remote sensing system fitted to an Unmanned Aerial Vehicle (UAV) can obtain clear, high-resolution aerial imagery. It offers strong real-time capability, flexibility and convenience, low cost, relative freedom from external environmental influences, the ability to fly at low altitude under cloud, and the ability to operate around the clock. When an earthquake occurs, it can safely and reliably reach places that human staff can hardly approach, such as areas hit by secondary geological disasters. The system can respond promptly and precisely to secondary geological disaster monitoring by obtaining first-hand information as quickly as possible, providing a unique emergency response capability and a scientific basis for overall decision-making. It can greatly enhance the capability of on-site disaster emergency teams in data collection and transmission. These advantages give UAV remote sensing an irreplaceable role in monitoring the dynamics and impacts of secondary geological disasters. Taking landslides and barrier lakes as examples, this paper explores the basic application and workflow of UAV remote sensing in disaster emergency relief. UAV high-resolution remote sensing images were exploited to assess the situation in disaster-hit areas and to monitor secondary geological disasters rapidly, systematically and continuously. Furthermore, a rapid quantitative assessment of the distribution and size of landslides and barrier lakes was carried out. The monitoring results can support relevant government departments and rescue teams, providing detailed and reliable scientific evidence for disaster relief and decision-making.

  20. Text recognition and correction for automated data collection by mobile devices

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    Participatory sensing is an approach which allows mobile devices such as mobile phones to be used for data collection, analysis and sharing processes by individuals. Data collection is the first and most important part of a participatory sensing system, but it is time consuming for the participants. In this paper, we discuss automatic data collection approaches for reducing the time required for collection, and increasing the amount of collected data. In this context, we explore automated text recognition on images of store receipts which are captured by mobile phone cameras, and the correction of the recognized text. Accordingly, our first goal is to evaluate the performance of the Optical Character Recognition (OCR) method with respect to data collection from store receipt images. Images captured by mobile phones exhibit some typical problems, and common image processing methods cannot handle some of them. Consequently, the second goal is to address these types of problems through our proposed Knowledge Based Correction (KBC) method used in support of the OCR, and also to evaluate the KBC method with respect to the improvement on the accurate recognition rate. Results of the experiments show that the KBC method improves the accurate data recognition rate noticeably.
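
    The correction step can be illustrated, in a much simpler form than the paper's KBC method, by snapping OCR tokens to a known lexicon; the image file name and the product vocabulary are assumptions, and pytesseract is used only as a generic OCR front end.

    ```python
    import difflib
    import pytesseract
    from PIL import Image

    # Hypothetical smartphone photo of a store receipt.
    raw_text = pytesseract.image_to_string(Image.open("receipt_photo.jpg"))

    # Toy knowledge base: snap each token to the closest known product name when possible.
    # (The paper's KBC method is more elaborate; this only illustrates the correction idea.)
    vocabulary = ["MILK", "BREAD", "BUTTER", "COFFEE", "TOTAL"]
    corrected = []
    for token in raw_text.split():
        match = difflib.get_close_matches(token.upper(), vocabulary, n=1, cutoff=0.7)
        corrected.append(match[0] if match else token)
    print(" ".join(corrected))
    ```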

  1. Performance assessment of a compressive sensing single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Preece, Bradley L.

    2017-04-01

    Conventional sensors measure the light incident at each pixel in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process to recover the image. CS has the potential to acquire imagery with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance between physical considerations, reconstruction accuracy, and reconstruction speed to meet operational requirements. Performance modeling of CS imagers is challenging due to the complexity and nonlinearity of the system and reconstruction algorithm. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of a two-handheld object target set was collected using a shortwave infrared single-pixel CS camera for various ranges and numbers of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of CS modeling techniques are discussed.
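
    A toy one-dimensional reconstruction illustrating the CS measurement-and-recovery cycle discussed above; the signal model, sparsity level and ISTA solver are generic assumptions, not the camera's actual reconstruction algorithm.

    ```python
    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(0)
    n, m = 256, 96                                     # signal length, number of measurements

    # Hypothetical scene that is sparse in the DCT domain.
    coeffs = np.zeros(n)
    coeffs[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
    x_true = idct(coeffs, norm="ortho")

    A = rng.normal(size=(m, n)) / np.sqrt(m)           # random single-pixel measurement patterns
    y = A @ x_true                                     # compressive measurements

    # ISTA: minimise ||y - A idct(c)||^2 + lam * ||c||_1 over DCT coefficients c.
    lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(n)
    for _ in range(500):
        grad = dct(A.T @ (A @ idct(c, norm="ortho") - y), norm="ortho")
        c = np.sign(c - step * grad) * np.maximum(np.abs(c - step * grad) - step * lam, 0.0)

    x_rec = idct(c, norm="ortho")
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```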

  2. Wavelength calibration of imaging spectrometer using atmospheric absorption features

    NASA Astrophysics Data System (ADS)

    Zhou, Jiankang; Chen, Yuheng; Chen, Xinhua; Ji, Yiqun; Shen, Weimin

    2012-11-01

    An imaging spectrometer is a promising remote sensing instrument widely used in many fields, such as hazard forecasting and environmental monitoring. The reliability of its spectral data is decisive for the scientific community. The wavelength positions at the focal plane of the imaging spectrometer shift as pressure and temperature vary or under mechanical vibration. It is difficult for an onboard calibration instrument to maintain the required spectral reference accuracy, and such an instrument also adds weight and volume to the remote sensing platform. Because the spectral images are affected by atmospheric features (carbon dioxide, water vapor and oxygen absorption, and solar Fraunhofer lines), onboard wavelength calibration can be performed from the spectral images themselves. In this paper, wavelength calibration is based on modeled and measured atmospheric absorption spectra. The modeled spectra are constructed with an atmospheric radiative transfer code. The spectral angle is used to determine the best spectral similarity between the modeled and measured spectra and to estimate the wavelength position; the smile shape is obtained by running the matching process across all columns of the data. The method is successfully applied to Hyperion data. The wavelength shift obtained by shape matching of the oxygen absorption feature is comparable to the prelaunch measurements.
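
    The matching step can be sketched as follows: for each candidate wavelength shift, a modeled absorption spectrum is evaluated and compared to the measured column spectrum with the spectral angle. The `model_fn` callable, the shift range and the wavelength grid are assumptions standing in for the radiative-transfer output.

    ```python
    import numpy as np

    def spectral_angle(a, b):
        """Spectral angle (radians) between two spectra of equal length."""
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def estimate_wavelength_shift(measured, model_fn, wavelengths, shifts):
        """Return the shift whose modeled spectrum is most similar to the measured one."""
        angles = [spectral_angle(measured, model_fn(wavelengths + s)) for s in shifts]
        return shifts[int(np.argmin(angles))]

    # Usage sketch: repeating this per detector column yields the smile curve.
    # shifts = np.arange(-2.0, 2.05, 0.05)          # candidate shifts in nm (assumed)
    # smile = [estimate_wavelength_shift(col, model_fn, wl, shifts) for col in columns]
    ```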

  3. Developing a novel UAV (Unmanned Aerial Vehicle) helicopter platform for very high resolution environmental monitoring of catchment processes

    NASA Astrophysics Data System (ADS)

    Freer, J. E.; Richardson, T.; Yang, Z.

    2012-12-01

    Recent advances in remote sensing and geographic information have led the way for the development of hyperspectral sensors and cloud-scanning LIDAR (Light Detection And Ranging). Both technologies can be used to sense environmental processes and capture detailed spatial information, and they are often deployed on ground, aircraft and satellite platforms. Hyperspectral remote sensing, also known as imaging spectroscopy, is a relatively new technology that is currently being investigated by researchers and scientists for the detection and identification of landscapes, terrestrial vegetation, and manmade materials and backgrounds. Many applications could take advantage of hyperspectral remote sensing coupled with detailed surface feature mapping using LIDAR. This embryonic project involves developing the engineering solutions and post-processing techniques needed to realise an ultra-high-resolution helicopter-based environmental sensing platform that can fly at lower altitudes than aircraft systems and can be deployed more frequently. We aim to present this new technology platform in this special session (the only one of its kind in the UK). Initial applications are planned on a range of environmental sensing problems that would benefit from such complex and detailed data. We look forward to displaying and discussing this initiative with colleagues and any potential interest in future collaborative projects.

  4. Developing a novel UAV (Unmanned Aerial Vehicle) helicopter platform for very high resolution environmental monitoring of catchment processes

    NASA Astrophysics Data System (ADS)

    Freer, J.; Richardson, T. S.

    2012-04-01

    Recent advances in remote sensing and geographic information have led the way for the development of hyperspectral sensors and cloud-scanning LIDAR (Light Detection And Ranging). Both technologies can be used to sense environmental processes and capture detailed spatial information, and they are often deployed on ground, aircraft and satellite platforms. Hyperspectral remote sensing, also known as imaging spectroscopy, is a relatively new technology that is currently being investigated by researchers and scientists for the detection and identification of landscapes, terrestrial vegetation, and manmade materials and backgrounds. Many applications could take advantage of hyperspectral remote sensing coupled with detailed surface feature mapping using LIDAR. This embryonic project involves developing the engineering solutions and post-processing techniques needed to realise an ultra-high-resolution helicopter-based environmental sensing platform that can fly at lower altitudes than aircraft systems and can be deployed more frequently. We aim to display this new technology platform in this special session (the only one of its kind in the UK). Initial applications are planned on a range of environmental sensing problems that would benefit from such complex and detailed data. We look forward to displaying and discussing this initiative with colleagues and any potential interest in future collaborative projects.

  5. Airborne remote sensing for geology and the environment; present and future

    USGS Publications Warehouse

    Watson, Ken; Knepper, Daniel H.

    1994-01-01

    In 1988, a group of leading experts from government, academia, and industry attended a workshop on airborne remote sensing sponsored by the U.S. Geological Survey (USGS) and hosted by the Branch of Geophysics. The purpose of the workshop was to examine the scientific rationale for airborne remote sensing in support of government earth science in the next decade. This report has arranged the six resulting working-group reports under two main headings: (1) Geologic Remote Sensing, for the reports on geologic mapping, mineral resources, and fossil fuels and geothermal resources; and (2) Environmental Remote Sensing, for the reports on environmental geology, geologic hazards, and water resources. The intent of the workshop was to provide an evaluation of demonstrated capabilities, their direct extensions, and possible future applications, and this was the organizational format used for the geologic remote sensing reports. The working groups in environmental remote sensing chose to present their reports in a somewhat modified version of this format. A final section examines future advances and limitations in the field. There is a large, complex, and often bewildering array of remote sensing data available. Early remote sensing studies were based on data collected from airborne platforms. Much of that technology was later extended to satellites. The original 80-m-resolution Landsat Multispectral Scanner System (MSS) has now been largely superseded by the 30-m-resolution Thematic Mapper (TM) system that has additional spectral channels. The French satellite SPOT provides higher spatial resolution for channels equivalent to MSS. Low-resolution (1 km) data are available from the National Oceanographic and Atmospheric Administration's AVHRR system, which acquires reflectance and day and night thermal data daily. Several experimental satellites have acquired limited data, and there are extensive plans for future satellites including those of Japan (JERS), Europe (ESA), Canada (Radarsat), and the United States (EOS). There are currently two national airborne remote sensing programs (photography, radar) with data archived at the USGS' EROS Data Center. Airborne broadband multispectral data (comparable to Landsat MSS and TM but involving several more channels) for limited geographic areas also are available for digital processing and analysis. Narrow-band imaging spectrometer data are available for some NASA experiment sites and can be acquired for other locations commercially. Remote sensing data and derivative images, because of the uniform spatial coverage, availability at different resolutions, and digital format, are becoming important data sets for geographic information system (GIS) analyses. Examples range from overlaying digitized geologic maps on remote sensing images and draping these over topography, to maps of mineral distribution and inferred abundance. A large variety of remote sensing data sets are available, with costs ranging from a few dollars per square mile for satellite digital data to a few hundred dollars per square mile for airborne imaging spectrometry. Computer processing and analysis costs routinely surpass these expenses because of the equipment and expertise necessary for information extraction and interpretation. Effective use requires both an understanding of the current methodology and an appreciation of the most cost-effective solution.

  6. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used for imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve the scanning speed tremendously by sampling below the rate required by the Shannon sampling theorem, but it still requires considerable time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional benefit of enabling real-time image display during SICM imaging. In this article, a new block-division method and a new matrix arithmetic operation were proposed to build the block compressive sensing model, and several experiments were carried out to verify the advantages of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.

  7. Object-oriented remote sensing image classification method based on geographic ontology model

    NASA Astrophysics Data System (ADS)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

    Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in remote sensing image classification, gradually replacing the traditional approach of improving algorithms to optimize classification results. This paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model and carries out an object-oriented classification experiment for urban feature classification. The experiment uses the Protégé software developed by Stanford University and the intelligent image analysis software eCognition as the experimental platform, with hyperspectral imagery and Lidar data acquired by flight over DaFeng City, JiangSu, as the main data source. First, the hyperspectral image is used to obtain feature knowledge of the remote sensing image and related spectral indices; second, the Lidar data are used to generate an nDSM (Normalized Digital Surface Model) that provides elevation information; finally, the image feature knowledge, spectral indices and elevation information are combined to build the geographic ontology semantic network model that performs the urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, particularly for building classification. The method not only exploits the advantages of multi-source spatial data such as remote sensing imagery and Lidar data, but also realizes the integration of multi-source spatial data knowledge and its application to remote sensing image classification, providing an effective approach for object-oriented remote sensing image classification in the future.
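
    A minimal sketch of the elevation step described above: the nDSM is the difference between the Lidar surface and terrain models, and a crude building-candidate mask combines height with a low vegetation index. The file names and thresholds are assumptions; the ontology reasoning itself is not reproduced here.

    ```python
    import numpy as np

    # Assumed co-registered grids derived from the Lidar point cloud and the hyperspectral image.
    dsm = np.load("dsm.npy")        # digital surface model (ground + objects)
    dtm = np.load("dtm.npy")        # digital terrain model (bare ground)
    ndvi = np.load("ndvi.npy")      # vegetation index from the hyperspectral bands

    ndsm = dsm - dtm                # normalized DSM: object height above ground

    # Crude rule in the spirit of the semantic model: tall and non-vegetated -> building candidate.
    building_candidates = (ndsm > 2.5) & (ndvi < 0.2)
    ```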

  8. A NDVI assisted remote sensing image adaptive scale segmentation method

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation can effectively delineate the boundaries of objects at different scales. However, for remote sensing images that cover wide areas containing complicated ground objects, the number of suitable segmentation scales and the size of each scale remain difficult to determine accurately, which severely restricts rapid information extraction from remote sensing imagery. Numerous experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For regions containing different targets, different segmentation scale boundaries can be created. The experimental results show that the NDVI-based adaptive segmentation method can effectively create object boundaries for different ground objects in remote sensing images.
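
    The NDVI similarity test at the core of the method can be sketched as below; the band arrays, the epsilon guard and the 0.05 threshold are illustrative assumptions.

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        """Normalized difference vegetation index for float band arrays."""
        return (nir - red) / (nir + red + eps)

    def ndvi_similar(segment_a, segment_b, threshold=0.05):
        """Merge test: two neighbouring segments count as 'similar' when their mean NDVI
        differs by less than the threshold, so the segmentation scale adapts locally."""
        return abs(segment_a.mean() - segment_b.mean()) < threshold

    # segment_a / segment_b would hold the NDVI values of the pixels inside two adjacent segments.
    ```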

  9. Shear Stress Sensing with Elastic Microfence Structures

    NASA Technical Reports Server (NTRS)

    Cisotto, Alexxandra; Palmieri, Frank L.; Saini, Aditya; Lin, Yi; Thurman, Christopher S.; Kim, Jinwook; Kim, Taeyang; Connell, John W.; Zhu, Yong; Gopalarathnam, Ashok

    2015-01-01

    In this work, elastic microfences were generated for the purpose of measuring shear forces acting on a wind tunnel model. The microfences were fabricated in a two part process involving laser ablation patterning to generate a template in a polymer film followed by soft lithography with a two-part silicone. Incorporation of a fluorescent dye was demonstrated as a method to enhance contrast between the sensing elements and the substrate. Sensing elements consisted of multiple microfences prepared at different orientations to enable determination of both shear force and directionality. Microfence arrays were integrated into an optical microscope with sub-micrometer resolution. Initial experiments were conducted on a flat plate wind tunnel model. Both image stabilization algorithms and digital image correlation were utilized to determine the amount of fence deflection as a result of airflow. Initial free jet experiments indicated that the microfences could be readily displaced and this displacement was recorded through the microscope.

  10. Remote Sensing: Analyzing Satellite Images to Create Higher Order Thinking Skills.

    ERIC Educational Resources Information Center

    Marks, Steven K.; And Others

    1996-01-01

    Presents a unit that uses remote-sensing images from satellites and other spacecraft to provide new perspectives of the earth and generate greater global awareness. Relates the levels of Bloom's hierarchy to different aspects of the remote sensing unit to confirm that the concepts and principles of remote sensing and related images belong in…

  11. The sky is the limit: reconstructing physical geography fieldwork from an aerial perspective

    NASA Astrophysics Data System (ADS)

    Williams, R.; Tooth, S.; Gibson, M.; Barrett, B.

    2017-12-01

    In an era of rapid geographical data acquisition, interpretations of remote sensing products (e.g. aerial photographs, satellite images, digital elevation models) are an integral part of many undergraduate geography degree schemes but there are fewer opportunities for collection and processing of primary remote sensing data. Unmanned aerial vehicles (UAVs) provide a relatively cheap opportunity to introduce the principles and practice of airborne remote sensing into fieldcourses, enabling students to learn about image acquisition, data processing and interpretation of derived products. Three case studies illustrate how a low cost DJI Phantom UAV can be used by students to acquire images that can be processed using off the shelf Structure-from-Motion photogrammetry software. Two case studies are drawn from an international fieldcourse that takes students to field sites that are the focus of current funded research whilst a third case study is from a course in topographic mapping. Results from a student questionnaire and analysis of assessed student reports showed that using UAVs in fieldwork enhanced student engagement with themes on their fieldcourse and equipped them with data processing skills. The derivation of bespoke orthophotos and Digital Elevation Models also provided students with opportunities to gain insight into the various data quality issues that are associated with aerial imagery acquisition and topographic reconstruction, although additional training is required to maximise this potential. Recognition of the successes and limitations of this teaching intervention provides scope for improving exercises that use UAVs and other technologies in future fieldcourses. UAVs are enabling both a reconstruction of how we measure the Earth's surface and a reconstruction of how students do fieldwork.

  12. NASA Cold Land Processes Experiment (CLPX 2002/03): Spaceborne remote sensing

    Treesearch

    Robert E. Davis; Thomas H. Painter; Don Cline; Richard Armstrong; Terry Haran; Kyle McDonald; Rick Forster; Kelly Elder

    2008-01-01

    This paper describes satellite data collected as part of the 2002/03 Cold Land Processes Experiment (CLPX). These data include multispectral and hyperspectral optical imaging, and passive and active microwave observations of the test areas. The CLPX multispectral optical data include the Advanced Very High Resolution Radiometer (AVHRR), the Landsat Thematic Mapper/...

  13. NASA Cold Land Processes Experiment (CLPX 2002/03): Airborne remote sensing

    Treesearch

    Don Cline; Simon Yueh; Bruce Chapman; Boba Stankov; Al Gasiewski; Dallas Masters; Kelly Elder; Richard Kelly; Thomas H. Painter; Steve Miller; Steve Katzberg; Larry Mahrt

    2009-01-01

    This paper describes the airborne data collected during the 2002 and 2003 Cold Land Processes Experiment (CLPX). These data include gamma radiation observations, multi- and hyperspectral optical imaging, optical altimetry, and passive and active microwave observations of the test areas. The gamma observations were collected with the NOAA/National Weather Service Gamma...

  14. Experiments with Image Theatre: Accessing and Giving Meaning to Sensory Experiences in Social Anthropology

    ERIC Educational Resources Information Center

    Strauss, Annika

    2017-01-01

    This article puts forward an experiential teaching method for becoming aware of, getting access to, and giving meaning to the sensory experiences that constitute and shape learning processes during social anthropological fieldwork. While social anthropologists use all their senses in the field, the preparation and processing of fieldwork are…

  15. Superresolution parallel magnetic resonance imaging: Application to functional and spectroscopic imaging

    PubMed Central

    Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan

    2009-01-01

    Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low resolution images acquired with multiple receiver coils are combined into a single image with higher spatial resolution using coil sensitivities acquired with high spatial resolution. The effective acceleration of conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimension/s, SURE-SENSE allows acceleration along all encoding directions — for example, two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804
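
    The Tikhonov-regularized inversion referred to above can be written compactly as a regularized normal-equations solve; the toy encoding matrix below is an assumption and omits the actual coil-sensitivity modelling of SURE-SENSE.

    ```python
    import numpy as np

    def tikhonov_solve(E, y, lam):
        """Solve min_x ||E x - y||^2 + lam ||x||^2 via the regularized normal equations."""
        EH = E.conj().T
        return np.linalg.solve(EH @ E + lam * np.eye(E.shape[1]), EH @ y)

    # Toy example: a random complex "encoding" matrix stands in for the coil-weighted
    # low-resolution encoding operator; lam controls the noise amplification trade-off.
    rng = np.random.default_rng(0)
    E = rng.normal(size=(128, 64)) + 1j * rng.normal(size=(128, 64))
    x_true = rng.normal(size=64)
    y = E @ x_true + 0.01 * rng.normal(size=128)
    x_hat = tikhonov_solve(E, y, lam=1e-2)
    ```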

  16. Development of multiple source data processing for structural analysis at a regional scale. [digital remote sensing in geology

    NASA Technical Reports Server (NTRS)

    Carrere, Veronique

    1990-01-01

    Various image processing techniques developed for enhancement and extraction of linear features, of interest to the structural geologist, from digital remote sensing, geologic, and gravity data, are presented. These techniques include: (1) automatic detection of linear features and construction of rose diagrams from Landsat MSS data; (2) enhancement of principal structural directions using selective filters on Landsat MSS, Spacelab panchromatic, and HCMM NIR data; (3) directional filtering of Spacelab panchromatic data using Fast Fourier Transform; (4) detection of linear/elongated zones of high thermal gradient from thermal infrared data; and (5) extraction of strong gravimetric gradients from digitized Bouguer anomaly maps. Processing results can be compared to each other through the use of a geocoded database to evaluate the structural importance of each lineament according to its depth: superficial structures in the sedimentary cover, or deeper ones affecting the basement. These image processing techniques were successfully applied to achieve a better understanding of the transition between Provence and the Pyrenees structural blocks, in southeastern France, for an improved structural interpretation of the Mediterranean region.
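
    Item (3), directional filtering in the Fourier domain, can be sketched as a wedge-shaped frequency mask; the wedge width and the DC-preservation radius are assumptions.

    ```python
    import numpy as np

    def directional_filter(image, angle_deg, half_width_deg=10.0):
        """Keep only Fourier components whose orientation lies within a wedge around
        angle_deg (measured in the frequency plane), then transform back."""
        rows, cols = image.shape
        fy = np.fft.fftfreq(rows)[:, None]
        fx = np.fft.fftfreq(cols)[None, :]
        theta = np.degrees(np.arctan2(fy, fx))
        # Angular distance modulo 180 degrees (orientations are unsigned).
        diff = np.abs((theta - angle_deg + 90.0) % 180.0 - 90.0)
        mask = (diff <= half_width_deg) | (np.hypot(fx, fy) < 1e-3)   # always keep DC
        return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

    # Note: lineaments trending at azimuth a in the image concentrate their energy
    # roughly perpendicular to a in the frequency plane, so angle_deg is chosen accordingly.
    ```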

  17. Fully Integrated Optical Spectrometer in Visible and Near-IR in CMOS.

    PubMed

    Hong, Lingyu; Sengupta, Kaushik

    2017-12-01

    Optical spectrometry in the visible and near-infrared range has a wide range of applications in healthcare, sensing, imaging, and diagnostics. This paper presents the first fully integrated optical spectrometer in a standard bulk CMOS process without custom fabrication, post-processing, or any external optical passive structure such as lenses, gratings, collimators, or mirrors. The architecture exploits metal interconnect layers available in CMOS processes with subwavelength feature sizes to guide, manipulate, control, and diffract light; an integrated photodetector and read-out circuitry to detect the dispersed light; and back-end signal processing for robust spectral estimation. The chip, realized in a bulk 65-nm low-power CMOS process, measures 0.64 mm × 0.56 mm in active area and achieves 1.4 nm peak detection accuracy for continuous-wave excitations between 500 and 830 nm. This work demonstrates the ability to use these metal-optic nanostructures to miniaturize complex optical instrumentation into a new class of optics-free CMOS-based systems-on-chip in the visible and near-IR for various sensing and imaging applications.

  18. Motion analysis report

    NASA Technical Reports Server (NTRS)

    Badler, N. I.

    1985-01-01

    Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.

  19. Computer vision in roadway transportation systems: a survey

    NASA Astrophysics Data System (ADS)

    Loce, Robert P.; Bernal, Edgar A.; Wu, Wencheng; Bala, Raja

    2013-10-01

    There is a worldwide effort to apply 21st century intelligence to evolving our transportation networks. The goals of smart transportation networks are quite noble and manifold, including safety, efficiency, law enforcement, energy conservation, and emission reduction. Computer vision is playing a key role in this transportation evolution. Video imaging scientists are providing intelligent sensing and processing technologies for a wide variety of applications and services. There are many interesting technical challenges including imaging under a variety of environmental and illumination conditions, data overload, recognition and tracking of objects at high speed, distributed network sensing and processing, energy sources, as well as legal concerns. This paper presents a survey of computer vision techniques related to three key problems in the transportation domain: safety, efficiency, and security and law enforcement. A broad review of the literature is complemented by detailed treatment of a few selected algorithms and systems that the authors believe represent the state-of-the-art.

  20. A component-based system for agricultural drought monitoring by remote sensing.

    PubMed

    Dong, Heng; Li, Jun; Yuan, Yanbin; You, Lin; Chen, Chao

    2017-01-01

    In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, the drought-related software and platform development lags behind the theoretical research: current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on multi-dimensional feature space. In view of these problems, this paper aims to fill the gap with a component-based system named RSDMS that facilitates the application of remote sensing to drought monitoring. The system is designed and developed on the Component Object Model (COM) to ensure the flexibility and extendibility of its modules. RSDMS provides general image-related functions such as data management, image display, spatial reference management, and image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring.

  1. A component-based system for agricultural drought monitoring by remote sensing

    PubMed Central

    Yuan, Yanbin; You, Lin; Chen, Chao

    2017-01-01

    In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, the drought-related software and platform development lags behind the theoretical research: current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on multi-dimensional feature space. In view of these problems, this paper aims to fill the gap with a component-based system named RSDMS that facilitates the application of remote sensing to drought monitoring. The system is designed and developed on the Component Object Model (COM) to ensure the flexibility and extendibility of its modules. RSDMS provides general image-related functions such as data management, image display, spatial reference management, and image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring. PMID:29236700

  2. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and ground objects is one of the most common causes of remote sensing image degradation, and it seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, so identifying the motion blur direction and length accurately is crucial for deriving the PSF and restoring the image precisely. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters with the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive, graph-based image segmentation method called GrabCut is adopted to effectively extract the edge of the bright central region of the spectrum, and the motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the Moon after the blur parameters have been estimated. The experimental results verify the effectiveness and robustness of the algorithm.
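
    The direction-estimation step can be sketched by applying the Radon transform to the log-magnitude spectrum and picking the projection angle with the largest variance; this generic version omits the GrabCut segmentation and the whole-column length statistics of the paper.

    ```python
    import numpy as np
    from skimage.transform import radon

    def estimate_blur_direction(image):
        """Rough estimate of the motion-blur direction (degrees) from the spectrum stripes."""
        spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
        spectrum = (spectrum - spectrum.mean()) / spectrum.std()
        angles = np.arange(180)
        sinogram = radon(spectrum, theta=angles, circle=False)
        # The stripes in the spectrum line up with one projection angle, which then
        # shows the strongest variance along the projection axis.
        return angles[int(np.argmax(sinogram.var(axis=0)))]
    ```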

  3. The Feasibility Evaluation of Land Use Change Detection Using GAOFEN-3 Data

    NASA Astrophysics Data System (ADS)

    Huang, G.; Sun, Y.; Zhao, Z.

    2018-04-01

    The GaoFen-3 (GF-3) satellite is the first C-band, multi-polarimetric Synthetic Aperture Radar (SAR) satellite in China. To explore the feasibility of GF-3 data for remote sensing interpretation and land-use change detection, Guangzhou, China was taken as the study area, and full-polarimetric GF-3 images with 8 m resolution from two dates were used as the data source. First, the images were pre-processed by orthorectification, registration and mosaicking, and the land-use digital orthophoto map (DOM) for 2017 was produced for each county. Then, ground objects in the images were classified and interpreted in ArcGIS through artificial visual interpretation combined with auxiliary data, to determine the areas of change and the categories of the changed objects, and change areas were extracted according to a unified change-information extraction principle. Finally, the change detection results were compared with 3 m resolution TerraSAR-X data and a 2 m resolution multi-spectral image, and the accuracy was evaluated. The experimental results show that the accuracy of GF-3 data in detecting changes of ground objects exceeds 75%, and its capability for detecting newly filled soil is better than that of TerraSAR-X data. This verifies the capability of GF-3 data for detecting and monitoring change information and shows that GF-3 can provide effective data support for the remote sensing monitoring of land resources.

  4. Novel ray tracing method for stray light suppression from ocean remote sensing measurements.

    PubMed

    Oh, Eunsong; Hong, Jinsuk; Kim, Sug-Whan; Park, Young-Je; Cho, Seong-Ick

    2016-05-16

    We developed a new integrated ray tracing (IRT) technique to analyze the stray light effect in remotely sensed images. Images acquired with the Geostationary Ocean Color Imager show a radiance level discrepancy at the slot boundary, which is suspected to be a stray light effect. To determine its cause, we developed and adjusted a novel in-orbit stray light analysis method, which consists of three simulated phases (source, target, and instrument). Each phase simulation was performed in a way that used ray information generated from the Sun and reaching the instrument detector plane efficiently. This simulation scheme enabled the construction of the real environment from the remote sensing data, with a focus on realistic phenomena. In the results, even in a cloud-free environment, a background stray light pattern was identified at the bottom of each slot. Variations in the stray light effect and its pattern according to bright target movement were simulated, with a maximum stray light ratio of 8.5841% in band 2 images. To verify the proposed method and simulation results, we compared the results with the real acquired remotely sensed image. In addition, after correcting for abnormal phenomena in specific cases, we confirmed that the stray light ratio decreased from 2.38% to 1.02% in a band 6 case, and from 1.09% to 0.35% in a band 8 case. IRT-based stray light analysis enabled clear determination of the stray light path and candidates in in-orbit circumstances, and the correction process aided recovery of the radiometric discrepancy.

  5. Synchronous atmospheric radiation correction of GF-2 satellite multispectral image

    NASA Astrophysics Data System (ADS)

    Bian, Fuqiang; Fan, Dongdong; Zhang, Yan; Wang, Dandan

    2018-02-01

    GF-2 remote sensing products have been widely used in many fields because of their high-quality information, which provides technical support for macroeconomic decisions. Atmospheric correction is a necessary part of data preprocessing for quantitative high-resolution remote sensing: it eliminates the signal interference along the radiation path caused by atmospheric scattering and absorption, and converts apparent reflectance into the real reflectance of the surface targets. Addressing the fact that current research often lacks atmospheric data synchronized and spatially matched with the surface observation image, this study uses MODIS Level 1B data acquired synchronously to characterize the atmospheric conditions, implements the aerosol retrieval and atmospheric correction process in software, and generates a lookup table for the remote sensing image based on the 6S radiative transfer model (Second Simulation of the Satellite Signal in the Solar Spectrum) to correct the atmospheric effects in multispectral images from the GF-2 satellite PMS-1 payload. Based on the correction results, the pixel histograms of the reflectance in the four spectral bands of PMS-1 are analyzed, and the correction results of the different bands are evaluated. A comparison experiment was then conducted on the same GF-2 image using QUAC, and the average NDVI values of different targets were compared between the two results to assess the influence of adopting synchronous atmospheric data. The study shows that correction with synchronous atmospheric parameters significantly improves the quantitative application of GF-2 remote sensing data.
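
    Once a 6S-style lookup table provides per-band correction coefficients, the apparent-radiance-to-surface-reflectance step can be sketched with the standard 6S output formula below; the coefficient values and the band file are placeholders, not the ones retrieved in the study.

    ```python
    import numpy as np

    def surface_reflectance(radiance, xa, xb, xc):
        """Standard 6S correction: y = xa * L - xb ; rho = y / (1 + xc * y)."""
        y = xa * radiance - xb
        return y / (1.0 + xc * y)

    # Placeholder coefficients for one band (as produced by a 6S run for the matching
    # geometry and the MODIS-derived atmosphere); real values come from the lookup table.
    xa, xb, xc = 2.5e-3, 0.12, 0.18
    band_radiance = np.load("gf2_band1_radiance.npy")     # assumed calibrated radiance grid
    rho_surface = surface_reflectance(band_radiance, xa, xb, xc)
    ```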

  6. Coarse-to-fine wavelet-based airport detection

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Wang, Shuigen; Pang, Zhaofeng; Zhao, Baojun

    2015-10-01

    Airport detection in optical remote sensing images has attracted great interest for applications in military optical reconnaissance and traffic control. However, most of the popular techniques for airport detection from optical remote sensing images have three weaknesses: 1) because of the characteristics of optical images, the detection results are often affected by imaging conditions such as weather and imaging distortion; 2) optical images contain comprehensive information about targets, so it is difficult to extract robust features (e.g., intensity and textural information) to represent airport areas; and 3) the high resolution results in a large data volume, which limits real-time processing. Most previous works focus on solving only one of these problems and therefore cannot balance performance and complexity. In this paper, we propose a novel coarse-to-fine airport detection framework that addresses all three issues using wavelet coefficients. The framework includes two stages: 1) an efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited to classify and coarsely identify airport candidate regions; and then 2) refined line segment detection is used to obtain the runway and landing field of the airport. Finally, airport recognition is achieved by applying the fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms existing algorithms in terms of detection accuracy and processing efficiency.
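
    The coarse stage can be sketched as wavelet-based texture features fed to an SVM; the tile size, wavelet choice and feature statistics are assumptions, and the refined runway line-segment stage is not shown.

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wavelet_texture_features(tile, wavelet="db2", level=2):
        """Mean absolute value and standard deviation of every sub-band as a
        simple multi-scale texture descriptor for one image tile."""
        coeffs = pywt.wavedec2(tile, wavelet, level=level)
        feats = [np.abs(coeffs[0]).mean(), coeffs[0].std()]
        for detail_level in coeffs[1:]:
            for band in detail_level:
                feats += [np.abs(band).mean(), band.std()]
        return np.array(feats)

    # Hypothetical training data: a list of grayscale tiles and 0/1 labels
    # (1 = airport candidate region, 0 = background).
    # X = np.stack([wavelet_texture_features(t) for t in tiles])
    # clf = SVC(kernel="rbf").fit(X, labels)
    ```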

  7. Onboard Classification of Hyperspectral Data on the Earth Observing One Mission

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Tran, Daniel; Schaffer, Steve; Rabideau, Gregg; Davies, Ashley Gerard; Doggett, Thomas; Greeley, Ronald; Ip, Felipe; Baker, Victor; Doubleday, Joshua

    2009-01-01

    Remote-sensed hyperspectral data represents significant challenges in downlink due to its large data volumes. This paper describes a research program designed to process hyperspectral data products onboard spacecraft to (a) reduce data downlink volumes and (b) decrease latency to provide key data products (often by enabling use of lower data rate communications systems). We describe efforts to develop onboard processing to study volcanoes, floods, and cryosphere, using the Hyperion hyperspectral imager and onboard processing for the Earth Observing One (EO-1) mission as well as preliminary work targeting the Hyperspectral Infrared Imager (HyspIRI) mission.

  8. Impact of remote sensing upon the planning, management and development of water resources. Summary of computers and computer growth trends for hydrologic modeling and the input of ERTS image data processing load

    NASA Technical Reports Server (NTRS)

    Castruccio, P. A.; Loats, H. L., Jr.

    1975-01-01

    An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.

  9. Comparison of Open Source Compression Algorithms on Vhr Remote Sensing Images for Efficient Storage Hierarchy

    NASA Astrophysics Data System (ADS)

    Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.

    2016-06-01

    The high resolution of modern satellite imagery brings with it a fundamental problem: the large volume of telemetry data that must be stored after downlink. Moreover, after the post-processing and image enhancement steps applied once an image is acquired, file sizes increase even further, making the data harder to store and more time-consuming to transmit from one source to another; compressing the raw data and the various levels of processed data is therefore a necessity for archiving stations seeking to save space. The lossless compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. To this end, well-known open-source programs supporting the relevant compression algorithms were applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 and 7 satellites (1.5 m GSD), acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested were Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain Algorithm (LZMA and LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate and Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), and their compression performance over the sample datasets was observed in terms of how much the image data can be compressed while ensuring lossless compression.
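
    The comparison methodology can be sketched with the codecs available in the Python standard library (DEFLATE via zlib, LZMA, and the BWT-based bz2); the synthetic raster below is a stand-in, since a real test would read the GeoTIFF bands with a raster library first, and LZW, LZO and PPMd require external tools.

    ```python
    import bz2
    import lzma
    import zlib
    import numpy as np

    # Stand-in 16-bit raster band; a real test would load the GeoTIFF pixels instead.
    band = np.random.default_rng(1).integers(0, 4096, size=(1024, 1024), dtype=np.uint16)
    raw = band.tobytes()

    codecs = [
        ("DEFLATE (zlib)", lambda b: zlib.compress(b, 9)),
        ("LZMA", lzma.compress),
        ("bz2 (BWT-based)", bz2.compress),
    ]
    for name, compress in codecs:
        ratio = len(raw) / len(compress(raw))
        print(f"{name:16s} lossless compression ratio = {ratio:.2f}")
    ```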

  10. The application of the unmanned aerial vehicle remote sensing technology in the FAST project construction

    NASA Astrophysics Data System (ADS)

    Zhu, Boqin

    2015-08-01

    The purpose of using unmanned aerial vehicle (UAV) remote sensing in the Five-hundred-meter Aperture Spherical Telescope (FAST) project is to dynamically record the construction process with high-resolution imagery, monitor the environmental impact, and provide services for local environmental protection and for residents relocated from the reserve. This paper introduces the UAV remote sensing system used and the flight-course design and implementation for the FAST site. Analysis of the time series data showed that: (1) since 2012, the project has been widely carried out; (2) by 2013, the internal works had begun to take shape; (3) the extent of engineering excavation remained stable in 2014, the initial scale of the FAST construction emerged, and vegetation recovery proceeded well on the bare soil areas; and (4) in 2015, no environmental problems caused by the construction, and no other engineering geological disasters, were found in the work area through interpretation of the UAV images. The paper also suggests that the UAV technology needs some improvements to fulfill the requirements of surveying and mapping specifications, including new data acquisition and processing measures suited to highly varied elevation, the use of a telephoto camera, hierarchical photography at different flying heights, and terrain-adaptive adjustment using a joint aerial triangulation method.

  11. Smartphone-based simultaneous pH and nitrite colorimetric determination for paper microfluidic devices.

    PubMed

    Lopez-Ruiz, Nuria; Curto, Vincenzo F; Erenas, Miguel M; Benito-Lopez, Fernando; Diamond, Dermot; Palma, Alberto J; Capitan-Vallvey, Luis F

    2014-10-07

    In this work, an Android application for the measurement of nitrite concentration and pH determination, in combination with a low-cost paper-based microfluidic device, is presented. The application uses seven sensing areas, containing the corresponding immobilized reagents, to produce selective color changes when a sample solution is placed in the sampling area. Under controlled light conditions, using the flash of the smartphone as a light source, the image captured with the built-in camera is processed using a customized algorithm for multidetection of the colored sensing areas. The developed image processing reduces the influence of the light source and of the positioning of the microfluidic device in the picture. Then, the H (hue) and S (saturation) coordinates of the HSV color space are extracted and related to pH and nitrite concentration, respectively. A complete characterization of the sensing elements has been carried out, as well as a full description of the image analysis used for detection. The results demonstrate the suitability of a mobile phone as an analytical instrument. For pH, the resolution obtained is 0.04 pH units, with an accuracy of 0.09 and a mean squared error of 0.167. With regard to nitrite, a resolution of 0.51% at 4.0 mg L(-1) and a limit of detection of 0.52 mg L(-1) were achieved.
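
    A hedged OpenCV/numpy sketch of the HSV read-out described above: the mean hue and saturation of one sensing area are extracted and would then be mapped to pH and nitrite through calibration curves. The region coordinates, file name and calibration step are illustrative placeholders, not the authors' implementation.

        import cv2
        import numpy as np

        def read_sensing_area(image_bgr, x, y, w, h):
            """Return mean hue (H) and saturation (S) of one colored sensing area."""
            roi = image_bgr[y:y + h, x:x + w]
            hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
            h_mean = float(np.mean(hsv[:, :, 0]))   # hue, related to pH
            s_mean = float(np.mean(hsv[:, :, 1]))   # saturation, related to nitrite
            return h_mean, s_mean

        # img = cv2.imread("device_photo.jpg")                  # hypothetical capture
        # hue, sat = read_sensing_area(img, 120, 80, 40, 40)    # placeholder ROI
        # pH would then follow from a calibration curve fitted to reference buffers.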

  12. Feature extraction from multiple data sources using genetic programming

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Brumby, Steven P.; Pope, Paul A.; Eads, Damian R.; Esch-Mosher, Diana M.; Galassi, Mark C.; Harvey, Neal R.; McCulloch, Hersey D.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Bloch, Jeffrey J.; David, Nancy A.

    2002-08-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land cover features including towns, wildfire burnscars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  14. An active co-phasing imaging testbed with segmented mirrors

    NASA Astrophysics Data System (ADS)

    Zhao, Weirui; Cao, Genrui

    2011-06-01

    An active co-phasing imaging testbed with highly accurate optical adjustment and control at the nanometer scale was set up to validate algorithms for piston and tip-tilt error sensing and real-time adjustment. A modular design was adopted. The primary mirror was spherical and divided into three sub-mirrors. One of them was fixed and served as the reference segment; the others were each adjustable relative to the fixed segment in three degrees of freedom (piston, tip and tilt) using sensitive micro-displacement actuators with a range of 15 mm and a resolution of 3 nm. A two-dimensional dispersed fringe analysis method was used to sense the piston error between adjacent segments over a range of 200 μm with a repeatability of 2 nm, and the tip-tilt error was obtained with a centroid sensing method. Co-phased imaging could be realized by correcting the measured errors with the computer-driven micro-displacement actuators. The process of co-phasing error sensing and correction could be monitored in real time by a scrutiny module included in the testbed. A FISBA interferometer was introduced to evaluate the co-phasing performance, and finally a total residual surface error of about 50 nm rms was achieved.

  15. SenseCam: A new tool for memory rehabilitation?

    PubMed

    Dubourg, L; Silva, A R; Fitamen, C; Moulin, C J A; Souchay, C

    2016-12-01

    The emergence of life-logging technologies has led neuropsychologists to focus on understanding how this new technology could help patients with memory disorders. Despite the growing number of studies using life-logging technologies, a theoretical framework supporting its effectiveness is lacking. This review focuses on the use of life-logging in the context of memory rehabilitation, particularly the use of SenseCam, a wearable camera allowing passive image capture. In our opinion, reviewing SenseCam images can be effective for memory rehabilitation only if it provides more than an assessment of prior occurrence, in ways that reinstate previous thoughts, feelings and sensory information, thus stimulating recollection. Considering that self-initiated processes are impaired in memory impairment, we propose that the environmental support hypothesis can explain the value of SenseCam for memory retrieval. Twenty-five research studies were selected for this review and, despite the general acceptance of the value of SenseCam as a memory technique, only a small number of studies focused on recollection. We discuss the usability of this tool to improve episodic memory and, in particular, recollection. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  16. Advanced processing for high-bandwidth sensor systems

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.

    2000-11-01

    Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.

  17. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measuring strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. In order to handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D by a Kronecker product, which sharply increases the size of the dictionary and the computational cost. In this paper, we instead introduce the 2D-SL0 algorithm for image reconstruction. It is proved that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
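
    A small numpy sketch (under the assumption of a separable sampling scheme, as the abstract suggests) of random sub-sampling in range and azimuth; the 2D-SL0 reconstruction itself is not reproduced here, and the sampling rates are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def subsample_2d(echo, range_rate=0.5, azimuth_rate=0.5):
            """Randomly sub-sample an ISAR echo matrix in both dimensions.

            Keeping the two selections separable lets the measurement be written
            as Y = Phi_r @ X @ Phi_a.T, avoiding the very large Kronecker
            dictionary required by a 1D formulation.
            """
            n_r, n_a = echo.shape
            rows = np.sort(rng.choice(n_r, int(range_rate * n_r), replace=False))
            cols = np.sort(rng.choice(n_a, int(azimuth_rate * n_a), replace=False))
            phi_r = np.eye(n_r)[rows]     # range selection matrix
            phi_a = np.eye(n_a)[cols]     # azimuth selection matrix
            return phi_r @ echo @ phi_a.T, phi_r, phi_a

        # y, phi_r, phi_a = subsample_2d(np.random.randn(256, 256))
        # A 2D solver such as 2D-SL0 would then recover X from (y, phi_r, phi_a).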

  18. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal can attempt to pass the low-level authentication. Applying the Orthogonal Matching Pursuit CS reconstruction algorithm, the inverse Arnold transform, the inverse DWT, two-step phase-shifting wavefront reconstruction, and the inverse Fresnel transform yields a remarkable peak in the central location of the nonlinear correlation coefficient distribution of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  19. WinASEAN for remote sensing data analysis

    NASA Astrophysics Data System (ADS)

    Duong, Nguyen Dinh; Takeuchi, Shoji

    The image analysis system ASEAN (Advanced System for Environmental ANalysis with Remote Sensing Data) was designed and programmed by a software development group, ImaSOFr, Department of Remote Sensing Technology and GIS, Institute for Geography, National Centre for Natural Science and Technology of Vietnam, under technical cooperation with the Remote Sensing Technology Centre of Japan and with financial support from the National Space Development Agency of Japan. ASEAN has been in continuous development since 1989, with different versions ranging from the simplest one for MS-DOS with standard VGA 320×200×256 colours, through versions supporting SpeedStar 1.0 and SpeedStar PRO 2.0 true colour graphics cards, up to the latest version, named WinASEAN, which is designed for the Windows 3.1 operating system. The most remarkable feature of WinASEAN is the use of algorithms that speed up the image analysis process, even on PC platforms. Today WinASEAN is continuously improved in cooperation with NASDA (National Space Development Agency of Japan) and RESTEC (Remote Sensing Technology Center of Japan) and released as public domain software for training, research and education through the Regional Remote Sensing Seminar on Tropical Eco-system Management, which is organised by NASDA and ESCAP. In this paper, the authors describe the functionality of WinASEAN and some of the relevant analysis algorithms, and discuss its possibilities for computer-assisted teaching and training of remote sensing.

  20. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so the analysis of their computational complexity is of great interest for determining the most suitable implementation platform. To this end, and since target identification must be completed in real time, the computational burden of both processes, the image generation and the comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804

  1. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so the analysis of their computational complexity is of great interest for determining the most suitable implementation platform. To this end, and since target identification must be completed in real time, the computational burden of both processes, the image generation and the comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  2. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.

  3. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE PAGES

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; ...

    2015-08-13

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
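
    A compact numpy sketch of the coded-aperture measurement model implied above: several fast sub-frames are multiplied by random binary masks and integrated into a single detector frame. The mask statistics and frame count are assumptions, and the statistical CS inversion used for recovery is not shown.

        import numpy as np

        rng = np.random.default_rng(1)

        def coded_aperture_frame(subframes, keep_prob=0.5):
            """Integrate T coded sub-frames (shape (T, H, W)) into one camera frame."""
            T, H, W = subframes.shape
            masks = (rng.random((T, H, W)) < keep_prob).astype(float)
            measurement = np.sum(masks * subframes, axis=0)
            return measurement, masks        # masks are needed for the CS inversion

        # y, masks = coded_aperture_frame(np.random.rand(8, 128, 128))
        # A statistical CS solver would estimate the 8 sub-frames from (y, masks).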

  4. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  5. Automatic archaeological feature extraction from satellite VHR images

    NASA Astrophysics Data System (ADS)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach that works at variable scales, able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) requirements. The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for the reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in visual systems for the recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element either is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction techniques, eCognition and an ENVI software module, were used in order to compare the results. These techniques were applied to different archaeological sites in Turkmenistan (Nisa) and in Iraq (Babylon); a further change detection analysis was applied to the Babylon site using two HR images acquired before and after the Second Gulf War. The outputs differed, since the operative scale of the sensed data determines the final result of the elaboration and the quality of the output information, and each technique was sensitive to specific shapes in each input image; we mapped linear and nonlinear objects, updated the archaeological cartography, and performed automatic change detection analysis for the Babylon site. The discussion of these techniques aims to provide the archaeological team with new instruments for orienting and planning a remote sensing application.
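
    A brief scikit-image sketch of the grey-scale morphological probing the abstract describes; the disk radius is a placeholder to be tuned to the size of the searched structures, and the snippet is only an illustration of the principle, not the paper's workflow.

        from skimage.morphology import closing, disk, opening

        def morphological_residues(gray, radius=5):
            """Highlight structures narrower than the structuring element.

            gray: 2D grey-tone image (float or integer array).
            The top-hat (image minus opening) keeps bright, narrow features;
            the bottom-hat (closing minus image) keeps dark, narrow ones.
            """
            selem = disk(radius)                       # structuring element
            top_hat = gray - opening(gray, selem)      # bright residues
            bottom_hat = closing(gray, selem) - gray   # dark residues
            return top_hat, bottom_hat

        # bright, dark = morphological_residues(panchromatic_tile, radius=7)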

  6. An object-based storage model for distributed remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng

    2006-10-01

    It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path and the management path, which solves the metadata bottleneck of traditional storage models, and offers parallel data access, data sharing across platforms, intelligence in the storage devices and secure data access. We apply object-based storage to the storage management of remote sensing images and construct an object-based storage model for distributed remote sensing images. In this storage model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give test results comparing the write performance of a traditional network storage model with that of the object-based storage model.

  7. Research on Method of Interactive Segmentation Based on Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Li, H.; Han, Y.; Yu, F.

    2017-09-01

    In this paper, we aim to solve the object extraction problem in remote sensing images using interactive segmentation tools. Firstly, an overview of interactive segmentation algorithms is presented. Then, our detailed implementation of intelligent scissors and GrabCut for remote sensing images is described. Finally, several experiments on typical features (water areas, vegetation) in remote sensing images are performed. Comparison with manual results indicates that our tools preserve feature boundaries well and show good performance.
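
    For the GrabCut step, a minimal OpenCV sketch of the interactive workflow (a user-drawn rectangle initializes the segmentation); the rectangle, file name and iteration count are placeholders, not values from the paper.

        import cv2
        import numpy as np

        def grabcut_extract(image_bgr, rect, iters=5):
            """Segment the object inside a user-drawn rectangle with GrabCut."""
            mask = np.zeros(image_bgr.shape[:2], np.uint8)
            bgd_model = np.zeros((1, 65), np.float64)   # internal GMM state
            fgd_model = np.zeros((1, 65), np.float64)
            cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                        iters, cv2.GC_INIT_WITH_RECT)
            # Sure and probable foreground pixels form the extracted object.
            fg = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
            return image_bgr * fg[:, :, None]

        # img = cv2.imread("water_area_subset.tif")              # hypothetical scene
        # obj = grabcut_extract(img, rect=(50, 40, 300, 220))    # user rectangle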

  8. Spectral Imaging from Uavs Under Varying Illumination Conditions

    NASA Astrophysics Data System (ADS)

    Hakala, T.; Honkavaara, E.; Saari, H.; Mäkynen, J.; Kaivosoja, J.; Pesonen, L.; Pölönen, I.

    2013-08-01

    Rapidly developing unmanned aerial vehicles (UAVs) have provided the remote sensing community with a new, rapidly deployable tool for small-area monitoring. The progress of small-payload UAVs has increased the demand for lightweight aerial payloads. For applications requiring aerial images, a simple consumer camera provides acceptable data. For applications requiring more detailed spectral information about the surface, a new spectral imaging technology based on a Fabry-Perot interferometer has been developed. This technology produces tens of successive images of the scene at different wavelength bands in a very short time. These images can be assembled into spectral data cubes with stereoscopic overlaps. In the field, weather conditions vary, and the UAV operator often has to choose between flying in suboptimal conditions and not flying at all. Our objective was to investigate methods for the quantitative radiometric processing of images taken under varying illumination conditions, thus expanding the range of weather conditions during which successful imaging flights can be made. A new method based on in-situ measurement of irradiance, either on the UAV platform or on the ground, was developed. We tested the methods in a precision agriculture application using realistic data collected in difficult illumination conditions. The internal homogeneity of the original image data (average coefficient of variation in overlapping images) was 0.14-0.18. In the corrected data, the homogeneity was 0.10-0.12 with a correction based on broadband irradiance measured on the UAV, 0.07-0.09 with a correction based on spectral irradiance measured on the ground, and 0.05-0.08 with a radiometric block adjustment based on the image data. Our results were very promising, indicating that quantitative UAV-based remote sensing could be operational in diverse conditions, which is a prerequisite for many environmental remote sensing applications.
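
    A hedged sketch of the irradiance-based correction and of the homogeneity measure (coefficient of variation) quoted above; the convention of scaling image values by the ratio of a reference irradiance to the measured irradiance is an assumption for illustration.

        import numpy as np

        def irradiance_correct(dn, e_measured, e_reference):
            """Scale image values so all frames refer to one reference irradiance."""
            return dn * (e_reference / e_measured)

        def homogeneity(overlapping_values):
            """Coefficient of variation of one ground point seen in several images."""
            v = np.asarray(overlapping_values, dtype=float)
            return v.std() / v.mean()

        # frame_corr = irradiance_correct(frame, e_measured=0.82, e_reference=1.00)
        # cv = homogeneity([0.21, 0.19, 0.23, 0.20])   # illustrative reflectances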

  9. Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera

    NASA Astrophysics Data System (ADS)

    Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.

    2016-08-01

    The visibility and clarity of remotely sensed images acquired by consumer-grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog or gaseous smoke particles, caused, for example, by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow bands in the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or to assist search and rescue or similar applications that require high resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution, using a single modified DSLR camera in conjunction with image processing techniques that effectively improve the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G and near infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Data processed using our proposed method show significant visibility improvements compared with other existing solutions.

  10. Better Pictures in a Snap

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Retinex Image Processing, winner of NASA's 1999 Space Act Award, is commercially available through TruView Imaging Company. With this technology, amateur photographers use their personal computers to improve the brightness, scene contrast, detail, and overall sharpness of images with increased ease. The process was originally developed for remote sensing of the Earth by researchers at Langley Research Center and Science and Technology Corporation (STC). It automatically enhances a digital image in terms of dynamic range compression, color independence from the spectral distribution of the scene illuminant, and color/lightness rendition. As a result, the enhanced digital image is much closer to the scene perceived by the human visual system under all kinds and levels of lighting variations. TruView believes there are other applications for the software in medical imaging, forensics, security, reconnaissance, mining, assembly, and other industrial areas.
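
    For orientation, a minimal single-scale Retinex sketch in Python; the Gaussian surround scale is illustrative, and the NASA/TruView product is a multi-scale implementation with additional color restoration, so this is only a simplified illustration of the principle.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def single_scale_retinex(image_rgb, sigma=80.0):
            """R = log(I) - log(F * I), applied per channel (F: Gaussian surround)."""
            img = image_rgb.astype(float) + 1.0                 # avoid log(0)
            surround = gaussian_filter(img, sigma=(sigma, sigma, 0))
            retinex = np.log(img) - np.log(surround)
            retinex -= retinex.min()                            # rescale for display
            return retinex / max(retinex.max(), 1e-12)

        # enhanced = single_scale_retinex(photo)   # photo: H x W x 3 array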

  11. A survey of landmine detection using hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Makki, Ihab; Younes, Rafic; Francis, Clovis; Bianchi, Tiziano; Zucchetti, Massimo

    2017-02-01

    Hyperspectral imaging is a trending technique in remote sensing that finds application in many different areas, such as agriculture, mapping, target detection, food quality monitoring, etc. This technique gives the ability to remotely identify the composition of each pixel of the image. Therefore, it is a natural candidate for landmine detection, thanks to its inherent safety and fast response time. In this paper, we present the results of several studies that employed hyperspectral imaging for landmine detection, discussing the different signal processing techniques used in this framework for hyperspectral image processing and target detection. Our purpose is to highlight the progress attained in the detection of landmines using hyperspectral imaging and to identify possible perspectives for future work, in order to achieve better detection in real-time operation.

  12. Feasibility study of a novel miniaturized spectral imaging system architecture in UAV surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Shuyang; Zhou, Tao; Jia, Xiaodong; Cui, Hushan; Huang, Chengjun

    2016-01-01

    Spectral imaging technology is able to analyze the spectral and spatial geometric characteristics of a target at the same time. To overcome the limitations imposed by the size, weight and cost of traditional spectral imaging instruments, a novel miniaturized spectral imager based on CMOS processing has been introduced to the market. This technology has made it possible to apply spectral imaging on UAV platforms. In this paper, the relevant technology and the related possible applications are presented with the goal of implementing a quick, flexible and more detailed remote sensing system.

  13. Spectral imaging applications: Remote sensing, environmental monitoring, medicine, military operations, factory automation and manufacturing

    NASA Technical Reports Server (NTRS)

    Gat, N.; Subramanian, S.; Barhen, J.; Toomarian, N.

    1996-01-01

    This paper reviews the activities at OKSI related to imaging spectroscopy presenting current and future applications of the technology. The authors discuss the development of several systems including hardware, signal processing, data classification algorithms and benchmarking techniques to determine algorithm performance. Signal processing for each application is tailored by incorporating the phenomenology appropriate to the process, into the algorithms. Pixel signatures are classified using techniques such as principal component analyses, generalized eigenvalue analysis and novel very fast neural network methods. The major hyperspectral imaging systems developed at OKSI include the Intelligent Missile Seeker (IMS) demonstration project for real-time target/decoy discrimination, and the Thermal InfraRed Imaging Spectrometer (TIRIS) for detection and tracking of toxic plumes and gases. In addition, systems for applications in medical photodiagnosis, manufacturing technology, and for crop monitoring are also under development.

  14. Integrating Remote Sensing Information Into A Distributed Hydrological Model for Improving Water Budget Predictions in Large-scale Basins through Data Assimilation.

    PubMed

    Qin, Changbo; Jia, Yangwen; Su, Z; Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen

    2008-07-29

    This paper investigates whether remote sensing evapotranspiration estimates can be integrated by means of data assimilation into a distributed hydrological model for improving the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically-based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ. The remote sensing derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems.

  15. Integrating Remote Sensing Information Into A Distributed Hydrological Model for Improving Water Budget Predictions in Large-scale Basins through Data Assimilation

    PubMed Central

    Qin, Changbo; Jia, Yangwen; Su, Z.(Bob); Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen

    2008-01-01

    This paper investigates whether remote sensing evapotranspiration estimates can be integrated by means of data assimilation into a distributed hydrological model for improving the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically-based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ. The remote sensing derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems. PMID:27879946
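
    A compact numpy sketch of a generic extended Kalman filter update of the kind used for such assimilation; the state vector, observation operator and covariances are placeholders, not the WEP-L/SEBS configuration reported in the paper.

        import numpy as np

        def ekf_update(x_prior, P_prior, y_obs, h, H_jac, R):
            """Assimilate one observation vector into the model state.

            x_prior, P_prior : forecast state and its error covariance
            y_obs            : observation (e.g., a remotely sensed ET retrieval)
            h, H_jac         : observation operator and its Jacobian at x_prior
            R                : observation error covariance
            """
            H = H_jac(x_prior)
            S = H @ P_prior @ H.T + R                 # innovation covariance
            K = P_prior @ H.T @ np.linalg.inv(S)      # Kalman gain
            x_post = x_prior + K @ (y_obs - h(x_prior))
            P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
            return x_post, P_post

        # Toy example observing the first of two state components:
        # h = lambda x: x[:1]; H_jac = lambda x: np.array([[1.0, 0.0]])
        # x, P = ekf_update(np.array([2.5, 0.4]), np.eye(2),
        #                   np.array([2.1]), h, H_jac, 0.04 * np.eye(1))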

  16. Optimization of illuminating system to detect optical properties inside a finger

    NASA Astrophysics Data System (ADS)

    Sano, Emiko; Shikai, Masahiro; Shiratsuki, Akihide; Maeda, Takuji; Matsushita, Masahito; Sasakawa, Koichi

    2007-01-01

    Biometrics performs personal authentication using individual bodily features, including fingerprints, faces, etc. These technologies have been studied and developed for many years. In particular, fingerprint authentication has evolved over many years and is currently one of the world's most established biometric authentication techniques. Not long ago this technique was only used for personal identification in criminal investigations and high-security facilities. In recent years, however, various biometric authentication techniques have appeared in everyday applications. Even though they provide great convenience, they have also produced a number of technical issues concerning operation. Generally, fingerprint authentication comprises a number of component technologies: (1) sensing technology for detecting the fingerprint pattern; (2) image processing technology for converting the captured pattern into feature data that can be used for verification; (3) verification technology for comparing the feature data with a reference and determining whether they match. Current fingerprint authentication issues, revealed in research results, originate with fingerprint sensing technology. Sensing methods for detecting a person's fingerprint pattern for image processing are particularly important because they impact overall fingerprint authentication performance. The current problems with sensing methods are, in some cases, the following: some fingerprints are difficult to detect with conventional sensors, and fingerprint patterns are easily affected by the finger's surface condition, so that noise such as discontinuities and thin spots can appear in fingerprint patterns obtained from wrinkled or sweaty fingers. To address these problems, we proposed a novel fingerprint sensor based on new scientific knowledge. A characteristic of this new method is that the obtained fingerprint patterns are not easily affected by the finger's surface condition, because it detects the fingerprint pattern inside the finger using transmitted light. We examined the optimization of the illumination system of this novel fingerprint sensor in order to detect a high-contrast fingerprint pattern over a wide area and to improve the image processing in step (2).

  17. Ultrafast water sensing and thermal imaging by a metal-organic framework with switchable luminescence

    PubMed Central

    Chen, Ling; Ye, Jia-Wen; Wang, Hai-Ping; Pan, Mei; Yin, Shao-Yun; Wei, Zhang-Wen; Zhang, Lu-Yin; Wu, Kai; Fan, Ya-Nan; Su, Cheng-Yong

    2017-01-01

    A convenient, fast and selective water analysis method is highly desirable in industrial and detection processes. Here a robust microporous Zn-MOF (metal–organic framework, Zn(hpi2cf)(DMF)(H2O)) is assembled from a dual-emissive H2hpi2cf (5-(2-(5-fluoro-2-hydroxyphenyl)-4,5-bis(4-fluorophenyl)-1H-imidazol-1-yl)isophthalic acid) ligand that exhibits characteristic excited state intramolecular proton transfer (ESIPT). This Zn-MOF contains amphipathic micropores (<3 Å) and undergoes extremely facile single-crystal-to-single-crystal transformation driven by reversible removal/uptake of coordinating water molecules simply stimulated by dry gas blowing or gentle heating at 70 °C, manifesting an excellent example of dynamic reversible coordination behaviour. The interconversion between the hydrated and dehydrated phases can turn the ligand ESIPT process on or off, resulting in sensitive two-colour photoluminescence switching over cycles. Therefore, this Zn-MOF represents an excellent PL water-sensing material, showing a fast (on the order of seconds) and highly selective response to water on a molecular level. Furthermore, paper or in situ grown ZnO-based sensing films have been fabricated and applied in humidity sensing (RH<1%), detection of traces of water (<0.05% v/v) in various organic solvents, thermal imaging and as a thermometer. PMID:28665406

  18. SENSOR: a tool for the simulation of hyperspectral remote sensing systems

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel

    The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way for the adaptation and optimisation of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and first examples of its use are given. The verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.

  19. Multitask SVM learning for remote sensing data classification

    NASA Astrophysics Data System (ADS)

    Leiva-Murillo, Jose M.; Gómez-Chova, Luis; Camps-Valls, Gustavo

    2010-10-01

    Many remote sensing data processing problems are inherently constituted by several tasks that can be solved either individually or jointly. For instance, each image in a multitemporal classification setting could be taken as an individual task, but its relation to previous acquisitions should be properly considered. In such problems, different modalities of the data (temporal, spatial, angular) give rise to changes between the training and test distributions, which constitutes a difficult learning problem known as covariate shift. Multitask learning methods aim at jointly solving a set of prediction problems in an efficient way by sharing information across tasks. This paper presents a novel kernel method for multitask learning in remote sensing data classification. The proposed method alleviates the dataset shift problem by imposing cross-information in the classifiers through matrix regularization. We consider the support vector machine (SVM) as the core learner, and two regularization schemes are introduced: 1) the Euclidean distance between the predictors in the Hilbert space; and 2) the inclusion of relational operators between tasks. Experiments are conducted on the challenging remote sensing problems of cloud screening from multispectral MERIS images and landmine detection.

  20. A Plane Target Detection Algorithm in Remote Sensing Images based on Deep Learning Network Technology

    NASA Astrophysics Data System (ADS)

    Shuxin, Li; Zhilong, Zhang; Biao, Li

    2018-01-01

    The plane is an important target category among remote sensing targets, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, providing more detailed information for detecting remote sensing targets automatically. Deep learning network technology is the most advanced technology in image target detection and recognition, and it has provided great performance improvements for target detection and recognition in everyday scenes. We applied this technology to remote sensing target detection and propose an end-to-end deep network algorithm that can learn from remote sensing images to detect targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of plane targets and performs better in target detection than the older methods.

  1. A light and faster regional convolutional neural network for object detection in optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan

    2018-07-01

    The detection of objects in satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster Regional CNN framework does not yield a suitably high precision. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net and improve the precision. We also propose an approach to reduce the test (detection) time and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.

  2. Ontology-based classification of remote sensing images using spectral rules

    NASA Astrophysics Data System (ADS)

    Andrés, Samuel; Arvor, Damien; Mougenot, Isabelle; Libourel, Thérèse; Durieux, Laurent

    2017-05-01

    Earth Observation data is of great interest for a wide spectrum of scientific domain applications. Enhanced access to remote sensing images for domain experts thus represents a great advance, since it allows users to interpret remote sensing images based on their domain expert knowledge. However, such an advantage can also turn into a major limitation if this knowledge is not formalized and is thus difficult to share with, and to be understood by, other users. In this context, knowledge representation techniques such as ontologies should play a major role in the future of remote sensing applications. We implemented an ontology-based prototype to automatically classify Landsat images based on explicit spectral rules. The ontology is designed in a very modular way in order to achieve a generic and versatile representation of concepts we consider of utmost importance in remote sensing. The prototype was tested on four subsets of Landsat images, and the results confirmed the potential of ontologies to formalize expert knowledge and classify remote sensing images.
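
    As a toy illustration of the kind of explicit, shareable spectral rule an ontology can encode (not the rules used in the paper), a thresholded index decision in Python; the band names and thresholds are hypothetical.

        def classify_pixel(bands):
            """Classify one Landsat-like pixel with explicit spectral rules.

            bands: dict with 'red', 'nir' and 'green' surface reflectances in [0, 1].
            """
            ndvi = (bands["nir"] - bands["red"]) / (bands["nir"] + bands["red"] + 1e-9)
            ndwi = (bands["green"] - bands["nir"]) / (bands["green"] + bands["nir"] + 1e-9)
            if ndwi > 0.3:      # illustrative threshold
                return "water"
            if ndvi > 0.5:      # illustrative threshold
                return "dense vegetation"
            return "other"

        # classify_pixel({"red": 0.08, "nir": 0.45, "green": 0.07})  # -> "dense vegetation"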

  3. Study on Mosaic and Uniform Color Method of Satellite Image Fusion in Large Area

    NASA Astrophysics Data System (ADS)

    Liu, S.; Li, H.; Wang, X.; Guo, L.; Wang, R.

    2018-04-01

    Due to the improvement in satellite radiometric resolution, the color differences among multi-temporal satellite remote sensing images, and the large amount of satellite image data, completing the mosaicking and color balancing of satellite images is a persistent and important problem in image processing. First, using the bundle color-balancing method and the least-squares mosaic method of GXL together with the dodging function, a uniform transition of color and brightness can be achieved for large-area, multi-temporal satellite images. Second, Color Mapping software is used to convert the 16-bit mosaic images to 8-bit mosaic images, applying the color-balancing method with low-resolution reference images. Finally, qualitative and quantitative analytical methods are used to evaluate the satellite imagery after mosaicking and color balancing. The tests show that the correlation between the mosaic images before and after color balancing is higher than 95%, the image information entropy increases, and texture features are enhanced, as confirmed by quantitative indexes such as the correlation coefficient and information entropy. Satellite image mosaicking and color processing over large areas has thus been successfully implemented.
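
    A short numpy sketch of the two quantitative indexes mentioned (correlation coefficient and information entropy), as they might be computed for the 8-bit mosaic products; the 256-bin histogram is an assumption tied to that bit depth.

        import numpy as np

        def correlation(img_a, img_b):
            """Pearson correlation between a mosaic before and after color balancing."""
            return np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1]

        def information_entropy(img_8bit):
            """Shannon entropy (bits) of an 8-bit image histogram."""
            hist, _ = np.histogram(img_8bit, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        # r = correlation(mosaic_before, mosaic_after)   # the paper reports r > 0.95
        # h = information_entropy(mosaic_after)          # expected to increase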

  4. Automated railroad reconstruction from remote sensing image based on texture filter

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Lu, Kaixia

    2018-03-01

    Remote sensing techniques have improved remarkably in recent years, and very accurate results and high resolution images can now be acquired. Such data offer possible ways to reconstruct railroads. In this paper, an automated railroad reconstruction method from remote sensing images based on the Gabor filter is proposed. The method is divided into three steps. Firstly, the edge-oriented railroad characteristics (such as line features) in a remote sensing image are detected using a Gabor filter. Secondly, two response images with filtering orientations perpendicular to each other are fused to suppress noise and acquire a long, smooth stripe region of railroad. Thirdly, a set of smooth regions is extracted by first computing a global threshold for the fused image using Otsu's method and then converting it to a binary image based on that threshold. This workflow was tested on a set of remote sensing images and was found to deliver very accurate results in a quick and highly automated manner.
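
    A minimal scikit-image sketch of the three steps (Gabor responses at two perpendicular orientations, fusion, Otsu binarisation); the filter frequency and base orientation are illustrative values, not parameters from the paper.

        import numpy as np
        from skimage.filters import gabor, threshold_otsu

        def railroad_mask(gray, frequency=0.1, theta=0.0):
            """Filter, fuse two perpendicular responses, and binarise with Otsu."""
            real_0, _ = gabor(gray, frequency=frequency, theta=theta)
            real_90, _ = gabor(gray, frequency=frequency, theta=theta + np.pi / 2)
            fused = np.abs(real_0) + np.abs(real_90)    # suppress orientation-dependent noise
            return fused > threshold_otsu(fused)        # global threshold -> binary image

        # mask = railroad_mask(gray_image)   # gray_image: 2D float array in [0, 1]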

  5. A fractal concentration area method for assigning a color palette for image representation

    NASA Astrophysics Data System (ADS)

    Cheng, Qiuming; Li, Qingmou

    2002-05-01

    Displaying a remotely sensed image with a proper color palette is the first task in any kind of image processing and pattern recognition in GIS and image processing environments. The purpose of displaying the image should be not only to provide a visual representation of the variance of the image, which has been the primary objective of most conventional methods, but also to make the color palette reflect real-world features on the ground, which must be the primary objective of employing remotely sensed data. Although most conventional methods focus only on the first purpose of image representation, the concentration-area (C-A plot) fractal method proposed in this paper aims to meet both purposes on the basis of pixel values and the pixel value frequency distribution as well as the spatial and geometrical properties of image patterns. The C-A method can be used to establish power-law relationships between the area A(⩾s), covered by pixel values greater than s, and the pixel value s itself after plotting these values on log-log paper. A number of straight-line segments can be manually or automatically fitted to the points on the log-log paper, each representing a power-law relationship between the area A and the cutoff pixel value s in a particular range. These straight-line segments yield a group of cutoff values on the basis of which the image can be classified into discrete classes or zones. These zones usually correspond to real-world features on the ground. A Windows program has been prepared in ActiveX format for implementing the C-A method and integrating it into other GIS and image processing systems. A case study of Landsat TM band 5 is used to demonstrate the application of the method and the flexibility of the computer program.
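
    A brief numpy/matplotlib sketch of constructing the C-A plot itself; the selection and fitting of the straight-line segments (done manually or automatically in the paper) is left to the analyst, and the variable names are placeholders.

        import numpy as np
        import matplotlib.pyplot as plt

        def concentration_area(pixel_values, n_cutoffs=50):
            """Return cutoffs s and areas A(>= s) for a log-log C-A plot."""
            v = np.asarray(pixel_values, dtype=float).ravel()
            cutoffs = np.linspace(v.min(), v.max(), n_cutoffs, endpoint=False)[1:]
            areas = np.array([(v >= s).sum() for s in cutoffs])   # pixel counts ~ area
            return cutoffs, areas

        # s, A = concentration_area(tm_band5)   # tm_band5: Landsat TM band 5 array
        # plt.loglog(s, A, "k."); plt.xlabel("pixel value s"); plt.ylabel("A(>= s)")
        # Breakpoints between fitted line segments give the cutoff values used
        # to classify the image into zones.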

  6. PROCEDURES FOR ACCURATE PRODUCTION OF COLOR IMAGES FROM SATELLITE OR AIRCRAFT MULTISPECTRAL DIGITAL DATA.

    USGS Publications Warehouse

    Duval, Joseph S.

    1985-01-01

    Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.

  7. An earth imaging camera simulation using wide-scale construction of reflectance surfaces

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk

    2013-10-01

    Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.

  8. Scientific Software

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Interactive Data Language (IDL), developed by Research Systems, Inc., is a tool for scientists to investigate their data without having to write a custom program for each study. IDL is based on the Mariners Mars spectral Editor (MMED) developed for studies from NASA's Mars spacecraft flights. The company has also developed Environment for Visualizing Images (ENVI), an image processing system for easily analyzing remotely sensed data written in IDL. The Visible Human CD, another Research Systems product, is the first complete digital reference of photographic images for exploring human anatomy.

  9. The effects of SENSE on PROPELLER imaging.

    PubMed

    Chang, Yuchou; Pipe, James G; Karis, John P; Gibbs, Wende N; Zwart, Nicholas R; Schär, Michael

    2015-12-01

    To study how sensitivity encoding (SENSE) impacts periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) image quality, including signal-to-noise ratio (SNR), robustness to motion, precision of motion estimation, and image quality. Five volunteers were imaged by three sets of scans. A rapid method for generating the g-factor map was proposed and validated via Monte Carlo simulations. Sensitivity maps were extrapolated to increase the area over which SENSE can be performed and therefore enhance the robustness to head motion. The precision of motion estimation of PROPELLER blades that are unfolded with these sensitivity maps was investigated. An interleaved R-factor PROPELLER sequence was used to acquire data with similar amounts of motion with and without SENSE acceleration. Two neuroradiologists independently and blindly compared 214 image pairs. The proposed method of g-factor calculation was similar to that provided by the Monte Carlo methods. Extrapolation and rotation of the sensitivity maps allowed for continued robustness of SENSE unfolding in the presence of motion. SENSE-widened blades improved the precision of rotation and translation estimation. PROPELLER images with a SENSE factor of 3 outperformed the traditional PROPELLER images when reconstructing the same number of blades. SENSE not only accelerates PROPELLER but can also improve robustness and precision of head motion correction, which improves overall image quality even when SNR is lost due to acceleration. The reduction of SNR, as a penalty of acceleration, is characterized by the proposed g-factor method. © 2014 Wiley Periodicals, Inc.
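
    For orientation, a small numpy sketch of the standard Cartesian SENSE g-factor expression, g_i = sqrt([(S^H Psi^-1 S)^-1]_ii [S^H Psi^-1 S]_ii), for one group of aliased pixels; this is the textbook formula, not the rapid PROPELLER-specific g-factor method proposed in the paper.

        import numpy as np

        def sense_g_factor(S, psi):
            """g-factor for one aliased pixel group.

            S   : (n_coils, R) coil sensitivities of the R aliased pixels
            psi : (n_coils, n_coils) receiver noise covariance matrix
            """
            psi_inv = np.linalg.inv(psi)
            A = S.conj().T @ psi_inv @ S          # R x R matrix
            return np.sqrt(np.real(np.diag(np.linalg.inv(A)) * np.diag(A)))

        # S = np.random.randn(8, 3) + 1j * np.random.randn(8, 3)   # 8 coils, R = 3
        # g = sense_g_factor(S, np.eye(8))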

  10. Spatial Statistical Data Fusion for Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, Hai

    2010-01-01

    Data fusion is the process of combining information from heterogeneous sources into a single composite picture of the relevant process, such that the composite picture is generally more accurate and complete than that derived from any single source alone. Data collection is often incomplete, sparse, and yields incompatible information. Fusion techniques can make optimal use of such data. When investment in data collection is high, fusion gives the best return. Our study uses data from two satellites: (1) Multiangle Imaging SpectroRadiometer (MISR), (2) Moderate Resolution Imaging Spectroradiometer (MODIS).
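
    A minimal sketch of one elementary fusion rule of the kind such studies build on: inverse-variance weighting of two co-located estimates, with either instrument filling gaps left by the other. The arrays and variances below are synthetic placeholders, not MISR or MODIS products, and the rule shown is far simpler than a full spatial statistical fusion.

      # Sketch only: combine two noisy estimates of the same field, weighting the
      # more precise one more heavily and filling gaps (NaN) from the other source.
      import numpy as np

      def inverse_variance_fusion(x1, var1, x2, var2):
          w1, w2 = 1.0 / var1, 1.0 / var2
          fused = np.where(np.isnan(x1), x2,
                   np.where(np.isnan(x2), x1, (w1 * x1 + w2 * x2) / (w1 + w2)))
          fused_var = np.where(np.isnan(x1), var2,
                       np.where(np.isnan(x2), var1, 1.0 / (w1 + w2)))
          return fused, fused_var

      # Toy usage: instrument 1 misses the second cell, instrument 2 the third
      x1 = np.array([1.0, np.nan, 3.0]); v1 = np.array([0.1, 0.1, 0.1])
      x2 = np.array([1.2, 2.0, np.nan]); v2 = np.array([0.4, 0.4, 0.4])
      print(inverse_variance_fusion(x1, v1, x2, v2)[0])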

  11. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification

    PubMed Central

Bradbury, Kyle; Saboo, Raghav; Johnson, Timothy L.; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; Collins, Leslie M.; Newell, Richard G.

    2016-01-01

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment. PMID:27922592

  12. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification

    NASA Astrophysics Data System (ADS)

Bradbury, Kyle; Saboo, Raghav; Johnson, Timothy L.; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; Collins, Leslie M.; Newell, Richard G.

    2016-12-01

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.

  13. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification.

    PubMed

Bradbury, Kyle; Saboo, Raghav; Johnson, Timothy L; Malof, Jordan M; Devarajan, Arjun; Zhang, Wuming; Collins, Leslie M; Newell, Richard G

    2016-12-06

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.
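
    A minimal sketch of one typical use of such annotations, assuming (hypothetically) that an array's border vertices are available as pixel coordinates: rasterizing a polygon into a binary mask suitable for training a detection or segmentation model. The polygon, chip size, and helper name are illustrative, not the dataset's actual schema.

      # Sketch only: turn a polygon given by pixel-coordinate vertices into a mask.
      import numpy as np
      from matplotlib.path import Path

      def polygon_to_mask(vertices, height, width):
          """vertices: list of (col, row) pixel coordinates of the polygon border."""
          yy, xx = np.mgrid[0:height, 0:width]
          points = np.column_stack([xx.ravel(), yy.ravel()])
          mask = Path(vertices).contains_points(points)
          return mask.reshape(height, width)

      # Toy polygon roughly where a rooftop array might sit in a 256 x 256 chip
      poly = [(40, 60), (120, 60), (120, 110), (40, 110)]
      mask = polygon_to_mask(poly, 256, 256)
      print(int(mask.sum()), "pixels labelled as solar PV")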

  14. UNFOLD-SENSE: a parallel MRI method with self-calibration and artifact suppression.

    PubMed

    Madore, Bruno

    2004-08-01

    This work aims at improving the performance of parallel imaging by using it with our "unaliasing by Fourier-encoding the overlaps in the temporal dimension" (UNFOLD) temporal strategy. A self-calibration method called "self, hybrid referencing with UNFOLD and GRAPPA" (SHRUG) is presented. SHRUG combines the UNFOLD-based sensitivity mapping strategy introduced in the TSENSE method by Kellman et al. (5), with the strategy introduced in the GRAPPA method by Griswold et al. (10). SHRUG merges the two approaches to alleviate their respective limitations, and provides fast self-calibration at any given acceleration factor. UNFOLD-SENSE further includes an UNFOLD artifact suppression scheme to significantly suppress artifacts and amplified noise produced by parallel imaging. This suppression scheme, which was published previously (4), is related to another method that was presented independently as part of TSENSE. While the two are equivalent at accelerations < or = 2.0, the present approach is shown here to be significantly superior at accelerations > 2.0, with up to double the artifact suppression at high accelerations. Furthermore, a slight modification of Cartesian SENSE is introduced, which allows departures from purely Cartesian sampling grids. This technique, termed variable-density SENSE (vdSENSE), allows the variable-density data required by SHRUG to be reconstructed with the simplicity and fast processing of Cartesian SENSE. UNFOLD-SENSE is given by the combination of SHRUG for sensitivity mapping, vdSENSE for reconstruction, and UNFOLD for artifact/amplified noise suppression. The method was implemented, with online reconstruction, on both an SSFP and a myocardium-perfusion sequence. The results from six patients scanned with UNFOLD-SENSE are presented.

  15. NASA Tech Briefs, April 2012

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics include: Computational Ghost Imaging for Remote Sensing; Digital Architecture for a Trace Gas Sensor Platform; Dispersed Fringe Sensing Analysis - DFSA; Indium Tin Oxide Resistor-Based Nitric Oxide Microsensors; Gas Composition Sensing Using Carbon Nanotube Arrays; Sensor for Boundary Shear Stress in Fluid Flow; Model-Based Method for Sensor Validation; Qualification of Engineering Camera for Long-Duration Deep Space Missions; Remotely Powered Reconfigurable Receiver for Extreme Environment Sensing Platforms; Bump Bonding Using Metal-Coated Carbon Nanotubes; In Situ Mosaic Brightness Correction; Simplex GPS and InSAR Inversion Software; Virtual Machine Language 2.1; Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction; Pandora Operation and Analysis Software; Fabrication of a Cryogenic Bias Filter for Ultrasensitive Focal Plane; Processing of Nanosensors Using a Sacrificial Template Approach; High-Temperature Shape Memory Polymers; Modular Flooring System; Non-Toxic, Low-Freezing, Drop-In Replacement Heat Transfer Fluids; Materials That Enhance Efficiency and Radiation Resistance of Solar Cells; Low-Cost, Rugged High-Vacuum System; Static Gas-Charging Plug; Floating Oil-Spill Containment Device; Stemless Ball Valve; Improving Balance Function Using Low Levels of Electrical Stimulation of the Balance Organs; Oxygen-Methane Thruster; Lunar Navigation Determination System - LaNDS; Launch Method for Kites in Low-Wind or No-Wind Conditions; Supercritical CO2 Cleaning System for Planetary Protection and Contamination Control Applications; Design and Performance of a Wideband Radio Telescope; Finite Element Models for Electron Beam Freeform Fabrication Process Autonomous Information Unit for Fine-Grain Data Access Control and Information Protection in a Net-Centric System; Vehicle Detection for RCTA/ANS (Autonomous Navigation System); Image Mapping and Visual Attention on the Sensory Ego-Sphere; HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis; and IMAGESEER - IMAGEs for Education and Research.

  16. Ultrahigh-frame CCD imagers

    NASA Astrophysics Data System (ADS)

    Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.

    2004-02-01

This paper describes the architecture, process technology, and performance of a family of high burst rate CCDs. These imagers employ high speed, low lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10(6) frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research and wind-tunnel-based gas dynamics research.

  17. Some Defence Applications of Civilian Remote Sensing Satellite Images

    DTIC Science & Technology

    1993-11-01

    This report is on a pilot study to demonstrate some of the capabilities of remote sensing in intelligence gathering. A wide variety of issues, both...colour images. The procedure will be presented in a companion report. Remote sensing , Satellite imagery, Image analysis, Military applications, Military intelligence.

  18. Comparison and analysis of nonlinear algorithms for compressed sensing in MRI.

    PubMed

    Yu, Yeyang; Hong, Mingjian; Liu, Feng; Wang, Hua; Crozier, Stuart

    2010-01-01

    Compressed sensing (CS) theory has been recently applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In the CS implementation, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for an optimal CS algorithm selection in the practical MRI application. A systematic and comparative study of those commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely, the Gradient Projection For Sparse Reconstruction (GPSR) algorithm, Interior-point algorithm (l(1)_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm are compared and investigated in three different imaging scenarios, brain, angiogram and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The theoretical results show that the performance of the CS algorithms is case sensitive; overall, the StOMP algorithm offers the best solution in imaging quality, while the GPSR algorithm is the most efficient one among the three methods. In the next step, the algorithm performances and characteristics will be experimentally explored. It is hoped that this research will further support the applications of CS in MRI.
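
    To make the underlying optimisation concrete, here is a minimal NumPy sketch of the l1-regularised least-squares model that such solvers address, solved with plain iterative soft-thresholding (ISTA) on a toy problem; ISTA is used only for illustration and is not one of the three algorithms compared in this record.

      # Sketch only: recover a sparse vector from undersampled linear measurements.
      import numpy as np

      def ista(A, y, lam, n_iter=500):
          """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by soft-thresholding."""
          L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              z = x - (A.T @ (A @ x - y)) / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return x

      # Toy problem: 20-sparse signal of length 256 from 100 random measurements
      rng = np.random.default_rng(1)
      x_true = np.zeros(256)
      x_true[rng.choice(256, 20, replace=False)] = rng.standard_normal(20)
      A = rng.standard_normal((100, 256)) / np.sqrt(100)
      y = A @ x_true
      x_hat = ista(A, y, lam=0.01)
      print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative error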

  19. a Rough Set Decision Tree Based Mlp-Cnn for Very High Resolution Remotely Sensed Image Classification

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Pan, X.; Zhang, S. Q.; Li, H. P.; Atkinson, P. M.

    2017-09-01

Recent advances in remote sensing have witnessed a great amount of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data have posed enormous challenges for processing, analysing and classifying them effectively due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are oriented toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial details within VHR images. This paper introduced a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correctness and incorrectness on the map. The correct classification regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, well capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. Therefore, this research paves the way to achieve fully automatic and effective VHR image classification.
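
    A minimal sketch of the decision-fusion idea in this record, greatly simplified: keep the CNN label where the CNN is confident and fall back to the MLP label elsewhere. The confidence threshold stands in for the rough set partition into certain and uncertain regions and is a placeholder, not the paper's formulation.

      # Sketch only: fuse per-pixel CNN and MLP class probabilities.
      import numpy as np

      def fuse_cnn_mlp(cnn_probs, mlp_probs, tau=0.8):
          """cnn_probs, mlp_probs: (H, W, n_classes) per-pixel class probabilities;
          tau: confidence threshold separating 'certain' from 'uncertain' pixels."""
          cnn_label = cnn_probs.argmax(axis=-1)
          mlp_label = mlp_probs.argmax(axis=-1)
          certain = cnn_probs.max(axis=-1) >= tau
          return np.where(certain, cnn_label, mlp_label)

      # Toy usage on random "probability" maps
      rng = np.random.default_rng(2)
      cnn = rng.dirichlet(np.ones(5), size=(64, 64))
      mlp = rng.dirichlet(np.ones(5), size=(64, 64))
      print(fuse_cnn_mlp(cnn, mlp).shape)          # (64, 64) fused label map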

  20. Mid-infrared (MIR) photonics: MIR passive and active fiberoptics chemical and biomedical, sensing and imaging

    NASA Astrophysics Data System (ADS)

    Seddon, Angela B.

    2016-10-01

The case for new, portable, real-time mid-infrared (MIR) molecular sensing and imaging is discussed. We set a record in demonstrating extreme broad-band supercontinuum (SC) light generated across 1.4-13.3 μm in a specially engineered, step-index MIR optical fiber of high numerical aperture. This was the first experimental demonstration truly to reveal the potential of MIR fibers to emit across the MIR molecular "fingerprint spectral region" and a key first step towards bright, portable, broadband MIR sources for chemical and biomedical, molecular sensing and imaging in real-time. Potential applications are in the healthcare, security, energy, environmental monitoring, chemical-processing, manufacturing and agriculture sectors. MIR narrow-line fiber lasers are now required to pump the fiber MIR-SC for a compact all-fiber solution. Rare-earth-ion (RE-) doped MIR fiber lasers have not yet been demonstrated at wavelengths >=4 μm. We have fabricated small-core RE-fiber with photoluminescence across 3.5-6 μm, and long excited-state lifetimes. MIR-RE-fiber lasers are also applicable as discrete MIR fiber sensors in their own right, for applications including: ship-to-ship free-space communications, aircraft counter-measures, coherent MIR imaging, MIR optical coherence tomography, laser-cutting/patterning of soft materials and new wavelengths for fiber laser medical surgery.

  1. Searching for patterns in remote sensing image databases using neural networks

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.
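
    A minimal sketch of the kind of simple per-image normalization that can compensate for inconsistent calibration across scenes before classification; the standardisation scheme shown is illustrative, not necessarily the one used in the study.

      # Sketch only: standardise each spectral band of an image to zero mean,
      # unit variance before it is fed to the classifier.
      import numpy as np

      def normalize_bands(image):
          """image: (H, W, n_bands); returns the per-band standardised image."""
          mean = image.mean(axis=(0, 1), keepdims=True)
          std = image.std(axis=(0, 1), keepdims=True) + 1e-8
          return (image - mean) / std

      # Toy usage on a synthetic 6-band scene
      scene = 100.0 + 25.0 * np.random.rand(512, 512, 6)
      print(normalize_bands(scene).mean(axis=(0, 1)))   # approximately zero per band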

  2. Monitoring Change in Temperate Coniferous Forest Ecosystems

    NASA Technical Reports Server (NTRS)

    Williams, Darrel (Technical Monitor); Woodcock, Curtis E.

    2004-01-01

The primary goal of this research was to improve monitoring of temperate forest change using remote sensing. In this context, change includes both clearing of forest due to effects such as fire, logging, or land conversion and forest growth and succession. The Landsat 7 ETM+ proved an extremely valuable research tool in this domain. The Landsat 7 program has generated an extremely valuable transformation in the land remote sensing community by making high quality images available for relatively low cost. In addition, the tremendous improvements in the acquisition strategy greatly improved the overall availability of remote sensing images. I believe that from an historical perspective, the Landsat 7 mission will be considered extremely important as the improved image availability will stimulate the use of multitemporal imagery at resolutions useful for local to regional mapping. Also, Landsat 7 has opened the way to global applications of remote sensing at spatial scales where important surface processes and change can be directly monitored. It has been a wonderful experience to have participated on the Landsat 7 Science Team. The research conducted under this project led to contributions in four general domains: I. Improved understanding of the information content of images as a function of spatial resolution; II. Monitoring Forest Change and Succession; III. Development and Integration of Advanced Analysis Methods; and IV. General support of the remote sensing of forests and environmental change. This report is organized according to these topics. This report does not attempt to provide the complete details of the research conducted with support from this grant. That level of detail is provided in the 16 peer reviewed journal articles, 7 book chapters and 5 conference proceedings papers published as part of this grant. This report attempts to explain how the various publications fit together to improve our understanding of how forests are changing and how to monitor forest change with remote sensing. There were no new inventions that resulted from this grant.

  3. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664

  4. Approach for removing ghost-images in remote field eddy current testing of ferromagnetic pipes

    NASA Astrophysics Data System (ADS)

    Luo, Q. W.; Shi, Y. B.; Wang, Z. G.; Zhang, W.; Zhang, Y.

    2016-10-01

In the non-destructive testing of ferromagnetic pipes based on remote field eddy currents, an array of sensing coils is often used to detect local defects. While testing, the image obtained by the sensing coils exhibits a ghost-image, which originates from both the transmitter and the sensing coils passing over the same defects in the pipe. Ghost-images are caused by the transmitter and lead to undesirable assessments of defects. In order to remove ghost-images, two pickup coils are set coaxially to each other in the remote field region. Due to the time delay between the differential signals measured by the two pickup coils, a Wiener deconvolution filter is used to identify the artificial peaks that lead to ghost-images. Because the sensing coils and the two pickup coils all receive the same signal from one transmitter, they all contain the same artificial peaks. By subtracting the artificial peak values obtained by the two pickup coils from the imaging data, the ghost-image caused by the transmitter is eliminated. Finally, a relatively accurate image of local defects is obtained by the sensing coils. With the proposed method, there is no need to subtract the average value of the sensing coils, and the method remains sensitive to ringed defects.
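
    A minimal NumPy sketch of Wiener deconvolution as used conceptually here: the measured signal is modelled as the true defect response convolved with a known system response plus noise, and the filter recovers the response in the frequency domain. The signals, blur kernel, and SNR below are synthetic placeholders, not coil data.

      # Sketch only: 1-D Wiener deconvolution of a blurred, noisy defect signal.
      import numpy as np

      def wiener_deconvolve(measured, kernel, snr=100.0):
          """'snr' is an assumed signal-to-noise ratio that regularises the filter."""
          n = len(measured)
          H = np.fft.fft(kernel, n)
          G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)      # Wiener filter
          return np.real(np.fft.ifft(np.fft.fft(measured) * G))

      # Toy example: a sharp defect peak blurred by a broad system response
      n = 256
      t = np.arange(n)
      true = np.exp(-0.5 * ((t - 128) / 3.0) ** 2)
      kernel = np.exp(-0.5 * (np.minimum(t, n - t) / 8.0) ** 2)
      kernel /= kernel.sum()                                  # blur centred at index 0
      blurred = np.real(np.fft.ifft(np.fft.fft(true) * np.fft.fft(kernel)))
      measured = blurred + 0.01 * np.random.default_rng(3).standard_normal(n)
      recovered = wiener_deconvolve(measured, kernel)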

  5. Approach for removing ghost-images in remote field eddy current testing of ferromagnetic pipes.

    PubMed

    Luo, Q W; Shi, Y B; Wang, Z G; Zhang, W; Zhang, Y

    2016-10-01

In the non-destructive testing of ferromagnetic pipes based on remote field eddy currents, an array of sensing coils is often used to detect local defects. While testing, the image obtained by the sensing coils exhibits a ghost-image, which originates from both the transmitter and the sensing coils passing over the same defects in the pipe. Ghost-images are caused by the transmitter and lead to undesirable assessments of defects. In order to remove ghost-images, two pickup coils are set coaxially to each other in the remote field region. Due to the time delay between the differential signals measured by the two pickup coils, a Wiener deconvolution filter is used to identify the artificial peaks that lead to ghost-images. Because the sensing coils and the two pickup coils all receive the same signal from one transmitter, they all contain the same artificial peaks. By subtracting the artificial peak values obtained by the two pickup coils from the imaging data, the ghost-image caused by the transmitter is eliminated. Finally, a relatively accurate image of local defects is obtained by the sensing coils. With the proposed method, there is no need to subtract the average value of the sensing coils, and the method remains sensitive to ringed defects.

  6. WE-G-207-04: Non-Local Total-Variation (NLTV) Combined with Reweighted L1-Norm for Compressed Sensing Based CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

Purpose: Compressed sensing (CS) has been used for CT (4DCT/CBCT) reconstruction with few projections to reduce the dose of radiation. Total-variation (TV) in L1-minimization (min.) with local information is the prevalent technique in CS, but it can be prone to noise. To address the problem, this work proposes to apply a new image processing technique, called non-local TV (NLTV), to CS based CT reconstruction, and to incorporate a reweighted L1-norm into it for more precise reconstruction. Methods: TV minimizes intensity variations by considering two local neighboring voxels, which can be prone to noise, possibly damaging the reconstructed CT image. NLTV, contrarily, utilizes more global information by computing a weight function of the current voxel relative to a surrounding search area. In fact, it might be challenging to obtain an optimal solution due to difficulty in defining the weight function with appropriate parameters. Introducing reweighted L1-min., designed as an approximation to ideal L0-min., can reduce the dependence on defining the weight function, therefore improving accuracy of the solution. This work implemented the NLTV combined with reweighted L1-min. using the Split Bregman iterative method. For evaluation, a noisy digital phantom and pelvic CT images are employed to compare the quality of images reconstructed by TV, NLTV and reweighted NLTV. Results: In both cases, conventional and reweighted NLTV outperform TV min. in signal-to-noise ratio (SNR) and root-mean-squared error of the reconstructed images. Relative to conventional NLTV, NLTV with the reweighted L1-norm was able to slightly improve SNR, while greatly increasing the contrast between tissues due to the additional iterative reweighting process. Conclusion: NLTV min. can provide more precise compressed sensing based CT image reconstruction by incorporating the reweighted L1-norm, while maintaining greater robustness to the noise effect than TV min.
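
    A minimal sketch of the reweighted-L1 idea in isolation, applied to a toy sparse-denoising problem rather than the full NLTV/Split Bregman CT reconstruction: weights of the form 1/(|x_i| + eps) penalise small coefficients strongly and large ones weakly, approximating the L0 penalty more closely than plain L1. The parameters are placeholders.

      # Sketch only: iteratively reweighted soft-thresholding for sparse denoising,
      # i.e. minimise 0.5*||x - y||^2 + lam * sum_i w_i |x_i| with w_i = 1/(|x_i|+eps).
      import numpy as np

      def reweighted_l1_denoise(y, lam=0.2, eps=1e-2, n_outer=5):
          x = y.copy()
          for _ in range(n_outer):
              w = 1.0 / (np.abs(x) + eps)
              x = np.sign(y) * np.maximum(np.abs(y) - lam * w, 0.0)   # weighted soft threshold
          return x

      # Toy signal: sparse spikes buried in noise
      rng = np.random.default_rng(4)
      x_true = np.zeros(200)
      x_true[rng.choice(200, 10, replace=False)] = 3.0
      y = x_true + 0.3 * rng.standard_normal(200)
      print(np.linalg.norm(reweighted_l1_denoise(y) - x_true))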

  7. 1982 International Geoscience and Remote Sensing Symposium, Munich, West Germany, June 1-4, 1982, Digest. Volumes 1 and 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1982-01-01

Theoretical and experimental data which have defined and/or extended the effectiveness of remote sensing operations are explored, with consideration given to both scientific and commercial activities. The remote sensing of soil moisture, the sea surface, and oil slicks is discussed, as are programs using satellites for studying geodynamics and geodesy, currents and waves, and coastal zones. NASA, Canadian, and Japanese radar and microwave passive and active systems are described, together with algorithms and techniques for image processing and classification. The SAR-580 project is outlined, and attention is devoted to satellite applications in investigations of the structure of the atmosphere, agriculture and land use, and geology. Design and performance features of various optical scanner, radar, and multispectral data processing systems and procedures are detailed.

  8. Using Remote Sensing to Visualize and Extract Building Inventories of Urban Areas for Disaster Planning and Response

    NASA Astrophysics Data System (ADS)

    Lang, A. F.; Salvaggio, C.

    2016-12-01

    Climate change, skyrocketing global population, and increasing urbanization have set the stage for more so-called "mega-disasters." We possess the knowledge to mitigate and predict the scope of these events, and recent advancements in remote sensing can inform these efforts. Satellite and aerial imagery can be obtained anywhere of interest; unmanned aerial systems can be deployed quickly; and improved sensor resolutions and image processing techniques allow close examination of the built environment. Combined, these technologies offer an unprecedented ability for the disaster community to visualize, assess, and communicate risk. Disaster mitigation and response efforts rely on an accurate representation of the built environment, including knowledge of building types, structural characteristics, and juxtapositions to known hazards. The use of remote sensing to extract these inventory data has come far in the last five years. Researchers in the Digital Imaging and Remote Sensing (DIRS) group at the Rochester Institute of Technology are meeting the needs of the disaster community through the development of novel image processing methods capable of extracting detailed information of individual buildings. DIRS researchers have pioneered the ability to generate three-dimensional building models from point cloud imagery (e.g., LiDAR). This method can process an urban area and recreate it in a navigable virtual reality environment such as Google Earth within hours. Detailed geometry is obtained for individual structures (e.g., footprint, elevation). In a recent step forward, these geometric data can now be combined with imagery from other sources, such as high resolution or multispectral imagery. The latter ascribes a spectral signature to individual pixels, suggesting construction material. Ultimately, these individual building data are amassed over an entire region, facilitating aggregation and risk modeling analyses. The downtown region of Rochester, New York is presented as a case study. High resolution optical, LiDAR, and multi-spectral imagery was captured of this region. Using the techniques described, these imagery sources are combined and processed to produce a holistic representation of the built environment, inclusive of individual building characteristics.

  9. Remote sensing programs and courses in engineering and water resources

    NASA Technical Reports Server (NTRS)

    Kiefer, R. W.

    1981-01-01

    The content of typical basic and advanced remote sensing and image interpretation courses are described and typical remote sensing graduate programs of study in civil engineering and in interdisciplinary environmental remote sensing and water resources management programs are outlined. Ideally, graduate programs with an emphasis on remote sensing and image interpretation should be built around a core of five courses: (1) a basic course in fundamentals of remote sensing upon which the more specialized advanced remote sensing courses can build; (2) a course dealing with visual image interpretation; (3) a course dealing with quantitative (computer-based) image interpretation; (4) a basic photogrammetry course; and (5) a basic surveying course. These five courses comprise up to one-half of the course work required for the M.S. degree. The nature of other course work and thesis requirements vary greatly, depending on the department in which the degree is being awarded.

  10. Applications of Low Altitude Remote Sensing in Agriculture upon Farmers' Requests– A Case Study in Northeastern Ontario, Canada

    PubMed Central

    Zhang, Chunhua; Walters, Dan; Kovacs, John M.

    2014-01-01

With the growth of the low altitude remote sensing (LARS) industry in recent years, its practical application in precision agriculture seems all the more possible. However, only a few scientists have reported using LARS to monitor crop conditions. Moreover, there have been concerns regarding the feasibility of such systems for producers given the issues related to the post-processing of images, technical expertise, and timely delivery of information. The purpose of this study is to showcase actual requests by farmers to monitor crop conditions in their fields using an unmanned aerial vehicle (UAV). Working in collaboration with farmers in northeastern Ontario, we use optical and near-infrared imagery to monitor fertilizer trials, conduct crop scouting and map field tile drainage. We demonstrate that LARS imagery has many practical applications. However, several obstacles remain: the costs associated with both the LARS system and the image processing software, the extent of professional training required to operate the LARS and to process the imagery, and the influence of local weather conditions (e.g., clouds, wind) on image acquisition all need to be considered. Consequently, at present a feasible solution for producers might be the use of a LARS service provided by private consultants or in collaboration with LARS scientific research teams. PMID:25386696
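
    A minimal sketch of the standard index that such optical/near-infrared imagery supports, the Normalized Difference Vegetation Index (NDVI), commonly used for fertilizer-trial comparison and crop scouting; the bands below are synthetic placeholders, not UAV products from this study.

      # Sketch only: NDVI = (NIR - Red) / (NIR + Red) from co-registered bands.
      import numpy as np

      def ndvi(nir, red):
          """nir, red: co-registered reflectance arrays of the same shape."""
          nir = nir.astype(np.float64)
          red = red.astype(np.float64)
          return (nir - red) / (nir + red + 1e-10)

      # Toy usage with random reflectance bands
      rng = np.random.default_rng(5)
      nir, red = rng.random((2, 100, 100))
      print(float(ndvi(nir, red).mean()))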

  11. Demonstrating the Value of Near Real-time Satellite-based Earth Observations in a Research and Education Framework

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Hao, X.; Kinter, J. L.; Stearn, G.; Aliani, M.

    2017-12-01

The launch of the GOES-16 series provides an opportunity to advance near real-time applications in natural hazard detection, monitoring and warning. This study demonstrates the capability and value of receiving real-time satellite-based Earth observations over fast terrestrial networks and processing high-resolution remote sensing data in a university environment. The demonstration system includes four components: 1) near real-time data receiving and processing; 2) data analysis and visualization; 3) event detection and monitoring; and 4) information dissemination. Various tools are developed and integrated to receive and process GRB data in near real-time, produce images and value-added data products, and detect and monitor extreme weather events such as hurricanes, fires, flooding, fog, and lightning. A web-based application system is developed to disseminate near real-time satellite images and data products. The images are generated in a GIS-compatible format (GeoTIFF) to enable convenient use and integration in various GIS platforms. This study enhances the capacity for undergraduate and graduate education in Earth system and climate sciences, and related applications, to understand the basic principles and technology of real-time applications with remote sensing measurements. It also provides an integrated platform for near real-time monitoring of extreme weather events, which is helpful for various user communities.
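
    A minimal sketch of the GeoTIFF-generation step, assuming the rasterio package is available; the file name, grid, and projection are illustrative, not the system's actual configuration.

      # Sketch only: write a derived product as a GIS-compatible GeoTIFF.
      import numpy as np
      import rasterio
      from rasterio.transform import from_origin

      data = np.random.rand(500, 500).astype("float32")   # placeholder product field
      transform = from_origin(-100.0, 50.0, 0.02, 0.02)   # upper-left lon/lat, pixel size

      with rasterio.open(
              "goes_product_example.tif", "w", driver="GTiff",
              height=data.shape[0], width=data.shape[1], count=1,
              dtype="float32", crs="EPSG:4326", transform=transform) as dst:
          dst.write(data, 1)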

  12. Applications of low altitude remote sensing in agriculture upon farmers' requests--a case study in northeastern Ontario, Canada.

    PubMed

    Zhang, Chunhua; Walters, Dan; Kovacs, John M

    2014-01-01

With the growth of the low altitude remote sensing (LARS) industry in recent years, its practical application in precision agriculture seems all the more possible. However, only a few scientists have reported using LARS to monitor crop conditions. Moreover, there have been concerns regarding the feasibility of such systems for producers given the issues related to the post-processing of images, technical expertise, and timely delivery of information. The purpose of this study is to showcase actual requests by farmers to monitor crop conditions in their fields using an unmanned aerial vehicle (UAV). Working in collaboration with farmers in northeastern Ontario, we use optical and near-infrared imagery to monitor fertilizer trials, conduct crop scouting and map field tile drainage. We demonstrate that LARS imagery has many practical applications. However, several obstacles remain: the costs associated with both the LARS system and the image processing software, the extent of professional training required to operate the LARS and to process the imagery, and the influence of local weather conditions (e.g., clouds, wind) on image acquisition all need to be considered. Consequently, at present a feasible solution for producers might be the use of a LARS service provided by private consultants or in collaboration with LARS scientific research teams.

  13. Effective use of principal component analysis with high resolution remote sensing data to delineate hydrothermal alteration and carbonate rocks

    NASA Technical Reports Server (NTRS)

    Feldman, Sandra C.

    1987-01-01

Methods of applying principal component (PC) analysis to high resolution remote sensing imagery were examined. Using Airborne Imaging Spectrometer (AIS) data, PC analysis was found to be useful for removing the effects of albedo and noise and for isolating the significant information on argillic alteration, zeolite, and carbonate minerals. An effective technique for PC analysis used as input the first 16 AIS bands, 7 intermediate bands, and the last 16 AIS bands from the 32 flat-field-corrected bands between 2048 and 2337 nm. Most of the significant mineralogical information resided in the second PC. PC color composites and density-sliced images provided a good mineralogical separation when applied to an AIS data set. Although computer intensive, the advantage of PC analysis is that it employs algorithms which already exist on most image processing systems.
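
    A minimal NumPy sketch of principal component analysis on a band subset of a hyperspectral cube; the cube and band indices below are placeholders, not the AIS band selection used in the study.

      # Sketch only: PC images of selected bands via SVD of the mean-centred pixels.
      import numpy as np

      def principal_components(cube, band_idx, n_components=3):
          """cube: (H, W, n_bands); returns (H, W, n_components) PC images."""
          h, w, _ = cube.shape
          X = cube[:, :, band_idx].reshape(-1, len(band_idx))
          X = X - X.mean(axis=0)
          _, _, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt are PC directions
          return (X @ Vt[:n_components].T).reshape(h, w, n_components)

      # Toy usage: a synthetic 39-band cube; the second PC is pcs[..., 1]
      cube = np.random.rand(64, 64, 39)
      pcs = principal_components(cube, band_idx=list(range(39)), n_components=3)
      print(pcs.shape)                                       # (64, 64, 3)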

  14. Multispectral Photography

    NASA Technical Reports Server (NTRS)

    1982-01-01

Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer, such as their Model 75, Spectral Data creates a color image from the black and white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial survey, image processing and analysis, and a number of other remote sensing services.

  15. BOREAS Level-4c AVHRR-LAC Ten-Day Composite Images: Surface Parameters

    NASA Technical Reports Server (NTRS)

    Cihlar, Josef; Chen, Jing; Huang, Fengting; Nickeson, Jaime; Newcomer, Jeffrey A.; Hall, Forrest G. (Editor)

    2000-01-01

    The BOReal Ecosystem-Atmosphere Study (BOREAS) Staff Science Satellite Data Acquisition Program focused on providing the research teams with the remotely sensed satellite data products they needed to compare and spatially extend point results. Manitoba Remote Sensing Center (MRSC) and BOREAS Information System (BORIS) personnel acquired, processed, and archived data from the Advanced Very High Resolution Radiometer (AVHRR) instruments on the NOAA-11 and -14 satellites. The AVHRR data were acquired by CCRS and were provided to BORIS for use by BOREAS researchers. These AVHRR level-4c data are gridded, 10-day composites of surface parameters produced from sets of single-day images. Temporally, the 10-day compositing periods begin 11-Apr-1994 and end 10-Sep-1994. Spatially, the data cover the entire BOREAS region. The data are stored in binary image format files. Note: Some of the data files on the BOREAS CD-ROMs have been compressed using the Gzip program.

  16. BOREAS Level-4b AVHRR-LAC Ten-Day Composite Images: At-sensor Radiance

    NASA Technical Reports Server (NTRS)

    Cihlar, Josef; Chen, Jing; Nickerson, Jaime; Newcomer, Jeffrey A.; Huang, Feng-Ting; Hall, Forrest G. (Editor)

    2000-01-01

The BOReal Ecosystem-Atmosphere Study (BOREAS) Staff Science Satellite Data Acquisition Program focused on providing the research teams with the remotely sensed satellite data products they needed to compare and spatially extend point results. Manitoba Remote Sensing Center (MRSC) and BOREAS Information System (BORIS) personnel acquired, processed, and archived data from the Advanced Very High Resolution Radiometer (AVHRR) instruments on the National Oceanic and Atmospheric Administration (NOAA-11) and -14 satellites. The AVHRR data were acquired by CCRS and were provided to BORIS for use by BOREAS researchers. These AVHRR level-4b data are gridded, 10-day composites of at-sensor radiance values produced from sets of single-day images. Temporally, the 10-day compositing periods begin 11-Apr-1994 and end 10-Sep-1994. Spatially, the data cover the entire BOREAS region. The data are stored in binary image format files.

  17. System and method for glass processing and temperature sensing

    DOEpatents

    Shepard, Chester L.; Cannon, Bret D.; Khaleel, Mohammad A.

    2004-09-28

    Techniques for measuring the temperature at various locations through the thickness of glass products and to control the glass processing operation with the sensed temperature information are disclosed. Fluorescence emission of iron or cerium in glass is excited and imaged onto segmented detectors. Spatially resolved temperature data are obtained through correlation of the detected photoluminescence signal with location within the glass. In one form the detected photoluminescence is compared to detected scattered excitation light to determine temperature. Stress information is obtained from the time history of the temperature profile data and used to evaluate the quality of processed glass. A heating or cooling rate of the glass is also controlled to maintain a predetermined desired temperature profile in the glass.

  18. a Kml-Based Approach for Distributed Collaborative Interpretation of Remote Sensing Images in the Geo-Browser

    NASA Astrophysics Data System (ADS)

    Huang, L.; Zhu, X.; Guo, W.; Xiang, L.; Chen, X.; Mei, Y.

    2012-07-01

Existing implementations of collaborative image interpretation have many limitations for very large satellite imagery, such as inefficient browsing, slow transmission, etc. This article presents a KML-based approach to support distributed, real-time, synchronous collaborative interpretation of remote sensing images in the geo-browser. As an OGC standard, KML (Keyhole Markup Language) has the advantage of organizing various types of geospatial data (including imagery, annotations, geometry, etc.) in the geo-browser. Existing KML elements can be used to describe simple interpretation results indicated by vector symbols. To enlarge its application, this article expands KML elements to describe some complex image processing operations, including band combination, grey transformation, geometric correction, etc. The improved KML is employed to describe and share interpretation operations and results among interpreters. Further, this article develops several collaboration-related services: a collaboration launch service, a perceiving service, and a communication service. The launch service creates a collaborative interpretation task and provides a unified interface for all participants. The perceiving service allows interpreters to share collaboration awareness. The communication service provides interpreters with written-word communication. Finally, the GeoGlobe geo-browser (an extensible and flexible geospatial platform developed at LIESMARS) is selected to perform experiments on collaborative image interpretation. The geo-browser, which manages and visualizes massive geospatial information, can provide distributed users with quick browsing and transmission. Meanwhile, in the geo-browser, GIS data (for example DEM, DTM, and thematic maps) can be integrated to assist in improving the accuracy of interpretation. Results show that the proposed method is able to support distributed collaborative interpretation of remote sensing images.
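
    A minimal Python sketch of the kind of KML extension described: a standard Placemark carrying an interpretation annotation plus a custom-namespace element describing a processing operation to be shared. The proc:BandCombination element and its namespace are hypothetical illustrations, not the authors' actual schema.

      # Sketch only: write a KML file holding an annotation and an extended
      # (hypothetical) element describing a band-combination operation.
      import xml.etree.ElementTree as ET

      KML_NS = "http://www.opengis.net/kml/2.2"
      PROC_NS = "http://example.org/interp-processing"     # hypothetical namespace
      ET.register_namespace("", KML_NS)
      ET.register_namespace("proc", PROC_NS)

      kml = ET.Element(f"{{{KML_NS}}}kml")
      doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")

      # A standard KML Placemark holding one interpreter's annotation
      pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
      ET.SubElement(pm, f"{{{KML_NS}}}name").text = "possible landslide scar"
      point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
      ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = "114.35,30.52,0"

      # A hypothetical extended element describing a processing operation to share
      band = ET.SubElement(doc, f"{{{PROC_NS}}}BandCombination")
      band.set("red", "4"); band.set("green", "3"); band.set("blue", "2")

      ET.ElementTree(kml).write("collab_interp.kml", xml_declaration=True, encoding="utf-8")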

  19. Broadband Phase Retrieval for Image-Based Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

A focus-diverse phase-retrieval algorithm has been shown to perform adequately for the purpose of image-based wavefront sensing when (1) broadband light (typically spanning the visible spectrum) is used in forming the images by use of an optical system under test and (2) the assumption of monochromaticity is applied to the broadband image data. Heretofore, it had been assumed that in order to obtain adequate performance, it is necessary to use narrowband or monochromatic light. Some background information, including definitions of terms and a brief description of pertinent aspects of image-based phase retrieval, is prerequisite to a meaningful summary of the present development. Phase retrieval is a general term used in optics to denote estimation of optical imperfections or aberrations of an optical system under test. The term image-based wavefront sensing refers to a general class of algorithms that recover optical phase information, and phase-retrieval algorithms constitute a subset of this class. In phase retrieval, one utilizes the measured response of the optical system under test to produce a phase estimate. The optical response of the system is defined as the image of a point-source object, which could be a star or a laboratory point source. The phase-retrieval problem is characterized as image-based in the sense that a charge-coupled-device camera, preferably of scientific imaging quality, is used to collect image data where the optical system would normally form an image. In a variant of phase retrieval, denoted phase-diverse phase retrieval [which can include focus-diverse phase retrieval (in which various defocus planes are used)], an additional known aberration (or an equivalent diversity function) is superimposed as an aid in estimating unknown aberrations by use of an image-based wavefront-sensing algorithm. Image-based phase retrieval differs from other wavefront-sensing methods, such as interferometry, shearing interferometry, curvature wavefront sensing, and Shack-Hartmann sensing, all of which entail disadvantages in comparison with image-based methods. The main disadvantages of these non-image based methods are complexity of test equipment and the need for a wavefront reference.
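
    A minimal NumPy sketch of the simplest image-based phase-retrieval iteration (single-plane Gerchberg-Saxton error reduction), included only to make the idea of recovering phase from point-source image data concrete; it is not the broadband, focus-diverse algorithm discussed in this record, and the aperture and aberration below are synthetic.

      # Sketch only: alternate between the known pupil amplitude and the measured
      # point-source image amplitude to estimate the pupil phase (aberration).
      import numpy as np

      def phase_retrieve(pupil_amp, image_amp, n_iter=200):
          """pupil_amp: known aperture amplitude; image_amp: sqrt of measured PSF."""
          phase = np.zeros_like(pupil_amp)
          for _ in range(n_iter):
              field = np.fft.fft2(pupil_amp * np.exp(1j * phase))
              field = image_amp * np.exp(1j * np.angle(field))   # enforce measured amplitude
              phase = np.angle(np.fft.ifft2(field))              # keep phase, reimpose pupil amplitude
          return phase

      # Toy test: circular aperture with a known smooth aberration
      n = 128
      y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
      pupil = (x ** 2 + y ** 2 < (n // 4) ** 2).astype(float)
      true_phase = 0.5 * (x / n) * pupil
      image_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))
      est_phase = phase_retrieve(pupil, image_amp)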

  20. Detection and Monitoring of Oil Spills Using Moderate/High-Resolution Remote Sensing Images.

    PubMed

    Li, Ying; Cui, Can; Liu, Zexi; Liu, Bingxin; Xu, Jin; Zhu, Xueyuan; Hou, Yongchao

    2017-07-01

Current marine oil spill detection and monitoring methods using high-resolution remote sensing imagery are quite limited. This study presented a new bottom-up and top-down visual saliency model. We used Landsat 8, GF-1, MAMS, and HJ-1 oil spill imagery as the dataset. A simplified, graph-based visual saliency model was used to extract bottom-up saliency; it could identify regions in the ocean containing objects of high visual saliency. A spectral similarity match model was used to obtain top-down saliency; it could distinguish oil regions and exclude other salient interference based on their spectra. The regions of interest containing oil spills were integrated using these complementary saliency detection steps. Then, a genetic neural network was used to complete the image classification. These steps increased the speed of analysis. For the test dataset, the average running time of the entire process to detect regions of interest was 204.56 s. During image segmentation, the oil spill was extracted using the genetic neural network. The classification results showed that the method had a low false-alarm rate (a high accuracy of 91.42%) and was able to increase the speed of the detection process (a fast runtime of 19.88 s). The test image dataset was composed of different types of features over large areas under complicated imaging conditions. The proposed model proved to be robust in complex sea conditions.
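
    A minimal sketch of one common spectral-similarity measure that could serve as the kind of top-down cue described here: the spectral angle between each pixel and a reference oil spectrum. The reference spectrum and threshold below are placeholders, not values from the study.

      # Sketch only: spectral angle map between image pixels and a reference spectrum.
      import numpy as np

      def spectral_angle_map(image, reference):
          """image: (H, W, n_bands); reference: (n_bands,). Smaller angle = more similar."""
          dot = np.tensordot(image, reference, axes=([2], [0]))
          norms = np.linalg.norm(image, axis=2) * np.linalg.norm(reference) + 1e-12
          return np.arccos(np.clip(dot / norms, -1.0, 1.0))

      # Toy usage: flag pixels whose spectrum is within 0.1 rad of the reference
      img = np.random.rand(100, 100, 7)
      ref = np.array([0.08, 0.07, 0.06, 0.05, 0.05, 0.04, 0.03])   # placeholder "oil" spectrum
      oil_candidates = spectral_angle_map(img, ref) < 0.1
      print(int(oil_candidates.sum()), "candidate pixels")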

  1. "Peak tracking chip" for label-free optical detection of bio-molecular interaction and bulk sensing.

    PubMed

    Bougot-Robin, Kristelle; Li, Shunbo; Zhang, Yinghua; Hsing, I-Ming; Benisty, Henri; Wen, Weijia

    2012-10-21

A novel imaging method for bulk refractive index sensing or label-free bio-molecular interaction sensing is presented. This method is based on a specially designed "Peak tracking chip" (PTC) involving "tracks" of adjacent resonant waveguide grating (RWG) "micropads" with slowly evolving resonance position. Using a simple camera, the spatial information robustly retrieves the diffraction efficiency, which in turn transduces either the refractive index of the liquids on the tracks or the effective thickness of an immobilized biological layer. Our intrinsically multiplex chip combines the tunability and versatility advantages of dielectric guided-wave biochips without the need for costly hyperspectral instrumentation. The current success of surface plasmon imaging techniques suggests that our chip proposal could leverage an untapped potential to routinely extend such techniques in a convenient and sturdy optical configuration toward, for instance, large-analyte detection. PTC design and fabrication are discussed, including the challenging process of controlling micropad properties by varying their period (steps of 2 nm) or their duty cycle through the groove width (steps of 4 nm). Through monochromatic imaging of our PTC, we present an experimental demonstration of bulk index sensing over the range [1.33-1.47] and of surface biomolecule detection of molecular weight 30 kDa in aqueous solution using different surface densities. A sensitivity of the order of 10(-5) RIU for bulk detection and a sensitivity of the order of ∼10 pg mm(-2) for label-free surface detection are expected, therefore opening a large range of applications of our chip-based imaging technique. Exploiting the chip design, we also expect our chip to open new directions for multispectral studies through imaging.
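
    A minimal sketch of the "peak tracking" idea: along one track of micropads, the resonance position is located with sub-pad precision by a three-point parabolic fit around the micropad of maximum response. The track response below is synthetic, not camera data from the chip.

      # Sketch only: locate a resonance along a track with sub-pad precision.
      import numpy as np

      def track_peak(intensity):
          """intensity: 1-D response of the micropads along one track."""
          i = int(np.argmax(intensity))
          if i == 0 or i == len(intensity) - 1:
              return float(i)
          y0, y1, y2 = intensity[i - 1], intensity[i], intensity[i + 1]
          delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)    # parabolic vertex offset
          return i + delta

      # Toy track: a resonance peaking between pads 42 and 43
      pads = np.arange(100)
      resp = np.exp(-0.5 * ((pads - 42.6) / 5.0) ** 2) + 0.01 * np.random.randn(100)
      print(track_peak(resp))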

  2. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. Considering this, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, and improve the detection efficiency of ship targets in remote sensing images.
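
    A minimal sketch of a classic bottom-up saliency cue of the kind such methods use (the spectral-residual approach of Hou and Zhang, assuming SciPy is available), which highlights compact anomalous regions such as ships against a relatively homogeneous sea; it is not the specific attention model of this record.

      # Sketch only: spectral-residual saliency of a grayscale scene.
      import numpy as np
      from scipy.ndimage import uniform_filter, gaussian_filter

      def spectral_residual_saliency(gray):
          """gray: 2-D intensity image; returns a saliency map of the same size."""
          spectrum = np.fft.fft2(gray)
          log_amp = np.log(np.abs(spectrum) + 1e-8)
          phase = np.angle(spectrum)
          residual = log_amp - uniform_filter(log_amp, size=3)
          saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
          return gaussian_filter(saliency, sigma=2.5)

      # Toy scene: a bright "ship" on a dark, noisy "sea"
      sea = 0.05 * np.random.rand(256, 256)
      sea[120:126, 80:110] += 0.8
      sal = spectral_residual_saliency(sea)
      candidates = sal > 0.5 * sal.max()          # candidate regions for later verification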

  3. Method of acquiring an image from an optical structure having pixels with dedicated readout circuits

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2006-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  4. GPU-Accelerated Hybrid Algorithm for 3D Localization of Fluorescent Emitters in Dense Clusters

    NASA Astrophysics Data System (ADS)

    Jung, Yoon; Barsic, Anthony; Piestun, Rafael; Fakhri, Nikta

    In stochastic switching-based super-resolution imaging, a random subset of fluorescent emitters are imaged and localized for each frame to construct a single high resolution image. However, the condition of non-overlapping point spread functions (PSFs) imposes constraints on experimental parameters. Recent development in post processing methods such as dictionary-based sparse support recovery using compressive sensing has shown up to an order of magnitude higher recall rate than single emitter fitting methods. However, the computational complexity of this approach scales poorly with the grid size and requires long runtime. Here, we introduce a fast and accurate compressive sensing algorithm for localizing fluorescent emitters in high density in 3D, namely sparse support recovery using Orthogonal Matching Pursuit (OMP) and L1-Homotopy algorithm for reconstructing STORM images (SOLAR STORM). SOLAR STORM combines OMP with L1-Homotopy to reduce computational complexity, which is further accelerated by parallel implementation using GPUs. This method can be used in a variety of experimental conditions for both in vitro and live cell fluorescence imaging.
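
    A minimal NumPy sketch of plain Orthogonal Matching Pursuit on a toy problem, to illustrate the greedy sparse-recovery step that the described method builds on; the actual SOLAR STORM approach couples OMP with an L1-homotopy solver and GPU parallelisation, which is not shown here, and the dictionary below is random rather than a PSF dictionary.

      # Sketch only: recover a k-sparse vector x from y = A x by greedy selection.
      import numpy as np

      def omp(A, y, k):
          residual = y.copy()
          support = []
          x = np.zeros(A.shape[1])
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))        # most correlated atom
              support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x[support] = coef
          return x

      # Toy problem: 5 active "emitters" among 400 candidate positions
      rng = np.random.default_rng(6)
      A = rng.standard_normal((80, 400)); A /= np.linalg.norm(A, axis=0)
      x_true = np.zeros(400)
      x_true[rng.choice(400, 5, replace=False)] = 1.0 + rng.random(5)
      y = A @ x_true
      x_hat = omp(A, y, 5)
      print(sorted(np.flatnonzero(x_hat)), sorted(np.flatnonzero(x_true)))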

  5. Collaborative classification of hyperspectral and visible images with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Mengmeng; Li, Wei; Du, Qian

    2017-10-01

    Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well-known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, the convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.
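
    A minimal sketch of decision-level fusion in its simplest form, a weighted average of the two classifiers' class probabilities; the weight is a placeholder and this is not the specific fusion scheme of the paper.

      # Sketch only: fuse HSI- and VIS-branch class probabilities at decision level.
      import numpy as np

      def decision_fusion(hsi_probs, vis_probs, alpha=0.6):
          """hsi_probs, vis_probs: (n_samples, n_classes); alpha weights the HSI branch."""
          fused = alpha * hsi_probs + (1.0 - alpha) * vis_probs
          return fused.argmax(axis=1)

      # Toy usage with random probability vectors
      rng = np.random.default_rng(7)
      hsi = rng.dirichlet(np.ones(6), size=1000)
      vis = rng.dirichlet(np.ones(6), size=1000)
      labels = decision_fusion(hsi, vis)
      print(labels[:10])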

  6. Biomedical imaging and sensing using flatbed scanners.

    PubMed

    Göröcs, Zoltán; Ozcan, Aydogan

    2014-09-07

    In this Review, we provide an overview of flatbed scanner based biomedical imaging and sensing techniques. The extremely large imaging field-of-view (e.g., ~600-700 cm(2)) of these devices coupled with their cost-effectiveness provide unique opportunities for digital imaging of samples that are too large for regular optical microscopes, and for collection of large amounts of statistical data in various automated imaging or sensing tasks. Here we give a short introduction to the basic features of flatbed scanners also highlighting the key parameters for designing scientific experiments using these devices, followed by a discussion of some of the significant examples, where scanner-based systems were constructed to conduct various biomedical imaging and/or sensing experiments. Along with mobile phones and other emerging consumer electronics devices, flatbed scanners and their use in advanced imaging and sensing experiments might help us transform current practices of medicine, engineering and sciences through democratization of measurement science and empowerment of citizen scientists, science educators and researchers in resource limited settings.

  7. Biomedical Imaging and Sensing using Flatbed Scanners

    PubMed Central

    Göröcs, Zoltán; Ozcan, Aydogan

    2014-01-01

    In this Review, we provide an overview of flatbed scanner based biomedical imaging and sensing techniques. The extremely large imaging field-of-view (e.g., ~600–700 cm2) of these devices coupled with their cost-effectiveness provide unique opportunities for digital imaging of samples that are too large for regular optical microscopes, and for collection of large amounts of statistical data in various automated imaging or sensing tasks. Here we give a short introduction to the basic features of flatbed scanners also highlighting the key parameters for designing scientific experiments using these devices, followed by a discussion of some of the significant examples, where scanner-based systems were constructed to conduct various biomedical imaging and/or sensing experiments. Along with mobile phones and other emerging consumer electronics devices, flatbed scanners and their use in advanced imaging and sensing experiments might help us transform current practices of medicine, engineering and sciences through democratization of measurement science and empowerment of citizen scientists, science educators and researchers in resource limited settings. PMID:24965011

  8. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase of resolution, remote sensing images carry a greater information load, more noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper presents a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering together with a Markov model to capture and enhance the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that characterizes "pseudo-buildings" in the built-up area. Through multi-scale segmentation and extraction of image features, fine extraction from the built-up area down to individual buildings is realized. Experiments show that, compared with traditional pixel-level information extraction, this method suppresses the noise of high-resolution remote sensing images, reduces the interference of non-target ground texture information, removes shadow, vegetation and other pseudo-building information, and achieves better precision, accuracy and completeness in building extraction.
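
    The spectral feature index for "pseudo-buildings" is not defined in the abstract; purely as a hedged illustration, vegetation and shadow pixels, two common sources of pseudo-buildings, can be flagged with simple band ratios such as NDVI and a brightness measure. The band layout, thresholds and names below are assumptions, not values from the paper.

        import numpy as np

        def pseudo_building_mask(red, green, blue, nir,
                                 ndvi_thr=0.3, bright_thr=0.15):
            """Flag pixels more likely to be vegetation or shadow than buildings.

            Inputs are 2-D reflectance arrays in [0, 1]; the thresholds are
            illustrative, not values taken from the paper.
            """
            eps = 1e-6
            ndvi = (nir - red) / (nir + red + eps)    # high for vegetation
            brightness = (red + green + blue) / 3.0   # low for shadow
            return (ndvi > ndvi_thr) | (brightness < bright_thr)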

  9. Monitoring, analysis and classification of vegetation and soil data collected by a small and lightweight hyperspectral imaging system

    NASA Astrophysics Data System (ADS)

    Mönnig, Carsten

    2014-05-01

    The increasing precision of modern farming systems requires near-real-time monitoring of agricultural crops in order to estimate soil condition, plant health and potential crop yield. For large agricultural plots, satellite imagery or aerial surveys can be used, at considerable cost and with possible time delays of days or even weeks. For small to medium-sized plots, however, these monitoring approaches are cost-prohibitive and difficult to assess. Within the INTERREG IV A project SMART INSPECTORS (Smart Aerial Test Rigs with Infrared Spectrometers and Radar), we therefore propose a cost-effective and comparatively simple approach that supports farmers with a small and lightweight hyperspectral imaging system collecting remotely sensed data in spectral bands between 400 and 1700 nm. SMART INSPECTORS covers the whole small-scale remote sensing processing chain, from sensor construction and data processing to ground truthing for analysis of the results. The sensors are mounted on a remotely controlled (RC) octocopter, a fixed-wing RC airplane, and, for larger plots, a two-seat autogyro. The images, with ground resolutions of up to 5 cm, comprise visible, near-infrared and thermal-infrared as well as hyperspectral imagery. The data will be analyzed using remote sensing software and a Geographic Information System (GIS). Soil condition analysis covers soil humidity, temperature and roughness, and a radar sensor is envisaged for geomorphological, drainage and soil-plant roughness investigations. Plant health control includes drought stress, vegetation health, pest control, growth condition and canopy temperature. Different vegetation and soil indices will help to determine and understand soil conditions and plant traits. Additional investigations might include crop yield estimation for crops such as apples, strawberries and pasture. The quality of the remotely sensed vegetation data will be tested with ground-truthing tools such as a spectrometer, visual inspection and ground control panels, and soil condition will also be monitored with a wireless sensor network installed on the plots of interest. Provided with these data, a farmer can respond immediately to potential threats with high local precision. In this presentation, preliminary results of hyperspectral images of distinctive vegetation cover and soil on different pasture test plots are shown. After an evaluation period, the whole processing chain will offer farmers a unique, near-real-time, low-cost solution for small to mid-sized agricultural plots for easily assessing crop and soil quality and estimating the harvest. The SMART INSPECTORS remotely sensed data will form the input for a decision support system that aims to detect crop-related issues so that farmers can react quickly and efficiently, saving fertilizer, water or pesticides.
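
    The vegetation and soil indices are not specified in the abstract; as a hedged illustration only, the normalized difference vegetation index (NDVI) is one of the most common indices computed from data in the 400-1700 nm range. The band indices and names below are assumptions, not details of the SMART INSPECTORS processing chain.

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index from NIR and red reflectance.

            nir, red : 2-D float arrays of reflectance in [0, 1]. Values near 1
            indicate dense, healthy vegetation; values near 0 or below indicate
            bare soil, water or strongly stressed canopy.
            """
            eps = 1e-6                               # avoid division by zero
            return (nir - red) / (nir + red + eps)

        # Hypothetical usage on two bands picked from a hyperspectral cube of
        # shape (bands, rows, cols); the band indices are assumptions.
        # red_band, nir_band = cube[30], cube[60]
        # vi = ndvi(nir_band, red_band)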

  10. Application of Remote Sensing for the Analysis of Environmental Changes in Albania

    NASA Astrophysics Data System (ADS)

    Frasheri, N.; Beqiraj, G.; Bushati, S.; Frasheri, A.

    2016-08-01

    This paper presents a review of remote sensing studies carried out to investigate environmental changes in Albania. Using often simple methodologies and general-purpose image processing software, and exploiting free Internet archives of satellite imagery, significant results were obtained for hot spots of environmental change. Such areas include sea coasts undergoing marine transgression, temporal variations of vegetation and aerosols, lakes, landslides and regional tectonics. The Internet archives of the European Space Agency (ESA) and the US Geological Survey (USGS) were used.

  11. Global geomorphology: Report of Working Group Number 1

    NASA Technical Reports Server (NTRS)

    Douglas, I.

    1985-01-01

    Remote sensing was considered invaluable for seeing landforms in their regional context and in relationship to each other. Sequential images, such as those available from LANDSAT orbits, provide a means of detecting landform change and the operation of large-scale processes, such as major floods in semiarid regions. The use of remote sensing falls into two broad stages: (1) the characterization, or accurate description, of the features of the Earth's surface; and (2) the study of landform evolution. Recommendations for future research are made.

  12. Using satellite image-based maps and ground inventory data to estimate the area of the remaining Atlantic forest in the Brazilian state of Santa Catarina

    Treesearch

    Alexander C. Vibrans; Ronald E. McRoberts; Paolo Moser; Adilson L. Nicoletti

    2013-01-01

    Estimation of large area forest attributes, such as area of forest cover, from remote sensing-based maps is challenging because of image processing, logistical, and data acquisition constraints. In addition, techniques for estimating and compensating for misclassification and estimating uncertainty are often unfamiliar. Forest area for the state of Santa Catarina in...

  13. Fast Occlusion and Shadow Detection for High Resolution Remote Sensing Image Combined with LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Hu, X.; Li, X.

    2012-08-01

    The orthophoto is an important component of GIS databases and has been applied in many fields, but occlusion and shadow cause a loss of feature information that greatly affects image quality. One of the critical steps in true orthophoto generation is therefore the detection of occlusion and shadow. LiDAR can now deliver the digital surface model (DSM) directly, and combined with this technology image occlusion and shadow can be detected automatically. In this paper, the Z-buffer is applied for occlusion detection; shadow detection can be treated as the same problem once the viewing direction is replaced by the sun direction. However, the Z-buffer algorithm is computationally expensive, and the volume of scanned data and remote sensing imagery is very large, so efficiency is a further challenge. A modern graphics processing unit (GPU) is much more powerful than a central processing unit (CPU) for this kind of computation; we use it to accelerate the Z-buffer algorithm and obtain a 7-fold speedup over the CPU. Experimental results demonstrate that the Z-buffer algorithm performs well in occlusion and shadow detection when combined with a high-density point cloud, and that the GPU speeds up the computation significantly.
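
    The GPU implementation itself is not described in the abstract; as a rough CPU-side sketch only, the core Z-buffer test keeps the closest DSM cell per image pixel and flags every farther cell projecting to the same pixel as occluded (replacing the camera by the sun direction turns the same test into shadow detection). The projection step and all names below are simplifying assumptions.

        import numpy as np

        def zbuffer_occlusion(depth, pixel_row, pixel_col, image_shape):
            """Flag DSM cells hidden from the projection centre.

            depth      : 1-D array, distance of each DSM cell to the projection
                         centre (the camera, or the sun for shadow detection).
            pixel_row, pixel_col : integer image coordinates each cell projects
                         to (computed beforehand, e.g. from the collinearity
                         equations, which are omitted here).
            Returns a boolean array that is True where a closer cell claims the
            same image pixel.
            """
            zbuf = np.full(image_shape, np.inf)
            np.minimum.at(zbuf, (pixel_row, pixel_col), depth)  # closest hit per pixel
            return depth > zbuf[pixel_row, pixel_col] + 1e-6    # strictly farther => occluded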

  14. Emergency Response Imagery Related to Hurricanes Harvey, Irma, and Maria

    NASA Astrophysics Data System (ADS)

    Worthem, A. V.; Madore, B.; Imahori, G.; Woolard, J.; Sellars, J.; Halbach, A.; Helmricks, D.; Quarrick, J.

    2017-12-01

    NOAA's National Geodetic Survey (NGS) Remote Sensing Division acquired and rapidly disseminated emergency response imagery related to the three recent hurricanes Harvey, Irma, and Maria. Aerial imagery was collected with a Trimble Digital Sensor System, a high-resolution digital camera, flown on NOAA's King Air 350ER and DeHavilland Twin Otter (DHC-6) aircraft. The emergency response images are used to assess the before-and-after effects of hurricane damage. The imagery aids emergency responders, such as FEMA, the Coast Guard, and state and local governments, in developing recovery strategies by prioritizing the areas most affected and distributing appropriate resources. The collected imagery also supports damage assessment for long-term recovery and rebuilding efforts, and it allows evacuated persons to see images of their homes and neighborhoods remotely. Each individual image is ortho-rectified and the images are merged into a uniform mosaic. These remotely sensed datasets are publicly available and are often used by web-based map servers as well as federal, state, and local government agencies. This poster shows the imagery collected for these three hurricanes and the processes involved in getting the data quickly into the hands of those who need it most.

  15. The Extraction of Terrace in the Loess Plateau Based on radial method

    NASA Astrophysics Data System (ADS)

    Liu, W.; Li, F.

    2016-12-01

    Terraces on the Loess Plateau are a typical artificial landform and an important soil and water conservation measure; locating and automatically extracting them would simplify land use investigation. Existing terrace extraction methods include visual interpretation and automatic extraction. The manual method is used in land use investigation, but it is time-consuming and laborious. Researchers have therefore proposed several automatic extraction methods. For example, the Fourier transform method can recognize terraces and find their position from the frequency-domain image, but it is strongly affected by linear objects oriented in the same direction as the terraces. Texture analysis is simple and widely applicable in image processing, but it cannot recognize terrace edges. Object-oriented classification is a newer image classification method, but when applied to terrace extraction it produces fragmented polygons whose geomorphological meaning is difficult to interpret. To locate terraces, we use high-resolution remote sensing imagery and analyze the gray values of the pixels that radial lines pass through. In the recognition process, we first roughly determine the positions of peak points from DEM analysis or by manual selection; second, we take each peak point as a center and cast radial lines in all directions; finally, we extract the gray values of the pixels along each radial line and analyze their variation to decide whether a terrace is present. To obtain accurate terrace positions, terrace discontinuity, extension direction, ridge width, the image processing algorithm, the illumination of the remote sensing image and other influencing factors were fully considered when designing the algorithms.
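
    The paper's detection criteria are not given in the abstract; as a hedged sketch, sampling gray values along a radial line from a peak point and testing the profile for quasi-periodic oscillation illustrates the idea. The sampling scheme, the oscillation test and all names below are assumptions.

        import numpy as np

        def radial_profile(gray, center_rc, angle_deg, length, step=1.0):
            """Sample image gray values along a radial line from a peak point.

            gray      : 2-D grayscale image as a float array.
            center_rc : (row, col) coordinates of the peak point.
            angle_deg : direction of the radial line in degrees.
            length    : number of samples taken along the line.
            """
            theta = np.deg2rad(angle_deg)
            t = np.arange(length) * step
            rows = np.clip(np.round(center_rc[0] + t * np.sin(theta)).astype(int),
                           0, gray.shape[0] - 1)
            cols = np.clip(np.round(center_rc[1] + t * np.cos(theta)).astype(int),
                           0, gray.shape[1] - 1)
            return gray[rows, cols]

        def looks_terraced(profile, min_oscillations=5):
            """Crude test: count sign changes of the detrended gray profile."""
            detrended = profile - profile.mean()
            sign_changes = np.count_nonzero(np.diff(np.sign(detrended)) != 0)
            return sign_changes >= 2 * min_oscillations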

  16. Fiber-Optic Surface Temperature Sensor Based on Modal Interference.

    PubMed

    Musin, Frédéric; Mégret, Patrice; Wuilpart, Marc

    2016-07-28

    Spatially integrated surface temperature sensing is highly useful for controlling processes, detecting hazardous conditions, and monitoring the health and safety of equipment and people. Fiber-optic sensing based on modal interference has shown great sensitivity to temperature variation by means of cost-effective image processing of few-mode interference patterns. New developments in sensor configuration described in this paper include an innovative discrimination between cooling and heating phases and more precise measurements, based entirely on image processing of the interference patterns. The proposed technique was applied to measuring the integrated surface temperature of a hollow cylinder and compared with a conventional measurement system consisting of an infrared camera and a precision temperature probe. The optical technique agrees with the reference system. Compared with conventional surface temperature probes, it offers measurement errors kept low by the fiber's small heat capacity, easier spatial deployment, a replacement for multi-angle infrared camera imaging, and continuous monitoring of surfaces that are not visually accessible.

  17. Breadboard linear array scan imager using LSI solid-state technology

    NASA Technical Reports Server (NTRS)

    Tracy, R. A.; Brennan, J. A.; Frankel, D. G.; Noll, R. E.

    1976-01-01

    The performance of large-scale integration (LSI) photodiode arrays in a linear array scan (pushbroom) breadboard was evaluated for application to multispectral remote sensing of the Earth's resources. The technical approach, implementation, and test results of the program are described. Several self-scanned linear array visible photodetector focal plane arrays were fabricated and evaluated in an optical bench configuration. A 1728-detector array operating in four bands (0.5-1.1 micrometers) was evaluated for noise, spectral response, dynamic range, crosstalk, MTF, noise-equivalent irradiance, linearity, and image quality. Other results include image artifact data, temporal characteristics, radiometric accuracy, calibration experience, chip alignment, and array fabrication experience. Special studies and experimentation covered long-array fabrication and real-time image processing for low-cost ground stations, including the use of computer image processing. High-quality images were produced and all objectives of the program were attained.

  18. Efficiency of the spectral-spatial classification of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Borzov, S. M.; Potaturkin, O. I.

    2017-01-01

    The efficiency of methods for the spectral-spatial classification of similar-looking types of vegetation, which take into account the local neighborhoods of the analyzed image pixels, is studied experimentally on hyperspectral Earth remote sensing data. Algorithms involving spatial pre-processing of the raw data and post-processing of pixel-based spectral classification maps are considered. Results are reported both for a large hyperspectral image and for a test fragment of it, with different methods of training set construction. In all cases the classification accuracy is estimated by comparing ground-truth data with the classification maps produced by the compared methods. The reasons for the differences between these estimates are discussed.
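
    The specific pre- and post-processing algorithms are not detailed in the abstract; purely as an illustration of the post-processing idea, a simple spatial regularization replaces each pixel's label with the most frequent label in its local neighborhood. The window size and names below are assumptions.

        import numpy as np
        from scipy.ndimage import generic_filter

        def majority_filter(label_map, size=3):
            """Replace each label with the most frequent label in its window.

            label_map : 2-D array of non-negative integer class labels.
            size      : side length of the square neighborhood.
            """
            def _mode(window):
                return np.bincount(window.astype(int)).argmax()
            return generic_filter(label_map, _mode, size=size, mode="nearest")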

  19. Two-dimensional signal processing with application to image restoration

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1974-01-01

    A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.
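
    The report derives its dynamical model from the autocorrelation of the scanner output, which is not reproduced here; as a hedged sketch under an assumed first-order (AR(1)) signal model, a scalar discrete Kalman filter applied to one scan line illustrates the estimator. The coefficients and names below are illustrative assumptions.

        import numpy as np

        def kalman_denoise_line(z, a=0.95, q=0.05, r=0.5):
            """Scalar discrete Kalman filter along one scan line.

            z : 1-D array of noisy pixel values (one image row).
            a : AR(1) state transition coefficient (assumed signal model).
            q : process noise variance; r : measurement noise variance.
            """
            x_hat = np.zeros_like(z, dtype=float)
            x, p = z[0], 1.0                      # initial state and covariance
            for k, zk in enumerate(z):
                # Predict
                x_pred = a * x
                p_pred = a * a * p + q
                # Update with the measurement zk
                gain = p_pred / (p_pred + r)
                x = x_pred + gain * (zk - x_pred)
                p = (1.0 - gain) * p_pred
                x_hat[k] = x
            return x_hat

        # Applied row by row, this enhances the scanned image under the assumed model:
        # restored = np.vstack([kalman_denoise_line(row) for row in noisy_image])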

  20. Images of a Loving God and Sense of Meaning in Life

    ERIC Educational Resources Information Center

    Stroope, Samuel; Draper, Scott; Whitehead, Andrew L.

    2013-01-01

    Although prior studies have documented a positive association between religiosity and sense of meaning in life, the role of specific religious beliefs is currently unclear. Past research on images of God suggests that loving images of God will positively correlate with a sense of meaning and purpose. Mechanisms for this hypothesized relationship…
