Sample records for sensing image generation

  1. Exploring Models and Data for Remote Sensing Image Caption Generation

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoqiang; Wang, Binqiang; Zheng, Xiangtao; Li, Xuelong

    2018-04-01

    Inspired by the recent development of artificial satellites, remote sensing images have attracted extensive attention. Recently, noticeable progress has been made in scene classification and target detection. However, it is still not clear how to describe remote sensing image content with accurate and concise sentences. In this paper, we investigate how to describe remote sensing images with accurate and flexible sentences. First, some annotation instructions are presented to better describe remote sensing images, considering their special characteristics. Second, in order to exhaustively exploit the contents of remote sensing images, a large-scale aerial image data set is constructed for remote sensing image captioning. Finally, a comprehensive review is presented on the proposed data set to fully advance the task of remote sensing captioning. Extensive experiments on the proposed data set demonstrate that the content of a remote sensing image can be completely described by generating language descriptions. The data set is available at https://github.com/201528014227051/RSICD_optimal

  2. The least-squares mixing models to generate fraction images derived from remote sensing multispectral data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1991-01-01

    Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment considering three components within the pixels (eucalyptus, soil understory, and shade) was performed. The fraction images generated for shade (shade images) by the two methods were compared in terms of performance and computer time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in a pixel is related to different eucalyptus ages.
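
    A sum-to-one constrained least-squares mixing model of the kind described can be sketched as follows; the three-component setup mirrors the eucalyptus/soil/shade experiment, but the endmember spectra below are synthetic assumptions, and non-negativity is not enforced in this minimal version:

```python
import numpy as np

def sum_to_one_unmix(E, p):
    """Constrained least squares: minimize ||E f - p||^2 subject to sum(f) = 1.
    E is a (bands, components) endmember matrix; p is a (bands,) pixel spectrum."""
    G = E.T @ E
    f_u = np.linalg.solve(G, E.T @ p)        # unconstrained least-squares fractions
    Gi1 = np.linalg.solve(G, np.ones(E.shape[1]))
    # closed-form Lagrange correction that restores the sum-to-one constraint
    return f_u + Gi1 * (1.0 - f_u.sum()) / Gi1.sum()

# synthetic endmembers (rows: spectral bands; columns: eucalyptus, soil, shade)
E = np.array([[0.9, 0.3, 0.05],
              [0.6, 0.5, 0.05],
              [0.4, 0.7, 0.10],
              [0.2, 0.8, 0.15]])
pixel = E @ np.array([0.5, 0.3, 0.2])        # 50% eucalyptus, 30% soil, 20% shade
fractions = sum_to_one_unmix(E, pixel)
```

    A weighted-least-squares variant simply replaces `G` and `E.T @ p` with band-weighted versions.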

  3. Remote Sensing: Analyzing Satellite Images to Create Higher Order Thinking Skills.

    ERIC Educational Resources Information Center

    Marks, Steven K.; And Others

    1996-01-01

    Presents a unit that uses remote-sensing images from satellites and other spacecraft to provide new perspectives of the earth and generate greater global awareness. Relates the levels of Bloom's hierarchy to different aspects of the remote sensing unit to confirm that the concepts and principles of remote sensing and related images belong in…

  4. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    NASA Astrophysics Data System (ADS)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    It remains a challenging task to efficiently produce planetary mapping products from orbital remote sensing images. Photogrammetric processing of planetary stereo images suffers from many disadvantages, such as a lack of ground control information and of informative features; among these, image matching is the most difficult job in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM and orthophoto scheme was adopted in the DTM generation process, which helps reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.

  5. MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification

    NASA Astrophysics Data System (ADS)

    Lin, Daoyu; Fu, Kun; Wang, Yang; Xu, Guangluan; Sun, Xian

    2017-11-01

    With the development of deep learning, supervised learning has frequently been adopted to classify remotely sensed images using convolutional neural networks (CNNs). However, due to the limited amount of labeled data available, supervised learning is often difficult to carry out. Therefore, we propose an unsupervised model called multiple-layer feature-matching generative adversarial networks (MARTA GANs) to learn a representation using only unlabeled data. MARTA GANs consists of both a generative model $G$ and a discriminative model $D$. We treat $D$ as a feature extractor. To fit the complex properties of remote sensing data, we use a fusion layer to merge mid-level and global features. $G$ can produce numerous images that are similar to the training data; therefore, $D$ can learn better representations of remotely sensed images using the training data provided by $G$. The classification results on two widely used remote sensing image databases show that the proposed method significantly improves classification performance compared with other state-of-the-art methods.

  6. Object-oriented recognition of high-resolution remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan

    2016-01-01

    With the development of remote sensing imaging technology and the improvement of multi-source image resolution in the satellite visible-light, multispectral, and hyperspectral domains, high resolution remote sensing images have been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting, the environment, and so forth. In remote sensing imagery, the segmentation of ground targets, feature extraction, and automatic recognition are hotspots and difficulties in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of typical vehicle object classification generation, nonparametric density estimation theory, mean-shift segmentation theory, a multi-scale corner detection algorithm, and a template-based local shape matching algorithm. A remote sensing vehicle image classification software system is designed and implemented to meet these requirements.

  7. System and method for optical fiber based image acquisition suitable for use in turbine engines

    DOEpatents

    Baleine, Erwan; A V, Varun; Zombo, Paul J.; Varghese, Zubin

    2017-05-16

    A system and a method for image acquisition suitable for use in a turbine engine are disclosed. Light received from a field of view in an object plane is projected onto an image plane through an optical modulation device and is transferred through an image conduit to a sensor array. The sensor array generates a set of sampled image signals in a sensing basis based on light received from the image conduit. Finally, the sampled image signals are transformed from the sensing basis to a representation basis and a set of estimated image signals are generated therefrom. The estimated image signals are used for reconstructing an image and/or a motion-video of a region of interest within a turbine engine.
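
    The transform from a sensing basis to a representation basis can be illustrated with a least-squares estimate; the random modulation matrix below is a stand-in assumption for the patent's actual optical modulation device:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 64                                    # pixels in the region of interest
scene = rng.random(n)                     # light field at the object plane
Phi = rng.standard_normal((n, n))         # sensing basis: modulation + fiber conduit
samples = Phi @ scene                     # sampled image signals at the sensor array

# transform back: estimated image signals in the representation basis
estimate, *_ = np.linalg.lstsq(Phi, samples, rcond=None)
```

    With fewer samples than pixels, the same estimate would be regularized (e.g. a sparsity prior), which is where this scheme pays off in a harsh turbine environment.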

  8. System and method for object localization

    NASA Technical Reports Server (NTRS)

    Kelly, Alonzo J. (Inventor); Zhong, Yu (Inventor)

    2005-01-01

    A computer-assisted method for localizing a rack, including sensing an image of the rack, detecting line segments in the sensed image, recognizing a candidate arrangement of line segments in the sensed image indicative of a predetermined feature of the rack, generating a matrix of correspondence between the candidate arrangement of line segments and an expected position and orientation of the predetermined feature of the rack, and estimating a position and orientation of the rack based on the matrix of correspondence.

  9. The quantitative control and matching of an optical false color composite imaging system

    NASA Astrophysics Data System (ADS)

    Zhou, Chengxian; Dai, Zixin; Pan, Xizhe; Li, Yinxi

    1993-10-01

    The design of an imaging system for optical false color composites (OFCC) capable of high-precision density-exposure time control and color balance is presented. The system provides high quality FCC image data that can be analyzed using a quantitative calculation method. The quality requirement for each part of the image generation system is defined, and the distribution of satellite remote sensing image information is analyzed. The proposed technology makes it possible to present remote sensing image data more effectively and accurately.
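
    Density-exposure control of this kind can be sketched with the Hurter-Driffield relation, where density grows linearly with log exposure over the film's straight-line region; the gamma value and reference exposure below are illustrative assumptions, not the paper's calibration:

```python
import math

def density(exposure, gamma=2.0, e_ref=1.0, d_min=0.1):
    """Straight-line region of the H&D curve: D = d_min + gamma * log10(E / E_ref)."""
    return d_min + gamma * math.log10(exposure / e_ref)

def exposure_for(target_d, gamma=2.0, e_ref=1.0, d_min=0.1):
    """Invert the relation: exposure (intensity x time) needed for a target density."""
    return e_ref * 10 ** ((target_d - d_min) / gamma)
```

    Balancing the three color layers then amounts to choosing per-layer exposure times so the three density curves match.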

  10. Automatic Large-Scale 3D Building Shape Refinement Using Conditional Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Bittner, K.; d'Angelo, P.; Körner, M.; Reinartz, P.

    2018-05-01

    Three-dimensional building reconstruction from remote sensing imagery is one of the most difficult and important 3D modeling problems for complex urban environments. In remote sensing, the main data sources providing a digital representation of the Earth's surface and the related natural, cultural, and man-made objects of urban areas are digital surface models (DSMs). DSMs can be obtained by light detection and ranging (LIDAR), SAR interferometry, or from stereo images. Our approach relies on automatic global 3D building shape refinement from stereo DSMs using deep learning techniques. This refinement is necessary because DSMs extracted from image matching point clouds suffer from occlusions, outliers, and noise. Though most previous works have shown promising results for building modeling, this topic remains an open research area. We present a new methodology which not only generates images with continuous values representing the elevation models but, at the same time, enhances the shapes of 3D objects, buildings in our case. Specifically, we train a conditional generative adversarial network (cGAN) to generate accurate LIDAR-like DSM height images from noisy stereo DSM input. The obtained results demonstrate the strong potential of creating large-area remote sensing depth images in which buildings exhibit better-quality shapes and roof forms.

  11. An object-based storage model for distributed remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng

    2006-10-01

    It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage, and storage area networks. Object-based storage, a new-generation network storage technology that emerged recently, separates the data path, the control path, and the management path, which solves the metadata bottleneck of traditional storage models, and offers parallel data access, data sharing across platforms, intelligent storage devices, and secure data access. We use object-based storage for the storage management of remote sensing images and construct an object-based storage model for distributed remote sensing images. In this model, remote sensing images are organized as remote sensing objects stored on object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give test results comparing the write performance of the traditional network storage model and the object-based storage model.

  12. ANALYZING THE CONSEQUENCES OF ENVIRONMENTAL SPATIAL PATTERNS ON ENVIRONMENTAL RESOURCES: THE USE OF LANDSCAPE METRICS GENERATED FROM REMOTE SENSING DATA

    EPA Science Inventory

    A number of existing and new remote sensing data provide images of areas ranging from small communities to continents. These images provide views on a wide range of physical features in the landscape, including vegetation, road infrastructure, urban areas, geology, soils, and wa...

  13. Translation-aware semantic segmentation via conditional least-square generative adversarial networks

    NASA Astrophysics Data System (ADS)

    Zhang, Mi; Hu, Xiangyun; Zhao, Like; Pang, Shiyan; Gong, Jinqi; Luo, Min

    2017-10-01

    Semantic segmentation has recently made rapid progress in the fields of remote sensing and computer vision. However, many leading approaches cannot simultaneously translate label maps into plausible source images with a limited number of training images. The core issues are insufficient adversarial information to model the inverse process and the lack of a proper objective loss function to overcome the vanishing gradient problem. We propose the use of conditional least-squares generative adversarial networks (CLS-GAN) to delineate visual objects and solve these problems. We train the CLS-GAN network for semantic segmentation to discriminate dense prediction information coming either from training images or from the generative network. We show that the optimal objective function of CLS-GAN is a special class of f-divergence and yields a generator that lies on the decision boundary of the discriminator, which mitigates the vanishing gradient. We also demonstrate the effectiveness of the proposed architecture at translating images from label maps during learning. Experiments on a limited number of high resolution images, including close-range and remote sensing datasets, indicate that the proposed method leads to improved semantic segmentation accuracy and can simultaneously generate high quality images from label maps.
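
    The least-squares adversarial objective at the heart of such networks can be written down directly; the scalars below are discriminator scores, and the 0/1 targets follow the standard least-squares GAN formulation (a generic sketch, not the paper's exact conditional loss):

```python
import numpy as np

def d_loss_ls(d_real, d_fake):
    """Discriminator loss: push scores on real data to 1, on generated data to 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def g_loss_ls(d_fake):
    """Generator loss: push discriminator scores on generated data toward 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)
```

    Because the penalty is quadratic in the score rather than a saturating log-loss, samples far from the decision boundary still receive usable gradients, which is the vanishing-gradient remedy the abstract refers to.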

  14. Secure distribution for high resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Sun, Jing; Xu, Zheng Q.

    2010-09-01

    The use of remote sensing images collected by space platforms is becoming more and more widespread. The increasing value of space data and its use in critical scenarios call for the adoption of proper security measures to protect these data against unauthorized access and fraudulent use. In this paper, based on the characteristics of remote sensing image data and application requirements for secure distribution, a secure distribution method is proposed, including user and region classification, hierarchical control and key generation, and region-based multi-level encryption. Combining the three parts, the same remote sensing image, after multi-level encryption, can be distributed to users with different permissions through multicast, while each user obtains only the degree of information allowed by their own decryption keys. This meets the user access control and security needs of high resolution remote sensing image distribution. The experimental results prove the effectiveness of the proposed method, which is suitable for practical use in the secure transmission over the internet of remote sensing images containing confidential information.
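
    Hierarchical key generation of the kind described is often built as a one-way hash chain; this sketch (SHA-256 chosen as an illustrative primitive, not necessarily the paper's) lets a higher-permission user derive all lower-level keys while the reverse direction is computationally infeasible:

```python
import hashlib

def derive_level_keys(master_key: bytes, levels: int):
    """Return one key per permission level, highest first. Holding the key for
    level i allows deriving keys for levels i+1..levels-1 by hashing forward."""
    keys = [hashlib.sha256(master_key).digest()]
    for _ in range(levels - 1):
        keys.append(hashlib.sha256(keys[-1]).digest())
    return keys

keys = derive_level_keys(b"distribution-master-secret", 3)
```

    Each permission level then encrypts its image regions with its own key, so a single multicast ciphertext decrypts to a different level of detail per user.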

  15. Image sensor with motion artifact suppression and anti-blooming

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Wrigley, Chris (Inventor); Yang, Guang (Inventor); Yadid-Pecht, Orly (Inventor)

    2006-01-01

    An image sensor includes pixels formed on a semiconductor substrate. Each pixel includes a photoactive region in the semiconductor substrate, a sense node, and a power supply node. A first electrode is disposed near a surface of the semiconductor substrate. A bias signal on the first electrode sets a potential in a region of the semiconductor substrate between the photoactive region and the sense node. A second electrode is disposed near the surface of the semiconductor substrate. A bias signal on the second electrode sets a potential in a region of the semiconductor substrate between the photoactive region and the power supply node. The image sensor includes a controller that causes bias signals to be provided to the electrodes so that photocharges generated in the photoactive region are accumulated in the photoactive region during a pixel integration period, the accumulated photocharges are transferred to the sense node during a charge transfer period, and photocharges generated in the photoactive region are transferred to the power supply node during a third period without passing through the sense node. The imager can operate at high shutter speeds with simultaneous integration of pixels in the array. High quality images can be produced free from motion artifacts. High quantum efficiency, good blooming control, low dark current, low noise and low image lag can be obtained.

  16. High speed CMOS imager with motion artifact suppression and anti-blooming

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor); Wrigley, Chris (Inventor); Yang, Guang (Inventor); Yadid-Pecht, Orly (Inventor)

    2001-01-01

    An image sensor includes pixels formed on a semiconductor substrate. Each pixel includes a photoactive region in the semiconductor substrate, a sense node, and a power supply node. A first electrode is disposed near a surface of the semiconductor substrate. A bias signal on the first electrode sets a potential in a region of the semiconductor substrate between the photoactive region and the sense node. A second electrode is disposed near the surface of the semiconductor substrate. A bias signal on the second electrode sets a potential in a region of the semiconductor substrate between the photoactive region and the power supply node. The image sensor includes a controller that causes bias signals to be provided to the electrodes so that photocharges generated in the photoactive region are accumulated in the photoactive region during a pixel integration period, the accumulated photocharges are transferred to the sense node during a charge transfer period, and photocharges generated in the photoactive region are transferred to the power supply node during a third period without passing through the sense node. The imager can operate at high shutter speeds with simultaneous integration of pixels in the array. High quality images can be produced free from motion artifacts. High quantum efficiency, good blooming control, low dark current, low noise and low image lag can be obtained.

  17. VTT's Fabry-Perot interferometer technologies for hyperspectral imaging and mobile sensing applications

    NASA Astrophysics Data System (ADS)

    Rissanen, Anna; Guo, Bin; Saari, Heikki; Näsilä, Antti; Mannila, Rami; Akujärvi, Altti; Ojanen, Harri

    2017-02-01

    VTT's Fabry-Perot interferometer (FPI) technology enables the creation of small and cost-efficient microspectrometers and hyperspectral imagers. These robust and lightweight sensors are currently finding their way into a variety of novel applications, including emerging medical products, automotive sensors, space instruments, and mobile sensing devices. This presentation gives an overview of our core FPI technologies and current advances in generating novel sensing applications, including recent mobile technology demonstrators of a hyperspectral iPhone and a mobile phone CO2 sensor, which aim to advance mobile spectroscopic sensing.

  18. Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.

    2016-05-01

    The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.

  19. Mathematics of Sensing, Exploitation, and Execution (MSEE) Hierarchical Representations for the Evaluation of Sensed Data

    DTIC Science & Technology

    2016-06-01

    Theories of the mammalian visual system, and exploiting descriptive text that may accompany a still image for improved inference; the focus of the Brown team was on single images. Subject terms: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics.

  20. Wavefront Sensing for WFIRST with a Linear Optical Model

    NASA Technical Reports Server (NTRS)

    Jurling, Alden S.; Content, David A.

    2012-01-01

    In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve an adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager, using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy, in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
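
    The joint fit over field points reduces to one stacked linear least-squares problem; the sensitivity tensor below is random, standing in for the ray-trace-derived linear optical model, and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_field, n_zern, n_align = 25, 12, 5
# linear optical model: wavefront coefficients at each field point are a
# linear function of the mechanical alignment parameters
A = rng.standard_normal((n_field, n_zern, n_align))
m_true = rng.standard_normal(n_align)          # true misalignment state
w = A @ m_true                                 # wavefronts at every field point

# joint optimization: stack all field points into a single least-squares solve
m_hat, *_ = np.linalg.lstsq(A.reshape(-1, n_align), w.ravel(), rcond=None)
```

    Solving for the few alignment parameters jointly, rather than for wavefront coefficients per field point, is what removes the need for separate per-field convergence.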

  1. Common-Path Wavefront Sensing for Advanced Coronagraphs

    NASA Technical Reports Server (NTRS)

    Wallace, J. Kent; Serabyn, Eugene; Mawet, Dimitri

    2012-01-01

    Imaging of faint companions around nearby stars is not limited by either intrinsic resolution of a coronagraph/telescope system, nor is it strictly photon limited. Typically, it is both the magnitude and temporal variation of small phase and amplitude errors imparted to the electric field by elements in the optical system which will limit ultimate performance. Adaptive optics systems, particularly those with multiple deformable mirrors, can remove these errors, but they need to be sensed in the final image plane. If the sensing system is before the final image plane, which is typical for most systems, then the non-common path optics between the wavefront sensor and science image plane will lead to un-sensed errors. However, a new generation of high-performance coronagraphs naturally lend themselves to wavefront sensing in the final image plane. These coronagraphs and the wavefront sensing will be discussed, as well as plans for demonstrating this with a high-contrast system on the ground. Such a system will be a key system-level proof for a future space-based coronagraph mission, which will also be discussed.

  2. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTFC algorithm for a space remote-sensing camera, based on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function, line spread function, ESF difference, normalized MTF, and MTFC parameters. The MTFC filtering module applies the restoration filter and effectively suppresses noise. System Generator was used to design the image processing algorithms, simplifying the system design structure and the redesign process. Image gray gradient, point sharpness, edge contrast, and mid-to-high frequency content were enhanced, while the SNR of the restored image decreased by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.
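
    The on-orbit MTF measurement chain (edge spread function to line spread function to MTF) can be sketched in a few lines; the ideal two-sample edge below is a synthetic assumption standing in for a measured edge profile:

```python
import numpy as np

def mtf_from_esf(esf):
    """Differentiate the edge spread function into the line spread function,
    then take the normalized magnitude of its Fourier transform."""
    lsf = np.gradient(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

edge = np.r_[np.zeros(32), np.ones(32)]   # synthetic edge profile
mtf = mtf_from_esf(edge)
```

    An MTFC filter then boosts each spatial frequency by roughly the inverse of this curve, subject to noise suppression.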

  3. The effects of SENSE on PROPELLER imaging.

    PubMed

    Chang, Yuchou; Pipe, James G; Karis, John P; Gibbs, Wende N; Zwart, Nicholas R; Schär, Michael

    2015-12-01

    To study how sensitivity encoding (SENSE) impacts periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) imaging, including signal-to-noise ratio (SNR), robustness to motion, precision of motion estimation, and overall image quality. Five volunteers were imaged with three sets of scans. A rapid method for generating the g-factor map was proposed and validated via Monte Carlo simulations. Sensitivity maps were extrapolated to increase the area over which SENSE can be performed and therefore enhance robustness to head motion. The precision of motion estimation of PROPELLER blades that are unfolded with these sensitivity maps was investigated. An interleaved R-factor PROPELLER sequence was used to acquire data with similar amounts of motion with and without SENSE acceleration. Two neuroradiologists independently and blindly compared 214 image pairs. The proposed method of g-factor calculation agreed with that provided by the Monte Carlo methods. Extrapolation and rotation of the sensitivity maps allowed for continued robustness of SENSE unfolding in the presence of motion. SENSE-widened blades improved the precision of rotation and translation estimation. PROPELLER images with a SENSE factor of 3 outperformed the traditional PROPELLER images when reconstructing the same number of blades. SENSE not only accelerates PROPELLER but can also improve robustness and precision of head motion correction, which improves overall image quality even when SNR is lost due to acceleration. The reduction of SNR, as a penalty of acceleration, is characterized by the proposed g-factor method. © 2014 Wiley Periodicals, Inc.
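
    The SNR penalty characterized by the g-factor follows the standard SENSE relation; the numbers below are illustrative, not values from the study:

```python
def sense_snr(snr_full, g, R):
    """SNR after SENSE acceleration: SNR_full / (g * sqrt(R)),
    where R is the acceleration factor and g >= 1 the geometry factor."""
    return snr_full / (g * R ** 0.5)

# e.g. R = 3 acceleration with a modest g-factor of 1.2
snr = sense_snr(100.0, 1.2, 3.0)
```

    The g-factor map makes this penalty spatially resolved: unfolding is cheap where coil sensitivities differ strongly and costly where they are similar.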

  4. SenseCam improves memory for recent events and quality of life in a patient with memory retrieval difficulties.

    PubMed

    Browne, Georgina; Berry, Emma; Kapur, Narinder; Hodges, Steve; Smyth, Gavin; Watson, Peter; Wood, Ken

    2011-10-01

    A wearable camera that takes pictures automatically, SenseCam, was used to generate images for rehearsal, promoting consolidation and retrieval of memories for significant events in a patient with memory retrieval deficits. SenseCam images of recent events were systematically reviewed over a 2-week period. Memory for these events was assessed throughout and longer-term recall was tested up to 6 months later. A written diary control condition followed the same procedure. The SenseCam review procedure resulted in significantly more details of an event being recalled, with twice as many details recalled at 6 months follow up compared to the written diary method. Self-report measures suggested autobiographical recollection was triggered by the SenseCam condition but not by reviewing the written diary. Emotional and social wellbeing questionnaires indicated improved confidence and decreased anxiety as a result of memory rehearsal using SenseCam images. We propose that SenseCam images provide a powerful boost to autobiographical recall, with secondary benefits for quality of life.

  5. Automated seamline detection along skeleton for remote sensing image mosaicking

    NASA Astrophysics Data System (ADS)

    Zhang, Hansong; Chen, Jianyu; Liu, Xin

    2015-08-01

    The automatic generation of a seamline along the overlap region skeleton is a key problem in the mosaicking of remote sensing (RS) images. As RS image resolution improves, it is necessary to ensure rapid and accurate processing under complex conditions. An automated seamline detection method for RS image mosaicking, based on image objects and overlap region contour contraction, is therefore introduced to ensure the universality and efficiency of mosaicking. Experiments show that this method can select seamlines in RS images quickly and with high accuracy over arbitrary overlap regions, enabling rapid RS image mosaicking in surveying and mapping production.
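
    Seamline selection over an overlap region is often posed as a minimum-cost path problem; this dynamic-programming sketch (a generic seam-carving-style solver, not the paper's contour-contraction method) finds a top-to-bottom seam through a cost map such as the absolute difference of the two overlapping orthoimages:

```python
import numpy as np

def vertical_seam(cost):
    """Return, for each row, the column of a minimum-cost 8-connected seam."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, acc[i - 1, :-1]]
        right = np.r_[acc[i - 1, 1:], np.inf]
        acc[i] += np.minimum(np.minimum(left, right), acc[i - 1])
    seam = [int(np.argmin(acc[-1]))]
    for i in range(h - 2, -1, -1):           # backtrack upward
        j = seam[-1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam.append(lo + int(np.argmin(acc[i, lo:hi])))
    return seam[::-1]

# the seam follows the zero-difference column of a toy overlap cost map
overlap_cost = np.ones((5, 5))
overlap_cost[:, 2] = 0.0
seam = vertical_seam(overlap_cost)
```

    Cutting the mosaic along such a low-difference path keeps the stitch invisible; restricting the search to the skeleton of the overlap region, as the paper does, shrinks the search space further.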

  6. Remote Sensing: The View from Above. Know Your Environment.

    ERIC Educational Resources Information Center

    Academy of Natural Sciences, Philadelphia, PA.

    This publication identifies some of the general concepts of remote sensing and explains the image collection process and computer-generated reconstruction of the data. Monitoring the ecological collapse in coral reefs, weather phenomena like El Nino/La Nina, and U.S. Space Shuttle-based sensing projects are some of the areas for which remote…

  7. Object-oriented remote sensing image classification method based on geographic ontology model

    NASA Astrophysics Data System (ADS)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of optimizing classification results through algorithm improvements alone. To this end, the paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment for urban feature classification. The experiment uses the Protégé software developed by Stanford University and the intelligent image analysis software eCognition as the experimental platform, with hyperspectral imagery and Lidar data acquired by flight over DaFeng City, JiangSu, as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and associated indices. Second, the Lidar data are used to generate an nDSM (Normalized DSM, Normalized Digital Surface Model), which provides elevation information. Finally, the image feature knowledge, indices, and elevation information are combined to build the geographic ontology semantic network model that performs urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, most evidently for building classification.
The method not only exploits the advantages of multi-source spatial data such as remote sensing imagery and Lidar data, but also realizes the integration of multi-source spatial data knowledge and its application to remote sensing image classification, which provides an effective way forward for object-oriented remote sensing image classification.
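The nDSM step above is conventionally the subtraction of a bare-earth ground model (DTM) from the Lidar-derived surface model (DSM). A minimal numpy sketch; the raster values and the 2.5 m height threshold are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Toy 3x3 rasters in metres: DSM includes canopy/buildings, DTM is bare earth.
dsm = np.array([[12.0, 15.0, 30.0],
                [12.5, 14.0, 31.0],
                [12.0, 13.0, 12.5]])
dtm = np.array([[12.0, 12.5, 12.0],
                [12.5, 12.0, 12.5],
                [12.0, 12.5, 12.5]])

# nDSM = DSM - DTM gives object height above ground; clip small negatives
# caused by interpolation noise.
ndsm = np.clip(dsm - dtm, 0.0, None)

# A simple height threshold separates elevated objects (e.g. buildings)
# from ground-level classes before ontology-based reasoning.
building_mask = ndsm > 2.5
print(ndsm)
print(building_mask)
```

In the paper's workflow this elevation layer is one input, alongside spectral feature knowledge, to the ontology semantic network rather than a classifier on its own.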

  8. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    NASA Astrophysics Data System (ADS)

    Hong, Liang

    2013-10-01

The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric detail can be observed in high-resolution imagery, and ground objects display rich texture, structure, shape, and hierarchical semantic characteristics; more landscape elements are represented by small groups of pixels. In recent years, object-based remote sensing analysis has become widely accepted and applied in high-resolution image processing. This paper presents a classification method based on geo-ontology and conditional random fields. The proposed method consists of four steps: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmented regions; (3) the relations between the hierarchical ground-object semantics and the over-segmented regions are defined within a conditional random fields framework; and (4) hierarchical classification results are obtained based on the geo-ontology and conditional random fields. Finally, high-resolution GeoEye imagery is used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies it is suitable for the classification of high-resolution remote sensing imagery.

  9. Object-Oriented Classification of Sugarcane Using Time-Series Middle-Resolution Remote Sensing Data Based on AdaBoost

    PubMed Central

    Zhou, Zhen; Huang, Jingfeng; Wang, Jing; Zhang, Kangyu; Kuang, Zhaomin; Zhong, Shiquan; Song, Xiaodong

    2015-01-01

Most areas planted with sugarcane are located in southern China. However, remote sensing of sugarcane has been limited because usable data are scarce, owing to the cloudy climate of this region during the growing season and severe spectral mixing with other crops. In this study, we developed a methodology for automatically mapping sugarcane over large areas using time-series middle-resolution remote sensing data. For this purpose, two major techniques were used: the object-oriented method (OOM) and data mining (DM). In addition, time-series Chinese HJ-1 CCD images were obtained during the sugarcane growing period. Image objects were generated using a multi-resolution segmentation algorithm, and DM was implemented using the AdaBoost algorithm, which generated the prediction model. The prediction model was applied to the HJ-1 CCD time-series image objects, and then a map of the sugarcane planting area was produced. The classification accuracy was evaluated using independent field survey sampling points. The confusion matrix analysis showed that the overall classification accuracy reached 93.6% and that the Kappa coefficient was 0.85. Thus, the results showed that this method is feasible, efficient, and applicable for extrapolating the classification of other crops in large areas where the application of high-resolution remote sensing data is impractical due to financial considerations or because qualified images are limited. PMID:26528811
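The reported overall accuracy and Kappa coefficient follow directly from the confusion matrix. A minimal numpy sketch with an illustrative two-class matrix (not the paper's actual counts):

```python
import numpy as np

# Rows: reference class, columns: predicted class (illustrative counts).
cm = np.array([[86, 4],
               [ 6, 54]], dtype=float)

n = cm.sum()
overall_accuracy = np.trace(cm) / n

# Expected chance agreement from the row/column marginals.
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (overall_accuracy - p_e) / (1 - p_e)

print(round(overall_accuracy, 3), round(kappa, 3))
```

Kappa discounts agreement expected by chance, which is why it is reported alongside overall accuracy for map validation.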

  10. Object-Oriented Classification of Sugarcane Using Time-Series Middle-Resolution Remote Sensing Data Based on AdaBoost.

    PubMed

    Zhou, Zhen; Huang, Jingfeng; Wang, Jing; Zhang, Kangyu; Kuang, Zhaomin; Zhong, Shiquan; Song, Xiaodong

    2015-01-01

Most areas planted with sugarcane are located in southern China. However, remote sensing of sugarcane has been limited because usable data are scarce, owing to the cloudy climate of this region during the growing season and severe spectral mixing with other crops. In this study, we developed a methodology for automatically mapping sugarcane over large areas using time-series middle-resolution remote sensing data. For this purpose, two major techniques were used: the object-oriented method (OOM) and data mining (DM). In addition, time-series Chinese HJ-1 CCD images were obtained during the sugarcane growing period. Image objects were generated using a multi-resolution segmentation algorithm, and DM was implemented using the AdaBoost algorithm, which generated the prediction model. The prediction model was applied to the HJ-1 CCD time-series image objects, and then a map of the sugarcane planting area was produced. The classification accuracy was evaluated using independent field survey sampling points. The confusion matrix analysis showed that the overall classification accuracy reached 93.6% and that the Kappa coefficient was 0.85. Thus, the results showed that this method is feasible, efficient, and applicable for extrapolating the classification of other crops in large areas where the application of high-resolution remote sensing data is impractical due to financial considerations or because qualified images are limited.

  11. Impact Induced Delamination Detection and Quantification With Guided Wavefield Analysis

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Leckey, Cara A. C.; Yu, Lingyu; Seebo, Jeffrey P.

    2015-01-01

This paper studies impact-induced delamination detection and quantification using guided wavefield data and spatial wavenumber imaging. A complex-geometry, impact-like delamination is created through quasi-static indentation on a CFRP plate. To detect and quantify the impact delamination in the CFRP plate, PZT-SLDV sensing and spatial wavenumber imaging are performed. In the PZT-SLDV sensing, guided waves are generated by the PZT, and high-spatial-resolution guided wavefields are measured by the SLDV. The wavefield data acquired from the PZT-SLDV sensing represent guided wave propagation in the composite laminate and include guided wave interaction with the delamination damage. The measured wavefields are analyzed with the spatial wavenumber imaging method, which generates an image containing the dominant local wavenumber at each spatial location. For a simple single-layer Teflon-insert delamination, the spatial wavenumber imaging result provided quantitative information on damage size and location; the delamination is indicated by the area of larger wavenumbers in the spatial wavenumber image. For the impact-like delamination, the results only partially agreed with the actual damage size and shape, and also demonstrated a dependence on excitation frequency. Future work will further investigate the accuracy of the wavenumber imaging method for real composite damage and its dependence on excitation frequency.
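The core of spatial wavenumber imaging is a windowed spatial FFT of the measured wavefield, keeping the dominant wavenumber at each location; a delaminated region, where guided waves shorten in wavelength, then appears as a jump in wavenumber. A much-simplified 1-D numpy sketch with synthetic data (the actual method operates on 2-D SLDV wavefields; all values here are illustrative):

```python
import numpy as np

dx = 1e-3                      # spatial sampling (m)
x = np.arange(1024) * dx
k1, k2 = 200.0, 500.0          # wavenumbers (rad/m): pristine vs. "delaminated"
field = np.where(x < 0.5, np.sin(k1 * x), np.sin(k2 * x))

win = 128                      # window length (samples)
dominant_k = []
for start in range(0, len(field) - win, win):
    seg = field[start:start + win] * np.hanning(win)   # windowed segment
    spec = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(win, d=dx)                 # spatial freq, cycles/m
    dominant_k.append(2 * np.pi * freqs[np.argmax(spec)])  # rad/m

dominant_k = np.array(dominant_k)
print(dominant_k)
```

Windows over the second half of the line report the larger wavenumber, which is exactly how the damaged area is localized in the 2-D case.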

  12. Mobile Phones Democratize and Cultivate Next-Generation Imaging, Diagnostics and Measurement Tools

    PubMed Central

    Ozcan, Aydogan

    2014-01-01

    In this article, I discuss some of the emerging applications and the future opportunities and challenges created by the use of mobile phones and their embedded components for the development of next-generation imaging, sensing, diagnostics and measurement tools. The massive volume of mobile phone users, which has now reached ~7 billion, drives the rapid improvements of the hardware, software and high-end imaging and sensing technologies embedded in our phones, transforming the mobile phone into a cost-effective and yet extremely powerful platform to run e.g., biomedical tests and perform scientific measurements that would normally require advanced laboratory instruments. This rapidly evolving and continuing trend will help us transform how medicine, engineering and sciences are practiced and taught globally. PMID:24647550

  13. NASA Fluid Lensing & MiDAR: Next-Generation Remote Sensing Technologies for Aquatic Remote Sensing

    NASA Technical Reports Server (NTRS)

    Chirayath, Ved

    2018-01-01

We present two recent instrument technology developments at NASA, Fluid Lensing and MiDAR, and their application to remote sensing of Earth's aquatic systems. Fluid Lensing is the first remote sensing technology capable of imaging through ocean waves in 3D at sub-cm resolutions. MiDAR is a next-generation active hyperspectral remote sensing and optical communications instrument capable of active fluid lensing. Fluid Lensing has been used to provide 3D multispectral imagery of shallow marine systems from unmanned aerial vehicles (UAVs, or drones), including coral reefs in American Samoa and stromatolite reefs in Hamelin Pool, Western Australia. MiDAR is being deployed on aircraft and underwater remotely operated vehicles (ROVs) to enable a new method for remote sensing of living and nonliving structures in extreme environments. MiDAR images targets with high-intensity narrowband structured optical radiation to measure an object's non-linear spectral reflectance, image through fluid interfaces such as ocean waves with active fluid lensing, and simultaneously transmit high-bandwidth data. As an active instrument, MiDAR is capable of remotely sensing reflectance at the centimeter (cm) spatial scale with a signal-to-noise ratio (SNR) multiple orders of magnitude higher than passive airborne and spaceborne remote sensing systems, with significantly reduced integration time. This allows for rapid video-frame-rate hyperspectral sensing into the far ultraviolet and VNIR wavelengths. Previously, MiDAR was developed into a TRL 2 laboratory instrument capable of imaging in thirty-two narrowband channels across the VNIR spectrum (400-950 nm).
Recently, MiDAR UV was raised to TRL 4 and expanded to include five ultraviolet bands from 280-400 nm, permitting UV remote sensing capabilities in the UV A, B, and C bands and enabling mineral identification and stimulated fluorescence measurements of organic proteins and compounds, such as green fluorescent proteins, in terrestrial and aquatic organisms.

  14. Pan Sharpening Quality Investigation of Turkish In-Operation Remote Sensing Satellites: Applications with Rasat and GÖKTÜRK-2 Images

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Topan, Hüseyin; Cam, Ali; Bayık, Çağlar

    2016-10-01

Recently, two optical remote sensing satellites, RASAT and GÖKTÜRK-2, were successfully launched by the Republic of Turkey. RASAT has a 7.5 m panchromatic band and 15 m visible bands, whereas GÖKTÜRK-2 has a 2.5 m panchromatic band and 5 m VNIR (Visible and Near Infrared) bands. Bands with different resolutions can be fused by pan-sharpening, an important application area of optical remote sensing imagery, so that the high geometric resolution of the panchromatic band and the high spectral resolution of the VNIR bands are merged. Many pan-sharpening methods exist in the literature; however, there is no standard framework for quality investigation of pan-sharpened imagery. The aim of this study is to investigate the pan-sharpening performance of RASAT and GÖKTÜRK-2 images. For this purpose, pan-sharpened images are first generated using the most popular pan-sharpening methods: IHS, Brovey, and PCA. This is followed by quantitative evaluation of the pan-sharpened images using the Correlation Coefficient (CC), Root Mean Square Error (RMSE), Relative Average Spectral Error (RASE), Spectral Angle Mapper (SAM), and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) metrics. The SharpQ tool, developed in the MATLAB computing language, is used to generate the pan-sharpened images and compute the metrics. According to the metrics, the PCA-derived pan-sharpened image is most similar to the multispectral image for RASAT, and the Brovey-derived image is most similar for GÖKTÜRK-2. Finally, the pan-sharpened images are evaluated qualitatively in terms of object availability and completeness for various land covers (such as urban, forest, and flat areas) by a group of operators experienced in remote sensing imagery.
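Two ingredients of the study above are compact enough to sketch: the Brovey transform, which scales each multispectral band by the ratio of the panchromatic band to the band sum, and the SAM metric, which measures the per-pixel spectral angle between two images. A minimal numpy sketch with illustrative, already co-registered arrays (not the SharpQ implementation):

```python
import numpy as np

shape = (4, 4)
# Illustrative co-registered data: three VNIR bands upsampled to the
# panchromatic grid, plus the panchromatic band itself.
ms = np.stack([np.full(shape, v) for v in (0.2, 0.3, 0.5)])  # (bands, H, W)
pan = np.full(shape, 1.2)

# Brovey transform: scale each band by pan / sum(bands).
fused = ms * pan / ms.sum(axis=0)

def sam(a, b):
    """Spectral Angle Mapper (radians), averaged over pixels; inputs (bands, H, W)."""
    dot = (a * b).sum(axis=0)
    denom = np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0)
    return np.arccos(np.clip(dot / denom, -1.0, 1.0)).mean()

print(fused[:, 0, 0], sam(ms, fused))
```

Because Brovey only rescales each spectrum, SAM between the original and fused image is zero here; real imagery yields nonzero angles that quantify spectral distortion.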

  15. Modeling Habitat Suitability of Migratory Birds from Remote Sensing Images Using Convolutional Neural Networks

    PubMed Central

    Su, Jin-He; Piao, Ying-Chao; Luo, Ze; Yan, Bao-Ping

    2018-01-01

Simple Summary: Understanding the spatio-temporal distribution of species habitats would facilitate wildlife resource management and conservation efforts. Existing methods perform poorly due to the limited availability of training samples. More recently, location-aware sensors have been widely used to track animal movements. The aim of the study was to generate suitability maps for bar-headed geese using movement data coupled with environmental parameters, such as remote sensing images and temperature data. Therefore, we modified a deep convolutional neural network to accept multi-scale inputs. The results indicate that the proposed method can identify areas with dense concentrations of the goose species around Qinghai Lake. This approach might also be of interest for other species with different niche factors or for areas where biological survey data are scarce. Abstract: With the application of various data acquisition devices, large amounts of animal movement data can be used to label presence data in remote sensing images and predict species distribution. In this paper, a two-stage classification approach combining movement data and moderate-resolution remote sensing images is proposed. First, we introduce a new density-based clustering method to identify stopovers from migratory birds’ movement data and generate classification samples based on the clustering result. We split the remote sensing images into 16 × 16 patches and label them as positive samples if they overlap with stopovers. Second, a multi-convolution neural network model is proposed to extract features from temperature data and remote sensing images, respectively. A Support Vector Machine (SVM) model then combines the features and predicts the final classification results. The experimental analysis was carried out on public Landsat 5 TM images and a GPS dataset collected from 29 birds over three years.
The results indicated that our proposed method outperformed the existing baseline methods and achieved good performance in habitat suitability prediction. PMID:29701686
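The patch-labeling step described above is simple to sketch: tile the image into 16 × 16 patches and mark a patch positive if it overlaps a stopover. A minimal numpy sketch; the grid size and the stopover mask are illustrative assumptions, not the paper's data:

```python
import numpy as np

patch = 16
h, w = 64, 64
stopover_mask = np.zeros((h, w), bool)   # True where a clustered stopover falls
stopover_mask[20:30, 40:55] = True       # one illustrative stopover footprint

samples = []
for i in range(0, h, patch):
    for j in range(0, w, patch):
        # Positive label if the patch overlaps the stopover at all.
        label = stopover_mask[i:i + patch, j:j + patch].any()
        samples.append(((i, j), int(label)))

positives = [s for s in samples if s[1] == 1]
print(len(samples), len(positives))
```

Each labeled patch would then be paired with its image pixels (and temperature data) as a training sample for the network.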

  16. Three-dimensional imaging and remote sensing imaging; Proceedings of the Meeting, Los Angeles, CA, Jan. 14, 15, 1988

    NASA Astrophysics Data System (ADS)

    Robbins, Woodrow E.

    1988-01-01

    The present conference discusses topics in novel technologies and techniques of three-dimensional imaging, human factors-related issues in three-dimensional display system design, three-dimensional imaging applications, and image processing for remote sensing. Attention is given to a 19-inch parallactiscope, a chromostereoscopic CRT-based display, the 'SpaceGraph' true three-dimensional peripheral, advantages of three-dimensional displays, holographic stereograms generated with a liquid crystal spatial light modulator, algorithms and display techniques for four-dimensional Cartesian graphics, an image processing system for automatic retina diagnosis, the automatic frequency control of a pulsed CO2 laser, and a three-dimensional display of magnetic resonance imaging of the spine.

  17. Modeling Habitat Suitability of Migratory Birds from Remote Sensing Images Using Convolutional Neural Networks.

    PubMed

    Su, Jin-He; Piao, Ying-Chao; Luo, Ze; Yan, Bao-Ping

    2018-04-26

With the application of various data acquisition devices, large amounts of animal movement data can be used to label presence data in remote sensing images and predict species distribution. In this paper, a two-stage classification approach combining movement data and moderate-resolution remote sensing images is proposed. First, we introduce a new density-based clustering method to identify stopovers from migratory birds’ movement data and generate classification samples based on the clustering result. We split the remote sensing images into 16 × 16 patches and label them as positive samples if they overlap with stopovers. Second, a multi-convolution neural network model is proposed to extract features from temperature data and remote sensing images, respectively. A Support Vector Machine (SVM) model then combines the features and predicts the final classification results. The experimental analysis was carried out on public Landsat 5 TM images and a GPS dataset collected from 29 birds over three years. The results indicated that our proposed method outperformed the existing baseline methods and achieved good performance in habitat suitability prediction.

  18. Haptic feedback in OP:Sense - augmented reality in telemanipulated robotic surgery.

    PubMed

    Beyl, T; Nicolai, P; Mönnich, H; Raczkowksy, J; Wörn, H

    2012-01-01

    In current research, haptic feedback in robot assisted interventions plays an important role. However most approaches to haptic feedback only regard the mapping of the current forces at the surgical instrument to the haptic input devices, whereas surgeons demand a combination of medical imaging and telemanipulated robotic setups. In this paper we describe how this feature is integrated in our robotic research platform OP:Sense. The proposed method allows the automatic transfer of segmented imaging data to the haptic renderer and therefore allows enriching the haptic feedback with virtual fixtures based on imaging data. Anatomical structures are extracted from pre-operative generated medical images or virtual walls are defined by the surgeon inside the imaging data. Combining real forces with virtual fixtures can guide the surgeon to the regions of interest as well as helps to prevent the risk of damage to critical structures inside the patient. We believe that the combination of medical imaging and telemanipulation is a crucial step for the next generation of MIRS-systems.
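The virtual-fixture idea above, a forbidden region derived from segmented imaging data whose boundary is rendered as a repulsive force added to the measured instrument forces, can be sketched in a few lines. Everything here is an illustrative stand-in (planar wall, spring model, stiffness value), not the OP:Sense implementation:

```python
# Minimal virtual-fixture sketch: a planar forbidden region below z = 0
# (standing in for an anatomical wall segmented from pre-operative imaging).
# When the tool tip penetrates, a spring-like force pushes it back; outside
# the region only the measured instrument force is fed back.

K_WALL = 500.0  # N/m, illustrative virtual-wall stiffness

def haptic_force(measured_force_z, tool_z):
    penetration = min(0.0, tool_z)          # metres below the wall plane
    fixture_force = -K_WALL * penetration   # repulsive, pushes toward z > 0
    return measured_force_z + fixture_force

print(haptic_force(0.5, 0.01))   # outside the wall: pure force pass-through
print(haptic_force(0.5, -0.002)) # 2 mm penetration adds 1 N of repulsion
```

In a full system the wall would be a mesh from the segmentation and the force direction the local surface normal; the principle of superimposing a geometry-derived force on the measured one is the same.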

  19. Remote Sensing of Landscapes with Spectral Images

    NASA Astrophysics Data System (ADS)

    Adams, John B.; Gillespie, Alan R.

    2006-05-01

Remote Sensing of Landscapes with Spectral Images describes how to process and interpret spectral images using physical models to bridge the gap between the engineering and theoretical sides of remote sensing and the world that we encounter when we venture outdoors. The emphasis is on the practical use of images rather than on theory and mathematical derivations. Examples are drawn from a variety of landscapes and interpretations are tested against the reality seen on the ground. The reader is led through analysis of real images (using figures and explanations); the examples are chosen to illustrate important aspects of the analytic framework. This textbook will form a valuable reference for graduate students and professionals in a variety of disciplines including ecology, forestry, geology, geography, urban planning, archeology and civil engineering. It is supplemented by a web-site hosting digital color versions of figures in the book as well as ancillary images (www.cambridge.org/9780521662214). The book presents a coherent view of practical remote sensing, leading from imaging and field work to the generation of useful thematic maps, and explains how to apply physical models to help interpret spectral images.

  20. Lossless Compression of Classification-Map Data

    NASA Technical Reports Server (NTRS)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than general-purpose image-data-compression algorithms do. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing images of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data (for example, a type of vegetation, a mineral, or a body of water) at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
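The prediction idea exploited above is that class maps are piecewise constant, so a simple predictor is usually right and the residual stream is cheap to entropy-code. A much-simplified stand-in for the algorithm's predictor and context model (previous pixel in raster order, binary hit/miss residual, zeroth-order entropy estimate), with an illustrative toy map:

```python
import numpy as np
from collections import Counter
from math import log2

# A toy classification map: large constant regions, as is typical.
cmap = np.array([[1, 1, 1, 2, 2],
                 [1, 1, 2, 2, 2],
                 [3, 3, 3, 2, 2]])

# Predict each pixel from the previous pixel in raster order
# (a fixed default value for the very first pixel).
flat = cmap.ravel()
pred = np.empty_like(flat)
pred[0] = 0
pred[1:] = flat[:-1]
symbols = (flat != pred).astype(int)  # 1 wherever the predictor missed

# Zeroth-order entropy of the residual stream in bits/pixel
# (lower entropy = more compressible by the entropy coder).
counts = Counter(symbols.tolist())
n = len(symbols)
entropy = -sum(c / n * log2(c / n) for c in counts.values())
print(int(symbols.sum()), round(entropy, 3))
```

The real algorithm conditions its probability model on local context rather than using a single global symbol distribution, which lowers the coded rate further.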

  1. Super-Resolution Reconstruction of Remote Sensing Images Using Multifractal Analysis

    PubMed Central

    Hu, Mao-Gui; Wang, Jin-Feng; Ge, Yong

    2009-01-01

Satellite remote sensing (RS) is an important contributor to Earth observation, providing various kinds of imagery every day, but low spatial resolution remains a critical bottleneck in many applications, restricting higher spatial resolution analysis (e.g., intra-urban). In this study, a multifractal-based super-resolution reconstruction method is proposed to alleviate this problem. The multifractal characteristic is common in nature. The self-similarity or self-affinity present in the image is useful for estimating details at scales larger and smaller than the original. We first look for the presence of multifractal characteristics in the images. Then we estimate the parameters of the information transfer function and the noise of the low-resolution image. Finally, a noise-free, spatial resolution-enhanced image is generated by a fractal coding-based denoising and downscaling method. The empirical case shows that the reconstructed super-resolution image performs well in detail enhancement. This method is not only useful for remote sensing in investigating Earth, but also for other images with multifractal characteristics. PMID:22291530

  2. 3D Imaging Millimeter Wave Circular Synthetic Aperture Radar

    PubMed Central

    Zhang, Renyuan; Cao, Siyang

    2017-01-01

In this paper, a new millimeter wave 3D imaging radar is proposed. The user only needs to move the radar along a circular track, and high-resolution 3D images can be generated. The proposed radar uses its own movement to synthesize a large aperture in both the azimuth and elevation directions, and applies the inverse Radon transform to resolve 3D images. To improve the sensing result, a compressed sensing approach is further investigated. Simulation and experimental results further validate the design. Because only a single transceiver circuit is needed, the result is a light, affordable, high-resolution 3D mmWave imaging radar. PMID:28629140

  3. Optical image encryption using chaos-based compressed sensing and phase-shifting interference in fractional wavelet domain

    NASA Astrophysics Data System (ADS)

    Liu, Qi; Wang, Ying; Wang, Jun; Wang, Qiong-Hua

    2018-02-01

In this paper, a novel optical image encryption system combining compressed sensing with phase-shifting interference in the fractional wavelet domain is proposed. To improve encryption efficiency, the data volume of the original image is reduced by compressed sensing. The compacted image is then encoded through double random phase encoding in the asymmetric fractional wavelet domain. In the encryption system, three pseudo-random sequences generated by a three-dimensional chaos map serve as the measurement matrix of the compressed sensing stage and as the two random-phase masks in the asymmetric fractional wavelet transform. This not only simplifies key storage and transmission, but also enhances the nonlinearity of the cryptosystem, improving its resistance to common attacks. Further, the holograms, obtained by two-step-only quadrature phase-shifting interference, make the cryptosystem immune to noise and occlusion attacks. Compression and encryption are achieved simultaneously in the final result. Numerical experiments have verified the security and validity of the proposed algorithm.
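The chaos-seeded measurement idea can be illustrated in a much-simplified form: a 1-D logistic map (standing in for the paper's 3-D chaos map; the phase masks and holographic steps are omitted) regenerates the entire compressed-sensing measurement matrix from one scalar key, and a basic orthogonal matching pursuit recovers a sparse signal from the measurements. All sizes and key values below are illustrative:

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Deterministic chaotic sequence: the whole matrix regrows from one key x0."""
    seq = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1.0 - x0)
        seq[i] = x0
    return seq

m, n_sig = 16, 32
phi = logistic_sequence(0.37, 3.99, m * n_sig).reshape(m, n_sig) - 0.5
phi /= np.linalg.norm(phi, axis=0)          # unit-norm columns

x = np.zeros(n_sig)
x[[3, 11, 27]] = [1.0, -0.7, 0.4]           # sparse test signal
y = phi @ x                                  # compressed measurements

# Basic orthogonal matching pursuit: grow the support greedily,
# re-fit by least squares, stop when the residual vanishes.
support, residual = [], y.copy()
while np.linalg.norm(residual) > 1e-9 and len(support) < m:
    support.append(int(np.argmax(np.abs(phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
    residual = y - phi[:, support] @ coef

x_rec = np.zeros(n_sig)
x_rec[support] = coef
print(sorted(set(support)), np.linalg.norm(x - x_rec))
```

Only the scalar key (and the map parameters) must be stored or transmitted, which is the "simplified keys" benefit the abstract refers to.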

  4. Sensing Urban Land-Use Patterns by Integrating Google Tensorflow and Scene-Classification Models

    NASA Astrophysics Data System (ADS)

    Yao, Y.; Liang, H.; Li, X.; Zhang, J.; He, J.

    2017-09-01

    With the rapid progress of China's urbanization, research on the automatic detection of land-use patterns in Chinese cities is of substantial importance. Deep learning is an effective method to extract image features. To take advantage of the deep-learning method in detecting urban land-use patterns, we applied a transfer-learning-based remote-sensing image approach to extract and classify features. Using the Google Tensorflow framework, a powerful convolution neural network (CNN) library was created. First, the transferred model was previously trained on ImageNet, one of the largest object-image data sets, to fully develop the model's ability to generate feature vectors of standard remote-sensing land-cover data sets (UC Merced and WHU-SIRI). Then, a random-forest-based classifier was constructed and trained on these generated vectors to classify the actual urban land-use pattern on the scale of traffic analysis zones (TAZs). To avoid the multi-scale effect of remote-sensing imagery, a large random patch (LRP) method was used. The proposed method could efficiently obtain acceptable accuracy (OA = 0.794, Kappa = 0.737) for the study area. In addition, the results show that the proposed method can effectively overcome the multi-scale effect that occurs in urban land-use classification at the irregular land-parcel level. The proposed method can help planners monitor dynamic urban land use and evaluate the impact of urban-planning schemes.

5. High-resolution imaging of cellular processes across textured surfaces using an index-matched elastomer.

    PubMed

    Ravasio, Andrea; Vaishnavi, Sree; Ladoux, Benoit; Viasnoff, Virgile

    2015-03-01

Understanding and controlling how cells interact with the microenvironment has emerged as a prominent field in bioengineering, stem cell research and in the development of the next generation of in vitro assays as well as organs on a chip. Changing the local rheology or the nanotextured surface of substrates has proved an efficient approach to improve cell lineage differentiation, to control cell migration properties and to understand environmental sensing processes. However, introducing substrate surface textures often alters the ability to image cells with high precision, compromising our understanding of molecular mechanisms at play in environmental sensing. In this paper, we demonstrate how nano/microstructured surfaces can be molded from an elastomeric material with a refractive index matched to the cell culture medium. Once made biocompatible, contrast imaging (differential interference contrast, phase contrast) and high-resolution fluorescence imaging of subcellular structures can be implemented through the textured surface using an inverted microscope. Simultaneous traction force measurements by micropost deflection were also performed, demonstrating the potential of our approach to study cell-environment interactions, sensing processes and cellular force generation with unprecedented resolution.

  6. Improving the analysis of biogeochemical patterns associated with internal waves in the strait of Gibraltar using remote sensing images

    NASA Astrophysics Data System (ADS)

    Navarro, Gabriel; Vicent, Jorge; Caballero, Isabel; Gómez-Enri, Jesús; Morris, Edward P.; Sabater, Neus; Macías, Diego; Bolado-Penagos, Marina; Gomiz, Juan Jesús; Bruno, Miguel; Caldeira, Rui; Vázquez, Águeda

    2018-05-01

High Amplitude Internal Waves (HAIWs) are physical processes observed in the Strait of Gibraltar (the narrow channel between the Atlantic Ocean and the Mediterranean Sea). These internal waves are generated over the Camarinal Sill (western side of the strait) during the tidal outflow (toward the Atlantic Ocean) when critical hydraulic conditions are established. HAIWs remain over the sill for up to 4 h until the outflow slackens, and are then released (mostly) towards the Mediterranean Sea. They have previously been observed using Synthetic Aperture Radar (SAR), which captures variations in surface water roughness. In this work, however, we use high-resolution optical remote sensing with the aim of examining the influence of HAIWs on biogeochemical processes. We used hyperspectral images from the Hyperspectral Imager for the Coastal Ocean (HICO) and high spatial resolution (10 m) images from the MultiSpectral Instrument (MSI) onboard the Sentinel-2A satellite. This work represents the first attempt to examine the relation between internal wave generation and the water constituents over the Camarinal Sill using hyperspectral and high spatial resolution remote sensing images. This enhanced spatial and spectral resolution revealed the detailed biogeochemical patterns associated with the internal waves and suggests local enhancements of productivity associated with internal wave trains.

  7. Shear Stress Sensing with Elastic Microfence Structures

    NASA Technical Reports Server (NTRS)

Cisotto, Alexxandra; Palmieri, Frank L.; Saini, Aditya; Lin, Yi; Thurman, Christopher S.; Kim, Jinwook; Kim, Taeyang; Connell, John W.; Zhu, Yong; Gopalarathnam, Ashok

    2015-01-01

In this work, elastic microfences were generated for the purpose of measuring shear forces acting on a wind tunnel model. The microfences were fabricated in a two-part process involving laser ablation patterning to generate a template in a polymer film, followed by soft lithography with a two-part silicone. Incorporation of a fluorescent dye was demonstrated as a method to enhance contrast between the sensing elements and the substrate. Sensing elements consisted of multiple microfences prepared at different orientations to enable determination of both shear force magnitude and directionality. Microfence arrays were integrated into an optical microscope setup with sub-micrometer resolution. Initial experiments were conducted on a flat-plate wind tunnel model. Both image stabilization algorithms and digital image correlation were utilized to determine the amount of fence deflection as a result of airflow. Initial free jet experiments indicated that the microfences could be readily displaced and that this displacement could be recorded through the microscope.
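The deflection measurement at the heart of digital image correlation is recoverable by cross-correlating a reference image against the deflected one and locating the correlation peak. A 1-D, integer-pixel numpy sketch with a synthetic intensity profile (real DIC works on 2-D image subsets with sub-pixel refinement):

```python
import numpy as np

# Synthetic intensity profile across a microfence edge, and the same profile
# shifted by a known deflection (in pixels) as if imaged under airflow.
profile = np.exp(-0.5 * ((np.arange(200) - 80) / 6.0) ** 2)
true_shift = 7
deflected = np.roll(profile, true_shift)

# Cross-correlate the zero-mean signals and take the argmax lag as the shift.
a = profile - profile.mean()
b = deflected - deflected.mean()
corr = np.correlate(b, a, mode="full")
lags = np.arange(-len(a) + 1, len(a))
estimated_shift = int(lags[np.argmax(corr)])
print(estimated_shift)
```

With a calibrated fence stiffness, the recovered deflection maps directly to the local wall shear force.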

  8. Enhance the Quality of Crowdsensing for Fine-Grained Urban Environment Monitoring via Data Correlation

    PubMed Central

    Kang, Xu; Liu, Liang; Ma, Huadong

    2017-01-01

Monitoring the status of urban environments, which provides fundamental information for a city, yields crucial insights into various fields of urban research. Recently, with the popularity of smartphones and vehicles equipped with onboard sensors, a people-centric scheme for city-scale environment monitoring, namely “crowdsensing”, is emerging. This paper proposes a data-correlation-based crowdsensing approach for fine-grained urban environment monitoring. To depict urban status, we generate sensing images via a crowdsensing network, and then enhance the quality of the sensing images via data correlation. Specifically, to achieve higher-quality sensing images, we not only utilize the temporal correlation of mobile sensing nodes but also fuse the sensory data with correlated environment data by introducing a collective tensor decomposition approach. Finally, we conduct a series of numerical simulations and a case study based on a real dataset. The results validate that our approach outperforms the traditional spatial-interpolation-based method. PMID:28054968
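The spatial-interpolation baseline the paper compares against can be sketched as inverse distance weighting over scattered crowdsensed readings: each unsampled grid cell takes a distance-weighted average of nearby samples. A minimal numpy sketch; the sample locations and readings are illustrative:

```python
import numpy as np

# Crowdsensed samples: locations and readings (e.g. a pollutant level
# reported by phones or vehicle sensors at four street corners).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([10.0, 30.0, 20.0, 40.0])

def idw(query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of the reading at `query`."""
    d = np.linalg.norm(pts - query, axis=1)
    if d.min() < eps:                 # exactly on a sample: return its reading
        return float(vals[np.argmin(d)])
    w = 1.0 / d**power
    return float((w * vals).sum() / w.sum())

print(idw(np.array([0.5, 0.5])))  # centre of the square: mean of the readings
print(idw(np.array([0.0, 0.0])))  # at a sample point: its own reading
```

Such interpolation uses only spatial proximity; the paper's point is that exploiting temporal and cross-source correlations (via tensor decomposition) beats it.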

  9. Quality evaluation of pansharpened hyperspectral images generated using multispectral images

    NASA Astrophysics Data System (ADS)

    Matsuoka, Masayuki; Yoshioka, Hiroki

    2012-11-01

    Hyperspectral remote sensing can provide a smooth spectral curve of a target by using a set of higher-spectral-resolution detectors. The spatial resolution of hyperspectral images, however, is generally much lower than that of multispectral images due to the lower energy of incident radiation. Pansharpening is an image-fusion technique that generates higher-spatial-resolution multispectral images by combining lower resolution multispectral images with higher resolution panchromatic images. In this study, higher resolution hyperspectral images were generated by pansharpening simulated lower resolution hyperspectral data with higher resolution multispectral data. The spectral and spatial qualities of the pansharpened images were then assessed in relation to the spectral bands of the multispectral images. Airborne hyperspectral data from AVIRIS were used in this study and pansharpened using six methods. Quantitative evaluations of the pansharpened images were performed using two frequently used indices, ERGAS and the Q index.
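
    The two indices named above are standard and can be computed directly; the sketch below assumes the usual definitions (ERGAS following Wald, Q following Wang and Bovik) rather than the authors' exact implementation, and the test images are synthetic:

```python
import numpy as np

def ergas(reference, fused, ratio):
    """ERGAS (lower is better); ratio = high-res / low-res pixel size, e.g. 0.25."""
    acc = 0.0
    for ref_b, fus_b in zip(reference, fused):
        rmse = np.sqrt(np.mean((ref_b - fus_b) ** 2))
        acc += (rmse / np.mean(ref_b)) ** 2
    return 100.0 * ratio * np.sqrt(acc / len(reference))

def q_index(x, y):
    """Universal image quality index (Wang & Bovik); 1.0 for identical images."""
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(0)
ref = rng.uniform(50.0, 200.0, size=(4, 32, 32))     # 4-band reference image
fused = ref + rng.normal(0.0, 2.0, size=ref.shape)   # mildly degraded result
e = ergas(ref, fused, ratio=0.25)
q = q_index(ref[0], fused[0])
```

    A perfect fusion gives ERGAS of 0 and Q of 1; mild distortion moves both away from those ideals.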

  10. The Characterization of a DIRSIG Simulation Environment to Support the Inter-Calibration of Spaceborne Sensors

    NASA Technical Reports Server (NTRS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-01-01

    Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first principle-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.
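
    The Ross-Li capability mentioned above refers to the kernel-driven BRDF family used by the MODIS product, in which reflectance is a linear combination R = f_iso + f_vol*Kvol + f_geo*Kgeo. A minimal sketch of the RossThick volumetric kernel follows (the LiSparse geometric kernel is omitted for brevity, and the coefficients are illustrative, not MODIS retrievals):

```python
import numpy as np

def rossthick_kernel(theta_s, theta_v, phi):
    """RossThick volumetric scattering kernel (all angles in radians);
    theta_s/theta_v are solar/view zeniths, phi is relative azimuth."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))   # scattering phase angle
    return (((np.pi / 2.0 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v))) - np.pi / 4.0

def kernel_brdf(f_iso, f_vol, k_vol, f_geo=0.0, k_geo=0.0):
    """Linear kernel-driven reflectance: R = f_iso + f_vol*Kvol + f_geo*Kgeo."""
    return f_iso + f_vol * k_vol + f_geo * k_geo

# Nadir view, sun at 30 degrees zenith, illustrative kernel weights
k = rossthick_kernel(np.deg2rad(30.0), 0.0, 0.0)
r = kernel_brdf(f_iso=0.3, f_vol=0.05, k_vol=k)
```

    The kernel is constructed to vanish when both sun and view are at nadir, so the isotropic weight alone sets the nadir hot-spot-free reflectance.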

  11. The characterization of a DIRSIG simulation environment to support the inter-calibration of spaceborne sensors

    NASA Astrophysics Data System (ADS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-09-01

    Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first principle-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.

  12. Image processing methods used to simulate flight over remotely sensed data

    NASA Technical Reports Server (NTRS)

    Mortensen, H. B.; Hussey, K. J.; Mortensen, R. A.

    1988-01-01

    It has been demonstrated that image processing techniques can provide an effective means of simulating flight over remotely sensed data (Hussey et al. 1986). This paper explains the methods used to simulate and animate three-dimensional surfaces from two-dimensional imagery. The preprocessing techniques used on the input data, the selection of the animation sequence, the generation of the animation frames, and the recording of the animation are covered. The software used for all steps is discussed.

  13. Multispectral Remote Sensing of the Earth and Environment Using KHawk Unmanned Aircraft Systems

    NASA Astrophysics Data System (ADS)

    Gowravaram, Saket

    This thesis focuses on the development and testing of the KHawk multispectral remote sensing system for environmental and agricultural applications. The KHawk Unmanned Aircraft System (UAS), a small and low-cost remote sensing platform, is used as the test bed for aerial video acquisition. An efficient image geotagging and photogrammetric procedure for aerial map generation is described, followed by a comprehensive error analysis of the generated maps. The developed procedure is also used for generation of multispectral aerial maps including red, near infrared (NIR), and colored infrared (CIR) maps. A robust Normalized Difference Vegetation Index (NDVI) calibration procedure is proposed and validated by ground tests and KHawk flight tests. Finally, the generated aerial maps and their corresponding Digital Elevation Models (DEMs) are used for typical application scenarios including prescribed fire monitoring, initial fire line estimation, and tree health monitoring.
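
    The NDVI underlying the calibration work above has a simple closed form, NDVI = (NIR - Red) / (NIR + Red); a minimal sketch with illustrative reflectance values (not data from the thesis):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against divide-by-zero

# Healthy vegetation reflects strongly in NIR and absorbs red light
veg = ndvi(nir=[0.50], red=[0.08])    # expect a high index value
soil = ndvi(nir=[0.30], red=[0.25])   # expect a value near zero
```

    The index is bounded in [-1, 1], which is why a per-band radiometric calibration, as proposed in the thesis, matters before thresholding it.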

  14. Next-generation pushbroom filter radiometers for remote sensing

    NASA Astrophysics Data System (ADS)

    Tarde, Richard W.; Dittman, Michael G.; Kvaran, Geir E.

    2012-09-01

    Individual focal plane size, yield, and quality continue to improve, as does the technology required to combine these into large tiled formats. As a result, next-generation pushbroom imagers are replacing traditional scanning technologies in remote sensing applications. The pushbroom architecture has inherently better radiometric sensitivity than previous-generation scanning technologies, along with significantly reduced payload mass, power, and volume. However, the architecture creates challenges in achieving the required radiometric accuracy. Achieving good radiometric accuracy, including image spectral and spatial uniformity, requires creative optical design, high-quality focal planes and filters, careful consideration of on-board calibration sources, and state-of-the-art ground test facilities. Ball Aerospace built the next-generation Operational Land Imager (OLI) payload for the Landsat Data Continuity Mission (LDCM). Scheduled to launch in 2013, OLI provides imagery consistent with the historical Landsat spectral, spatial, radiometric, and geometric data record and completes the generational technology upgrade from the Enhanced Thematic Mapper Plus (ETM+) whiskbroom technology to modern pushbroom technology afforded by advanced focal planes. We explain how Ball's capabilities enabled production of the innovative next-generation OLI pushbroom filter radiometer, which meets challenging radiometric accuracy and calibration requirements. OLI will extend the multi-decadal land surface observation dataset dating back to the 1972 launch of ERTS-1 (Landsat 1).

  15. Multispectral imaging for biometrics

    NASA Astrophysics Data System (ADS)

    Rowe, Robert K.; Corcoran, Stephen P.; Nixon, Kristin A.; Ostrom, Robert E.

    2005-03-01

    Automated identification systems based on fingerprint images are subject to two significant types of error: an incorrect decision about the identity of a person due to a poor quality fingerprint image and incorrectly accepting a fingerprint image generated from an artificial sample or altered finger. This paper discusses the use of multispectral sensing as a means to collect additional information about a finger that significantly augments the information collected using a conventional fingerprint imager based on total internal reflectance. In the context of this paper, "multispectral sensing" is used broadly to denote a collection of images taken under different polarization conditions and illumination configurations, as well as using multiple wavelengths. Background information is provided on conventional fingerprint imaging. A multispectral imager for fingerprint imaging is then described and a means to combine the two imaging systems into a single unit is discussed. Results from an early-stage prototype of such a system are shown.

  16. Laser focus compensating sensing and imaging device

    DOEpatents

    Vann, Charles S.

    1993-01-01

    A laser focus compensating sensing and imaging device permits different-frequency laser beams emanating from the same source point to be focused at a single focal point. In particular, it allows the focusing of laser beams originating from the same laser device but having differing intensities, so that a low intensity beam will not convert to a higher frequency when passing through a conversion crystal associated with the laser generating device. The laser focus compensating sensing and imaging device uses a Cassegrain system to fold the lower frequency, low intensity beam back upon itself so that it will focus at the same focal point as a high intensity beam. An angular tilt compensating lens is mounted about the secondary mirror of the Cassegrain system to assist in alignment. In addition, cameras or CCDs are mounted with the primary mirror to sense the focused image. A convex lens is positioned coaxial with the Cassegrain system on the side of the primary mirror distal of the secondary for use in aligning a target with the laser beam. A first alternate embodiment includes a Cassegrain system using a series of shutters and an internally mounted dichroic mirror. A second alternate embodiment uses two laser focus compensating sensing and imaging devices for aligning a moving tool with a work piece.

  17. Laser focus compensating sensing and imaging device

    DOEpatents

    Vann, C.S.

    1993-08-31

    A laser focus compensating sensing and imaging device permits different-frequency laser beams emanating from the same source point to be focused at a single focal point. In particular, it allows the focusing of laser beams originating from the same laser device but having differing intensities, so that a low intensity beam will not convert to a higher frequency when passing through a conversion crystal associated with the laser generating device. The laser focus compensating sensing and imaging device uses a Cassegrain system to fold the lower frequency, low intensity beam back upon itself so that it will focus at the same focal point as a high intensity beam. An angular tilt compensating lens is mounted about the secondary mirror of the Cassegrain system to assist in alignment. In addition, cameras or CCDs are mounted with the primary mirror to sense the focused image. A convex lens is positioned coaxial with the Cassegrain system on the side of the primary mirror distal of the secondary for use in aligning a target with the laser beam. A first alternate embodiment includes a Cassegrain system using a series of shutters and an internally mounted dichroic mirror. A second alternate embodiment uses two laser focus compensating sensing and imaging devices for aligning a moving tool with a work piece.

  18. High-speed imaging using compressed sensing and wavelength-dependent scattering (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shin, Jaewook; Bosworth, Bryan T.; Foster, Mark A.

    2017-02-01

    The process of multiple scattering has inherent characteristics that are attractive for high-speed imaging with high spatial resolution and a wide field-of-view. A coherent source passing through a multiple-scattering medium naturally generates speckle patterns with diffraction-limited features over an arbitrarily large field-of-view. In addition, the process of multiple scattering is deterministic, allowing a given speckle pattern to be reliably reproduced with identical illumination conditions. Here, by exploiting wavelength-dependent multiple scattering and compressed sensing, we develop a high-speed 2D time-stretch microscope. Highly chirped pulses from a 90-MHz mode-locked laser are sent through a 2D grating and a ground-glass diffuser to produce 2D speckle patterns that rapidly evolve with the instantaneous frequency of the chirped pulse. To image a scene, we first characterize the high-speed evolution of the generated speckle patterns. Subsequently we project the patterns onto the microscopic region of interest and collect the total light from the scene using a single high-speed photodetector. Thus the wavelength-dependent speckle patterns serve as high-speed pseudorandom structured illumination of the scene. An image sequence is then recovered using the time-dependent signal received by the photodetector, the known speckle pattern evolution, and compressed sensing algorithms. Notably, the use of compressed sensing allows for reconstruction of a time-dependent scene using a highly sub-Nyquist number of measurements, which both increases the speed of the imager and reduces the amount of data that must be collected and stored. We will discuss our experimental demonstration of this approach and the theoretical limits on imaging speed.
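
    The recovery step described above (known patterns, a single detector, sub-Nyquist measurements) is in essence a single-pixel compressed-sensing problem. The toy 1D sketch below uses the ISTA algorithm with Gaussian patterns standing in for the calibrated speckle patterns; all sizes and parameters are illustrative, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 96, 6                   # scene size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# Each row plays the role of one calibrated pseudorandom illumination pattern
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                         # bucket-detector readings, one per pattern

def ista(A, y, lam=0.01, iters=1000):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - A.T @ (A @ x - y) / L                          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

    With far fewer measurements than unknowns (96 vs. 256), the sparse scene is still recovered to small relative error, which is the property that lets the imager run sub-Nyquist.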

  19. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    NASA Astrophysics Data System (ADS)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  20. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS-CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS-CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  1. Compressive Sensing for Background Subtraction

    DTIC Science & Technology

    2009-12-20

    i) reconstructing an image using only a single optical photodiode (infrared, hyperspectral, etc.) along with a digital micromirror device (DMD)... curves, we use the full images, run the background subtraction algorithm proposed in [19], and obtain baseline background subtracted images. We then... the images to generate the ROC curve. 5.5 Silhouettes vs. Difference Images We have used a multi-camera setup for a 3D voxel reconstruction using the

  2. Improving resolution of MR images with an adversarial network incorporating images with different contrast.

    PubMed

    Kim, Ki Hwan; Do, Won-Joon; Park, Sung-Hong

    2018-05-04

    The routine MRI scan protocol consists of multiple pulse sequences that acquire images of varying contrast. Since high frequency contents such as edges are not significantly affected by image contrast, down-sampled images in one contrast may be improved by high resolution (HR) images acquired in another contrast, reducing the total scan time. In this study, we propose a new deep learning framework that uses HR MR images in one contrast to generate HR MR images from highly down-sampled MR images in another contrast. The proposed convolutional neural network (CNN) framework consists of two CNNs: (a) a reconstruction CNN for generating HR images from the down-sampled images using HR images acquired with a different MRI sequence and (b) a discriminator CNN for improving the perceptual quality of the generated HR images. The proposed method was evaluated using a public brain tumor database and in vivo datasets. The performance of the proposed method was assessed in tumor and no-tumor cases separately, with perceptual image quality being judged by a radiologist. To overcome the challenge of training the network with a small number of available in vivo datasets, the network was pretrained using the public database and then fine-tuned using the small number of in vivo datasets. The performance of the proposed method was also compared to that of several compressed sensing (CS) algorithms. Incorporating HR images of another contrast improved the quantitative assessments of the generated HR image in reference to ground truth. Also, incorporating a discriminator CNN yielded perceptually higher image quality. These results were verified in regions of normal tissue as well as tumors for various MRI sequences from pseudo k-space data generated from the public database. The combination of pretraining with the public database and fine-tuning with the small number of real k-space datasets enhanced the performance of CNNs in in vivo application compared to training CNNs from scratch. The proposed method outperformed the compressed sensing methods. The proposed method can be a good strategy for accelerating routine MRI scanning. © 2018 American Association of Physicists in Medicine.

  3. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 5 : aerial bridge deck imaging data collection and software revision.

    DOT National Transportation Integrated Search

    2012-02-01

    For rapid deployment of bridge scan missions, sub-inch aerial imaging using small format aerial photography is suggested. Under-belly photography is used to generate high resolution aerial images that can be geo-referenced and used for quantifyin...

  4. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation, and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition are burdensome and time-consuming processes, so the analysis of their computational complexity is of great interest for determining the most suitable implementation platform. To this end, and since target identification must be completed in real time, the computational burden of each process, image generation and database comparison, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
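
    The first stage mentioned, high resolution range profile generation, amounts to an inverse Fourier transform of the frequency-domain radar returns. A toy sketch with two simulated point scatterers follows; the waveform parameters are illustrative, not taken from the paper:

```python
import numpy as np

c = 3.0e8                              # speed of light [m/s]
n = 128                                # number of frequency steps
dr = 1.0                               # desired range-bin spacing [m]
df = c / (2.0 * n * dr)                # frequency step giving that spacing
freqs = 9.0e9 + df * np.arange(n)      # stepped-frequency sweep (illustrative)

# Simulated returns from two point scatterers: (range [m], amplitude)
scatterers = [(20.0, 1.0), (35.0, 0.6)]
echo = np.zeros(n, dtype=complex)
for r, a in scatterers:
    echo += a * np.exp(-1j * 4.0 * np.pi * freqs * r / c)  # two-way phase ramp

hrrp = np.abs(np.fft.ifft(echo))       # high resolution range profile
peaks = np.argsort(hrrp)[-2:]          # the two strongest range bins
```

    Each scatterer contributes a linear phase ramp across frequency, so the inverse FFT concentrates its energy into the range bin matching its distance.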

  5. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation, and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition are burdensome and time-consuming processes, so the analysis of their computational complexity is of great interest for determining the most suitable implementation platform. To this end, and since target identification must be completed in real time, the computational burden of each process, image generation and database comparison, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  6. Symmetric and asymmetric hybrid cryptosystem based on compressive sensing and computer generated holography

    NASA Astrophysics Data System (ADS)

    Ma, Lihong; Jin, Weimin

    2018-01-01

    A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer generated holography. In this method there are six encryption keys, among which the two decryption phase masks are different from the two random phase masks used in the encryption process. Therefore, the encryption system has features of both symmetric and asymmetric cryptography. On the other hand, because computer generated holography can flexibly digitalize the encrypted information, compressive sensing can significantly reduce the data volume, and, moreover, the final encrypted image is a real-valued function as a result of phase truncation, the method favors the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts security and has high robustness against noise and occlusion attacks.
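
    The proposed cryptosystem layers compressive sensing, phase truncation, and computer generated holography on top of a double-random-phase core. The sketch below shows only that classical double random phase encoding step (the CS and phase-truncation stages are omitted), with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.uniform(0.0, 1.0, (32, 32))   # non-negative plaintext image

M1 = np.exp(2j * np.pi * rng.uniform(size=img.shape))  # input-plane phase mask
M2 = np.exp(2j * np.pi * rng.uniform(size=img.shape))  # Fourier-plane phase mask

# Encrypt: mask, Fourier transform, second mask, inverse transform
cipher = np.fft.ifft2(np.fft.fft2(img * M1) * M2)

# Decrypt with the conjugate Fourier-plane key; the input mask drops out
# under the magnitude because |M1| = 1 and the image is non-negative
decrypted = np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(M2)))
max_err = np.max(np.abs(decrypted - img))
```

    Without the Fourier-plane key the ciphertext is stationary white noise, which is the property the hybrid scheme inherits and then hardens with its asymmetric decryption masks.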

  7. Visible-Infrared Hyperspectral Image Projector

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew

    2013-01-01

    The VisIR HIP generates spatially-spectrally complex scenes. The generated scenes simulate real-world targets viewed by various remote sensing instruments. The VisIR HIP consists of two subsystems: a spectral engine and a spatial engine. The spectral engine generates spectrally complex uniform illumination that spans the wavelength range between 380 nm and 1,600 nm. The spatial engine generates two-dimensional gray-scale scenes. When combined, the two engines are capable of producing two-dimensional scenes with a unique spectrum at each pixel. The VisIR HIP can be used to calibrate any spectrally sensitive remote-sensing instrument. Tests were conducted on the Wide-field Imaging Interferometer Testbed at NASA's Goddard Space Flight Center. The device is a variation of the calibrated hyperspectral image projector developed by the National Institute of Standards and Technology in Gaithersburg, MD. It uses Gooch & Housego Visible and Infrared OL490 Agile Light Sources to generate arbitrary spectra. The two light sources are coupled to a digital light processing (DLP) digital micromirror device (DMD) that serves as the spatial engine. Scenes are displayed on the DMD synchronously with the desired spectrum. Scene/spectrum combinations are displayed in rapid succession, over time intervals that are short compared to the integration time of the system under test.

  8. Increasing the UAV data value by an OBIA methodology

    NASA Astrophysics Data System (ADS)

    García-Pedrero, Angel; Lillo-Saavedra, Mario; Rodriguez-Esparragon, Dionisio; Rodriguez-Gonzalez, Alejandro; Gonzalo-Martin, Consuelo

    2017-10-01

    Recently, there has been a noteworthy increase in the use of images registered by unmanned aerial vehicles (UAV) in different remote sensing applications. Sensors on board UAVs have lower operational costs and complexity than other remote sensing platforms, quicker turnaround times, and higher spatial resolution. Concerning this last aspect, particular attention has to be paid to the limitations of classical pixel-based algorithms when they are applied to high resolution images. The objective of this study is to investigate the capability of an OBIA methodology developed for the automatic generation of a digital terrain model of an agricultural area from a Digital Elevation Model (DEM) and multispectral images registered by a Parrot Sequoia multispectral sensor on board an eBee SQ agricultural drone. The proposed methodology uses a superpixel approach to obtain context and elevation information, which is used to merge superpixels while eliminating objects such as trees, in order to generate a Digital Terrain Model (DTM) of the analyzed area. The obtained results show the potential of the approach, in terms of accuracy, when it is compared with a DTM generated by manually eliminating objects.

  9. Contribution of non-negative matrix factorization to the classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

    Remote sensing has become an indispensable tool for better managing our environment, generally by producing land cover maps using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most common technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to perform the recognition of the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new images (which are at least partly separated images) from HRV SPOT images of the Oran area (Algeria). These images are then used as inputs to a supervised classifier integrating textural information. The classifications of these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of the initial (i.e. non-separated) images. These results show the contribution of NMF as an attractive pre-processing step for the classification of multispectral remote sensing imagery.
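
    A minimal version of the NMF pre-processing idea, factorizing a pixels-by-bands matrix into non-negative "source" spectra and abundance images, can be sketched with the classic Lee-Seung multiplicative updates; the data are synthetic and the sparse-coding term used by the authors is omitted:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~= W @ H (Frobenius norm),
    keeping both factors non-negative throughout."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.1, 1.0, (V.shape[0], rank))
    H = rng.uniform(0.1, 1.0, (rank, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic pixels-by-bands matrix: every pixel mixes two non-negative spectra
rng = np.random.default_rng(3)
endmembers = np.array([[0.8, 0.1, 0.2, 0.7],    # vegetation-like spectrum
                       [0.2, 0.6, 0.7, 0.1]])   # soil-like spectrum
abundances = rng.uniform(0.0, 1.0, (100, 2))
V = abundances @ endmembers                      # 100 pixels x 4 bands
W, H = nmf(V, rank=2)
recon_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    The columns of W are per-pixel abundances, i.e. the "at least partly separated images" fed to the classifier, while the rows of H estimate the pure spectra.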

  10. ROSE: the road simulation environment

    NASA Astrophysics Data System (ADS)

    Liatsis, Panos; Mitronikas, Panogiotis

    1997-05-01

    Evaluation of advanced sensing systems for autonomous vehicle navigation (AVN) is currently carried out off-line with prerecorded image sequences taken by physically attaching the sensors to the ego-vehicle. The data collection process is cumbersome and costly, as well as highly restricted to specific road environments and weather conditions. This work proposes the use of scientific animation in modeling and representation of real-world traffic scenes, and aims to produce an efficient, reliable, and cost-effective concept evaluation suite for AVN sensing algorithms. ROSE is organized in a modular fashion consisting of the route generator, the journey generator, the sequence description generator, and the renderer. The application was developed in MATLAB, and POV-Ray was selected as the rendering module. User-friendly graphical user interfaces have been designed to allow easy selection of animation parameters and monitoring of the generation process. The system, in its current form, allows the generation of various traffic scenarios, providing for an adequate number of static/dynamic objects, road types, and environmental conditions. Initial tests on the robustness of various image processing algorithms to varying lighting and weather conditions have already been carried out.

  11. A review of ultra-short pulse lasers for military remote sensing and rangefinding

    NASA Astrophysics Data System (ADS)

    Lamb, Robert A.

    2009-09-01

    Advances in ultra-short pulse laser technology have resulted in commercially available laser systems capable of generating high peak powers >1GW in tabletop systems. This opens the prospect of generating very wide spectral emissions with a combination of non-linear optical effects in photonic crystal fibres to produce supercontinua in systems that are readily accessible to military applications. However, military remote sensing rarely requires bandwidths spanning two octaves, and it is clear that efficient systems require controlled spectral emission in relevant bands. Furthermore, the limited spectral responsivity of focal plane arrays may impose further restrictions on the usable spectrum. A recent innovation which temporally encodes a spectrum using group velocity dispersion allows detection with a photodiode, opening the prospect for high speed hyperspectral sensing and imaging. At the opposite end of the power spectrum, ultra-low power remote sensing using time-correlated single photon counting (SPC) has reduced the laser power requirement and demonstrated remote sensing over 5km during daylight with repetition rates of ~10MHz with ps pulses. Recent research has addressed uncorrelated SPC and waveform transmission to increase data rates for absolute rangefinding whilst avoiding range aliasing. This achievement opens the prospect of combining SPC with high repetition rate temporal encoding of supercontinua to realise practical hyperspectral remote sensing lidar. The talk will present an overview of these technologies and a concept which combines them into a single system for high-speed hyperspectral imaging and remote sensing.

  12. High-density CMOS Microelectrode Array System for Impedance Spectroscopy and Imaging of Biological Cells.

    PubMed

    Vijay, Viswam; Raziyeh, Bounik; Amir, Shadmani; Jelena, Dragas; Alicia, Boos Julia; Axel, Birchler; Jan, Müller; Yihui, Chen; Andreas, Hierlemann

    2017-01-26

    A monolithic measurement platform was implemented to enable label-free in-vitro electrical impedance spectroscopy measurements of cells on a multi-functional CMOS microelectrode array. The array includes 59,760 platinum microelectrodes, densely packed within a 4.5 mm × 2.5 mm sensing region at a pitch of 13.5 μm. The 32 on-chip lock-in amplifiers can be used to measure the impedance of arbitrarily chosen electrodes on the array by applying a sinusoidal voltage, generated by an on-chip waveform generator with a frequency range from 1 Hz to 1 MHz, and measuring the respective current. Proof-of-concept measurements of impedance sensing and imaging are shown in this paper. Correlations between cell detection through optical microscopy and electrochemical impedance scanning were established.
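    At its core, each measurement reduces to phasor arithmetic: the chip applies a known sinusoidal voltage and the lock-in amplifier demodulates the electrode current into in-phase and quadrature components. The sketch below (a hypothetical `impedance_from_lockin` helper, not code from the paper) shows how impedance magnitude and phase follow from Ohm's law:

```python
import cmath
import math

def impedance_from_lockin(v_amp, i_inphase, i_quadrature):
    """Complex impedance from a lock-in measurement: a sinusoidal voltage of
    amplitude v_amp is applied and the resulting electrode current is
    demodulated into in-phase and quadrature components."""
    current = complex(i_inphase, i_quadrature)   # current phasor (A)
    z = v_amp / current                          # Ohm's law with phasors
    return abs(z), math.degrees(cmath.phase(z))

# 10 mV stimulus; 1 uA in-phase and -1 uA quadrature current (illustrative)
magnitude, phase_deg = impedance_from_lockin(0.01, 1e-6, -1e-6)
```

Sweeping the stimulus frequency from 1 Hz to 1 MHz and repeating this computation per electrode yields the impedance spectrum the paper describes.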

  13. Modeling Coniferous Canopy Structure over Extensive Areas for Ray Tracing Simulations: Scaling from the Leaf to the Stand Level

    NASA Astrophysics Data System (ADS)

    van Aardt, J. A.; van Leeuwen, M.; Kelbe, D.; Kampe, T.; Krause, K.

    2015-12-01

    Remote sensing is widely accepted as a useful technology for characterizing the Earth's surface in an objective, reproducible, and economically feasible manner. To date, the calibration and validation of remote sensing data sets and biophysical parameter estimates remain challenging due to the requirement to sample large areas for ground-truth data collection, and restrictions on sampling these data within narrow temporal windows centered around flight campaigns or satellite overpasses. The computer graphics community has taken significant steps to ameliorate some of these challenges by providing an ability to generate synthetic images based on geometrically and optically realistic representations of complex targets and imaging instruments. These synthetic data can be used for conceptual and diagnostic tests of instrumentation prior to sensor deployment, or to examine linkages between biophysical characteristics of the Earth's surface and at-sensor radiance. In the last two decades, the use of image generation techniques for remote sensing of the vegetated environment has evolved from the simulation of simple, homogeneous, hypothetical vegetation canopies to advanced scenes and renderings with a high degree of photo-realism. Reported virtual scenes comprise up to 100M surface facets; however, despite the tight coupling between hardware and software development, the full potential of image generation techniques for forestry applications remains to be fully explored. In this presentation, we examine the potential computer graphics techniques have for the analysis of forest structure-function relationships and demonstrate techniques that provide for the modeling of extremely high-faceted virtual forest canopies, comprising billions of scene elements. We demonstrate the use of ray tracing simulations for the analysis of gap size distributions and the characterization of foliage clumping within spatial footprints that allow for a tight match between characteristics derived from these virtual scenes and typical pixel resolutions of remote sensing imagery.

  14. Innovative hyperspectral imaging-based techniques for quality evaluation of fruits and vegetables: a review

    USDA-ARS?s Scientific Manuscript database

    New, non-destructive sensing techniques for fast and more effective quality assessment of fruits and vegetables are needed to meet the ever-increasing consumer demand for better, more consistent and safer food products. Over the past 15 years, hyperspectral imaging has emerged as a new generation of...

  15. Enhancing Spatial Resolution of Remotely Sensed Imagery Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Beck, J. M.; Bridges, S.; Collins, C.; Rushing, J.; Graves, S. J.

    2017-12-01

    Researchers at the Information Technology and Systems Center at the University of Alabama in Huntsville are using Deep Learning with Convolutional Neural Networks (CNNs) to develop a method for enhancing the spatial resolutions of moderate resolution (10-60m) multispectral satellite imagery. This enhancement will effectively match the resolutions of imagery from multiple sensors to provide increased global temporal-spatial coverage for a variety of Earth science products. Our research is centered on using Deep Learning for automatically generating transformations for increasing the spatial resolution of remotely sensed images with different spatial, spectral, and temporal resolutions. One of the most important steps in using images from multiple sensors is to transform the different image layers into the same spatial resolution, preferably the highest spatial resolution, without compromising the spectral information. Recent advances in Deep Learning have shown that CNNs can be used to effectively and efficiently upscale or enhance the spatial resolution of multispectral images with the use of an auxiliary data source such as a high spatial resolution panchromatic image. In contrast, we are using both the spatial and spectral details inherent in low spatial resolution multispectral images for image enhancement without the use of a panchromatic image. This presentation will discuss how this technology will benefit many Earth Science applications that use remotely sensed images with moderate spatial resolutions.
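    As a rough illustration of the idea, the sketch below upsamples a single band and applies a smoothing convolution as a stand-in for one learned CNN layer. The function names and the 3×3 averaging kernel are illustrative assumptions; a trained network would learn its filter weights from the multispectral data itself rather than use a fixed average:

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbour 2x upsampling -- the cheap first stage."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def smooth(img, kernel):
    """'Same'-padded 2-D convolution standing in for one learned CNN layer."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i + kh, j:j + kw] * kernel).sum()
    return out

band = np.array([[0., 4.], [8., 12.]])     # toy 2x2 multispectral band
kernel = np.full((3, 3), 1 / 9)            # a real CNN would use trained weights
enhanced = smooth(upsample2x(band), kernel)
```

A real network would stack many such convolutions with nonlinearities and be trained against high-resolution targets; the point here is only the upsample-then-filter structure.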

  16. Focal-Plane Sensing-Processing: A Power-Efficient Approach for the Implementation of Privacy-Aware Networked Visual Sensors

    PubMed Central

    Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel

    2014-01-01

    The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered with by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849
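    The programmable pixelation primitive can be mimicked in software: every block inside a protected region is replaced by its block mean. The hypothetical `pixelate` helper below is only a sketch of that behavior; the chip performs the equivalent averaging in the mixed-signal domain directly at the focal plane:

```python
import numpy as np

def pixelate(img, top, left, h, w, block=4):
    """Replace each block x block tile inside the region (top, left, h, w)
    by its mean -- a software analogue of focal-plane pixelation."""
    out = img.astype(float).copy()
    for i in range(top, top + h, block):
        for j in range(left, left + w, block):
            tile = out[i:i + block, j:j + block]
            tile[:] = tile.mean()                # in-place: view into out
    return out

frame = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 frame
masked = pixelate(frame, 0, 0, 8, 8, block=4)      # pixelate the whole frame
```

Because faces or screens are destroyed before the image ever leaves the sensor, downstream stages only see the coarse blocks.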

  17. Focal-plane sensing-processing: a power-efficient approach for the implementation of privacy-aware networked visual sensors.

    PubMed

    Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel

    2014-08-19

    The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered with by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects.

  18. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction.

    PubMed

    Yang, Guang; Yu, Simiao; Dong, Hao; Slabaugh, Greg; Dragotti, Pier Luigi; Ye, Xujiong; Liu, Fangde; Arridge, Simon; Keegan, Jennifer; Guo, Yike; Firmin, David; Keegan, Jennifer; Slabaugh, Greg; Arridge, Simon; Ye, Xujiong; Guo, Yike; Yu, Simiao; Liu, Fangde; Firmin, David; Dragotti, Pier Luigi; Yang, Guang; Dong, Hao

    2018-06-01

    Compressed sensing magnetic resonance imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. This can not only reduce the scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Different from parallel imaging-based fast MRI, which utilizes multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist-Shannon sampling barrier to reconstruct MRI images with much less required raw data. This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets. In particular, a novel conditional Generative Adversarial Networks-based model (DAGAN)-based model is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilize our U-Net based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency-domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared with these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.
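    A minimal sketch of the loss coupling described above (an image-domain content loss, a frequency-domain loss via the FFT, and an adversarial term) might look like the following. The `composite_loss` name and the weights are illustrative assumptions, not the paper's values, and the adversarial term is passed in as a scalar rather than computed by a discriminator network:

```python
import numpy as np

def composite_loss(recon, target, adv_term, w_img=1.0, w_freq=1.0, w_adv=0.01):
    """Weighted sum of image-domain MSE, frequency-domain MSE (via FFT)
    and an adversarial term, mirroring the coupling described above.
    Weights are illustrative, not the paper's values."""
    img_loss = np.mean((recon - target) ** 2)
    freq_loss = np.mean(np.abs(np.fft.fft2(recon) - np.fft.fft2(target)) ** 2)
    return w_img * img_loss + w_freq * freq_loss + w_adv * adv_term

perfect = np.ones((4, 4))
zero_loss = composite_loss(perfect, perfect, adv_term=0.0)   # exact match -> 0
```

Enforcing similarity in both domains is what suppresses the structured aliasing artefacts that undersampled k-space acquisition introduces.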

  19. Using hyperspectral remote sensing for land cover classification

    NASA Astrophysics Data System (ADS)

    Zhang, Wendy W.; Sriharan, Shobha

    2005-01-01

    This project used a hyperspectral data set to classify land cover using remote sensing techniques. Many different earth-sensing satellites, with diverse sensors mounted on sophisticated platforms, are currently in earth orbit. These sensors are designed to cover a wide range of the electromagnetic spectrum and are generating enormous amounts of data that must be processed, stored, and made available to the user community. The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) collects data in 224 contiguous bands that are approximately 9.6 nm wide, between 0.40 and 2.45 μm. Hyperspectral sensors acquire images in many very narrow, contiguous spectral bands throughout the visible, near-IR, and thermal IR portions of the spectrum. The unsupervised image classification procedure automatically categorizes the pixels in an image into land cover classes or themes. Experiments on using hyperspectral remote sensing for land cover classification were conducted during the 2003 and 2004 NASA Summer Faculty Fellowship Program at Stennis Space Center. Research Systems Inc.'s (RSI) ENVI software package was used in this application framework. In this application, emphasis was placed on: (1) spectrally oriented classification procedures for land cover mapping, particularly supervised surface classification using AVIRIS data; and (2) identifying data endmembers.
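    Unsupervised classification of hyperspectral pixels is typically done by clustering pixel spectra. The toy k-means below is a generic stand-in for such a procedure (not ENVI's implementation); it groups pixel spectra into land cover themes without any training labels:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal k-means: cluster pixel spectra (n_pixels x n_bands) into
    k land-cover themes. A stand-in for production clustering routines."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each spectrum to its nearest cluster center
        labels = np.argmin(((pixels[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned spectra
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels

# Four toy 2-band spectra: two "vegetation-like", two "soil-like"
spectra = np.array([[0.10, 0.10], [0.12, 0.09], [0.90, 0.80], [0.88, 0.82]])
labels = kmeans(spectra, k=2)
```

Real AVIRIS pixels would have 224 bands instead of 2, but the clustering logic is unchanged.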

  20. A Cloud Boundary Detection Scheme Combined with ASLIC and CNN Using ZY-3, GF-1/2 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Li, C.; Wang, Z.; Kwok, E.; Wei, X.

    2018-04-01

    Remote sensing optical image cloud detection is one of the most important problems in remote sensing data processing. Aiming at the information loss caused by cloud cover, a cloud detection method based on a convolutional neural network (CNN) is presented in this paper. Firstly, a deep CNN is used to extract a multi-level feature generation model of clouds from the training samples. Secondly, the adaptive simple linear iterative clustering (ASLIC) method is used to divide the detected images into superpixels. Finally, the probability of each superpixel belonging to the cloud region is predicted by the trained network model, thereby generating a cloud probability map. Typical regions of GF-1/2 and ZY-3 imagery were selected to carry out the cloud detection test and compared with the traditional SLIC method. The experimental results show that the average accuracy of cloud detection is increased by more than 5%, and that both thin and thick clouds, as well as whole cloud boundaries, can be detected well on different imaging platforms.
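    The final step, expanding per-superpixel predictions into a per-pixel cloud probability map, can be sketched as follows. All names are hypothetical, and the trained CNN is assumed to have already produced one probability per superpixel:

```python
import numpy as np

def cloud_probability_map(labels, probs, threshold=0.5):
    """Paint each pixel with the cloud probability of the superpixel it
    belongs to, and derive a binary cloud mask by thresholding."""
    prob_map = np.asarray(probs)[labels]     # fancy indexing expands per pixel
    return prob_map, prob_map >= threshold

labels = np.array([[0, 0, 1],
                   [2, 2, 1]])               # toy superpixel segmentation
probs = [0.9, 0.2, 0.6]                      # CNN output, one per superpixel
prob_map, cloud_mask = cloud_probability_map(labels, probs)
```

In the real pipeline, `labels` would come from ASLIC segmentation and `probs` from the trained network.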

  1. A Spiral-Based Downscaling Method for Generating 30 m Time Series Image Data

    NASA Astrophysics Data System (ADS)

    Liu, B.; Chen, J.; Xing, H.; Wu, H.; Zhang, J.

    2017-09-01

    The spatial detail and updating frequency of land cover data are important factors influencing land surface dynamic monitoring applications at high spatial resolution. However, the fragmented patches and seasonal variability of some land cover types (e.g. small crop fields, wetlands) make the generation of land cover data labor-intensive and difficult. Utilizing high spatial resolution multi-temporal image data is a possible solution. Unfortunately, the spatial and temporal resolutions of available remote sensing data such as the Landsat or MODIS datasets can hardly satisfy the minimum mapping unit and the update frequency of current land cover mapping at the same time. The generation of high resolution time series may be a compromise to cover this shortage in the land cover updating process. One popular approach is to downscale multi-temporal MODIS data with high spatial resolution auxiliary data such as Landsat. However, the usual manner of downscaling pixels based on a window may lead to an underdetermined problem in heterogeneous areas, resulting in uncertainty for some high spatial resolution pixels. Therefore, the downscaled multi-temporal data can hardly reach the spatial resolution of Landsat data. A spiral-based method is introduced to downscale low spatial, high temporal resolution image data to high spatial, high temporal resolution image data. By searching for similar pixels in the adjacent region along a spiral, a pixel set is built up pixel by pixel. Adopting the pixel set constructed in this way largely prevents the underdetermined problem when solving the linear system. Using ordinary least squares, the method inverts the endmember values of the linear system. The high spatial resolution image is then reconstructed band by band on the basis of a high spatial resolution class map and the endmember values, and the high spatial resolution time series is formed from these images one by one. A simulated experiment and a remote sensing image downscaling experiment were conducted. In the simulated experiment, the 30 m class map dataset GlobeLand30 was adopted to investigate the method's effectiveness in avoiding the underdetermined problem in the downscaling procedure, and a comparison between the spiral and the window was conducted. Further, MODIS NDVI and Landsat image data were used to generate a 30 m NDVI time series in the remote sensing image downscaling experiment. The simulated experiment results showed that the proposed method performs robustly when downscaling pixels in heterogeneous regions and indicated that it is superior to traditional window-based methods. The high resolution time series generated may benefit the mapping and updating of land cover data.
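    The endmember inversion step can be illustrated with ordinary least squares: each coarse (e.g. MODIS) pixel value is modelled as a linear mix of class endmembers weighted by the class fractions inside it, and stacking the similar pixels found along the spiral yields a solvable, overdetermined system. The numbers below are purely illustrative:

```python
import numpy as np

# Linear mixing model: coarse_i = sum_c fraction[i, c] * endmember[c].
# Three similar coarse pixels, two land-cover classes (toy values).
fractions = np.array([[0.7, 0.3],
                      [0.2, 0.8],
                      [0.5, 0.5]])        # class fractions in 3 coarse pixels
coarse = np.array([0.48, 0.78, 0.60])     # observed coarse-pixel values

# Ordinary least squares inverts the endmember values of the linear system
endmembers, *_ = np.linalg.lstsq(fractions, coarse, rcond=None)
```

The recovered per-class endmember values are then painted back onto the 30 m class map to reconstruct the high resolution image band by band.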

  2. A Saliency Guided Semi-Supervised Building Change Detection Method for High Resolution Remote Sensing Images

    PubMed Central

    Hou, Bin; Wang, Yunhong; Liu, Qingjie

    2016-01-01

    Characterizing up-to-date information on the Earth’s surface is an important application, providing insights for urban planning, resource monitoring and environmental studies. A large number of change detection (CD) methods have been developed for this purpose by utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images poses challenges to traditional CD methods and offers opportunities to object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes in HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, hierarchical fuzzy frequency vector histograms are formed based on the image-objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and the morphological building index (MBI) extracted on difference images are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes are detected by the proposed method in our experiments. The effectiveness of the method was verified by both visual and numerical evaluation. PMID:27618903

  3. A Saliency Guided Semi-Supervised Building Change Detection Method for High Resolution Remote Sensing Images.

    PubMed

    Hou, Bin; Wang, Yunhong; Liu, Qingjie

    2016-08-27

    Characterizing up-to-date information on the Earth's surface is an important application, providing insights for urban planning, resource monitoring and environmental studies. A large number of change detection (CD) methods have been developed for this purpose by utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images poses challenges to traditional CD methods and offers opportunities to object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes in HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, hierarchical fuzzy frequency vector histograms are formed based on the image-objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and the morphological building index (MBI) extracted on difference images are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes are detected by the proposed method in our experiments. The effectiveness of the method was verified by both visual and numerical evaluation.

  4. Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS

    NASA Astrophysics Data System (ADS)

    Sofina, N.; Ehlers, M.

    2012-08-01

    High resolution remotely sensed images provide current, detailed, and accurate information for large areas of the Earth's surface which can be used for change detection analyses. Conventional methods of image processing permit detection of changes by comparing remotely sensed multitemporal images. However, for a successful analysis it is desirable to take images from the same sensor, acquired at the same time of season, at the same time of day, and - for electro-optical sensors - in cloudless conditions. Thus, a change detection analysis can be problematic, especially for sudden catastrophic events. A promising alternative is the use of vector-based maps containing information about the original urban layout, which can be related to a single image obtained after the catastrophe. The paper describes a methodology for an object-based search for destroyed buildings as a consequence of a natural or man-made catastrophe (e.g., earthquakes, flooding, civil war). The analysis is based on remotely sensed and vector GIS data. It includes three main steps: (i) generation of features describing the state of buildings; (ii) classification of building conditions; and (iii) data import into a GIS. One of the proposed features is the newly developed 'Detected Part of Contour' (DPC). Additionally, several features based on the analysis of textural information corresponding to the investigated vector objects are calculated. The method is applied to remotely sensed images of areas that have been subjected to an earthquake. The results show the high reliability of the DPC feature as an indicator of change.
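    The DPC feature can be understood as the fraction of a building's vector contour that coincides with edges detected in the post-event image: an intact building confirms most of its GIS outline, a collapsed one does not. The sketch below is a simplified illustration of that idea, not the authors' exact formulation:

```python
import numpy as np

def detected_part_of_contour(contour_mask, edge_mask):
    """Fraction of the rasterized GIS building contour confirmed by edges
    detected in the post-event image -- the DPC idea in its simplest form."""
    contour = contour_mask.astype(bool)
    return (contour & edge_mask.astype(bool)).sum() / contour.sum()

# Toy 5x5 example: full building outline vs. edges where only one wall survived
contour = np.zeros((5, 5), int)
contour[0, :] = contour[-1, :] = contour[:, 0] = contour[:, -1] = 1
edges = np.zeros((5, 5), int)
edges[0, :] = 1                      # only the top wall produced edges
dpc = detected_part_of_contour(contour, edges)
```

A low DPC value for a building polygon then flags it as potentially destroyed.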

  5. A Robust False Matching Points Detection Method for Remote Sensing Image Registration

    NASA Astrophysics Data System (ADS)

    Shan, X. J.; Tang, P.

    2015-04-01

    Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching point detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points. However, the RANSAC method cannot detect all false matching points in some remote sensing images. Therefore, a robust false matching point detection method based on the K-nearest-neighbour (K-NN) graph (KGD) is proposed in this paper to obtain robust and highly accurate results. The KGD method starts with the construction of the K-NN graph in one image: a K-NN graph is first generated for each matching point and its K nearest matching points. A local transformation model for each matching point is then obtained by using its K nearest matching points. The error of each matching point is computed using its transformation model. Last, the L matching points with the largest errors are identified as false matching points and removed. This process iterates until all errors are smaller than the given threshold. In addition, the KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiment. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
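    A simplified version of the KGD iteration can be sketched as follows. For brevity, the local transformation model is reduced to the mean displacement of the K nearest matches (the paper fits a richer local model), one point is removed per iteration rather than L, and all names are illustrative:

```python
import numpy as np

def kgd_filter(src, dst, k=3, threshold=1.0):
    """Iteratively drop the match whose displacement deviates most from the
    mean displacement of its k nearest neighbours; stop when every residual
    error is below the threshold. Returns indices of surviving matches."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    keep = np.arange(len(src))
    while len(keep) > k + 1:
        s, d = src[keep], dst[keep]
        disp = d - s
        errors = np.empty(len(keep))
        for i in range(len(keep)):
            dists = np.linalg.norm(s - s[i], axis=1)
            nn = np.argsort(dists)[1:k + 1]        # k nearest, excluding self
            errors[i] = np.linalg.norm(disp[i] - disp[nn].mean(axis=0))
        worst = int(errors.argmax())
        if errors[worst] <= threshold:
            break                                  # all matches consistent
        keep = np.delete(keep, worst)
    return keep

# Five matches shifted by (5, 0); match 2 is a gross outlier
src = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1]], float)
dst = src + np.array([5.0, 0.0])
dst[2] = src[2] + np.array([50.0, 50.0])
kept = kgd_filter(src, dst)
```

The outlier's displacement disagrees with all of its neighbours, so it accumulates the largest error and is removed first.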

  6. Investigation related to multispectral imaging systems

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Erickson, J. D.

    1974-01-01

    A summary of technical progress made during a five year research program directed toward the development of operational information systems based on multispectral sensing and the use of these systems in earth-resource survey applications is presented. Efforts were undertaken during this program to: (1) improve the basic understanding of the many facets of multispectral remote sensing, (2) develop methods for improving the accuracy of information generated by remote sensing systems, (3) improve the efficiency of data processing and information extraction techniques to enhance the cost-effectiveness of remote sensing systems, (4) investigate additional problems having potential remote sensing solutions, and (5) apply the existing and developing technology for specific users and document and transfer that technology to the remote sensing community.

  7. Long-Term Monitoring of Desert Land and Natural Resources and Application of Remote Sensing Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamada, Yuki; Rollins, Katherine E.

    2016-11-01

    Monitoring environmental impacts over large, remote desert regions for long periods of time can be very costly. Remote sensing technologies present a promising monitoring tool because they entail the collection of spatially contiguous data, automated processing, and streamlined data analysis. This report provides a summary of remote sensing products and refinement of remote sensing data interpretation methodologies that were generated as part of the U.S. Department of the Interior Bureau of Land Management Solar Energy Program. In March 2015, a team of researchers from Argonne National Laboratory (Argonne) collected field data of vegetation and surface types from more than 5,000 survey points within the eastern part of the Riverside East Solar Energy Zone (SEZ). Using the field data, remote sensing products that were generated in 2014 using very high spatial resolution (VHSR; 15 cm) multispectral aerial images were validated in order to evaluate potential refinements to the previous methodologies to improve the information extraction accuracy.

  8. Experimental image alignment system

    NASA Technical Reports Server (NTRS)

    Moyer, A. L.; Kowel, S. T.; Kornreich, P. G.

    1980-01-01

    A microcomputer-based instrument for image alignment with respect to a reference image is described which uses the DEFT sensor (Direct Electronic Fourier Transform) for image sensing and preprocessing. The instrument alignment algorithm which uses the two-dimensional Fourier transform as input is also described. It generates signals used to steer the stage carrying the test image into the correct orientation. This algorithm has computational advantages over algorithms which use image intensity data as input and is suitable for a microcomputer-based instrument since the two-dimensional Fourier transform is provided by the DEFT sensor.
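    A classic way to derive alignment signals from two-dimensional Fourier transforms is phase correlation: the peak of the normalized cross-power spectrum directly gives the translation between test and reference images. The sketch below illustrates this principle only (a simplified stand-in, not the instrument's actual algorithm):

```python
import numpy as np

def phase_correlation_shift(ref, test):
    """Estimate the integer (dy, dx) translation between two images from
    their 2-D Fourier transforms via the cross-power spectrum peak."""
    F, G = np.fft.fft2(ref), np.fft.fft2(test)
    cps = np.conj(F) * G
    cps /= np.abs(cps) + 1e-12               # keep phase, discard magnitude
    peak = np.unravel_index(np.argmax(np.fft.ifft2(cps).real), ref.shape)
    shift = np.array(peak)
    shape = np.array(ref.shape)
    wrap = shift > shape // 2                # map wrap-around to negative shifts
    shift[wrap] -= shape[wrap]
    return tuple(int(s) for s in shift)

rng = np.random.default_rng(1)
ref = rng.random((8, 8))
test = np.roll(ref, (2, -3), axis=(0, 1))    # test image displaced by (2, -3)
estimated = phase_correlation_shift(ref, test)
```

Because the correlation is computed from Fourier data, such an approach works directly on the transform the DEFT sensor already provides, without reconstructing intensity images.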

  9. An HDR imaging method with DTDI technology for push-broom cameras

    NASA Astrophysics Data System (ADS)

    Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin

    2018-03-01

    Conventionally, high dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, it is hard to apply this technique to push-broom remote sensing cameras. To enable HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method which can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area array CMOS (complementary metal oxide semiconductor) sensor with digital domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, thereby taking more than one picture with different exposures. A new HDR image is then achieved by fusing the two original images with a simple algorithm. In the experiment, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is relative motion between the camera and the scene.
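    The fusion step can be illustrated in the radiance domain: where the long (high-gain) exposure is unsaturated it provides a low-noise estimate, and where it clips, the short exposure takes over. An exposure ratio of g extends the dynamic range by roughly 20·log10(g) dB. The sketch below is a simplified stand-in for the paper's algorithm, with illustrative names and values:

```python
import numpy as np

def fuse_hdr(short_exp, long_exp, gain, saturation=255):
    """Merge two exposures of the same scan line: use the long (gain x)
    exposure rescaled to the short-exposure range where it is unsaturated,
    and fall back to the short exposure where the long one clipped."""
    short_exp = np.asarray(short_exp, float)
    long_exp = np.asarray(long_exp, float)
    return np.where(long_exp < saturation, long_exp / gain, short_exp)

short = np.array([10.0, 100.0])    # dim pixel, bright pixel
long_ = np.array([40.0, 255.0])    # bright pixel saturates at 4x exposure
fused = fuse_hdr(short, long_, gain=4)
```

With DTDI, the two effective exposures come from integrating different numbers of rows digitally, so no extra sensor hardware is needed.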

  10. Real-Time Digital Bright Field Technology for Rapid Antibiotic Susceptibility Testing.

    PubMed

    Canali, Chiara; Spillum, Erik; Valvik, Martin; Agersnap, Niels; Olesen, Tom

    2018-01-01

    Optical scanning through bacterial samples and image-based analysis may provide a robust method for bacterial identification, fast estimation of growth rates and their modulation due to the presence of antimicrobial agents. Here, we describe an automated digital, time-lapse, bright field imaging system (oCelloScope, BioSense Solutions ApS, Farum, Denmark) for rapid and higher throughput antibiotic susceptibility testing (AST) of up to 96 bacteria-antibiotic combinations at a time. The imaging system consists of a digital camera, an illumination unit and a lens where the optical axis is tilted 6.25° relative to the horizontal plane of the stage. Such tilting grants more freedom of operation at both high and low concentrations of microorganisms. When considering a bacterial suspension in a microwell, the oCelloScope acquires a sequence of 6.25°-tilted images to form an image Z-stack. The stack contains the best-focus image, as well as the adjacent out-of-focus images (which contain progressively more out-of-focus bacteria, the further the distance from the best-focus position). The acquisition process is repeated over time, so that the time-lapse sequence of best-focus images is used to generate a video. The setting of the experiment, image analysis and generation of time-lapse videos can be performed through a dedicated software (UniExplorer, BioSense Solutions ApS). The acquired images can be processed for online and offline quantification of several morphological parameters, microbial growth, and inhibition over time.
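    Selecting the best-focus image from a Z-stack is commonly done with a simple sharpness metric such as intensity variance, since defocus washes out local contrast. The sketch below illustrates the idea; the criterion actually used by the oCelloScope software may differ:

```python
import numpy as np

def best_focus_index(z_stack):
    """Index of the best-focus image in a Z-stack, using intensity variance
    as a simple focus metric (sharper images have higher contrast)."""
    return int(np.argmax([np.asarray(img).var() for img in z_stack]))

sharp = np.tile([[0.0, 1.0], [1.0, 0.0]], (4, 4))   # high-contrast checkerboard
blurred = np.full((8, 8), 0.5)                       # defocus washes contrast out
stack = [blurred, sharp, blurred]
best = best_focus_index(stack)
```

Repeating this selection at each time point yields the sequence of best-focus frames from which the time-lapse video is assembled.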

  11. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide a great deal of redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and identify some inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different land cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and the POS.

  12. Mapping of Polar Areas Based on High-Resolution Satellite Images: The Example of the Henryk Arctowski Polish Antarctic Station

    NASA Astrophysics Data System (ADS)

    Kurczyński, Zdzisław; Różycki, Sebastian; Bylina, Paweł

    2017-12-01

    To produce orthophotomaps or digital elevation models, the most commonly used method is photogrammetric measurement. However, the use of aerial images is not easy in polar regions for logistical reasons; in these areas, remote sensing data acquired from satellite systems are much more useful. This paper presents the basic technical requirements of the different products that can be obtained (in particular orthoimages and digital elevation models (DEMs)) using Very-High-Resolution Satellite (VHRS) images. The study area was situated in the vicinity of the Henryk Arctowski Polish Antarctic Station on the western shore of Admiralty Bay, King George Island, West Antarctica. Image processing was applied to two triplets of images acquired by Pléiades 1A and 1B in March 2013. The generation of orthoimages from the Pléiades systems without ground control points showed that the proposed method can achieve a Root Mean Squared Error (RMSE) of 3-9 m. The resulting Pléiades images are useful for thematic remote sensing analysis and measurement processing. Using satellite images to produce remote sensing products for polar regions is highly beneficial and reliable, and compares well with more expensive airborne photographs or field surveys.

  13. Using multi-level remote sensing and ground data to estimate forest biomass resources in remote regions: a case study in the boreal forests of interior Alaska

    Treesearch

    Hans-Erik Andersen; Strunk Jacob; Hailemariam Temesgen; Donald Atwood; Ken Winterberger

    2012-01-01

    The emergence of a new generation of remote sensing and geopositioning technologies, together with increased capabilities in image processing, computing, and inferential techniques, has enabled the development and implementation of increasingly efficient and cost-effective multilevel sampling designs for forest inventory. In this paper, we (i) describe the conceptual...

  14. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.
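
    The "linear algebra of compressed imaging" referred to above is the recovery of a sparse signal x from fewer measurements y = Φx. A minimal sketch using orthogonal matching pursuit (one standard reconstruction choice, not necessarily the authors') is:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Greedily pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```

    In the paper's one-shot setting the rows of Φ correspond to the spot array generated by the static hologram, so all measurements are acquired simultaneously.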

  15. Crosscutting Airborne Remote Sensing Technologies for Oil and Gas and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Aubrey, A. D.; Frankenberg, C.; Green, R. O.; Eastwood, M. L.; Thompson, D. R.; Thorpe, A. K.

    2015-01-01

    Airborne imaging spectroscopy has evolved dramatically since the 1980s as a robust remote sensing technique used to generate two-dimensional maps of surface properties over large spatial areas. Traditional applications for passive airborne imaging spectroscopy include interrogation of surface composition, such as mapping of vegetation diversity and surface geological composition. Two recent applications are particularly relevant to the needs of both the oil and gas and government sectors: quantification of surficial hydrocarbon thickness in aquatic environments and mapping of atmospheric greenhouse gas components. These techniques provide valuable capabilities for detecting petroleum seepage as well as for detecting and quantifying fugitive emissions. New empirical data that provide insight into the source strength of anthropogenic methane will be reviewed, with particular emphasis on the evolving constraints enabled by new methane remote sensing techniques. Contemporary studies attribute a significant share of the national methane inventory to high-strength point sources and underscore the need for high-performance remote sensing technologies that provide quantitative leak detection. Imaging sensors that map spatial distributions of methane anomalies provide effective means to detect, localize, and quantify fugitive leaks. Airborne remote sensing instruments provide the unique combination of high spatial resolution (<1 m) and large coverage required to directly attribute methane emissions to individual emission sources, a capability that cannot currently be achieved using spaceborne sensors. In this study, results from recent NASA remote sensing field experiments focused on point-source leak detection will be highlighted, including existing quantitative capabilities for oil and methane using state-of-the-art airborne remote sensing instruments.
While these capabilities are of interest to NASA for assessment of environmental impact and global climate change, industry similarly seeks to detect and localize leaks of both oil and methane across operating fields. In some cases, the higher sensitivities desired for upstream and downstream applications can only be provided by new airborne remote sensing instruments tailored specifically to a given application. There is thus a unique opportunity to align efforts between the commercial and government sectors to advance the next generation of instruments, providing more sensitive leak detection capabilities, including quantitative source strength determination.

  16. REMOTE SENSING AND GIS WETLANDS

    EPA Science Inventory

    Learn how photographs and computer sensor generated images can illustrate conditions of hydrology, extent, change over time, and impact of events such as hurricanes and tornados. Other topics include: information storage and modeling, and evaluation of wetlands for managing reso...

  17. REMOTE SENSING AND GIS FOR WETLANDS

    EPA Science Inventory

    In identifying and characterizing wetland and adjacent features, the use of remote sensor and Geographic Information Systems (GIS) technologies has been valuable. Remote sensors such as photographs and computer-sensor generated images can illustrate conditions of hydrology, exten...

  18. On validating remote sensing simulations using coincident real data

    NASA Astrophysics Data System (ADS)

    Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan

    2016-05-01

    The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery across a range of modalities. As scene rendering technology and software have advanced, complex simulation of vegetation environments has become possible. This in turn has raised questions about the validity of such complex models, since phenomena such as multiple scattering and the bidirectional reflectance distribution function (BRDF) could affect results for complex vegetation scenes. We selected three sites in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON), representing oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites were then generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15 m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1 m spatial resolution; 180 pixels/scene). These tests used a distribution-comparison approach for select spectral statistics, e.g., statistics that establish the spectra's shape, for each simulated-versus-real distribution pair. Initial comparison of the spectral distributions indicated that the shapes of the spectra from the virtual and real sites were closely matched.
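
    A distribution-comparison of spectral statistics such as the one described can be illustrated with a two-sample Kolmogorov-Smirnov statistic. The paper does not specify its exact comparison metric, so this is an assumed example:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum gap between the
    empirical CDFs of samples `a` and `b` (0 = identical, 1 = disjoint)."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])                       # evaluation points
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

    Applied per band (or per derived statistic such as the spectral mean), a small statistic for a simulated-versus-real pair indicates closely matched distributions.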

  19. Remote sensing and implications for variable-rate application using agricultural aircraft

    NASA Astrophysics Data System (ADS)

    Thomson, Steven J.; Smith, Lowrey A.; Ray, Jeffrey D.; Zimba, Paul V.

    2004-01-01

    Aircraft routinely used for agricultural spray application are finding utility for remote sensing. Data obtained from remote sensing can be used for prescription application of pesticides, fertilizers, cotton growth regulators, and water (the latter with the assistance of hyperspectral indices and thermal imaging). Digital video was used to detect weeds in early cotton, and preliminary data were obtained to determine whether nitrogen status could be detected in early soybeans. Weeds were differentiable from early cotton at very low altitude (65 m) with the aid of supervised classification algorithms in the ENVI image analysis software; the camera was flown at very low altitude to obtain acceptable pixel resolution. Nitrogen status was not detectable by statistical analysis of digital numbers (DNs) obtained from images, but soybean cultivar differences were statistically discernible (F=26, p=0.01). Spectroradiometer data are being analyzed to identify narrow spectral bands that might aid in selecting camera filters for determination of plant nitrogen status. Multiple-camera configurations are proposed to allow vegetative indices to be developed more readily. Both remotely sensed field images and ground data are to be used for decision-making in a proposed variable-rate application system for agricultural aircraft. For this system, prescriptions generated from digital imagery and data will be coupled with GPS-based swath guidance and programmable flow control.
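
    The vegetative indices mentioned above are typically band ratios computed per pixel. As an illustration, NDVI is the most common such index, though the abstract does not commit to a specific one:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red bands.
    Healthy vegetation reflects strongly in NIR, giving values near +1."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

    With a multiple-camera configuration, one camera filtered to red and one to near-infrared would supply the two co-registered bands this computation needs.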

  20. Generation of high-dynamic range image from digital photo

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Potemin, Igor S.; Zhdanov, Dmitry D.; Wang, Xu-yang; Cheng, Han

    2016-10-01

    A number of modern applications, such as medical imaging, remote sensing satellite imaging, and virtual prototyping, use High Dynamic Range Images (HDRI). Generally, to obtain an HDRI from an ordinary digital image, the camera must be calibrated. This article proposes a camera calibration method that uses the clear sky as a standard light source, taking the sky luminance from the CIE sky model for the corresponding geographical coordinates and time. The article presents basic algorithms for recovering real luminance values from an ordinary digital image, together with their programmed implementation. Examples of HDRIs reconstructed from ordinary images illustrate the article.
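
    The calibration idea can be sketched as follows, assuming a simple gamma response model and a reference patch of known sky luminance; the article's actual algorithms and CIE sky-model details are more involved:

```python
import numpy as np

def pixel_to_luminance(dn, l_ref, dn_ref, gamma=2.2):
    """Map 8-bit digital numbers to absolute luminance, calibrated against a
    reference patch of known luminance l_ref (e.g. a clear-sky region whose
    luminance comes from a CIE sky model). A pure gamma response is assumed."""
    lin = (np.asarray(dn, dtype=float) / 255.0) ** gamma   # undo gamma encoding
    lin_ref = (dn_ref / 255.0) ** gamma
    return l_ref * lin / lin_ref                           # scale to cd/m^2
```

    In practice the camera response curve would be estimated from data rather than assumed to be a fixed gamma.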

  1. Carbon nanotube thin film strain sensor models assembled using nano- and micro-scale imaging

    NASA Astrophysics Data System (ADS)

    Lee, Bo Mi; Loh, Kenneth J.; Yang, Yuan-Sen

    2017-07-01

    Nanomaterial-based thin films, particularly those based on carbon nanotubes (CNT), have brought forth tremendous opportunities for designing next-generation strain sensors. However, their strain sensing properties can vary depending on fabrication method, post-processing treatment, and types of CNTs and polymers employed. The objective of this study was to derive a CNT-based thin film strain sensor model using inputs from nano-/micro-scale experimental measurements of nanotube physical properties. This study began with fabricating ultra-low-concentration CNT-polymer thin films, followed by imaging them using atomic force microscopy. Image processing was employed for characterizing CNT dispersed shapes, lengths, and other physical attributes, and results were used for building five different types of thin film percolation-based models. Numerical simulations were conducted to assess how the morphology of dispersed CNTs in its 2D matrix affected bulk film electrical and electromechanical (strain sensing) properties. The simulation results showed that CNT morphology had a significant impact on strain sensing performance.

  2. Gender differences in autobiographical memory for everyday events: retrieval elicited by SenseCam images versus verbal cues.

    PubMed

    St Jacques, Peggy L; Conway, Martin A; Cabeza, Roberto

    2011-10-01

    Gender differences are frequently observed in autobiographical memory (AM). However, few studies have investigated the neural basis of potential gender differences in AM. In the present functional MRI (fMRI) study we investigated gender differences in AMs elicited using dynamic visual images vs verbal cues. We used a novel technology called a SenseCam, a wearable device that automatically takes thousands of photographs. SenseCam differs considerably from other prospective methods of generating retrieval cues because it does not disrupt the ongoing experience. This allowed us to control for potential gender differences in emotional processing and elaborative rehearsal, while manipulating how the AMs were elicited. We predicted that males would retrieve more richly experienced AMs elicited by the SenseCam images vs the verbal cues, whereas females would show equal sensitivity to both cues. The behavioural results indicated that there were no gender differences in subjective ratings of reliving, importance, vividness, emotion, and uniqueness, suggesting that gender differences in brain activity were not due to differences in these measures of phenomenological experience. Consistent with our predictions, the fMRI results revealed that males showed a greater difference in functional activity associated with the rich experience of SenseCam vs verbal cues, than did females.

  3. Novel Raman Techniques for Imaging and Sensing

    NASA Astrophysics Data System (ADS)

    Edwards, Perry S.

    Raman scattering spectroscopy is extensively demonstrated as a label-free, chemically selective sensing and imaging technique for a multitude of chemical and biological applications. The ability to detect "fingerprint" spectral signatures of individual molecules, without the need to introduce chemical labels, makes Raman scattering a powerful sensing technique. However, spectroscopy based on spontaneous Raman scattering traditionally suffers from inherently weak signals due to small Raman scattering cross-sections. Thus, considerable effort has been put toward finding pathways to enhance Raman signals and bolster sensitivity for detecting small concentrations of molecules or particles. Coherent Raman techniques that offer orders-of-magnitude increases in signal, such as coherent anti-Stokes Raman scattering and stimulated Raman scattering, have garnered significant interest in recent years for imaging applications. Additionally, methods to enhance the local field of either the pump or the generated Raman signal, such as surface-enhanced Raman scattering, have been investigated for their orders-of-magnitude improvement in sensitivity and single-molecule sensing capability. The work presented in this dissertation describes novel techniques for performing high-speed and highly sensitive Raman imaging, as well as sensing applications for bioimaging and biosensing. Coherent anti-Stokes Raman scattering (CARS) is combined with holography to enable recording of high-speed (single laser shot), wide-field CARS holograms, which can be used to reconstruct both the amplitude and the phase of the anti-Stokes field, therefore allowing 3D imaging. This dissertation explores CARS holography as a viable label-free bio-imaging technique. A Raman scattering particle sensing system is also developed that utilizes the waveguide properties of optical fibers and ring resonators to perform enhanced particle sensing.
Resonator-enhanced particle sensing is experimentally examined as a new method for enhancing Raman scattering from particles interacting with circulating optical fields within both a fiber ring cavity and whispering-gallery-mode microtoroid microresonators. The achievements described in this dissertation include: (1) demonstration of the bio-imaging capability of CARS holography by recording CARS holograms of subcellular components in live cancer (HeLa) cells; (2) label-free Raman microparticle sensing using tapered optical fibers, in which a tapered fiber guides light to particles adsorbed on the taper surface to generate scattered Raman signal that is collected by a micro-Raman detection system; (3) demonstration of proof of concept for resonator-enhanced Raman spectroscopy in a fiber ring resonator containing a section of fiber taper; (4) a method for locking the pump laser to the resonant frequencies of a resonator, demonstrated using a fiber ring resonator and microtoroid microresonators; (5) acquisition of Raman scattered signal from particles adhered to microtoroid microresonators using 5 seconds of signal integration time with the pump laser locked to a cavity resonance; and (6) theoretical analysis indicating that resonator-enhanced Raman scattering from microparticles adhered to microresonators can be achieved with the pump laser locked to the frequency of a high-Q cavity resonant mode.

  4. Can we infer plant facilitation from remote sensing? A test across global drylands

    PubMed Central

    Xu, Chi; Holmgren, Milena; Van Nes, Egbert H.; Maestre, Fernando T.; Soliveres, Santiago; Berdugo, Miguel; Kéfi, Sonia; Marquet, Pablo A.; Abades, Sebastian; Scheffer, Marten

    2016-01-01

    Facilitation is a major force shaping the structure and diversity of plant communities in terrestrial ecosystems. Detecting positive plant-plant interactions relies on the combination of field experimentation and the demonstration of spatial association between neighboring plants. This has often restricted the study of facilitation to particular sites, limiting the development of systematic assessments of facilitation over regional and global scales. Here we explore whether the frequency of plant spatial associations detected from high-resolution remotely-sensed images can be used to infer plant facilitation at the community level in drylands around the globe. We correlated the information from remotely-sensed images freely available through Google Earth™ with detailed field assessments, and used a simple individual-based model to generate patch-size distributions using different assumptions about the type and strength of plant-plant interactions. Most of the patterns found from the remotely-sensed images were more right-skewed than the patterns from the null model simulating a random distribution. This suggests that the plants in the studied drylands show stronger spatial clustering than expected by chance. We found that positive plant co-occurrence, as measured in the field, was significantly related to the skewness of vegetation patch-size distribution measured using Google Earth™ images. Our findings suggest that the relative frequency of facilitation may be inferred from spatial pattern signals measured from remotely-sensed images, since facilitation often determines positive co-occurrence among neighboring plants. They pave the road for a systematic global assessment of the role of facilitation in terrestrial ecosystems. PMID:26552256
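
    The patch-size-distribution analysis can be sketched as connected-component labeling of a binary vegetation mask followed by a skewness estimate; this is a simplified illustration of the approach, not the authors' code:

```python
import numpy as np

def patch_sizes(mask):
    """Sizes of 4-connected vegetation patches in a binary mask (flood fill)."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    sizes = []
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        stack, size = [(i, j)], 0
        seen[i, j] = True
        while stack:
            r, c = stack.pop()
            size += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1] \
                        and mask[rr, cc] and not seen[rr, cc]:
                    seen[rr, cc] = True
                    stack.append((rr, cc))
        sizes.append(size)
    return sizes

def skewness(x):
    """Sample skewness; right-skew (> 0) suggests clustering beyond chance."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 3).mean() / s ** 3)
```

    Comparing the skewness of observed patch-size distributions against null-model simulations is the key step for inferring spatial clustering, and hence facilitation.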

  5. Suppressing the image smear of the vibration modulation transfer function for remote-sensing optical cameras.

    PubMed

    Li, Jin; Liu, Zilong; Liu, Si

    2017-02-20

    In on-board photographing processes of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of a camera is reduced by the VMTF, which means the image quality is degraded. To avoid this degradation of the total MTF caused by vibration, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM transforms platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiment shows that the M2052 manganese-copper alloy suppresses image motion at frequencies below 125 Hz, the vibration frequency range of satellite platforms. The camera optical system has a higher MTF after vibration suppression with the M2052 material than before.
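
    For linear image motion, the vibration contribution to the MTF is commonly modeled as a sinc function of spatial frequency and smear extent. A small sketch of this standard relation (not the paper's full VMTF model) is:

```python
import numpy as np

def linear_smear_mtf(f, smear):
    """MTF degradation from linear image motion of extent `smear` (pixels)
    at spatial frequency `f` (cycles/pixel): MTF_v(f) = |sinc(f * smear)|.
    Uses numpy's normalized sinc, sinc(x) = sin(pi*x) / (pi*x)."""
    return np.abs(np.sinc(f * smear))
```

    The total system MTF is then the product of the static optical MTF and this vibration term, which is why reducing smear (e.g. with a vibration-isolation mechanism) raises the total MTF.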

  6. Autonomous navigation and control of a Mars rover

    NASA Technical Reports Server (NTRS)

    Miller, D. P.; Atkinson, D. J.; Wilcox, B. H.; Mishkin, A. H.

    1990-01-01

    A Mars rover will need to be able to navigate autonomously for kilometers at a time. This paper outlines the sensing, perception, planning, and execution-monitoring systems currently being designed for the rover. The sensing is based on stereo vision. The interpretation of the images uses a registration of the depth map with a global height map provided by an orbiting spacecraft. Safe, low-energy paths are then planned through the map, and expectations of what the rover's articulation sensors should sense are generated. These expectations are then used to ensure that the planned path is being executed correctly.

  7. Solar thematic maps for space weather operations

    USGS Publications Warehouse

    Rigler, E. Joshua; Hill, Steven M.; Reinard, Alysha A.; Steenburgh, Robert A.

    2012-01-01

    Thematic maps are arrays of labels, or "themes", associated with discrete locations in space and time. Borrowing heavily from the terrestrial remote sensing discipline, a numerical technique based on Bayes' theorem captures operational expertise in the form of trained theme statistics, then uses these statistics to automatically assign labels to solar image pixels. Ultimately, regular thematic maps of the solar corona will be generated from high-cadence, high-resolution images from SUVI, the solar ultraviolet imager slated to fly on NOAA's next-generation GOES-R series of satellites starting ~2016. These thematic maps will not only provide quicker, more consistent synoptic views of the sun for space weather forecasters, but will also yield the digital thematic pixel masks (e.g., coronal hole, active region, flare, etc.) necessary for a new generation of operational solar data products. This paper presents the mathematical underpinnings of our thematic mapper, as well as some practical algorithmic considerations. Then, using images from the Solar Dynamics Observatory (SDO) Atmospheric Imaging Assembly (AIA) as test data, it presents results from validation experiments designed to ascertain the robustness of the technique with respect to differing expert opinions and changing solar conditions.
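
    The Bayes-theorem labeling described above can be sketched as a per-pixel naive Bayes classifier with Gaussian theme statistics. The theme names and single-band feature below are illustrative, not the paper's actual feature set:

```python
import numpy as np

def train_theme_stats(samples):
    """Per-theme (mean, variance, log-count) from labeled training pixels.
    `samples` maps theme name -> list of feature vectors. The log-count acts
    as an unnormalized log-prior (the shared normalizer cancels in argmax)."""
    return {t: (np.mean(v, axis=0), np.var(v, axis=0) + 1e-9, np.log(len(v)))
            for t, v in samples.items()}

def classify_pixel(x, stats):
    """Assign the theme maximizing the Gaussian log-posterior (Bayes' rule)."""
    def log_post(mu, var, log_prior):
        return log_prior - 0.5 * np.sum(np.log(2 * np.pi * var)
                                        + (x - mu) ** 2 / var)
    return max(stats, key=lambda t: log_post(*stats[t]))
```

    Applying `classify_pixel` to every pixel of a solar image yields the thematic map; the trained statistics are where the captured operational expertise lives.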

  8. Get the Picture?

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Positive Systems has worked in conjunction with Stennis Space Center to design the ADAR System 5500. This is a four-band airborne digital imaging system used to capture multispectral imagery similar to that available from satellite platforms such as Landsat, SPOT and the new generation of high resolution satellites. Positive Systems has provided remote sensing services for the development of digital aerial camera systems and software for commercial aerial imaging applications.

  9. Remote Sensing of Soils for Environmental Assessment and Management.

    NASA Technical Reports Server (NTRS)

    DeGloria, Stephen D.; Irons, James R.; West, Larry T.

    2014-01-01

    The next generation of imaging systems integrated with complex analytical methods will revolutionize the way we inventory and manage soil resources across a wide range of scientific disciplines and application domains. This special issue highlights those systems and methods for the direct benefit of environmental professionals and students who employ imaging and geospatial information for improved understanding, management, and monitoring of soil resources.

  10. Terahertz imaging through self-mixing in a quantum cascade laser.

    PubMed

    Dean, Paul; Lim, Yah Leng; Valavanis, Alex; Kliese, Russell; Nikolić, Milan; Khanna, Suraj P; Lachab, Mohammad; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Rakić, Aleksandar D; Linfield, Edmund H; Davies, A Giles

    2011-07-01

    We demonstrate terahertz (THz) frequency imaging using a single quantum cascade laser (QCL) device for both generation and sensing of THz radiation. Detection is achieved by utilizing the effect of self-mixing in the THz QCL, and, specifically, by monitoring perturbations to the voltage across the QCL, induced by light reflected from an external object back into the laser cavity. Self-mixing imaging offers high sensitivity, a potentially fast response, and a simple, compact optical design, and we show that it can be used to obtain high-resolution reflection images of exemplar structures.

  11. The pan-sharpening of satellite and UAV imagery for agricultural applications

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Woroszkiewicz, Malgorzata

    2016-10-01

    Remote sensing techniques are widely used in many areas of interest, e.g., urban studies, environmental studies, and agriculture, because they provide rapid and accurate information over large areas with optimal temporal, spatial, and spectral resolution. Agricultural management is now one of the most common applications of remote sensing methods. Monitoring agricultural sites and generating information on the spatial distribution and characteristics of crops are important tasks that provide data for precision agriculture, crop management, and registries of agricultural land. Many different types of remote sensing data can be used to monitor cultivated areas; the most popular are multispectral satellite images. Such data allow land use and land cover maps to be generated using various image processing and remote sensing methods. This paper presents the fusion of satellite and unmanned aerial vehicle (UAV) imagery for agricultural applications, especially for distinguishing crop types. The authors present selected data fusion methods for satellite images and data obtained from low altitudes, describe pan-sharpening approaches, and apply selected pan-sharpening methods to multiresolution fusion of satellite and UAV imagery. For this purpose, satellite images from the Landsat-8 OLI sensor and data collected during various UAV flights (with a mounted RGB camera) were used. The authors not only show the potential of fusing satellite and UAV images, but also present the application of pan-sharpening to crop identification and management.
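
    A classic method from the pan-sharpening family the authors discuss is the Brovey transform; a minimal sketch (one common choice, not necessarily among the methods the paper applies) is:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-9):
    """Brovey pan-sharpening: rescale each multispectral band so that the
    band sum matches a co-registered panchromatic (or high-resolution) image.
    `ms` has shape (bands, H, W), already upsampled to the pan grid of shape
    (H, W); the spectral band ratios are preserved, spatial detail is injected."""
    intensity = ms.sum(axis=0) + eps
    return ms * (pan / intensity)[None, :, :]
```

    In the satellite-plus-UAV setting described above, the high-resolution UAV image would play the role of the panchromatic band.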

  12. Technology study of quantum remote sensing imaging

    NASA Astrophysics Data System (ADS)

    Bi, Siwen; Lin, Xuling; Yang, Song; Wu, Zhiqiang

    2016-02-01

    Quantum remote sensing is proposed in response to the development of remote sensing science and technology and its application requirements. We first briefly introduce the background of quantum remote sensing, its theory, information mechanism, imaging experiments, and principle-prototype research, together with related work in China and abroad. We then expound the compression operator of the quantum remote sensing radiation field and the basic principles of the single-mode compression operator, the preparation of compressed (squeezed) light for quantum remote sensing image compression experiments and optical imaging, and the quantum remote sensing imaging principle prototype. Quantum remote sensing spaceborne active imaging technology is put forward, covering the system composition and working principle, the preparation and injection of compressed light in the active imaging device, and the quantum noise amplification device. Finally, we summarize the past 15 years of quantum remote sensing research and discuss future developments.

  13. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the requirements of massive image processing and storage. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building a cheap and efficient computer cluster that parallelizes the mean shift segmentation algorithm under the MapReduce model. The approach not only preserves segmentation quality but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean shift segmentation algorithm is thus of practical significance and value.
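
    The mean shift step that the MapReduce workers would parallelize can be sketched as follows (a flat-kernel mode seek for one query point; the distributed wrapper and the segmentation bookkeeping are omitted):

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth, iters=50, tol=1e-6):
    """Shift a query point to the local density mode using a flat kernel:
    repeatedly replace the point with the mean of its neighbors within
    `bandwidth`, until the shift falls below `tol`."""
    points = np.asarray(points, dtype=float)
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        neighbors = points[d <= bandwidth]
        if len(neighbors) == 0:
            break
        new_x = neighbors.mean(axis=0)
        if np.linalg.norm(new_x - x) < tol:
            return new_x
        x = new_x
    return x
```

    In the MapReduce formulation, mappers run this mode seek on partitions of the pixel feature vectors and reducers merge modes that converge to the same cluster.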

  14. A cloud mask methodology for high resolution remote sensing data combining information from high and medium resolution optical sensors

    NASA Astrophysics Data System (ADS)

    Sedano, Fernando; Kempeneers, Pieter; Strobl, Peter; Kucera, Jan; Vogt, Peter; Seebach, Lucia; San-Miguel-Ayanz, Jesús

    2011-09-01

    This study presents a novel cloud masking approach for high resolution remote sensing images in the context of land cover mapping. In contrast to traditional methods, the approach does not rely on thermal bands and is applicable to images from most high resolution earth observation sensors. The methodology couples pixel-based seed identification with object-based region growing. The seed identification stage relies on pixel value comparison between the high resolution images and cloud-free composites at lower spatial resolution from almost simultaneously acquired dates. The methodology was tested using SPOT4-HRVIR, SPOT5-HRG and IRS-LISS III as high resolution images and cloud-free MODIS composites as reference images. The selected scenes included a wide range of cloud types and surface features. The resulting cloud masks were evaluated through visual comparison, and were also compared with ad hoc independently generated cloud masks and with the automatic cloud cover assessment algorithm (ACCA). In general, the results showed agreement higher than 95% for detected clouds larger than 50 ha. The approach produced consistent results, identifying and mapping clouds of different types and sizes over various land surfaces, including natural vegetation, agricultural land, built-up areas, water bodies and snow.
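
    The pixel-based-seed plus object-based-region-growing idea can be sketched as seeded region growing on a brightness image. The simple brightness threshold below is an assumption; the paper's seed criterion compares against lower-resolution cloud-free composites:

```python
import numpy as np

def grow_cloud_mask(image, seeds, threshold):
    """Grow a cloud mask from seed pixels: starting from seeds that pass the
    brightness test, add 4-connected neighbors whose brightness also stays
    above `threshold` (clouds are bright in optical bands)."""
    mask = np.zeros(image.shape, dtype=bool)
    stack = [s for s in seeds if image[s] >= threshold]
    for s in stack:
        mask[s] = True
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1] \
                    and not mask[rr, cc] and image[rr, cc] >= threshold:
                mask[rr, cc] = True
                stack.append((rr, cc))
    return mask
```

    Decoupling seed detection (high confidence) from growth (more permissive) is what lets such methods capture full cloud objects without flooding into bright ground features.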

  15. Recent Progress of Self-Powered Sensing Systems for Wearable Electronics.

    PubMed

    Lou, Zheng; Li, La; Wang, Lili; Shen, Guozhen

    2017-12-01

    Wearable/flexible electronic sensing systems are considered one of the key technologies for the next generation of smart personal electronics. To realize portable personal devices for mobile electronics applications, wearable electronic sensors that can work sustainably and continuously without an external power supply are highly desired. The recent progress and advantages of wearable self-powered electronic sensing systems for mobile or personally attachable health monitoring applications are presented. An overview is provided of various types of wearable electronic sensors, including flexible tactile sensors, wearable image sensor arrays, biological and chemical sensors, temperature sensors, and multifunctional integrated sensing systems. Self-powered sensing systems with integrated energy units are then discussed, divided into energy-harvesting self-powered sensing systems, energy-storage integrated sensing systems, and all-in-one integrated sensing systems. Finally, the future perspectives of self-powered sensing systems for wearable electronics are discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A Warning System for Rainfall-Induced Debris Flows: An Integrated Remote Sensing and Data Mining Approach

    NASA Astrophysics Data System (ADS)

    Elkadiri, R.; Sultan, M.; Nurmemet, I.; Al Harbi, H.; Youssef, A.; Elbayoumi, T.; Zabramwi, Y.; Alzahrani, S.; Bahamil, A.

    2014-12-01

    We developed methodologies that rely heavily on observations extracted from a wide range of remote sensing data sets (TRMM, Landsat ETM, ENVISAT, ERS, SPOT, Orbview, GeoEye) to build a warning system for rainfall-induced debris flows in the Jazan province in the Red Sea Hills. The warning system integrates static controlling factors and dynamic triggering factors: the algorithm couples a susceptibility map with a rainfall intensity-duration (I-D) curve, both developed using readily available remote sensing datasets. The static susceptibility map was constructed as follows: (1) an inventory was compiled for debris flows identified from high-spatial-resolution datasets and field verified; (2) 10 topographical and land cover predisposing factors (slope angle, slope aspect, normalized difference vegetation index, topographic position index, stream power index, flow accumulation, distance to drainage line, soil weathering index, elevation and topographic wetness index) were generated; (3) an artificial neural network (ANN) model was constructed, optimized and validated; (4) a debris-flow susceptibility map was generated using the ANN model and refined using differential backscatter coefficient radar images. The rainfall threshold curve was derived as follows: (1) a spatial database was generated to host temporally co-registered, radiometrically and atmospherically corrected Landsat images; (2) temporal change detection images were generated for pairs of successively acquired Landsat images, and criteria were established to identify the change related to debris flows; (3) the duration and intensity of the precipitation event that caused each of the identified debris flow events was assumed to be that of the most intense event within the investigated period; and (4) the I-D curve was extracted using the intensity and duration of precipitation for the inventoried events. 
Our findings include: (1) the spatial controlling factors with the highest predictive power of debris-flow locations are topographic position index, slope, NDVI and distance to drainage line; (2) the ANN model showed an excellent prediction performance (area under the receiver operating characteristic [ROC] curve: 0.961); and (3) the preliminary I-D curve is I = 39.797 × D^(−0.7355) (I: intensity; D: duration).
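
    With the fitted curve in hand, issuing a warning reduces to comparing an event's mean rainfall intensity with the threshold at its duration. A minimal sketch (mm/h and hours are assumed units; the abstract does not state them):

```python
def debris_flow_warning(intensity, duration):
    """Return True when a rainfall event plots on or above the reported
    preliminary I-D threshold I = 39.797 * D**-0.7355."""
    threshold = 39.797 * duration ** -0.7355
    return intensity >= threshold

short_burst = debris_flow_warning(40.0, 1.0)   # threshold at D=1 is 39.797
long_drizzle = debris_flow_warning(5.0, 10.0)  # threshold at D=10 is ~7.3
```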

  17. Towards an improved LAI collection protocol via simulated field-based PAR sensing

    DOE PAGES

    Yao, Wei; Van Leeuwen, Martin; Romanczyk, Paul; ...

    2016-07-14

    In support of NASA’s next-generation spectrometer—the Hyperspectral Infrared Imager (HyspIRI)—we are working towards assessing sub-pixel vegetation structure from imaging spectroscopy data. Of particular interest is leaf area index (LAI), which is an informative yet notoriously challenging parameter to measure efficiently in situ. While photosynthetically active radiation (PAR) sensors have been validated for measuring crop LAI, there is limited literature on the efficacy of PAR-based LAI measurement in the forest environment. This study (i) validates PAR-based LAI measurement in forest environments, and (ii) proposes a suitable collection protocol, which balances efficiency with measurement variation, e.g., due to sun flecks and various-sized canopy gaps. A synthetic PAR sensor model was developed in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and used to validate LAI measurement based on first principles and explicitly known leaf geometry. Simulated collection parameters were adjusted to empirically identify optimal collection protocols. These collection protocols were then validated in the field by correlating PAR-based LAI measurement with the normalized difference vegetation index (NDVI) extracted from the “classic” Airborne Visible Infrared Imaging Spectrometer (AVIRIS-C) data (R² = 0.61). The results indicate that the proposed collection protocol is suitable for measuring the LAI of sparse forest (LAI < 3–5 m²/m²).
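
    PAR-based LAI estimation commonly inverts the Beer-Lambert law for light extinction through the canopy; a minimal sketch of that inversion (the extinction coefficient k = 0.5 is a generic textbook default, not a value from this study):

```python
import math

def lai_from_par(par_below, par_above, k=0.5):
    """Invert Beer-Lambert extinction, PAR_below = PAR_above * exp(-k*LAI),
    giving LAI = -ln(PAR_below / PAR_above) / k."""
    tau = par_below / par_above      # canopy transmittance
    return -math.log(tau) / k

lai = lai_from_par(par_below=300.0, par_above=1500.0)  # tau = 0.2
```

    Averaging many below-canopy readings before the inversion is what a collection protocol has to balance, since individual readings are skewed by sun flecks and canopy gaps.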

  18. Evaluation of computational endomicroscopy architectures for minimally-invasive optical biopsy

    NASA Astrophysics Data System (ADS)

    Dumas, John P.; Lodhi, Muhammad A.; Bajwa, Waheed U.; Pierce, Mark C.

    2017-02-01

    We are investigating compressive sensing architectures for applications in endomicroscopy, where the narrow diameter probes required for tissue access can limit the achievable spatial resolution. We hypothesize that the compressive sensing framework can be used to overcome the fundamental pixel number limitation in fiber-bundle based endomicroscopy by reconstructing images with more resolvable points than fibers in the bundle. An experimental test platform was assembled to evaluate and compare two candidate architectures, based on introducing a coded amplitude mask at either a conjugate image or Fourier plane within the optical system. The benchtop platform consists of a common illumination and object path followed by separate imaging arms for each compressive architecture. The imaging arms contain a digital micromirror device (DMD) as a reprogrammable mask, with a CCD camera for image acquisition. One arm has the DMD positioned at a conjugate image plane ("IP arm"), while the other arm has the DMD positioned at a Fourier plane ("FP arm"). Lenses were selected and positioned within each arm to achieve an element-to-pixel ratio of 16 (230,400 mask elements mapped onto 14,400 camera pixels). We discuss our mathematical model for each system arm and outline the importance of accounting for system non-idealities. Reconstruction of a 1951 USAF resolution target using optimization-based compressive sensing algorithms produced images with higher spatial resolution than bicubic interpolation for both system arms when system non-idealities are included in the model. Furthermore, images generated with image plane coding appear to exhibit higher spatial resolution, but more noise, than images acquired through Fourier plane coding.
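
    The recovery principle the hypothesis rests on—reconstructing more resolvable points than measurements when the scene is sparse—can be sketched with a generic iterative shrinkage-thresholding (ISTA) solver; this illustrates compressive sensing recovery in general, not the specific optimization algorithms used in the paper:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=200):
    """ISTA for the LASSO problem min_x 0.5*||A@x - y||**2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / sigma_max(A)^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)              # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrink
    return x

# 4 measurements, 8 unknowns: identity columns beside a normalized 4x4
# Hadamard block give mutual coherence 1/2, so a 1-sparse signal is
# provably recoverable despite the underdetermined system.
H = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
A = np.hstack([np.eye(4), H / 2.0])
x_true = np.zeros(8)
x_true[2] = 1.0                    # a single "resolvable point"
y = A @ x_true                     # coded measurements
x_hat = ista(A, y)                 # support recovered at index 2
```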

  19. Integration of Remote Sensing and Geophysical Applications for Delineation of Geological Structures: Implication for Water Resources in Egypt

    NASA Astrophysics Data System (ADS)

    Mohamed, L.; Farag, A. Z. A.

    2017-12-01

    North African countries struggle with insufficient, polluted, oversubscribed, and increasingly expensive water. This natural water shortage, together with the lack of a comprehensive scheme for identifying new water resources, challenges the political settings in North Africa. Groundwater is one of the main water resources, and its occurrence is controlled by structural elements that are still poorly understood. Integrating remote sensing images and geophysical tools enables us to delineate surface and subsurface structures (i.e., faults, joints and shear zones), identify the role of these structures in groundwater flow, and then define proper locations for groundwater wells. This approach was applied to three different areas in Egypt (southern Sinai, northeastern Sinai and the Eastern Desert) using remote sensing, geophysical and hydrogeological datasets as follows: (1) identification of the spatial and temporal rainfall events using meteorological station data and Tropical Rainfall Measuring Mission data; (2) delineation of major faults and shear zones using ALOS PALSAR, Landsat 8 and ASTER images, geological maps and field investigation; (3) generation of a normalized difference ratio image using Envisat radar images before and after the rain events to identify preferential water-channeling discontinuities in the crystalline terrain; (4) analysis of well data and derivation of hydrological parameters; (5) validation of the water-channeling discontinuities using Very Low Frequency methods, and testing of the structural elements (pre-delineated by remote sensing data) and their depth using gravity, magnetic and Vertical Electrical Sounding methods; (6) generation of regional groundwater flow and isotopic (18O and 2H) distribution maps for the sedimentary aquifer and an approximate flow map for the crystalline aquifer. 
The outputs include: (1) a conceptual/physical model for the groundwater flow in fractured crystalline and sedimentary aquifers; (2) locations of suggested new wells in light of the findings.

  20. Investigating the relationship between tree heights derived from SIBBORK forest model and remote sensing measurements

    NASA Astrophysics Data System (ADS)

    Osmanoglu, B.; Feliciano, E. A.; Armstrong, A. H.; Sun, G.; Montesano, P.; Ranson, K.

    2017-12-01

    Tree heights are among the most commonly used remote sensing parameters for measuring the biomass of a forest. In this project, we investigate the relationship between remotely sensed tree heights (e.g., G-LiHT lidar and commercially available high resolution satellite imagery, HRSI) and SIBBORK-modeled tree heights. G-LiHT is a portable, airborne imaging system that simultaneously maps the composition, structure, and function of terrestrial ecosystems using lidar, imaging spectroscopy and thermal mapping. Ground elevation and canopy height models were generated using the lidar data acquired in 2012. A digital surface model was also generated using the HRSI technique from commercially available WorldView data acquired in 2016. The HRSI-derived height and biomass products are available at the plot (10 × 10 m) level. For this study, we parameterized the SIBBORK individual-based gap model for Howland Forest, Maine. The parameterization was calibrated using field data for the study site, and results show that the simulated forest reproduces the structural complexity of the Howland old-growth forest, based on comparisons of key variables including aboveground biomass, forest height and basal area. Furthermore, carbon cycle and ecosystem observational capabilities will be enhanced over the next six years via the launch of two lidar systems (NASA's GEDI and ICESat-2) and two SAR systems (the NASA-ISRO NISAR and ESA's Biomass). Our aim is to present the comparison of canopy height models obtained with the SIBBORK forest model and remote sensing techniques, highlighting the synergy between individual-based forest modeling and high-resolution remote sensing.

  1. FFT-enhanced IHS transform method for fusing high-resolution satellite images

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2007-01-01

    Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
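
    The core frequency split—low spatial frequencies from the multispectral intensity, high frequencies from the panchromatic band—can be sketched for a single intensity image as follows (the ideal radial cutoff is an illustrative assumption; the paper's filter design may differ):

```python
import numpy as np

def fft_enhanced_intensity(intensity, pan, cutoff=0.15):
    """Fuse in the frequency domain: keep |f| <= cutoff from the
    multispectral intensity and |f| > cutoff from the panchromatic band,
    then invert to get the new intensity for the reverse IHS transform."""
    fy = np.fft.fftfreq(intensity.shape[0])[:, None]
    fx = np.fft.fftfreq(intensity.shape[1])[None, :]
    low = np.sqrt(fx ** 2 + fy ** 2) <= cutoff       # radial low-pass mask
    fused = np.where(low, np.fft.fft2(intensity), np.fft.fft2(pan))
    return np.fft.ifft2(fused).real

img = np.arange(256, dtype=float).reshape(16, 16) / 256.0
same = fft_enhanced_intensity(img, img)            # pan identical: no change
shifted = fft_enhanced_intensity(img, img + 5.0)   # pan offset lives at DC only
```

    Because the DC term comes from the intensity image, a uniform radiometric offset in the panchromatic band does not leak into the fused result, which is one mechanism for reducing color distortion.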

  2. High-resolution DEM generation from multiple remote sensing data sources for improved volcanic hazard assessment - a case study from Nevado del Ruiz, Colombia

    NASA Astrophysics Data System (ADS)

    Deng, Fanghui; Dixon, Timothy H.; Rodgers, Mel; Charbonnier, Sylvain J.; Gallant, Elisabeth A.; Voss, Nicholas; Xie, Surui; Malservisi, Rocco; Ordoñez, Milton; López, Cristian M.

    2017-04-01

    Eruptions of active volcanoes in the presence of snow and ice can cause dangerous floods, avalanches and lahars, threatening millions of people living close to such volcanoes. Colombia's deadliest volcanic hazard in recorded history was caused by Nevado del Ruiz Volcano. On November 13, 1985, a relatively small eruption triggered enormous lahars, killing over 23,000 people in the city of Armero and 2,000 people in the town of Chinchina. Meltwater from a glacier capping the summit of the volcano was the main contributor to the lahars. From 2010 to the present, increased seismicity, surface deformation, ash plumes and gas emissions have been observed at Nevado del Ruiz. A digital elevation model (DEM) is a key input for accurate prediction of the pathways of lava flows, pyroclastic flows, and lahars. While satellite coverage has greatly improved the quality of DEMs around the world, volcanoes remain a challenging target because of extremely rugged terrain with steep slopes and deeply cut valleys. In this study, three types of remote sensing data sources with different spatial scales (satellite radar interferometry, terrestrial radar interferometry (TRI), and structure from motion (SfM)) were combined to generate a high resolution (10 m) DEM of Nevado del Ruiz. 1) Synthetic aperture radar (SAR) images acquired by the TSX/TDX satellites were used to generate a DEM covering most of the study area. To reduce the effect of geometric distortion inherited from SAR images, TSX/TDX DEMs from ascending and descending orbits were merged to generate a 10×10 m DEM. 2) TRI is a technique that uses a scanning radar to measure the amplitude and phase of a backscattered microwave signal. It provides a more flexible and reliable way to generate DEMs in steep-slope terrain compared with the TSX/TDX satellites. The TRI was mounted at four different locations to image the upper slopes of the volcano. A DEM with 5×5 m resolution was generated from the TRI data. 
3) SfM provides an alternative for shadow zones in both TSX/TDX and TRI images. It is a low-cost and effective method to generate high-quality DEMs in relatively small spatial scales. More than 2000 photos were combined to create a DEM of the deep valley in the shadow zones. DEMs from the above three remote sensing data sources were merged into a final DEM with 10×10 m resolution. The effect of this improved DEM on hazard assessment can be evaluated using numerical flow models.
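
    Combining the three sources can be pictured as a priority merge in which each cell takes the first available value and later sources fill voids and shadow zones; a toy sketch (None marks a void cell; the authors' actual merging and resampling procedure is more involved):

```python
def merge_dems(*dems):
    """Priority-merge co-registered DEM grids: earlier sources win,
    later sources fill cells the earlier ones could not observe."""
    rows, cols = len(dems[0]), len(dems[0][0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dem in dems:
                if dem[r][c] is not None:
                    out[r][c] = dem[r][c]
                    break
    return out

sar = [[100, None], [None, 120]]   # satellite InSAR, gaps from distortion
tri = [[None, 110], [None, None]]  # terrestrial radar on the upper slopes
sfm = [[105, 115], [108, 125]]     # SfM filling the remaining shadows
dem = merge_dems(sar, tri, sfm)
```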

  3. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. The participant who possesses only the first compressed signal can pass the low-level authentication: applying Orthogonal Matching Pursuit CS reconstruction, the inverse Arnold transform, the inverse DWT, two-step phase-shifting wavefront reconstruction, and the inverse Fresnel transform yields a remarkable peak at the central location of the nonlinear correlation coefficient distribution between the recovered image and the standard certification image. The other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication: both compressed signals are collected to reconstruct the original, meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  4. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    PubMed Central

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585

  5. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    PubMed

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.

  6. Efficient generation of image chips for training deep learning algorithms

    NASA Astrophysics Data System (ADS)

    Han, Sanghui; Fafard, Alex; Kerekes, John; Gartley, Michael; Ientilucci, Emmett; Savakis, Andreas; Law, Charles; Parhan, Jason; Turek, Matt; Fieldhouse, Keith; Rovito, Todd

    2017-05-01

    Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, training data need to have variations not only in the background and target, but also radiometric variations in the image such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometry. Data augmentation is a commonly used approach to generating additional training data. However, this approach is often insufficient in accounting for real-world changes in lighting, location or viewpoint outside of the collection geometry. Alternatively, image simulation can be an efficient way to augment training data that incorporates all these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation and angle, atmospheric models, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles that move through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles. The same approach was used with helicopters as targets, with slight modifications. Using the combination of DIRSIG and SUMO, we quickly generated many small images with the target at the center and different backgrounds. The simulations generated images with vehicles and helicopters as targets, and corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. 
Some preliminary results show an improvement in the deep learning algorithm when real image training data are augmented with the simulated images, especially when obtaining sufficient real data was particularly challenging.

  7. [Birds' sense of direction].

    PubMed

    Hohtola, Esa

    2016-01-01

    Birds utilize several distinct sensory systems in a flexible manner in their navigation. When navigating with the help of landmarks, the location of the sun and stars, or the polarization image of the dome of the sky, they resort to vision. The significance of olfaction in long-range navigation has been under debate, even though its significance in local orientation is well documented. Hearing in birds extends into the infrasound region; it has been assumed that they are able to hear the infrasounds generated in the mountains and at the seaside and navigate by using them. Of the senses of birds, the most exotic is the ability to sense the Earth's magnetic field.

  8. Status of VESAS: a fully-electronic microwave imaging radiometer system

    NASA Astrophysics Data System (ADS)

    Schreiber, Eric; Peichl, Markus; Suess, Helmut

    2010-04-01

    Present applications of microwave remote sensing systems cover a large variety. One utilisation of the frequency range from 1–300 GHz is the domain of security and reconnaissance. Examples are the observation of critical infrastructures or the performance of security checks on people in order to detect concealed weapons or explosives, both being frequent threats in our world of growing international terrorism. The imaging capability for concealed objects is one of the main advantages of microwave remote sensing, because of the penetration of electromagnetic waves through dielectric materials in this frequency domain. The main physical effects used in passive microwave sensing rely on the naturally generated thermal radiation and the physical properties of matter, the latter being surface characteristics, chemical and physical composition, and the temperature of the material. As a consequence, it is possible to discriminate objects having different material characteristics, like ceramic weapons or plastic explosives, with respect to the human body. Considering the use of microwave imaging for people-scanning systems in airports, railway stations, or stadiums, it is advantageous that passively operating devices generate no exposure on the scanned objects, unlike actively operating devices. For frequently used security gateways it is additionally important to have a high throughput rate in order to minimize the queue time. Consequently, fast imaging systems are necessary. In this regard, the conceptual idea of a fully-electronic microwave imaging radiometer system is introduced. The two-dimensional scanning mechanism is divided into a frequency scan in one direction and the method of aperture synthesis in the other. The overall goal here is to design a low-cost, fully-electronic imaging system with a frame rate of around one second at Ka band. 
This frequency domain around a center frequency of 37 GHz offers a well-balanced compromise between the achievable spatial resolution for a given size, and the penetration depth of the electromagnetic wave, which are conflictive requirements.

  9. Potential for detection of explosive and biological hazards with electronic terahertz systems.

    PubMed

    Choi, Min Ki; Bettermann, Alan; van der Weide, D W

    2004-02-15

    The terahertz (THz) regime (0.1–10 THz) is rich with emerging possibilities in sensing, imaging and communications, with unique applications to screening for weapons, explosives and biohazards, and imaging of concealed objects, water content and skin. Here we present initial surveys to evaluate the possibility of sensing plastic explosives and bacterial spores using field-deployable electronic THz techniques based on short-pulse generation and coherent detection using nonlinear transmission lines and diode sampling bridges. We also review the barriers and approaches to achieving greater sensing-at-a-distance (stand-off) capabilities for THz sensing systems. We have made several reflection measurements of metallic and non-metallic targets in our laboratory, and have observed high contrast relative to reflection from skin. In particular, we have taken small quantities of energetic materials such as plastic explosives and a variety of Bacillus spores, and measured them in transmission and in reflection using a broadband pulsed electronic THz reflectometer. The pattern of reflection versus frequency gives rise to signatures that are remarkably specific to the composition of the target, even though the target's morphology and position are varied. Although more work needs to be done to reduce the effects of standing waves through time-gating or attenuators, the possibility of mapping out this contrast for imaging and detection is very attractive.

  10. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    NASA Astrophysics Data System (ADS)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination, the so-called interior computed tomography (ICT), which may reduce the dose to the patient outside the target region-of-interest (ROI) in dental x-ray imaging. Here, the x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, leading to imaging benefits such as reduced scatter and system cost as well as reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions of two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes, and four projection numbers of 360, 180, 90, and 45, were tested. We successfully reconstructed ICT images of substantially high image quality using the CS framework even with few-view projection data, while still preserving sharp edges in the images.

  11. Holographic enhanced remote sensing system

    NASA Technical Reports Server (NTRS)

    Iavecchia, Helene P.; Gaynor, Edwin S.; Huff, Lloyd; Rhodes, William T.; Rothenheber, Edward H.

    1990-01-01

    The Holographic Enhanced Remote Sensing System (HERSS) consists of three primary subsystems: (1) an Image Acquisition System (IAS); (2) a Digital Image Processing System (DIPS); and (3) a Holographic Generation System (HGS) which multiply exposes a thermoplastic recording medium with sequential 2-D depth slices that are displayed on a Spatial Light Modulator (SLM). Full-parallax holograms were successfully generated by superimposing SLM images onto the thermoplastic and photopolymer. An improved HGS configuration utilizes the phase conjugate recording configuration, the 3-SLM-stacking technique, and the photopolymer. The holographic volume size is currently limited to the physical size of the SLM. A larger-format SLM is necessary to meet the desired 6 inch holographic volume. A photopolymer with an increased photospeed is required to ultimately meet a display update rate of less than 30 seconds. It is projected that the latter two technology developments will occur in the near future. While the IAS and DIPS subsystems were unable to meet NASA goals, an alternative technology is now available to perform the IAS/DIPS functions. Specifically, a laser range scanner can be utilized to build the HGS numerical database of the objects at the remote work site.

  12. Method of acquiring an image from an optical structure having pixels with dedicated readout circuits

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2006-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  13. Research on assessment and improvement method of remote sensing image reconstruction

    NASA Astrophysics Data System (ADS)

    Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping

    2018-01-01

    Remote sensing image quality assessment and improvement is an important part of image processing. Generally, using compressive sampling theory in a remote sensing imaging system allows images to be compressed while being sampled, which improves efficiency. In this paper, a two-dimensional principal component analysis (2DPCA) method is proposed to reconstruct the remote sensing image and improve the quality of the compressed image; it retains the useful information of the image while suppressing noise. Then, the factors influencing remote sensing image quality are analyzed, and evaluation parameters for quantitative evaluation are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analyzed, providing meaningful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation results fit human visual perception, and the proposed method has good application value in the field of remote sensing image processing.
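
    A minimal sketch of 2DPCA reconstruction (the standard row-based formulation with an n × n image covariance matrix; the paper's exact variant and evaluation pipeline may differ):

```python
import numpy as np

def tdpca_reconstruct(images, n_components):
    """2DPCA: build the n x n image covariance from mean-centered rows,
    keep the leading eigenvectors, and project/back-project each image,
    retaining the dominant structure while suppressing weaker components."""
    mean = images.mean(axis=0)
    n = images.shape[2]
    G = np.zeros((n, n))
    for X in images:
        D = X - mean
        G += D.T @ D
    G /= len(images)
    _, vecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    W = vecs[:, -n_components:]          # top principal directions
    return np.array([mean + (X - mean) @ W @ W.T for X in images])

# A stack of images varying along a single spatial pattern is reproduced
# exactly by one component.
u, v = np.arange(4.0), np.array([1.0, 2.0, 3.0, 0.0, 1.0])
imgs = np.stack([c * np.outer(u, v) for c in (2.0, -1.0, 0.5)])
rec = tdpca_reconstruct(imgs, 1)
```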

  14. Novel ray tracing method for stray light suppression from ocean remote sensing measurements.

    PubMed

    Oh, Eunsong; Hong, Jinsuk; Kim, Sug-Whan; Park, Young-Je; Cho, Seong-Ick

    2016-05-16

    We developed a new integrated ray tracing (IRT) technique to analyze the stray light effect in remotely sensed images. Images acquired with the Geostationary Ocean Color Imager show a radiance level discrepancy at the slot boundary, which is suspected to be a stray light effect. To determine its cause, we developed and adjusted a novel in-orbit stray light analysis method, which consists of three simulated phases (source, target, and instrument). Each phase simulation was performed in a way that used ray information generated from the Sun and reaching the instrument detector plane efficiently. This simulation scheme enabled the construction of the real environment from the remote sensing data, with a focus on realistic phenomena. In the results, even in a cloud-free environment, a background stray light pattern was identified at the bottom of each slot. Variations in the stray light effect and its pattern according to bright target movement were simulated, with a maximum stray light ratio of 8.5841% in band 2 images. To verify the proposed method and simulation results, we compared the results with the real acquired remotely sensed image. In addition, after correcting for abnormal phenomena in specific cases, we confirmed that the stray light ratio decreased from 2.38% to 1.02% in a band 6 case, and from 1.09% to 0.35% in a band 8 case. IRT-based stray light analysis enabled clear determination of the stray light path and candidates in in-orbit circumstances, and the correction process aided recovery of the radiometric discrepancy.

  15. Synchronous atmospheric radiation correction of GF-2 satellite multispectral image

    NASA Astrophysics Data System (ADS)

    Bian, Fuqiang; Fan, Dongdong; Zhang, Yan; Wang, Dandan

    2018-02-01

    GF-2 remote sensing products have been widely used in many fields for their high-quality information, which provides technical support for macroeconomic decisions. Atmospheric correction is a necessary part of data preprocessing for quantitative high-resolution remote sensing; it eliminates the signal interference along the radiation path caused by atmospheric scattering and absorption, and reduces apparent reflectance to the real reflectance of the surface targets. Addressing the problem that current research lacks atmospheric data synchronized and spatially matched with the surface observation image, this research uses MODIS Level 1B synchronous data to simulate the synchronized atmospheric condition and implements the aerosol retrieval and atmospheric correction process in software, generating a lookup table for the remote sensing image based on the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer model to correct the atmospheric effect in multispectral images from the GF-2 satellite PMS-1 payload. Based on the correction results, this paper analyzes the pixel histograms of the reflectance in the four spectral bands of PMS-1 and evaluates the correction results for each band. A comparison experiment on the same GF-2 image was then conducted using the QUAC method. For the different targets, the average NDVI was computed from the two correction results and compared, and the influence of using synchronous atmospheric data was assessed. The study shows that the synchronous atmospheric parameters significantly improve the quantitative application of GF-2 remote sensing data.
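    The 6S code reports per-band correction coefficients (commonly written xa, xb, xc) that map at-sensor radiance to surface reflectance. A minimal sketch of applying such coefficients, with purely hypothetical values, could look like:

```python
import numpy as np

def surface_reflectance(radiance, xa, xb, xc):
    """6S-style correction: y = xa*L - xb, then rho = y / (1 + xc*y)."""
    y = xa * np.asarray(radiance, dtype=float) - xb
    return y / (1.0 + xc * y)

# Hypothetical coefficients for one band; real values come from a 6S run
# matched to the scene geometry and the synchronous atmospheric parameters.
rho = surface_reflectance(100.0, xa=0.003, xb=0.1, xc=0.2)
```

    A per-scene lookup table, as described in the abstract, would store one (xa, xb, xc) triple per band and viewing condition.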

  16. A case study of comparing radiometrically calibrated reflectance of an image mosaic from unmanned aerial system with that of a single image from manned aircraft over a same area

    NASA Astrophysics Data System (ADS)

    Shi, Yeyin; Thomasson, J. Alex; Yang, Chenghai; Cope, Dale; Sima, Chao

    2017-05-01

    Though they share many commonalities, one of the major differences between conventional high-altitude airborne remote sensing and low-altitude unmanned aerial system (UAS) based remote sensing is that the latter has a much smaller ground footprint for each image shot. To cover the same area on the ground, the low-altitude UAS-based platform must take many highly overlapped images and produce a mosaic, instead of the one or a few image shots needed by the high-altitude aerial platform. Such a UAS flight usually takes 10 to 30 minutes or longer to complete; the change in environmental lighting during this time span cannot be ignored, especially when spectral variations across different parts of a field are of interest. In this case study, we compared the visible reflectance of two aerial images, one generated by mosaicking UAS images and the other a single image taken from a manned aircraft, over the same agricultural field to quantitatively evaluate the spectral variations caused by the different data acquisition strategies. Specifically, we (1) developed customized ground calibration points (GCPs) and an associated radiometric calibration method for UAS data processing based on the camera's sensitivity characteristics; and (2) developed a basic method for comparing the radiometrically calibrated data from the two aerial platforms based on regions of interest. We see this study as the starting point of a series of studies to understand environmental influences on UAS data and to investigate solutions that minimize such influences and ensure data quality.

  17. Optical image cryptosystem using chaotic phase-amplitude masks encoding and least-data-driven decryption by compressive sensing

    NASA Astrophysics Data System (ADS)

    Lang, Jun; Zhang, Jing

    2015-03-01

    In our proposed optical image cryptosystem, two pairs of phase-amplitude masks are generated from the chaotic web map for image encryption in the 4f double random phase-amplitude encoding (DRPAE) system. Instead of transmitting the real keys and the enormous mask codes, only a few observed measurements intermittently chosen from the masks are delivered. Based on the compressive sensing paradigm, we suitably refine the series expansions of the web map equations to better reconstruct the underlying system. The parameters of the chaotic equations can be successfully calculated from the observed measurements and then used to regenerate the correct random phase-amplitude masks for decrypting the encoded information. Numerical simulations have been performed to verify the proposed optical image cryptosystem. This cryptosystem can provide a new key management and distribution method. It has the advantages of a very small volume of transmitted key codes and improved security of information transmission, since the real keys are never sent.

  18. A novel technique to monitor thermal discharges using thermal infrared imaging.

    PubMed

    Muthulakshmi, A L; Natesan, Usha; Ferrer, Vincent A; Deepthi, K; Venugopalan, V P; Narasimhan, S V

    2013-09-01

    Coastal temperature is an important indicator of water quality, particularly in regions where delicate ecosystems sensitive to water temperature are present. Remote sensing methods are highly reliable for assessing thermal dispersion. The plume dispersion from the thermal outfall of the nuclear power plant at Kalpakkam, on the southeast coast of India, was investigated from March to December 2011 using thermal infrared images along with field measurements. The absolute temperature provided by the thermal infrared (TIR) images is used in the ArcGIS environment to generate a spatial pattern of the plume movement. The good correlation of the temperature measured by the TIR camera with the field data (r(2) = 0.89) makes it a reliable method for the thermal monitoring of power plant effluents. The study shows that the remote sensing technique provides an effective means of monitoring the thermal distribution pattern in coastal waters.

  19. Real-time needle guidance with photoacoustic and laser-generated ultrasound probes

    NASA Astrophysics Data System (ADS)

    Colchester, Richard J.; Mosse, Charles A.; Nikitichev, Daniil I.; Zhang, Edward Z.; West, Simeon; Beard, Paul C.; Papakonstantinou, Ioannis; Desjardins, Adrien E.

    2015-03-01

    Detection of tissue structures such as nerves and blood vessels is of critical importance during many needle-based minimally invasive procedures. For instance, unintentional injections into arteries can lead to strokes or cardiotoxicity during interventional pain management procedures that involve injections in the vicinity of nerves. Reliable detection with current external imaging systems remains elusive. Optical generation and reception of ultrasound allow for depth-resolved sensing and they can be performed with optical fibers that are positioned within needles used in clinical practice. The needle probe developed in this study comprised separate optical fibers for generating and receiving ultrasound. Photoacoustic generation of ultrasound was performed on the distal end face of an optical fiber by coating it with an optically absorbing material. Ultrasound reception was performed using a high-finesse Fabry-Pérot cavity. The sensor data was displayed as an M-mode image with a real-time interface. Imaging was performed on a biological tissue phantom.

  20. Evaluation of Ice sheet evolution and coastline changes from 1960s in Amery Ice Shelf using multi-source remote sensing images

    NASA Astrophysics Data System (ADS)

    Qiao, G.; Ye, W.; Scaioni, M.; Liu, S.; Feng, T.; Liu, Y.; Tong, X.; Li, R.

    2013-12-01

    Global change is one of the major challenges that all nations face in common, and Antarctic ice sheet changes have played a critical role in global change research in recent years. Long time series of ice sheet observations in Antarctica would contribute to the quantitative evaluation and precise prediction of the ice sheet's effects on global change, to which remote sensing technology can make critical contributions. As the biggest ice shelf and one of the dominant drainage systems in East Antarctica, the Amery Ice Shelf makes a significant contribution to the mass balance of the Antarctic. Studying Amery Ice Shelf changes would advance the understanding of Antarctic ice shelf evolution as well as the overall mass balance. At the same time, coastlines, as important indicators of Antarctic ice sheet characteristics, can be detected from remote sensing imagery and help reveal the nature of changes in ice sheet evolution. Most scientific research on Antarctica with satellite remote sensing dates from the 1970s, after the LANDSAT satellite entered operation. It was the declassification of the Cold War satellite reconnaissance photographs in 1995, known as Declassified Intelligence Satellite Photographs (DISP), that provided a direct overall view of the Antarctic ice sheet's configuration in the 1960s, greatly extending the time span of Antarctic surface observations. This paper presents the evaluation of ice-sheet evolution and coastline changes in the Amery Ice Shelf since the 1960s, using multi-source remote sensing images including the DISP images and modern optical satellite images. The DISP images scanned from negatives were first interior-oriented with the associated parameters, and then bundle block adjustment was employed, based on tie points and control points, to derive a mosaic image of the research region. Experimental results of coastlines generated from DISP images and those from ASTER images were analyzed, and the changes and evolution of the Amery Ice Shelf were then evaluated, followed by a discussion of the possible drivers.

  1. Atmospheric Correction of High-Spatial-Resolution Commercial Satellite Imagery Products Using MODIS Atmospheric Products

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Holekamp, Kara; Ryan, Robert E.; Vaughan, Ronand; Russell, Jeff; Prados, Don; Stanley, Thomas

    2005-01-01

    Remotely sensed ground reflectance is the foundation of any interoperability or change detection technique. Satellite intercomparisons and accurate vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), require the generation of accurate reflectance maps (NDVI is used to describe or infer a wide variety of biophysical parameters and is defined in terms of near-infrared (NIR) and red band reflectances). Accurate reflectance-map generation from satellite imagery relies on the removal of solar and satellite geometry effects and of atmospheric effects, and is generally referred to as atmospheric correction. Atmospheric correction of remotely sensed imagery to ground reflectance has so far been applied to only a few systems. The ability to obtain atmospherically corrected imagery and products from various satellites is essential to enable wide-scale use of remotely sensed, multitemporal imagery for a variety of applications. An atmospheric correction approach derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that can be applied to high-spatial-resolution satellite imagery under many conditions was evaluated to demonstrate a reliable, effective reflectance-map generation method. Additional information is included in the original extended abstract.
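    For reference, NDVI itself is a one-line computation once the NIR and red reflectance bands are available; a minimal sketch with illustrative values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Normalized Difference Vegetation Index from reflectance bands;
    eps guards against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

veg = ndvi(0.5, 0.1)    # healthy vegetation: strong NIR, low red
soil = ndvi(0.25, 0.2)  # bare soil: weakly positive
```

    Because NDVI is a ratio of reflectances, errors in the atmospheric correction of either band propagate directly into the index, which is why accurate reflectance maps matter here.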

  2. Atmospheric Correction of Satellite Imagery Using Modtran 3.5 Code

    NASA Technical Reports Server (NTRS)

    Gonzales, Fabian O.; Velez-Reyes, Miguel

    1997-01-01

    When performing satellite remote sensing of the Earth in the solar spectrum, atmospheric scattering and absorption corrupt the information the sensors receive about the target's radiance characteristics. We are faced with the problem of reconstructing the signal reflected from the target from the data sensed by the remote sensing instrument. This article presents a method for simulating radiance characteristic curves of satellite images using the MODTRAN 3.5 band model (BM) code to solve the radiative transfer equation (RTE), and proposes a method for implementing an adaptive system for automated atmospheric correction. The simulation procedure is carried out as follows: (1) for each satellite digital image, a radiance characteristic curve is obtained by performing a digital number (DN) to radiance conversion; (2) using MODTRAN 3.5, a simulation of the image's characteristic curves is generated; (3) the output of the code is processed to generate radiance characteristic curves for the simulated cases. The simulation algorithm was used to simulate Landsat Thematic Mapper (TM) images for two types of locations: the ocean surface and a forest surface. The simulation procedure was validated by computing the error between the empirical and simulated radiance curves. While results in the visible region of the spectrum were not very accurate, those for the infrared region were encouraging. This information can be used for correction of the atmospheric effects. For the simulation over ocean, the lowest error produced in this region was of the order of 10^-5, up to 14 times smaller than errors in the visible region. For the forest case in the same spectral region, the lowest error produced was of the order of 10^-4, up to 41 times smaller than errors in the visible region.
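    Step (1), the DN-to-radiance conversion, is typically a per-band linear calibration; a minimal sketch with hypothetical gain and offset values (real coefficients are published per sensor and band):

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Linear sensor calibration from digital numbers to spectral radiance
    (W m^-2 sr^-1 um^-1); gain and offset are band-specific constants."""
    return gain * np.asarray(dn, dtype=float) + offset

# Hypothetical band coefficients, for illustration only.
L = dn_to_radiance(np.array([0, 128, 255]), gain=0.75, offset=-1.5)
```

    The resulting radiance curve per band is what gets compared against the MODTRAN-simulated curves in steps (2) and (3).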

  3. NASA Fluid Lensing & MiDAR - Next-Generation Remote Sensing Technologies for Aquatic Remote Sensing

    NASA Technical Reports Server (NTRS)

    Chirayath, Ved

    2018-01-01

    Piti's Tepungan Bay and Tumon Bay, two of five marine preserves in Guam, have not been mapped to a level of detail sufficient to support proposed management strategies. This project addresses this gap by providing high-resolution maps to promote sustainable, responsible use of the area while protecting natural resources. Dr. Chirayath, a research scientist at NASA Ames Research Center, developed a theoretical model and algorithm called 'Fluid Lensing'. Fluid lensing removes optical distortions caused by moving water, improving the clarity of images taken of the corals below the surface. We will also be using MiDAR, a next-generation remote sensing instrument that provides real-time multispectral video using an array of LED emitters coupled with NASA's FluidCam Imaging System, which may assist Guam's coral reef response team in understanding the severity and magnitude of coral bleaching events. This project will produce a 3D orthorectified model of the shallow-water coral reef ecosystems in the Tumon Bay and Piti marine preserves. These 3D models may be printed, creating a tactile diorama and increasing understanding of coral reefs among various audiences, including key decision makers. More importantly, the final data products can enable accurate and quantitative health assessment capabilities for coral reef ecosystems.

  4. AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin

    2018-01-01

    In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.

  5. Method and apparatus for imaging a sample on a device

    DOEpatents

    Trulson, Mark; Stern, David; Fiekowsky, Peter; Rava, Richard; Walton, Ian; Fodor, Stephen P. A.

    2001-01-01

    A method and apparatus for imaging a sample are provided. An electromagnetic radiation source generates excitation radiation which is sized by excitation optics to a line. The line is directed at a sample resting on a support and excites a plurality of regions on the sample. Collection optics collect response radiation reflected from the sample and image the reflected radiation. A detector senses the reflected radiation and is positioned to permit discrimination between radiation reflected from a certain focal plane in the sample and certain other planes within the sample.

  6. Implementation of compressive sensing for preclinical cine-MRI

    NASA Astrophysics Data System (ADS)

    Tan, Elliot; Yang, Ming; Ma, Lixin; Zheng, Yahong Rosa

    2014-03-01

    This paper presents a practical implementation of Compressive Sensing (CS) for a preclinical MRI machine to acquire randomly undersampled k-space data in cardiac function imaging applications. First, random undersampling masks were generated based on Gaussian, Cauchy, wrapped Cauchy and von Mises probability distribution functions by the inverse transform method. The best masks for undersampling ratios of 0.3, 0.4 and 0.5 were chosen for animal experimentation, and were programmed into a Bruker Avance III BioSpec 7.0T MRI system through method programming in ParaVision. Three undersampled mouse heart datasets were obtained using a fast low angle shot (FLASH) sequence, along with a control undersampled phantom dataset. ECG and respiratory gating were used to obtain high-quality images. After CS reconstructions were applied to all acquired data, the resulting images were quantitatively analyzed using the performance metrics of reconstruction error and Structural Similarity Index (SSIM). The comparative analysis indicated that CS-reconstructed images from MRI machine undersampled data were indeed comparable to CS-reconstructed images from retrospectively undersampled data, and that CS techniques are practical in a preclinical setting. The implementation achieved 2 to 4 times acceleration of image acquisition and satisfactory image reconstruction quality.
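    The inverse transform method mentioned above has a closed form for the Cauchy distribution, which makes a compact illustration. The sketch below is our own simplification, not the authors' ParaVision code: it draws phase-encode line indices from a Cauchy density centred on the k-space centre until a target undersampling ratio is reached.

```python
import numpy as np

def cauchy_undersampling_mask(n_lines, ratio, gamma=0.1, seed=0):
    """Variable-density mask: sample line indices by the inverse transform
    method from a Cauchy PDF centred on the k-space centre (index n/2)."""
    rng = np.random.default_rng(seed)
    target = int(round(ratio * n_lines))
    chosen = set()
    while len(chosen) < target:
        u = rng.random()
        # Cauchy inverse CDF: x = x0 + gamma * tan(pi * (u - 0.5))
        x = 0.5 + gamma * np.tan(np.pi * (u - 0.5))
        idx = int(x * n_lines)
        if 0 <= idx < n_lines:          # reject draws outside k-space
            chosen.add(idx)
    mask = np.zeros(n_lines, dtype=bool)
    mask[list(chosen)] = True
    return mask

mask = cauchy_undersampling_mask(256, ratio=0.4)
```

    The heavy tails of the Cauchy density keep some high-frequency lines while concentrating samples near the densely informative centre of k-space.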

  7. Research on active imaging information transmission technology of satellite borne quantum remote sensing

    NASA Astrophysics Data System (ADS)

    Bi, Siwen; Zhen, Ming; Yang, Song; Lin, Xuling; Wu, Zhiqiang

    2017-08-01

    According to the development and application needs of remote sensing science and technology, Prof. Siwen Bi proposed quantum remote sensing. The paper first gives a brief introduction to the background of quantum remote sensing and to the research status at home and abroad on its theory, information mechanism, and imaging experiments, as well as the production of a principle prototype. It then emphasizes the quantization of the pure remote sensing radiation field and the state function and squeezing effect of the quantum remote sensing radiation field. It also describes the squeezing optical operator of the quantum light field in the active imaging information transmission and imaging experiments, which achieved 2-3 times higher resolution than coherent light detection imaging and completed the production of a quantum remote sensing imaging prototype. The application of quantum remote sensing technology can significantly improve both the signal-to-noise ratio of information transmission imaging and the spatial resolution of quantum remote sensing. On this basis, Prof. Bi proposed a technical solution for active imaging information transmission with satellite-borne quantum remote sensing, and launched research on its system composition and operating principle and on quantum noiseless amplifying devices, providing solutions and a technical basis for implementing active imaging information technology for satellite-borne quantum remote sensing.

  8. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

    Remote sensing images are widely used in AutoCAD, but AutoCAD lacks remote sensing image processing functions. In this paper, ObjectARX was used as the secondary development tool, combined with the Image Engine SDK, to realize remote sensing image pixel attribute data acquisition in AutoCAD, providing critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  9. Mid-infrared (MIR) photonics: MIR passive and active fiberoptics chemical and biomedical, sensing and imaging

    NASA Astrophysics Data System (ADS)

    Seddon, Angela B.

    2016-10-01

    The case for new, portable, real-time mid-infrared (MIR) molecular sensing and imaging is discussed. We set a record by demonstrating extremely broadband supercontinuum (SC) light generated over 1.4-13.3 μm in a specially engineered, step-index MIR optical fiber of high numerical aperture. This was the first experimental demonstration to truly reveal the potential of MIR fibers to emit across the MIR molecular "fingerprint spectral region" and a key first step towards bright, portable, broadband MIR sources for chemical and biomedical molecular sensing and imaging in real time. Potential applications are in the healthcare, security, energy, environmental monitoring, chemical-processing, manufacturing, and agriculture sectors. MIR narrow-line fiber lasers are now required to pump the fiber MIR-SC for a compact all-fiber solution. Rare-earth-ion (RE-) doped MIR fiber lasers have not yet been demonstrated at wavelengths >=4 μm. We have fabricated small-core RE-doped fiber with photoluminescence across 3.5-6 μm and long excited-state lifetimes. MIR RE-doped fiber lasers are also applicable as discrete MIR fiber sensors in their own right, for applications including ship-to-ship free-space communications, aircraft countermeasures, coherent MIR imaging, MIR optical coherence tomography, laser cutting/patterning of soft materials, and new wavelengths for fiber-laser medical surgery.

  10. Tactile surface classification for limbed robots using a pressure sensitive robot skin.

    PubMed

    Shill, Jacob J; Collins, Emmanuel G; Coyle, Eric; Clark, Jonathan

    2015-02-02

    This paper describes an approach to terrain identification based on pressure images generated through direct surface contact, using a robot skin constructed around a high-resolution pressure sensing array. Terrain signatures for classification are formulated from the magnitude frequency responses of the pressure images. The initial experimental results for statically obtained images show that the approach yields classification accuracies [Formula: see text]. The methodology is extended to accommodate the dynamic pressure images anticipated when a robot is walking or running. Experiments with a one-legged hopping robot yield similar identification accuracies [Formula: see text]. In addition, the accuracies are independent of changes in robot dynamics (i.e., when using different leg gaits). The paper further shows that the high-resolution capabilities of the sensor enable similarly textured surfaces to be distinguished. A correcting filter is developed to accommodate the failures or faults that inevitably occur within the sensing array with continued use. Experimental results show that using the correcting filter can extend the effective operational lifespan of a high-resolution sensing array by more than 6x in the presence of sensor damage. The results presented suggest this methodology can be extended to autonomous field robots, providing a robot with crucial information about the environment that can be used to aid stable and efficient mobility over rough and varying terrains.
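    A terrain signature built from the magnitude frequency response might be sketched as follows (our own minimal version, not the authors' feature pipeline: a radially binned 2-D FFT magnitude with the DC term removed, normalized to unit length):

```python
import numpy as np

def spectral_signature(pressure_img, n_bins=16):
    """Terrain signature: radially binned magnitude of the 2-D FFT of a
    pressure image, with the DC component (mean pressure) removed."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(pressure_img)))
    h, w = F.shape
    F[h // 2, w // 2] = 0.0                     # drop DC: ignore mean load
    yy, xx = np.indices(F.shape)
    r = np.hypot(yy - h / 2, xx - w / 2)        # radial frequency per pixel
    bins = np.linspace(0.0, r.max() + 1e-9, n_bins + 1)
    sig = np.array([F[(r >= lo) & (r < hi)].mean()
                    for lo, hi in zip(bins[:-1], bins[1:])])
    return sig / (np.linalg.norm(sig) + 1e-12)  # contact-force invariant

sig = spectral_signature(np.random.default_rng(0).random((32, 32)))
```

    Such fixed-length signatures can then feed any standard classifier; normalizing removes dependence on overall contact force.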

  11. Solid images generated from UAVs to analyze areas affected by rock falls

    NASA Astrophysics Data System (ADS)

    Giordan, Daniele; Manconi, Andrea; Allasia, Paolo; Baldo, Marco

    2015-04-01

    The study of areas affected by rock falls is usually based on the recognition of the principal joint families and the localization of potentially unstable sectors. This requires the acquisition of field data, although such areas are barely accessible and field inspections are often very dangerous. For this reason, remote sensing systems can be considered a suitable alternative. Recently, Unmanned Aerial Vehicles (UAVs) have been proposed as platforms to acquire the necessary information. Indeed, mini UAVs (in particular in the multi-rotor configuration) provide the versatility to acquire, from different points of view, a large number of high-resolution optical images, which can be used to generate high-resolution digital models of the study area. Considering the recent development of powerful, user-friendly software and algorithms to process images acquired from UAVs, there is now a need to establish robust methodologies and best-practice guidelines for the correct use of 3D models generated in the context of rock fall scenarios. In this work, we show how multi-rotor UAVs can be used to survey areas affected by rock falls during real emergency contexts. We present two examples of application located in northwestern Italy: the San Germano rock fall (Piemonte region) and the Moneglia rock fall (Liguria region). We acquired data from both terrestrial LiDAR and UAVs in order to compare digital elevation models generated with different remote sensing approaches. We evaluate the volume of the rock falls, identify the potentially unstable areas, and recognize the main joint families. The use of UAVs for this purpose is not yet widespread, but this approach may well be the best solution for the structural investigation of large rock walls. We propose a methodology that jointly considers the Structure from Motion (SfM) approach for the generation of 3D solid images and a geotechnical analysis for the identification of joint families and potential failure planes.
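    One of the quantities mentioned above, rock fall volume, can be estimated directly from two co-registered digital elevation models; a minimal sketch (the noise-floor value is an assumption standing in for the actual co-registration accuracy):

```python
import numpy as np

def rockfall_volume(dem_before, dem_after, cell_size, noise_floor=0.05):
    """Detached volume estimate: integrate negative elevation change
    (surface lowering) over the cell area, ignoring changes smaller than
    the co-registration noise floor (metres)."""
    dz = np.asarray(dem_after, float) - np.asarray(dem_before, float)
    loss = np.where(dz < -noise_floor, -dz, 0.0)   # keep only material loss
    return float(loss.sum() * cell_size ** 2)

before = np.zeros((20, 20))
after = before.copy()
after[5:7, 5:7] = -1.0          # a 2 x 2 cell detachment, 1 m deep
vol = rockfall_volume(before, after, cell_size=0.5)   # m^3
```

    The same differencing works whether the DEMs come from terrestrial LiDAR or from UAV/SfM, which is exactly the comparison the abstract describes.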

  12. Remote Sensing, Modeling, and In-Situ Measurements to Study the Spring and Summer Thermal Regime of the Kuparuk River, Northern Alaska

    NASA Astrophysics Data System (ADS)

    Floyd, A.; Liljedahl, A. K.; Gens, R.; Prakash, A.; Mann, D. H.

    2011-12-01

    A combined use of remote sensing techniques, modeling, and in-situ measurements is a pragmatic approach to studying arctic hydrology, given the vastness, complexity, and logistical challenges posed by most arctic watersheds. Remote sensing techniques provide tools to assess the geospatial variations that form the integrated response of a river system and therefore provide important details for studying climate change effects on the remote arctic environment. The proposed study tests the applicability of remote sensing and modeling techniques to map, monitor, and compare river temperatures and river break-up in the coastal and foothill sections of the Kuparuk River, an intensely studied watershed. We co-registered about one hundred synthetic aperture radar (SAR) images from the RADARSAT-1, ERS-1, and ERS-2 satellites, acquired during the months of May through July between 1999 and 2010. Co-registration involved a Fast Fourier Transform (FFT) match of the amplitude images. The offsets were then applied to the radiometrically corrected SAR images, converted to dB values, to generate an image stack. We applied a mask to extract pixels representing only the river, and used an adaptive threshold to delineate open water from frozen areas. The variation in river break-up can be bracketed by defining open vs. frozen river conditions. Summer river surface water temperatures will be simulated with the well-established HEC-RAS hydrologic software package and validated against field measurements. The three-pronged approach of combining remote sensing, modeling, and field measurements demonstrated in this study can be adapted to other watersheds across the Arctic.
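    The open-water delineation step can be illustrated with a simple adaptive threshold on the dB image; the sketch below is a simplification of whatever threshold the authors actually used, labelling pixels well below the scene statistics as open water (smooth water returns little backscatter to the radar):

```python
import numpy as np

def open_water_mask(db_img, k=1.0):
    """Adaptive threshold on a SAR backscatter image in dB: pixels more
    than k standard deviations below the scene mean are labelled water."""
    img = np.asarray(db_img, dtype=float)
    threshold = img.mean() - k * img.std()
    return img < threshold

scene = np.full((10, 10), -5.0)   # frozen/rough surface backscatter (dB)
scene[:, :3] = -20.0              # smooth open water: low radar return
water = open_water_mask(scene)
```

    Applied per date across the co-registered stack, the fraction of open-water pixels over time brackets the break-up window described in the abstract.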

  13. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
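    The selection strategy can be sketched with a toy detector: sweep the parameter, form estimated ground truth by pixel-wise majority vote over the detections (a simplification of the Yitzhaky-Peli estimate), and pick the parameter whose (FPR, TPR) point lies closest to the ideal ROC corner (0, 1). The gradient-threshold edge detector below is purely illustrative.

```python
import numpy as np

def detect_edges(img, thresh):
    """Toy feature detector: gradient magnitude above a threshold."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def select_threshold(img, candidates):
    """Pick the parameter whose ROC point is nearest the ideal corner (0, 1),
    scored against a majority-vote estimate of ground truth."""
    dets = [detect_edges(img, t) for t in candidates]
    est_gt = np.mean(dets, axis=0) >= 0.5          # pixel-wise majority vote
    best_t, best_d = None, np.inf
    for t, det in zip(candidates, dets):
        tp = np.sum(det & est_gt)
        fp = np.sum(det & ~est_gt)
        tpr = tp / max(est_gt.sum(), 1)
        fpr = fp / max((~est_gt).sum(), 1)
        dist = np.hypot(fpr, 1.0 - tpr)            # distance to (0, 1)
        if dist < best_d:
            best_t, best_d = t, dist
    return best_t

img = np.zeros((8, 8))
img[:, 4:] = 1.0                                   # vertical step edge
best = select_threshold(img, [0.1, 0.2, 5.0])
```

    In the paper's setting the same loop runs over combinations of feature-detector parameters rather than a single threshold, but the scoring logic is the same.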

  14. Terahertz imaging with compressive sensing

    NASA Astrophysics Data System (ADS)

    Chan, Wai Lam

    Most existing terahertz imaging systems are generally limited by slow image acquisition due to mechanical raster scanning. Other systems using focal plane detector arrays can acquire images in real time, but are either too costly or limited by low sensitivity in the terahertz frequency range. To design faster and more cost-effective terahertz imaging systems, the first part of this thesis proposes two new terahertz imaging schemes based on compressive sensing (CS). Both schemes can acquire amplitude and phase-contrast images efficiently with a single-pixel detector, thanks to the powerful CS algorithms which enable the reconstruction of N-by- N pixel images with much fewer than N2 measurements. The first CS Fourier imaging approach successfully reconstructs a 64x64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels which defines the image in the Fourier plane. Only about 12% of the pixels are required for reassembling the image of a selected object, equivalent to a 2/3 reduction in acquisition time. The second approach is single-pixel CS imaging, which uses a series of random masks for acquisition. Besides speeding up acquisition with a reduced number of measurements, the single-pixel system can further cut down acquisition time by electrical or optical spatial modulation of random patterns. In order to switch between random patterns at high speed in the single-pixel imaging system, the second part of this thesis implements a multi-pixel electrical spatial modulator for terahertz beams using active terahertz metamaterials. The first generation of this device consists of a 4x4 pixel array, where each pixel is an array of sub-wavelength-sized split-ring resonator elements fabricated on a semiconductor substrate, and is independently controlled by applying an external voltage. The spatial modulator has a uniform modulation depth of around 40 percent across all pixels, and negligible crosstalk, at the resonant frequency. 
The second-generation spatial terahertz modulator, also based on metamaterials but with a higher resolution (32x32), is under development. An FPGA-based circuit is designed to control the large number of modulator pixels. Once fully implemented, this second-generation device will enable fast terahertz imaging with both pulsed and continuous-wave terahertz sources.
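    The CS recovery step common to both schemes — solving for an image from far fewer random measurements than pixels — can be sketched generically. Below is a minimal Python illustration using orthogonal matching pursuit on a synthetic sparse signal; the algorithm, sizes, and seed are illustrative stand-ins, not the reconstruction method actually used in the thesis.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares refit on the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        residual = y - A @ x_hat
        if np.linalg.norm(residual) < 1e-10:
            break
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 4                       # 256 "pixels", 64 measurements, 4 nonzeros
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0 + rng.random(k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement patterns
y = A @ x                                  # single-pixel detector readings
x_rec = omp(A, y, k)
```

    With m well above the sparsity level, the greedy recovery is essentially exact — the same principle that lets the single-pixel systems above reconstruct N-by-N images from far fewer than N^2 mask measurements.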

  15. The Airborne Visible / Infrared Imaging Spectrometer AVIS: Design, Characterization and Calibration.

    PubMed

    Oppelt, Natascha; Mauser, Wolfram

    2007-09-14

    The Airborne Visible / Infrared Imaging Spectrometer AVIS is a hyperspectral imager designed for environmental monitoring purposes. The sensor, which was constructed entirely from commercially available components, has been successfully deployed during several experiments between 1999 and 2007. We describe the instrument design and present the results of laboratory characterization and calibration of the system's second generation, AVIS-2, which is currently being operated. The processing of the data is described and examples of remote sensing reflectance data are presented.

  16. Fast variogram analysis of remotely sensed images in HPC environment

    NASA Astrophysics Data System (ADS)

    Pesquer, Lluís; Cortés, Anna; Masó, Joan; Pons, Xavier

    2013-04-01

    Exploring and describing the spatial variation of images is one of the main applications of geostatistics to remote sensing. The variogram is a very suitable tool for this spatial pattern analysis. Variogram analysis consists of two steps: empirical variogram generation and fitting a variogram model. Empirical variogram generation is a very quick procedure for most analyses of irregularly distributed samples, but the time consumed increases quite significantly for remotely sensed images, because the number of samples (pixels) involved is usually huge (more than 30 million for a Landsat TM scene), depending basically on the extension and spatial resolution of the images. In several remote sensing applications this type of analysis is repeated for each image, sometimes for hundreds of scenes and sometimes for each radiometric band (a high number in the case of hyperspectral images), so there is a need for a fast implementation. In order to reduce this high execution time, we developed a parallel solution for the variogram analyses. The solution adopted is the master/worker programming paradigm, in which the master process distributes and coordinates the tasks executed by the worker processes. The code is written in ANSI C, using MPI (Message Passing Interface) as the message-passing library to communicate the master with the workers. This solution (ANSI C + MPI) guarantees portability between different computer platforms. The High Performance Computing (HPC) environment consists of 32 nodes, each with two Dual-Core Intel(R) Xeon(R) 3.0 GHz processors and 12 GB of RAM, connected by integrated dual gigabit Ethernet. This IBM cluster is located in the research laboratory of the Computer Architecture and Operating Systems Department of the Universitat Autònoma de Barcelona. The performance results for a 15 km x 15 km subscene of a 198-31 path-row Landsat TM image are shown in table 1.
    The proximity between the empirical speedup behaviour and the theoretical linear speedup confirms that a suitable parallel design and implementation were applied.

    Table 1.
    N workers   Time (s)   Speedup
    0 (serial)  2975.03    -
    2           2112.33    1.41
    4           1067.45    2.79
    8            534.18    5.57
    12           357.54    8.32
    16           269.00   11.06
    20           216.24   13.76
    24           186.31   15.97

    Furthermore, very similar performance results are obtained for CASI images (hyperspectral and of finer spatial resolution than Landsat), shown in table 2, demonstrating that the distributed load design is not specifically defined and optimized for a single type of image, but is a flexible design that maintains good balance and scalability over a range of image dimensions.

    Table 2.
    N workers   Time (s)   Speedup
    0 (serial)  5485.03    -
    2           3847.47    1.43
    4           1921.62    2.85
    8            965.55    5.68
    12           644.26    8.51
    16           483.40   11.35
    20           393.67   13.93
    24           347.15   15.80
    28           306.33   17.91
    32           304.39   18.02

    Finally, we conclude that this significant time reduction underlines the utility of distributed environments for processing large amounts of data such as remotely sensed images.
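    The implementation described is ANSI C + MPI; as a language-agnostic illustration, the empirical semivariogram computation that each worker performs can be sketched in Python. The lag set, function, and gradient test image below are illustrative, not the authors' code; the MPI master would distribute lags or row blocks among workers, which serially is just a loop.

```python
import numpy as np

def semivariance(img, dy, dx):
    """Empirical semivariance for one lag vector (dy, dx), dy, dx >= 0:
    gamma(h) = 0.5 * mean((z(s) - z(s + h))^2) over all valid pixel pairs."""
    h, w = img.shape
    a = img[:h - dy, :w - dx]
    b = img[dy:, dx:]
    return 0.5 * float(np.mean((a - b) ** 2))

# A tiny "image" with a linear trend in the row direction: z(i, j) = i.
img = np.arange(6, dtype=float).reshape(6, 1) * np.ones((6, 6))

# Serial stand-in for the master/worker distribution over the lag set.
lags = [(0, 1), (1, 0), (2, 0), (1, 1)]
gamma = {h: semivariance(img, *h) for h in lags}
```

    For the trend image, the semivariance grows with the row lag (0.5 at lag (1,0), 2.0 at lag (2,0)) and is zero along columns — exactly the spatial-pattern signal the variogram is meant to expose.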

  17. A micro-vibration generated method for testing the imaging quality on ground of space remote sensing

    NASA Astrophysics Data System (ADS)

    Gu, Yingying; Wang, Li; Wu, Qingwen

    2018-03-01

    In this paper, a novel method is proposed that can simulate satellite platform micro-vibration and test, on the ground, the impact of satellite micro-vibration on the imaging quality of a space optical remote sensor. The method can reproduce the micro-vibration of a satellite platform in orbit in terms of vibrational degrees of freedom, spectrum, magnitude, and coupling path. Experimental results show that the relative error of acceleration control is within 7% at frequencies from 7 Hz to 40 Hz. Utilizing this method, a system-level test of the micro-vibration impact on the imaging quality of a space optical remote sensor can be realized. This method will have important applications in testing the micro-vibration tolerance margin of an optical remote sensor, verifying its vibration isolation and suppression performance, and exploring the principles of micro-vibration impact on the imaging quality of an optical remote sensor.

  18. Assimilation of remote sensing observations into a sediment transport model of China's largest freshwater lake: spatial and temporal effects.

    PubMed

    Zhang, Peng; Chen, Xiaoling; Lu, Jianzhong; Zhang, Wei

    2015-12-01

    Numerical models are important tools that are used in studies of sediment dynamics in inland and coastal waters, and these models can now benefit from the use of integrated remote sensing observations. This study explores a scheme for assimilating remotely sensed suspended sediment (from charge-coupled device (CCD) images obtained from the Huanjing (HJ) satellite) into a two-dimensional sediment transport model of Poyang Lake, the largest freshwater lake in China. Optimal interpolation is used as the assimilation method, and model predictions are obtained by combining four remote sensing images. The parameters for optimal interpolation are determined through a series of assimilation experiments evaluating the sediment predictions against field measurements. The model with assimilation of remotely sensed sediment reduces the root-mean-square error of the predicted sediment concentrations by 39.4% relative to the model without assimilation, demonstrating the effectiveness of the assimilation scheme. The spatial effect of assimilation is explored by comparing model predictions with remotely sensed sediment, revealing that the model with assimilation generates reasonable spatial distribution patterns of suspended sediment. The temporal effect of assimilation on the model's predictive capabilities varies spatially, with an average temporal effect of approximately 10.8 days. The current velocities, which dominate the rate and direction of sediment transport, most likely account for the spatial differences in the temporal effect of assimilation on model predictions.
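    Optimal interpolation blends the model background with the observations through the standard analysis equation x_a = x_b + K(y - H x_b), with gain K = B H^T (H B H^T + R)^(-1). A minimal numerical sketch follows; the covariances, observation operator, and sediment values are invented for illustration, not the study's parameters.

```python
import numpy as np

# Background sediment field at 3 model points (mg/L, illustrative).
x_b = np.array([40.0, 55.0, 70.0])
B = np.array([[25.0, 10.0,  2.0],      # background error covariance
              [10.0, 25.0, 10.0],
              [ 2.0, 10.0, 25.0]])
H = np.array([[1.0, 0.0, 0.0],         # observation operator: the satellite
              [0.0, 0.0, 1.0]])        # sees points 0 and 2 directly
R = np.diag([9.0, 9.0])                # observation error covariance
y = np.array([50.0, 60.0])             # remotely sensed sediment

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # optimal gain
x_a = x_b + K @ (y - H @ x_b)                  # analysis state
```

    The analysis is pulled toward the observations at the observed points, while the background error covariance B spreads the correction to the unobserved middle point — the mechanism by which four CCD images can improve the whole sediment field.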

  19. Active pixel sensor array with multiresolution readout

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)

    1999-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.

  20. Monitoring Change in Temperate Coniferous Forest Ecosystems

    NASA Technical Reports Server (NTRS)

    Williams, Darrel (Technical Monitor); Woodcock, Curtis E.

    2004-01-01

    The primary goal of this research was to improve monitoring of temperate forest change using remote sensing. In this context, change includes both clearing of forest due to effects such as fire, logging, or land conversion and forest growth and succession. The Landsat 7 ETM+ proved an extremely valuable research tool in this domain. The Landsat 7 program has brought about a valuable transformation in the land remote sensing community by making high quality images available at relatively low cost. In addition, the tremendous improvements in the acquisition strategy greatly improved the overall availability of remote sensing images. I believe that from a historical perspective, the Landsat 7 mission will be considered extremely important, as the improved image availability will stimulate the use of multitemporal imagery at resolutions useful for local to regional mapping. Also, Landsat 7 has opened the way to global applications of remote sensing at spatial scales where important surface processes and change can be directly monitored. It has been a wonderful experience to have participated on the Landsat 7 Science Team. The research conducted under this project led to contributions in four general domains: I. Improved understanding of the information content of images as a function of spatial resolution; II. Monitoring Forest Change and Succession; III. Development and Integration of Advanced Analysis Methods; and IV. General support of the remote sensing of forests and environmental change. This report is organized according to these topics. This report does not attempt to provide the complete details of the research conducted with support from this grant. That level of detail is provided in the 16 peer reviewed journal articles, 7 book chapters and 5 conference proceedings papers published as part of this grant.
This report attempts to explain how the various publications fit together to improve our understanding of how forests are changing and how to monitor forest change with remote sensing. There were no new inventions that resulted from this grant.

  1. Remote-Sensing Time Series Analysis, a Vegetation Monitoring Tool

    NASA Technical Reports Server (NTRS)

    McKellip, Rodney; Prados, Donald; Ryan, Robert; Ross, Kenton; Spruce, Joseph; Gasser, Gerald; Greer, Randall

    2008-01-01

    The Time Series Product Tool (TSPT) is software, developed in MATLAB, which creates and displays high signal-to-noise Vegetation Index imagery and other higher-level products derived from remotely sensed data. This tool enables automated, rapid, large-scale regional surveillance of crops, forests, and other vegetation. TSPT temporally processes high-revisit-rate satellite imagery produced by the Moderate Resolution Imaging Spectroradiometer (MODIS) and by other remote-sensing systems. Although MODIS imagery is acquired daily, cloudiness and other sources of noise can greatly reduce the effective temporal resolution. To improve cloud statistics, the TSPT combines MODIS data from multiple satellites (Aqua and Terra). The TSPT produces MODIS products as single time-frame and multitemporal change images, as time-series plots at a selected location, or as temporally processed image videos. The TSPT uses MODIS metadata to remove and/or correct bad and suspect data. Bad pixel removal, multiple satellite data fusion, and temporal processing techniques create high-quality plots and animated image video sequences that depict changes in vegetation greenness. This tool provides several temporal processing options not found in other comparable imaging software tools. Because the framework to generate and use other algorithms is established, small modifications to this tool will enable the use of a large range of remotely sensed data types. An effective remote-sensing crop monitoring system must be able to detect subtle changes in plant health in the earliest stages, before the effects of a disease outbreak or other adverse environmental conditions can become widespread and devastating.
The integration of the time series analysis tool with ground-based information, soil types, crop types, meteorological data, and crop growth models in a Geographic Information System, could provide the foundation for a large-area crop-surveillance system that could identify a variety of plant phenomena and improve monitoring capabilities.
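    The multi-satellite fusion idea above — combining Terra and Aqua acquisitions so clouds flagged by the metadata screen do not blank a pixel — can be illustrated with a generic maximum-value composite. The arrays and the specific compositing rule here are illustrative; TSPT offers several temporal processing options, of which this is only one simple example.

```python
import numpy as np

# Hypothetical NDVI stacks (2 dates each) from two satellites over a 2x2
# area; NaN marks cloudy/bad pixels removed via the metadata QA screen.
terra = np.array([[[0.60, np.nan], [0.20, 0.50]],
                  [[np.nan, 0.70], [0.25, np.nan]]])
aqua  = np.array([[[0.55, 0.65], [np.nan, 0.45]],
                  [[0.58, np.nan], [0.30, 0.40]]])

stack = np.concatenate([terra, aqua])    # fuse both satellites' acquisitions
composite = np.nanmax(stack, axis=0)     # per-pixel maximum-value composite
```

    Every output pixel gets a valid value as long as at least one acquisition from either satellite was clear, which is exactly why fusing the two platforms improves the effective temporal resolution.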

  2. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as the secret key and are shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector constitute the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using the GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. In the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
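    The computational ghost imaging step can be sketched generically: random patterns illuminate the scene, a single-pixel (bucket) detector records the total intensities, and correlating bucket values with patterns recovers the image, G(x, y) = <I S(x, y)> - <I><S(x, y)>. The toy scene and pattern count below are invented, and this shows plain correlation GI rather than the paper's phase-screen key or CS reconstruction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                    # 8x8 scene (stand-in for a QR code)
obj = np.zeros((n, n))
obj[2:6, 3:5] = 1.0                      # simple binary object

m = 4000                                 # number of random illumination patterns
patterns = rng.random((m, n, n))
bucket = (patterns * obj).sum(axis=(1, 2))   # bucket-detector readings

# Correlation (covariance) reconstruction of the ghost image.
G = (bucket[:, None, None] * patterns).mean(axis=0) \
    - bucket.mean() * patterns.mean(axis=0)
```

    The reconstruction G is a noisy but strongly correlated copy of the object; in the paper's scheme the bucket values are what travels from Alice to Bob, and only a holder of the pattern key can perform this correlation.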

  3. Digital image film generation: from the photoscientist's perspective

    USGS Publications Warehouse

    Boyd, John E.

    1982-01-01

    The technical sophistication of photoelectronic transducers, integrated circuits, and laser-beam film recorders has made digital imagery an alternative to traditional analog imagery for remote sensing. Because a digital image is stored in discrete digital values, image enhancement is possible before the data are converted to a photographic image. To create a special film-reproduction curve - which can simulate any desired gamma, relative film speed, and toe/shoulder response - the digital-to-analog transfer function of the film recorder is uniquely defined and implemented by a lookup table in the film recorder. Because the image data are acquired in spectral bands, false-color composites also can be given special characteristics by selecting a reproduction curve tailored for each band.
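    The lookup-table mechanism described above is simple to sketch: a reproduction curve is baked into an 8-bit LUT and applied per pixel before the data reach the film recorder. The power-law curve below is a hypothetical stand-in; a real recorder curve would also encode relative film speed and toe/shoulder response.

```python
import numpy as np

def film_lut(gamma, n=256):
    """Build an 8-bit lookup table implementing a simple power-law
    reproduction curve (illustrative digital-to-analog transfer function)."""
    x = np.arange(n) / (n - 1)
    return np.round(255 * x ** (1.0 / gamma)).astype(np.uint8)

lut = film_lut(gamma=2.2)
image = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = lut[image]          # LUT applied per pixel before film writing
```

    Because the mapping is just a table of 256 entries, any desired curve — and a different one per spectral band for false-color composites — can be swapped in without touching the image data.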

  4. Resolution enhancement of tri-stereo remote sensing images by super resolution methods

    NASA Astrophysics Data System (ADS)

    Tuna, Caglayan; Akoguz, Alper; Unal, Gozde; Sertel, Elif

    2016-10-01

    Super resolution (SR) refers to the generation of a High Resolution (HR) image from a decimated, blurred, low-resolution (LR) image set, which can be either a single frame or a multi-frame collection of several images acquired from slightly different views of the same observation area. In this study, we propose a novel application of tri-stereo Remote Sensing (RS) satellite images to the super resolution problem. Since the tri-stereo RS images of the same observation area are acquired from three different viewing angles along the flight path of the satellite, these RS images are well suited to an SR application. We first estimate the registration between the chosen reference LR image and the other LR images to calculate the sub-pixel shifts among the LR images. Then, the warping, blurring, and downsampling matrix operators are created as sparse matrices to avoid high memory and computational requirements, which would otherwise make the RS-SR solution impractical. Finally, the overall system matrix, which is constructed from the obtained operator matrices, is used to obtain the estimated HR image in one step in each iteration of the SR algorithm. Both the Laplacian and total variation regularizers are incorporated separately into our algorithm, and the results demonstrate an improved quantitative performance against the standard interpolation method as well as improved qualitative results according to expert evaluations.
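    The sparse-operator formulation can be sketched in one dimension: build decimation and warp operators as sparse matrices and iterate gradient descent on the stacked least-squares objective. This uses whole-pixel circular shifts and omits the blur operator and regularizer purely for illustration; the paper uses sub-pixel warps, a blur matrix, and Laplacian/TV regularization.

```python
import numpy as np
from scipy import sparse

n, factor = 16, 2                          # HR length 16, LR length 8
shifts = [0, 1, 2]                         # three "views" (cf. tri-stereo)
rng = np.random.default_rng(2)
x_true = rng.random(n)

rows = np.arange(n // factor)
D = sparse.csr_matrix((np.ones(n // factor), (rows, factor * rows)),
                      shape=(n // factor, n))          # decimation operator
def warp(s):                               # circular-shift warp as sparse matrix
    return sparse.identity(n, format="csr")[np.roll(np.arange(n), -s)]

systems = [D @ warp(s) for s in shifts]    # per-frame system matrices A_k
y = [A @ x_true for A in systems]          # simulated LR frames

x = np.zeros(n)                            # steepest descent on
for _ in range(500):                       # sum_k ||y_k - A_k x||^2
    grad = sum(A.T @ (A @ x - yk) for A, yk in zip(systems, y))
    x -= 0.1 * grad
```

    Because the three shifted LR frames jointly sample every HR position, the stacked system is full rank and the iteration converges to the true HR signal; the sparse operators keep memory proportional to the number of nonzeros rather than to the full matrix size.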

  5. A flexible spatiotemporal method for fusing satellite images with different resolutions

    Treesearch

    Xiaolin Zhu; Eileen H. Helmer; Feng Gao; Desheng Liu; Jin Chen; Michael A. Lefsky

    2016-01-01

    Studies of land surface dynamics in heterogeneous landscapes often require remote sensing datawith high acquisition frequency and high spatial resolution. However, no single sensor meets this requirement. This study presents a new spatiotemporal data fusion method, the Flexible Spatiotemporal DAta Fusion (FSDAF) method, to generate synthesized frequent high spatial...

  6. Improvements in Cloud Remote Sensing from Fusing VIIRS and CrIS data

    NASA Astrophysics Data System (ADS)

    Heidinger, A. K.; Walther, A.; Lindsey, D. T.; Li, Y.; NOH, Y. J.; Botambekov, D.; Miller, S. D.; Foster, M. J.

    2016-12-01

    In the fall of 2016, NOAA began the operational production of cloud products from the S-NPP Visible and Infrared Imaging Radiometer Suite (VIIRS) using the NOAA Enterprise Algorithms. VIIRS, while providing unprecedented spatial resolution and imaging clarity, does lack certain IR channels that are beneficial to cloud remote sensing. At the UW Space Science and Engineering Center (SSEC), tools were written to generate the missing IR channels from the Cross Track Infrared Sounder (CrIS) and to map them into the VIIRS swath. The NOAA Enterprise Algorithms are also implemented into the NESDIS CLAVR-x system. CLAVR-x has been modified to use the fused VIIRS and CrIS data. This presentation will highlight the benefits offered by the CrIS data to the NOAA Enterprise Algorithms. In addition, these benefits also have enabled the generation of 3D cloud retrievals to support the request from the National Weather Service (NWS) for a Cloud Cover Layers product. Lastly, the benefits of using VIIRS and CrIS for achieving consistency with GOES-R will also be demonstrated.

  7. Color difference threshold of chromostereopsis induced by flat display emission.

    PubMed

    Ozolinsh, Maris; Muizniece, Kristine

    2015-01-01

    The study of chromostereopsis has gained attention against the backdrop of the everyday use of computer displays. In this context, we analyze the illusory depth sense using planar color images presented on a computer screen. We determine psychometrically, using a constant stimuli paradigm, the color difference threshold required to induce an illusory sense of depth. Isoluminant stimuli, aligned along the blue-red line of the display's CIE xyY color space, are presented on the screen. Stereo disparity is generated by increasing the color difference between the central and surrounding areas of the stimuli, with both areas consisting of random dots on a black background. The observed change in the illusory depth sense, and thus in stereo disparity, is validated using the "center-of-gravity" model. The induced illusory depth effect undergoes color reversal upon varying the binocular lateral eye-pupil covering conditions (lateral or medial). Analysis of the retinal image point spread function for the display's red and blue pixel radiation validates both the change in chromostereoptic retinal disparity achieved by increasing the color difference and the chromostereopsis color reversal caused by varying the eye-pupil covering conditions.

  8. Utilizing Hierarchical Segmentation to Generate Water and Snow Masks to Facilitate Monitoring Change with Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lawrence, William T.; Plaza, Antonio J.

    2006-01-01

    The hierarchical segmentation (HSEG) algorithm is a hybrid of hierarchical step-wise optimization and constrained spectral clustering that produces a hierarchical set of image segmentations. This segmentation hierarchy organizes image data in a manner that makes the image's information content more accessible by enabling region-based analysis. This paper discusses data analysis with HSEG and describes several measures of region characteristics that may be useful for analyzing segmentation hierarchies in various applications. Segmentation hierarchy analysis for generating land/water and snow/ice masks from MODIS (Moderate Resolution Imaging Spectroradiometer) data was demonstrated and compared with the corresponding MODIS standard products. The masks based on HSEG segmentation hierarchies compare very favorably to the MODIS standard products. Further, the HSEG-based land/water mask was specifically tailored to the MODIS data, and the HSEG snow/ice mask did not require the setting of a critical threshold as required in the production of the corresponding MODIS standard product.
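    The hierarchical step-wise optimization idea — repeatedly merging the globally most similar pair of regions and recording each merge as one level of the hierarchy — can be sketched on a toy single-band input. The dissimilarity criterion and data below are simplified stand-ins; HSEG's distinguishing feature, merging spatially non-adjacent similar regions (its constrained spectral clustering component), is naturally covered here because no adjacency constraint is imposed.

```python
import numpy as np

# Toy single-band "image" flattened into initial one-pixel regions.
values = [10.0, 11.0, 50.0, 52.0, 90.0]
regions = {i: ([i], v) for i, v in enumerate(values)}  # id -> (pixels, mean)
hierarchy = []                       # merge order; coarsest level comes last

next_id = len(values)
while len(regions) > 1:
    # Step-wise optimization: merge the globally most similar pair
    # (criterion here: absolute difference of region means).
    a, b = min(((a, b) for a in regions for b in regions if a < b),
               key=lambda p: abs(regions[p[0]][1] - regions[p[1]][1]))
    pa, ma = regions.pop(a)
    pb, mb = regions.pop(b)
    pixels = pa + pb
    mean = (ma * len(pa) + mb * len(pb)) / len(pixels)
    regions[next_id] = (pixels, mean)
    hierarchy.append((a, b, next_id))
    next_id += 1
```

    Cutting the recorded merge list at any point yields one segmentation level, which is what lets an analyst pick the level (e.g. for a land/water or snow/ice mask) after the fact rather than committing to a threshold up front.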

  9. Low-cost digital image processing at the University of Oklahoma

    NASA Technical Reports Server (NTRS)

    Harrington, J. A., Jr.

    1981-01-01

    Computer-assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and is dependent upon initial preprocessing of a LANDSAT computer compatible tape using software developed for an IBM 370/158 computer. In-house generated preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and image analysis using either of the two approaches for low-cost LANDSAT data processing are described.

  10. Inkjet printing of conjugated polymer precursors on paper substrates for colorimetric sensing and flexible electrothermochromic display.

    PubMed

    Yoon, Bora; Ham, Dae-Young; Yarimaga, Oktay; An, Hyosung; Lee, Chan Woo; Kim, Jong-Man

    2011-12-08

    Inkjet-printable aqueous suspensions of conjugated polymer precursors are developed for fabrication of patterned color images on paper substrates. Printing of a diacetylene (DA)-surfactant composite ink on unmodified paper and photopaper, as well as on a banknote, enables generation of latent images that are transformed to blue-colored polydiacetylene (PDA) structures by UV irradiation. Both irreversible and reversible thermochromism with the PDA printed images are demonstrated and applied to flexible and disposable sensors and to displays. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  12. 3-D capacitance density imaging of fluidized bed

    DOEpatents

    Fasching, George E.

    1990-01-01

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

  13. Calibration and testing of a Raman hyperspectral imaging system to reveal powdered food adulteration

    PubMed Central

    Lohumi, Santosh; Lee, Hoonsoo; Kim, Moon S.; Qin, Jianwei; Kandpal, Lalit Mohan; Bae, Hyungjin; Rahman, Anisur

    2018-01-01

    The potential adulteration of foodstuffs has led to increasing concern regarding food safety and security, in particular for powdered food products, where cheap ground materials or hazardous chemicals can be added to increase the quantity of powder or to obtain a desired aesthetic quality. Due to the resulting potential health threat to consumers, the development of a fast, label-free, and non-invasive technique for the detection of adulteration over a wide range of food products is necessary. We therefore report the development of a rapid Raman hyperspectral imaging technique for the detection of food adulteration and for authenticity analysis. The Raman hyperspectral imaging system comprises a custom-designed laser illumination system, a sensing module, and a software interface. The laser illumination system generates a high-power 785 nm laser line, and the Gaussian-like intensity distribution of the laser beam is shaped by incorporating an engineered diffuser. The sensing module utilizes Rayleigh filters, an imaging spectrometer, and a detector to collect the Raman scattering signals along the laser line. Custom-built software acquires the Raman hyperspectral images and facilitates real-time visualization of Raman chemical images of scanned samples. The developed system was employed for the simultaneous detection of Sudan dye and Congo red dye adulteration in paprika powder, and of benzoyl peroxide and alloxan monohydrate adulteration in wheat flour, at six different concentrations (w/w) from 0.05 to 1%. The collected Raman imaging data of the adulterated samples were analyzed to visualize and detect the adulterant concentrations by generating a binary image for each individual adulterant material. The results obtained from the Raman chemical images of the adulterants showed a strong correlation (R > 0.98) between the added and the pixel-based calculated concentrations of the adulterant materials.
    The developed Raman imaging system can thus be considered a powerful analytical technique for the quality and authenticity analysis of food products. PMID:29708973
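    The final pixel-based quantification step can be illustrated generically: threshold a per-pixel adulterant score into a binary image and read the concentration off the fraction of flagged pixels. The score map and threshold below are invented for illustration; the paper derives its binary images from the full Raman spectra of each adulterant.

```python
import numpy as np

# Hypothetical per-pixel scores for one adulterant's Raman band (e.g. the
# intensity at a dye peak after baseline removal) over a 4x4 sample area.
score = np.array([[0.02, 0.85, 0.03, 0.01],
                  [0.04, 0.92, 0.88, 0.02],
                  [0.01, 0.03, 0.05, 0.79],
                  [0.02, 0.01, 0.04, 0.03]])

binary = score > 0.5                   # binary adulterant image
concentration = binary.mean() * 100    # pixel-based concentration, percent
```

    Here 4 of 16 pixels are flagged, giving a 25% pixel-based concentration; comparing such values against the known added concentrations is what yields the reported R > 0.98 correlation.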

  14. Calibration and testing of a Raman hyperspectral imaging system to reveal powdered food adulteration.

    PubMed

    Lohumi, Santosh; Lee, Hoonsoo; Kim, Moon S; Qin, Jianwei; Kandpal, Lalit Mohan; Bae, Hyungjin; Rahman, Anisur; Cho, Byoung-Kwan

    2018-01-01

    The potential adulteration of foodstuffs has led to increasing concern regarding food safety and security, in particular for powdered food products, where cheap ground materials or hazardous chemicals can be added to increase the quantity of powder or to obtain a desired aesthetic quality. Due to the resulting potential health threat to consumers, the development of a fast, label-free, and non-invasive technique for the detection of adulteration over a wide range of food products is necessary. We therefore report the development of a rapid Raman hyperspectral imaging technique for the detection of food adulteration and for authenticity analysis. The Raman hyperspectral imaging system comprises a custom-designed laser illumination system, a sensing module, and a software interface. The laser illumination system generates a high-power 785 nm laser line, and the Gaussian-like intensity distribution of the laser beam is shaped by incorporating an engineered diffuser. The sensing module utilizes Rayleigh filters, an imaging spectrometer, and a detector to collect the Raman scattering signals along the laser line. Custom-built software acquires the Raman hyperspectral images and facilitates real-time visualization of Raman chemical images of scanned samples. The developed system was employed for the simultaneous detection of Sudan dye and Congo red dye adulteration in paprika powder, and of benzoyl peroxide and alloxan monohydrate adulteration in wheat flour, at six different concentrations (w/w) from 0.05 to 1%. The collected Raman imaging data of the adulterated samples were analyzed to visualize and detect the adulterant concentrations by generating a binary image for each individual adulterant material. The results obtained from the Raman chemical images of the adulterants showed a strong correlation (R > 0.98) between the added and the pixel-based calculated concentrations of the adulterant materials.
    The developed Raman imaging system can thus be considered a powerful analytical technique for the quality and authenticity analysis of food products.

  15. Correction methods for underwater turbulence degraded imaging

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.; Hou, W.; Restaino, S. R.; Matt, S.; Gładysz, S.

    2014-10-01

    The use of remote sensing techniques such as adaptive optics and image-restoration post-processing to correct for aberrations in a wavefront of light propagating through a turbulent environment has become customary in many areas, including astronomy, medical imaging, and industrial applications. Underwater EO imaging work has mainly concentrated on overcoming scattering effects rather than dealing with underwater turbulence. However, the effects of turbulence have a crucial impact over long image-transmission ranges and, under extreme turbulence conditions, become important over path lengths of a few feet. Our group has developed a program that attempts to define under which circumstances the application of atmospheric remote sensing techniques could be envisioned. In our experiments we employ the NRL Rayleigh-Bénard convection tank for a simulated turbulence environment at Stennis Space Center, MS. A 5 m long water tank is equipped with heating and cooling plates that generate a well-measured thermal gradient, which in turn produces various degrees of turbulence. An image or laser beam spot can be propagated along the tank's length, where it is distorted by the induced turbulence. In this work we report on the experimental and theoretical findings of the ongoing program. The paper introduces the experimental setup, the techniques used, and the measurements made, and describes novel methods for post-processing and correction of images degraded by underwater turbulence.

  16. Building Development Monitoring in Multitemporal Remotely Sensed Image Pairs with Stochastic Birth-Death Dynamics.

    PubMed

    Benedek, C; Descombes, X; Zerubia, J

    2012-01-01

    In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To address the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computational complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.

  17. Cooperative studies between the United States of America and the People's Republic of China on applications of remote sensing to surveying and mapping

    USGS Publications Warehouse

    Lauer, Donald T.; Chu, Liangcai

    1992-01-01

    A Protocol established between the National Bureau of Surveying and Mapping, People's Republic of China (PRC) and the U.S. Geological Survey, United States of America (US), resulted in the exchange of scientific personnel, technical training, and exploration of the processing of remotely sensed data. These activities were directed toward the application of remotely sensed data to surveying and mapping. Data were processed and various products were generated for the Black Hills area in the US and the Ningxiang area of the PRC. The results of these investigations defined applicable processes in the creation of satellite image maps, land use maps, and the use of ancillary data for further map enhancements.

  18. Remote sensing helps to assess natural hazards and environmental changes in Asia-Pacific region

    NASA Astrophysics Data System (ADS)

    Thouret, Jean-Claude; Liew, Soo Chin; Gupta, Avijit

    2012-04-01

    Conference on Remote Sensing, Natural Hazards, and Environmental Change; Singapore, 28-29 July 2011 Natural hazards and anthropogenic environmental changes, both significant in the Asia-Pacific region, were the two themes of a conference organized by the National University of Singapore's Centre for Remote Imaging, Sensing and Processing (CRISP) and the Université Blaise Pascal's Laboratoire Magmas et Volcans. The application of satellite imagery at a wide range of resolutions, from 500 meters to 50 centimeters, was a unifying approach in many of the studies presented. The recent arrival of a new generation of satellites with extremely high resolution (50 centimeters) has improved scientists' ability to carry out detailed studies of natural hazards and environmental change.

  19. Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.

    PubMed

    Cowlagi, Raghvendra V; Tsiotras, Panagiotis

    2012-10-01

    We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.
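    The wavelet-based cell decomposition described above can be illustrated with one level of a 2D Haar transform: regions whose detail coefficients are near zero are safe to represent at a coarser resolution, which is how a multiresolution decomposition concentrates accuracy locally. This is a generic sketch on a synthetic grid, not the authors' planner; `haar2d` and the 4x4 example are illustrative:

    ```python
    import numpy as np

    def haar2d(img):
        """One level of the orthonormal 2D Haar wavelet transform."""
        # Low- and high-pass along rows
        a = (img[0::2] + img[1::2]) / np.sqrt(2)
        d = (img[0::2] - img[1::2]) / np.sqrt(2)
        # Then along columns, giving an approximation and 3 detail subbands
        ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
        lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
        hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
        hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
        return ll, lh, hl, hh

    # A flat (obstacle-free) region yields zero detail coefficients,
    # signalling that one coarse cell suffices there.
    grid = np.ones((4, 4))
    ll, lh, hl, hh = haar2d(grid)
    ```

    A planner would keep fine cells only where `lh`, `hl`, or `hh` exceed a threshold, refining the environment representation near the vehicle and coarsening it elsewhere.
    
    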

  20. Surface flow measurements from drones

    NASA Astrophysics Data System (ADS)

    Tauro, Flavia; Porfiri, Maurizio; Grimaldi, Salvatore

    2016-09-01

    Drones are transforming the way we sense and interact with the environment. However, despite their increased capabilities, the use of drones in the geophysical sciences usually focuses on image acquisition for generating high-resolution maps. Motivated by the increasing demand for innovative and high-performance geophysical observational methodologies, we posit the integration of drone technology and optical sensing toward a quantitative characterization of surface flow phenomena. We demonstrate that a recreational drone can be used to yield accurate surface flow maps of sub-meter water bodies. Specifically, the drone's vibrations do not hinder surface flow observations, and velocity measurements are in agreement with traditional techniques. This first instance of quantitative water flow sensing from a flying drone paves the way for novel observations of the environment.

  1. innoFSPEC: fiber optical spectroscopy and sensing

    NASA Astrophysics Data System (ADS)

    Roth, Martin M.; Löhmannsröben, Hans-Gerd; Kelz, Andreas; Kumke, Michael

    2008-07-01

    innoFSPEC Potsdam is presently being established as an interdisciplinary innovation center for fiber-optical spectroscopy and sensing, hosted by the Astrophysikalisches Institut Potsdam and the Physical Chemistry group of Potsdam University, Germany. The center focuses on fundamental research in the two fields of fiber-coupled multi-channel spectroscopy and optical fiber-based sensing. Thanks to its interdisciplinary approach, the complementary methodologies of astrophysics on the one hand and physical chemistry on the other are expected to spawn synergies that would not normally become available in more standard research programmes. innoFSPEC targets future innovations for next-generation astrophysical instrumentation, environmental analysis, manufacturing control and process monitoring, medical diagnostics, non-invasive imaging spectroscopy, biopsy, genomics/proteomics, high-throughput screening, and related applications.

  2. A generic FPGA-based detector readout and real-time image processing board

    NASA Astrophysics Data System (ADS)

    Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant

    2016-07-01

    For space-based astronomical observations, it is important to have a mechanism to capture the digital output from the standard detector for further on-board analysis and storage. We have developed a generic (application-wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time image processor system (on-chip) which can be used for various image processing tasks. The FPGA board is applied as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and a star sensor (StarSense), instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in future balloon and possible space flights.

  3. Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lawrence, William T.

    2005-01-01

    NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.

  4. Multi-focus image fusion and robust encryption algorithm based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong

    2017-06-01

    Multi-focus image fusion schemes have been studied in recent years. However, little work has been done in multi-focus image transmission security. This paper proposes a scheme that can reduce data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition can generate complete scene images and optimize the perception of the human eye. The fused images are sparsely represented with DCT and sampled with structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. Then the obtained measurements are further encrypted to resist noise and crop attack through combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
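    The measurement-plus-permutation pipeline in the abstract can be sketched in a few lines. This is a simplified stand-in for the paper's scheme: a dense Gaussian matrix replaces the structurally random matrix (SRM), a single permutation stands in for the full permutation-diffusion cipher, and `dct_matrix`, `n`, and `m` are illustrative choices, not values from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dct_matrix(n):
        # Orthonormal DCT-II matrix (rows are the basis vectors)
        k = np.arange(n)[:, None]
        i = np.arange(n)[None, :]
        C = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
        C[0] /= np.sqrt(2)
        return C * np.sqrt(2.0 / n)

    n, m = 64, 32                  # signal length, number of measurements
    D = dct_matrix(n)              # sparsifying transform
    s = np.zeros(n)
    s[[3, 17, 40]] = [1.0, -0.5, 0.8]   # sparse DCT coefficients
    x = D.T @ s                    # fused-image column, sparse under the DCT

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
    y = Phi @ x                    # compressed measurements (m < n)

    perm = rng.permutation(m)      # toy permutation stage of the encryption
    cipher = y[perm]
    ```

    Sampling halves the data volume here (m = n/2) while the random matrix and permutation together act as the initial encryption; the receiver with the keys inverts the permutation and runs a sparse-recovery solver.
    
    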

  5. Image intensification; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989

    NASA Astrophysics Data System (ADS)

    Csorba, Illes P.

    Various papers on image intensification are presented. Individual topics discussed include: status of high-speed optical detector technologies, super second generation image intensifier, gated image intensifiers and applications, resistive-anode position-sensing photomultiplier tube operational modeling, undersea imaging and target detection with gated image intensifier tubes, image intensifier modules for use with commercially available solid state cameras, specifying the components of an intensified solid state television camera, superconducting IR focal plane arrays, one-inch TV camera tube with very high resolution capacity, CCD-Digicon detector system performance parameters, high-resolution X-ray imaging device, high-output technology microchannel plate, preconditioning of microchannel plate stacks, recent advances in small-pore microchannel plate technology, performance of long-life curved channel microchannel plates, low-noise microchannel plates, development of a quartz envelope heater.

  6. Evaluation of Landscape Structure Using AVIRIS Quicklooks and Ancillary Data

    NASA Technical Reports Server (NTRS)

    Sanderson, Eric W.; Ustin, Susan L.

    1998-01-01

    Currently the best tool for examining landscape structure is remote sensing, because remotely sensed data provide complete and repeatable coverage over landscapes in many climatic regimes. Many sensors, with a variety of spatial scales and temporal repeat cycles, are available. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has imaged over 4000 scenes from over 100 different sites throughout North America. For each of these scenes, one-band "quicklook" images have been produced for review by AVIRIS investigators. These quicklooks are free, publicly available over the Internet, and provide the most complete set of landscape structure data yet produced. This paper describes the methodologies used to evaluate the landscape structure of quicklooks and generate corresponding datasets for climate, topography and land use. A brief discussion of preliminary results is included at the end. Since quicklooks correspond exactly to their parent AVIRIS scenes, the methods used to derive climate, topography and land use data should be applicable to any AVIRIS analysis.

  7. High-density Schottky barrier IRCCD sensors for remote sensing applications

    NASA Astrophysics Data System (ADS)

    Elabd, H.; Tower, J. R.; McCarthy, B. M.

    1983-01-01

    It is pointed out that the ambitious goals envisaged for the next generation of space-borne sensors challenge the state-of-the-art in solid-state imaging technology. Studies are being conducted with the aim to provide focal plane array technology suitable for use in future Multispectral Linear Array (MLA) earth resource instruments. An important new technology for IR-image sensors involves the use of monolithic Schottky barrier infrared charge-coupled device arrays. This technology is suitable for earth sensing applications in which moderate quantum efficiency and intermediate operating temperatures are required. This IR sensor can be fabricated by using standard integrated circuit (IC) processing techniques, and it is possible to employ commercial IC grade silicon. For this reason, it is feasible to construct Schottky barrier area and line arrays with large numbers of elements and high-density designs. A Pd2Si Schottky barrier sensor for multispectral imaging in the 1 to 3.5 micron band is under development.

  8. Accelerated Fast Spin-Echo Magnetic Resonance Imaging of the Heart Using a Self-Calibrated Split-Echo Approach

    PubMed Central

    Klix, Sabrina; Hezel, Fabian; Fuchs, Katharina; Ruff, Jan; Dieringer, Matthias A.; Niendorf, Thoralf

    2014-01-01

    Purpose Design, validation and application of an accelerated fast spin-echo (FSE) variant that uses a split-echo approach for self-calibrated parallel imaging. Methods For self-calibrated, split-echo FSE (SCSE-FSE), extra displacement gradients were incorporated into FSE to decompose odd and even echo groups which were independently phase encoded to derive coil sensitivity maps, and to generate undersampled data (reduction factor up to R = 3). Reference and undersampled data were acquired simultaneously. SENSE reconstruction was employed. Results The feasibility of SCSE-FSE was demonstrated in phantom studies. Point spread function performance of SCSE-FSE was found to be competitive with traditional FSE variants. The immunity of SCSE-FSE for motion induced mis-registration between reference and undersampled data was shown using a dynamic left ventricular model and cardiac imaging. The applicability of black blood prepared SCSE-FSE for cardiac imaging was demonstrated in healthy volunteers including accelerated multi-slice per breath-hold imaging and accelerated high spatial resolution imaging. Conclusion SCSE-FSE obviates the need of external reference scans for SENSE reconstructed parallel imaging with FSE. SCSE-FSE reduces the risk for mis-registration between reference scans and accelerated acquisitions. SCSE-FSE is feasible for imaging of the heart and of large cardiac vessels but also meets the needs of brain, abdominal and liver imaging. PMID:24728341

  9. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
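    A minimal example of the sparsity-exploiting reconstruction this article surveys is iterative shrinkage-thresholding (ISTA), which solves the l1-regularized least-squares problem on undersampled data. The problem sizes, regularization weight, and synthetic Gaussian system below are illustrative, not taken from any of the CT or MRI systems discussed:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 50, 100                                  # 2x undersampling
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # measurement model
    x_true = np.zeros(n)
    x_true[[5, 42, 77]] = [2.0, -1.5, 1.0]          # sparse "image"
    y = A @ x_true                                  # undersampled data

    def ista(A, y, lam=0.05, steps=1000):
        # Gradient step on ||Ax - y||^2 followed by soft thresholding
        t = 1.0 / np.linalg.norm(A, 2) ** 2         # step size <= 1/L
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            z = x - t * A.T @ (A @ x - y)
            x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
        return x

    x_hat = ista(A, y)
    ```

    Despite having only half as many measurements as unknowns, the sparsity prior lets ISTA recover the support of the signal, which is the core promise of compressive sensing.
    
    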

  10. Imaging with terahertz radiation

    NASA Astrophysics Data System (ADS)

    Chan, Wai Lam; Deibel, Jason; Mittleman, Daniel M.

    2007-08-01

    Within the last several years, the field of terahertz science and technology has changed dramatically. Many new advances in the technology for generation, manipulation, and detection of terahertz radiation have revolutionized the field. Much of this interest has been inspired by the promise of valuable new applications for terahertz imaging and sensing. Among a long list of proposed uses, one finds compelling needs such as security screening and quality control, as well as whimsical notions such as counting the almonds in a bar of chocolate. This list has grown in parallel with the development of new technologies and new paradigms for imaging and sensing. Many of these proposed applications exploit the unique capabilities of terahertz radiation to penetrate common packaging materials and provide spectroscopic information about the materials within. Several of the techniques used for terahertz imaging have been borrowed from other, more well established fields such as x-ray computed tomography and synthetic aperture radar. Others have been developed exclusively for the terahertz field, and have no analogies in other portions of the spectrum. This review provides a comprehensive description of the various techniques which have been employed for terahertz image formation, as well as discussing numerous examples which illustrate the many exciting potential uses for these emerging technologies.

  11. The Use of False Color Landsat Imagery with a Fifth Grade Class.

    ERIC Educational Resources Information Center

    Harnapp, Vern R.

    Fifth grade students can become familiar with images of earth generated by space sensor Landsat satellites which sense nearly all surfaces of the earth once every 18 days. Two false color composites in which different colors represent various geographic formations were obtained for the northern Ohio region where the students live. The class had no…

  12. "A" Is for Aerial Maps and Art

    ERIC Educational Resources Information Center

    Todd, Reese H.; Delahunty, Tina

    2007-01-01

    The technology of satellite imagery and remote sensing adds a new dimension to teaching and learning about maps with elementary school children. Just a click of the mouse brings into view some images of the world that could only be imagined a generation ago. Close-up aerial pictures of the school and neighborhood quickly catch the interest of…

  13. Computational Ghost Imaging for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.

    2012-01-01

    This work relates to the generic problem of remote active imaging; that is, a source illuminates a target of interest and a receiver collects the scattered light off the target to obtain an image. Conventional imaging systems consist of an imaging lens and a high-resolution detector array [e.g., a CCD (charge coupled device) array] to register the image. However, conventional imaging systems for remote sensing require high-quality optics and need to support large detector arrays and associated electronics. This results in suboptimal size, weight, and power consumption. Computational ghost imaging (CGI) is a computational alternative to this traditional imaging concept that has a very simple receiver structure. In CGI, the transmitter illuminates the target with a modulated light source. A single-pixel (bucket) detector collects the scattered light. Then, via computation (i.e., postprocessing), the receiver can reconstruct the image using the knowledge of the modulation that was projected onto the target by the transmitter. This way, one can construct a very simple receiver that, in principle, requires no lens to image a target. Ghost imaging is a transverse imaging modality that has been receiving much attention owing to a rich interconnection of novel physical characteristics and novel signal processing algorithms suitable for active computational imaging. The original ghost imaging experiments consisted of two correlated optical beams traversing distinct paths and impinging on two spatially-separated photodetectors: one beam interacts with the target and then illuminates a single-pixel (bucket) detector that provides no spatial resolution, whereas the other beam traverses an independent path and impinges on a high-resolution camera without any interaction with the target.
The term ghost imaging was coined soon after the initial experiments were reported, to emphasize the fact that by cross-correlating two photocurrents, one generates an image of the target. In CGI, the measurement obtained from the reference arm (with the high-resolution detector) is replaced by a computational derivation of the measurement-plane intensity profile of the reference-arm beam. The algorithms applied to computational ghost imaging have diversified beyond simple correlation measurements, and now include modern reconstruction algorithms based on compressive sensing.
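    The cross-correlation reconstruction at the heart of computational ghost imaging can be sketched in a few lines. Everything here is synthetic, an 8x8 binary target and uniform random illumination patterns, and the plain correlation estimator shown is the simplest of the algorithms mentioned, not a compressive-sensing reconstruction:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    h = w = 8
    target = np.zeros((h, w))
    target[2:6, 3:5] = 1.0                      # simple reflective target

    K = 4000
    patterns = rng.random((K, h, w))            # known modulation patterns
    # Bucket detector: one number per pattern, no spatial resolution
    bucket = (patterns * target).sum(axis=(1, 2))

    # Cross-correlate bucket fluctuations with the known patterns
    est = np.tensordot(bucket - bucket.mean(), patterns, axes=1) / K
    ```

    Because the transmitter knows each projected pattern, no reference-arm camera is needed: the computed `patterns` array plays the role of the high-resolution detector in the original two-beam experiments.
    
    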

  14. Voxel-Based LIDAR Analysis and Applications

    NASA Astrophysics Data System (ADS)

    Hagstrom, Shea T.

    One of the greatest recent changes in the field of remote sensing is the addition of high-quality Light Detection and Ranging (LIDAR) instruments. In particular, the past few decades have been greatly beneficial to these systems because of increases in data collection speed and accuracy, as well as a reduction in the costs of components. These improvements allow modern airborne instruments to resolve sub-meter details, making them ideal for a wide variety of applications. Because LIDAR uses active illumination to capture 3D information, its output is fundamentally different from other modalities. Despite this difference, LIDAR datasets are often processed using methods appropriate for 2D images, which do not take advantage of LIDAR's primary virtue: 3-dimensional data. It is this problem we explore by using volumetric voxel modeling. Voxel-based analysis has been used in many applications, especially medical imaging, but rarely in traditional remote sensing. In part this is because the memory requirements are substantial when handling large areas, but with modern computing and storage this is no longer a significant impediment. Our reason for using voxels to model scenes from LIDAR data is that there are several advantages over standard triangle-based models, including better handling of overlapping surfaces and complex shapes. We show how incorporating system position information from early in the LIDAR point cloud generation process allows radiometrically-correct transmission and other novel voxel properties to be recovered. This voxelization technique is validated on simulated data using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) software, a first-principles based ray-tracer developed at the Rochester Institute of Technology. Voxel-based modeling of LIDAR can be useful on its own, but we believe its primary advantage is when applied to problems where simpler surface-based 3D models conflict with the requirement of realistic geometry.
To show the voxel model's advantage, we apply it to several outstanding problems in remote sensing: LIDAR quality metrics, line-of-sight mapping, and multi-model fusion. Each of these applications is derived, validated, and examined in detail, and our results compared with other state-of-the-art methods. In most cases the voxel-based methods demonstrate superior results and are able to derive information not available to existing methods. Realizing these improvements requires only a shift away from traditional 3D model generation, and our results give a small indicator of what is possible. Many examples of possible areas for future improvement and expansion of algorithms beyond the scope of our work are also noted.

  15. Passive millimeter-wave imaging for concealed article detection

    NASA Astrophysics Data System (ADS)

    Lovberg, John A.; Galliano, Joseph A., Jr.; Clark, Stuart E.

    1997-02-01

    Passive-millimeter-wave imaging (PMI) provides a powerful sensing tool for law enforcement, allowing an unobtrusive means for detecting concealed weapons, explosives, or contraband on persons or in baggage. Natural thermal emissions at millimeter wavelengths from bodies, guns, explosives, and other articles pass easily through clothing or other concealment materials, where they can be detected and converted into conventional 2-dimensional images. A new implementation of PMI has demonstrated a large-area, near-real-time staring capability for personnel inspection at standoff ranges of greater than 10 meters. In this form, PMI does not require operator cuing based on subjective 'profiles' of suspicious appearance or behaviors, which may otherwise be construed as violations of civil rights. To the contrary, PMI detects and images heat generated by any object with no predisposition as to its nature or function (e.g. race or gender of humans). As a totally passive imaging tool, it generates no radio-frequency or other radiation which might raise public health concerns. Specifics of the new PMI architecture are presented along with a host of imaging data representing the current state-of-the-art.

  16. Digital micromirror device-based laser-illumination Fourier ptychographic microscopy

    PubMed Central

    Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Lee, Justin; Barbastathis, George; Dasari, Ramachandra R.; Yaqoob, Zahid; So, Peter T. C.

    2015-01-01

    We report a novel approach to Fourier ptychographic microscopy (FPM) by using a digital micromirror device (DMD) and a coherent laser source (532 nm) for generating spatially modulated sample illumination. Previously demonstrated FPM systems are all based on partially-coherent illumination, which offers limited throughput due to insufficient brightness. Our FPM employs a high power coherent laser source to enable shot-noise limited high-speed imaging. For the first time, a digital micromirror device (DMD), imaged onto the back focal plane of the illumination objective, is used to generate spatially modulated sample illumination field for ptychography. By coding the on/off states of the micromirrors, the illumination plane wave angle can be varied at speeds more than 4 kHz. A set of intensity images, resulting from different oblique illuminations, are used to numerically reconstruct one high-resolution image without obvious laser speckle. Experiments were conducted using a USAF resolution target and a fiber sample, demonstrating high-resolution imaging capability of our system. We envision that our approach, if combined with a coded-aperture compressive-sensing algorithm, will further improve the imaging speed in DMD-based FPM systems. PMID:26480361

  17. Digital micromirror device-based laser-illumination Fourier ptychographic microscopy.

    PubMed

    Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Lee, Justin; Barbastathis, George; Dasari, Ramachandra R; Yaqoob, Zahid; So, Peter T C

    2015-10-19

    We report a novel approach to Fourier ptychographic microscopy (FPM) by using a digital micromirror device (DMD) and a coherent laser source (532 nm) for generating spatially modulated sample illumination. Previously demonstrated FPM systems are all based on partially-coherent illumination, which offers limited throughput due to insufficient brightness. Our FPM employs a high power coherent laser source to enable shot-noise limited high-speed imaging. For the first time, a digital micromirror device (DMD), imaged onto the back focal plane of the illumination objective, is used to generate spatially modulated sample illumination field for ptychography. By coding the on/off states of the micromirrors, the illumination plane wave angle can be varied at speeds more than 4 kHz. A set of intensity images, resulting from different oblique illuminations, are used to numerically reconstruct one high-resolution image without obvious laser speckle. Experiments were conducted using a USAF resolution target and a fiber sample, demonstrating high-resolution imaging capability of our system. We envision that our approach, if combined with a coded-aperture compressive-sensing algorithm, will further improve the imaging speed in DMD-based FPM systems.

  18. Geometric error analysis for shuttle imaging spectrometer experiment

    NASA Technical Reports Server (NTRS)

    Wang, S. J.; Ih, C. H.

    1984-01-01

    The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  19. An Approach of Registration between Remote Sensing Image and Electronic Chart Based on Coastal Line

    NASA Astrophysics Data System (ADS)

    Li, Ying; Yu, Shuiming; Li, Chuanlong

    Remote sensing plays an important role in marine oil spill emergency response. In order to implement a timely and effective countermeasure, it is important to provide the exact position of oil spills. Therefore it is necessary to match the remote sensing image and the electronic chart properly. A discrepancy ordinarily exists between the oil spill image and the electronic chart, even after geometric correction is applied to the remote sensing image. It is difficult to find steady control points at sea for the exact rectification of a remote sensing image. An improved relaxation algorithm was developed for finding control points along the coastline, since oil spills generally occur near the coast. A conversion function is created by least squares, and the remote sensing image can be registered with the vector map based on this function. A SAR image was used as the remote sensing data and a shape-format map as the electronic chart data. The results show that this approach can guarantee the precision of the registration, which is essential for oil spill monitoring.
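    The registration step, fitting a conversion function to matched coastline control points by least squares, can be sketched with a six-parameter affine model. The point coordinates and transform below are synthetic, and the paper's relaxation algorithm for finding the matches is not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Matched control points along the coastline, in image coordinates
    src = rng.uniform(0, 100, size=(12, 2))
    A_true = np.array([[1.02, 0.05],
                       [-0.04, 0.98]])           # small rotation/scale
    t_true = np.array([5.0, -3.0])               # translation
    dst = src @ A_true.T + t_true                # matching chart coordinates

    # Least-squares fit of the 6-parameter affine conversion function
    X = np.hstack([src, np.ones((len(src), 1))])  # rows are [x, y, 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A_fit, t_fit = params[:2].T, params[2]
    ```

    With more point pairs than parameters, the fit averages out localization noise in the individual control points, which is why a dense set of coastline matches improves registration precision.
    
    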

  20. Comparison of the Spectral Properties of Pansharpened Images Generated from AVNIR-2 and Prism Onboard Alos

    NASA Astrophysics Data System (ADS)

    Matsuoka, M.

    2012-07-01

    A considerable number of methods for pansharpening remote-sensing images have been developed to generate higher spatial resolution multispectral images by the fusion of lower resolution multispectral images and higher resolution panchromatic images. Because pansharpening alters the spectral properties of multispectral images, method selection is one of the key factors influencing the accuracy of subsequent analyses such as land-cover classification or change detection. In this study, seven pixel-based pansharpening methods (additive wavelet intensity, additive wavelet principal component, generalized Laplacian pyramid with spectral distortion minimization, generalized intensity-hue-saturation (GIHS) transform, GIHS adaptive, Gram-Schmidt spectral sharpening, and block-based synthetic variable ratio) were compared using AVNIR-2 and PRISM onboard ALOS from the viewpoint of the preservation of spectral properties of AVNIR-2. A visual comparison was made between pansharpened images generated from spatially degraded AVNIR-2 and original images over urban, agricultural, and forest areas. The similarity of the images was evaluated in terms of the image contrast, the color distinction, and the brightness of the ground objects. In the quantitative assessment, three kinds of statistical indices, correlation coefficient, ERGAS, and Q index, were calculated by band and land-cover type. These scores were relatively superior in bands 2 and 3 compared with the other two bands, especially over urban and agricultural areas. Band 4 showed a strong dependency on the land-cover type. This was attributable to the differences in the observing spectral wavelengths of the sensors and local scene variances.
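    Two of the quantitative indices used in the comparison, ERGAS and the Q (universal image quality) index, are short enough to sketch. The global whole-image forms are shown here; the study's per-band, per-land-cover evaluation and the ALOS resolution ratio are only echoed through the `h_over_l` parameter, and the arrays are synthetic:

    ```python
    import numpy as np

    def ergas(ref, fused, h_over_l):
        # ERGAS: relative dimensionless global error in synthesis.
        # ref, fused have shape (bands, H, W); h_over_l is the ratio of
        # panchromatic to multispectral pixel sizes.
        rmse = np.sqrt(((ref - fused) ** 2).mean(axis=(1, 2)))
        mu = ref.mean(axis=(1, 2))
        return 100.0 * h_over_l * np.sqrt(((rmse / mu) ** 2).mean())

    def q_index(ref, fused):
        # Universal image quality index (global, single-band version):
        # combines correlation, luminance, and contrast distortion.
        x, y = ref.ravel(), fused.ravel()
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

    ref = np.arange(48.0).reshape(3, 4, 4) + 1.0   # toy 3-band reference
    degraded = ref + 1.0                           # brightness-shifted copy
    ```

    A perfect pansharpening result scores ERGAS = 0 and Q = 1; spectral distortion drives ERGAS up and Q down, which is how the paper ranks the seven methods band by band.
    
    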

  1. Remote Sensing Time Series Product Tool

    NASA Technical Reports Server (NTRS)

    Predos, Don; Ryan, Robert E.; Ross, Kenton W.

    2006-01-01

    The TSPT (Time Series Product Tool) software was custom-designed for NASA to rapidly create and display single-band and band-combination time series, such as NDVI (Normalized Difference Vegetation Index) images, for wide-area crop surveillance and for other time-critical applications. The TSPT, developed in MATLAB, allows users to create and display various MODIS (Moderate Resolution Imaging Spectroradiometer) or simulated VIIRS (Visible/Infrared Imager Radiometer Suite) products as single images, as time series plots at a selected location, or as temporally processed image videos. Manually creating these types of products is extremely labor intensive; the TSPT makes the process simple and efficient. MODIS is ideal for monitoring large crop areas because of its wide swath (2330 km), its relatively small ground sample distance (250 m), and its high revisit frequency (twice daily). Furthermore, because MODIS imagery is acquired daily, rapid changes in vegetative health can potentially be detected. The new TSPT technology provides users with the ability to temporally process high-revisit-rate satellite imagery, such as that acquired from MODIS and from its successor, the VIIRS. The TSPT features the important capability of fusing data from both MODIS instruments onboard the Terra and Aqua satellites, which drastically improves cloud statistics. With the TSPT, MODIS metadata is used to find and optionally remove bad and suspect data. Noise removal and temporal processing techniques allow users to create low-noise time series plots and image videos and to select settings and thresholds that tailor particular output products. The TSPT GUI (graphical user interface) provides an interactive environment for crafting what-if scenarios by enabling a user to repeat product generation using different settings and thresholds.
The TSPT Application Programming Interface provides more fine-tuned control of product generation, allowing experienced programmers to bypass the GUI and to create more user-specific output products, such as comparison time plots or images. This type of time series analysis tool for remotely sensed imagery could be the basis of a large-area vegetation surveillance system. The TSPT has been used to generate NDVI time series over growing seasons in California and Argentina and for hurricane events, such as Hurricane Katrina.
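The kind of temporal processing described can be illustrated generically: the sketch below computes NDVI and a per-pixel maximum-value composite that skips flagged data, a common noise-reduction step for NDVI series. This is a generic NumPy sketch, not TSPT's actual MATLAB implementation:

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), with zero where both bands are zero."""
    red = np.asarray(red, float)
    nir = np.asarray(nir, float)
    denom = nir + red
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)

def max_value_composite(ndvi_stack, bad_mask):
    """Per-pixel maximum-value composite over time, ignoring flagged data.

    ndvi_stack: (T, H, W) NDVI time series.
    bad_mask:   (T, H, W) True where data are bad or suspect
                (e.g. cloudy observations flagged via metadata).
    """
    masked = np.where(bad_mask, -np.inf, ndvi_stack)
    return masked.max(axis=0)
```

Compositing over a window of daily observations is one way a tool like this suppresses cloud contamination before plotting a time series.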

  2. Science, technology, and application of THz air photonics

    NASA Astrophysics Data System (ADS)

    Lu, X. F.; Clough, B.; Ho, I.-C.; Kaur, G.; Liu, J.; Karpowicz, N.; Dai, J. M.; Zhang, X.-C.

    2010-11-01

    The significant scientific and technological potential of terahertz (THz) wave sensing and imaging has attracted considerable attention within many fields of research. However, the development of remote, broadband THz wave sensing technology lags behind the compelling needs that exist in the areas of astronomy, global environmental monitoring, and homeland security. This is due to the challenge posed by the high absorption of ambient moisture in the THz range. Although various time-domain THz detection techniques have recently been demonstrated, the requirement for an on-site bias or forward collection of the optical signal inevitably prohibits their application to remote sensing. The objective of this paper is to report updated THz air-plasma technology that meets this challenge of remote sensing. A focused optical pulse (mJ pulse energy and femtosecond pulse duration) in gas creates a plasma, which can serve to generate intense, broadband, and directional THz waves in the far field.

  3. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941

  4. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies

    NASA Astrophysics Data System (ADS)

    Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad

    2018-06-01

    Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the results of further processing of urban remote sensing images. Detecting and finding the precise position of shadows is critical in remote sensing processing chains such as change detection, image classification, and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method, a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. The pixel-based results are then further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate the new approach. The results show the superiority of the proposed method over several state-of-the-art shadow detection methods, with an average F-measure of 96%.
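The object-based majority analysis step can be sketched as follows, assuming a pixel-level shadow mask and a segment map are already available. The paper's C4 index and thresholding are not reproduced here; this is a generic NumPy/SciPy illustration with our own function names:

```python
import numpy as np
from scipy import ndimage

def object_majority_shadow(shadow_pixels, segments):
    """Object-based majority analysis over a pixel-level shadow mask.

    shadow_pixels: (H, W) boolean pixel-level shadow mask
                   (e.g. from thresholding a shadow index).
    segments:      (H, W) integer segment labels from any segmentation.

    A segment is declared shadow if the majority of its pixels are shadow.
    """
    labels = np.unique(segments)
    # Fraction of shadow pixels per segment, computed in one labeled pass
    frac = ndimage.mean(shadow_pixels.astype(float), labels=segments, index=labels)
    shadow_segments = labels[np.asarray(frac) > 0.5]
    return np.isin(segments, shadow_segments)
```

The majority vote discards isolated misclassified pixels inside a segment, which is precisely the contextual information a pixel-level threshold ignores.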

  5. ISSARS Aerosol Database : an Incorporation of Atmospheric Particles into a Universal Tool to Simulate Remote Sensing Instruments

    NASA Technical Reports Server (NTRS)

    Goetz, Michael B.

    2011-01-01

    The Instrument Simulator Suite for Atmospheric Remote Sensing (ISSARS) entered its third and final year of development with the overall goal of providing a unified tool to simulate active and passive spaceborne atmospheric remote sensing instruments. These simulations focus on the atmosphere from the UV to microwaves. ISSARS handles all assumptions and uses various scattering and microphysics models to fill the gaps left unspecified by the atmospheric models when creating each instrument's measurements. This will benefit mission design, reduce mission cost, support efficient implementation of multi-instrument/platform Observing System Simulation Experiments (OSSEs), and improve existing models as well as new advanced models in development. In this effort, various aerosol particles are incorporated into the system, and the scattering properties and phase functions of each spherical test particle are generated from the input wavelength and spectral refractive indices. The atmospheric particles integrated into the system include those observed by the Multi-angle Imaging SpectroRadiometer (MISR) and by the Multiangle SpectroPolarimetric Imager (MSPI). In addition, a complex scattering database generated by Prof. Ping Yang (Texas A&M) is incorporated into this aerosol database. Future development with a radiative transfer code will generate a series of results that can be validated against results obtained by the MISR and MSPI instruments; in the meantime, test cases are simulated to verify the various plugin libraries used to obtain the scattering properties of particles studied by MISR and MSPI, or within the single-scattering properties database of tri-axial ellipsoidal mineral dust particles created by Prof. Ping Yang.

  6. Development of pulse-echo ultrasonic propagation imaging system and its delivery to Korea Air Force

    NASA Astrophysics Data System (ADS)

    Ahmed, Hasan; Hong, Seung-Chan; Lee, Jung-Ryul; Park, Jongwoon; Ihn, Jeong-Beom

    2017-04-01

    This paper proposes a full-field pulse-echo ultrasonic propagation imaging (FF-PE-UPI) system for non-destructive evaluation of structural defects. The system works by detecting bulk waves that travel through the thickness of a specimen. This is achieved by joining the laser beams for ultrasonic wave generation and sensing, which enables accurate and clear damage assessment and defect localization in the thickness with minimal signal processing, since bulk waves are less susceptible to dispersion during short propagation through the thickness. The system consists of a Q-switched laser for generating the aforementioned waves, a laser Doppler vibrometer (LDV) for sensing, optical elements to combine the generating and sensing laser beams, a dual-axis automated translation stage for raster scanning of the specimen, and a digitizer to record the signals. A graphical user interface (GUI) is developed to control all the individual blocks of the system; the software also manages signal acquisition, processing, and display. The GUI is created in C++ using the Qt framework. In view of the requirements posed by the Korea Air Force (KAF), the system is designed to be compact and portable to allow for in situ inspection of a selected area of a larger structure, such as the radome or rudder of an aircraft. The GUI is designed with a minimalistic approach to promote usability and adaptability while masking the intricacies of actual system operation. Through multithreading, the software is able to show results while a specimen is still being scanned, by real-time, concurrent acquisition, processing, and display of the ultrasonic signal at the latest scan point in the scan area.

  7. Image super-resolution via sparse representation.

    PubMed

    Yang, Jianchao; Wright, John; Huang, Thomas S; Ma, Yi

    2010-11-01

    This paper presents a new approach to single-image super-resolution, based on sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low resolution and high resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low resolution image patch can be applied with the high resolution image patch dictionary to generate a high resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle super-resolution with noisy inputs in a more unified framework.
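A minimal sketch of the coupled-dictionary idea: code the low-resolution patch against the low-resolution dictionary with Orthogonal Matching Pursuit, then synthesize the high-resolution patch with the same coefficients on the high-resolution dictionary. This is a toy illustration with our own function names, not the authors' trained dictionaries:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D to represent y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit on the support
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def super_resolve_patch(D_low, D_high, y_low, k=3):
    """Coupled-dictionary SR: sparse-code against D_low, synthesize with D_high."""
    x = omp(D_low, y_low, k)
    return D_high @ x
```

Because the two dictionaries are jointly trained, the paper's key assumption is that the low-resolution code transfers to the high-resolution dictionary, which this sketch mirrors by reusing `x`.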

  8. Modeling of forest canopy BRDF using DIRSIG

    NASA Astrophysics Data System (ADS)

    Rengarajan, Rajagopalan; Schott, John R.

    2016-05-01

    The characterization and temporal analysis of multispectral and hyperspectral data to extract biophysical information about the Earth's surface can be significantly improved by understanding its anisotropic reflectance properties, which are best described by a Bi-directional Reflectance Distribution Function (BRDF). Advances in remote sensing techniques and instrumentation have made hyperspectral BRDF measurements in the field possible using sophisticated goniometers. However, natural surfaces such as forest canopies impose limitations on both the data collection techniques and the range of illumination angles that can be collected in the field. These limitations can be mitigated by measuring BRDF in a virtual environment. This paper presents an approach to model the spectral BRDF of a forest canopy using the Digital Image and Remote Sensing Image Generation (DIRSIG) model. A synthetic forest canopy scene is constructed by modeling the 3D geometries of different tree species using OnyxTree software. Field-collected spectra from the Harvard Forest are used to represent the optical properties of the tree elements. The canopy radiative transfer is estimated using the DIRSIG model for specific view and illumination angles to generate BRDF measurements. A full hemispherical BRDF is generated by fitting the measured BRDF to a semi-empirical BRDF model. The results from fitting the model to the measurements indicate a root mean square error of less than 5% (2 reflectance units) relative to the forest's reflectance in the VIS-NIR-SWIR region. The process can be easily extended to generate a spectral BRDF library for various biomes.
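Fitting a semi-empirical kernel-driven BRDF model to sampled reflectances reduces to a linear least-squares problem. A minimal sketch, assuming the volumetric and geometric kernels (e.g. Ross-Thick and Li-Sparse) have been precomputed for the sampled view/illumination geometries; the specific model the paper fits may differ:

```python
import numpy as np

def fit_linear_brdf(refl, k_vol, k_geo):
    """Fit a kernel-driven semi-empirical BRDF model by least squares:

        R(geometry) ~= f_iso + f_vol * K_vol + f_geo * K_geo

    refl, k_vol, k_geo: 1-D arrays over the sampled geometries, with the
    kernels assumed precomputed. Returns (f_iso, f_vol, f_geo).
    """
    G = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    params, *_ = np.linalg.lstsq(G, refl, rcond=None)
    return params
```

Once the three kernel weights are fitted, the model can be evaluated at any geometry, which is how a full hemispherical BRDF is extrapolated from a finite set of simulated measurements.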

  9. MODIS Land Data Products: Generation, Quality Assurance and Validation

    NASA Technical Reports Server (NTRS)

    Masuoka, Edward; Wolfe, Robert; Morisette, Jeffery; Sinno, Scott; Teague, Michael; Saleous, Nazmi; Devadiga, Sadashiva; Justice, Christopher; Nickeson, Jaime

    2008-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) instruments onboard NASA's Earth Observing System (EOS) Terra and Aqua satellites are key sources of data on global land, atmosphere, and ocean dynamics. Derived MODIS land, atmosphere, and ocean products are central to NASA's mission to monitor and understand the Earth system. NASA has developed and generated on a systematic basis a suite of MODIS products, starting with the first Terra MODIS data sensed on February 22, 2000 and continuing with the first Aqua MODIS data sensed on July 2, 2002. The MODIS land products are divided into three product suites: radiation budget products, ecosystem products, and land cover characterization products. The production and distribution of the MODIS land products are described, from initial software delivery by the MODIS Land Science Team, to operational product generation and quality assurance, delivery to EOS archival and distribution centers, and product accuracy assessment and validation. Progress and lessons learned since the first MODIS data were acquired in early 2000 are described.

  10. Validation of satellite data through the remote sensing techniques and the inclusion of them into agricultural education pilot programs

    NASA Astrophysics Data System (ADS)

    Papadavid, Georgios; Kountios, Georgios; Bournaris, T.; Michailidis, Anastasios; Hadjimitsis, Diofantos G.

    2016-08-01

    Nowadays, remote sensing techniques play a significant role in all fields of agricultural extension, agricultural economics, and education, but they are used more specifically in hydrology. The aim of this paper is to demonstrate the use of field spectroscopy for the validation of satellite data, and to show how combining remote sensing techniques with field spectroscopy can yield more accurate results for irrigation purposes. For this reason, vegetation indices are used, which are mostly empirical equations describing vegetation parameters during the lifecycle of the crops. These indices are generated from combinations of remote sensing bands and may be related to the amount of vegetation in a given image pixel. Because most commonly used vegetation indices are concerned only with the red and near-infrared spectrum, and can be divided into perpendicular and ratio-based indices, the specific goal of this research is to illustrate the effect of the atmosphere on indices in both categories. In this frame, field spectroscopy is employed to derive the spectral signatures of different crops in the red and infrared spectrum after a campaign of ground measurements made with a GER 1500 spectroradiometer at interval dates during the whole lifecycle of the crops. The resulting indices were compared to those extracted from satellite images after applying an atmospheric correction algorithm (darkest pixel) to the satellite images at the pre-processing level, so that the indices would be in a form comparable to those of the ground measurements. Furthermore, the perspectives of including the above-mentioned remote sensing techniques in agricultural education pilot programs are investigated.
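The two index categories mentioned above can be illustrated with a ratio-based index (RVI), a perpendicular index (PVI, distance from a soil line NIR = a·Red + b), and the darkest-pixel correction. This is a generic NumPy sketch, not the study's exact processing chain; the soil-line parameters are inputs the analyst must supply:

```python
import numpy as np

def ratio_index(red, nir):
    """Simple ratio vegetation index RVI = NIR / Red (ratio-based category)."""
    return np.asarray(nir, float) / np.asarray(red, float)

def perpendicular_index(red, nir, a, b):
    """Perpendicular vegetation index: distance from the soil line
    NIR = a * Red + b, i.e. PVI = (NIR - a*Red - b) / sqrt(1 + a^2)."""
    red = np.asarray(red, float)
    nir = np.asarray(nir, float)
    return (nir - a * red - b) / np.sqrt(1.0 + a * a)

def darkest_pixel(band):
    """Darkest-pixel (dark-object subtraction) atmospheric correction:
    subtract the scene minimum, attributed to atmospheric path radiance."""
    band = np.asarray(band, float)
    return band - band.min()
```

Applying `darkest_pixel` per band before computing either index is the pre-processing step that makes satellite-derived indices comparable to the ground spectroradiometer measurements.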

  11. Target detection method by airborne and spaceborne images fusion based on past images

    NASA Astrophysics Data System (ADS)

    Chen, Shanjing; Kang, Qing; Wang, Zhenggang; Shen, ZhiQiang; Pu, Huan; Han, Hao; Gu, Zhongzheng

    2017-11-01

    To address the problems that remote sensing target detection methods make little use of past remote sensing data of the target area and cannot accurately recognize camouflaged targets, this paper proposes a target detection method based on the fusion of airborne and spaceborne images with past images. A past spaceborne remote sensing image of the target area is taken as the background. The airborne and spaceborne remote sensing data are fused and target features are extracted by means of airborne-spaceborne image registration, target change feature extraction, background noise suppression, and artificial target feature extraction based on real-time aerial optical remote sensing images. Finally, a support vector machine is used to detect and recognize the target on the fused feature data. The experimental results show that the proposed method combines the target-area change features of airborne and spaceborne remote sensing images with the target detection algorithm, and achieves good detection and recognition performance on both camouflaged and non-camouflaged targets.

  12. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by alternating minimization. The proposed method addresses the difficulty of specifying a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
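The alternating minimization can be sketched as a toy matrix factorization Y ≈ DX with column-sparse X, alternating a least-squares coefficient step (with hard thresholding) and a least-squares dictionary step. This is our own simplified illustration of the blind-CS idea, not the paper's exact algorithm:

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of each column, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(-np.abs(x), axis=0)[:k]
    cols = np.arange(x.shape[1])
    out[idx, cols] = x[idx, cols]
    return out

def blind_cs_factorize(Y, D_init, k, iters=10):
    """Alternating minimization for Y ~= D X with column-k-sparse X.

    Y:      (m, N) observed signal matrix.
    D_init: (m, n_atoms) initial dictionary guess (unknown in blind CS).
    Alternates a least-squares coefficient update (sparsified by hard
    thresholding) with a least-squares dictionary update.
    """
    D = np.array(D_init, float)
    for _ in range(iters):
        X, *_ = np.linalg.lstsq(D, Y, rcond=None)       # coefficient step
        X = hard_threshold(X, k)                        # enforce sparsity
        Dt, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)  # dictionary step
        D = Dt.T
    return D, X
```

Each step solves a convex subproblem exactly, which is why the alternation monotonically reduces the fit residual in the noiseless, well-initialized case.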

  13. Inverting a dispersive scene's side-scanned image

    NASA Technical Reports Server (NTRS)

    Harger, R. O.

    1983-01-01

    Consideration is given to the problem of using a remotely sensed, side-scanned image of a time-variant scene, which changes according to a dispersion relation, to estimate the structure at a given moment. Additive thermal noise is neglected in the models considered in the formal treatment. It is shown that the dispersion relation is normalized by the scanning velocity, as is the group scanning velocity component. An inversion operation is defined for noise-free images generated by SAR. The method is extended to the inversion of noisy imagery, and a formulation is defined for spectral density estimation. Finally, the methods for a radar system are used for the case of sonar.

  14. Uniform color space analysis of LACIE image products

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.

    1979-01-01

    The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.

  15. Estimating Water Levels with Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Lucero, E.; Russo, T. A.; Zentner, M.; May, J.; Nguy-Robertson, A. L.

    2016-12-01

    Reservoirs serve multiple functions and are vital for storage, electricity generation, and flood control. For many areas, traditional ground-based reservoir measurements may not be available or data dissemination may be problematic. Consistent monitoring of reservoir levels in data-poor areas can be achieved through remote sensing, providing information to researchers and the international community. Estimates of trends and relative reservoir volume can be used to identify water supply vulnerability, anticipate low power generation, and predict flood risk. Image processing with automated cloud computing provides opportunities to study multiple geographic areas in near real-time. We demonstrate the prediction capability of a cloud environment for identifying water trends at reservoirs in the US, and then apply the method to data-poor areas in North Korea, Iran, Azerbaijan, Zambia, and India. The Google Earth Engine cloud platform hosts remote sensing data and can be used to automate reservoir level estimation with multispectral imagery. We combine automated cloud-based analysis from Landsat image classification to identify reservoir surface area trends and radar altimetry to identify reservoir level trends. The study estimates water level trends using three years of data from four domestic reservoirs to validate the remote sensing method, and five foreign reservoirs to demonstrate the method application. We report correlations between ground-based reservoir level measurements in the US and our remote sensing methods, and correlations between the cloud analysis and altimetry data for reservoirs in data-poor areas. The availability of regular satellite imagery and an automated, near real-time application method provides the necessary datasets for further temporal analysis, reservoir modeling, and flood forecasting. 
All statements of fact, analysis, or opinion are those of the author and do not reflect the official policy or position of the Department of Defense or any of its components or the U.S. Government.
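The surface-area part of the workflow can be illustrated locally with the NDWI water index on a multispectral array. This is a minimal sketch; the actual Google Earth Engine classification is more involved, and the threshold is a tunable assumption:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); water pixels are positive."""
    green = np.asarray(green, float)
    nir = np.asarray(nir, float)
    return (green - nir) / (green + nir)

def water_area(green, nir, pixel_area_m2, threshold=0.0):
    """Estimate water surface area by counting NDWI > threshold pixels."""
    mask = ndwi(green, nir) > threshold
    return mask.sum() * pixel_area_m2
```

For 30 m Landsat pixels, `pixel_area_m2=900.0`; tracking this area estimate scene by scene yields the reservoir surface-area trend that is then correlated with altimetry levels.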

  16. Knowledge Representation Of CT Scans Of The Head

    NASA Astrophysics Data System (ADS)

    Ackerman, Laurens V.; Burke, M. W.; Rada, Roy

    1984-06-01

    We have been investigating diagnostic knowledge models which assist in the automatic classification of medical images by combining information extracted from each image with knowledge specific to that class of images. In a more general sense we are trying to integrate verbal and pictorial descriptions of disease via representations of knowledge, study automatic hypothesis generation as related to clinical medicine, evolve new mathematical image measures while integrating them into the total diagnostic process, and investigate ways to augment the knowledge of the physician. Specifically, we have constructed an artificial intelligence knowledge model using the technique of a production system blending pictorial and verbal knowledge about the respective CT scan and patient history. It is an attempt to tie together different sources of knowledge representation, picture feature extraction and hypothesis generation. Our knowledge reasoning and representation system (KRRS) works with data at the conscious reasoning level of the practicing physician while at the visual perceptional level we are building another production system, the picture parameter extractor (PPE). This paper describes KRRS and its relationship to PPE.

  17. Compressive sensing using optimized sensing matrix for face verification

    NASA Astrophysics Data System (ADS)

    Oey, Endra; Jeffry; Wongso, Kelvin; Tommy

    2017-12-01

    Biometrics offers a solution to problems that occur with password-based data access, such as the possibility of forgetting a password and the difficulty of recalling many different passwords. With biometrics, the physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether a user has the authority to access the data. Facial biometrics is chosen for its low-cost implementation and its fairly accurate user identification. The face verification system adopted in this research uses the Compressive Sensing (CS) technique, which aims to reduce the dimensionality, as well as encrypt, the facial test image, represented as sparse signals. The encrypted data can be reconstructed using a sparse coding algorithm. Two types of sparse coding, Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares-ℓp (IRLS-ℓp), are compared in this face verification research. The reconstructed sparse signals are then compared, via the Euclidean norm, with the sparse signal of the user previously saved in the system to determine the validity of the facial test image. The system accuracies obtained in this research are 99% for IRLS (face verification time of 4.917 seconds) and 96.33% for OMP (0.4046 seconds) with a non-optimized sensing matrix, and 99% for IRLS (13.4791 seconds) and 98.33% for OMP (3.1571 seconds) with an optimized sensing matrix.
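Of the two solvers compared, IRLS is the less commonly sketched. Below is a minimal IRLS-ℓp routine for min ||x||_p subject to Ax = y, a generic illustration rather than the authors' implementation; the eps smoothing term and iteration count are our own choices:

```python
import numpy as np

def irls_lp(A, y, p=1.0, iters=100, eps=1e-8):
    """Iteratively Reweighted Least Squares for min ||x||_p s.t. A x = y.

    Each iteration solves a weighted least-squares problem using weights
    from the current iterate:  x = W A^T (A W A^T)^(-1) y,
    with W = diag(|x|^(2-p) + eps) to avoid division by zero.
    """
    m, n = A.shape
    x = np.linalg.pinv(A) @ y                 # minimum-norm initialization
    for _ in range(iters):
        w = np.abs(x) ** (2.0 - p) + eps      # per-coefficient weights
        AW = A * w                            # equals A @ diag(w)
        z = np.linalg.solve(AW @ A.T + eps * np.eye(m), y)
        x = w * (A.T @ z)                     # weighted least-squares solution
    return x
```

As the weights of small coefficients shrink toward eps, the iterate is driven toward a sparse solution consistent with the measurements, which mirrors how the sparse face code is recovered from the compressed measurements before the Euclidean-norm comparison.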

  18. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have important applications in remote sensing and security areas.
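The traditional ghost imaging baseline against which the modified method is compared is a simple intensity correlation between the bucket signal and the reference speckle patterns. A NumPy sketch under our own array conventions:

```python
import numpy as np

def correlation_ghost_image(patterns, bucket):
    """Traditional ghost imaging reconstruction, G = <S*I> - <S><I>.

    patterns: (M, H, W) random illumination (reference) patterns.
    bucket:   (M,) total intensities measured by the bucket detector
              in the signal path.
    Correlating the mean-removed bucket signal with the mean-removed
    patterns recovers an image proportional to the object transmittance.
    """
    S = bucket - bucket.mean()
    I = patterns - patterns.mean(axis=0)
    return np.tensordot(S, I, axes=1) / len(bucket)
```

Additive detector noise in `bucket` raises the background of this correlation, which is the degradation the thresholded compressive-sensing variant is designed to suppress.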

  19. Geological terrain models

    NASA Technical Reports Server (NTRS)

    Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.

    1981-01-01

    The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.

  20. A compressed sensing approach for resolution improvement in fiber-bundle based endomicroscopy

    NASA Astrophysics Data System (ADS)

    Dumas, John P.; Lodhi, Muhammad A.; Bajwa, Waheed U.; Pierce, Mark C.

    2018-02-01

    Endomicroscopy techniques such as confocal, multi-photon, and wide-field imaging have all been demonstrated using coherent fiber-optic imaging bundles. While the narrow diameter and flexibility of fiber bundles is clinically advantageous, the number of resolvable points in an image is conventionally limited to the number of individual fibers within the bundle. We are introducing concepts from the compressed sensing (CS) field to fiber bundle based endomicroscopy, to allow images to be recovered with more resolvable points than fibers in the bundle. The distal face of the fiber bundle is treated as a low-resolution sensor with circular pixels (fibers) arranged in a hexagonal lattice. A spatial light modulator is located conjugate to the object and distal face, applying multiple high resolution masks to the intermediate image prior to propagation through the bundle. We acquire images of the proximal end of the bundle for each (known) mask pattern and then apply CS inversion algorithms to recover a single high-resolution image. We first developed a theoretical forward model describing image formation through the mask and fiber bundle. We then imaged objects through a rigid fiber bundle and demonstrate that our CS endomicroscopy architecture can recover intra-fiber details while filling inter-fiber regions with interpolation. Finally, we examine the relationship between reconstruction quality and the ratio of the number of mask elements to the number of fiber cores, finding that images could be generated with approximately 28,900 resolvable points for a 1,000 fiber region in our platform.
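CS recovery from such masked measurements is typically posed as an ℓ1-regularized least-squares problem. Below is a minimal ISTA sketch with a dense matrix standing in for the combined mask-and-bundle measurement operator; this is an illustration of the standard inversion machinery, not the authors' reconstruction code:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=300):
    """ISTA for the CS recovery problem  min_x 0.5*||A x - y||^2 + lam*||x||_1.

    A: (m, n) measurement matrix (here standing in for the combined
       high-resolution-mask and fiber-bundle forward model).
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                # gradient of the data term
        z = x - g / L                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

Each iteration is a gradient step on the data-fit term followed by soft thresholding, so the objective decreases monotonically from the zero initialization; in the endomicroscopy setting, `x` would be the high-resolution image stacked as a vector.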

  1. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography results demonstrate that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of NDVI images. Overall, the proposed approach, which integrates the conditional Latin hypercube sampling approach, variogram, kriging and sequential Gaussian simulation in remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes, including spatial variability and heterogeneity.
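The kriging step described above interpolates unsampled NDVI values from a limited sample set using a covariance (variogram) model. A minimal ordinary-kriging sketch follows; the exponential covariance model, its parameters, and the sample values are all assumptions for illustration, not values from the study.

```python
import numpy as np

def exp_cov(h, sill=1.0, length=10.0):
    """Exponential covariance model (sill minus the exponential variogram)."""
    return sill * np.exp(-h / length)

def ordinary_kriging(xy, z, xy0):
    """Predict the value at xy0 from samples (xy, z) with unit-sum weights."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = exp_cov(d)               # sample-to-sample covariances
    K[n, n] = 0.0                        # Lagrange-multiplier corner
    rhs = np.append(exp_cov(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    w = np.linalg.solve(K, rhs)[:n]      # kriging weights (sum to one)
    return w @ z

# A handful of NDVI-like samples on a 2-D domain
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
z = np.array([0.2, 0.4, 0.5, 0.3, 0.35])
z_hat = ordinary_kriging(xy, z, np.array([5.0, 5.0]))
```

With no nugget effect, kriging is an exact interpolator: predicting at a sample location returns the sample value, which is a useful sanity check on any implementation.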

  2. Method and apparatus for distinguishing actual sparse events from sparse event false alarms

    DOEpatents

    Spalding, Richard E.; Grotbeck, Carter L.

    2000-01-01

    Remote sensing method and apparatus wherein sparse optical events are distinguished from false events. "Ghost" images of actual optical phenomena are generated using an optical beam splitter and optics configured to direct the split beams to a single sensor or segmented sensor. True optical signals are distinguished from false signals or noise based on whether the ghost image is present or absent. The invention obviates the need for dual-sensor systems to effect a false-target detection capability, thus significantly reducing system complexity and cost.

  3. A New Joint-Blade SENSE Reconstruction for Accelerated PROPELLER MRI

    PubMed Central

    Lyu, Mengye; Liu, Yilong; Xie, Victor B.; Feng, Yanqiu; Guo, Hua; Wu, Ed X.

    2017-01-01

    The PROPELLER technique is widely used in MRI examinations because it is insensitive to motion, but it prolongs scan time and is restricted mainly to T2 contrast. Parallel imaging can accelerate PROPELLER and enable more flexible contrasts. Here, we propose a multi-step joint-blade (MJB) SENSE reconstruction to reduce the noise amplification in parallel imaging accelerated PROPELLER. MJB SENSE utilizes the fact that PROPELLER blades contain sharable information and blade-combined images can serve as regularization references. It consists of three steps. First, conventional blade-combined images are obtained using the conventional simple single-blade (SSB) SENSE, which reconstructs each blade separately. Second, the blade-combined images are employed as regularization for blade-wise noise reduction. Last, with virtual high-frequency data resampled from the previous step, all blades are jointly reconstructed to form the final images. Simulations were performed to evaluate the proposed MJB SENSE for noise reduction and motion correction. MJB SENSE was also applied to both T2-weighted and T1-weighted in vivo brain data. Compared to SSB SENSE, MJB SENSE greatly reduced the noise amplification at various acceleration factors, leading to increased image SNR in all simulation and in vivo experiments, including T1-weighted imaging with short echo trains. Furthermore, it preserved motion correction capability and was computationally efficient. PMID:28205602
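The SENSE reconstruction that underlies both the SSB and MJB variants unfolds aliased pixels using coil sensitivity profiles. The sketch below shows only the basic per-pixel SENSE unfolding at acceleration R=2 with noiseless data and random assumed sensitivities; it is not the authors' joint-blade method, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, R, C = 8, 2, 4                        # image lines, acceleration, coils
x = rng.uniform(0.5, 1.0, N)             # true image column
S = rng.uniform(0.5, 1.5, (C, N))        # assumed coil sensitivity profiles

# R=2 undersampling folds pixel i onto pixel i + N/2 in each coil image
half = N // R
y = np.array([S[:, i] * x[i] + S[:, i + half] * x[i + half]
              for i in range(half)]).T   # shape (C, half): aliased coil data

# SENSE unfolding: per aliased pixel, least squares over the R folded locations
x_hat = np.zeros(N)
for i in range(half):
    E = S[:, [i, i + half]]              # C x R encoding matrix for this pixel
    sol, *_ = np.linalg.lstsq(E, y[:, i], rcond=None)
    x_hat[[i, i + half]] = sol
```

With more coils than the acceleration factor (C > R) and noiseless data, the per-pixel least-squares problem is overdetermined and the unfolding is exact; the noise amplification the abstract targets arises when the encoding matrices become ill-conditioned.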

  4. A New Joint-Blade SENSE Reconstruction for Accelerated PROPELLER MRI.

    PubMed

    Lyu, Mengye; Liu, Yilong; Xie, Victor B; Feng, Yanqiu; Guo, Hua; Wu, Ed X

    2017-02-16

    The PROPELLER technique is widely used in MRI examinations because it is insensitive to motion, but it prolongs scan time and is restricted mainly to T2 contrast. Parallel imaging can accelerate PROPELLER and enable more flexible contrasts. Here, we propose a multi-step joint-blade (MJB) SENSE reconstruction to reduce the noise amplification in parallel imaging accelerated PROPELLER. MJB SENSE utilizes the fact that PROPELLER blades contain sharable information and blade-combined images can serve as regularization references. It consists of three steps. First, conventional blade-combined images are obtained using the conventional simple single-blade (SSB) SENSE, which reconstructs each blade separately. Second, the blade-combined images are employed as regularization for blade-wise noise reduction. Last, with virtual high-frequency data resampled from the previous step, all blades are jointly reconstructed to form the final images. Simulations were performed to evaluate the proposed MJB SENSE for noise reduction and motion correction. MJB SENSE was also applied to both T2-weighted and T1-weighted in vivo brain data. Compared to SSB SENSE, MJB SENSE greatly reduced the noise amplification at various acceleration factors, leading to increased image SNR in all simulation and in vivo experiments, including T1-weighted imaging with short echo trains. Furthermore, it preserved motion correction capability and was computationally efficient.

  5. Bushland Evapotranspiration and Agricultural Remote Sensing System (BEARS) software

    NASA Astrophysics Data System (ADS)

    Gowda, P. H.; Moorhead, J.; Brauer, D. K.

    2017-12-01

    Evapotranspiration (ET) is a major component of the hydrologic cycle. ET data are used for a variety of water management and research purposes, such as irrigation scheduling, water and crop modeling, streamflow estimation, water availability assessment, and many more. Remote sensing products have been widely used to create spatially representative ET data sets that provide important information from field to regional scales. As UAV capabilities increase, remote sensing use is likely to increase as well. For that purpose, scientists at the USDA-ARS research laboratory in Bushland, TX developed the Bushland Evapotranspiration and Agricultural Remote Sensing System (BEARS) software. BEARS is Java-based software that allows users to process remote sensing data to generate ET outputs using predefined models, or to enter custom equations and models. The capability to define new equations and build new models extends the applicability of the BEARS software beyond ET mapping to any remote sensing application. The software also includes an image viewing tool that allows users to visualize outputs, as well as to draw an area of interest using various shapes. This software is freely available from the USDA-ARS Conservation and Production Research Laboratory website.

  6. Tools and Methods for the Registration and Fusion of Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Goshtasby, Arthur Ardeshir; LeMoigne, Jacqueline

    2010-01-01

    Tools and methods for image registration were reviewed, including methods used at NASA for registering remotely sensed data. Image fusion techniques and the challenges of registering remotely sensed data were also discussed, and examples of image registration and image fusion were given.

  7. Fibre optical spectroscopy and sensing innovation at innoFSPEC Potsdam

    NASA Astrophysics Data System (ADS)

    Haynes, Roger; Reich, Oliver; Rambold, William; Hass, Roland; Janssen, Katja

    2010-07-01

    In October 2009, an interdisciplinary centre for fibre spectroscopy and sensing, innoFSPEC Potsdam, was established as a joint initiative of the Astrophysikalisches Institut Potsdam (AIP) and the Physical Chemistry group of Potsdam University (UPPC), Germany. The centre focuses on fundamental research in the two fields of fibre-coupled multi-channel spectroscopy and optical fibre-based sensing. Thanks to its interdisciplinary approach, the complementary methodologies of astrophysics on the one hand and physical chemistry on the other are expected to spawn synergies that would not normally arise in more conventional research programmes. innoFSPEC Potsdam targets future innovations for next-generation astrophysical instrumentation, environmental analysis, manufacturing control and process analysis, medical diagnostics, non-invasive imaging spectroscopy, biopsy, genomics/proteomics, high-throughput screening, and related applications.

  8. Experimental Method of Generating Electromagnetic Gaussian Schell-model Beams

    DTIC Science & Technology

    2015-03-26

    attracted special attention for the potential use in free-space optical communications, imaging through turbulence, and remote sensing applications [11... successful experiment demonstrated a reduction in scintillation of a completely unpolarized EGSM beam propagated through simulated atmospheric turbulence [1... propagate through the atmosphere using either an atmospheric phase wheel or using additional SLMs to display atmospheric phase screens. Further, the source

  9. In vivo confirmation of hydration based contrast mechanisms for terahertz medical imaging using MRI

    NASA Astrophysics Data System (ADS)

    Bajwa, Neha; Sung, Shijun; Garritano, James; Nowroozi, Bryan; Tewari, Priyamvada; Ennis, Daniel B.; Alger, Jeffery; Grundfest, Warren; Taylor, Zachary

    2014-09-01

    Terahertz (THz) detection has been proposed and applied to a variety of medical imaging applications in view of its unrivaled hydration profiling capabilities. Variations in tissue dielectric function at THz frequencies have been demonstrated to generate high-contrast imagery of tissue; however, the source of image contrast remains to be verified using a modality with a comparable sensing scheme. To investigate the primary contrast mechanism, a pilot comparison study was performed in a burn-wound rat model, widely known to create detectable gradients in tissue hydration through both injured and surrounding tissue. Parallel T2-weighted multi-slice multi-echo (T2w MSME) 7T Magnetic Resonance (MR) scans and THz surface reflectance maps were acquired of a full-thickness skin burn in a rat model over a 5-hour period. A comparison of uninjured and injured regions in the full-thickness burn demonstrates a 3-fold increase in average T2 relaxation times and a 15% increase in average THz reflectivity. These results support the sensitivity and specificity of MRI for measuring in vivo burn tissue water content and the use of this modality to verify and understand the hydration sensing capabilities of THz imaging for acute assessment of the onset and evolution of diseases that affect the skin. As a starting point for more sophisticated in vivo studies, this preliminary analysis may be used in the future to explore how and to what extent the release of unbound water affects imaging contrast in THz burn sensing.

  10. Advancing Partnerships Towards an Integrated Approach to Oil Spill Response

    NASA Astrophysics Data System (ADS)

    Green, D. S.; Stough, T.; Gallegos, S. C.; Leifer, I.; Murray, J. J.; Streett, D.

    2015-12-01

    Oil spills can cause enormous ecological and economic devastation, necessitating application of the best science and technology available, and remote sensing is playing an increasingly critical role in the detection and monitoring of oil spills, as well as in facilitating validation of remote sensing oil spill products. The FOSTERRS (Federal Oil Science Team for Emergency Response Remote Sensing) interagency working group seeks to ensure that during an oil spill, remote sensing assets (satellite/aircraft/instruments) and analysis techniques are quickly, effectively, appropriately, and seamlessly available to oil spill responders. Yet significant challenges remain in addressing oils that span a vast range of chemical properties and may be spilled anywhere from the Tropics to the Arctic, with algorithms and scientific understanding needing to advance to keep pace with technology. Thus, FOSTERRS promotes enabling scientific discovery to ensure robust utilization of available technology, as well as identifying technologies moving up the TRL (Technology Readiness Level) scale. A recent FOSTERRS-facilitated support activity involved deployment of the AVIRIS NG (Airborne Visual Infrared Imaging Spectrometer - Next Generation) during the Santa Barbara oil spill to validate the potential of airborne hyperspectral imaging for real-time mapping of beach tar coverage, including surface validation data. Many developing airborne technologies have the potential to transition to space-based platforms, providing global readiness.

  11. Hybrid method for building extraction in vegetation-rich urban areas from very high-resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Jayasekare, Ajith S.; Wickramasuriya, Rohan; Namazi-Rad, Mohammad-Reza; Perez, Pascal; Singh, Gaurav

    2017-07-01

    A continuous update of building information is necessary in today's urban planning. Digital images acquired by remote sensing platforms at appropriate spatial and temporal resolutions provide an excellent data source for this. In particular, high-resolution satellite images are often used to retrieve objects such as rooftops using feature extraction. However, high-resolution images acquired over built-up areas are affected by noise such as shadows, which reduces the accuracy of feature extraction. Feature extraction relies heavily on the reflectance purity of objects, which is difficult to achieve in complex urban landscapes. An attempt was made to increase the reflectance purity of building rooftops affected by shadows. In addition to the multispectral (MS) image, derivatives thereof, namely normalized difference vegetation index and principal component (PC) images, were incorporated in generating the probability image. This hybrid probability image generation ensured that the effect of shadows on rooftop extraction, particularly on light-colored roofs, is largely eliminated. The PC image was also used for image segmentation, which further increased the accuracy compared to segmentation performed on an MS image. Results show that the presented method achieves higher rooftop extraction accuracy (70.4%) in vegetation-rich urban areas than traditional methods.
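The two image derivatives named above, NDVI and the first principal component, are standard computations. The sketch below derives both from a synthetic two-band image and combines them into a toy "probability" layer; the specific weighting and all data are illustrative assumptions, not the paper's actual probability model.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 32, 32
red = rng.uniform(0.1, 0.5, (H, W))      # synthetic red band
nir = rng.uniform(0.2, 0.8, (H, W))      # synthetic near-infrared band

# NDVI from the MS bands (small epsilon guards against division by zero)
ndvi = (nir - red) / (nir + red + 1e-9)

# First principal component of the band stack via SVD on centered pixels
flat = np.stack([red, nir], axis=-1).reshape(-1, 2)
centered = flat - flat.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
pc1 = (centered @ Vt[0]).reshape(H, W)

def norm01(a):
    return (a - a.min()) / (a.max() - a.min() + 1e-9)

# Toy rooftop "probability": favour low vegetation plus PC1 brightness
prob = 0.5 * (1.0 - norm01(ndvi)) + 0.5 * norm01(pc1)
```

Normalizing each derivative to [0, 1] before combining keeps the resulting layer interpretable as a pseudo-probability regardless of the bands' original radiometric ranges.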

  12. Automatic Assessment of Acquisition and Transmission Losses in Indian Remote Sensing Satellite Data

    NASA Astrophysics Data System (ADS)

    Roy, D.; Purna Kumari, B.; Manju Sarma, M.; Aparna, N.; Gopal Krishna, B.

    2016-06-01

    The quality of remote sensing data is an important parameter that defines the extent of its usability in various applications. The data from remote sensing satellites is received as raw data frames at the ground station. This data may be corrupted with losses due to interference during data transmission, data acquisition and sensor anomalies. Thus it is important to assess the quality of the raw data before product generation, for early anomaly detection, faster corrective actions and minimization of product rejection. Manual screening of raw images is a time-consuming process and not very accurate. In this paper, an automated process for identification and quantification of losses in raw data, such as pixel dropout, line loss and data loss due to sensor anomalies, is discussed. Quality assessment of raw scenes based on these losses is also explained. This process is introduced in the data pre-processing stage and gives crucial data quality information to users at the time of browsing data for product ordering. It has also improved the product generation workflow by enabling faster and more accurate quality estimation.
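The loss categories named above (pixel dropout versus whole-line loss, plus an overall quality score) can be separated with simple array logic. The sketch below is a hypothetical illustration of that idea on a toy frame; the function name, the zero-valued drop convention, and the quality metric are all assumptions, not the paper's actual screening pipeline.

```python
import numpy as np

def assess_raw_frame(frame, drop_value=0):
    """Flag fully lost lines and scattered pixel dropouts in one raw frame."""
    dropped = frame == drop_value
    lost_lines = np.flatnonzero(dropped.all(axis=1))       # rows entirely lost
    pixel_dropouts = int(dropped.sum() - len(lost_lines) * frame.shape[1])
    quality = 1.0 - dropped.mean()                         # fraction of good pixels
    return lost_lines, pixel_dropouts, quality

# A 4x5 frame with one lost line (row 2) and one isolated pixel dropout
frame = np.ones((4, 5), dtype=np.uint16)
frame[2, :] = 0
frame[0, 1] = 0
lost, drops, q = assess_raw_frame(frame)
```

Distinguishing whole-line losses from isolated dropouts matters because the two point at different failure modes (transmission sync loss versus detector defects), which is why a screening report would typically count them separately.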

  13. Support vector machine as a binary classifier for automated object detection in remotely sensed data

    NASA Astrophysics Data System (ADS)

    Wardaya, P. D.

    2014-02-01

    In the present paper, the author proposes the application of the Support Vector Machine (SVM) to the analysis of satellite imagery. One of the advantages of the SVM is that, with limited training data, it may generate comparable or even better results than other methods. The SVM algorithm is used for automated object detection and characterization. Specifically, the SVM is applied in its basic form as a binary classifier that separates two classes, namely object and background. The algorithm aims at effectively detecting an object from its background with minimal training data. A synthetic image containing noise is used for algorithm testing. Furthermore, the method is applied to remote sensing image analysis tasks such as identification of island vegetation, water bodies, and oil spills in satellite imagery. The results indicate that the SVM provides fast and accurate analysis with acceptable results.
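The object-versus-background formulation above is a plain binary SVM. As a self-contained illustration, the sketch below trains a linear SVM with the Pegasos sub-gradient method on synthetic two-feature "pixels"; the features, cluster locations, and hyperparameters are invented for the example and are not from the paper.

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM (hinge loss) with the Pegasos sub-gradient method."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)                    # decaying step size
            if y[i] * (w @ X[i]) < 1:                # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Toy "object vs background" pixels in a 2-D feature space, with a bias column
rng = np.random.default_rng(3)
bg = rng.normal([0.2, 0.2], 0.05, (50, 2))           # background cluster
obj = rng.normal([0.8, 0.8], 0.05, (50, 2))          # object cluster
X = np.hstack([np.vstack([bg, obj]), np.ones((100, 1))])
y = np.array([-1] * 50 + [1] * 50)

w = pegasos_train(X, y)
pred = np.sign(X @ w)
```

Appending a constant feature column is a simple way to learn the bias term inside the same weight vector, keeping the update rule unchanged.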

  14. Interferometric Reflectance Imaging Sensor (IRIS)—A Platform Technology for Multiplexed Diagnostics and Digital Detection

    PubMed Central

    Avci, Oguzhan; Lortlar Ünlü, Nese; Yalçın Özkumur, Ayça; Ünlü, M. Selim

    2015-01-01

    Over the last decade, the growing need in disease diagnostics has stimulated rapid development of new technologies with unprecedented capabilities. Recent emerging infectious diseases and epidemics have revealed the shortcomings of existing diagnostics tools, and the necessity for further improvements. Optical biosensors can lay the foundations for future generation diagnostics by providing means to detect biomarkers in a highly sensitive, specific, quantitative and multiplexed fashion. Here, we review an optical sensing technology, Interferometric Reflectance Imaging Sensor (IRIS), and the relevant features of this multifunctional platform for quantitative, label-free and dynamic detection. We discuss two distinct modalities for IRIS: (i) low-magnification (ensemble biomolecular mass measurements) and (ii) high-magnification (digital detection of individual nanoparticles) along with their applications, including label-free detection of multiplexed protein chips, measurement of single nucleotide polymorphism, quantification of transcription factor DNA binding, and high sensitivity digital sensing and characterization of nanoparticles and viruses. PMID:26205273

  15. Demonstration of temperature imaging by H₂O absorption spectroscopy using compressed sensing tomography.

    PubMed

    An, Xinliang; Brittelle, Mack S; Lauzier, Pascal T; Gord, James R; Roy, Sukesh; Chen, Guang-Hong; Sanders, Scott T

    2015-11-01

    This paper introduces temperature imaging by total-variation-based compressed sensing (CS) tomography of H2O vapor absorption spectroscopy. A controlled laboratory setup is used to generate a constant two-dimensional temperature distribution in air (a roughly Gaussian temperature profile with a central temperature of 677 K). A wavelength-tunable laser beam is directed through the known distribution; the beam is translated and rotated using motorized stages to acquire complete absorption spectra in the 1330-1365 nm range at each of 64 beam locations and 60 view angles. Temperature reconstructions are compared to independent thermocouple measurements. Although the distribution studied is approximately axisymmetric, axisymmetry is not assumed and simulations show similar performance for arbitrary temperature distributions. We study the measurement error as a function of number of beams and view angles used in reconstruction to gauge the potential for application of CS in practical test articles where optical access is limited.
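The tomographic inversion described above recovers a 2-D field from path-integrated measurements along beams at several positions and angles. As a greatly simplified stand-in for the paper's total-variation CS reconstruction, the sketch below solves a tiny noiseless ray-sum system with the classical ART (Kaczmarz) method; the field values, ray geometry, and iteration count are all illustrative assumptions.

```python
import numpy as np

def art(A, y, sweeps=1000):
    """Kaczmarz/ART: repeatedly project the estimate onto each ray equation."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, y_i in zip(A, y):
            x += (y_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# 2x2 field (flattened), sensed by row, column, and one diagonal ray;
# the diagonal ray makes the system full rank so the solution is unique
field = np.array([300.0, 400.0, 350.0, 500.0])       # toy "temperatures"
A = np.array([[1, 1, 0, 0],                          # row 0
              [0, 0, 1, 1],                          # row 1
              [1, 0, 1, 0],                          # column 0
              [0, 1, 0, 1],                          # column 1
              [1, 0, 0, 1]], float)                  # diagonal
y = A @ field                                        # noiseless path integrals
recon = art(A, y)
```

The point of the compressed-sensing formulation in the paper is precisely the case this toy avoids: when optical access limits the number of beams, the system is rank-deficient and a regularizer such as total variation selects a physically plausible solution.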

  16. Democratization of Nanoscale Imaging and Sensing Tools Using Photonics

    PubMed Central

    2015-01-01

    Providing means for researchers and citizen scientists in the developing world to perform advanced measurements with nanoscale precision can help to accelerate the rate of discovery and invention as well as improve higher education and the training of the next generation of scientists and engineers worldwide. Here, we review some of the recent progress toward making optical nanoscale measurement tools more cost-effective, field-portable, and accessible to a significantly larger group of researchers and educators. We divide our review into two main sections: label-based nanoscale imaging and sensing tools, which primarily involve fluorescent approaches, and label-free nanoscale measurement tools, which include light scattering sensors, interferometric methods, photonic crystal sensors, and plasmonic sensors. For each of these areas, we have primarily focused on approaches that have either demonstrated operation outside of a traditional laboratory setting, including for example integration with mobile phones, or exhibited the potential for such operation in the near future. PMID:26068279

  17. Hurricane Harvey Building Damage Assessment Using UAV Data

    NASA Astrophysics Data System (ADS)

    Yeom, J.; Jung, J.; Chang, A.; Choi, I.

    2017-12-01

    Hurricane Harvey, an extremely destructive major hurricane, struck southern Texas, U.S.A. on August 25, causing catastrophic flooding and storm damage. We visited Rockport, which suffered severe building destruction, and conducted UAV (Unmanned Aerial Vehicle) surveying for building damage assessment. UAVs provide very high resolution images compared with traditional remote sensing data. In addition, prompt and cost-effective damage assessment can be performed regardless of several limitations of other remote sensing platforms, such as the revisit interval of satellite platforms, complicated flight planning in aerial surveying, and cloud cover. In this study, UAV flight and GPS surveying were conducted two weeks after the hurricane to generate an orthomosaic image and a DEM (Digital Elevation Model). A 3D region growing scheme is proposed to quantitatively estimate building damage by considering the elevation change and spectral difference of building debris. The results showed that the proposed method can be used for high-definition building damage assessment in a time- and cost-effective way.
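The region-growing idea above expands a damage region outward from a seed over neighbouring pixels with similar values. The sketch below grows a region on a toy DEM-difference grid using elevation change only; the paper's scheme also uses spectral difference, and the grid, seed, and tolerance here are invented for illustration.

```python
from collections import deque

import numpy as np

def region_grow(dem_diff, seed, tol=0.5):
    """Grow a damage region from a seed over pixels with similar elevation change."""
    H, W = dem_diff.shape
    visited = np.zeros((H, W), bool)
    region = []
    q = deque([seed])
    visited[seed] = True
    ref = dem_diff[seed]                       # reference elevation change
    while q:
        r, c = q.popleft()
        region.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < H and 0 <= nc < W and not visited[nr, nc]
                    and abs(dem_diff[nr, nc] - ref) < tol):
                visited[nr, nc] = True
                q.append((nr, nc))
    return region

# Toy post-minus-pre DEM difference: a 2x2 debris pile on flat ground
dem_diff = np.zeros((5, 5))
dem_diff[1:3, 1:3] = 2.0                       # +2 m elevation change
region = region_grow(dem_diff, seed=(1, 1))
```

Using a queue (breadth-first growth) rather than recursion keeps the method robust on large rasters, where recursive flood fills can exceed the stack limit.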

  18. Active pixel sensor pixel having a photodetector whose output is coupled to an output transistor gate

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nakamura, Junichi (Inventor); Kemeny, Sabrina E. (Inventor)

    2005-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. A Simple Floating Gate (SFG) pixel structure could also be employed in the imager to provide a non-destructive readout and smaller pixel sizes.

  19. Democratization of Nanoscale Imaging and Sensing Tools Using Photonics.

    PubMed

    McLeod, Euan; Wei, Qingshan; Ozcan, Aydogan

    2015-07-07

    Providing means for researchers and citizen scientists in the developing world to perform advanced measurements with nanoscale precision can help to accelerate the rate of discovery and invention as well as improve higher education and the training of the next generation of scientists and engineers worldwide. Here, we review some of the recent progress toward making optical nanoscale measurement tools more cost-effective, field-portable, and accessible to a significantly larger group of researchers and educators. We divide our review into two main sections: label-based nanoscale imaging and sensing tools, which primarily involve fluorescent approaches, and label-free nanoscale measurement tools, which include light scattering sensors, interferometric methods, photonic crystal sensors, and plasmonic sensors. For each of these areas, we have primarily focused on approaches that have either demonstrated operation outside of a traditional laboratory setting, including for example integration with mobile phones, or exhibited the potential for such operation in the near future.

  20. Generating and Separating Twisted Light by gradient-rotation Split-Ring Antenna Metasurfaces.

    PubMed

    Zeng, Jinwei; Li, Ling; Yang, Xiaodong; Gao, Jie

    2016-05-11

    Nanoscale compact optical vortex generators promise substantially significant prospects in modern optics and photonics, leading to many advances in sensing, imaging, quantum communication, and optical manipulation. However, conventional vortex generators often suffer from bulky size, low vortex mode purity in the converted beam, or limited operation bandwidth. Here, we design and demonstrate gradient-rotation split-ring antenna metasurfaces as unique spin-to-orbital angular momentum beam converters to simultaneously generate and separate pure optical vortices in a broad wavelength range. Our proposed design has the potential for realizing miniaturized on-chip OAM-multiplexers, as well as enabling new types of metasurface devices for the manipulation of complex structured light beams.

  1. Remote sensing image segmentation based on Hadoop cloud platform

    NASA Astrophysics Data System (ADS)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies a method of remote sensing image segmentation based on the Hadoop platform. On the basis of analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method that combines OpenCV with the Hadoop cloud platform. Firstly, the MapReduce image processing model of the Hadoop cloud platform is designed: the image input and output are customized and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is conducted on a remote sensing image, with the same image also segmented by a MATLAB implementation of the Mean Shift algorithm for comparison. The experimental results show that, while maintaining good segmentation quality, the segmentation rate of remote sensing image segmentation based on the Hadoop cloud platform is greatly improved compared with the standalone MATLAB segmentation, and the effectiveness of image segmentation also improves considerably.
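The Mean Shift algorithm at the core of the pipeline above iteratively moves each point to the mean of its neighbourhood until points collapse onto the modes of the density. A minimal flat-kernel version in feature space is sketched below; the 1-D intensity data, bandwidth, and iteration count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=30):
    """Shift each point to the mean of the data within its bandwidth (flat kernel)."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            dist = np.linalg.norm(points - p, axis=1)
            shifted[i] = points[dist < bandwidth].mean(axis=0)
    return shifted

# Two well-separated intensity clusters collapse onto their respective means
pts = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
modes = mean_shift(pts, bandwidth=1.0)
```

After convergence, pixels sharing a mode belong to the same segment; in the MapReduce setting described above, each mapper can run this procedure independently on its image split, which is what makes the algorithm a natural fit for Hadoop.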

  2. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    PubMed Central

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-01-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated by deeply collaborative multi-CPU/GPU computing. For CPU parallel imaging, the advanced vector extension (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. For GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfer overcome, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging over a single-core CPU by 270 times and achieves real-time imaging, in that the imaging rate exceeds the raw data generation rate. PMID:27070606

  3. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    PubMed

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated by deeply collaborative multi-CPU/GPU computing. For CPU parallel imaging, the advanced vector extension (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. For GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfer overcome, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging over a single-core CPU by 270 times and achieves real-time imaging, in that the imaging rate exceeds the raw data generation rate.

  4. Shade images of forested areas obtained from LANDSAT MSS data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1989-01-01

    The pixel size in present-day remote sensing systems is large enough to include different types of land cover. Depending upon the target area, several components may be present within a pixel. In forested areas, three main components are generally present: tree canopy, soil (understory), and shadow. The objective is to generate a shade (shadow) image of forested areas from multispectral measurements of LANDSAT MSS (Multispectral Scanner) data by implementing a linear mixing model, where shadow is considered one of the primary components in a pixel. The shade images are related to the observed variation in forest structure, i.e., the proportion of inferred shadow in a pixel is related to different forest ages, forest types, and tree crown cover. The Constrained Least Squares (CLS) method is used to generate shade images for eucalyptus forest and cerrado vegetation using LANDSAT MSS imagery over the Itapeva study area in Brazil. The resulting shade images may explain the differences in age for the eucalyptus forest and the differences in tree crown cover for the cerrado vegetation.
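The linear mixing model above expresses each pixel's spectrum as a weighted sum of endmember spectra, with the weights (fractions) constrained. The sketch below solves only the sum-to-one constrained least-squares problem in closed form via a Lagrange multiplier; the paper's CLS additionally enforces nonnegativity, and the endmember spectra and fractions here are invented for illustration.

```python
import numpy as np

def scls_unmix(E, y):
    """Sum-to-one constrained least-squares unmixing (Lagrange closed form)."""
    G = np.linalg.inv(E.T @ E)
    ones = np.ones(E.shape[1])
    x_ls = G @ E.T @ y                         # unconstrained LS fractions
    lam = (1.0 - ones @ x_ls) / (ones @ G @ ones)
    return x_ls + lam * G @ ones               # shift onto the sum-to-one plane

# Toy endmember spectra: 4 bands x 3 components (canopy, soil, shade)
E = np.array([[0.05, 0.30, 0.01],
              [0.08, 0.35, 0.01],
              [0.50, 0.40, 0.02],
              [0.30, 0.45, 0.02]])
f_true = np.array([0.6, 0.3, 0.1])             # fractions summing to one
pixel = E @ f_true                             # noiseless mixed pixel
f_hat = scls_unmix(E, pixel)
```

Applying the solver to every pixel and keeping the shade fraction yields the "shade image" the abstract describes; on noiseless data the constrained solution recovers the true fractions exactly.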

  5. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), widely used for imaging soft samples owing to its distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by sampling below the Shannon rate, but it still requires substantial time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional benefit of enabling real-time image display during SICM imaging. In this article, a new block-partitioning method and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments were carried out to verify the advantage of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.
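    A minimal sketch of the block-CS idea, assuming a dense Gaussian sensing matrix and a textbook orthogonal matching pursuit solver (not the paper's block-partitioning scheme or matrix operation): each block is measured and reconstructed independently, which is what makes progressive, real-time display possible.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                            # block size, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix shared by blocks

# Two blocks of a sparse image, measured and reconstructed independently --
# each block can be displayed as soon as its own reconstruction finishes.
blocks = []
for _ in range(2):
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    blocks.append(x)
recon = [omp(A, A @ x, k) for x in blocks]
```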

  6. Emerging Computer Media: On Image Interaction

    NASA Astrophysics Data System (ADS)

    Lippman, Andrew B.

    1982-01-01

    Emerging technologies such as inexpensive, powerful local computing, optical digital videodiscs, and the technologies of human-machine interaction are initiating a revolution in both image storage systems and image interaction systems. This paper will present a review of new approaches to computer media predicated upon three-dimensional position sensing, speech recognition, and high density image storage. Examples will be shown such as the Spatial Data Management Systems, wherein the free use of place results in intuitively clear retrieval systems and potentials for image association; the Movie-Map, wherein inherently static media generate dynamic views of data; and conferencing work-in-progress, wherein joint processing is stressed. Application to medical imaging will be suggested, but the primary emphasis is on the general direction of imaging and reference systems. We are passing the age of simple possibility of computer graphics and image processing and entering the age of ready usability.

  7. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.

    PubMed

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-11-10

    We propose and demonstrate a spectrally resolved photoluminescence imaging setup based on the so-called single-pixel camera, a compressive sensing technique that enables imaging with a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was provided by laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector, we attained a spectrally resolved photoluminescence camera of unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of the speckle patterns, the pattern fineness, and the number of data points. Finally, we compare the presented technique to hyperspectral imaging based on sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral ranges.
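    The single-pixel measurement model can be sketched as follows; the patterns here are plain uniform random arrays standing in for the laser speckle, and, for simplicity, enough patterns are taken that ordinary least squares suffices (the actual system uses fewer patterns and a compressive-sensing solver, once per spectral channel):

```python
import numpy as np

rng = np.random.default_rng(2)
h, w = 8, 8
n = h * w
scene = rng.random((h, w))        # stand-in for a PL-emitting scene

# Speckle-like nonnegative random patterns; each detector reading is the
# overlap integral of one pattern with the scene (a single number).
patterns = rng.random((n, n))     # n patterns, one flattened per row
y = patterns @ scene.ravel()      # the full measurement series

# With >= n patterns the scene follows from ordinary least squares; a
# compressive setup would use far fewer patterns and a sparse solver.
recon, *_ = np.linalg.lstsq(patterns, y, rcond=None)
recon = recon.reshape(h, w)
```

    Repeating the reconstruction for every wavelength bin recorded by the spectrometer yields the hyperspectral cube.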

  8. Reflective THz and MR imaging of burn wounds: a potential clinical validation of THz contrast mechanisms

    NASA Astrophysics Data System (ADS)

    Bajwa, Neha; Nowroozi, Bryan; Sung, Shijun; Garritano, James; Maccabi, Ashkan; Tewari, Priyamvada; Culjat, Martin; Singh, Rahul; Alger, Jeffry; Grundfest, Warren; Taylor, Zachary

    2012-10-01

    Terahertz (THz) imaging is an expanding area of research in the field of medical imaging due to its high sensitivity to changes in tissue water content. Previously reported in vivo rat studies demonstrate that spatially resolved hydration mapping with THz illumination can be used to rapidly and accurately detect fluid shifts following induction of burns and provide highly resolved spatial and temporal characterization of edematous tissue. THz imagery of partial and full thickness burn wounds acquired by our group correlates well with burn severity and suggests that hydration gradients are responsible for the observed contrast. This research aims to confirm the dominant contrast mechanism of THz burn imaging using a clinically accepted diagnostic method that relies on tissue water content for contrast generation, to support the translation of this technology to clinical application. The hydration contrast sensing capabilities of magnetic resonance imaging (MRI), specifically T2 relaxation times and proton density values N(H), are well established and provide measures of mobile water content, making MRI a suitable method to validate hydration states of skin burns. This paper presents correlational studies performed with MR imaging of ex vivo porcine skin that confirm tissue hydration as the principal sensing mechanism in THz burn imaging. Insights from this preliminary research will be used to lay the groundwork for future, parallel MRI and THz imaging of in vivo rat models to further substantiate the clinical efficacy of reflective THz imaging in burn wound care.

  9. A NDVI assisted remote sensing image adaptive scale segmentation method

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form boundaries of different objects at different scales. However, for remote sensing images, which cover wide areas containing complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing imagery. Many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive-scale segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results show that the NDVI-based adaptive segmentation method can effectively delineate object boundaries for the different ground objects in remote sensing images.
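    The NDVI itself, and the similarity test used to group pixels, reduce to a few lines; the 0.1 similarity threshold below is an illustrative value, not the paper's:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 scene: vegetation reflects strongly in NIR, water/soil does not.
nir = np.array([[0.50, 0.50], [0.10, 0.30]])
red = np.array([[0.08, 0.10], [0.10, 0.25]])
v = ndvi(nir, red)

# The method groups pixels whose NDVI is within a similarity threshold of
# a local reference; a crude version of that test against pixel (0, 0):
similar = np.abs(v - v[0, 0]) < 0.1
```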

  10. Colorizing SENTINEL-1 SAR Images Using a Variational Autoencoder Conditioned on SENTINEL-2 Imagery

    NASA Astrophysics Data System (ADS)

    Schmitt, M.; Hughes, L. H.; Körner, M.; Zhu, X. X.

    2018-05-01

    In this paper, we present an approach for the automatic colorization of SAR backscatter images, which are usually provided as single-channel gray-scale imagery. Using a deep generative model proposed for the purpose of photograph colorization and a Lab-space-based SAR-optical image fusion formulation, we are able to predict artificial color SAR images, which disclose much more information to the human interpreter than the original SAR data. Future work will aim at further adaptation of the employed procedure to our special case of multi-sensor remote sensing imagery. Furthermore, we will investigate whether the low-level representations learned intrinsically by the deep network can be used for SAR image interpretation in an end-to-end manner.

  11. Next-generation spectrometer aids study of Mediterranean

    NASA Astrophysics Data System (ADS)

    Abrams, M. J.; Bianchi, R.; Buongiorno, M. F.

    The Mediterranean region's highly diverse topography, lithology, soils, microclimates, vegetation, and seawater result in a variety of ecosystems. Remote sensing techniques, especially imaging spectrometry, have the potential to provide data for environmental studies on a regional scale in this part of the world. A test deployment of the multispectral infrared and visible imaging spectrometer (MIVIS), a new 102-channel imaging spectrometer, was carried out in Sicily in July 1994. Active volcanoes were surveyed to differentiate volcanic products and determine SO2 emissions in plumes (Figure 1), coastlines were imaged jointly with LIDAR to study pollution, ecosystems at several ocean areas were monitored, vegetated areas were imaged to determine the health of the biota, and archeological sites were studied to reconstruct ancient land use practices. For sites, refer to Figure 2.

  12. Superresolution parallel magnetic resonance imaging: Application to functional and spectroscopic imaging

    PubMed Central

    Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan

    2009-01-01

    Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix, and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low resolution images acquired with multiple receiver coils are combined into a single image with higher spatial resolution, using coil sensitivities acquired with high spatial resolution. The effective acceleration of conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimensions, SURE-SENSE allows acceleration along all encoding directions, for example, two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804
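    The per-pixel unfolding at the heart of standard SENSE, which SURE-SENSE builds upon, can be sketched in one dimension with toy sensitivities and an acceleration factor R = 2; the actual method adds the superresolution formulation and Tikhonov regularization:

```python
import numpy as np

rng = np.random.default_rng(3)
N, C = 8, 4                        # FOV length (1-D example), number of coils
x = rng.random(N)                  # true magnetization profile
S = 0.5 + rng.random((C, N))       # toy coil sensitivity profiles

# R = 2 undersampling folds pixel p onto pixel p + N/2 in each coil image.
Nf = N // 2
folded = np.stack([(S[c] * x)[:Nf] + (S[c] * x)[Nf:] for c in range(C)])

# Unfold: per folded pixel, least-squares solve a C x 2 system built from
# the coil sensitivities (the core of standard SENSE reconstruction).
recon = np.zeros(N)
for p in range(Nf):
    A = S[:, [p, p + Nf]]                        # C x 2 sensitivity matrix
    sol, *_ = np.linalg.lstsq(A, folded[:, p], rcond=None)
    recon[[p, p + Nf]] = sol
```

    With noise-free data and distinct coil profiles the per-pixel systems are well conditioned and the profile is recovered exactly; SURE-SENSE replaces the per-pixel system with one that models sensitivity variation inside each low-resolution voxel.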

  13. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, so identifying the motion blur direction and length accurately is crucial for determining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain the parameters via the Radon transform. However, the serious noise present in actual remote sensing images often obscures these stripes, making the parameters difficult to calculate and the resulting errors relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used to calculate the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
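    The direction-finding step can be illustrated with a crude Radon-style search: rotate the spectrum, sum its columns, and pick the angle that maximizes the variance of the projection. The synthetic "spectrum" below is a single bright stripe; the paper additionally segments the real spectrum with GrabCut before this step:

```python
import numpy as np
from scipy.ndimage import rotate

def projection_variance(img, angle):
    """Variance of the column sums of the image rotated by `angle` degrees
    (a discrete stand-in for one Radon projection)."""
    r = rotate(img, angle, reshape=False, order=1)
    return np.var(r.sum(axis=0))

# Toy spectrum with one bright vertical stripe, mimicking the light-and-dark
# stripes a linear motion blur leaves in the Fourier magnitude.
spec = np.zeros((65, 65))
spec[:, 32] = 1.0

angles = np.arange(0, 180, 15)
scores = [projection_variance(spec, a) for a in angles]
est = angles[int(np.argmax(scores))]   # estimated stripe (blur) orientation
```

    When the projection direction lines up with the stripes, all of the stripe energy falls into a few projection bins, so the projection variance peaks at the blur direction.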

  14. Forest cover type analysis of New England forests using innovative WorldView-2 imagery

    NASA Astrophysics Data System (ADS)

    Kovacs, Jenna M.

    For many years, remote sensing has been used to generate land cover type maps to create a visual representation of what is occurring on the ground. One significant use of remote sensing is the identification of forest cover types. New England forests are notorious for their especially complex forest structure and, as a result, have been, and continue to be, a challenge to classify into forest cover types. To most accurately depict forest cover types occurring on the ground, it is essential to utilize image data that have a suitable combination of both spectral and spatial resolution. The WorldView-2 (WV2) commercial satellite, launched in 2009, is the first of its kind, having both high spectral and spatial resolutions. WV2 records eight bands of multispectral imagery, four more than the usual high spatial resolution sensors, and has a pixel size of 1.85 meters at nadir. These additional bands have the potential to improve the classification detail and classification accuracy of forest cover type maps. For this reason, WV2 imagery was utilized on its own, and in combination with Landsat 5 TM (LS5) multispectral imagery, to evaluate whether these image data could more accurately classify forest cover types. In keeping with recent developments in image analysis, an Object-Based Image Analysis (OBIA) approach was used to segment images of Pawtuckaway State Park and nearby private lands, an area representative of the typical complex forest structure found in the New England region. A Classification and Regression Tree (CART) analysis was then used to classify image segments at two levels of classification detail. Accuracies for each forest cover type map produced were assessed using traditional and area-based error matrices, along with additional standard accuracy measures (e.g., kappa). 
The results from this study show that there is value in analyzing imagery with both high spectral and spatial resolutions, and that WV2's new and innovative bands can be useful for the classification of complex forest structures.

  15. Evaluation Digital Elevation Model Generated by Synthetic Aperture Radar Data

    NASA Astrophysics Data System (ADS)

    Makineci, H. B.; Karabörk, H.

    2016-06-01

    A digital elevation model (DEM), showing the physical topography of the earth, is a three-dimensional digital model obtained from surface elevations by using an appropriate interpolation method. DEMs are used in many areas such as management of natural resources, engineering and infrastructure projects, disaster and risk analysis, archaeology, security, aviation, forestry, energy, topographic mapping, landslide and flood analysis, and Geographic Information Systems (GIS). Digital elevation models, which are fundamental components of cartography, can be produced by many methods: in general, they are obtained by terrestrial surveying or by digitizing existing maps. Today, DEM data are generated by processing stereo optical satellite images, radar images (radargrammetry, interferometry), and lidar data, using remote sensing and photogrammetric techniques supported by improving technology. Radar, one of the fundamental components of remote sensing, is now very advanced and is used increasingly in various fields; determining the shape of the topography and creating digital elevation models are among its primary applications. This work aims to evaluate the quality of a DEM derived from a Sentinel-1A SAR image, provided by the European Space Agency (ESA) and acquired in Interferometric Wide Swath imaging mode in C band, against DTED-2 (Digital Terrain Elevation Data). The evaluation uses the root mean square (RMS) statistical method to quantify the precision of the data. The results show that the variance of the elevation differences decreases markedly from mountainous areas to plains.
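    The RMS-based accuracy check reduces to comparing the evaluated DEM against the reference, elevation by elevation; the terrain and noise levels below are invented solely to mimic the reported mountain-versus-plain behavior:

```python
import numpy as np

def rmse(dem, ref):
    """Root-mean-square error between an evaluated DEM and a reference DEM."""
    d = np.asarray(dem, float) - np.asarray(ref, float)
    return float(np.sqrt(np.mean(d ** 2)))

# Toy reference terrain: a 'mountain' half and a 'plain' half, with the
# evaluated DEM noisier over the mountainous part (illustrative values).
rng = np.random.default_rng(4)
ref = np.concatenate([np.full(500, 1200.0), np.full(500, 150.0)])
dem = ref + np.concatenate([8.0 * rng.standard_normal(500),
                            1.5 * rng.standard_normal(500)])

err_mountain = rmse(dem[:500], ref[:500])
err_plain = rmse(dem[500:], ref[500:])
```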

  16. Optical classification for quality and defect analysis of train brakes

    NASA Astrophysics Data System (ADS)

    Glock, Stefan; Hausmann, Stefan; Gerke, Sebastian; Warok, Alexander; Spiess, Peter; Witte, Stefan; Lohweg, Volker

    2009-06-01

    In this paper we present an optical measurement system approach for the quality analysis of brakes used in high-speed trains. The brakes consist of so-called brake discs and pads. In a deceleration process the discs are heated up to 500°C. The quality measure is based on the fact that the heated brake discs should not develop hot spots inside the brake material; instead, the brake disc should be heated homogeneously by the deceleration. It therefore makes sense to analyze the number of hot spots and their relative gradients to create a quality measure for train brakes. In this contribution we present a new approach for a quality measurement system based on image analysis and classification of infrared heat images. Brake images represented in pseudo-color are first transformed into a linear grayscale space via a hue-saturation-intensity (HSI) transform. This transform is necessary for the following gradient analysis, which is based on gray-scale gradient filters. Furthermore, different features based on Haralick's measures are generated from the gray-scale and gradient images. A Fuzzy-Pattern-Classifier is then used for the classification of good and bad brakes. It has to be pointed out that the classifier returns a score value between 0 and 100 % good quality for each brake. This guarantees that good and bad brakes can not only be distinguished, but that their quality can also be graded. The results show that all critical thermal patterns of train brakes can be sensed and verified.
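    The hot-spot counting step can be sketched with simple thresholding and connected-component labeling; the threshold and the toy thermal image are illustrative, and the paper's pipeline additionally uses gradient filters and Haralick features:

```python
import numpy as np
from scipy.ndimage import label

def count_hot_spots(gray, threshold):
    """Count connected regions whose intensity exceeds `threshold`
    (a stand-in for hot-spot detection on the HSI-derived gray image)."""
    mask = gray > threshold
    _, n = label(mask)
    return n

# Toy gray-scale thermal image: a warm background with two hot spots.
img = np.full((20, 20), 0.3)
img[3:6, 3:6] = 0.9
img[12:15, 14:17] = 0.8
n_spots = count_hot_spots(img, 0.6)
```

    A score for the fuzzy classifier could then be derived from the number of spots and their intensity gradients; here only the counting is shown.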

  17. Restoration of color in a remote sensing image and its quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper focuses on the restoration of color remote sensing images (including airborne photographs). A complete approach is recommended, in which two main aspects are addressed: restoration of spatial information and restoration of photometric information. The restoration of spatial information is performed by using the modulation transfer function (MTF) as the degradation function, where the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information is performed by an improved local maximum entropy algorithm. Furthermore, a practical approach to processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible light bands, the three images are processed separately, and they are then re-synthesized under psychological color vision constraints. Finally, three novel evaluation variables are derived to assess the restoration quality in terms of both spatial and photometric restoration. An evaluation is provided at the end.
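    The band-splitting and MTF-based restoration can be sketched with a standard Wiener filter, using a known PSF's transfer function in place of the measured MTF; the paper measures the MTF from edge curves and adds a local maximum entropy step, neither of which is reproduced here:

```python
import numpy as np

def wiener_restore(channel, psf, k=1e-4):
    """Frequency-domain Wiener restoration of one color channel, with the
    PSF's transfer function playing the role of the measured MTF."""
    H = np.fft.fft2(psf, s=channel.shape)
    Y = np.fft.fft2(channel)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(5)
image = rng.random((32, 32, 3))        # toy RGB remote sensing patch
psf = np.ones((3, 3)) / 9.0            # toy blur; the real MTF comes from edges

# Split into the three visible bands, degrade, restore each separately,
# then re-synthesize -- the processing pattern recommended in the paper.
blurred = np.stack([np.real(np.fft.ifft2(np.fft.fft2(image[..., b]) *
                                         np.fft.fft2(psf, s=(32, 32))))
                    for b in range(3)], axis=-1)
restored = np.stack([wiener_restore(blurred[..., b], psf)
                     for b in range(3)], axis=-1)
```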

  18. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    NASA Astrophysics Data System (ADS)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  19. Satellite Image Classification of Building Damages Using Airborne and Satellite Image Samples in a Deep Learning Approach

    NASA Astrophysics Data System (ADS)

    Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G.

    2018-05-01

    The localization and detailed assessment of damaged buildings after a disastrous event is of utmost importance to guide response operations, recovery tasks or for insurance purposes. Several remote sensing platforms and sensors are currently used for the manual detection of building damages. However, there is an overall interest in the use of automated methods to perform this task, regardless of the used platform. Owing to its synoptic coverage and predictable availability, satellite imagery is currently used as input for the identification of building damages by the International Charter, as well as the Copernicus Emergency Management Service for the production of damage grading and reference maps. Recently proposed methods to perform image classification of building damages rely on convolutional neural networks (CNN). These are usually trained with only satellite image samples in a binary classification problem; however, the number of samples derived from these images is often limited, affecting the quality of the classification results. The use of up/down-sampled image samples during the training of a CNN has been demonstrated to improve several image recognition tasks in remote sensing. However, it is currently unclear if this multi-resolution information can also be captured from images with different spatial resolutions, such as satellite and airborne imagery (from both manned and unmanned platforms). In this paper, a CNN framework using residual connections and dilated convolutions is used, considering both manned and unmanned aerial image samples, to perform the satellite image classification of building damages. Three network configurations, trained with multi-resolution image samples, are compared against two benchmark networks where only satellite image samples are used. 
Combining the feature maps generated from airborne and satellite image samples, and refining these using only the satellite image samples, improved the overall satellite image classification of building damages by nearly 4 %.

  20. Prototype of a laser guide star wavefront sensor for the Extremely Large Telescope

    NASA Astrophysics Data System (ADS)

    Patti, M.; Lombini, M.; Schreiber, L.; Bregoli, G.; Arcidiacono, C.; Cosentino, G.; Diolaiti, E.; Foppiani, I.

    2018-06-01

    The new class of large telescopes, like the future Extremely Large Telescope (ELT), are designed to work with a laser guide star (LGS) tuned to a resonance of atmospheric sodium atoms. This wavefront sensing technique presents complex issues when applied to large telescopes for many reasons, mainly linked to the finite distance of the LGS, the launching angle, tip-tilt indetermination and focus anisoplanatism. The implementation of a laboratory prototype for the LGS wavefront sensor (WFS) at the beginning of the phase study of MAORY (Multi-conjugate Adaptive Optics Relay) for ELT first light has been indispensable for investigating specific mitigation strategies for the LGS WFS issues. This paper presents the test results of the LGS WFS prototype under different working conditions. The accuracy with which the LGS images are generated on the Shack-Hartmann WFS has been cross-checked against the MAORY simulation code. The experiments show the effect of noise on centroiding precision, the impact of LGS image truncation on wavefront sensing accuracy, as well as the temporal evolution of the sodium density profile and LGS image under-sampling.
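    The centroiding and truncation effects studied with the prototype can be illustrated on a single synthetic subaperture spot; the Gaussian spot model and the crop position are arbitrary choices, not the prototype's parameters:

```python
import numpy as np

def centroid(spot):
    """Center of gravity of one Shack-Hartmann subaperture image."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (ys * spot).sum() / total, (xs * spot).sum() / total

def gaussian_spot(shape, cy, cx, sigma=1.5):
    """Synthetic subaperture spot centered at (cy, cx)."""
    ys, xs = np.indices(shape)
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

spot = gaussian_spot((15, 15), 7.0, 9.0)
cy, cx = centroid(spot)

# Truncating the spot (here: cropping the rightmost columns, as happens to
# an elongated LGS image at the subaperture edge) biases the centroid
# toward the kept side -- the truncation effect studied in the paper.
cy_t, cx_t = centroid(spot[:, :10])
```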

  1. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.

    PubMed

    Li, Linyi; Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolution of remote sensing images has improved greatly. However, a higher spatial resolution image does not always lead to a better result in automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm is proposed, which extracts visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) is then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images, and achieved more accurate classification results than existing methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  2. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    PubMed Central

    Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolution of remote sensing images has improved greatly. However, a higher spatial resolution image does not always lead to a better result in automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm is proposed, which extracts visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) is then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images, and achieved more accurate classification results than existing methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images. PMID:28761440

  3. An earth imaging camera simulation using wide-scale construction of reflectance surfaces

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk

    2013-10-01

    Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.

  4. Research on Method of Interactive Segmentation Based on Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Li, H.; Han, Y.; Yu, F.

    2017-09-01

    In this paper, we aim to solve the object extraction problem in remote sensing images using interactive segmentation tools. First, an overview of interactive segmentation algorithms is presented. Then, our detailed implementation of intelligent scissors and GrabCut for remote sensing images is described. Finally, several experiments on typical features (water areas, vegetation) in remote sensing images are performed. Compared with manual results, the experiments indicate that our tools maintain good feature boundaries and show good performance.

  5. Geometric correction of synchronous scanned Operational Modular Imaging Spectrometer II hyperspectral remote sensing images using spatial positioning data of an inertial navigation system

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaohu; Neubauer, Franz; Zhao, Dong; Xu, Shichao

    2015-01-01

    High-precision geometric correction of airborne hyperspectral remote sensing images is a difficult problem, and conventional remote sensing correction methods based on selecting ground control points are not suitable for airborne hyperspectral images. An optical scanning system with an inertial measurement unit combined with a differential global positioning system (IMU/DGPS) is introduced to correct synchronously scanned Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing images. First, posture parameters synchronized with OMIS II were obtained from the IMU/DGPS. Second, coordinate conversion and flight attitude parameter calculations were conducted. Third, according to the imaging principle of OMIS II, a mathematical correction was applied and the corrected image pixels were resampled. Better image processing results were thereby achieved.

  6. A spectral-structural bag-of-features scene classifier for very high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. Land-use classification is hard to address with land-cover classification techniques, due to the complexity of land-use scenes. Scene classification is considered a promising way to address the land-use classification issue. The commonly used scene classification methods for VHSR imagery are all derived from the computer vision community, where they mainly deal with terrestrial image recognition. Differing from terrestrial images, VHSR images are taken looking down from airborne and spaceborne sensors, which leads to distinct lighting conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes the first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of the VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. From the experimental results, the spectral information works better than the structural information, while the combination of the spectral and structural information is better than either single type of information. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification. 
The experimental results show that using the whole image as the scope of the pooling operator performs better than the scope generated by SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interference between them. The coding vectors of the spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector with a support vector machine (SVM) using a histogram intersection kernel (HIK). Compared with the latest scene classification methods, experimental results on three VHSR datasets demonstrate that the proposed SSBFC performs better than the other classification methods for VHSR image scenes.
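
    As a rough illustration of two of the building blocks named above, the MeanStd spectral descriptor and the histogram intersection kernel, the following NumPy sketch is ours, not the authors' code; array shapes and function names are assumptions:

```python
import numpy as np

def meanstd_descriptor(patch):
    """First- and second-order statistics (mean, std) per spectral band.
    patch: (H, W, bands) array -> (2 * bands,) descriptor."""
    mean = patch.mean(axis=(0, 1))
    std = patch.std(axis=(0, 1))
    return np.concatenate([mean, std])

def histogram_intersection_kernel(X, Y):
    """HIK Gram matrix between coding vectors X (n, d) and Y (m, d):
    K[i, j] = sum_d min(X[i, d], Y[j, d])."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)

# Toy usage: a 4-band 8x8 patch and three orthogonal coding vectors
patch = np.random.rand(8, 8, 4)
desc = meanstd_descriptor(patch)                      # length 8 = 2 * 4 bands
K = histogram_intersection_kernel(np.eye(3), np.eye(3))
```

    The descriptor simply stacks per-band means and standard deviations; the HIK Gram matrix sums element-wise minima between coding vectors, which is the kernel SSBFC feeds to the SVM.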

  7. Segmentation of Polarimetric SAR Images Using Wavelet Transformation and Texture Features

    NASA Astrophysics Data System (ADS)

    Rezaeian, A.; Homayouni, S.; Safari, A.

    2015-12-01

    Polarimetric Synthetic Aperture Radar (PolSAR) sensors can collect useful observations of the earth's surface and phenomena for various remote sensing applications, such as land cover mapping, change detection and target detection. These data can be acquired without the limitations of weather conditions, sun illumination and dust particles. As a result, SAR images, and in particular PolSAR images, are powerful tools for various environmental applications. Unlike optical images, SAR images suffer from unavoidable speckle, which makes their segmentation difficult. In this paper, we use the wavelet transform for segmentation of PolSAR images. Our proposed method is based on multi-resolution analysis of texture features derived from the wavelet transform, using both gray-level and texture information. First, we produce coherency or covariance matrices and then generate a span image from them. In the next step, texture features are extracted from the sub-bands generated by the discrete wavelet transform (DWT). Finally, the PolSAR image is segmented using clustering methods such as fuzzy c-means (FCM) and k-means. We applied the proposed methodology to full polarimetric SAR images acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) L-band system in July 2012 over an agricultural area in Winnipeg, Canada.
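
    A minimal sketch of the core pieces, a one-level Haar DWT, sub-band texture energies and k-means clustering, is given below; it omits the PolSAR-specific span image and coherency matrices, and all names are our own assumptions:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def texture_energy(img):
    """Texture feature vector: mean absolute energy of each sub-band."""
    return np.array([np.abs(b).mean() for b in haar_dwt2(img)])

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means on feature vectors X (n, d); returns cluster labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

    In a pipeline like the one described above, each image window would be mapped to a sub-band texture vector and the vectors clustered to form segments.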

  8. Remote Sensing Image Fusion Method Based on Nonsubsampled Shearlet Transform and Sparse Representation

    NASA Astrophysics Data System (ADS)

    Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao

    2015-12-01

    Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. First, the low-resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high-resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by NSST into high- and low-frequency coefficients. The low-frequency coefficients of the PAN image and the intensity component are fused by SR with a learned dictionary, while the high-frequency coefficients are fused by a local-energy-based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and inverse IHS transforms. Experimental results on IKONOS and QuickBird images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, in both visual effect and objective evaluation.
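
    The IHS component-substitution step described above can be sketched as follows; this is a simplified illustration (intensity taken as the RGB mean), not the NSST/SR method itself:

```python
import numpy as np

def ihs_fuse(ms_up, pan):
    """Simplified IHS pansharpening: replace the intensity of the
    upsampled MS image (H, W, 3) with the PAN image (H, W).
    The intensity component is approximated by the mean of R, G, B."""
    intensity = ms_up.mean(axis=2)
    return ms_up + (pan - intensity)[:, :, None]

# Toy usage: if PAN equals the MS intensity, the image is unchanged
ms = np.random.rand(4, 4, 3)
fused = ihs_fuse(ms, ms.mean(axis=2))
```

    The full method in the paper operates on NSST coefficients of the intensity and PAN components rather than substituting them directly.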

  9. Present Practice And Perceived Needs-Managing Diagnostic Images

    NASA Astrophysics Data System (ADS)

    Vanden Brink, John A.

    1982-01-01

    With the advent of digital radiography, an installed base of CT, nuclear medicine and ultrasound scanners numbering in the thousands, and the potential of NMR, the electronic management of digital images is perhaps one of the most exciting, fastest growing (and most ill-defined) markets in medicine today. New technology in optical data storage, electronic transmission, image reproduction, microprocessing, automation and software development promises a whole new generation of products which will simplify and enhance the diagnostic process (thereby, it is hoped, improving diagnostic accuracy), enable practical archival review, expand the availability of diagnostic data and lower the cost per case by at least an order of magnitude.

  10. Early development in synthetic aperture lidar sensing and processing for on-demand high resolution imaging

    NASA Astrophysics Data System (ADS)

    Bergeron, Alain; Turbide, Simon; Terroux, Marc; Marchese, Linda; Harnisch, Bernd

    2017-11-01

    The quest for real-time high resolution is of prime importance for surveillance applications, especially in disaster management and rescue missions. Synthetic aperture radar provides meter-range resolution images in all weather conditions, but since it is often installed on satellites, the revisit time can be too long to support real-time operations on the ground. Synthetic aperture lidar can be lightweight and offers centimeter-range resolution; onboard an airplane or unmanned aerial vehicle, this technology would allow for timelier reconnaissance. INO has developed a synthetic aperture lidar table-top prototype and further used a real-time optronic processor to fulfill image generation on demand. The early positive results obtained with both technologies are presented in this paper.

  11. A Plane Target Detection Algorithm in Remote Sensing Images based on Deep Learning Network Technology

    NASA Astrophysics Data System (ADS)

    Shuxin, Li; Zhilong, Zhang; Biao, Li

    2018-01-01

    The plane is an important target category in remote sensing, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, so more detailed information is available for detecting remote sensing targets automatically. Deep learning is the most advanced technology in image target detection and recognition, and it has provided great performance improvements for target detection and recognition in everyday scenes. We apply this technology to remote sensing target detection and propose an algorithm with an end-to-end deep network, which learns from remote sensing images to detect the targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of plane targets and performs better in target detection than older methods.

  12. A light and faster regional convolutional neural network for object detection in optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan

    2018-07-01

    Detection of objects in satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances, and objects in satellite remote sensing images can now be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the original Faster Regional CNN framework does not yield suitably high precision on them. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net and improve precision. We also propose an approach to reduce the test-time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.
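
    The multi-scale representation mentioned above can be sketched, under our own assumptions, as pooling a feature map at several scales and concatenating the results:

```python
import numpy as np

def avg_pool(fmap, s):
    """Average-pool a (H, W) feature map with non-overlapping s x s windows."""
    H, W = fmap.shape
    return fmap[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).mean(axis=(1, 3))

def multiscale_descriptor(fmap, scales=(1, 2, 4)):
    """Concatenate flattened poolings at several scales into one vector,
    so fine detail and coarse context are represented together."""
    return np.concatenate([avg_pool(fmap, s).ravel() for s in scales])

fmap = np.arange(16, dtype=float).reshape(4, 4)
desc = multiscale_descriptor(fmap)  # lengths 16 + 4 + 1 = 21
```

    In the paper's setting this combination happens inside the CNN feature hierarchy rather than on a single raw map; this sketch only shows the multi-scale pooling idea.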

  13. Ontology-based classification of remote sensing images using spectral rules

    NASA Astrophysics Data System (ADS)

    Andrés, Samuel; Arvor, Damien; Mougenot, Isabelle; Libourel, Thérèse; Durieux, Laurent

    2017-05-01

    Earth Observation data are of great interest for a wide spectrum of scientific domains. Enhanced access to remote sensing images for domain experts thus represents a great advance, since it allows users to interpret remote sensing images based on their domain expert knowledge. However, such an advantage can also turn into a major limitation if this knowledge is not formalized and is thus difficult to share with, and be understood by, other users. In this context, knowledge representation techniques such as ontologies should play a major role in the future of remote sensing applications. We implemented an ontology-based prototype to automatically classify Landsat images based on explicit spectral rules. The ontology is designed in a very modular way in order to achieve a generic and versatile representation of the concepts we consider of utmost importance in remote sensing. The prototype was tested on four subsets of Landsat images, and the results confirmed the potential of ontologies to formalize expert knowledge and classify remote sensing images.
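
    A toy illustration of classification by explicit spectral rules follows; the rules and thresholds are invented for illustration and are not the paper's ontology:

```python
def classify_pixel(bands):
    """Toy spectral rules on a dict of band reflectances in [0, 1].
    Thresholds are illustrative, not the paper's rule set."""
    ndvi = (bands["nir"] - bands["red"]) / (bands["nir"] + bands["red"] + 1e-9)
    if ndvi > 0.4:
        return "vegetation"
    if bands["nir"] < 0.1 and bands["red"] < 0.1:
        return "water"
    return "bare_soil"

label = classify_pixel({"red": 0.05, "nir": 0.5})
```

    The appeal of the ontology-based approach is that such rules are stated declaratively and shared, rather than buried in code as they are here.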

  14. Secure and Energy-Efficient Data Transmission System Based on Chaotic Compressive Sensing in Body-to-Body Networks.

    PubMed

    Peng, Haipeng; Tian, Ye; Kurths, Jurgen; Li, Lixiang; Yang, Yixian; Wang, Daoshun

    2017-06-01

    Applications of wireless body area networks (WBANs) have been extended from remote health care to the military, sports, disaster relief, etc. With the network scale expanding, nodes increasing and links becoming more complicated, a WBAN evolves into a body-to-body network. Along with this development, energy saving and data security problems are highlighted. In this paper, chaotic compressive sensing (CCS) is proposed to solve these two crucial problems simultaneously. Compared with traditional compressive sensing, CCS can save vast storage space by storing only the matrix generation parameters. Additionally, the sensitivity of chaos can improve the security of data transmission. Aimed at image transmission, a modified CCS is proposed, which uses two encryption mechanisms, confusion and mask, and achieves much better encryption quality. Simulation is conducted to verify the feasibility and effectiveness of the proposed methods. The results show that energy efficiency and security are strongly improved while storage space is saved, and the secret key is extremely sensitive: a [Formula: see text] perturbation of the secret key leads to a totally different decoding, with a relative error larger than 100%. Particularly for image encryption, the performance of the modified method is excellent: the adjacent pixel correlation is smaller than 0.04 in different directions (horizontal, vertical, and diagonal), and the entropy of the cipher image with a 256-level gray value is larger than 7.98.
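
    The storage saving comes from regenerating the measurement matrix from a few chaos parameters rather than storing the matrix itself. A sketch using the logistic map (the map and parameter choices are our assumption, not necessarily the paper's):

```python
import numpy as np

def chaotic_matrix(m, n, x0=0.3, r=3.99, skip=1000):
    """Regenerate an m x n measurement matrix from logistic-map
    parameters (x0, r) instead of storing the full matrix."""
    x = x0
    for _ in range(skip):            # discard the transient
        x = r * x * (1 - x)
    vals = np.empty(m * n)
    for i in range(m * n):
        x = r * x * (1 - x)
        vals[i] = x
    return (vals.reshape(m, n) - 0.5) / np.sqrt(m)   # center and scale

A1 = chaotic_matrix(4, 16)
A2 = chaotic_matrix(4, 16)                 # same parameters: identical matrix
B = chaotic_matrix(4, 16, x0=0.3000001)    # tiny key change: different matrix
```

    Only (x0, r, skip) need to be stored or transmitted, and the chaotic sensitivity to x0 is what makes the parameters act as a secret key.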

  15. Hardware-in-the-loop tow missile system simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waldman, G.S.; Wootton, J.R.; Hobson, G.L.

    1993-07-06

    A missile system simulator is described for use in training people for target acquisition, missile launch, and missile guidance under simulated battlefield conditions comprising: simulating means for producing a digital signal representing a simulated battlefield environment including at least one target movable therewithin, the simulating means generating an infrared map representing the field-of-view and the target; interface means for converting said digital signals to an infrared image; missile system hardware including the missile acquisition, tracking, and guidance portions thereof, said hardware sensing the infrared image to determine the location of the target in a field-of-view; and, image means for generating an infrared image of a missile launched at the target and guided thereto, the image means imposing the missile image onto the field-of-view for the missile hardware to acquire the image of the missile in addition to that of the target, and to generate guidance signals to guide the missile image to the target image, wherein the interfacing means is responsive to a guidance signal from the hardware to simulate, in real-time, the response of the missile to the guidance signal, the image means including a blackbody, laser means for irradiating the blackbody to heat it to a temperature at which it emits infrared radiation, and optic means for integrating the radiant image produced by heating the blackbody into the infrared map.

  16. Automated railroad reconstruction from remote sensing image based on texture filter

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Lu, Kaixia

    2018-03-01

    Techniques of remote sensing have improved incredibly in recent years, and very accurate results and high-resolution images can be acquired, which opens possible ways to use such data to reconstruct railroads. In this paper, an automated railroad reconstruction method from remote sensing images based on the Gabor filter is proposed. The method is divided into three steps. Firstly, the edge-oriented railroad characteristics (such as line features) in a remote sensing image are detected using the Gabor filter. Secondly, two response images with filtering orientations perpendicular to each other are fused to suppress noise and acquire a long, smooth stripe region of railroads. Thirdly, a set of smooth regions is extracted by computing a global threshold for the previous result image using Otsu's method and then converting it to a binary image based on that threshold. This workflow was tested on a set of remote sensing images and was found to deliver very accurate results in a quick and highly automated manner.
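
    Two of the building blocks above, a Gabor kernel and Otsu's global threshold, can be sketched as follows (all parameter values are illustrative assumptions):

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0, gamma=0.5):
    """Real part of a Gabor filter oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class-0 weight per candidate
    mu = np.cumsum(p * centers)             # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]

# Toy usage: a bimodal "image" thresholds between its two modes
img = np.concatenate([np.full(100, 0.1), np.full(100, 0.9)])
t = otsu_threshold(img)
binary = img > t
```

    In the workflow above, the binary image obtained from the fused Gabor responses is what yields the candidate railroad regions.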

  17. Detection of mesoscale zones of atmospheric instabilities using remote sensing and weather forecasting model data

    NASA Astrophysics Data System (ADS)

    Winnicki, I.; Jasinski, J.; Kroszczynski, K.; Pietrek, S.

    2009-04-01

    The paper presents elements of research conducted in the Faculty of Civil Engineering and Geodesy of the Military University of Technology, Warsaw, Poland, concerning the application of mesoscale models and remote sensing data to determining meteorological conditions of aircraft flight directly related to atmospheric instabilities. The quality of meteorological support of aviation depends on prompt and effective forecasting of changes in weather conditions. The paper presents a computer module for detecting and monitoring zones of cloud cover, precipitation and turbulence along the aircraft flight route. It consists of programs and scripts for managing, processing and visualizing meteorological and remote sensing databases. The application was developed in Matlab® for Windows®. The module uses products of the COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) mesoscale non-hydrostatic model of the atmosphere developed by the US Naval Research Laboratory, the satellite image acquisition system for MSG-2 (Meteosat Second Generation) of the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), and meteorological radar data acquired from the Institute of Meteorology and Water Management (IMGW), Warsaw, Poland. The satellite image acquisition system and the COAMPS model are run operationally in the Faculty of Civil Engineering and Geodesy. The mesoscale model is run on an IA64 Feniks multiprocessor 64-bit computer cluster. The basic task of the module is to enable a complex analysis of data sets with miscellaneous information structures and to verify COAMPS results using satellite and radar data. The research uses a uniform cartographic projection for all elements of the database: satellite and radar images are transformed into the Lambert Conformal projection of COAMPS. This facilitates simultaneous interpretation and supports the decision-making process for safe execution of flights.
Forecasts are based on horizontal distributions and vertical profiles of meteorological parameters produced by the module. Verification of the forecasts includes research on spatial and temporal correlations between structures generated by the model, e.g. cloudiness and meteorological phenomena (fogs, precipitation, turbulence), and structures identified on current satellite images. The developed module determines meteorological parameter fields for vertical profiles of the atmosphere. Interpolation procedures run at user-selected standard (pressure) or height levels of the model make it possible to determine weather conditions along any aircraft route. Basic parameters of the procedures determining, e.g., flight safety include cloud base, visibility, cloud cover, turbulence coefficient, icing and precipitation intensity. Determining icing and turbulence characteristics is based on standard and new methods (from other mesoscale models). The research also includes investigating new-generation mesoscale models, especially remote sensing data assimilation, which is required by the necessity to develop and introduce objective methods of forecasting weather conditions. Current research in the Faculty of Civil Engineering and Geodesy concerns validation of the mesoscale module's performance.

  18. Some Defence Applications of Civilian Remote Sensing Satellite Images

    DTIC Science & Technology

    1993-11-01

    This report is on a pilot study to demonstrate some of the capabilities of remote sensing in intelligence gathering. A wide variety of issues, both...colour images. The procedure will be presented in a companion report. Remote sensing, Satellite imagery, Image analysis, Military applications, Military intelligence.

  19. Landsat 3 return beam vidicon response artifacts

    USGS Publications Warehouse

    ,; Clark, B.

    1981-01-01

    The return beam vidicon (RBV) sensing systems employed aboard Landsats 1, 2, and 3 have all been similar in that they have utilized vidicon tube cameras. These are not mirror-sweep scanning devices such as the multispectral scanner (MSS) sensors that have also been carried aboard the Landsat satellites. The vidicons operate more like common television cameras, using an electron gun to read images from a photoconductive faceplate. In the case of Landsats 1 and 2, the RBV system consisted of three such vidicons which collected remote sensing data in three distinct spectral bands. Landsat 3, however, utilizes just two vidicon cameras, both of which sense data in a single broad band. The Landsat 3 RBV system additionally has a unique configuration. As arranged, the two cameras can be shuttered alternately, twice each, in the same time it takes for one MSS scene to be acquired. This shuttering sequence results in four RBV "subscenes" for every MSS scene acquired, similar to the four quadrants of a square. See Figure 1. Each subscene represents a ground area of approximately 98 by 98 km. The subscenes are designated A, B, C, and D, for the northwest, northeast, southwest, and southeast quarters of the full scene, respectively. RBV data products are normally ordered, reproduced, and sold on a subscene basis and are in general referred to in this way. Each exposure from the RBV camera system presents an image which is 98 km on a side. When these analog video data are subsequently converted to digital form, the picture element, or pixel, that results is 19 m on a side with an effective resolution element of 30 m. This pixel size is substantially smaller than that obtainable in MSS images (the MSS has an effective resolution element of 73.4 m), and, when RBV images are compared to equivalent MSS images, better resolution in the RBV data is clearly evident.
It is for this reason that the RBV system can be a valuable tool for remote sensing of earth resources. Until recently, RBV imagery was processed directly from wideband video tape data onto 70-mm film. This changed in September 1980 when digital production of RBV data at the NASA Goddard Space Flight Center (GSFC) began. The wideband video tape data are now subjected to analog-to-digital preprocessing and corrected both radiometrically and geometrically to produce high-density digital tapes (HDT's). The HDT data are subsequently transmitted via satellite (Domsat) to the EROS Data Center (EDC) where they are used to generate 241-mm photographic images at a scale of 1:500,000. Computer-compatible tapes of the data are also generated as digital products. Of the RBV data acquired since September 1, 1980, approximately 2,800 subscenes per month have been processed at EDC.
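
    A quick back-of-the-envelope check of the figures quoted above (98 km subscenes, 19 m pixels, 30 m vs 73.4 m resolution elements):

```python
# Each RBV subscene covers ~98 km on a side with 19 m pixels,
# so a digitized subscene is roughly 98000 / 19 ~ 5158 pixels square.
subscene_km = 98
pixel_m = 19
pixels_per_side = subscene_km * 1000 / pixel_m

# The MSS effective resolution element (73.4 m) is about 2.4x coarser
# than the RBV's 30 m, which is why RBV detail is clearly better.
resolution_ratio = 73.4 / 30
```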

  20. NDSI products system based on Hadoop platform

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Jiang, He; Yang, Xiaoxia; Geng, Erhui

    2015-12-01

    Snow is the solid state of water on earth and plays an important role in human life. Satellite remote sensing is significant for snow extraction, with the advantages of being cyclical, macroscopic, comprehensive, objective and timely. With the continuous development of remote sensing technology, remote sensing data now come from multiple platforms, multiple sensors and multiple perspectives, and the demand for compute-intensive processing of these data is gradually increasing. However, current production systems for remote sensing products run in a serial mode, are mostly used by professional remote sensing researchers, and rarely achieve automatic or semi-automatic production. Facing massive remote sensing data, the traditional serial production system, with its low efficiency, can hardly meet the requirement of processing mass data timely and efficiently. In order to effectively improve the production efficiency of NDSI products and meet the demand for timely and efficient processing of large-scale remote sensing data, this paper builds an NDSI product production system based on the Hadoop platform. The system mainly includes a remote sensing image management module, an NDSI production module, and a system service module. The main research contents and results are as follows: (1) The remote sensing image management module includes two parts, image import and image metadata management. It imports masses of base IRS images and NDSI product images (the output of the system's production tasks) into the HDFS file system; at the same time, it reads the corresponding orbit row/column number, maximum/minimum longitude and latitude, product date, HDFS storage path, Hadoop task ID (for NDSI products) and other metadata, creates thumbnails, assigns a unique ID to each record, and imports it into the base/product image metadata database.
(2) The NDSI production module includes two parts: index calculation, and production task submission and monitoring. It reads the HDF images related to a production task as a byte stream and uses the Beam library to parse the image byte stream into Product form; it uses the MapReduce distributed framework to perform production tasks while monitoring task status; when a production task completes, it calls the remote sensing image management module to store the NDSI products. (3) The system service module includes both image search and NDSI product download. For image metadata attributes described in JSON format, it returns the IDs of the matching image sequences existing in the HDFS file system; for a given MapReduce task ID, it packages the task's output NDSI products into a ZIP file and returns the download link. (4) System evaluation: massive remote sensing data were downloaded and processed with the system to produce NDSI products for performance testing, and the results show that the system has high extensibility, strong fault tolerance and fast production speed, and that the image processing results have high accuracy.
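
    The NDSI computed by the production module is a standard band ratio; a per-pixel sketch follows, where the green/SWIR band choice is the usual definition of the index and an assumption on our part:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index: (green - swir) / (green + swir)."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (green - swir) / (green + swir + 1e-12)   # epsilon avoids 0/0

# Snow is bright in green and dark in SWIR, so NDSI is high over snow
snow = ndsi(0.8, 0.1)      # ~0.78
soil = ndsi(0.2, 0.25)     # slightly negative
```

    In the MapReduce setting described above, this computation would run independently on each image tile inside the map tasks.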

  1. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    NASA Astrophysics Data System (ADS)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    Based on direct exposure measurements from flash radiographic images, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements; with limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capability. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) The Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed, and the numerical results for a pseudo-sine absorption problem, a two-cube problem and a two-cylinder problem obtained with the compressive sensing-based solver agree well with the reference values.
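
    The orthogonal matching pursuit step can be sketched as a generic OMP (not the authors' solver; the toy measurement setup is our assumption):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x
    from underdetermined measurements y = A @ x."""
    residual = y.astype(float)
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0
        x[support] = coef
        residual = y - A @ x
    return x

# Toy usage: a 2-sparse signal measured by 10 random projections;
# OMP typically (with high probability) recovers x_true exactly here
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 30))
A /= np.linalg.norm(A, axis=0)       # normalize columns
x_true = np.zeros(30)
x_true[[3, 17]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, k=2)
```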

  2. All-optical dynamic correction of distorted communication signals using a photorefractive polymeric hologram

    NASA Astrophysics Data System (ADS)

    Li, Guoqiang; Eralp, Muhsin; Thomas, Jayan; Tay, Savaş; Schülzgen, Axel; Norwood, Robert A.; Peyghambarian, N.

    2005-04-01

    All-optical real-time dynamic correction of wave front aberrations for image transmission is demonstrated using a photorefractive polymeric hologram. The material shows video rate response time with a low power laser. High-fidelity, high-contrast images can be reconstructed when the oil-filled phase plate generating atmospheric-like wave front aberrations is moved at 0.3 mm/s. The architecture based on four-wave mixing has potential application in free-space optical communication, remote sensing, and dynamic tracking. The system offers a cost-effective alternative to closed-loop adaptive optics systems.

  3. The effects of nuclear magnetic resonance on patients with cardiac pacemakers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlicek, W.; Geisinger, M.; Castle, L.

    1983-04-01

    The effect of nuclear magnetic resonance (NMR) imaging on six representative cardiac pacemakers was studied. The results indicate that the threshold for initiating the asynchronous mode of a pacemaker is 17 gauss. Radiofrequency levels are present in an NMR unit and may confuse or possibly inhibit demand pacemakers, although sensing circuitry is normally provided with electromagnetic interference discrimination. Time-varying magnetic fields can generate pulse amplitudes and frequencies to mimic cardiac activity. A serious limitation in the possibility of imaging a patient with a pacemaker would be the alteration of normal pulsing parameters due to time-varying magnetic fields.

  4. The antibody-based magnetic microparticle immunoassay using p-FET sensing platform for Alzheimer's disease pathogenic factor

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Beom; Kim, Kwan-Soo; Song, Ki-Bong

    2013-05-01

    The importance of early Alzheimer's disease (AD) detection has been recognized for diagnosing people at high risk of AD. The existence of intra/extracellular beta-amyloid (Aβ) in brain neurons has been regarded as the most archetypal hallmark of AD. Existing computed-image-based neuroimaging tools have limitations in accurately quantifying nanoscale Aβ peptides due to optical diffraction during imaging. Therefore, we propose a new method that is capable of evaluating a small amount of Aβ peptides by using a photo-sensitive field-effect transistor (p-FET) integrated with a magnetic force-based microbead collecting platform and a selenium (Se) layer (thickness ~700 nm) as an optical filter. This method demonstrates a facile approach to Aβ quantification using magnetic force and magnetic silica microparticles (diameter 0.2~0.3 μm). The microbead collecting platform mainly consists of the p-FET sensing array and magnets (diameter ~1 mm) placed beneath each sensing region of the p-FET; it enables the assembly of the Aβ-antibody-conjugated microbeads, captures the Aβ peptides from samples, measures the photocurrents generated by the Q-dots tagged to the Aβ peptides, and consequently achieves effective Aβ quantification.

  5. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notions of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664

  6. Approach for removing ghost-images in remote field eddy current testing of ferromagnetic pipes

    NASA Astrophysics Data System (ADS)

    Luo, Q. W.; Shi, Y. B.; Wang, Z. G.; Zhang, W.; Zhang, Y.

    2016-10-01

    In the non-destructive testing of ferromagnetic pipes based on remote field eddy currents, an array of sensing coils is often used to detect local defects. While testing, the image obtained by the sensing coils exhibits a ghost-image, which originates from the transmitter and the sensing coils both passing over the same defects in the pipe. Ghost-images are caused by the transmitter and lead to undesirable assessments of defects. In order to remove ghost-images, two pickup coils are set coaxially to each other in the remote field. Owing to the time delay between the differential signals measured by the two pickup coils, a Wiener deconvolution filter is used to identify the artificial peaks that lead to ghost-images. Because the sensing coils and the two pickup coils all receive the same signal from one transmitter, they all contain the same artificial peaks. By subtracting the artificial peak values obtained by the two pickup coils from the imaging data, the ghost-image caused by the transmitter is eliminated. Finally, a relatively accurate image of local defects is obtained from the sensing coils. With the proposed method, there is no need to subtract the average value of the sensing coils, and the method is sensitive to ringed defects.

  7. Approach for removing ghost-images in remote field eddy current testing of ferromagnetic pipes.

    PubMed

    Luo, Q W; Shi, Y B; Wang, Z G; Zhang, W; Zhang, Y

    2016-10-01

    In the non-destructive testing of ferromagnetic pipes based on remote field eddy currents, an array of sensing coils is often used to detect local defects. During testing, the image obtained by the sensing coils exhibits a ghost-image, which arises because both the transmitter and the sensing coils pass over the same defects in the pipe. Ghost-images are caused by the transmitter and lead to erroneous assessments of defects. To remove them, two pickup coils are mounted coaxially in the remote-field region. Exploiting the time delay between the differential signals measured by the two pickup coils, a Wiener deconvolution filter is used to identify the artificial peaks that produce ghost-images. Because the sensing coils and the two pickup coils all receive the same signal from the single transmitter, they all contain the same artificial peaks. By subtracting the artificial peak values obtained by the two pickup coils from the imaging data, the ghost-image caused by the transmitter is eliminated, and a relatively accurate image of local defects is obtained from the sensing coils. With the proposed method, there is no need to subtract the average value of the sensing coils, and the method is sensitive to ringed defects.

  8. Transmission (forward) mode, transcranial, noninvasive optoacoustic measurements for brain monitoring, imaging, and sensing

    NASA Astrophysics Data System (ADS)

    Petrov, Irene Y.; Petrov, Yuriy; Prough, Donald S.; Richardson, C. Joan; Fonseca, Rafael A.; Robertson, Claudia S.; Asokan, C. Vasantha; Agbor, Adaeze; Esenaliev, Rinat O.

    2016-03-01

    We proposed to use transmission (forward) mode for cerebral, noninvasive, transcranial optoacoustic monitoring, imaging, and sensing in humans. In the transmission mode, the irradiation of the tissue of interest and detection of optoacoustic signals are performed from opposite hemispheres, while in the reflection (backward) mode the irradiation of the tissue of interest and detection of optoacoustic signals are performed from the same hemisphere. Recently, we developed new, transmission-mode optoacoustic probes for patients with traumatic brain injury (TBI) and for neonatal patients. The transmission mode probes have two major parts: a fiber-optic delivery system and an acoustic transducer (sensor). To obtain optoacoustic signals in the transmission mode, in this study we placed the sensor on the forehead, while light was delivered to the opposite side of the head. Using a medical grade, multi-wavelength, OPO-based optoacoustic system tunable in the near infrared spectral range (680-950 nm) and a novel, compact, fiber-coupled, multi-wavelength, pulsed laser diode-based system, we recorded optoacoustic signals generated in the posterior part of the head of adults with TBI and neonates. The optoacoustic signals had two distinct peaks: the first peak from the intracranial space and the second peak from the scalp. The first peak, generated by cerebral blood, was used to measure cerebral blood oxygenation. Moreover, the transmission mode measurements provided detection of intracranial hematomas in the TBI patients. The obtained results suggest that the transmission mode can be used for optoacoustic brain imaging, tomography, and mapping in humans.

  9. Remote sensing programs and courses in engineering and water resources

    NASA Technical Reports Server (NTRS)

    Kiefer, R. W.

    1981-01-01

    The content of typical basic and advanced remote sensing and image interpretation courses are described and typical remote sensing graduate programs of study in civil engineering and in interdisciplinary environmental remote sensing and water resources management programs are outlined. Ideally, graduate programs with an emphasis on remote sensing and image interpretation should be built around a core of five courses: (1) a basic course in fundamentals of remote sensing upon which the more specialized advanced remote sensing courses can build; (2) a course dealing with visual image interpretation; (3) a course dealing with quantitative (computer-based) image interpretation; (4) a basic photogrammetry course; and (5) a basic surveying course. These five courses comprise up to one-half of the course work required for the M.S. degree. The nature of other course work and thesis requirements vary greatly, depending on the department in which the degree is being awarded.

  10. SENSOR: a tool for the simulation of hyperspectral remote sensing systems

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel

    The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way, to adapt and optimise a sensor and its observation conditions, to choose and test data-processing algorithms, to estimate errors, and to evaluate the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment handles the radiometry: it calculates the at-sensor radiance using a pre-calculated multidimensional lookup table that takes the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end simulation approach is explained, all relevant concepts of SENSOR are discussed, first examples of its use are given, and the verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.

  11. A patch-based convolutional neural network for remote sensing image classification.

    PubMed

    Sharma, Atharva; Liu, Xiuwen; Yang, Xiaojun; Shi, Di

    2017-11-01

    Availability of accurate land cover information over large areas is essential to global environmental sustainability; digital classification using medium-resolution remote sensing data would provide an effective method to generate the required land cover information. However, the low accuracy of existing per-pixel classification methods for medium-resolution data is a fundamental limiting factor. While convolutional neural networks (CNNs) with deep layers have achieved unprecedented improvements in object recognition applications that rely on fine image structures, they cannot be applied directly to medium-resolution data due to the lack of such fine structures. In this paper, considering the spatial relation of a pixel to its neighborhood, we propose a new deep patch-based CNN system tailored for medium-resolution remote sensing data. The system is designed by incorporating distinctive characteristics of medium-resolution data; in particular, the system computes patch-based samples from multidimensional top-of-atmosphere reflectance data. With a test site from the Florida Everglades area (with a size of 771 square kilometers), the proposed new system has outperformed a pixel-based neural network, a pixel-based CNN, and a patch-based neural network by 24.36%, 24.23%, and 11.52%, respectively, in overall classification accuracy. By combining the proposed deep CNN and the huge collection of medium-resolution remote sensing data, we believe that much more accurate land cover datasets can be produced over large areas. Copyright © 2017 Elsevier Ltd. All rights reserved.
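    The patch-based sampling idea can be sketched as follows; the patch size, band count, and array layout are assumptions for illustration, and the CNN itself is omitted. Each pixel is represented by the k × k neighbourhood around it rather than by its single spectral vector.

```python
# Patch-based sample extraction: one (patch, patch, bands) cube per interior
# pixel of a multiband image. Sizes are illustrative assumptions.
import numpy as np

def extract_patches(image, patch=5):
    """Return one (patch, patch, bands) sample per interior pixel."""
    h, w, bands = image.shape
    r = patch // 2
    samples = [
        image[i - r:i + r + 1, j - r:j + r + 1, :]
        for i in range(r, h - r)
        for j in range(r, w - r)
    ]
    return np.stack(samples)           # shape: (n_pixels, patch, patch, bands)

img = np.random.rand(32, 32, 6)        # e.g. a 6-band reflectance tile
X = extract_patches(img, patch=5)      # feeds a CNN classifier, one label per pixel
```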

  12. Atmospheric Correction of High-Spatial-Resolution Commercial Satellite Imagery Products Using MODIS Atmospheric Products

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Holekamp, Kara; Ryan, Robert E.; Vaughan, Ronald; Russell, Jeffrey A.; Prados, Don; Stanley, Thomas

    2005-01-01

    Remotely sensed ground reflectance is the basis for many inter-sensor interoperability or change detection techniques. Satellite inter-comparisons and accurate vegetation indices such as the Normalized Difference Vegetation Index, which is used to describe or to imply a wide variety of biophysical parameters and is defined in terms of near-infrared and red-band reflectance, require the generation of accurate reflectance maps. This generation relies upon the removal of solar illumination, satellite geometry, and atmospheric effects and is generally referred to as atmospheric correction. Atmospheric correction of remotely sensed imagery to ground reflectance, however, has been widely applied to only a few systems. In this study, we atmospherically corrected commercially available, high spatial resolution IKONOS and QuickBird imagery using several methods to determine the accuracy of the resulting reflectance maps. We used extensive ground measurement datasets for nine IKONOS and QuickBird scenes acquired over a two-year period to establish reflectance map accuracies. A correction approach using atmospheric products derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data created excellent reflectance maps and demonstrated a reliable, effective method for reflectance map generation.
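    For reference, the Normalized Difference Vegetation Index mentioned above is computed per pixel from the corrected reflectances as (NIR − red) / (NIR + red); the reflectance values below are synthetic.

```python
# NDVI from near-infrared and red-band reflectance. Synthetic values.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """(NIR - red) / (NIR + red); eps guards against a zero denominator."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

veg = ndvi(0.45, 0.05)    # dense vegetation: strong NIR, low red reflectance
water = ndvi(0.02, 0.04)  # water: NDVI goes negative
```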

  13. Broadband Phase Retrieval for Image-Based Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A focus-diverse phase-retrieval algorithm has been shown to perform adequately for the purpose of image-based wavefront sensing when (1) broadband light (typically spanning the visible spectrum) is used in forming the images by use of an optical system under test and (2) the assumption of monochromaticity is applied to the broadband image data. Heretofore, it had been assumed that in order to obtain adequate performance, it is necessary to use narrowband or monochromatic light. Some background information, including definitions of terms and a brief description of pertinent aspects of image-based phase retrieval, is prerequisite to a meaningful summary of the present development. Phase retrieval is a general term used in optics to denote estimation of optical imperfections or aberrations of an optical system under test. The term image-based wavefront sensing refers to a general class of algorithms that recover optical phase information, and phase-retrieval algorithms constitute a subset of this class. In phase retrieval, one utilizes the measured response of the optical system under test to produce a phase estimate. The optical response of the system is defined as the image of a point-source object, which could be a star or a laboratory point source. The phase-retrieval problem is characterized as image-based in the sense that a charge-coupled-device camera, preferably of scientific imaging quality, is used to collect image data where the optical system would normally form an image. In a variant of phase retrieval, denoted phase-diverse phase retrieval [which can include focus-diverse phase retrieval (in which various defocus planes are used)], an additional known aberration (or an equivalent diversity function) is superimposed as an aid in estimating unknown aberrations by use of an image-based wavefront-sensing algorithm. 
Image-based phase retrieval differs from other wavefront-sensing methods, such as interferometry, shearing interferometry, curvature wavefront sensing, and Shack-Hartmann sensing, all of which entail disadvantages in comparison with image-based methods: chiefly, the complexity of the test equipment and the need for a wavefront reference.
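    A minimal textbook illustration of image-based phase retrieval (a plain Gerchberg-Saxton iteration on synthetic data, not the focus-diverse algorithm described above; the pupil size and the aberration are invented): given the pupil amplitude and the measured focal-plane amplitude of a point source, alternate between the two planes, imposing the known amplitude in each while keeping the current phase estimate.

```python
# Gerchberg-Saxton sketch: estimate the pupil phase from the focal-plane
# amplitude of a point source. Synthetic pupil and aberration.
import numpy as np

def gerchberg_saxton(pupil_amp, image_amp, n_iter=200, seed=0):
    """Alternate between pupil and image planes, imposing the known
    amplitude in each plane and keeping the current phase estimate."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)
    for _ in range(n_iter):
        field = np.fft.fft2(pupil_amp * np.exp(1j * phase))
        field = image_amp * np.exp(1j * np.angle(field))  # impose measured image amplitude
        phase = np.angle(np.fft.ifft2(field))             # keep phase; pupil amplitude re-imposed above
    return phase

# Synthetic point-source test: a circular pupil with a smooth aberration.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (x**2 + y**2 < (n // 4) ** 2).astype(float)
true_phase = 0.5 * np.exp(-(x**2 + y**2) / (2 * 8.0**2))  # invented aberration
image_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))
est_phase = gerchberg_saxton(pupil, image_amp)
```

Focus-diverse variants add a known defocus term to several such image planes, which helps resolve the ambiguities a single plane leaves open.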

  14. Fingerprint enhancement using a multispectral sensor

    NASA Astrophysics Data System (ADS)

    Rowe, Robert K.; Nixon, Kristin A.

    2005-03-01

    The level of performance of a biometric fingerprint sensor is critically dependent on the quality of the fingerprint images. One of the most common types of optical fingerprint sensors relies on the phenomenon of total internal reflectance (TIR) to generate an image. Under ideal conditions, a TIR fingerprint sensor can produce high-contrast fingerprint images with excellent feature definition. However, images produced by the same sensor under conditions that include dry skin, dirt on the skin, and marginal contact between the finger and the sensor, are likely to be severely degraded. This paper discusses the use of multispectral sensing as a means to collect additional images with new information about the fingerprint that can significantly augment the system performance under both normal and adverse sample conditions. In the context of this paper, "multispectral sensing" is used to broadly denote a collection of images taken under different illumination conditions: different polarizations, different illumination/detection configurations, as well as different wavelength illumination. Results from three small studies using an early-stage prototype of the multispectral-TIR (MTIR) sensor are presented along with results from the corresponding TIR data. The first experiment produced data from 9 people, 4 fingers from each person and 3 measurements per finger under "normal" conditions. The second experiment provided results from a study performed to test the relative performance of TIR and MTIR images when taken under extreme dry and dirty conditions. The third experiment examined the case where the area of contact between the finger and sensor is greatly reduced.

  15. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    The traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so these methods have high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can allocate computing resources selectively according to visual stimuli, improving computational efficiency and reducing the difficulty of analysis. In view of this, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method reduces computational complexity while improving detection accuracy, thereby improving the efficiency of ship target detection in remote sensing images.
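    The abstract does not specify which attention model is used; as a generic stand-in, a centre-surround contrast map (fine-scale mean minus coarse-scale mean, a common bottom-up saliency ingredient) already highlights a small bright target against a uniform sea, so detailed detection can be restricted to salient regions.

```python
# Generic centre-surround saliency sketch (a stand-in for the paper's
# unspecified visual attention model). Synthetic "ship on sea" image.
import numpy as np

def box_blur(img, r):
    """Mean filter via separable convolution with a (2r+1)-wide box."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, out)

def saliency(img, r_center=1, r_surround=8):
    """Centre-surround contrast: fine-scale mean minus coarse-scale mean."""
    return np.abs(box_blur(img, r_center) - box_blur(img, r_surround))

sea = np.zeros((64, 64))
sea[30:34, 40:46] = 1.0                        # a small bright "ship" on dark water
sal = saliency(sea)
peak = np.unravel_index(np.argmax(sal), sal.shape)   # most salient pixel
```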

  16. Mixed-spectrum generation mechanism analysis of dispersive hyperspectral imaging for improving environmental monitoring of coastal waters

    NASA Astrophysics Data System (ADS)

    Xie, Feng; Xiao, Gonghai; Qi, Hongxing; Shu, Rong; Wang, Jianyu; Xue, Yongqi

    2010-11-01

    At present, most of the coastal zone in China belongs to Case II waters, with a large extent of shallow water. Although the theory and practice of ocean water color remote sensing have improved markedly, many problems remain, mainly the following: (a) to date there is no dedicated sensor for remote sensing of thermal pollution in coastal water; (b) although many scholars have developed water quality parameter retrieval models for the open ocean, a large gap remains before practical application in turbid coastal waters. Retrieval there is much more difficult due to the presence of high concentrations of suspended sediments and dissolved organic material, which overwhelm the spectral signal of sea water. Hyperspectral remote sensing allows a sensor on a moving platform to gather emitted radiation from the Earth's surface, opening the way to better analysis and understanding of coastal water. The Operative Modular Imaging Spectrometer (OMIS) is a representative imaging spectrometer developed by the Chinese Academy of Sciences. OMIS collects reflected and emitted radiation from the ground through an RC telescope, with a scanning mirror providing cross-track coverage and the aircraft's flight providing along-track coverage. In this paper, we explore the use of OMIS as the airborne sensor for thermal pollution monitoring in coastal water, on the basis of an analysis of the mixed spectrum arising from the geometric distortion correction of the image. An airborne experiment was conducted in the winter of 2009 on the coast of the East China Sea.

  17. Landslide Life-Cycle Monitoring and Failure Prediction using Satellite Remote Sensing

    NASA Astrophysics Data System (ADS)

    Bouali, E. H. Y.; Oommen, T.; Escobar-Wolf, R. P.

    2017-12-01

    The consequences of slope instability are severe across the world: the US Geological Survey estimates that, each year, the United States spends $3.5B to repair damages caused by landslides, 25-50 deaths occur, real estate values in affected areas are reduced, productivity decreases, and natural environments are destroyed. A 2012 study by D.N. Petley found that loss of life is typically underestimated and that, between 2004 and 2010, 2,620 fatal landslides caused 32,322 deaths around the world. These statistics have motivated research into landslide monitoring and forecasting. More specifically, this presentation focuses on assessing the potential for using satellite-based optical and radar imagery for overall landslide life-cycle monitoring and prediction. Radar images from multiple satellites (ERS-1, ERS-2, ENVISAT, and COSMO-SkyMed) are processed using the Persistent Scatterer Interferometry (PSI) technique. Optical images from the WorldView-2 satellite are orthorectified and processed using the Co-registration of Optically Sensed Images and Correlation (COSI-Corr) algorithm. Both approaches process stacks of their respective images and yield ground displacement rates. Ground displacement information is used to generate inverse-velocity versus time plots, a proxy used to estimate the time of landslide occurrence (slope failure), derived from the relationship between a material's time of failure and the strain rate applied to it, quantified by T. Fukuzono in 1985 and B. Voight in 1988. Successful laboratory tests have demonstrated the usefulness of inverse-velocity versus time plots. This presentation will investigate the applicability of this approach with remote sensing on natural landslides in the western United States.
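    The inverse-velocity forecast can be sketched as follows (synthetic data; the linear Fukuzono form used here is one of several proposed in that literature): fit a line to 1/v versus t and extrapolate to 1/v = 0 to estimate the time of slope failure.

```python
# Fukuzono-style inverse-velocity forecast: 1/v decreases roughly linearly
# toward zero as failure approaches; its zero-crossing estimates the failure
# time. Data below are synthetic with a known answer of t_f = 100.
import numpy as np

def predicted_failure_time(t, velocity):
    """Fit 1/v = a*t + b and return the t where the fitted line hits zero."""
    inv_v = 1.0 / np.asarray(velocity, dtype=float)
    slope, intercept = np.polyfit(t, inv_v, 1)
    return -intercept / slope

# Displacement rate accelerating toward failure at t_f = 100 days:
t = np.arange(0, 90, 5.0)
v = 1.0 / (0.02 * (100.0 - t))      # exact linear inverse-velocity behaviour
t_f = predicted_failure_time(t, v)  # should recover ~100
```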

  18. Biomedical imaging and sensing using flatbed scanners.

    PubMed

    Göröcs, Zoltán; Ozcan, Aydogan

    2014-09-07

    In this Review, we provide an overview of flatbed scanner based biomedical imaging and sensing techniques. The extremely large imaging field-of-view (e.g., ~600-700 cm²) of these devices coupled with their cost-effectiveness provide unique opportunities for digital imaging of samples that are too large for regular optical microscopes, and for collection of large amounts of statistical data in various automated imaging or sensing tasks. Here we give a short introduction to the basic features of flatbed scanners also highlighting the key parameters for designing scientific experiments using these devices, followed by a discussion of some of the significant examples, where scanner-based systems were constructed to conduct various biomedical imaging and/or sensing experiments. Along with mobile phones and other emerging consumer electronics devices, flatbed scanners and their use in advanced imaging and sensing experiments might help us transform current practices of medicine, engineering and sciences through democratization of measurement science and empowerment of citizen scientists, science educators and researchers in resource limited settings.

  19. Biomedical Imaging and Sensing using Flatbed Scanners

    PubMed Central

    Göröcs, Zoltán; Ozcan, Aydogan

    2014-01-01

    In this Review, we provide an overview of flatbed scanner based biomedical imaging and sensing techniques. The extremely large imaging field-of-view (e.g., ~600–700 cm²) of these devices coupled with their cost-effectiveness provide unique opportunities for digital imaging of samples that are too large for regular optical microscopes, and for collection of large amounts of statistical data in various automated imaging or sensing tasks. Here we give a short introduction to the basic features of flatbed scanners also highlighting the key parameters for designing scientific experiments using these devices, followed by a discussion of some of the significant examples, where scanner-based systems were constructed to conduct various biomedical imaging and/or sensing experiments. Along with mobile phones and other emerging consumer electronics devices, flatbed scanners and their use in advanced imaging and sensing experiments might help us transform current practices of medicine, engineering and sciences through democratization of measurement science and empowerment of citizen scientists, science educators and researchers in resource limited settings. PMID:24965011

  20. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase of resolution, remote sensing images carry a greater information load, more noise, and more complex feature geometry and texture, which makes the extraction of building information more difficult. To solve this problem, this paper presents a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model to capture and enhance the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to individual buildings is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture, and remove shadows, vegetation, and other pseudo-building information; compared with traditional pixel-level information extraction, it achieves better precision, accuracy, and completeness in building extraction.

  1. Monitoring of hourly variations in coastal water turbidity using the geostationary ocean color imager (GOCI)

    NASA Astrophysics Data System (ADS)

    Choi, J.; Ryu, J.

    2011-12-01

    Temporal variations of suspended sediment concentration (SSC) in coastal water are the key to understanding the pattern of sediment movement within coastal areas, particularly on the west coast of the Korean Peninsula, which is influenced by semi-diurnal tides. Remote sensing techniques can effectively monitor the distribution and dynamic changes in seawater properties across wide areas. Thus, SSC on the sea surface has been investigated using various types of satellite-based sensors. An advantage of the Geostationary Ocean Color Imager (GOCI), the world's first geostationary ocean color observation satellite, over other ocean color satellite imagers is that it obtains data every hour during the day, making it possible to monitor the ocean in near-real time. In this study, hourly variations in turbidity in coastal waters were estimated quantitatively using GOCI. Thirty-three water samples were obtained from the coastal water surface in southern Gyeonggi Bay, located on the west coast of Korea. Water samples were filtered using 25-mm glass fiber filters (GF/F) for the estimation of SSC. The radiometric characteristics of the surface water, such as the total water-leaving radiance (LwT, W/m2/nm/sr), the sky radiance (Lsky, W/m2/nm/sr), and the downwelling irradiance, were also measured at each sampling location. In situ optical properties of the surface water were converted into remote sensing reflectance (Rrs) and then used to develop an algorithm to generate SSC images in the study area. GOCI images acquired on the same day as the sample acquisition were used to generate the map of turbidity and to estimate the difference in SSC displayed in each image. The estimation of the time-series variation in SSC in a coastal, shallow-water area affected by tides was successfully achieved using GOCI data acquired at hourly intervals during the daytime.
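    The conversion of the field radiometry to remote sensing reflectance presumably follows the standard form Rrs = (LwT − ρ·Lsky) / Ed; the surface reflectance factor ρ ≈ 0.028 below is a common assumption for calm water, not a value stated in the abstract.

```python
# Remote-sensing reflectance from above-water radiometry. The rho value is
# an assumed surface-reflectance factor, not taken from the study.
import numpy as np

def remote_sensing_reflectance(LwT, Lsky, Ed, rho=0.028):
    """Rrs (1/sr) from total water-leaving radiance, sky radiance, and
    downwelling irradiance; rho removes the sky glint reflected off the surface."""
    return (np.asarray(LwT, dtype=float) - rho * np.asarray(Lsky, dtype=float)) / np.asarray(Ed, dtype=float)

rrs = remote_sensing_reflectance(LwT=0.05, Lsky=0.30, Ed=1.5)  # illustrative radiances
```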

  2. High efficient optical remote sensing images acquisition for nano-satellite: reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Feng; Xin, Lei; Fu, Jie; Huang, Puming

    2017-10-01

    A large amount of data is one of the most obvious features of satellite-based remote sensing systems, and also a burden for data processing and transmission. The theory of compressive sensing (CS) was proposed almost a decade ago, and extensive experiments show that CS performs favorably in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, a classical sensing matrix valid for all sparse signals has to satisfy the Restricted Isometry Property (RIP) strictly, which limits the practical application of CS to image compression. For remote sensing images, however, we know some inherent characteristics such as non-negativity, smoothness, etc. Therefore, the goal of this paper is to present a novel measurement matrix that sidesteps the RIP. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix for thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth in about 90 minutes and the revisit cycle is also short, many previously captured remote sensing images of the same place are available in advance. This motivates us to reconstruct remote sensing images through a deep learning approach from measurements taken with the new framework. We therefore propose a novel deep convolutional neural network (CNN) architecture that takes undersampled measurements as input and outputs an intermediate reconstruction image. Although training the network takes a long time, the training step needs to be done only once, which makes the approach attractive for a host of sparse recovery problems.
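    A sketch of the two-part measurement matrix described above, with invented sizes; the block-averaging "thumbnail" rows are one plausible reading of the standard Nyquist sampling part, and the Gaussian choice for the CS rows is an assumption.

```python
# Hybrid sensing matrix sketch: coarse block-averaging rows (a thumbnail)
# stacked on random compressive rows. Sizes are illustrative assumptions.
import numpy as np

def hybrid_sensing_matrix(n, thumb, m_cs, seed=0):
    """Map an n-pixel signal to `thumb` block averages plus m_cs random projections."""
    block = n // thumb
    nyquist = np.kron(np.eye(thumb), np.ones((1, block)) / block)  # block averaging
    rng = np.random.default_rng(seed)
    cs_rows = rng.standard_normal((m_cs, n)) / np.sqrt(m_cs)       # compressive rows
    return np.vstack([nyquist, cs_rows])

Phi = hybrid_sensing_matrix(n=256, thumb=16, m_cs=48)  # 64 measurements of a 256-pixel signal
```

The first 16 rows give the low-resolution thumbnail directly; a learned reconstruction network would take all 64 measurements as input.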

  3. Using DMSP/OLS nighttime imagery to estimate carbon dioxide emission

    NASA Astrophysics Data System (ADS)

    Desheng, B.; Letu, H.; Bao, Y.; Naizhuo, Z.; Hara, M.; Nishio, F.

    2012-12-01

    This study highlights a method for estimating CO2 emissions from electric power plants using the Defense Meteorological Satellite Program's Operational Linescan System (DMSP/OLS) stable light image product for 1999. CO2 emissions from power plants account for a high percentage of CO2 emissions from fossil fuel consumption. Thermal power plants generate electricity by burning fossil fuels, so they emit CO2 directly, and in many Asian countries, such as China, Japan, India, and South Korea, thermal power accounted for over 58% of total electricity generation in 1999. So far, CO2 emission figures have been obtained mainly by traditional statistical methods. Moreover, because the statistical data are aggregated by administrative region, it is difficult to examine spatial distributions across non-administrative divisions, and in some countries the reliability of such CO2 emission data is relatively low. Satellite remote sensing, however, can observe the Earth's surface without the limitation of administrative regions, so it is important to estimate CO2 using satellite remote sensing. In this study, we estimated the CO2 emissions from fossil fuel consumption at electric power plants in Japan using the stable light image of the DMSP/OLS satellite data for 1999, after correcting for saturation effects. Digital number (DN) values of the stable light images in city centers are saturated due to the large nighttime light intensities and the characteristics of the OLS sensor. To estimate the CO2 emissions more accurately from the stable light images, a saturation correction method was developed using the DMSP radiance calibration image, which does not include any saturated pixels. A regression equation was developed from the relationship between the DN values of non-saturated pixels in the stable light image and those in the radiance calibration image.
This regression equation was used to adjust the DNs of the radiance calibration image, and the saturated DNs of the stable light image were then corrected using the adjusted radiance calibration image. Regression analyses were then performed among the cumulative DNs of the corrected stable light image, electric power consumption, electric power generation, and CO2 emissions from fossil fuel consumption at electric power plants. The results indicate good relationships (R2 > 90%) between the DNs of the corrected stable light image and the other parameters. Based on these results, we estimated the CO2 emissions from electric power plants using the corrected stable light image. Keywords: DMSP/OLS, stable light, saturation light correction method, regression analysis. Acknowledgment: The research was financially supported by the Sasakawa Scientific Research Grant from the Japan Science Society.
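    The saturation-correction chain described in the abstract can be sketched on synthetic data; the exactly linear DN relationship and the 6-bit saturation ceiling (DN 63) of the OLS stable-light product are the only assumptions here.

```python
# Saturation-correction sketch: regress non-saturated stable-light DNs on
# radiance-calibration DNs, then replace saturated pixels with the adjusted
# calibration values. Synthetic data with an assumed linear relation.
import numpy as np

DN_SAT = 63  # 6-bit ceiling of the OLS stable-light product

def correct_saturation(stable, calib):
    mask = stable < DN_SAT                         # non-saturated pixels only
    slope, intercept = np.polyfit(calib[mask], stable[mask], 1)
    adjusted = slope * calib + intercept           # calibration image on the stable-light scale
    return np.where(stable >= DN_SAT, adjusted, stable)

rng = np.random.default_rng(1)
calib = rng.uniform(0, 200, 1000)                  # radiance-calibration DNs (no saturation)
true_stable = 0.4 * calib + 3.0                    # assumed linear relation
stable = np.minimum(true_stable, DN_SAT)           # stable-light image saturates at DN 63
corrected = correct_saturation(stable, calib)      # should recover true_stable
```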

  4. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    1995-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  5. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2003-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  6. Active pixel sensor with intra-pixel charge transfer

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Mendis, Sunetra (Inventor); Kemeny, Sabrina E. (Inventor)

    2004-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate, a readout circuit including at least an output field effect transistor formed in the substrate, and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node connected to the output transistor and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node.

  7. Broadening the Earthscan Industry

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Law Environmental, Inc. is a professional engineering and Earth sciences consulting firm. When a client who operates an electricity-generating plant required assistance in evaluating the effects of a heated water discharge on aquatic life, Law proposed a Visiting Investigator Program (VIP) to Stennis Space Center (SSC). The VIP is directed toward small companies that could use remote sensing profitably but lack the funds to explore new technologies. SSC provided remote sensing data to Law, enabling it to produce images of the thermal "plume," the water area affected by the discharge. After comparing plant and animal life with similar life in an unaffected control area, Law concluded that the discharge effect was not significant.

  8. Chalcogenide based rib waveguide for compact on-chip supercontinuum sources in mid-infrared domain

    NASA Astrophysics Data System (ADS)

    Saini, Than Singh; Tiwari, Umesh Kumar; Sinha, Ravindra Kumar

    2017-08-01

    We have designed and analysed a rib waveguide structure in a recently reported Ga-Sb-S based highly nonlinear chalcogenide glass for nonlinear applications. The proposed waveguide structure possesses a very high nonlinear coefficient and can be used to generate broadband supercontinuum in the mid-infrared domain. The reported design of the chalcogenide waveguide offers two zero-dispersion wavelengths, at 1800 nm and 2900 nm. Such a rib waveguide structure is suitable for efficient supercontinuum generation spanning 500-7400 nm. The reported waveguide can be used for the realization of compact on-chip supercontinuum sources, which are highly applicable in optical imaging, optical coherence tomography, food quality control, security, and sensing.
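
    The nonlinear coefficient highlighted above follows from the standard relation γ = 2πn₂/(λ·A_eff). A quick sketch with assumed, illustrative values (the n₂ and A_eff below are typical chalcogenide-like magnitudes, not figures from the paper):

```python
import math

def nonlinear_coefficient(n2_m2_per_W, wavelength_m, a_eff_m2):
    """Waveguide nonlinear coefficient gamma = 2*pi*n2 / (lambda * A_eff),
    returned in units of 1/(W*m)."""
    return 2.0 * math.pi * n2_m2_per_W / (wavelength_m * a_eff_m2)

# Assumed values: n2 ~ 1e-17 m^2/W, A_eff ~ 1.5 um^2,
# evaluated at the longer zero-dispersion wavelength (2900 nm).
gamma = nonlinear_coefficient(1e-17, 2900e-9, 1.5e-12)
print(gamma)  # multiply by 1000 to express in the common 1/(W*km)
```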

  9. A learning tool for optical and microwave satellite image processing and analysis

    NASA Astrophysics Data System (ADS)

    Dashondhi, Gaurav K.; Mohanty, Jyotirmoy; Eeti, Laxmi N.; Bhattacharya, Avik; De, Shaunak; Buddhiraju, Krishna M.

    2016-04-01

    This paper presents a self-learning tool that contains a number of virtual experiments for processing and analysis of Optical/Infrared and Synthetic Aperture Radar (SAR) images. The tool is named Virtual Satellite Image Processing and Analysis Lab (v-SIPLAB). Experiments included in the Learning Tool relate to: Optical/Infrared - image and edge enhancement, smoothing, PCT, vegetation indices, mathematical morphology, accuracy assessment, supervised/unsupervised classification, etc.; Basic SAR - parameter extraction and range spectrum estimation, range compression, Doppler centroid estimation, azimuth reference function generation and compression, multilooking, image enhancement, texture analysis, edge detection, etc.; SAR Interferometry - baseline calculation, extraction of single-look SAR images, registration, resampling, and interferogram generation; SAR Polarimetry - conversion of AirSAR or Radarsat data to S2/C3/T3 matrices, speckle filtering, power/intensity image generation, decomposition of S2/C3/T3, and classification of S2/C3/T3 using the Wishart classifier [3]. Professional-quality polarimetric SAR software can be found at [8], a part of whose functionality can be found in our system. The learning tool also contains other modules, besides executable software experiments, such as aim, theory, procedure, interpretation, quizzes, links to additional reading material, and user feedback. Students can gain an understanding of optical and SAR remotely sensed images through discussion of basic principles, supported by structured procedures for running and interpreting the experiments. Quizzes for self-assessment and a provision for online feedback are also provided to make this learning tool self-contained. One can download results after performing experiments.
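
    Among the listed experiments, the vegetation-index module reduces to simple band arithmetic. A minimal NDVI sketch (the arrays and function below are illustrative, not part of v-SIPLAB):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 bands: vegetation reflects strongly in NIR, weakly in red.
nir = np.array([[0.5, 0.6], [0.1, 0.3]])
red = np.array([[0.1, 0.1], [0.1, 0.3]])
print(ndvi(nir, red))
```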

  10. Investigating the Potential of Deep Neural Networks for Large-Scale Classification of Very High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.

    2017-05-01

    Semantic classification is a core remote sensing task, as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Current architectures are therefore well tailored to urban areas over restricted extents, but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.

  11. Automatically augmenting lifelog events using pervasively generated content from millions of people.

    PubMed

    Doherty, Aiden R; Smeaton, Alan F

    2010-01-01

    In sensor research we take advantage of additional contextual sensor information to disambiguate potentially erroneous sensor readings or to make better-informed decisions on a single sensor's output. This use of additional information reinforces, validates, semantically enriches, and augments sensed data. Lifelog data is challenging to augment, as it tracks one's life through many images, including the places a person goes, making it non-trivial to find associated sources of information. We investigate realising the goal of pervasive user-generated content based on sensors by augmenting passive visual lifelogs with "Web 2.0" content collected by millions of other individuals.

  12. Utilization of DIRSIG in support of real-time infrared scene generation

    NASA Astrophysics Data System (ADS)

    Sanders, Jeffrey S.; Brown, Scott D.

    2000-07-01

    Real-time infrared scene generation for hardware-in-the-loop testing has been a traditionally difficult challenge. Infrared scenes are usually generated using commercial hardware that was not designed to properly handle the thermal and environmental physics involved. Real-time infrared scenes typically lack details that are included in scenes rendered in non-real-time by ray-tracing programs such as the Digital Imaging and Remote Sensing Image Generation (DIRSIG) program. However, executing DIRSIG in real-time while retaining all the physics is beyond current computational capabilities for many applications. DIRSIG is a first-principles-based synthetic image generation model that produces multi- or hyperspectral images in the 0.3 to 20 micron region of the electromagnetic spectrum. The DIRSIG model is an integrated collection of independent first-principles-based sub-models, which work in conjunction to produce radiance-field images with high radiometric fidelity. DIRSIG uses the MODTRAN radiation propagation model for exo-atmospheric irradiance, emitted and scattered radiances (upwelled and downwelled), and path transmission predictions. This radiometry submodel utilizes bidirectional reflectance data, accounts for specular and diffuse background contributions, and features path-length-dependent extinction and emission for transmissive bodies (plumes, clouds, etc.) which may be present in any target, background, or solar path. This detailed environmental modeling greatly enhances the number of rendered features and hence the fidelity of a rendered scene. While DIRSIG itself cannot currently be executed in real-time, its outputs can be used to provide scene inputs for real-time scene generators. These inputs can incorporate significant features such as target-to-background thermal interactions, static background object thermal shadowing, and partially transmissive countermeasures.
All of these features represent significant improvements over the current state of the art in real-time IR scene generation.

  13. Optimized computational imaging methods for small-target sensing in lens-free holographic microscopy

    NASA Astrophysics Data System (ADS)

    Xiong, Zhen; Engle, Isaiah; Garan, Jacob; Melzer, Jeffrey E.; McLeod, Euan

    2018-02-01

    Lens-free holographic microscopy is a promising diagnostic approach because it is cost-effective, compact, and suitable for point-of-care applications, while providing high resolution together with an ultra-large field-of-view. It has been applied to biomedical sensing, where larger targets like eukaryotic cells, bacteria, or viruses can be directly imaged without labels, and smaller targets like proteins or DNA strands can be detected via scattering labels like micro- or nano-spheres. Automated image processing routines can count objects and infer target concentrations. In these sensing applications, sensitivity and specificity are critically affected by image resolution and signal-to-noise ratio (SNR). Pixel super-resolution approaches have been shown to boost resolution and SNR by synthesizing a high-resolution image from multiple, partially redundant, low-resolution images. However, there are several computational methods that can be used to synthesize the high-resolution image, and previously, it has been unclear which methods work best for the particular case of small-particle sensing. Here, we quantify the SNR achieved in small-particle sensing using regularized gradient-descent optimization methods, where the regularization is based on cardinal-neighbor differences, Bayer-pattern noise reduction, or sparsity in the image. In particular, we find that gradient-descent with sparsity-based regularization works best for small-particle sensing. These computational approaches were evaluated on images acquired using a lens-free microscope that we assembled from an off-the-shelf LED array and color image sensor. Compared to other lens-free imaging systems, our hardware integration, calibration, and sample preparation are particularly simple. We believe our results will help to enable the best performance in lens-free holographic sensing.
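
    The sparsity-regularized reconstruction favored above can be sketched as ℓ1-regularized least squares solved by proximal gradient descent (ISTA). This is a generic stand-in rather than the authors' pipeline: A below is a random sensing matrix, not a lens-free super-resolution forward model:

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, iters=500):
    """Proximal gradient (ISTA) for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                   # gradient of data-fidelity term
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy sparse recovery: 3 nonzeros from 60 noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = ista(A, y)
```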

  14. Images of a Loving God and Sense of Meaning in Life

    ERIC Educational Resources Information Center

    Stroope, Samuel; Draper, Scott; Whitehead, Andrew L.

    2013-01-01

    Although prior studies have documented a positive association between religiosity and sense of meaning in life, the role of specific religious beliefs is currently unclear. Past research on images of God suggests that loving images of God will positively correlate with a sense of meaning and purpose. Mechanisms for this hypothesized relationship…

  15. Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing

    DTIC Science & Technology

    2013-04-01

    Signal Process., vol. 57, no. 6, pp. 2275-2284, 2009. [20] A. Gurbuz, J. McClellan, and W. Scott, Jr., "Compressive sensing for subsurface imaging using...SciTech Publishing, 2010, pp. 922- 938. [45] A. C. Gurbuz, J. H. McClellan, and W. R. Scott, Jr., "Compressive sensing for subsurface imaging using

  16. Specific coil design for SENSE: a six-element cardiac array.

    PubMed

    Weiger, M; Pruessmann, K P; Leussler, C; Röschmann, P; Boesiger, P

    2001-03-01

    In sensitivity encoding (SENSE), the effects of inhomogeneous spatial sensitivity of surface coils are utilized for signal localization in addition to common Fourier encoding using magnetic field gradients. Unlike standard Fourier MRI, SENSE images exhibit an inhomogeneous noise distribution, which crucially depends on the geometrical sensitivity relations of the coils used. Thus, for optimum signal-to-noise-ratio (SNR) and noise homogeneity, specialized coil configurations are called for. In this article we study the implications of SENSE imaging for coil layout by means of simulations and imaging experiments in a phantom and in vivo. New, specific design principles are identified. For SENSE imaging, the elements of a coil array should be smaller than for common phased-array imaging. Furthermore, adjacent coil elements should not overlap. Based on the findings of initial investigations, a configuration of six coils was designed and built specifically for cardiac applications. The in vivo evaluation of this array showed a considerable SNR increase in SENSE images, as compared with a conventional array. Magn Reson Med 45:495-504, 2001. Copyright 2001 Wiley-Liss, Inc.
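
    The unfolding step at the heart of SENSE solves a small least-squares system per aliased pixel using the coil sensitivities. A minimal sketch, assuming an identity noise covariance (real reconstructions weight by the measured coil noise correlation matrix):

```python
import numpy as np

def sense_unfold(aliased, sens):
    """Unfold one set of aliased pixels.
    aliased: (n_coils,) measured values at one aliased location.
    sens:    (n_coils, R) coil sensitivities at the R superimposed pixels.
    Returns the R unaliased pixel values (least squares, identity noise cov)."""
    rho, *_ = np.linalg.lstsq(sens, aliased, rcond=None)
    return rho

# Toy example: 4 coils, reduction factor R = 2.
sens = np.array([[1.0, 0.2],
                 [0.8, 0.5],
                 [0.3, 0.9],
                 [0.1, 1.0]])
truth = np.array([2.0, -1.0])
aliased = sens @ truth   # each coil sees a weighted sum of both pixels
print(sense_unfold(aliased, sens))
```

    The conditioning of `sens` is exactly what the article's coil-layout findings (smaller, non-overlapping elements) aim to improve: distinct sensitivity columns keep the inverse well-posed and the noise amplification low.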

  17. Application of Convolutional Neural Network in Classification of High Resolution Agricultural Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.

    2017-09-01

    With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images is of significant value for agricultural management and estimation. Due to the complexity and fragmentation of features and surroundings at high resolution, the accuracy of traditional classification methods has not been able to meet the requirements of agricultural problems. This paper therefore proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, by training and testing the CNN with the MATLAB deep learning toolbox, crop classification finally achieved an accuracy of 99.66% after gradual parameter tuning during training. By improving the accuracy of image classification and image recognition, the application of CNNs provides a useful reference for the field of remote sensing in PA.

  18. Feature extraction based on extended multi-attribute profiles and sparse autoencoder for remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman

    2018-02-01

    Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Due to a number of limitations, such as the redundancy of features and the high dimensionality of the data, different classification methods have been proposed for remote sensing image classification, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method is used to classify various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE, linked to a kernel support vector machine (SVM) for classification. Experiments on the new hyperspectral "Houston data" and the multispectral "Washington DC data" show that this scheme achieves better feature learning than primitive features, traditional classifiers, and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in a short running time.

  19. Feasibility of 4D flow MR imaging of the brain with either Cartesian y-z radial sampling or k-t SENSE: comparison with 4D Flow MR imaging using SENSE.

    PubMed

    Sekine, Tetsuro; Amano, Yasuo; Takagi, Ryo; Matsumura, Yoshio; Murai, Yasuo; Kumita, Shinichiro

    2014-01-01

    A drawback of time-resolved 3-dimensional phase-contrast magnetic resonance (4D Flow MR) imaging is its lengthy scan time for clinical application in the brain. We assessed the feasibility for flow measurement and visualization of 4D Flow MR imaging using Cartesian y-z radial sampling and using k-t sensitivity encoding (k-t SENSE), by comparison with the standard scan using SENSE. Sixteen volunteers underwent 3 types of 4D Flow MR imaging of the brain using a 3.0-tesla scanner. As the standard scan, 4D Flow MR imaging with SENSE was performed first, followed by two types of accelerated scan: one with Cartesian y-z radial sampling and one with k-t SENSE. We measured peak systolic velocity (PSV) and blood flow volume (BFV) in 9 arteries and the percentage of particles arriving from the emitter plane at the target plane in 3 arteries, visually graded image quality in 9 arteries, and compared these quantitative and visual data between the standard scan and each acceleration scan. The 4D Flow MR imaging examinations were completed in all but one volunteer, who did not undergo the last examination because of headache. Each acceleration scan reduced scan time by 50% compared with the standard scan. The k-t SENSE imaging underestimated PSV and BFV (P < 0.05). There were significant correlations for PSV and BFV between the standard scan and each acceleration scan (P < 0.01). The percentage of particles reaching the target plane did not differ between the standard scan and each acceleration scan. For visual assessment, y-z radial sampling deteriorated the image quality of the 3 arteries. Cartesian y-z radial sampling is feasible for measuring flow, and k-t SENSE offers sufficient flow visualization; both allow acquisition of 4D Flow MR imaging with shorter scan time.

  20. Downscaling of Aircraft-, Landsat-, and MODIS-based Land Surface Temperature Images with Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Ha, W.; Gowda, P. H.; Oommen, T.; Howell, T. A.; Hernandez, J. E.

    2010-12-01

    High spatial resolution Land Surface Temperature (LST) images are required to estimate evapotranspiration (ET) at a field scale for irrigation scheduling purposes. Satellite sensors such as Landsat 5 Thematic Mapper (TM) and Moderate Resolution Imaging Spectroradiometer (MODIS) can offer images at several spectral bandwidths including visible, near-infrared (NIR), shortwave-infrared, and thermal-infrared (TIR). The TIR images usually have coarser spatial resolutions than those from non-thermal infrared bands. Due to this technical constraint of the satellite sensors on these platforms, image downscaling has been proposed in the field of ET remote sensing. This paper explores the potential of the Support Vector Machines (SVM) to perform downscaling of LST images derived from aircraft (4 m spatial resolution), TM (120 m), and MODIS (1000 m) using normalized difference vegetation index images derived from simultaneously acquired high resolution visible and NIR data (1 m for aircraft, 30 m for TM, and 250 m for MODIS). The SVM is a new generation machine learning algorithm that has found a wide application in the field of pattern recognition and time series analysis. The SVM would be ideally suited for downscaling problems due to its generalization ability in capturing non-linear regression relationship between the predictand and the multiple predictors. Remote sensing data acquired over the Texas High Plains during the 2008 summer growing season will be used in this study. Accuracy assessment of the downscaled 1, 30, and 250 m LST images will be made by comparing them with LST data measured with infrared thermometers at a small spatial scale, upscaled 30 m aircraft-based LST images, and upscaled 250 m TM-based LST images, respectively.
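
    The downscaling idea can be sketched as learning a regression from NDVI to LST at the coarse scale and applying it at the fine scale. The snippet below uses an RBF kernel ridge regression in plain numpy as a stand-in for the SVM regressor, with entirely synthetic NDVI/LST values (denser vegetation modeled as cooler surface):

```python
import numpy as np

def rbf_kernel(a, b, gamma=10.0):
    """Gaussian kernel between two 1-D NDVI vectors."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-gamma * d2)

def fit_predict(ndvi_coarse, lst_coarse, ndvi_fine, gamma=10.0, lam=1e-3):
    """Kernel ridge regression: learn LST = f(NDVI) at coarse scale,
    then apply f at fine scale (a numpy stand-in for the SVM regressor)."""
    K = rbf_kernel(ndvi_coarse, ndvi_coarse, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), lst_coarse)
    return rbf_kernel(ndvi_fine, ndvi_coarse, gamma) @ alpha

# Synthetic coarse-scale samples and a denser fine-scale NDVI grid.
ndvi_c = np.linspace(0.1, 0.9, 40)
lst_c = 320.0 - 30.0 * ndvi_c        # Kelvin, toy linear relation
ndvi_f = np.linspace(0.15, 0.85, 200)
lst_f = fit_predict(ndvi_c, lst_c, ndvi_f)
```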

  1. Testing Pairwise Association between Spatially Autocorrelated Variables: A New Approach Using Surrogate Lattice Data

    PubMed Central

    Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre

    2012-01-01

    Background: Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings: The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance: The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy).
We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
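
    The Monte Carlo procedure can be sketched with a simpler surrogate generator: Fourier phase randomization, which approximately preserves an image's amplitude spectrum (and hence its autocorrelation), used here in place of the paper's dual-tree complex wavelet surrogates. Everything below is an illustrative stand-in, not the authors' implementation:

```python
import numpy as np

def phase_randomized_surrogate(img, rng):
    """Surrogate with randomized Fourier phases. Taking the real part
    halves the power but keeps the shape of the autocorrelation."""
    F = np.fft.fft2(img - img.mean())
    phases = np.exp(2j * np.pi * rng.random(img.shape))
    return np.real(np.fft.ifft2(np.abs(F) * phases))

def correlation_test(a, b, n_surr=199, seed=0):
    """Two-sided Monte Carlo p-value for Pearson's r under the null
    that the spatial patterns of a and b are independent."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(a.ravel(), b.ravel())[0, 1]
    null = [np.corrcoef(a.ravel(),
                        phase_randomized_surrogate(b, rng).ravel())[0, 1]
            for _ in range(n_surr)]
    exceed = sum(abs(r) >= abs(r_obs) for r in null)
    return r_obs, (exceed + 1) / (n_surr + 1)

# Correlated pair of 32x32 fields: the test should reject independence.
rng = np.random.default_rng(1)
a = rng.standard_normal((32, 32))
b = a + 0.5 * rng.standard_normal((32, 32))
r, p = correlation_test(a, b)
```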

  2. Automatically assisting human memory: a SenseCam browser.

    PubMed

    Doherty, Aiden R; Moulin, Chris J A; Smeaton, Alan F

    2011-10-01

    SenseCams have many potential applications as tools for lifelogging, including the possibility of use as a memory rehabilitation tool. Given that a SenseCam can log hundreds of thousands of images per year, it is critical that these be presented to the viewer in a manner that supports the aims of memory rehabilitation. In this article we report a software browser constructed with the aim of using the characteristics of memory to organise SenseCam images into a form that makes the wealth of information stored on SenseCam more accessible. To enable a large amount of visual information to be easily and quickly assimilated by a user, we apply a series of automatic content analysis techniques to structure the images into "events", suggest their relative importance, and select representative images for each. This minimises effort when browsing and searching. We provide anecdotes on use of such a system and emphasise the need for SenseCam images to be meaningfully sorted using such a browser.

  3. a Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.

    2015-07-01

    Various sensors on airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other applications. However, it is challenging to efficiently store, query, and process such big data due to data- and computing-intensive issues. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo Toolbox, a ready-to-use tool for large-image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo Toolbox, and MapReduce, these remote sensing images can be processed in parallel in a scalable computing environment. The experimental results show that the proposed framework can efficiently manage and process such big remote sensing data.

  4. MACSAT - A Near Equatorial Earth Observation Mission

    NASA Astrophysics Data System (ADS)

    Kim, B. J.; Park, S.; Kim, E.-E.; Park, W.; Chang, H.; Seon, J.

    The MACSAT mission was initiated by Malaysia to launch a high-resolution remote sensing satellite into Near Equatorial Orbit (NEO). Due to its geographical location, Malaysia can derive large benefits from NEO satellite operation. From the baseline circular orbit of 685 km altitude with 7 degrees of inclination, the neighboring regions around Malaysian territory can be frequently monitored, and the equatorial environment around the globe can be regularly observed with unique revisit characteristics. The primary objective of the MACSAT program is to develop and validate technologies for a near-equatorial-orbit remote sensing satellite system. MACSAT is optimally designed to accommodate an electro-optic Earth observation payload, the Medium-sized Aperture Camera (MAC). Joint Malaysian and Korean engineering teams have been formed for effective implementation of the satellite system, with an integrated team approach adopted for the joint development of MACSAT. MAC is a pushbroom-type camera with 2.5 m Ground Sampling Distance (GSD) in the panchromatic band and 5 m GSD in four multispectral bands. The satellite platform is a mini-class satellite: including the MAC payload, the satellite weighs under 200 kg. The spacecraft bus is optimally designed to support payload operations during the 3-year mission life. The payload has a 20 km swath width with ±30° of tilting capability, and a 32 Gbit solid-state recorder is implemented as the mass image storage. The ground element is an integrated ground station for mission control and payload operation. It is equipped with an S-band up/down link for commanding and telemetry reception, as well as a 30 Mbps class X-band downlink for image reception and processing. The MACSAT system is capable of generating 1:25,000-scale image maps. It is also anticipated to have cross-track stereo imaging capability for Digital Elevation Model (DEM) generation.

  5. Generating High-Temporal and Spatial Resolution TIR Image Data

    NASA Astrophysics Data System (ADS)

    Herrero-Huerta, M.; Lagüela, S.; Alfieri, S. M.; Menenti, M.

    2017-09-01

    Remote sensing imagery to monitor global biophysical dynamics requires thermal infrared data at high temporal and spatial resolution because of the rapid development of crops during the growing season and the fragmentation of most agricultural landscapes. However, no single sensor meets these combined requirements. Data fusion approaches offer an alternative, exploiting observations from multiple sensors to provide data sets with better properties. A novel spatio-temporal data fusion model based on constrained algorithms, denoted the multisensor multiresolution technique (MMT), was developed and applied to generate synthetic TIR image data at high temporal and spatial resolution. First, an adaptive radiance model based on spectral unmixing analysis is applied: TIR radiance data at TOA (top of atmosphere) collected daily by MODIS at 1 km and every 16 days by Landsat TIRS, resampled to 30 m resolution, are used to generate synthetic daily TOA radiance images at 30 m spatial resolution. The next step consists of unmixing the 30 m (now lower-resolution) images using information about their per-pixel land-cover composition from co-registered images at higher spatial resolution; in our case study, the synthesized TIR data were unmixed against Sentinel-2 MSI data at 10 m resolution. The constrained unmixing preserves all the available radiometric information of the 30 m images and involves optimizing the number of land-cover classes and the size of the moving window for spatial unmixing. Results are still being evaluated, with particular attention to the quality of the data streams required to apply our approach.
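
    The constrained unmixing step rests on the linear mixture model: a pixel's radiance is a class-fraction-weighted sum of endmember radiances, with fractions summing to one. A minimal sketch of sum-to-one constrained least-squares unmixing (the endmember values below are invented for illustration, not MMT's actual classes):

```python
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Sum-to-one constrained least-squares unmixing.
    pixel: (n_bands,) radiance; endmembers: (n_bands, n_classes).
    The constraint sum(f) = 1 is enforced via a heavily weighted extra row."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    y = np.append(pixel, weight)
    f, *_ = np.linalg.lstsq(A, y, rcond=None)
    return f

# Three bands, three classes; each column is one endmember's radiance.
E = np.array([[0.9, 0.2, 0.1],
              [0.7, 0.4, 0.1],
              [0.3, 0.8, 0.2]])
truth = np.array([0.5, 0.3, 0.2])   # true sub-pixel fractions
f = unmix(E @ truth, E)
```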

  6. LACO-Wiki: A land cover validation tool and a new, innovative teaching resource for remote sensing and the geosciences

    NASA Astrophysics Data System (ADS)

    See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Steffen, Fritz

    2016-04-01

    The validation of land cover products is an important step in the workflow of generating a land cover map from remotely-sensed imagery. Many students of remote sensing will be given exercises on classifying a land cover map followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or increasingly as open source tools. However, there is little standardization for land cover validation, nor a set of open tools available for implementing this process. The LACO-Wiki tool was developed as a way of filling this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users including producers of land cover maps, researchers, teachers and students. The use of such a tool could be embedded within the curriculum of remote sensing courses at a university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.

  7. Information recovery through image sequence fusion under wavelet transformation

    NASA Astrophysics Data System (ADS)

    He, Qiang

    2010-04-01

    Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and enhance image quality becomes very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through meaningful combination of images captured by different sensors or under different conditions, i.e., through information fusion. Here we particularly address information fusion for remote sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured of the same scene. Through image fusion, a new image with higher resolution, or more perceptible to humans and machines, is created from a time series of low-quality images based on image registration between different video frames.

  8. Distributed Coding of Compressively Sensed Sources

    NASA Astrophysics Data System (ADS)

    Goukhshtein, Maxim

    In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity.
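    The encoder/decoder asymmetry can be illustrated numerically. The toy sketch below replaces the syndrome bits with a plain residual between the quantized measurements and the decoder's side-information prediction, and a least-squares fit stands in for the structured reconstruction; the dimensions and correlation model are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 256, 64                                   # signal length, measurements
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing matrix (shared)

x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)   # sparse source
side = x + 0.01 * rng.standard_normal(n)         # correlated side information

# low-complexity encoder: measure and quantize
step = 0.05
q = np.round(phi @ x / step).astype(int)

# decoder: predict the measurements from side information; only the small
# residual (standing in for the syndrome bits of the real scheme) is needed
q_pred = np.round(phi @ side / step).astype(int)
residual = q - q_pred
q_rec = q_pred + residual                        # lossless recovery of q

# least-squares stand-in for the structured reconstruction stage
x_hat = np.linalg.lstsq(phi, q_rec * step, rcond=None)[0]
```

    The point is that the encoder performs only a matrix multiply and a rounding, while all prediction and decoding effort sits at the decoder.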

  9. Geometric representation methods for multi-type self-defining remote sensing data sets

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1980-01-01

    Efficient and convenient representation of remote sensing data is highly important for an effective utilization. The task of merging different data types is currently dealt with by treating each case as an individual problem. A description is provided of work which is carried out to standardize the multidata merging process. The basic concept of the new approach is that of the self-defining data set (SDDS). The creation of a standard is proposed. This standard would be such that data which may be of interest in a large number of earth resources remote sensing applications would be in a format which allows convenient and automatic merging. Attention is given to details regarding the multidata merging problem, a geometric description of multitype data sets, image reconstruction from track-type data, a data set generation system, and an example multitype data set.

  10. Multilayer Markov Random Field models for change detection in optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Benedek, Csaba; Shadaydeh, Maha; Kato, Zoltan; Szirányi, Tamás; Zerubia, Josiane

    2015-09-01

    In this paper, we give a comparative study on three Multilayer Markov Random Field (MRF) based solutions proposed for change detection in optical remote sensing images, called Multicue MRF, Conditional Mixed Markov model, and Fusion MRF. Our purposes are twofold. On one hand, we highlight the significance of the focused model family and we set them against various state-of-the-art approaches through a thematic analysis and quantitative tests. We discuss the advantages and drawbacks of class comparison vs. direct approaches, usage of training data, various targeted application fields and different ways of Ground Truth generation, meantime informing the Reader in which roles the Multilayer MRFs can be efficiently applied. On the other hand we also emphasize the differences between the three focused models at various levels, considering the model structures, feature extraction, layer interpretation, change concept definition, parameter tuning and performance. We provide qualitative and quantitative comparison results using principally a publicly available change detection database which contains aerial image pairs and Ground Truth change masks. We conclude that the discussed models are competitive against alternative state-of-the-art solutions, if one uses them as pre-processing filters in multitemporal optical image analysis. In addition, they cover together a large range of applications, considering the different usage options of the three approaches.

  11. Hyperspectral range imaging for transportation systems evaluation

    NASA Astrophysics Data System (ADS)

    Bridgelall, Raj; Rafert, J. B.; Atwood, Don; Tolliver, Denver D.

    2016-04-01

    Transportation agencies expend significant resources to inspect critical infrastructure such as roadways, railways, and pipelines. Regular inspections identify important defects and generate data to forecast maintenance needs. However, cost and practical limitations prevent the scaling of current inspection methods beyond relatively small portions of the network. Consequently, existing approaches fail to discover many high-risk defect formations. Remote sensing techniques offer the potential for more rapid and extensive non-destructive evaluations of the multimodal transportation infrastructure. However, optical occlusions and limitations in the spatial resolution of typical airborne and space-borne platforms limit their applicability. This research proposes hyperspectral image classification to isolate transportation infrastructure targets for high-resolution photogrammetric analysis. A plenoptic swarm of unmanned aircraft systems will capture images with centimeter-scale spatial resolution, large swaths, and polarization diversity. The light field solution will incorporate structure-from-motion techniques to reconstruct three-dimensional details of the isolated targets from sequences of two-dimensional images. A comparative analysis of existing low-power wireless communications standards suggests an application dependent tradeoff in selecting the best-suited link to coordinate swarming operations. This study further produced a taxonomy of specific roadway and railway defects, distress symptoms, and other anomalies that the proposed plenoptic swarm sensing system would identify and characterize to estimate risk levels.

  12. Floating Forests: Validation of a Citizen Science Effort to Answer Global Ecological Questions

    NASA Astrophysics Data System (ADS)

    Rosenthal, I.; Byrnes, J.; Cavanaugh, K. C.; Haupt, A. J.; Trouille, L.; Bell, T. W.; Rassweiler, A.; Pérez-Matus, A.; Assis, J.

    2017-12-01

    Researchers undertaking long-term, large-scale ecological analyses face significant challenges in data collection and processing. Crowdsourcing via citizen science can provide an efficient method for analyzing large data sets. However, many scientists have raised questions about the quality of data collected by citizen scientists. Here we use Floating Forests (http://floatingforests.org), a citizen science platform for creating a global time series of giant kelp abundance, to show that ensemble classifications of satellite data can ensure data quality. Citizen scientists view satellite images of coastlines and classify kelp forests by tracing all visible patches of kelp. Each image is classified by fifteen citizen scientists before being retired. To validate citizen science results, all fifteen classifications are converted to a raster and overlaid on a calibration dataset generated from previous studies. Results show that ensemble classifications from citizen scientists are consistently accurate when compared to calibration data. Given that all source images were acquired by Landsat satellites, we expect this consistency to hold across all regions. At present, we have over 6000 web-based citizen scientists' classifications of almost 2.5 million images of kelp forests in California and Tasmania. These results are useful not only for remote sensing of kelp forests, but also for a wide array of applications that combine citizen science with remote sensing.
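    The ensemble idea, per-pixel voting over fifteen independent tracings, can be sketched as follows. The patch geometry, error rate, and majority threshold are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

truth = np.zeros((32, 32), dtype=bool)
truth[8:20, 10:24] = True                      # a "kelp patch" in the scene

# fifteen noisy citizen tracings: each flips about 5% of pixels at random
tracings = [truth ^ (rng.random(truth.shape) < 0.05) for _ in range(15)]

votes = np.sum(tracings, axis=0)               # per-pixel agreement count
consensus = votes >= 8                         # simple majority of 15

agreement = (consensus == truth).mean()        # fraction matching truth
```

    Even with individually noisy tracings, the majority vote recovers the true patch almost perfectly, which is the intuition behind retiring each image after fifteen classifications.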

  13. Fast Occlusion and Shadow Detection for High Resolution Remote Sensing Image Combined with LIDAR Point Cloud

    NASA Astrophysics Data System (ADS)

    Hu, X.; Li, X.

    2012-08-01

    The orthophoto is an important component of a GIS database and has been applied in many fields. However, occlusion and shadow cause the loss of feature information, which greatly affects image quality. One of the critical steps in true orthophoto generation is the detection of occlusion and shadow. Nowadays, LiDAR can acquire the digital surface model (DSM) directly; combined with this technology, image occlusion and shadow can be detected automatically. In this paper, the Z-buffer is applied to occlusion detection. Shadow detection can be treated as the same problem as occlusion detection by considering the angle between the sun and the camera. However, the Z-buffer algorithm is computationally expensive, and the volume of scanned data and remote sensing images is very large, so an efficient algorithm is another challenge. A modern graphics processing unit (GPU) is much more powerful than a central processing unit (CPU); we use this technology to speed up the Z-buffer algorithm and achieve a 7-fold speedup over the CPU. The experimental results demonstrate that the Z-buffer algorithm performs well in occlusion and shadow detection when combined with a high-density point cloud, and that the GPU can speed up the computation significantly.
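    The Z-buffer idea can be illustrated in one dimension: march ground cells away from the camera, keep the steepest viewing ray seen so far, and flag any cell whose ray is not steeper as hidden. A toy sketch (the real algorithm works on a 2-D DSM with a full camera model, and the same test with the sun's direction yields the shadow mask):

```python
import numpy as np

def occlusion_mask(dsm, cam_height, cell=1.0):
    """1-D Z-buffer sweep: cells hidden behind taller terrain, as seen
    from a camera at x = 0 and the given height, are marked occluded."""
    n = dsm.size
    occluded = np.zeros(n, dtype=bool)
    max_slope = -np.inf
    for i in range(1, n):
        # slope of the ray from the camera to the top of cell i
        slope = (dsm[i] - cam_height) / (i * cell)
        if slope <= max_slope:
            occluded[i] = True      # a nearer, taller cell blocks the view
        else:
            max_slope = slope
    return occluded

# flat ground with two "buildings" at cells 2 and 6 (invented heights)
dsm = np.array([0.0, 0.0, 10.0, 0.0, 0.0, 0.0, 12.0, 0.0])
mask = occlusion_mask(dsm, cam_height=20.0)
```

    The cells immediately behind each building, relative to the camera, come out occluded.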

  14. A synthesis of remote sensing and local knowledge approaches in land degradation assessment in the Bawku East District, Ghana

    NASA Astrophysics Data System (ADS)

    Yiran, G. A. B.; Kusimi, J. M.; Kufogbe, S. K.

    2012-02-01

    A large percentage of Northern Ghana is under threat of land degradation, which negatively impacts the well-being of the people owing to deforestation, the increasing incidence of drought, indiscriminate bush burning and desertification. The problem is becoming severe, with serious implications for the livelihoods of the people, as the land is the major resource from which they eke out a living. Reversing land degradation requires sustainable land use planning, which should be based on detailed, up-to-date information on landscape attributes. This information can be generated through remote sensing analytical studies. Therefore, an attempt has been made in this study to collect data for planning by employing remote sensing techniques and ground truthing. The analysis included satellite image classification and change detection between Landsat images captured in 1989, 1999 and 2006. The images were classified into the following classes: water bodies, close savannah woodland, open savannah woodland, grassland/unharvested farmland, exposed soil, burnt scars, and settlement. Change detection performed between 1989 and 1999 and between 1989 and 2006 showed that the environment is deteriorating. Land covers such as close savannah woodland, open savannah woodland and exposed soil diminished over the period, whereas settlement and water bodies increased. The grassland/unharvested farmland class showed large increases because the images were captured at a time when some farms were still under crops or crop residue. Urbanization, land clearing for farming, overgrazing, firewood collection and bush burning were identified as some of the underlying drivers of vegetation cover degradation. The socio-cultural beliefs and practices of the people also influenced land cover change, as sacred groves and medicinal plants are preserved. 
Local knowledge is recognized and used in the area, but it is not properly integrated with scientific knowledge for effective planning for sustainable land management. This is due to a lack of expertise in remote sensing and geographic information systems (GIS) in the area.
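    Change detection between two classified maps reduces to a cross-tabulation of class labels. A minimal sketch with invented toy labels and maps (not the study's actual data):

```python
import numpy as np

def change_matrix(map_a, map_b, n_classes):
    """Cross-tabulate two classified maps: entry (i, j) counts pixels that
    moved from class i at the first date to class j at the second."""
    pairs = map_a.ravel() * n_classes + map_b.ravel()
    counts = np.bincount(pairs, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# 0 = water, 1 = woodland, 2 = settlement (hypothetical label codes)
m1989 = np.array([[1, 1, 0], [1, 2, 0], [1, 1, 1]])
m2006 = np.array([[1, 2, 0], [2, 2, 0], [1, 2, 1]])
cm = change_matrix(m1989, m2006, 3)
```

    Off-diagonal entries, here woodland converting to settlement, are the degradation signals a study like this quantifies; multiplying by pixel area converts counts to hectares.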

  15. Design of a temperature control system using incremental PID algorithm for a special homemade shortwave infrared spatial remote sensor based on FPGA

    NASA Astrophysics Data System (ADS)

    Xu, Zhipeng; Wei, Jun; Li, Jianwei; Zhou, Qianting

    2010-11-01

    An imaging spectrometer on a spatial remote sensing satellite requires a shortwave band covering 2.1 μm to 3 μm, one of the most important bands in remote sensing. We designed an infrared sub-system of the imaging spectrometer using a homemade 640x1 InGaAs shortwave infrared sensor operating as a focal plane array (FPA), which requires high uniformity and a low level of dark current. The working temperature should be -15 +/- 0.2 degrees Celsius. This paper studies the noise model of the FPA system, investigates the relationship between temperature and dark-current noise, and adopts an incremental PID algorithm to generate a PWM wave to control the temperature of the sensor. The FPGA design comprises four modules, all coded in VHDL and implemented in an APA300 FPGA device. Experiments show that the intelligent temperature control system succeeds in controlling the temperature of the sensor.
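    The incremental (velocity-form) PID computes the change in control effort each step, du_k = Kp(e_k - e_{k-1}) + Ki*e_k + Kd(e_k - 2e_{k-1} + e_{k-2}), which maps naturally onto adjusting a PWM duty cycle and is inherently windup-resistant under clamping. A sketch with an invented first-order thermal plant; the gains and plant constants are illustrative, not the paper's values:

```python
class IncrementalPID:
    """Incremental PID: accumulates the per-step change du into a
    clamped duty cycle (0..1), as would drive a PWM cooler."""

    def __init__(self, kp, ki, kd, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max = u_min, u_max
        self.e1 = self.e2 = 0.0            # e[k-1], e[k-2]
        self.u = 0.0                       # current duty cycle

    def update(self, measured, setpoint):
        # error sign chosen for a cooling actuator:
        # hotter than the setpoint -> more duty -> more cooling
        e = measured - setpoint
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        self.u = min(self.u_max, max(self.u_min, self.u + du))
        return self.u

# toy plant: drift toward a +20 degree ambient minus TEC cooling
pid = IncrementalPID(kp=0.02, ki=0.005, kd=0.01)
temp = 25.0
for _ in range(2000):
    duty = pid.update(temp, setpoint=-15.0)
    temp += 0.1 * (20.0 - temp) - 5.0 * duty
```

    With these constants the loop settles at -15 degrees with a steady duty of 0.7, the value that exactly balances the ambient heat leak.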

  16. Recent Advances of Activatable Molecular Probes Based on Semiconducting Polymer Nanoparticles in Sensing and Imaging

    PubMed Central

    Lyu, Yan

    2017-01-01

    Molecular probes that change their signals in response to a target of interest play a critical role in fundamental biology and medicine. Semiconducting polymer nanoparticles (SPNs) have recently emerged as a new generation of purely organic photonic nanoagents with desirable properties for biological applications. In particular, the tunable optical properties of SPNs allow them to be developed into photoluminescence, chemiluminescence, and photoacoustic probes, wherein the SPNs usually serve as the energy donor and internal reference for luminescence and photoacoustic probes, respectively. Moreover, facile surface modification and intraparticle engineering provide the versatility to make them responsive to various biologically and pathologically important substances and indexes, including small-molecule mediators, proteins, pH and temperature. This article focuses on recent advances in the development of SPN-based activatable molecular probes for sensing and imaging. The designs and applications of these probes are discussed in detail, and the present challenges to advancing them further into the life sciences are also analyzed. PMID:28638783

  17. Securing image information using double random phase encoding and parallel compressive sensing with updated sampling processes

    NASA Astrophysics Data System (ADS)

    Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing

    2017-11-01

    Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution that updates the CS measurement matrix by altering the secret sparse basis with the help of counter-mode operation. In particular, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with conventional CS-based cryptosystems that regenerate every random entry of the measurement matrix, our scheme is more efficient while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.
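    The counter-mode updating idea, never re-using the same measurement matrix, can be sketched by deriving each matrix from hash(key || counter). Note that the paper instead updates a reality-preserving fractional cosine sparse basis; this is only a generic illustration of per-use rekeying:

```python
import hashlib
import numpy as np

def measurement_matrix(key: bytes, counter: int, m: int, n: int):
    """Derive a fresh Gaussian sensing matrix for each encryption by
    seeding a PRNG with hash(key || counter). Legitimate parties sharing
    the key and counter regenerate the same matrix; re-use never occurs."""
    digest = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
    seed = int.from_bytes(digest[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal((m, n)) / np.sqrt(m)

phi1 = measurement_matrix(b"secret", 0, 32, 128)
phi2 = measurement_matrix(b"secret", 1, 32, 128)
phi1_again = measurement_matrix(b"secret", 0, 32, 128)
```

    Because each plaintext is measured with a distinct matrix, the chosen-plaintext strategy of probing a fixed matrix no longer applies.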

  18. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite which carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the earth's surface, where they are fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results obtained corroborate the benefits of the proposed methodology.
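    The two on-board degradation steps are simple operations, which is the point of the scheme. A sketch with an invented cube size and a boxcar spectral response; the compression ratio follows directly from the chosen degradation factors:

```python
import numpy as np

def spatial_degrade(cube, factor):
    """Average non-overlapping factor x factor blocks in each band."""
    b, h, w = cube.shape
    return cube.reshape(b, h // factor, factor,
                        w // factor, factor).mean(axis=(2, 4))

def spectral_degrade(cube, srf):
    """Collapse bands with a spectral response matrix (n_msi x n_bands)."""
    b, h, w = cube.shape
    return (srf @ cube.reshape(b, -1)).reshape(srf.shape[0], h, w)

rng = np.random.default_rng(0)
hsi = rng.random((40, 16, 16))          # toy 40-band hyperspectral cube

srf = np.zeros((4, 40))                 # 4 broad bands of 10 channels each
for i in range(4):
    srf[i, 10 * i:10 * (i + 1)] = 0.1

lr_hsi = spatial_degrade(hsi, 4)        # 40 x 4 x 4, sent on the down-link
hr_msi = spectral_degrade(hsi, srf)     # 4 x 16 x 16, sent on the down-link

ratio = hsi.size / (lr_hsi.size + hr_msi.size)   # fixed-in-advance ratio
```

    The complex part, fusing `lr_hsi` and `hr_msi` back into the full cube, happens on the ground.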

  19. Real-time computational photon-counting LiDAR

    NASA Astrophysics Data System (ADS)

    Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles

    2018-03-01

    The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where the availability of focal plane arrays is expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system via direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter accuracy three-dimensional images in real time. The development of low-cost real time computational LiDAR systems could have importance for applications in security, defense, and autonomous vehicles.
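    Direct time of flight converts a photon's round-trip time to range, d = c*t/2, so the millimeter accuracy quoted above requires picosecond-scale timing electronics. A small sketch:

```python
C = 299_792_458.0                      # speed of light, m/s

def range_from_tof(t_seconds):
    """Direct time of flight: the photon travels out and back,
    so range is half the round-trip distance."""
    return C * t_seconds / 2.0

def timing_resolution_for(range_resolution_m):
    """Timing precision needed to resolve a given depth difference."""
    return 2.0 * range_resolution_m / C

r = range_from_tof(66.7e-9)            # a ~67 ns round trip is about 10 m
dt = timing_resolution_for(0.001)      # 1 mm depth -> a few picoseconds
```

    This is why the system pairs a photon-counting detector with fast-timing electronics rather than an ordinary ADC.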

  20. Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method

    NASA Astrophysics Data System (ADS)

    Xin, L.

    2018-04-01

    Utilizing high-resolution remote sensing images for earth observation has become a common method of land use monitoring. Traditional image interpretation requires substantial human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have many advantages for image recognition. By means of a large number of remote sensing image samples and deep neural network models, we can rapidly extract objects of interest such as buildings. In terms of both efficiency and accuracy, deep learning methods are preponderant. This paper describes research on deep learning methods using a large number of remote sensing image samples and verifies the feasibility of building extraction via experiments.

  1. LORAKS Makes Better SENSE: Phase-Constrained Partial Fourier SENSE Reconstruction without Phase Calibration

    PubMed Central

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.

    2016-01-01

    Purpose Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836

  2. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies in computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds. 
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.
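    Compressive measurement and sparse recovery, the core machinery behind these systems, can be sketched with orthogonal matching pursuit, one standard reconstruction algorithm (not necessarily the one used in the dissertation); the signal dimensions and coefficient values are invented:

```python
import numpy as np

def omp(phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit on the chosen support."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(phi.T @ residual))))
        sub = phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
n, m = 128, 64                                   # ambient dim, measurements
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix

x_true = np.zeros(n)                             # 3-sparse test signal
x_true[rng.choice(n, 3, replace=False)] = [2.0, -1.5, 1.0]
y = phi @ x_true                                 # compressed measurement
x_hat = omp(phi, y, 3)                           # sparse recovery
```

    Half as many measurements as unknowns suffice here because the signal is sparse, which is the trade the coded imagers above exploit in hardware.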

  3. A Fourier dimensionality reduction model for big data interferometric imaging

    NASA Astrophysics Data System (ADS)

    Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves

    2017-06-01

    Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. 
MATLAB code implementing the proposed reduction method is available on GitHub.
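    The two-step reduction, forming the dirty image and then applying a subsampled Fourier transform that keeps only sampled frequencies, can be sketched as below. A random mask stands in for both the continuous visibility coverage and the weighted singular-vector embedding of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # image side
img = np.zeros((n, n))
img[20, 30] = 1.0                        # two toy point sources
img[40, 12] = 0.7

# stand-in for visibility sampling: a random mask on the Fourier grid
mask = rng.random((n, n)) < 0.3
vis = np.fft.fft2(img) * mask            # "measured" visibilities

# step 1: dirty image = adjoint of the measurement operator
# (kept complex, since the masked spectrum is not conjugate-symmetric)
dirty = np.fft.ifft2(vis)

# step 2: reduced data vector = Fourier transform of the dirty image,
# subsampled to the sampled frequencies only
reduced = np.fft.fft2(dirty)[mask]
```

    The reduced vector is smaller than the image while carrying the same information as the sampled visibilities, which is what makes per-iteration costs lighter for the convex solvers discussed above.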

  4. Searches over graphs representing geospatial-temporal remote sensing data

    DOEpatents

    Brost, Randolph; Perkins, David Nikolaus

    2018-03-06

    Various technologies pertaining to identifying objects of interest in remote sensing images by searching over geospatial-temporal graph representations are described herein. Graphs are constructed by representing objects in remote sensing images as nodes, and connecting nodes with undirected edges representing either distance or adjacency relationships between objects and directed edges representing changes in time. Geospatial-temporal graph searches are made computationally efficient by taking advantage of characteristics of geospatial-temporal data in remote sensing images through the application of various graph search techniques.
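    The graph structure described above, object nodes, undirected spatial edges, directed temporal edges, can be sketched minimally as follows (the patented system's actual representation and search machinery are far more elaborate; all node names are invented):

```python
class GeoTemporalGraph:
    """Toy geospatial-temporal graph: nodes are observed objects,
    undirected edges carry adjacency/distance relationships, and
    directed edges link the same object across acquisition times."""

    def __init__(self):
        self.nodes = {}            # id -> attribute dict
        self.spatial = {}          # id -> set of adjacent ids (undirected)
        self.temporal = {}         # id -> id of the successor observation

    def add_node(self, nid, **attrs):
        self.nodes[nid] = attrs
        self.spatial.setdefault(nid, set())

    def add_spatial_edge(self, a, b):
        self.spatial[a].add(b)
        self.spatial[b].add(a)

    def add_temporal_edge(self, earlier, later):
        self.temporal[earlier] = later

    def track(self, nid):
        """Follow directed time edges to recover an object's history."""
        chain = [nid]
        while chain[-1] in self.temporal:
            chain.append(self.temporal[chain[-1]])
        return chain

g = GeoTemporalGraph()
g.add_node("pond_t0", kind="water", t=0)
g.add_node("pond_t1", kind="water", t=1)
g.add_node("road_t0", kind="road", t=0)
g.add_spatial_edge("pond_t0", "road_t0")
g.add_temporal_edge("pond_t0", "pond_t1")
```

    A search for "water body near a road that persists over time" then becomes a pattern match over these node attributes and edge types.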

  5. A parallel method of atmospheric correction for multispectral high spatial resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhao, Shaoshuai; Ni, Chen; Cao, Jing; Li, Zhengqiang; Chen, Xingfeng; Ma, Yan; Yang, Leiku; Hou, Weizhen; Qie, Lili; Ge, Bangyu; Liu, Li; Xing, Jin

    2018-03-01

    Remote sensing images are usually contaminated by atmospheric components, especially aerosol particles. For quantitative remote sensing applications, radiative-transfer-model-based atmospheric correction is used to obtain surface reflectance by decoupling the atmosphere and surface, at the cost of long computation times. Parallel computing is one way to accelerate this process. A parallel strategy in which multiple CPUs work simultaneously is designed to perform atmospheric correction for a multispectral remote sensing image. The flow of the parallel framework and the main parallel body of the atmospheric correction are described. A multispectral remote sensing image from the Chinese Gaofen-2 satellite is then used to test the acceleration efficiency. As the number of CPUs increases from 1 to 8, the computational speed also increases, with a maximum speedup of 6.5. With 8 CPUs, atmospheric correction of the whole image takes 4 minutes.
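    The tile-parallel pattern is straightforward: split the image into strips, correct each independently, and stitch the results. The sketch below uses a thread pool and a stand-in linear correction with invented coefficients; a real implementation would wrap the radiative transfer code in process-based workers, since the work is CPU-bound:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def correct_tile(tile, gain=1.15, offset=0.02):
    """Stand-in for the per-pixel radiative-transfer correction:
    a simple linear atmosphere removal (hypothetical coefficients)."""
    return (tile - offset) / gain

def parallel_correction(image, n_workers=4):
    """Split rows into one strip per worker, correct the strips
    concurrently, and stitch the corrected image back together."""
    strips = np.array_split(image, n_workers, axis=0)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        corrected = list(pool.map(correct_tile, strips))
    return np.vstack(corrected)

img = np.random.default_rng(0).random((64, 64))
out = parallel_correction(img, n_workers=8)
```

    Because each strip is corrected independently, the speedup scales with worker count until scheduling overhead and shared I/O dominate, consistent with the sub-linear 6.5x observed at 8 CPUs.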

  6. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, segmentation is a preliminary step for later analysis such as semi-automatic human interpretation and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing methods have prevailed; the core of this approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on research into and improvement of that algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm, found to be the optimum choice, as an initialization. The algorithm is then modified by adjusting an area parameter, and further by combining the area parameter with a heterogeneity parameter. Several experiments are carried out which show that the modified FNEA algorithm achieves a better segmentation result than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
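    FNEA-style region merging is driven by a heterogeneity increase: merging two segments is cheap when the combined segment's area-weighted standard deviation barely grows. A sketch of the colour component of that merging cost (the band weights and the shape component of the full criterion are omitted; the segment values are invented):

```python
import numpy as np

def color_heterogeneity_increase(seg1, seg2):
    """FNEA-style colour cost of merging two segments: the area-weighted
    increase in standard deviation caused by the merge. Small values mean
    the segments are spectrally similar and safe to merge."""
    merged = np.concatenate([seg1, seg2])
    n1, n2, nm = len(seg1), len(seg2), len(merged)
    return nm * merged.std() - (n1 * seg1.std() + n2 * seg2.std())

# pixel values of three toy segments in one band
similar_a = np.array([10.0, 11.0, 10.5, 10.2])
similar_b = np.array([10.1, 10.9, 10.4])
different = np.array([80.0, 82.0, 79.5])

cost_similar = color_heterogeneity_increase(similar_a, similar_b)
cost_different = color_heterogeneity_increase(similar_a, different)
```

    Multi-scale segmentation repeatedly merges the neighbour pair with the lowest such cost until every candidate merge exceeds a user-set scale threshold.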

  7. AgIIS, Agricultural Irrigation Imaging System, design and application

    NASA Astrophysics Data System (ADS)

    Haberland, Julio Andres

    Remote sensing is a tool increasingly used in agriculture for crop management purposes. A ground-based remote sensing data acquisition system was designed, constructed, and implemented to collect high spatial and temporal resolution data in irrigated agriculture. The system was composed of a rail that mounts on a linear-move irrigation machine and a small cart that runs back and forth on the rail. The cart was equipped with a sensor package that measured reflectance in four discrete wavelengths (550 nm, 660 nm, 720 nm, and 810 nm, all 10 nm bandwidth) and an infrared thermometer. A global positioning system and triggers on the rail indicated cart position. The data were postprocessed to generate vegetation maps, N and water status maps, and other indices relevant to site-specific crop management. A geographic information system (GIS) was used to generate images of the field on any desired day. The system was named AgIIS (Agricultural Irrigation Imaging System). This ground-based remote sensing acquisition system was developed at the Agricultural and Biosystems Engineering Department at the University of Arizona in conjunction with the U.S. Water Conservation Laboratory in Phoenix, as part of a cooperative study primarily funded by the Idaho National Environmental and Engineering Laboratory. A second phase of the study utilized data acquired with AgIIS during the 1999 cotton growing season to model petiole nitrate (PNO3-) and total leaf N. A Latin square experimental design with optimal and low water and optimal and low N was used to evaluate N status under water-stressed and unstressed conditions. Multivariable models were generated with neural networks (NN) and multilinear regression (MLR). Single-variable models were generated from chlorophyll meter readings (SPAD) and from the Canopy Chlorophyll Content Index (CCCI). All models were evaluated against observed PNO3- and total leaf N levels. 
The NN models showed the highest correlation with PNO3- and total leaf N. AgIIS was a reliable and efficient data acquisition system for research and also showed potential for use in commercial farming systems.
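    Standard indices follow directly from the four AgIIS bands; for example, NDVI from the 660/810 nm channels and the red-edge ratio underlying the CCCI. A sketch with invented reflectance values:

```python
def ndvi(red, nir):
    """Normalized difference vegetation index from the 660 nm (red)
    and 810 nm (near-infrared) reflectance channels."""
    return (nir - red) / (nir + red)

def ndre(red_edge, nir):
    """Red-edge index from the 720 nm and 810 nm channels. The full CCCI
    additionally rescales this between NDVI-dependent bounds, which is
    omitted here."""
    return (nir - red_edge) / (nir + red_edge)

# hypothetical reflectances: vegetation absorbs red, reflects NIR strongly
dense_canopy = ndvi(red=0.05, nir=0.45)
bare_soil    = ndvi(red=0.20, nir=0.25)
```

    Computed per cart position and gridded in a GIS, such indices yield the vegetation and N-status maps described above.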

  8. Tunable terahertz wave generation through a bimodal laser diode and plasmonic photomixer.

    PubMed

    Yang, S-H; Watts, R; Li, X; Wang, N; Cojocaru, V; O'Gorman, J; Barry, L P; Jarrahi, M

    2015-11-30

    We demonstrate a compact, robust, and stable terahertz source based on a novel two-section digital distributed feedback laser diode and plasmonic photomixer. Terahertz wave generation is achieved through difference frequency generation by pumping the plasmonic photomixer with the two output optical beams of the two-section digital distributed feedback laser diode. The laser is designed to offer an adjustable terahertz frequency difference between the emitted wavelengths by varying the currents applied to the laser sections. The plasmonic photomixer comprises an ultrafast photoconductor with plasmonic contact electrodes integrated with a logarithmic spiral antenna. We demonstrate terahertz wave generation with 0.15-3 THz frequency tunability, 2 MHz linewidth, and less than 5 MHz frequency stability over 1 minute, at power levels useful for practical imaging and sensing applications.
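
    The tuning range maps directly to the wavelength separation of the two laser lines via the difference-frequency relation Δf ≈ c·Δλ/λ₀². A quick numerical check (the 1550 nm centre wavelength and 8 nm separation are assumed for illustration, not taken from the paper):

```python
# difference-frequency generation: two laser lines separated by dlam near lam0
# beat at df = c * dlam / lam0**2
c = 2.998e8              # speed of light, m/s
lam0 = 1550e-9           # assumed telecom-band centre wavelength, m
dlam = 8e-9              # assumed 8 nm line separation, m
df = c * dlam / lam0**2
print(f"beat frequency ~ {df / 1e12:.2f} THz")
```

An 8 nm separation near 1550 nm thus corresponds to roughly 1 THz, consistent with the tunability range quoted above.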

  9. Improving the scalability of hyperspectral imaging applications on heterogeneous platforms using adaptive run-time data compression

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Paz, Abel

    2010-10-01

    Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
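
    The idea of shrinking pixel vectors before exchanging them over the network can be sketched with a one-level Haar wavelet transform and coefficient thresholding. This is a toy stand-in for the paper's adaptive wavelet-based compression; the band count, keep ratio, and test spectrum are illustrative:

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal 1-D Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return np.concatenate([a, d])

def ihaar_1d(c):
    """Inverse of one Haar level."""
    n = c.size // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def compress_pixel(spectrum, keep=0.5):
    """Lossy compression of one pixel vector: transform, keep largest coefficients."""
    c = haar_1d(spectrum)
    k = max(1, int(keep * c.size))
    idx = np.argsort(np.abs(c))[-k:]         # indices of the k largest coefficients
    return idx, c[idx], c.size

def decompress_pixel(idx, vals, n):
    c = np.zeros(n)
    c[idx] = vals
    return ihaar_1d(c)

# a smooth 224-band spectrum with a little noise
rng = np.random.default_rng(0)
spectrum = np.sin(np.linspace(0, 3 * np.pi, 224)) + 0.01 * rng.standard_normal(224)
idx, vals, n = compress_pixel(spectrum)
rec = decompress_pixel(idx, vals, n)
err = np.linalg.norm(rec - spectrum) / np.linalg.norm(spectrum)
print(f"kept {idx.size}/{n} coefficients, relative error {err:.4f}")
```

Because hyperspectral pixel spectra are smooth, most of their energy concentrates in few wavelet coefficients, so halving the message size costs little accuracy.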

  10. Compact opto-electronic engine for high-speed compressive sensing

    NASA Astrophysics Data System (ADS)

    Tidman, James; Weston, Tyler; Hewitt, Donna; Herman, Matthew A.; McMackin, Lenore

    2013-09-01

    The measurement efficiency of Compressive Sensing (CS) enables the computational construction of images from far fewer measurements than what is usually considered necessary by the Nyquist-Shannon sampling theorem. Since the development of its theoretical principles about a decade ago, a vast literature has grown up around CS mathematics and applications, ranging from quantum information to optical microscopy to seismic and hyperspectral imaging. In the application of shortwave infrared imaging, InView has developed cameras based on the CS single-pixel camera architecture. This architecture comprises an objective lens that images the scene onto a Texas Instruments DLP® Micromirror Device (DMD), which, using its individually controllable mirrors, modulates the image with a selected basis set. The intensity of the modulated image is then recorded by a single detector. While the design of a CS camera is conceptually straightforward, its commercial implementation requires significant development effort in optics, electronics, hardware, and software, particularly if high efficiency and high-speed operation are required. In this paper, we describe the development of a high-speed CS engine as implemented in a lab-ready workstation. In this engine, configurable measurement patterns are loaded into the DMD at speeds up to 31.5 kHz. The engine supports custom reconstruction algorithms that can be quickly implemented. Our work includes optical path design, Field Programmable Gate Arrays (FPGAs) for DMD pattern generation, and circuit boards for front-end data acquisition, ADC, and system control, all packaged in a compact workstation.

  11. Wide-Field Imaging Using Nitrogen Vacancies

    NASA Technical Reports Server (NTRS)

    Englund, Dirk Robert (Inventor); Trusheim, Matthew Edwin (Inventor)

    2017-01-01

    Nitrogen vacancies in bulk diamonds and nanodiamonds can be used to sense temperature, pressure, electromagnetic fields, and pH. Unfortunately, conventional sensing techniques use gated detection and confocal imaging, limiting the measurement sensitivity and precluding wide-field imaging. Conversely, the present sensing techniques do not require gated detection or confocal imaging and can therefore be used to image temperature, pressure, electromagnetic fields, and pH over wide fields of view. In some cases, wide-field imaging supports spatial localization of the NVs to precisions at or below the diffraction limit. Moreover, the measurement range can extend over an extremely wide dynamic range at very high sensitivity.

  12. Accelerated radial Fourier-velocity encoding using compressed sensing.

    PubMed

    Hilbert, Fabian; Wech, Tobias; Hahn, Dietbert; Köstler, Herbert

    2014-09-01

    Phase Contrast Magnetic Resonance Imaging (MRI) is a tool for non-invasive determination of flow velocities inside blood vessels. Because Phase Contrast MRI only measures a single mean velocity per voxel, it is only applicable to vessels significantly larger than the voxel size. In contrast, Fourier Velocity Encoding measures the entire velocity distribution inside a voxel, but requires a much longer acquisition time. For accurate diagnosis of stenosis in vessels on the scale of the spatial resolution, it is important to know the velocity distribution of a voxel. Our aim was to determine velocity distributions with accelerated Fourier Velocity Encoding in the acquisition time required for a conventional Phase Contrast image. We imaged the femoral artery of healthy volunteers with ECG-triggered, radial CINE acquisition. Data acquisition was accelerated by undersampling, while missing data were reconstructed by Compressed Sensing. Velocity spectra of the vessel were evaluated from high resolution Phase Contrast images and compared to spectra from fully sampled and undersampled Fourier Velocity Encoding. By means of undersampling, it was possible to reduce the scan time for Fourier Velocity Encoding to the duration required for a conventional Phase Contrast image. Acquisition time for a fully sampled data set with 12 different Velocity Encodings was 40 min. By applying a 12.6-fold retrospective undersampling, a data set was generated corresponding to an acquisition time of 3:10 min, similar to that of a conventional Phase Contrast measurement. Velocity spectra from fully sampled and undersampled Fourier Velocity Encoded images are in good agreement and show the same maximum velocities as velocity maps from Phase Contrast measurements. Compressed Sensing proved to reliably reconstruct Fourier Velocity Encoded data. Our results indicate that Fourier Velocity Encoding allows an accurate determination of the velocity distribution in vessels on the order of the voxel size. 
Thus, compared to normal Phase Contrast measurements delivering only mean velocities, no additional scan time is necessary to retrieve meaningful velocity spectra in small vessels. Copyright © 2013. Published by Elsevier GmbH.

  13. All-optical pulse-echo ultrasound probe for intravascular imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Colchester, Richard J.; Noimark, Sacha; Mosse, Charles A.; Zhang, Edward Z.; Beard, Paul C.; Parkin, Ivan P.; Papakonstantinou, Ioannis; Desjardins, Adrien E.

    2016-02-01

    High frequency ultrasound probes such as intravascular ultrasound (IVUS) and intracardiac echocardiography (ICE) catheters can be invaluable for guiding minimally invasive medical procedures in cardiology such as coronary stent placement and ablation. With current-generation ultrasound probes, ultrasound is generated and received electrically. The complexities involved with fabricating these electrical probes can result in high costs that limit their clinical applicability. Additionally, it can be challenging to achieve wide transmission bandwidths and adequate wideband reception sensitivity with small piezoelectric elements. Optical methods for transmitting and receiving ultrasound are emerging as alternatives to their electrical counterparts. They offer several distinguishing advantages, including the potential to generate and detect the broadband ultrasound fields (tens of MHz) required for high resolution imaging. In this study, we developed a miniature, side-looking, pulse-echo ultrasound probe for intravascular imaging, with fibre-optic transmission and reception. The axial resolution was better than 70 microns, and the imaging depth in tissue was greater than 1 cm. Ultrasound transmission was performed by photoacoustic excitation of a carbon nanotube/polydimethylsiloxane composite material; ultrasound reception, with a fibre-optic Fabry-Perot cavity. Ex vivo tissue studies, which included healthy swine tissue and diseased human tissue, demonstrated the strong potential of this technique. To our knowledge, this is the first study to achieve an all-optical pulse-echo ultrasound probe for intravascular imaging. The potential for performing all-optical B-mode imaging (2D and 3D) with virtual arrays of transmit/receive elements, and hybrid imaging with pulse-echo ultrasound and photoacoustic sensing are discussed.

  14. Remote sensing: a tool for park planning and management

    USGS Publications Warehouse

    Draeger, William C.; Pettinger, Lawrence R.

    1981-01-01

    Remote sensing may be defined as the science of imaging or measuring objects from a distance. More commonly, however, the term is used in reference to the acquisition and use of photographs, photo-like images, and other data acquired from aircraft and satellites. Thus, remote sensing includes the use of such diverse materials as photographs taken by hand from a light aircraft, conventional aerial photographs obtained with a precision mapping camera, satellite images acquired with sophisticated scanning devices, radar images, and magnetic and gravimetric data that may not even be in image form. Remotely sensed images may be color or black and white, can vary in scale from those that cover only a few hectares of the earth's surface to those that cover tens of thousands of square kilometers, and they may be interpreted visually or with the assistance of computer systems. This article attempts to describe several of the commonly available types of remotely sensed data, to discuss approaches to data analysis, and to demonstrate (with image examples) typical applications that might interest managers of parks and natural areas.

  15. Real-time appraisal of the spatially distributed heat related health risk and energy demand of cities

    NASA Astrophysics Data System (ADS)

    Keramitsoglou, Iphigenia; Kiranoudis, Chris T.; Sismanidis, Panagiotis

    2016-08-01

    The Urban Heat Island (UHI) is an adverse environmental effect of urbanization that increases the energy demand of cities, impacts human health, and intensifies and prolongs heatwave events. To facilitate the study of UHIs, the Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing of the National Observatory of Athens (IAASARS/NOA) has developed an operational real-time system that exploits remote sensing image data from Meteosat Second Generation - Spinning Enhanced Visible and Infrared Imager (MSG-SEVIRI) and generates high spatiotemporal land surface temperature (LST) and 2 m air temperature (TA) time series. These datasets form the basis for the generation of higher-value products and services related to energy demand and heat-related health issues. These products are the heatwave hazard (HZ); the HUMIDEX (i.e. an index that describes the temperature felt by an individual exposed to heat and humidity); and the cooling degrees (CD; i.e. a measure that reflects the energy needed to cool a building). The spatiotemporal characteristics of HZ, HUMIDEX and CD are unique (1 km/5 min) and enable the appraisal of the spatially distributed heat-related health risk and energy demand of cities. In this paper, the real-time generation of the high spatiotemporal HZ, HUMIDEX and CD products is discussed. In addition, a case study corresponding to Athens' September 2015 heatwave is presented to demonstrate their capabilities. The overall aim of the system is to provide high quality data to several different end users, such as health responders and energy suppliers. The urban thermal monitoring web service is available at http://snf-652558.vm.okeanos.grnet.gr/treasure/portal/info.html.
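
    The HUMIDEX and cooling-degree products are simple pointwise formulas applied per pixel of the temperature grids. A minimal sketch using the standard Environment Canada HUMIDEX definition (the 18 °C cooling base and the example temperatures are assumed values, not from the paper):

```python
import math

def humidex(t_air_c, t_dew_c):
    """Environment Canada HUMIDEX: air temperature plus a humidity term
    derived from the dew-point vapour pressure."""
    t_dew_k = t_dew_c + 273.16
    e = 6.11 * math.exp(5417.7530 * (1.0 / 273.16 - 1.0 / t_dew_k))  # vapour pressure, hPa
    return t_air_c + 0.5555 * (e - 10.0)

def cooling_degrees(t_air_c, base_c=18.0):
    """Degrees above a comfort base temperature; a proxy for cooling energy demand."""
    return max(t_air_c - base_c, 0.0)

# a hot, humid afternoon: 34 C air temperature with a 24 C dew point
h = humidex(34.0, 24.0)
cd = cooling_degrees(34.0)
print(f"HUMIDEX ~ {h:.1f}, cooling degrees ~ {cd:.1f}")
```

At 1 km/5 min resolution, applying such pointwise formulas to the LST/TA time series is cheap, which is what makes the real-time service feasible.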

  16. Testbed Demonstration of Low Order Wavefront Sensing and Control Technology for WFIRST Coronagraph

    NASA Astrophysics Data System (ADS)

    Shi, Fang; Balasubramanian, K.; Cady, E.; Kern, B.; Lam, R.; Mandic, M.; Patterson, K.; Poberezhskiy, I.; Shields, J.; Seo, J.; Tang, H.; Truong, T.; Wilson, D.

    2017-01-01

    NASA’s WFIRST-AFTA Coronagraph will be capable of directly imaging and spectrally characterizing giant exoplanets similar to Neptune and Jupiter, and possibly even super-Earths, around nearby stars. To maintain the required coronagraph performance in a realistic space environment, a Low Order Wavefront Sensing and Control (LOWFS/C) subsystem is necessary. The LOWFS/C will use the rejected stellar light to sense and suppress the telescope pointing drift and jitter as well as low order wavefront errors due to changes in the thermal loading of the telescope and the rest of the observatory. The LOWFS/C uses a Zernike phase contrast wavefront sensor with a phase-shifting disk combined with the stellar-light-rejecting occulting mask, a key concept for minimizing the non-common path error. Developed as part of the Dynamic High Contrast Imaging Testbed (DHCIT), the LOWFS/C subsystem also includes an Optical Telescope Assembly Simulator (OTA-S) to generate realistic line-of-sight (LoS) drift and jitter as well as low order wavefront error from the WFIRST-AFTA telescope’s vibration and thermal drift. The entire LOWFS/C subsystem has been integrated, calibrated, and tested in the Dynamic High Contrast Imaging Testbed. In this presentation we show the results of LOWFS/C performance during the dynamic coronagraph tests, in which we have demonstrated that LOWFS/C is able to maintain the coronagraph contrast in the presence of WFIRST-like line-of-sight drift and jitter as well as low order wavefront drifts.

  17. Remote-sensing image encryption in hybrid domains

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong

    2012-04-01

    Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing images are the main means of acquiring information from satellites and often contain confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both the spatial domain and the transform domain. First, the low-pass subband coefficients of the image's DWT (discrete wavelet transform) decomposition are sorted by a PWLCM (piecewise linear chaotic map) system in the transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with a 2D (two-dimensional) Logistic map and an XOR operation in the spatial domain. The experimental results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical and differential attacks. Meanwhile, the proposed algorithm offers sufficient encryption efficiency for practical requirements.
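
    The permutation-plus-diffusion pattern described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the DWT/IDWT stage is omitted, 1-D chaotic maps stand in for the 2-D Logistic map, and the key values are arbitrary:

```python
import numpy as np

def pwlcm_sequence(x0, p, n):
    """Trajectory of a piecewise linear chaotic map (PWLCM), 0 < p < 0.5."""
    x, seq = x0, np.empty(n)
    for i in range(n):
        y = x if x < 0.5 else 1.0 - x        # the map is symmetric about 0.5
        x = y / p if y < p else (y - p) / (0.5 - p)
        seq[i] = x
    return seq

def logistic_keystream(x0, n):
    """Byte keystream from the logistic map x -> 3.99 x (1 - x)."""
    x, ks = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = 3.99 * x * (1.0 - x)
        ks[i] = int(x * 256) % 256
    return ks

KEY = (0.123456, 0.3, 0.7)                   # (PWLCM seed, PWLCM slope, logistic seed)

def encrypt(img, key=KEY):
    """Confusion (chaotic-sort permutation) followed by diffusion (XOR)."""
    x0, p, lx0 = key
    flat = img.ravel()
    perm = np.argsort(pwlcm_sequence(x0, p, flat.size))
    return (flat[perm] ^ logistic_keystream(lx0, flat.size)).reshape(img.shape)

def decrypt(cipher, key=KEY):
    x0, p, lx0 = key
    flat = cipher.ravel() ^ logistic_keystream(lx0, cipher.size)
    perm = np.argsort(pwlcm_sequence(x0, p, cipher.size))
    out = np.empty_like(flat)
    out[perm] = flat                         # invert the permutation
    return out.reshape(cipher.shape)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
cipher = encrypt(img)
assert np.array_equal(decrypt(cipher), img)
print("round trip OK")
```

Sorting a chaotic trajectory yields a key-dependent permutation (confusion), and the keystream XOR changes pixel values (diffusion); both stages are exactly invertible given the key.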

  18. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    PubMed

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integrating half-Fourier reconstruction into the iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint enables superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.

  19. Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Piao, Yan

    2018-04-01

    In order to improve the subjective and objective quality of degraded images at low sampling rates while saving storage space and reducing computational complexity, this paper proposes a joint restoration algorithm combining compressed sensing and two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, originally used in image restoration, to compressed sensing theory. A small amount of sparse high-frequency information is obtained in the frequency domain, and the TwIST algorithm based on compressed sensing theory is used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality while accurately restoring degraded images.
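
    The TwIST iteration mixes the two previous estimates with a soft-thresholded gradient step. A minimal sparse-recovery sketch (the measurement matrix, regularization weight, and α, β values are illustrative choices, not the paper's settings):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def twist(y, A, lam, alpha=1.2, beta=1.0, n_iter=500):
    """Two-step IST: each new iterate combines the two previous estimates
    with a soft-thresholded gradient (Landweber) step."""
    x_prev = np.zeros(A.shape[1])
    x = soft(A.T @ y, lam)                         # one-step (IST) warm start
    for _ in range(n_iter):
        step = soft(x + A.T @ (y - A @ x), lam)    # thresholded gradient step
        x, x_prev = (1 - alpha) * x_prev + (alpha - beta) * x + beta * step, x
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 50, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
A = Q.T                                            # orthonormal rows, spectral norm 1
y = A @ x_true                                     # compressive measurements
x_hat = twist(y, A, lam=0.02)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

With α = β = 1 the update reduces to plain IST; the two-step form (α, β tuned from spectral bounds in the TwIST literature) mainly accelerates convergence on ill-conditioned problems.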

  20. Conoscopic holography for image registration: a feasibility study

    NASA Astrophysics Data System (ADS)

    Lathrop, Ray A.; Cheng, Tiffany T.; Webster, Robert J., III

    2009-02-01

    Preoperative image data can facilitate intrasurgical guidance by revealing interior features of opaque tissues, provided the image data can be accurately registered to the physical patient. Registration is challenging in organs that are deformable and lack features suitable for use as alignment fiducials (e.g. liver, kidneys, etc.). However, provided intraoperative sensing of surface contours can be accomplished, a variety of rigid and deformable 3D surface registration techniques become applicable. In this paper, we evaluate the feasibility of conoscopic holography as a new method to sense organ surface shape. We also describe potential advantages of conoscopic holography, including the promise of replacing open surgery with a laparoscopic approach. Our feasibility study investigated the use of a tracked off-the-shelf conoscopic holography unit to perform surface scans on several types of biological and synthetic phantom tissues. After first exploring the baseline accuracy and repeatability of distance measurements, we performed a number of surface scan experiments on the phantom and ex vivo tissues with a variety of surface properties and shapes. These indicate that conoscopic holography is capable of generating surface point clouds of at least comparable (and perhaps eventually improved) accuracy in comparison to published experimental laser triangulation-based surface scanning results.
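
    Once a conoscopic scan yields a surface point cloud, rigid registration to the preoperative model can use, for example, the SVD-based Kabsch solution. This is a generic sketch of that standard technique, not an algorithm taken from the paper; the point cloud and transform are synthetic:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (Kabsch): find R, t with Q ~ P @ R.T + t."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

rng = np.random.default_rng(2)
P = rng.standard_normal((200, 3))              # "preoperative" surface points
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
Q = P @ R_true.T + t_true                      # "intraoperative" scan of the same surface
R, t = rigid_register(P, Q)
rms = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
print(f"registration RMS error: {rms:.2e}")
```

In practice correspondences are unknown, so such a closed-form step would sit inside an iterative-closest-point loop, and deformable organs would require the non-rigid methods the paper alludes to.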

  1. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on the Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approaching area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approaching procedure, the flight navigation information is linked to the database. The flight approaching area view can be dynamically displayed according to the designed flight procedure. The flight approaching area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the flight destination approaching area. Using this system in pilots' preflight preparation, the aircrew can obtain more vivid information about the flight destination approaching area. This system can improve an aviator's self-confidence before carrying out a flight mission; accordingly, flight safety is improved. This system is also useful for validating the visual flight procedure design and aids flight procedure design.

  2. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of remote sensing satellite sensors available for Earth observation have grown significantly. The amount of available multi-sensor images, along with their increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST) the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. In the meantime the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with a vast number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolution, and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing, and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for available images, application parameters, and desired information, and will process this input to produce a workflow for quickly obtaining the best results. It will optimize data and image fusion techniques, and it will provide an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.

  3. Remote sensing, imaging, and signal engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brase, J.M.

    1993-03-01

    This report discusses the Remote Sensing, Imaging, and Signal Engineering (RISE) thrust area, which has been very active in working to define new directions. Signal and image processing have always been important support for existing programs at Lawrence Livermore National Laboratory (LLNL), but now these technologies are becoming central to the formation of new programs. Exciting new applications such as high-resolution telescopes, radar remote sensing, and advanced medical imaging are allowing us to participate in the development of new programs.

  4. Nonlinear Photonic Systems for V- and W-Band Antenna Remoting Applications

    DTIC Science & Technology

    2016-10-22

    Optical carriers are used to deliver microwave signals through fibers to remote areas for commercial, academic, and military wireless sensing, imaging, and detection applications.

  5. First results of ground-based LWIR hyperspectral imaging remote gas detection

    NASA Astrophysics Data System (ADS)

    Zheng, Wei-jian; Lei, Zheng-gang; Yu, Chun-chao; Wang, Hai-yang; Fu, Yan-peng; Liao, Ning-fang; Su, Jun-hong

    2014-11-01

    Recent progress in ground-based long-wave infrared (LWIR) remote sensing is presented. LWIR hyperspectral imaging using windowing spatial and temporal modulation Fourier spectroscopy, together with the results of outdoor ether gas detection, verifies the features and technical approach of LWIR hyperspectral imaging remote sensing. It provides a new technical means for ground-based gas remote sensing.

  6. Real time automated inspection

    DOEpatents

    Fant, Karl M.; Fundakowski, Richard A.; Levitt, Tod S.; Overland, John E.; Suresh, Bindinganavle R.; Ulrich, Franz W.

    1985-01-01

    A method and apparatus relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges are segmented out by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections.
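
    The scan-line pipeline above (edge enhancement, thresholding, interval extraction) can be sketched in a few lines. An absolute first-difference filter stands in for the patent's edge-enhancement stage, and the scan line and threshold are synthetic:

```python
import numpy as np

def edge_enhance(scan_line):
    """Stand-in edge enhancement: absolute first difference along the scan line."""
    return np.abs(np.diff(scan_line.astype(float)))

def edge_intervals(scan_line, thresh):
    """Threshold the enhanced line and return half-open [start, stop)
    intervals where the edge response exceeds the threshold."""
    mask = edge_enhance(scan_line) > thresh
    edges = np.flatnonzero(np.diff(mask.astype(int)))
    if mask[0]:
        edges = np.r_[-1, edges]             # interval open at the line start
    if mask[-1]:
        edges = np.r_[edges, mask.size - 1]  # interval open at the line end
    return [(int(s) + 1, int(e) + 1) for s, e in zip(edges[0::2], edges[1::2])]

# synthetic scan line: uniform slab surface with a dark crack at pixels 40-45
line = np.full(100, 200, dtype=np.uint8)
line[40:46] = 60
intervals = edge_intervals(line, thresh=50)
print(intervals)                             # two edge responses bracket the crack
```

In the patent, such per-line intervals are then matched and tracked across successive scan lines ("bin tracking") to segment out 2-D objects, whose features feed the defect classifier.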

  7. Generating land cover boundaries from remotely sensed data using object-based image analysis: overview and epidemiological application

    PubMed Central

    Maxwell, Susan K.

    2010-01-01

    Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. PMID:21135917

  8. Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.

    2017-09-01

    It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach can obtain accurate and highly detailed images through turbulent media. The processing algorithm also requires far fewer iteration steps in comparison with conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.
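
    The forward prediction claimed above (sharp image plus sensed wavefront gives the blurred image) can be sketched with a standard Fraunhofer point-spread-function model. The pupil size, defocus-like aberration, and test scene are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def psf_from_wavefront(phase, pupil):
    """Incoherent point-spread function from a pupil-plane wavefront
    (Fraunhofer model: PSF = |FFT of the complex pupil field|^2)."""
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

def predict_blur(sharp, psf):
    """Predict the turbulence-blurred image as a circular convolution via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(np.fft.ifftshift(psf))))

n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r2 = (x**2 + y**2) / (n // 4) ** 2
pupil = (r2 < 1.0).astype(float)             # circular aperture
phase = 2.0 * r2 * pupil                     # a simple defocus-like aberration (radians)
psf = psf_from_wavefront(phase, pupil)

sharp = np.zeros((n, n))
sharp[20:44, 30:34] = 1.0                    # a bright bar as the "ground truth" scene
blurred = predict_blur(sharp, psf)
print(f"flux before/after blur: {sharp.sum():.1f} / {blurred.sum():.1f}")
```

Inverting either arrow, recovering the sharp image from the blur and PSF, or the wavefront from the sharp and blurred images, is the deconvolution/phase-retrieval side of the hybrid scheme.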

  9. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications

    NASA Astrophysics Data System (ADS)

    Paramanandham, Nirmala; Rajendiran, Kishore

    2018-01-01

    A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver the required information that cannot be delivered by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, which integrates the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factor. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. An inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
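
    The coefficient-domain blending step can be sketched as below. The PSO search is omitted (a fixed weight w stands in for the optimized weighting factor), and the random arrays stand in for registered visible/infrared frames:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix; X_dct = C @ X @ C.T for an n x n block."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def fuse_dct(visible, infrared, w):
    """Fuse two registered square images by weighted averaging of DCT coefficients."""
    C = dct_matrix(visible.shape[0])
    d_vis = C @ visible @ C.T
    d_ir = C @ infrared @ C.T
    return C.T @ (w * d_vis + (1.0 - w) * d_ir) @ C   # inverse DCT of the blend

rng = np.random.default_rng(3)
vis = rng.random((16, 16))                  # stand-in for the visible frame
ir = rng.random((16, 16))                   # stand-in for the infrared frame
fused = fuse_dct(vis, ir, w=0.6)            # w would come from the PSO search
print("fused image range:", round(float(fused.min()), 3), round(float(fused.max()), 3))
```

In the full method, PSO evaluates candidate weights against fusion-quality metrics (such as the entropy and spatial frequency mentioned above) to pick w, and histogram equalization is applied afterwards.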

  10. Direct Imaging of a Zero-Field Target Skyrmion and Its Polarity Switch in a Chiral Magnetic Nanodisk

    NASA Astrophysics Data System (ADS)

    Zheng, Fengshan; Li, Hang; Wang, Shasha; Song, Dongsheng; Jin, Chiming; Wei, Wenshen; Kovács, András; Zang, Jiadong; Tian, Mingliang; Zhang, Yuheng; Du, Haifeng; Dunin-Borkowski, Rafal E.

    2017-11-01

    A target Skyrmion is a flux-closed spin texture that has twofold degeneracy and is promising as a binary state in next generation universal memories. Although its formation in nanopatterned chiral magnets has been predicted, its observation has remained challenging. Here, we use off-axis electron holography to record images of target Skyrmions in a 160-nm-diameter nanodisk of the chiral magnet FeGe. We compare experimental measurements with numerical simulations, demonstrate switching between two stable degenerate target Skyrmion ground states that have opposite polarities and rotation senses, and discuss the observed switching mechanism.

  11. Landsat: A Global Land-Imaging Project

    USGS Publications Warehouse

    Headley, Rachel

    2010-01-01

    Across nearly four decades since 1972, Landsat satellites continuously have acquired space-based images of the Earth's land surface, coastal shallows, and coral reefs. The Landsat Program, a joint effort of the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA), was established to routinely gather land imagery from space; consequently, NASA develops remote-sensing instruments and spacecraft, then launches and validates the satellites. The USGS then assumes ownership and operation of the satellites, in addition to managing all ground-data reception, archiving, product generation, and distribution. The result of this program is a visible, long-term record of natural and human-induced changes on the global landscape.

  12. Landsat: a global land imaging program

    USGS Publications Warehouse

    Byrnes, Raymond A.

    2012-01-01

    Landsat satellites have continuously acquired space-based images of the Earth's land surface, coastal shallows, and coral reefs across four decades. The Landsat Program, a joint effort of the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA), was established to routinely gather land imagery from space. In practice, NASA develops remote-sensing instruments and spacecraft, launches satellites, and validates their performance. The USGS then assumes ownership and operation of the satellites, in addition to managing all ground-data reception, archiving, product generation, and distribution. The result of this program is a visible, long-term record of natural and human-induced changes on the global landscape.

  13. Improved damage imaging in aerospace structures using a piezoceramic hybrid pin-force wave generation model

    NASA Astrophysics Data System (ADS)

    Ostiguy, Pierre-Claude; Quaegebeur, Nicolas; Masson, Patrice

    2014-03-01

    In this study, a correlation-based imaging technique called "Excitelet" is used to monitor an aerospace-grade aluminum plate representative of an aircraft component. The principle is based on ultrasonic guided wave generation and sensing using three piezoceramic (PZT) transducers, and on the measurement of reflections induced by potential defects. The method uses a propagation model to correlate measured signals with a bank of signals, and imaging is performed using a round-robin procedure (Full-Matrix Capture). The formulation compares two models of the complex transducer dynamics: one where the shear stress at the tip of the PZT is considered to vary as a function of the generated frequency, and one where the PZT is discretized in order to consider the shear distribution under the PZT. This method accounts for the transducer dynamics and finite dimensions, the multi-modal and dispersive characteristics of the material, and the complex interactions between guided waves and damage. Experimental validation has been conducted on an aerospace-grade aluminum joint instrumented with three circular PZTs of 10 mm diameter. A magnet, acting as a reflector, is used to simulate a local reflection in the structure. It is demonstrated that the defect can be accurately detected and localized. The two proposed models are compared to the classical pin-force model using narrow- and broadband excitations. The results demonstrate the potential of the proposed imaging technique for damage monitoring of aerospace structures using improved models for guided wave generation and propagation.
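
    The correlation step at the heart of such a technique can be sketched as follows: a measured signal is compared against a bank of model-predicted signals and the best-matching candidate is selected (a minimal illustration with synthetic sinusoids; all names and signals are hypothetical, not the authors' code):

```python
import numpy as np

def best_match(measured, bank):
    """Return the index of the bank signal with the highest normalized
    correlation to the measured signal -- the core of correlation-based
    (matched-filter) imaging."""
    m = (measured - measured.mean()) / measured.std()
    scores = []
    for s in bank:
        sn = (s - s.mean()) / s.std()
        scores.append(np.dot(m, sn) / len(m))   # normalized correlation
    return int(np.argmax(scores))

# Bank of three candidate signals; the measurement matches the second one
t = np.linspace(0.0, 1.0, 200)
bank = [np.sin(2 * np.pi * f * t) for f in (5.0, 10.0, 20.0)]
rng = np.random.default_rng(0)
measured = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.normal(size=t.size)
idx = best_match(measured, bank)
```

In the actual method each bank entry corresponds to a candidate defect location, so the argmax over correlations produces a pixel intensity in the damage image rather than a single index.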

  14. Hazard avoidance via descent images for safe landing

    NASA Astrophysics Data System (ADS)

    Yan, Ruicheng; Cao, Zhiguo; Zhu, Lei; Fang, Zhiwen

    2013-10-01

    In planetary or lunar landing missions, hazard avoidance is critical for landing safety. It is therefore very important to correctly detect hazards and effectively find a safe landing area during the last stage of descent. In this paper, we propose a passive-sensing-based HDA (hazard detection and avoidance) approach using descent images to lower the landing risk. In the hazard detection stage, a statistical probability model based on hazard similarity is adopted to evaluate the image and detect hazardous areas, so that a binary hazard image can be generated. Afterwards, a safety coefficient, which jointly utilizes the proportion of hazards in the local region and the hazard distribution inside it, is proposed to find potential regions with fewer hazards in the binary hazard image. By using the safety coefficient in a coarse-to-fine procedure and combining it with the local ISD (intensity standard deviation) measure, the safe landing area is determined. The algorithm is evaluated and verified with many simulated downward-looking descent images rendered from lunar orbital satellite images.
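
    The local-hazard-proportion part of such a safety coefficient can be sketched on a binary hazard image (the full coefficient in the paper also weighs the hazard distribution inside the region; the function and toy map below are hypothetical):

```python
import numpy as np

def hazard_fraction(hazard_map, r):
    """Fraction of hazardous pixels in a (2r+1)x(2r+1) window around each
    pixel of a binary hazard map (1 = hazard) -- a crude stand-in for the
    local-proportion term of a safety coefficient."""
    h, w = hazard_map.shape
    frac = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            win = hazard_map[max(0, i - r):i + r + 1,
                             max(0, j - r):j + r + 1]
            frac[i, j] = win.mean()
    return frac

hmap = np.zeros((5, 5), dtype=int)
hmap[0, :] = 1                       # hazards along the top row (e.g. rocks)
frac = hazard_fraction(hmap, r=1)
safest = np.unravel_index(np.argmin(frac), frac.shape)  # lowest hazard load
```

Pixels far from the hazardous row get a fraction of zero, so the candidate landing site is driven away from the hazards; a coarse-to-fine search would repeat this at decreasing window sizes.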

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schalkoff, R.J.

    This report summarizes work after 4 years of a 3-year project (a no-cost extension of the above-referenced project for a period of 12 months was granted). The fourth generation of a vision sensing head for geometric and photometric scene sensing has been built and tested. Estimation algorithms for automatic sensor calibration updating under robot motion have been developed and tested. We have modified the geometry extraction component of the rendering pipeline. Laser scanning now produces highly accurate points on segmented curves. These point-curves are input to a NURBS (non-uniform rational B-spline) skinning procedure to produce interpolating surface segments. The NURBS formulation includes quadrics as a sub-class; thus, this formulation allows much greater flexibility without the attendant instability of generating an entire quadric surface. We have also implemented correction for diffuse lighting and specular effects. The QRobot joint-level control was extended to a complete semi-autonomous robot control system for D and D operations. The imaging and VR subsystems have been integrated and tested.

  16. A change detection method for remote sensing image based on LBP and SURF feature

    NASA Astrophysics Data System (ADS)

    Hu, Lei; Yang, Hao; Li, Jin; Zhang, Yun

    2018-04-01

    Finding changes in multi-temporal remote sensing images is important in many image applications. Because of the influence of climate and illumination, the texture of ground objects is more stable than their gray levels in high-resolution remote sensing images, and the texture features of Local Binary Patterns (LBP) and Speeded Up Robust Features (SURF) offer outstanding extraction speed and illumination invariance. A change detection method for matched remote sensing image pairs is presented: after dividing the images into blocks, it compares block similarity using LBP and SURF to label each block as changed or unchanged, and region growing is then adopted to process the block edge zones. The experimental results show that the method can tolerate some illumination change and slight texture change of the ground objects.
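
    A basic 8-neighbour LBP operator of the kind used for the texture comparison can be sketched as follows (a straightforward textbook variant, not the authors' implementation):

```python
import numpy as np

def lbp_image(img):
    """8-neighbour Local Binary Pattern code for each interior pixel:
    each neighbour >= centre contributes one bit, packed clockwise
    into a byte.  Block-wise histograms of these codes are what gets
    compared between the two dates."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= c:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes

# A perfectly flat patch: every neighbour ties the centre, so all bits set
flat = np.full((3, 3), 7, dtype=np.uint8)
codes = lbp_image(flat)
```

Because the code depends only on sign comparisons against the centre pixel, a global brightness shift leaves it unchanged, which is the illumination invariance the abstract relies on.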

  17. Cloud masking and removal in remote sensing image time series

    NASA Astrophysics Data System (ADS)

    Gómez-Chova, Luis; Amorós-López, Julia; Mateo-García, Gonzalo; Muñoz-Marí, Jordi; Camps-Valls, Gustau

    2017-01-01

    Automatic cloud masking of Earth observation images is one of the first required steps in optical remote sensing data processing, since operational use and product generation from satellite image time series can be hampered by undetected clouds. The high temporal revisit of current and forthcoming missions and the scarcity of labeled data force us to cast cloud screening as an unsupervised change detection problem in the temporal domain. We introduce a cloud screening method based on detecting abrupt changes along the time dimension. The main assumption is that image time series follow smooth variations over land (background), so abrupt changes will be mainly due to the presence of clouds. The method estimates the background surface changes using the information in the time series. In particular, we propose linear and nonlinear least squares regression algorithms that minimize both the prediction and the estimation error simultaneously. Significant differences between the image of interest and the estimated background are then identified as clouds. The use of kernel methods generalizes the algorithm to account for higher-order (nonlinear) feature relations. After the proposed cloud masking and cloud removal, cloud-free time series at high spatial resolution can be used to better monitor land cover dynamics and to generate more elaborate products. The method is tested on a dataset of 5-day-revisit SPOT-4 time series at high resolution and on Landsat-8 time series. Experimental results show that the proposed method yields more accurate cloud masks than state-of-the-art approaches typically used in operational settings. In addition, the algorithm has been implemented in the Google Earth Engine platform, which gives access to the full Landsat-8 catalog and to a parallel distributed platform, extending its applicability to a global planetary scale.
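
    The core idea, predicting each pixel from a smooth temporal fit and flagging abrupt positive deviations as clouds, can be sketched with an ordinary linear least squares background model (the paper also uses nonlinear kernel variants; function names, toy data, and the threshold below are illustrative assumptions):

```python
import numpy as np

def flag_clouds(series, new_obs, k=3.0):
    """Fit a per-pixel linear trend to a cloud-free time series (T x H x W),
    extrapolate it one step ahead, and flag pixels of the new image that lie
    more than k residual standard deviations above the prediction
    (clouds typically brighten optical imagery)."""
    T, H, W = series.shape
    t = np.arange(T, dtype=float)
    A = np.stack([t, np.ones(T)], axis=1)                 # design matrix
    coef, *_ = np.linalg.lstsq(A, series.reshape(T, -1), rcond=None)
    resid = series.reshape(T, -1) - A @ coef
    resid_std = resid.std(axis=0).reshape(H, W) + 1e-9    # floor avoids /0
    pred_new = (coef[0] * T + coef[1]).reshape(H, W)      # one step ahead
    return (new_obs - pred_new) > k * resid_std

# Synthetic series with a slow linear brightening of the land surface
trend = 0.2 + 0.001 * np.arange(10)
series = np.broadcast_to(trend[:, None, None], (10, 4, 4)).copy()
new = np.full((4, 4), 0.21)   # follows the smooth trend -> not flagged
new[1, 1] = 0.9               # abrupt bright pixel -> flagged as cloud
mask = flag_clouds(series, new)
```

Pixels that continue the smooth background evolution pass through, while the abrupt brightening is isolated, which is the behaviour the unsupervised change detection formulation relies on.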

  18. New Tools for Viewing Spectrally and Temporally-Rich Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Bradley, E. S.; Toomey, M. P.; Roberts, D. A.; Still, C. J.

    2010-12-01

    High-frequency, temporally extensive remote sensing datasets (GOES: 30 minutes; Santa Cruz Island webcam: nearly 5 years at 10-minute intervals) and airborne imaging spectrometry (AVIRIS, with 224 spectral bands) present exciting opportunities for education, synthesis, and analysis. However, the large file volume and size can make holistic review and exploration difficult. In this research, we explore two options for visualization: (1) a web-based portal for time-series analysis, PanOpt, and (2) Google Earth-based timestamped image overlays. PanOpt is an interactive website (http://zulu.geog.ucsb.edu/panopt/) which integrates high-frequency (GOES) and multispectral (MODIS) satellite imagery with ground-based webcam repeat photography. Side-by-side comparison of satellite imagery with webcam images supports analysis of atmospheric and environmental phenomena. In this proof of concept, we have integrated four years of imagery for a multi-view FogCam on Santa Cruz Island off the coast of Southern California with two years of GOES-11 and four years of MODIS Aqua imagery subsets for the area (14,000 km2). From the PHP-based website, users can search the data (date, time of day, etc.), specify the timestep and display size, and then view the image stack as animations or in matrix form. Extracted metrics for regions of interest (ROIs) can be viewed in different formats, including time-series and scatter plots. Through click and mouseover actions on the hyperlink-enabled data points, users can view the corresponding images. This directly melds the quantitative and qualitative aspects and could be particularly effective both for education and for anomaly interpretation. We have also extended this project to Google Earth with timestamped GOES and MODIS image overlays, which can be controlled using the temporal slider and linked to a screen chart of ancillary meteorological data.
The automated ENVI/IDL script for generating KMZ overlays was also applied to generate same-day visualizations of AVIRIS acquisitions as part of the Gulf of Mexico oil spill response. This supports location-focused imagery review and synthesis, which is critical for successfully imaging moving targets such as oil slicks.

  19. Analysis of geologic terrain models for determination of optimum SAR sensor configuration and optimum information extraction for exploration of global non-renewable resources. Pilot study: Arkansas Remote Sensing Laboratory, part 1, part 2, and part 3

    NASA Technical Reports Server (NTRS)

    Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.; Stiles, J. A.; Frost, F. S.; Shanmugam, K. S.; Smith, S. A.; Narayanan, V.; Holtzman, J. C. (Principal Investigator)

    1982-01-01

    Computer-generated radar simulations and mathematical geologic terrain models were used to establish the optimum radar sensor operating parameters for geologic research. An initial set of mathematical geologic terrain models was created for three basic landforms and families of simulated radar images were prepared from these models for numerous interacting sensor, platform, and terrain variables. The tradeoffs between the various sensor parameters and the quantity and quality of the extractable geologic data were investigated as well as the development of automated techniques of digital SAR image analysis. Initial work on a texture analysis of SEASAT SAR imagery is reported. Computer-generated radar simulations are shown for combinations of two geologic models and three SAR angles of incidence.

  20. Color display and encryption with a plasmonic polarizing metamirror

    NASA Astrophysics Data System (ADS)

    Song, Maowen; Li, Xiong; Pu, Mingbo; Guo, Yinghui; Liu, Kaipeng; Yu, Honglin; Ma, Xiaoliang; Luo, Xiangang

    2018-01-01

    Structural colors emerge when a particular wavelength range is filtered out from a broadband light source. They are regarded as a valuable platform for color display and digital imaging due to the benefits of environmental friendliness, higher visibility, and durability. However, current devices capable of generating colors are all based on direct transmission or reflection; material loss, thick configurations, and the lack of tunability hinder their transition to practical applications. In this paper, a novel mechanism that generates high-purity colors by photon spin restoration on an ultrashallow plasmonic grating is proposed. We fabricated the sample by interference lithography and experimentally observed full color display, tunable color logo imaging, and chromatic sensing. The unique combination of high efficiency, high-purity colors, tunable chromatic display, ultrathin structure, and ease of fabrication makes this design an easy way to bridge the gap between theoretical investigations and daily-life applications.

  1. DSPACE hardware architecture for on-board real-time image/video processing in European space missions

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Donati, Massimiliano; Fanucci, Luca; Odendahl, Maximilian; Leupers, Reiner; Errico, Walter

    2013-02-01

    On-board data processing is a vital task for any satellite or spacecraft because the sensed data must be processed before being sent to Earth in order to use the bandwidth to the ground station effectively. In recent years the amount of sensing data collected by scientific and commercial space missions has increased significantly, while the available downlink bandwidth has remained comparatively stable. The increasing demand for on-board real-time processing capabilities represents one of the critical issues in forthcoming European missions. Ever faster signal and image processing algorithms are required to accomplish planetary observation, surveillance, Synthetic Aperture Radar imaging, and telecommunications. The only available space-qualified Digital Signal Processor (DSP) free of International Traffic in Arms Regulations (ITAR) restrictions offers inadequate performance, so the need for a next-generation European DSP is well known to the space community. The DSPACE space-qualified DSP architecture fills the gap between the computational requirements and the available devices. It leverages a pipelined and massively parallel core based on the Very Long Instruction Word (VLIW) paradigm, with 64 registers and 8 operational units, along with cache memories, memory controllers, and SpaceWire interfaces. Both the synthesizable VHDL and the software development tools are generated from the LISA high-level model. A Xilinx XC7K325T FPGA was chosen to realize a CompactPCI demonstrator board. Finally, first synthesis results on CMOS standard-cell technology (ASIC, 180 nm) show an area of around 380 kgates and a peak performance of 1000 MIPS and 750 MFLOPS at 125 MHz.

  2. Frequency-doubled vertical-external-cavity surface-emitting laser

    DOEpatents

    Raymond, Thomas D.; Alford, William J.; Crawford, Mary H.; Allerman, Andrew A.

    2002-01-01

    A frequency-doubled semiconductor vertical-external-cavity surface-emitting laser (VECSEL) is disclosed for generating light at a wavelength in the range of 300-550 nanometers. The VECSEL includes a semiconductor multi-quantum-well active region that is electrically or optically pumped to generate lasing at a fundamental wavelength in the range of 600-1100 nanometers. An intracavity nonlinear frequency-doubling crystal then converts the fundamental lasing into a second-harmonic output beam. With optical pumping at 330 milliwatts from a semiconductor diode pump laser, about 5 milliwatts or more of blue light can be generated at 490 nm. The device has applications in high-density optical data storage and retrieval, laser printing, optical image projection, chemical sensing, materials processing, and optical metrology.

  3. Assessing diversity of prairie plants using remote sensing

    NASA Astrophysics Data System (ADS)

    Gamon, J. A.; Wang, R.

    2017-12-01

    Biodiversity loss endangers ecosystem services and is considered a global change that may generate unacceptable environmental consequences for the Earth system. Global biodiversity observations are needed to provide a better understanding of biodiversity-ecosystem services relationships and a stronger foundation for conserving the Earth's biodiversity. While remote sensing metrics have been applied to estimate α biodiversity directly through optical diversity, a better understanding of the mechanisms behind the optical diversity-biodiversity relationship is needed. We designed a series of experiments at the Cedar Creek Ecosystem Science Reserve, MN, to investigate the scale dependence of optical diversity and to explore how species richness, evenness, and composition affect it. We collected hyperspectral reflectance spectra of 16 prairie species using both a full-range field spectrometer fitted with a leaf clip and an imaging spectrometer carried by a tram system, simulating plot-level images with different species richness, evenness, and composition. Two indicators of spectral diversity were explored: the coefficient of variation (CV) of spectral reflectance in space, and spectral classification using Partial Least Squares Discriminant Analysis (PLS-DA). Our results showed that the sampling method (leaf-clip-derived vs. image-derived data) affected the optical diversity estimate. Both optical diversity indices were affected by species richness and evenness (P<0.001 in each case). At fine spatial scales, species composition also had a substantial influence on optical diversity. CV was sensitive to the influence of background soil, but the spectral classification method was insensitive to background. These results provide a critical foundation for assessing biodiversity using imaging spectrometry, and these findings can be used to guide regional studies of biodiversity estimation using high spatial and spectral resolution remote sensing.
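
    The CV-based optical diversity index can be sketched as follows (a minimal per-band coefficient of variation averaged over wavelengths; the function name and toy spectra are hypothetical stand-ins for real plot-level imagery):

```python
import numpy as np

def optical_cv(pixels):
    """Mean coefficient of variation (std/mean) across pixel spectra,
    computed per wavelength band and then averaged -- one simple
    optical-diversity index: more spectral variation, higher diversity."""
    std = pixels.std(axis=0)
    mean = pixels.mean(axis=0)
    return float(np.mean(std / mean))

# Two toy "plots" of 4 pixel spectra x 3 bands
uniform = np.array([[0.2, 0.4, 0.6]] * 4)             # one "species"
mixed = np.array([[0.2, 0.4, 0.6], [0.1, 0.5, 0.3],
                  [0.3, 0.2, 0.7], [0.4, 0.6, 0.1]])  # four "species"
cv_uniform = optical_cv(uniform)
cv_mixed = optical_cv(mixed)
```

A monoculture-like plot yields a CV of zero while the mixed plot does not, mirroring the richness effect; as the abstract notes, bright or dark soil pixels would inflate this index, which is why the classification-based indicator is more robust to background.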

  4. Fluorescence enhancement of photoswitchable metal ion sensors

    NASA Astrophysics Data System (ADS)

    Sylvia, Georgina; Heng, Sabrina; Abell, Andrew D.

    2016-12-01

    Spiropyran-based fluorescence sensors are an ideal target for intracellular metal ion sensing, due to their biocompatibility, red emission frequency and photo-controlled reversible analyte binding for continuous signal monitoring. However, increasing the brightness of spiropyran-based sensors would extend their sensing capability for live-cell imaging. In this work we look to enhance the fluorescence of spiropyran-based sensors, by incorporating an additional fluorophore into the sensor design. We report a 5-membered monoazacrown bearing spiropyran with metal ion specificity, modified to incorporate the pyrene fluorophore. The effect of N-indole pyrene modification on the behavior of the spiropyran molecule is explored, with absorbance and fluorescence emission characterization. This first generation sensor provides an insight into fluorescence-enhancement of spiropyran molecules.

  5. Single-scan patient-specific scatter correction in computed tomography using peripheral detection of scatter and compressed sensing scatter retrieval

    PubMed Central

    Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.

    2013-01-01

    Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to acquire image and patient-specific scatter information simultaneously in a single scan, in conjunction with a proposed compressed-sensing scatter recovery technique to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV), followed by measurement-based compressed-sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of projection data in the unblocked central region and of scatter data in the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model that combines a scatter interpolation method and a scatter convolution model is estimated using the scatter distribution acquired in the boundary regions. With the hybrid scatter estimation model, compressed-sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme.
Results: The scatter shading artifacts were markedly suppressed in the reconstructed images using the proposed method. On the Catphan©504 phantom, the proposed method reduced the CT number error to 13 Hounsfield units, 10% of that without scatter correction, and increased the image contrast by a factor of 2 in high-contrast regions. On the anthropomorphic phantom, the spatial nonuniformity decreased from 10.8% to 6.8% after correction. Conclusions: A novel scatter correction method, enabling unobstructed acquisition of the high-frequency image data and concurrent detection of the patient-specific low-frequency scatter data at the edges of the FOV, is proposed and validated in this work. Relative to blocker-based techniques, rather than obstructing the central portion of the FOV, which degrades and limits the image reconstruction, compressed sensing is used to recover the scatter from its detection at the periphery of the FOV, enabling the highest-quality reconstruction in the central region and robust patient-specific scatter correction. PMID:23298098
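
    The L1-on-DCT penalty used here has a simple closed-form proximal step, soft-thresholding of the DCT coefficients, which exploits the fact that scatter is a smooth, low-frequency signal. The sketch below shows that step alone (an illustrative denoising of a noisy scatter map, not the full measurement-constrained optimization of the paper; all names and data are hypothetical):

```python
import numpy as np
from scipy.fft import dctn, idctn

def sparsify_scatter(scatter, lam):
    """Soft-threshold the DCT coefficients of a scatter map: the proximal
    operator of an L1 penalty on the DCT, which keeps the few large
    low-frequency coefficients and zeroes the rest."""
    c = dctn(scatter, norm="ortho")
    c_thr = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
    return idctn(c_thr, norm="ortho")

# A smooth "true" scatter map corrupted by detector noise
rng = np.random.default_rng(0)
true = np.outer(np.hanning(16), np.hanning(16))   # low-frequency bump
noisy = true + 0.05 * rng.normal(size=true.shape)
est = sparsify_scatter(noisy, lam=0.1)
```

Because the smooth bump concentrates its energy in a handful of low-frequency DCT coefficients, thresholding removes most of the broadband noise while keeping the scatter shape, which is the rationale for the L1-DCT penalty in the recovery step.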

  6. LORAKS makes better SENSE: Phase-constrained partial fourier SENSE reconstruction without phase calibration.

    PubMed

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P

    2017-03-01

    Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely used calibrationless uniformly undersampled trajectories. Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. The SENSE-LORAKS framework provides promising new opportunities for highly accelerated MRI. Magn Reson Med 77:1021-1035, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  7. Estimating spatially distributed soil texture using time series of thermal remote sensing - a case study in central Europe

    NASA Astrophysics Data System (ADS)

    Müller, Benjamin; Bernhardt, Matthias; Jackisch, Conrad; Schulz, Karsten

    2016-09-01

    For understanding water and solute transport processes, knowledge of the respective hydraulic properties is necessary. Commonly, hydraulic parameters are estimated via pedo-transfer functions using soil texture data, to avoid cost-intensive measurements of hydraulic parameters in the laboratory. However, current soil texture information is typically available only at a coarse spatial resolution of 250 to 1000 m. Here, a method is presented to derive high-resolution (15 m) spatial topsoil texture patterns for the meso-scale Attert catchment (Luxembourg, 288 km2) from 28 images of ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) thermal remote sensing. A principal component analysis of the images reveals the most dominant thermal patterns (principal components, PCs), which are related to 212 fractional soil texture samples. Within a multiple linear regression framework, distributed soil texture information is estimated and the related uncertainties are assessed. An overall root mean squared error (RMSE) of 12.7 percentage points (pp) lies well within, and even below, the range of recent studies on soil texture estimation, while requiring sparser sample setups and a less diverse set of basic spatial input. This approach will improve the generation of spatially distributed topsoil maps, particularly for hydrologic modeling purposes, and will expand the usage of thermal remote sensing products.
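
    The PCA-plus-multiple-linear-regression chain can be sketched on synthetic data (NumPy only; the latent-factor setup and all names are hypothetical stand-ins for the 28 ASTER images and the texture samples):

```python
import numpy as np

def pca_components(X, k):
    """First k principal-component scores of row-observations X,
    via SVD of the centred data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
n_px, n_img = 200, 28
latent = rng.normal(size=(n_px, 2))              # hidden terrain factors
thermal = latent @ rng.normal(size=(2, n_img)) \
    + 0.01 * rng.normal(size=(n_px, n_img))      # 28 noisy "thermal images"
sand = 40 + 5 * latent[:, 0] - 3 * latent[:, 1]  # "texture" tied to terrain

# Relate the dominant thermal patterns to the texture samples
pcs = pca_components(thermal, k=2)
A = np.column_stack([pcs, np.ones(n_px)])        # regression design matrix
coef, *_ = np.linalg.lstsq(A, sand, rcond=None)
rmse = float(np.sqrt(np.mean((A @ coef - sand) ** 2)))
```

Because the 28 images share two dominant patterns, two PCs capture nearly all the variance and the regression recovers the texture relation with a small residual; in practice the residuals against the 212 field samples supply the uncertainty estimate.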

  8. A Self-Powered and Flexible Organometallic Halide Perovskite Photodetector with Very High Detectivity.

    PubMed

    Leung, Siu-Fung; Ho, Kang-Ting; Kung, Po-Kai; Hsiao, Vincent K S; Alshareef, Husam N; Wang, Zhong Lin; He, Jr-Hau

    2018-02-01

    Flexible and self-powered photodetectors (PDs) are highly desirable for applications in image sensing, smart building, and optical communications. In this paper, a self-powered and flexible PD based on the methylammonium lead iodide (CH3NH3PbI3) perovskite is demonstrated. Such a self-powered PD can operate even with irregular motion such as human finger tapping, which enables it to work without a bulky external power source. In addition, with a high-quality CH3NH3PbI3 perovskite thin film fabricated with solvent engineering, the PD exhibits an impressive detectivity of 1.22 × 10^13 Jones. In the self-powered voltage detection mode, it achieves a large responsivity of up to 79.4 V mW^-1 cm^-2 and a voltage response of up to ≈90%. Moreover, as the PD is made of flexible and transparent polymer films, it can operate under bending and functions at 360° of illumination. As a result, the self-powered, flexible, 360° omnidirectional perovskite PD, featuring high detectivity and responsivity along with real-world sensing capability, suggests a new direction for next-generation optical communications, sensing, and imaging applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. A case study of comparing radiometrically calibrated reflectance of an image mosaic from unmanned aerial system with that of a single image from manned aircraft over a same area

    USDA-ARS?s Scientific Manuscript database

    Although conventional high-altitude airborne remote sensing and low-altitude unmanned aerial system (UAS) based remote sensing share many commonalities, one of the major differences between the two remote sensing platforms is that the latter has much smaller image footprint. To cover the same area o...

  10. Immune systems are not just for making you feel better: they are for controlling autonomous robots

    NASA Astrophysics Data System (ADS)

    Rosenblum, Mark

    2005-05-01

    The typical algorithm for autonomous robot navigation in complex off-road environments involves building a 3D map of the robot's surroundings using a 3D sensing modality such as stereo vision or active laser scanning, and generating an instantaneous plan to navigate around hazards. Although there has been steady progress with these methods, such systems suffer from several limitations that cannot be overcome with 3D sensing and planning alone. Geometric sensing alone has no ability to distinguish between compressible and non-compressible materials. As a result, these systems have difficulty in heavily vegetated environments and require sensitivity adjustments across different terrain types. On the planning side, these systems have no ability to learn from their mistakes and avoid problematic environmental situations on subsequent encounters. We have implemented an adaptive terrain classification system based on the Artificial Immune System (AIS) computational model, which is loosely based on the biological immune system, and which combines various forms of imaging sensor inputs to produce a "feature labeled" image of the scene categorizing areas as benign or detrimental for autonomous robot navigation. Because of the qualities of the AIS computational model, the resulting system is able to learn and adapt on its own through interaction with the environment by modifying its interpretation of the sensor data. The feature-labeled results from the AIS analysis are inserted into a map and can then be used by a planner to generate a safe route to a goal point. The coupling of diverse visual cues with the malleable AIS computational model will lead to autonomous robotic ground vehicles that require less human intervention for deployment in novel environments and offer more robust operation as a result of the system's ability to improve its performance through interaction with the environment.

  11. Development and Operation of the Americas ALOS Data Node

    NASA Astrophysics Data System (ADS)

    Arko, S. A.; Marlin, R. H.; La Belle-Hamer, A. L.

    2004-12-01

    In the spring of 2005, the Japan Aerospace Exploration Agency (JAXA) will launch the next generation of advanced remote sensing satellites. The Advanced Land Observing Satellite (ALOS) includes three sensors, two visible imagers and one L-band polarimetric SAR, providing high-quality remote sensing data to the scientific and commercial communities throughout the world. Focusing on remote sensing and scientific pursuits, ALOS will image nearly the entire Earth using all three instruments during its expected three-year lifetime. These data sets offer the potential for continuing the records of older satellite missions as well as new products for the growing user community. One of the unique features of the ALOS mission is its data distribution approach: JAXA has created a worldwide cooperative data distribution network. The data nodes are NOAA/ASF representing the Americas ALOS Data Node (AADN), ESA representing the ALOS European and African Node (ADEN), Geoscience Australia representing Oceania, and JAXA representing the Asian continent. The AADN is the sole agency responsible for the archival, processing, and distribution of L0 and L1 products to users in both North and South America. In support of this mission, the AADN is currently developing a processing and distribution infrastructure to provide easy access to these data sets. Utilizing a custom, grid-based process controller and media generation system, the overall infrastructure has been designed to provide maximum throughput while requiring a minimum of operator input and maintenance. This paper will present an overview of the ALOS system, details of each sensor's capabilities, and the processing and distribution system being developed by the AADN to provide these valuable data sets to users throughout North and South America.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, J.M.

    Remote sensing allows the petroleum industry to make better and quicker interpretations of geological and environmental conditions in areas of present and future operations. Often remote sensing (including aerial photographs) is required because existing maps are out-of-date, of too small a scale, or provide only limited information. Implementing remote sensing can lead to lower project costs and reduced risk. The same satellite and airborne data can be used effectively for both geological and environmental applications. For example, earth scientists can interpret new lithologic, structural, and geomorphic information from near-infrared and radar imagery in terrains as diverse as barren desert and tropical jungle. Environmental applications with these and other imagery include establishing baselines, assessing impact by documenting changes through time, and mapping land-use, habitat, and vegetation. Higher resolution sensors provide an up-to-date overview of onshore and offshore petroleum facilities, whereas sensors capable of oblique viewing can be used to generate topographic maps. A geological application in Yemen involved merging Landsat TM and SPOT imagery to obtain exceptional lithologic discrimination. In the Congo, a topographic map to plan field operations was interpreted from overlapping radar strips. Landsat MSS and TM, SPOT, and Russian satellite images with new aerial photographs are being used in the Tengiz supergiant oil field of Kazakhstan to help establish an environmental baseline, generate a base map, locate wells, plan facilities, and support a geographical information system (GIS). In the Niger delta, Landsat TM and SPOT are being used to plan pipeline routes and seismic lines, and to monitor rapid shoreline changes and population growth. Accurate coastlines, facility locations, and shoreline types are being extracted from satellite images for use in oil spill models.

  13. The mass remote sensing image data management based on Oracle InterMedia

    NASA Astrophysics Data System (ADS)

    Zhao, Xi'an; Shi, Shaowei

    2013-07-01

    With the development of remote sensing technology, ever larger volumes of image data are being acquired, and how to apply and manage these mass image data safely and efficiently has become an urgent problem. According to the methods and characteristics of mass remote sensing image data management and application, this paper puts forward a new method that uses the Oracle Call Interface and Oracle InterMedia to store the image data, and then uses these components to realize the system's function modules. Finally, image data storage and management are successfully implemented in Visual C++ with the Oracle InterMedia component.

  14. Confocal acoustic radiation force optical coherence elastography using a ring ultrasonic transducer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Wenjuan; Department of Chemical Engineering and Materials Science, University of California, Irvine, Irvine, California 92697; Li, Rui

    2014-03-24

    We designed and developed a confocal acoustic radiation force optical coherence elastography system. A ring ultrasound transducer was used to achieve reflection-mode excitation and generate an oscillating acoustic radiation force in order to produce displacements within the tissue, which were detected using the phase-resolved optical coherence elastography method. Both phantom and human tissue tests indicate that this system is able to sense stiffness differences between samples and quantitatively map the elastic properties of materials. Our confocal setup shows great potential for point-by-point elastic imaging in vivo and for differentiating diseased tissue from normal tissue.

  15. I think therefore I am: Rest-related prefrontal cortex neural activity is involved in generating the sense of self.

    PubMed

    Gruberger, M; Levkovitz, Y; Hendler, T; Harel, E V; Harari, H; Ben Simon, E; Sharon, H; Zangen, A

    2015-05-01

    The sense of self has always been a major focus in the psychophysical debate. It has been argued that this complex ongoing internal sense cannot be explained by any physical measure and therefore substantiates a mind-body differentiation. Recently, however, neuro-imaging studies have associated self-referential spontaneous thought, a core-element of the ongoing sense of self, with synchronous neural activations during rest in the medial prefrontal cortex (PFC), as well as the medial and lateral parietal cortices. By applying deep transcranial magnetic stimulation (TMS) over human PFC before rest, we disrupted activity in this neural circuitry thereby inducing reports of lowered self-awareness and strong feelings of dissociation. This effect was not found with standard or sham TMS, or when stimulation was followed by a task instead of rest. These findings demonstrate for the first time a critical, causal role of intact rest-related PFC activity patterns in enabling integrated, enduring, self-referential mental processing. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Aerial Vehicle Surveys of other Planetary Atmospheres and Surfaces: Imaging, Remote-sensing, and Autonomy Technology Requirements

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Pisanich, Gregory; Ippolito, Corey; Alena, Rick

    2005-01-01

    The objective of this paper is to review the anticipated imaging and remote-sensing technology requirements for aerial vehicle survey missions to other planetary bodies in our Solar system that can support in-atmosphere flight. In the not-too-distant future, such planetary aerial vehicle (a.k.a. aerial explorer) exploration missions will become feasible. Imaging and remote-sensing observations will be a key objective for these missions. Accordingly, it is imperative that optimal solutions for image acquisition and real-time autonomous analysis of image data sets be developed for such vehicles.

  17. HPT: A High Spatial Resolution Multispectral Sensor for Microsatellite Remote Sensing

    PubMed Central

    Takahashi, Yukihiro; Sakamoto, Yuji; Kuwahara, Toshinori

    2018-01-01

    Although nano/microsatellites have great potential as remote sensing platforms, the spatial and spectral resolutions of an optical payload instrument are limited. In this study, a high spatial resolution multispectral sensor, the High-Precision Telescope (HPT), was developed for the RISING-2 microsatellite. The HPT has four image sensors: three in the visible region of the spectrum used for the composition of true color images, and a fourth in the near-infrared region, which employs liquid crystal tunable filter (LCTF) technology for wavelength scanning. Band-to-band image registration methods have also been developed for the HPT and implemented in the image processing procedure. The processed images were compared with other satellite images, and proven to be useful in various remote sensing applications. Thus, LCTF technology can be considered an innovative tool that is suitable for future multi/hyperspectral remote sensing by nano/microsatellites. PMID:29463022

  18. A Coarse-to-Fine Model for Airplane Detection from Large Remote Sensing Images Using a Saliency Model and Deep Learning

    NASA Astrophysics Data System (ADS)

    Song, Z. N.; Sui, H. G.

    2018-04-01

    High-resolution remote sensing images carry important strategic information, and finding time-sensitive targets such as airplanes, ships, and cars quickly is a primary concern. Most of the time, the first problem we face is how to rapidly judge whether a particular target is present in a large, arbitrary remote sensing image, rather than detecting it on a given image chip. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false-alarm rates when detecting tiny objects in large-scale images; 2) unlike traditional image retrieval, the task is not merely to compare the similarity of image blocks, but to quickly locate specific targets within a huge image. Taking airplanes as an example, this paper presents an effective method for searching for aircraft targets in large-scale optical remote sensing images. First, an improved visual attention model that combines saliency detection with a line segment detector quickly locates suspected regions in a large and complicated remote sensing image. Then, for each region, a single neural network that predicts bounding boxes and class probabilities directly from the full image in one evaluation, without a region proposal step, is adopted to search for small airplane objects. Unlike sliding-window and region-proposal-based techniques, the network sees the entire image (region) during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show that the proposed method quickly identifies airplanes in large-scale images.

  19. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    PubMed

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. As remote sensing image capture improves, with hyperspectral, high spatial resolution and high temporal resolution features, efficiently applying FFT to huge remote sensing images has become a critical step and a research hot spot in image processing. The FFT algorithm, one of the basic algorithms of image processing, can be used for stripe-noise removal, image compression, image registration, etc. The CUFFT function library is a GPU-based FFT library, while FFTW is an FFT library developed for the CPU on the PC platform and is currently the fastest CPU-based FFT function library. However, both share a common problem: once the available memory is smaller than the image, realizing the image FFT with either method fails with out-of-memory or memory-overflow errors. To address this problem, a GPU- and partitioning-technology-based Huge Remote sensing image Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the out-of-memory and memory-overflow problem is solved. Moreover, the method is validated by experiments with CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the effect of the processing and speeds it up, saving computation time and achieving sound results.
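    The partitioning idea can be sketched with the standard row-column decomposition of the 2-D FFT, which is what lets a huge transform be computed one memory-sized block at a time; the block size and the numpy stand-in for device memory are illustrative assumptions, not the HRFFT implementation itself:

```python
import numpy as np

def partitioned_fft2(image, block=64):
    """2-D FFT via the row-column decomposition, processing one partition
    of rows (then of columns) at a time.  Only a single block ever needs
    to be resident in (device) memory; here a numpy slice stands in for
    the block transferred to the GPU."""
    h, w = image.shape
    out = np.empty((h, w), dtype=complex)
    for r in range(0, h, block):                  # pass 1: 1-D FFTs along rows
        out[r:r + block] = np.fft.fft(image[r:r + block], axis=1)
    for c in range(0, w, block):                  # pass 2: 1-D FFTs along columns
        out[:, c:c + block] = np.fft.fft(out[:, c:c + block], axis=0)
    return out

img = np.random.default_rng(1).random((256, 300))
print(np.allclose(partitioned_fft2(img), np.fft.fft2(img)))   # → True
```

    Because the two passes are exactly equivalent to a monolithic 2-D FFT, the partitioned result is bit-for-bit comparable up to floating-point error, which is what makes the out-of-core approach safe for stripe-noise removal or registration.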

  20. The integrated design and archive of space-borne signal processing and compression coding

    NASA Astrophysics Data System (ADS)

    He, Qiang-min; Su, Hao-hang; Wu, Wen-bo

    2017-10-01

    With the increasing user demand for extracting information from remote sensing images, it is urgent to significantly enhance the whole system's imaging quality and imaging ability through integrated design, achieving a compact structure, low mass and higher attitude maneuverability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices. The volume, weight and power consumption of these two units are relatively large, which cannot meet the requirements of a high-mobility remote sensing camera. According to the technical requirements of high-mobility remote sensing cameras, this paper designs a space-borne integrated signal processing and compression circuit by drawing on several technologies, such as high-speed, high-density analog-digital mixed PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for research on high-mobility remote sensing cameras.

  1. Methods and potentials for using satellite image classification in school lessons

    NASA Astrophysics Data System (ADS)

    Voss, Kerstin; Goetzke, Roland; Hodam, Henryk

    2011-11-01

    The FIS project - FIS stands for Fernerkundung in Schulen (Remote Sensing in Schools) - aims at a better integration of the topic "satellite remote sensing" in school lessons. According to this, the overarching objective is to teach pupils basic knowledge and fields of application of remote sensing. Despite the growing significance of digital geomedia, the topic "remote sensing" is not broadly supported in schools. Often, the topic is reduced to a short reflection on satellite images and used only for additional illustration of issues relevant for the curriculum. Without addressing the issue of image data, this can hardly contribute to the improvement of the pupils' methodical competences. Because remote sensing covers more than simple, visual interpretation of satellite images, it is necessary to integrate remote sensing methods like preprocessing, classification and change detection. Dealing with these topics often fails because of confusing background information and the lack of easy-to-use software. Based on these insights, the FIS project created different simple analysis tools for remote sensing in school lessons, which enable teachers as well as pupils to be introduced to the topic in a structured way. This functionality as well as the fields of application of these analysis tools will be presented in detail with the help of three different classification tools for satellite image classification.

  2. Simulation of Image Performance Characteristics of the Landsat Data Continuity Mission (LDCM) Thermal Infrared Sensor (TIRS)

    NASA Technical Reports Server (NTRS)

    Schott, John; Gerace, Aaron; Brown, Scott; Gartley, Michael; Montanaro, Matthew; Reuter, Dennis C.

    2012-01-01

    The next Landsat satellite, which is scheduled for launch in early 2013, will carry two instruments: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). Significant design changes over previous Landsat instruments have been made to these sensors to potentially enhance the quality of Landsat image data. TIRS, which is the focus of this study, is a dual-band instrument that uses a push-broom style architecture to collect data. To help understand the impact of design trades during instrument build, an effort was initiated to model TIRS imagery. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool was used to produce synthetic "on-orbit" TIRS data with detailed radiometric, geometric, and digital image characteristics. This work presents several studies that used DIRSIG simulated TIRS data to test the impact of engineering performance data on image quality in an effort to determine if the image data meet specifications or, in the event that they do not, to determine if the resulting image data are still acceptable.

  3. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used to capture specific events, with the event capture realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement and joint reconstruction algorithm for these image signals is proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. Subjective visual quality and the reconstruction error rate are used to evaluate reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compression rate than the independent reconstruction algorithm.

  4. Discrimination of malignant lymphomas and leukemia using Radon transform based-higher order spectra

    NASA Astrophysics Data System (ADS)

    Luo, Yi; Celenk, Mehmet; Bejai, Prashanth

    2006-03-01

    A new algorithm that can automatically recognize and classify malignant lymphomas and leukemia is proposed in this paper. The algorithm utilizes morphological watersheds to obtain cell boundaries from cell images and isolate the cells from the surrounding background. The areas of cells are extracted from the images after background subtraction. The Radon transform and higher-order spectra (HOS) analysis are utilized as image processing tools to generate class feature vectors for cells of different types and to extract the feature vectors of test cells. The test cells' feature vectors are then compared with the known class feature vectors for a possible match by computing Euclidean distances. The cell in question is classified as belonging to one of the existing cell classes in the least-Euclidean-distance sense.
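    The final matching step is plain nearest-class Euclidean matching. A minimal sketch, with hypothetical 2-D feature vectors standing in for the Radon/HOS features:

```python
import numpy as np

def build_class_features(samples_by_class):
    """Mean feature vector per class (the 'known class feature vectors')."""
    return {label: np.mean(vecs, axis=0)
            for label, vecs in samples_by_class.items()}

def classify_min_distance(feature, class_features):
    """Assign the class whose feature vector is nearest in the
    Euclidean-distance sense, as in the paper's final matching step."""
    return min(class_features,
               key=lambda c: np.linalg.norm(feature - class_features[c]))

# hypothetical 2-D training features for two cell classes
train = {
    "lymphoma": np.array([[2.0, 8.1], [2.2, 7.9], [1.9, 8.3]]),
    "leukemia": np.array([[6.8, 3.1], [7.1, 2.9], [7.0, 3.2]]),
}
classes = build_class_features(train)
print(classify_min_distance(np.array([2.1, 8.0]), classes))  # → lymphoma
```

    Real HOS feature vectors would be much higher-dimensional, but the decision rule is identical.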

  5. Image acquisition system using on sensor compressed sampling technique

    NASA Astrophysics Data System (ADS)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
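    A compressed-sensing pipeline of this kind reduces to y = Ax with far fewer measurements than pixels, recovered by a sparse solver. The sketch below uses orthogonal matching pursuit with a random Gaussian measurement matrix; the sensor's actual on-chip measurement scheme and reconstruction method are not specified in the abstract:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit the picked columns by
    least squares."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 40, 3           # signal length, measurements, non-zeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                 # m = 40 measurements instead of n = 128 samples
x_hat = omp(A, y, k)
print(np.allclose(x_hat, x_true, atol=1e-6))   # → True
```

    The energy saving claimed in the abstract comes from the sensor transmitting only the m measurements; all the reconstruction cost lands on the receiving side.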

  6. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in object-oriented information extraction from high-resolution remote sensing images; the accuracy of thematic remote sensing information depends on this extraction. On the basis of WorldView-2 high-resolution data, and to establish an optimal segmentation parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. First, the best combination of bands and weights was determined for information extraction from the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  7. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in object-oriented information extraction from high-resolution remote sensing images; the accuracy of thematic remote sensing information depends on this extraction. On the basis of WorldView-2 high-resolution data, and to establish an optimal segmentation parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. First, the best combination of bands and weights was determined for information extraction from the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.
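    The paper's improved weighted mean-variance method is not given in the abstract; the generic idea behind mean-variance scale selection can be sketched as computing mean local variance across candidate scales and picking the scale where its relative growth flattens. Everything below (window sizes, synthetic image, flattening heuristic) is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mean_local_variance(img, scale):
    """Mean of the local variance in (scale x scale) windows: a proxy for
    how much heterogeneity a segmentation at that scale must absorb."""
    m = uniform_filter(img, scale)
    m2 = uniform_filter(img ** 2, scale)
    return float(np.mean(m2 - m ** 2))

def pick_scale(img, scales):
    """Heuristic: choose the scale at which the relative growth of mean
    local variance is smallest, i.e. where enlarging the window buys
    little additional heterogeneity."""
    lv = [mean_local_variance(img, s) for s in scales]
    roc = np.diff(lv) / np.array(lv[:-1])
    return scales[int(np.argmin(roc)) + 1], lv

rng = np.random.default_rng(0)
# synthetic image: 16-pixel blocks with distinct means plus mild noise
blocks = np.kron(rng.random((8, 8)), np.ones((16, 16)))
img = blocks + 0.05 * rng.standard_normal((128, 128))
best, lv = pick_scale(img, [3, 5, 9, 17, 33])
print(best, [round(v, 4) for v in lv])
```

    On such blocky imagery the local variance grows as windows begin to straddle object boundaries, which is the signal the mean-variance family of methods exploits.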

  8. Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest

    NASA Astrophysics Data System (ADS)

    Feng, W.; Sui, H.; Chen, X.

    2018-04-01

    Studies based on object-based image analysis (OBIA), representing the paradigm shift in change detection (CD), have achieved remarkable progress in the last decade; their aim has been to develop more intelligent interpretation and analysis methods. The prediction accuracy and performance stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and integrated forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search of interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and the regions are subjected to fuzzy c-means (FCM) clustering to obtain a pixel-level pre-classification result, which serves as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, the change possibility of each super-pixel is calculated. Furthermore, the changed and unchanged super-pixels that serve as training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF based on these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in CD accuracy, and confirm the feasibility and effectiveness of the proposed approach.
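    The difference-image step at the heart of such pipelines is change vector analysis: a per-pixel magnitude of the spectral difference, thresholded without labels. A minimal sketch using plain CVA and Otsu's threshold (the paper's RCVA and saliency guidance are more elaborate), on synthetic two-date imagery:

```python
import numpy as np

def change_magnitude(img_t1, img_t2):
    """Change vector analysis: per-pixel Euclidean norm of the spectral
    difference vector between the two dates."""
    return np.linalg.norm(img_t2.astype(float) - img_t1.astype(float), axis=-1)

def otsu_threshold(values, bins=256):
    """Otsu's method on the magnitude histogram separates 'change' from
    'no change' without labeled samples."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # class-0 probability up to each bin
    mu = np.cumsum(p * centers)          # class-0 cumulative mean
    mu_t, w1 = mu[-1], 1 - np.cumsum(p)
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    return centers[np.nanargmax(between)]

rng = np.random.default_rng(0)
t1 = rng.random((64, 64, 4))                  # 4-band image, date 1
t2 = t1 + 0.02 * rng.standard_normal((64, 64, 4))
t2[20:30, 20:30] += 0.8                       # simulated change patch
mag = change_magnitude(t1, t2)
changed = mag > otsu_threshold(mag)
print(changed[20:30, 20:30].mean(), changed[:10, :10].mean())  # → 1.0 0.0
```

    The binary map produced this way plays the role of the pixel-level pre-classification that the super-pixel RF stage then refines.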

  9. A New Graduation Algorithm for Color Balance of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Liu, X.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Pan, Q.

    2018-05-01

    To expand the field of view and obtain more data and information in remote sensing research, images often need to be mosaicked together. However, the mosaicked image often shows large color differences and a visible gap line. Based on a graduation algorithm built on trigonometric functions, this paper proposes a new algorithm of Two Quarter-rounds Curves (TQC), and uses a Gaussian filter to handle image color noise and the gap line. The experiments used Greenland data compiled in 1963 from the Declassified Intelligence Photography Project (DISP), acquired by the ARGON KH-5 satellite, and Landsat imagery of the North Gulf, China. The experimental results show that the proposed method improves the results in two respects: on the one hand, remote sensing images with large color differences become more balanced; on the other hand, the transition across the mosaic becomes smoother.
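    The TQC curves themselves are not specified in the abstract, but the underlying idea of a trigonometric transition can be illustrated with a cosine-squared feathering weight across the overlap of two strips; the strip data and overlap width below are invented for illustration:

```python
import numpy as np

def cosine_feather_blend(left, right, overlap):
    """Blend two image strips across an overlap using a cosine
    (quarter-period) weight curve, so brightness ramps smoothly from one
    image to the other instead of jumping at the seam line."""
    h, w_l = left.shape
    t = np.linspace(0.0, np.pi / 2, overlap)
    w = np.cos(t) ** 2                       # weight ramps 1 -> 0 over overlap
    out = np.empty((h, w_l + right.shape[1] - overlap))
    out[:, :w_l - overlap] = left[:, :w_l - overlap]
    out[:, w_l:] = right[:, overlap:]
    out[:, w_l - overlap:w_l] = (w * left[:, -overlap:]
                                 + (1 - w) * right[:, :overlap])
    return out

# two strips of the 'same' scene with different mean brightness
left = np.full((4, 50), 100.0)
right = np.full((4, 50), 140.0)
mosaic = cosine_feather_blend(left, right, overlap=20)
row = mosaic[0]
print(row[0], row[-1])                       # → 100.0 140.0
print(bool(np.all(np.diff(row) >= 0)))       # → True: no jump back at the seam
```

    A hard seam would show a 40-level brightness step; the trigonometric weight spreads that step over the whole overlap, which is the effect the abstract describes as a "more smooth transition".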

  10. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    NASA Astrophysics Data System (ADS)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method to collect data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images for solving real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS Open Source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are accessible transparently and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed by any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University.
The course uses the textbook "Introductory Digital Image Processing: A Remote Sensing Perspective" by John Jensen. The textbook is widely adopted in geography departments around the world for training students in digital processing of remote sensing images. In the traditional teaching setting for the course, the instructor prepares a set of sample remote sensing images to be used for the course. Commercial desktop remote sensing software, such as ERDAS, is used for students to do the lab exercises. The students have to do the exercises in the lab and can use only the simple sample images. For this specific course at GMU, we developed GeoBrain-based lab exercises. With GeoBrain, students can now explore petabytes of remote sensing images in the NASA, NOAA, and USGS data archives instead of dealing only with sample images. Students have a much more powerful computing facility available for their lab exercises. They can explore the data and do the exercises at any time and place they want, as long as they can access the Internet through a Web browser. The feedback from students is very positive about the learning experience in digital image processing with the help of GeoBrain Web processing services. The teaching/lab materials and GeoBrain services are freely available to anyone at http://www.laits.gmu.edu.

  11. Supervised classification of aerial imagery and multi-source data fusion for flood assessment

    NASA Astrophysics Data System (ADS)

    Sava, E.; Harding, L.; Cervone, G.

    2015-12-01

    Floods are among the most devastating natural hazards, and the ability to produce an accurate and timely flood assessment before, during, and after an event is critical for their mitigation and response. Remote sensing technologies have become the de-facto approach for observing the Earth and its environment. However, satellite remote sensing data are not always available. For these reasons, it is crucial to develop new techniques to produce flood assessments during and after an event. Recent advancements in fusing remote sensing with near-real-time heterogeneous datasets have allowed emergency responders to more efficiently extract increasingly precise and relevant knowledge from the available information. This research presents a fusion technique using satellite remote sensing imagery coupled with non-authoritative data such as Civil Air Patrol (CAP) imagery and tweets. A new computational methodology is proposed, based on machine learning algorithms, to automatically identify water pixels in CAP imagery. Specifically, wavelet transformations are paired with multiple classifiers, run in parallel, to build models discriminating water and non-water regions. The learned classification models are first tested against a set of control cases, and then used to automatically classify each image separately. A measure of uncertainty is computed for each pixel in an image, proportional to the number of models classifying the pixel as water. Geo-tagged tweets are continuously harvested, stored in a MongoDB database, and queried in real time. They are fused with the CAP classified data and with satellite remote sensing derived flood extents to produce comprehensive flood assessment maps. The final maps are then compared with FEMA-generated flood extents to assess their accuracy. The proposed methodology is applied to two test cases, relating to the 2013 floods in Boulder, CO, and the 2015 floods in Texas.
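    The per-pixel uncertainty measure reduces to ensemble voting: the fraction of parallel classifiers that call a pixel water. A minimal sketch with three hypothetical classifier outputs:

```python
import numpy as np

def water_vote_fraction(model_masks):
    """Per-pixel agreement of an ensemble: the fraction of models that
    label the pixel as water, usable directly as a confidence score
    (and its complement as uncertainty)."""
    return np.stack(model_masks).astype(float).mean(axis=0)

# three hypothetical classifier outputs (1 = water) over a 3x3 tile
m1 = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
m2 = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
m3 = np.array([[1, 0, 0], [1, 0, 0], [1, 0, 0]])
votes = water_vote_fraction([m1, m2, m3])
print(votes[0, 0])   # → 1.0   (all models agree: confidently water)
print(votes[1, 1])   # only one of three models votes water
```

    Pixels with split votes are exactly where fusing independent evidence, such as geo-tagged tweets, adds the most value.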

  12. Developmental Cryogenic Active Telescope Testbed, a Wavefront Sensing and Control Testbed for the Next Generation Space Telescope

    NASA Technical Reports Server (NTRS)

    Leboeuf, Claudia M.; Davila, Pamela S.; Redding, David C.; Morell, Armando; Lowman, Andrew E.; Wilson, Mark E.; Young, Eric W.; Pacini, Linda K.; Coulter, Dan R.

    1998-01-01

    As part of the technology validation strategy of the Next Generation Space Telescope (NGST), a system testbed is being developed at GSFC, in partnership with JPL and Marshall Space Flight Center (MSFC), which will include all of the component functions envisioned in an NGST active optical system. The system will include an actively controlled, segmented primary mirror; actively controlled secondary, deformable, and fast steering mirrors; wavefront sensing optics; wavefront control algorithms; a telescope simulator module; and an interferometric wavefront sensor for use in comparing the final wavefronts obtained from different tests. The Developmental Cryogenic Active Telescope Testbed (DCATT) will be implemented in three phases. Phase 1 will focus on operating the testbed at ambient temperature. During Phase 2, a cryo-capable segmented telescope will be developed and cooled to cryogenic temperature to investigate the impact on the ability to correct the wavefront and stabilize the image. In Phase 3, it is planned to incorporate industry-developed flight-like components, such as figure-controlled mirror segments, cryogenic low-hold-power actuators, or different wavefront sensing and control hardware or software. A very important element of the program is the development and subsequent validation of the integrated multidisciplinary models. The Phase 1 testbed objectives, plans, configuration, and design will be discussed.

  13. Compressive Sensing Image Sensors-Hardware Implementation

    PubMed Central

    Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram

    2013-01-01

    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123

  14. Atmospheric correction for remote sensing image based on multi-spectral information

    NASA Astrophysics Data System (ADS)

    Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen

    2018-03-01

    The light collected by spaceborne remote sensors must pass through the Earth's atmosphere. All satellite images are affected at some level by scattering and absorption from aerosols, water vapor, and particulates in the atmosphere. To generate high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the Earth through the same atmosphere, but each satellite image is affected differently because atmospheric conditions are constantly changing. A detailed physics-based radiative transfer model such as 6SV requires key ancillary information about the atmospheric conditions at acquisition time. This paper investigates the simultaneous retrieval of atmospheric radiation parameters from multi-spectral information, in order to improve surface reflectance estimates through physics-based atmospheric correction. Ancillary information on aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information using specific spectral properties, was used as input to the 6SV model. The experiments were carried out on images from Sentinel-2, which carries a Multispectral Instrument (MSI) recording in 13 spectral bands covering wavelengths from 440 nm up to 2200 nm. The results suggest that per-pixel atmospheric correction with the 6SV model, integrating AOD and TWV derived from multispectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.
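    As context for the DN-to-reflectance step mentioned above, here is a minimal sketch of a standard DN to top-of-atmosphere reflectance conversion (not the 6SV surface-reflectance retrieval itself). The gain, offset, and solar irradiance values are placeholders, not Sentinel-2 calibration constants.

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, sun_zenith_deg, d_au=1.0):
    # At-sensor radiance from a linear radiometric calibration,
    # then the usual TOA reflectance formula rho = pi*L*d^2 / (ESUN*cos(theta)).
    radiance = gain * dn + offset
    cos_theta = np.cos(np.deg2rad(sun_zenith_deg))
    return np.pi * radiance * d_au ** 2 / (esun * cos_theta)

dn = np.array([[1200.0, 1500.0], [900.0, 2000.0]])   # toy DN values
rho = dn_to_toa_reflectance(dn, gain=0.01, offset=0.0,
                            esun=1500.0, sun_zenith_deg=30.0)
```

    A full physics-based correction like 6SV would then remove the atmospheric path radiance and transmittance terms to reach surface reflectance.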

  15. Using learned under-sampling pattern for increasing speed of cardiac cine MRI based on compressive sensing principles

    NASA Astrophysics Data System (ADS)

    Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid

    2012-12-01

    This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.
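    The final reconstruction step above uses an iterative thresholding algorithm. A generic iterative soft-thresholding (ISTA) sketch for sparse recovery from under-sampled linear measurements is shown below; the random sensing matrix and synthetic sparse signal are illustrative stand-ins and do not reproduce the paper's k-space sampling or transform.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=1000):
    # Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * (A.T @ (y - A @ x)), step * lam)
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 64, 5                            # length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                  # under-sampled measurements
x_hat = ista(A, y)
```

    With enough measurements relative to the sparsity level, the thresholded iterations recover the signal closely, which is the principle the cine-MRI reconstruction relies on.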

  16. Limited-angle multi-energy CT using joint clustering prior and sparsity regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Huayu; Xing, Yuxiang

    2016-03-01

    In this article, we present an easy-to-implement multi-energy CT scanning strategy and a corresponding reconstruction method, which facilitate spectral CT imaging by improving data efficiency by a factor of the number of energy channels, without introducing the visible limited-angle artifacts caused by reducing projection views. Leveraging the structural coherence across energies, we first pre-reconstruct a prior structure image using projection data from all energy channels. Then, we perform k-means clustering on the prior image to generate a sparse dictionary representation for the image, which serves as a structure-information constraint. We combine this constraint with a conventional compressed sensing method and propose a new model which we refer to as Joint Clustering Prior and Sparsity Regularization (CPSR). CPSR is a convex problem and we solve it by the Alternating Direction Method of Multipliers (ADMM). We verify our CPSR reconstruction method with a numerical simulation experiment. A dental phantom with complicated structures of teeth and soft tissue is used. X-ray beams from three spectra of different peak energies (120 kVp, 90 kVp, 60 kVp) irradiate the phantom to form tri-energy projections. Projection data covering only 75° from each energy spectrum are collected for reconstruction. Independent reconstruction for each energy causes severe limited-angle artifacts even with the help of compressed sensing approaches. Our CPSR provides images free of limited-angle artifacts, and all edge details are well preserved in our experimental study.
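    The clustering step can be illustrated with a toy one-dimensional k-means that turns a pre-reconstructed image's intensities into a piecewise-constant prior. This is a sketch of the idea only, with synthetic intensities; it is not the CPSR model or its ADMM solver.

```python
import numpy as np

def kmeans_1d(values, k=3, n_iter=20, seed=0):
    # Plain Lloyd iterations on scalar intensities.
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Synthetic "prior image": three tissue-like intensity levels plus noise.
rng = np.random.default_rng(1)
prior = np.concatenate([np.full(50, 0.1), np.full(50, 0.5), np.full(50, 0.9)])
prior = prior + rng.normal(0.0, 0.01, prior.size)
labels, centers = kmeans_1d(prior, k=3)
prior_image = centers[labels]        # piecewise-constant dictionary representation
```

    Replacing each pixel with its cluster mean yields the sparse, structure-preserving representation that the reconstruction can use as a constraint.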

  17. Verification technology of remote sensing camera satellite imaging simulation based on ray tracing

    NASA Astrophysics Data System (ADS)

    Gu, Qiongqiong; Chen, Xiaomei; Yang, Deyun

    2017-08-01

    Remote sensing satellite camera imaging simulation technology is widely used to evaluate satellite imaging quality and to test data application systems, but the simulation precision is hard to verify. In this paper, we propose an experimental verification method based on comparing test parameter variations. For the ray-tracing-based simulation model, the experiment verifies the model's precision by changing the types of devices that correspond to the parameters of the model. The experimental results show that the similarity between the ray-tracing imaging model and the experimental image is 91.4%, indicating that the model simulates the remote sensing satellite imaging system well.

  18. Contrast-based sensorless adaptive optics for retinal imaging.

    PubMed

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew

    2015-09-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.

  19. Compressive sensing method for recognizing cat-eye effect targets.

    PubMed

    Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo

    2013-10-01

    This paper proposes a cat-eye effect target recognition method based on compressive sensing (CS) and presents a recognition method (sample processing before reconstruction based on compressed sensing, or SPCS) for image processing. In this method, linear projections of the original image sequences are applied to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, a corresponding imaging mechanism for acquiring active and passive image sequences is put forward. The method uses fewer images to recognize cat-eye effect targets, reduces data storage, and translates traditional target identification based on original image processing into the processing of measurement vectors. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual-criteria method.

  20. Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications.

    PubMed

    Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun

    2010-12-29

    In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors' architecture on the basis of the type of electric measurement or imaging functionalities.

  1. Effective and efficient analysis of spatio-temporal data

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongnan

    Spatio-temporal data mining, i.e., mining knowledge from large amounts of spatio-temporal data, is a highly demanding field because huge amounts of spatio-temporal data have been collected in various applications, ranging from remote sensing to geographical information systems (GIS), computer cartography, environmental assessment, and planning. The collected data far exceed humans' ability to analyze them, which makes it crucial to develop analysis tools. Recent studies have extended the scope of data mining from relational and transactional datasets to spatial and temporal datasets. Among the various forms of spatio-temporal data, remote sensing images play an important role, due to the growing number of Earth-observing satellites. In this dissertation, we propose two approaches to analyze remote sensing data. The first applies association rule mining to image processing. Each image is divided into a number of image blocks, and a spatial relationship is built for these blocks during the dividing process. Because each image is shot in a time series, the large collection of images forms a spatio-temporal dataset. The second approach discovers co-occurrence patterns from these images; the generated patterns represent subsets of spatial features that are located together in space and time. A weather analysis is composed of individual analyses of several meteorological variables, including temperature, pressure, dew point, wind, clouds, and visibility. Local-scale models provide detailed analysis and forecasts of meteorological phenomena ranging from a few kilometers to about 100 kilometers in size. When some of these meteorological variables show particular change tendencies, severe weather will follow in most cases.
Using association rule discovery, we found that changes in certain meteorological variables are tightly related to severe weather that follows soon after. This dissertation is composed of three parts: an introduction; background knowledge and related work; and my three contributions to the development of approaches for spatio-temporal data mining: the DYSTAL, STARSI, and COSTCOP+ algorithms.
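    The co-occurrence mining idea can be illustrated with a toy frequent-pair count over per-block feature sets. The feature names and support threshold below are invented for illustration and are unrelated to the DYSTAL, STARSI, or COSTCOP+ algorithms.

```python
from itertools import combinations
from collections import Counter

# Each "transaction" is the set of features observed in one image block / time step.
transactions = [
    {"low_pressure", "high_humidity", "storm"},
    {"low_pressure", "high_humidity", "storm"},
    {"high_pressure", "clear"},
    {"low_pressure", "high_humidity"},
]
min_support = 0.5

# Count every unordered feature pair across all transactions.
pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1

n = len(transactions)
frequent_pairs = {p: c / n for p, c in pair_counts.items() if c / n >= min_support}
```

    Pairs whose support clears the threshold (here, co-occurring in at least half the transactions) are kept as candidate co-occurrence patterns; larger itemsets are grown from these in the same spirit.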

  2. Practical security and privacy attacks against biometric hashing using sparse recovery

    NASA Astrophysics Data System (ADS)

    Topcu, Berkay; Karabat, Cagatay; Azadmanesh, Matin; Erdogan, Hakan

    2016-12-01

    Biometric hashing is a cancelable biometric verification method that has recently received research interest. It can be considered a two-factor authentication method which combines a personal password (or secret key) with a biometric to obtain a secure binary template used for authentication. We present novel practical security and privacy attacks against biometric hashing when the attacker is assumed to know the user's password, in order to quantify the additional protection due to biometrics when the password is compromised. We present four methods that can reconstruct a biometric feature and/or the image from a hash, and one method which can find the closest biometric data (i.e., face image) from a database. Two of the reconstruction methods are based on 1-bit compressed sensing signal reconstruction, for which the data acquisition scenario is very similar to biometric hashing. Previous literature introduced simple attack methods, but we show that compressed sensing recovery techniques achieve a higher level of threat. In addition, we present privacy attacks which reconstruct a biometric image that resembles the original. We quantify the performance of the attacks using detection error tradeoff curves and equal error rates under advanced attack scenarios. We show that conventional biometric hashing methods suffer from high security and privacy leaks under practical attacks, and we believe more advanced hash generation methods are necessary to avoid them.

  3. Load estimation from photoelastic fringe patterns under combined normal and shear forces

    NASA Astrophysics Data System (ADS)

    Dubey, V. N.; Grewal, G. S.

    2009-08-01

    Recently there has been a spurt of interest in using photoelastic materials for sensing applications. Photoelasticity has been successfully applied to the design of a number of signal-based sensors; however, there have been limited efforts to design image-based photoelastic sensors, which could have wider applications in terms of actual loading and visualisation. The main difficulty is that infinitely many loading conditions may generate the same image on the material surface. For known loading situations, however, the approach can provide dynamic, actual loading conditions in real time, and it is particularly useful for separating components of force in and out of the loading plane. One such application is the separation of normal and shear forces acting on the plantar surface of the foot of diabetic patients for predicting ulceration. In our earlier work we used neural networks to extract normal force information from the fringe patterns using image intensity. This paper considers geometric and various other statistical parameters in addition to image intensity to extract normal as well as shear force information from the fringe pattern in a controlled experimental environment. The results of the neural network with the above parameters and their combinations are compared and discussed. The aim is to generalise the technique for a range of loading conditions that can be exploited for whole-field load visualisation and sensing applications in the biomedical field.

  4. Hyperspectral remote sensing for terrestrial applications

    USGS Publications Warehouse

    Thenkabail, Prasad S.; Teluguntla, Pardhasaradhi G.; Murali Krishna Gumma; Venkateswarlu Dheeravath

    2015-01-01

    Remote sensing data are considered hyperspectral when the data are gathered from numerous wavebands, contiguously over an entire range of the spectrum (e.g., 400–2500 nm). Goetz (1992) defines hyperspectral remote sensing as “The acquisition of images in hundreds of registered, contiguous spectral bands such that for each picture element of an image it is possible to derive a complete reflectance spectrum.” However, Jensen (2004) defines hyperspectral remote sensing as “The simultaneous acquisition of images in many relatively narrow, contiguous and/or non contiguous spectral bands throughout the ultraviolet, visible, and infrared portions of the electromagnetic spectrum.”

  5. Generating land cover boundaries from remotely sensed data using object-based image analysis: overview and epidemiological application.

    PubMed

    Maxwell, Susan K

    2010-12-01

    Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time-consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high-resolution aerial photograph (1 m) and a medium-resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. Copyright © 2010. Published by Elsevier Ltd.

  6. Development of Hyperspectral Remote Sensing Capability For the Early Detection and Monitoring of Harmful Algal Blooms (HABs) in the Great Lakes

    NASA Technical Reports Server (NTRS)

    Lekki, John; Anderson, Robert; Nguyen, Quang-Viet; Demers, James; Leshkevich, George; Flatico, Joseph; Kojima, Jun

    2013-01-01

    Hyperspectral imagers have significant capability for detecting and classifying waterborne constituents. One particularly appropriate application of such instruments in the Great Lakes is to detect and monitor the development of potentially Harmful Algal Blooms (HABs). Two generations of small hyperspectral imagers have been built and tested for aircraft-based monitoring of harmful algal blooms. In this paper, the two instruments and the field studies conducted with them are discussed. During the second field study, in situ reflectance data were obtained from the Research Vessel Lake Guardian in conjunction with reflectance data obtained with the hyperspectral imager from overflights of the same locations. A comparison of these two datasets shows that the airborne hyperspectral imager closely matches measurements obtained from instruments on the lake surface, supporting its use for detecting and monitoring HABs.

  7. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    NASA Astrophysics Data System (ADS)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research effort has been placed on land-use scene classification. However, the task is difficult for HRS images because of their complex backgrounds and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize local features at different scales. The learnt multiscale deep features are then used to generate visual words, and the spatial arrangement of visual words is captured through adaptive vector-quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.

  8. Passive Infrared Thermographic Imaging for Mobile Robot Object Identification

    NASA Astrophysics Data System (ADS)

    Hinders, M. K.; Fehlman, W. L.

    2010-02-01

    The usefulness of thermal infrared imaging as a mobile robot sensing modality is explored, and a set of thermal-physical features used to characterize passive thermal objects in outdoor environments is described. Objects that extend laterally beyond the thermal camera's field of view, such as brick walls, hedges, picket fences, and wood walls as well as compact objects that are laterally within the thermal camera's field of view, such as metal poles and tree trunks, are considered. Classification of passive thermal objects is a subtle process since they are not a source for their own emission of thermal energy. A detailed analysis is included of the acquisition and preprocessing of thermal images, as well as the generation and selection of thermal-physical features from these objects within thermal images. Classification performance using these features is discussed, as a precursor to the design of a physics-based model to automatically classify these objects.

  9. Real time automated inspection

    DOEpatents

    Fant, K.M.; Fundakowski, R.A.; Levitt, T.S.; Overland, J.E.; Suresh, B.R.; Ulrich, F.W.

    1985-05-21

    A method and apparatus are described relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections. 43 figs.

  10. Method of determining forest production from remotely sensed forest parameters

    DOEpatents

    Corey, J.C.; Mackey, H.E. Jr.

    1987-08-31

    A method of determining forest production entirely from remotely sensed data, in which remotely sensed multispectral scanner (MSS) data on forest composition is combined with remotely sensed radar imaging data on forest stand biophysical parameters to provide a measure of forest production. A high correlation has been found to exist between the remotely sensed radar imaging data and on-site measurements of biophysical parameters such as stand height, diameter at breast height, total tree height, mean area per tree, and timber stand volume.

  11. Biomedical imaging with THz waves

    NASA Astrophysics Data System (ADS)

    Nguyen, Andrew

    2010-03-01

    We discuss biomedical imaging using radio waves operating in the terahertz (THz) range between 300 GHz and 3 THz. In particular, we present the concept for two THz imaging systems. One system employs a single antenna, transmitter, and receiver operating over multiple THz frequencies simultaneously for sensing and imaging small areas of the human body or biological samples. The other system consists of multiple antennas, a transmitter, and multiple receivers operating over multiple THz frequencies, capable of simultaneously sensing and imaging the whole body or large biological samples. Using THz waves for biomedical imaging promises unique and substantial medical benefits, including extremely small medical devices, extraordinarily fine spatial resolution, and excellent contrast between images of diseased and healthy tissues. THz imaging is extremely attractive for detection of cancer in its early stages, sensing and imaging of tissues near the skin, and study of disease and its growth over time.

  12. Remote Sensing and Wetland Ecology: a South African Case Study.

    PubMed

    De Roeck, Els R; Verhoest, Niko E C; Miya, Mtemi H; Lievens, Hans; Batelaan, Okke; Thomas, Abraham; Brendonck, Luc

    2008-05-26

    Remote sensing offers a cost-efficient means for identifying and monitoring wetlands over a large area and at different moments in time. In this study, we aim to provide ecologically relevant information on characteristics of temporary and permanent isolated open-water wetlands, obtained by standard techniques and relatively cheap imagery. The number, surface area, nearest distance, and dynamics of isolated temporary and permanent wetlands were determined for the Western Cape, South Africa. Open water bodies (wetlands) were mapped from seven Landsat images (acquired 1987–2002) using supervised maximum likelihood classification. The number of wetlands fluctuated over time. Most wetlands were detected in the winters of 2000 and 2002, probably related to road construction. Imagery acquired in summer contained fewer wetlands than in winter. Most wetlands identified from Landsat images were smaller than one hectare. The average distance to the nearest wetland was larger in summer. In comparison to temporary wetlands, fewer but larger permanent wetlands were detected. In addition, classification of non-vegetated wetlands on an Envisat ASAR radar image (acquired in June 2005) was evaluated. The number of detected small wetlands was lower for radar imagery than for optical imagery (acquired in June 2002), probably because of deterioration of the spatial information content due to the extensive pre-processing requirements of the radar image. Both optical and radar classifications allow assessment of wetland characteristics that potentially influence plant and animal metacommunity structure. Envisat imagery, however, was less suitable than Landsat imagery for the extraction of detailed ecological information, as only large wetlands can be detected. This study has indicated that ecologically relevant data can be generated for the larger wetlands through relatively cheap imagery and standard techniques, despite the relatively low resolution of Landsat and Envisat imagery. 
For the characterisation of very small wetlands, high spatial resolution optical or radar images are needed. This study exemplifies the benefits of integrating remote sensing and ecology and hence stimulates interdisciplinary research of isolated wetlands.

  13. Multispectral remote sensing from unmanned aircraft: image processing workflows and applications for rangeland environments

    USDA-ARS?s Scientific Manuscript database

    Using unmanned aircraft systems (UAS) as remote sensing platforms offers the unique ability for repeated deployment for acquisition of high temporal resolution data at very high spatial resolution. Most image acquisitions from UAS have been in the visible bands, while multispectral remote sensing ap...

  14. Satellites, Remote Sensing, and Classroom Geography for Canadian Teachers.

    ERIC Educational Resources Information Center

    Kirman, Joseph M.

    1998-01-01

    Argues that remote sensing images are a powerful tool for teaching geography. Discusses the use of remote sensing images in the classroom and provides a number of sources for them, some free, many on the World Wide Web. Reviews each source's usefulness for different grade levels and geographic topics. (DSK)

  15. Reliable clarity automatic-evaluation method for optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

    Image clarity, which reflects the sharpness at the edges of objects in images, is an important quality-evaluation index for optical remote sensing images. Scholars at home and abroad have done much work on the estimation of image clarity. At present, common clarity-estimation methods for digital images include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge acutance methods. Frequency-domain function methods are accurate, but their calculation is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by image complexity. Edge acutance methods are effective for clarity estimation, but they require picking out the edges manually. Due to these limits in accuracy, consistency, or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method based on the principle of the edge acutance algorithm is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically locate object edges in images, and the calculation of edge sharpness has been improved. The new method has been tested on several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
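    A gradient-based sharpness score in the spirit of the edge-acutance family of clarity metrics can be sketched as follows; the Sobel kernels and normalization are a generic choice, not the improved algorithm proposed in the paper.

```python
import numpy as np

def tenengrad(img):
    # Mean squared Sobel gradient magnitude as a simple clarity score.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    def conv2(im, k):
        p = np.pad(im, 1, mode="edge")
        out = np.zeros_like(im)
        for i in range(3):
            for j in range(3):
                out += k[i, j] * p[i:i + im.shape[0], j:j + im.shape[1]]
        return out
    gx, gy = conv2(img, kx), conv2(img, ky)
    return float(np.mean(gx ** 2 + gy ** 2))

sharp = np.tile([0.0, 1.0], (8, 4))   # high-contrast vertical stripes
blurry = np.full((8, 8), 0.5)         # flat image with no edges at all
score_sharp, score_blurry = tenengrad(sharp), tenengrad(blurry)
```

    Sharper edges yield larger gradient energy, so the score ranks the striped image above the flat one; automating where the gradients are measured is exactly what the paper's edge-search step addresses.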

  16. Real-Time Classification of Hand Motions Using Ultrasound Imaging of Forearm Muscles.

    PubMed

    Akhlaghi, Nima; Baker, Clayton A; Lahlou, Mohamed; Zafar, Hozaifah; Murthy, Karthik G; Rangwala, Huzefa S; Kosecka, Jana; Joiner, Wilsaan M; Pancrazio, Joseph J; Sikdar, Siddhartha

    2016-08-01

    Surface electromyography (sEMG) has been the predominant method for sensing electrical activity for a number of applications involving muscle-computer interfaces, including myoelectric control of prostheses and rehabilitation robots. Ultrasound imaging for sensing mechanical deformation of functional muscle compartments can overcome several limitations of sEMG, including the inability to differentiate between deep contiguous muscle compartments, low signal-to-noise ratio, and lack of a robust graded signal. The objective of this study was to evaluate the feasibility of real-time graded control using a computationally efficient method to differentiate between complex hand motions based on ultrasound imaging of forearm muscles. Dynamic ultrasound images of the forearm muscles were obtained from six able-bodied volunteers and analyzed to map muscle activity based on the deformation of the contracting muscles during different hand motions. Each participant performed 15 different hand motions, including digit flexion, different grips (i.e., power grasp and pinch grip), and grips in combination with wrist pronation. During the training phase, we generated a database of activity patterns corresponding to different hand motions for each participant. During the testing phase, novel activity patterns were classified using a nearest neighbor classification algorithm based on that database. The average classification accuracy was 91%. Real-time image-based control of a virtual hand showed an average classification accuracy of 92%. Our results demonstrate the feasibility of using ultrasound imaging as a robust muscle-computer interface. Potential clinical applications include control of multiarticulated prosthetic hands, stroke rehabilitation, and fundamental investigations of motor control and biomechanics.
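    The nearest-neighbor classification of activity patterns described above can be sketched with synthetic feature vectors standing in for the ultrasound-derived activity maps; the cluster layout and dimensions are invented for illustration.

```python
import numpy as np

# Synthetic "activity patterns": tight clusters around per-motion templates.
rng = np.random.default_rng(0)
n_classes, n_train, dim = 3, 10, 16
templates = rng.standard_normal((n_classes, dim)) * 3.0
X_train = np.vstack([t + 0.1 * rng.standard_normal((n_train, dim))
                     for t in templates])
y_train = np.repeat(np.arange(n_classes), n_train)

def predict_1nn(x):
    # Nearest-neighbor lookup against the training database.
    dists = np.linalg.norm(X_train - x, axis=1)
    return int(y_train[np.argmin(dists)])

x_test = templates[1] + 0.1 * rng.standard_normal(dim)
pred = predict_1nn(x_test)
```

    The training phase amounts to storing the labeled patterns; testing reduces to a distance computation, which keeps the method computationally cheap enough for real-time control.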

  17. Research Issues in Image Registration for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Eastman, Roger D.; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    Image registration is an important element in data processing for remote sensing with many applications and a wide range of solutions. Despite considerable investigation the field has not settled on a definitive solution for most applications and a number of questions remain open. This article looks at selected research issues by surveying the experience of operational satellite teams, application-specific requirements for Earth science, and our experiments in the evaluation of image registration algorithms with emphasis on the comparison of algorithms for subpixel accuracy. We conclude that remote sensing applications put particular demands on image registration algorithms to take into account domain-specific knowledge of geometric transformations and image content.

  18. Assessing the effectiveness of Landsat 8 chlorophyll a retrieval algorithms for regional freshwater monitoring.

    PubMed

    Boucher, Jonah; Weathers, Kathleen C; Norouzi, Hamid; Steele, Bethel

    2018-06-01

    Predicting algal blooms has become a priority for scientists, municipalities, businesses, and citizens. Remote sensing offers solutions to the spatial and temporal challenges facing existing lake research and monitoring programs that rely primarily on high-investment, in situ measurements. Techniques to remotely measure chlorophyll a (chl a) as a proxy for algal biomass have been limited to specific large water bodies in particular seasons and narrow chl a ranges. Thus, a first step toward prediction of algal blooms is generating regionally robust algorithms using in situ and remote sensing data. This study explores the relationship between in-lake measured chl a data from lakes in Maine and New Hampshire, USA, and remotely sensed chl a retrieval algorithm outputs. Landsat 8 images were obtained and then processed after the required atmospheric and radiometric corrections. Six previously developed algorithms were tested on a regional scale on 11 scenes from 2013 to 2015 covering 192 lakes. The best performing algorithm across data from both states had a correlation coefficient (R2) of 0.16 (P ≤ 0.05) when Landsat 8 images within 5 d of in situ sampling were used, and improved to an R2 of 0.25 when data from Maine only were used. The strength of the correlation varied with the specificity of the time window in relation to the in situ sampling date, explaining up to 27% of the variation in the data across several scenes. Two previously published algorithms using Landsat 8's Bands 1-4 were best correlated with chl a, and for particular late-summer scenes, they accounted for up to 69% of the variation in in situ measurements. A sensitivity analysis revealed that a longer time difference between in situ measurements and the satellite image increased uncertainty in the models, and an effect of the time of year on several indices was demonstrated. 
A regional model based on the best performing remote sensing algorithm was developed and was validated using independent in situ measurements and satellite images. These results suggest that, despite challenges including seasonal effects and low chl a thresholds, remote sensing could be an effective and accessible regional-scale tool for chl a monitoring programs in lakes. © 2018 The Authors. Ecological Applications published by Wiley Periodicals, Inc. on behalf of Ecological Society of America.
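    The retrieval algorithms themselves are not reproduced in the abstract, but most Landsat 8 chl a algorithms regress (often log-transformed) chl a against band ratios or band combinations. A hedged sketch of that general form, with entirely illustrative data and no claim to the paper's coefficients:

```python
import numpy as np

# Hypothetical example: fit ln(chl a) against a blue/green band ratio
# (a common functional form for Landsat 8 Bands 1-4 algorithms; the
# ratio values and chl a values below are invented for illustration).
ratio = np.array([0.6, 0.8, 1.0, 1.2, 1.5])   # e.g. Band 2 / Band 3 reflectance
ln_chl = np.array([2.1, 1.6, 1.1, 0.7, 0.1])  # ln(chl a in ug/L), in situ

slope, intercept = np.polyfit(ratio, ln_chl, 1)
pred = slope * ratio + intercept
r2 = 1 - np.sum((ln_chl - pred) ** 2) / np.sum((ln_chl - ln_chl.mean()) ** 2)
print(round(slope, 3), round(r2, 3))
```

    With real data the fit quality (R2) would be evaluated scene by scene, as the study does when comparing time windows.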

  19. Golden-ratio rotated stack-of-stars acquisition for improved volumetric MRI.

    PubMed

    Zhou, Ziwu; Han, Fei; Yan, Lirong; Wang, Danny J J; Hu, Peng

    2017-12-01

    To develop and evaluate an improved stack-of-stars radial sampling strategy for reducing streaking artifacts. The conventional stack-of-stars sampling strategy collects the same radial angles for every partition (slice) encoding. In an undersampled acquisition, such an aligned acquisition generates coherent aliasing patterns and introduces strong streaking artifacts. We show that by rotating the radial spokes in a golden-angle manner along the partition-encoding direction, the aliasing pattern is modified, resulting in improved image quality for gridding and more advanced reconstruction methods. Computer simulations were performed, and phantom as well as in vivo images for three different applications were acquired. Simulation, phantom, and in vivo experiments confirmed that the proposed method was able to generate images with fewer streaking artifacts and sharper structures from undersampled acquisitions in comparison with the conventional aligned approach at the same acceleration factors. By combining parallel imaging and compressed sensing in the reconstruction, streaking artifacts were mostly removed, with improved delineation of fine structures using the proposed strategy. We present a simple method to reduce streaking artifacts and improve image quality in 3D stack-of-stars acquisitions by re-arranging the radial spoke angles in the 3D partition direction, which can be used for rapid volumetric imaging. Magn Reson Med 78:2290-2298, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
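    The core idea, rotating each partition's spoke set by a golden-angle increment along the partition-encoding direction, can be sketched as follows (a simplified illustration of the angle schedule, not the authors' exact sequence code):

```python
import math

GOLDEN_ANGLE = 111.246117975  # degrees

def spoke_angles(n_spokes, n_partitions, rotated=True):
    """Radial spoke angles for a stack-of-stars acquisition.

    Conventional (aligned): every partition reuses the same spoke
    angles, so undersampling aliases coherently.  Proposed: each
    partition's spokes are offset by a golden-angle increment modulo
    the spoke spacing, decorrelating the aliasing across partitions.
    """
    base = 180.0 / n_spokes  # uniform spoke spacing within a partition
    angles = []
    for p in range(n_partitions):
        offset = (p * GOLDEN_ANGLE) % base if rotated else 0.0
        angles.append([(offset + s * base) % 180.0 for s in range(n_spokes)])
    return angles

aligned = spoke_angles(8, 4, rotated=False)   # identical angles per partition
rotated = spoke_angles(8, 4, rotated=True)    # shifted angles per partition
```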

  20. Noninvasive measurement of pharmacokinetics by near-infrared fluorescence imaging in the eye of mice

    NASA Astrophysics Data System (ADS)

    Dobosz, Michael; Strobel, Steffen; Stubenrauch, Kay-Gunnar; Osl, Franz; Scheuer, Werner

    2014-01-01

    Purpose: For generating preclinical pharmacokinetics (PKs) of compounds, blood is drawn at different time points and levels are quantified by different analytical methods. In order to obtain statistically meaningful data, 3 to 5 animals are used for each time point to determine the serum peak level and half-life of the compound. Both characteristics are determined by data interpolation, which may influence the accuracy of these values. We provide a method that allows continuous, noninvasive monitoring of blood levels by measuring the fluorescence intensity of labeled compounds in the eye and other body regions of anesthetized mice. Procedures: The method evaluation was performed with four different fluorescent compounds: (i) indocyanine green, a nontargeting dye; (ii) OsteoSense750, a bone targeting agent; (iii) tumor targeting Trastuzumab-Alexa750; and (iv) its F(ab')2-Alexa750 fragment. The latter was used for a direct comparison between fluorescence imaging and classical blood analysis using enzyme-linked immunosorbent assay (ELISA). Results: We found an excellent correlation between blood levels measured by noninvasive eye imaging and the results generated by classical methods. A strong correlation between eye imaging and ELISA was demonstrated for the F(ab')2 fragment. Whole body imaging revealed compound accumulation in the expected regions (e.g., liver, bone). Conclusions: The combination of eye and whole body fluorescence imaging enables the simultaneous measurement of blood PKs and biodistribution of fluorescently labeled compounds.

  1. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong

    2005-01-01

    Two efficient workflows are developed for the reconstruction of 3D full-color building models. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a digital camera separately. The other uses an image-based approach that reconstructs the model with color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face shared between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are applied to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in each group are then adjusted using the uv map as a guide. The final assembled image is glued back onto the 3D mesh to present a full-color building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view-planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles guaranteeing that each model face appears in at least two, and no more than three, of the pictures. The 3D model can then be reconstructed with a minimum amount of labor spent correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. 
    All the test cases show exactly the same topology and a reasonably low dimensional error ratio, proving the applicability of the algorithm.
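    The view-planning step can be read as a coverage problem: choose a small set of views such that every model face is photographed at least twice. A greedy sketch of that idea (the paper's actual algorithm is not reproduced here, and the view visibility sets are hypothetical):

```python
def plan_views(view_faces, min_cover=2):
    """Greedily pick camera views until every face is seen in at
    least `min_cover` views.  `view_faces` maps a candidate view
    angle to the set of model faces visible from it.  This is an
    illustrative greedy sketch, not the paper's exact algorithm.
    """
    need = {f: min_cover for faces in view_faces.values() for f in faces}
    chosen = []
    remaining = dict(view_faces)
    while any(n > 0 for n in need.values()) and remaining:
        # pick the view that reduces the most outstanding coverage
        best = max(remaining, key=lambda v: sum(need[f] > 0 for f in remaining[v]))
        chosen.append(best)
        for f in remaining.pop(best):
            need[f] -= 1
    return chosen

# hypothetical visibility: each face of a building seen from two views
views = {
    "front": {"A", "B"}, "left": {"B", "C"},
    "back":  {"C", "D"}, "right": {"D", "A"},
}
print(plan_views(views))
```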

  2. Validation of "AW3D" Global Dsm Generated from Alos Prism

    NASA Astrophysics Data System (ADS)

    Takaku, Junichi; Tadono, Takeo; Tsutsui, Ken; Ichikawa, Mayumi

    2016-06-01

    The Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM), one of the sensors on board the Advanced Land Observing Satellite (ALOS), was designed to generate worldwide topographic data through optical stereoscopic observation. It has the unique ability to perform triplet stereo observation, viewing forward, nadir, and backward along the satellite track at 2.5 m ground resolution, and it collected images all over the world during the mission life of the satellite from 2006 through 2011. A new project to generate global elevation datasets from these image archives was started in 2014. The data are processed at an unprecedented 5 m grid spacing utilizing the original triplet stereo images at 2.5 m resolution. As the amount of processed data has grown steadily, to the point that the global land area is almost covered, trends in global data quality have become apparent. This paper reports up-to-date results of validations of the accuracy of the data products as well as the status of data coverage of global areas. The accuracies and error characteristics of the datasets are analyzed by comparison with existing global datasets such as Ice, Cloud, and land Elevation Satellite (ICESat) data, as well as ground control points (GCPs) and reference Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR).
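    Validation against ICESat points or GCPs typically reduces to bias and RMSE statistics over co-located elevation samples. A minimal illustration of that comparison (the sample elevations below are invented):

```python
import math

def dem_error_stats(dem_heights, reference_heights):
    """Mean error (bias) and RMSE between DSM samples and reference
    elevations (e.g. ICESat footprints or GCPs) at the same
    locations.  A simple illustration of the kind of statistics the
    validation reports."""
    diffs = [d - r for d, r in zip(dem_heights, reference_heights)]
    n = len(diffs)
    mean_err = sum(diffs) / n
    rmse = math.sqrt(sum(e * e for e in diffs) / n)
    return mean_err, rmse

# invented co-located samples: DSM heights vs. reference heights (m)
bias, rmse = dem_error_stats([101.2, 98.7, 150.4], [100.0, 99.0, 150.0])
```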

  3. Comparison of Landsat MSS and merged MSS/RBV data for analysis of natural vegetation

    NASA Technical Reports Server (NTRS)

    Roller, N. E. G.; Cox, S.

    1980-01-01

    Improved resolution could make satellite remote sensing data more useful for surveys of natural vegetation. Although improved satellite/sensor systems appear to be several years away, one potential interim solution to the problem of achieving greater resolution without sacrificing spectral sensitivity is to merge Landsat RBV and MSS data. This paper describes the results of a study performed to obtain a preliminary evaluation of the usefulness of two types of products that can be made by merging Landsat RBV and MSS data. The products generated were a false color composite image and a computer recognition map. Of these two products, the false color composite image appears to be the more useful.

  4. Analysis of Benefits and Pitfalls of Satellite SAR for Coastal Area Monitoring

    NASA Astrophysics Data System (ADS)

    Nunziata, F.; Buono, A.; Migliaccio, M.; Li, X.; Wei, Y.

    2016-08-01

    This study describes the outcomes of Dragon-3 project no. 10689. The activities undertaken deal with coastal area monitoring, including sea pollution and coastline extraction. The key remote sensing tool is Synthetic Aperture Radar (SAR), which provides fine-resolution images of the microwave reflectivity of the observed scene. However, the interpretation of SAR images is not at all straightforward, and the above-mentioned coastal area applications cannot be easily addressed using single-polarization SAR. Hence, the main outcome of this project is an investigation of the capability of multi-polarization SAR measurements to generate added-value products in the frame of coastal area management.

  5. Landsat: A global land-imaging mission

    USGS Publications Warehouse

    ,

    2012-01-01

    Across four decades since 1972, Landsat satellites have continuously acquired space-based images of the Earth's land surface, coastal shallows, and coral reefs. The Landsat Program, a joint effort of the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA), was established to routinely gather land imagery from space. NASA develops remote-sensing instruments and spacecraft, then launches and validates the performance of the instruments and satellites. The USGS then assumes ownership and operation of the satellites, in addition to managing all ground reception, data archiving, product generation, and distribution. The result of this program is a long-term record of natural and human-induced changes on the global landscape.

  6. Illumination invariant feature point matching for high-resolution planetary remote sensing images

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zeng, Hai; Hu, Han

    2018-03-01

    Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in a histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching or co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images by integrating with the affine invariant feature detectors.
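    The first step, leveling the dominant-orientation histogram with an adaptive suppression function centered on the illumination-induced peak, might look like the following sketch (the inverted-Gaussian form, width, and strength are assumptions for illustration, not the authors' tuning):

```python
import numpy as np

def suppress_histogram(hist, peak_bin, sigma=2.0, strength=0.6):
    """Level an orientation histogram by attenuating bins near the
    illumination-induced peak with an inverted Gaussian.  peak_bin
    would be aligned with the sub-solar azimuth angle at image
    collection time; the suppression form here is an assumption."""
    bins = np.arange(len(hist))
    # circular distance to the peak (orientation bins wrap around)
    d = np.minimum(np.abs(bins - peak_bin), len(hist) - np.abs(bins - peak_bin))
    weight = 1.0 - strength * np.exp(-(d ** 2) / (2 * sigma ** 2))
    return hist * weight

h = np.ones(36)
h[10] = 8.0   # dominant orientation aligned with the sub-solar azimuth
flattened = suppress_histogram(h, peak_bin=10)
```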

  7. Earth view: A business guide to orbital remote sensing

    NASA Technical Reports Server (NTRS)

    Bishop, Peter C.

    1990-01-01

    The following subject areas are covered: Earth view - a guide to orbital remote sensing; current orbital remote sensing systems (LANDSAT, SPOT image, MOS-1, Soviet remote sensing systems); remote sensing satellite; and remote sensing organizations.

  8. Ocean Remote Sensing from Chinese Spaceborne Microwave Sensors

    NASA Astrophysics Data System (ADS)

    Yang, J.

    2017-12-01

    GF-3 (GF stands for GaoFen, which means High Resolution in Chinese) is China's first C-band multi-polarization high-resolution microwave remote sensing satellite. It was successfully launched on Aug. 10, 2016 from the Taiyuan Satellite Launch Center. The synthetic aperture radar (SAR) on board GF-3 works at incidence angles ranging from 20 to 50 degrees with several polarization modes, including single-polarization, dual-polarization and quad-polarization. With 12 imaging modes, consisting of traditional ones such as stripmap and scanSAR and new ones such as spotlight, wave and global modes, GF-3 carries more imaging modes than any other SAR satellite. GF-3 SAR is thus a multi-functional satellite for both land and ocean observation, achieved by switching among the different imaging modes. TG-2 (TG stands for TianGong, which means Heavenly Palace in Chinese) is a Chinese space laboratory which was launched on 15 Sep. 2016 from the Jiuquan Satellite Launch Centre aboard a Long March 2F rocket. The onboard Interferometric Imaging Radar Altimeter (InIRA) is a new-generation radar altimeter developed by China and the first wide-swath imaging radar altimeter on orbit; it integrates interferometry, synthetic aperture, and height-tracking techniques at small incidence angles with a swath of 30 km. The InIRA was switched on to acquire data during this mission on 22 September. This paper gives some preliminary results for the quantitative remote sensing of ocean winds and waves from the GF-3 SAR and the TG-2 InIRA. Quantitative analysis and ocean wave spectra retrieval have been performed on the SAR imagery. The image spectra, which contain ocean wave information, are first estimated from the image modulation using the fast Fourier transform. 
    Then, the wave spectra are retrieved from the image spectra based on Hasselmann's classical quasi-linear SAR-ocean wave mapping model and the estimation of three modulation transfer functions (MTFs): tilt, hydrodynamic, and velocity bunching modulation. The wind speed is retrieved from InIRA data using a Ku-band low-incidence backscatter model (KuLMOD), which relates the backscattering coefficients to wind speeds and incidence angles. The ocean wave spectra are retrieved linearly from image spectra, first extracted from the InIRA data, using a procedure similar to that for the GF-3 SAR data.
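    The first retrieval step, estimating an image spectrum from the SAR intensity modulation with a fast Fourier transform, can be illustrated on a synthetic monochromatic wave field (the scene below is simulated; real processing would also involve detrending, windowing, and spectral smoothing):

```python
import numpy as np

# Synthetic SAR-like intensity image: uniform background plus a
# monochromatic wave modulation with 8 cycles across the image.
n = 64
x = np.arange(n)
wave = np.cos(2 * np.pi * 8 * x / n)
image = 1.0 + 0.2 * wave[None, :] * np.ones((n, 1))

# Image spectrum: squared magnitude of the 2D FFT of the
# mean-removed intensity.  The peak falls at the wavenumber of the
# dominant wave component.
spectrum = np.abs(np.fft.fft2(image - image.mean())) ** 2
ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
```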

  9. Remote Sensing Application to Land Use Classification in a Rapidly Changing Agricultural/Urban Area: City of Virginia Beach, Virginia. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Odenyo, V. A. O.

    1975-01-01

    Remote sensing data on computer-compatible tapes of LANDSAT 1 multispectral scanner imagery were analyzed to generate a land use map of the City of Virginia Beach. All four bands were used in both the supervised and unsupervised approaches with the LARSYS software system. Color IR imagery from a U-2 flight of the same area was also digitized, and two sample areas were analyzed via the unsupervised approach. The relationships between the mapped land use and the soils of the area were investigated. A land use/land cover map at a scale of 1:24,000 was obtained from the supervised analysis of LANDSAT 1 data. It was concluded that machine analysis of remote sensing data to produce land use maps was feasible; that the LARSYS software system was usable for this purpose; and that machine analysis was capable of extracting detailed information from the relatively small-scale LANDSAT data in a much shorter time without compromising accuracy.

  10. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Low-altitude imagery remotely sensed from unmanned air vehicles has the advantages of higher resolution, easy acquisition, and real-time access, among others. It has been widely used in mapping, target identification, and other fields in recent years. However, because of operational limitations, the video images are unstable, the targets move fast, and the shooting background is complex, making the video difficult to process. In other fields, especially computer vision, research on video imagery is more extensive, which is very helpful for processing low-altitude remotely sensed imagery. On this basis, this paper analyzes and summarizes a large body of video image processing work from different fields, covering research purposes, data sources, and the pros and cons of the technology. Meanwhile, this paper explores the technical methods best suited to low-altitude remote sensing video image processing.

  11. A scene-analysis approach to remote sensing. [San Francisco, California

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M. (Principal Investigator); Fischler, M. A.; Wolf, H. C.

    1978-01-01

    The author has identified the following significant results. Geometric correspondence between a sensed image and a symbolic map is established in an initial stage of processing by adjusting the parameters of a sensor model so that the image features predicted from the map optimally match corresponding features extracted from the sensed image. Information in the map is then used to constrain where to look in an image, what to look for, and how to interpret what is seen. For simple monitoring tasks involving multispectral classification, these constraints significantly reduce computation, simplify interpretation, and improve the utility of the resulting information. Previously intractable tasks requiring spatial and textural analysis may become straightforward in the context established by the map knowledge. The use of map-guided image analysis in monitoring the volume of water in a reservoir, the number of boxcars in a railyard, and the number of ships in a harbor is demonstrated.

  12. Can Satellite Remote Sensing be Applied in Geological Mapping in Tropics?

    NASA Astrophysics Data System (ADS)

    Magiera, Janusz

    2018-03-01

    Remote sensing (RS) techniques are based on spectral data registered by RS scanners as energy reflected from the Earth's surface or emitted by it. In "geological" RS, the reflectance (or emittance) should come from rock or sediment. The problem in tropical and subtropical areas is dense vegetation: the spectral response from rocks and sediments is gathered only from the gaps among the trees and shrubs. High-resolution images are therefore especially valuable here. A new generation of satellites and scanners (DigitalGlobe WV2, WV3 and WV4) yields imagery with a spatial resolution of 2 m and up to 16 spectral bands (WV3). Images acquired by Landsat (TM, ETM+, OLI) and Sentinel 2 also have good spectral resolution (6-12 bands in the visible and infrared) and, despite lower spatial resolution (10-60 m pixel size), are useful for extracting lithological information. A lithological RS map can achieve good precision (down to a single rock or outcrop of meter size). Supplemented with analysis of a Digital Elevation Model and high-resolution orthophotomaps (Google Maps, Bing, etc.), it allows quick and cheap mapping of unsurveyed areas.

  13. Science plan for the Alaska SAR facility program. Phase 1: Data from the first European sensing satellite, ERS-1

    NASA Technical Reports Server (NTRS)

    Carsey, Frank D.

    1989-01-01

    Science objectives, opportunities, and requirements are discussed for the utilization of data from the Synthetic Aperture Radar (SAR) on the first European Remote Sensing Satellite (ERS-1), to be flown by the European Space Agency in the early 1990s. The principal applications of the imaging data are in studies of geophysical processes taking place within the direct-reception area of the Alaska SAR Facility (ASF) in Fairbanks, Alaska, essentially the area within 2000 km of the receiver. The primary research that will be supported by these data includes studies of the oceanography and sea ice phenomena of Alaskan and adjacent polar waters and the geology, glaciology, hydrology, and ecology of the region. These studies focus on the area within the reception mask of ASF, and numerous connections are made to global processes and thus to the observation and understanding of global change. Processes within the station reception area both affect and are affected by global phenomena, in some cases quite critically. Requirements for data processing and archiving systems, prelaunch research, and image processing for geophysical product generation are discussed.

  14. Signature simulation of mixed materials

    NASA Astrophysics Data System (ADS)

    Carson, Tyler D.; Salvaggio, Carl

    2015-05-01

    Soil target signatures vary due to geometry, chemical composition, and scene radiometry. Although radiative transfer models and function-fit physical models may describe certain targets in limited depth, incorporating all three signature variables is difficult. This work describes a method to simulate the transient signatures of soil by first considering scene geometry synthetically created using 3D physics engines. Through the assignment of spectral data from the Nonconventional Exploitation Factors Data System (NEFDS), the synthetic scene is represented as a physical mixture of particles. Finally, first-principles radiometry is modeled using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. With DIRSIG, radiometric and sensing conditions were systematically manipulated to produce and record goniometric signatures. The implementation of this virtual goniometer allows users to examine how a target's bidirectional reflectance distribution function (BRDF) will change with geometry, composition, and illumination direction. Because it uses 3D computer graphics models, this process does not require the geometric assumptions native to many radiative transfer models, and it offers a discrete way to circumvent the significant cost in time and resources associated with hardware-based goniometric data collections.

  15. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new processing method based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes the computing nodes of cluster resources and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability, and superiority of the system design were verified by building an actual Hadoop service system, testing the storage efficiency for different image data and multiple users, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.
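    The image pyramids mentioned above are successive downsampled levels of the source image; in the described system each level would be split into storage blocks and generated in the background by MapReduce tasks. A single-process sketch of the pyramid itself (the 2x2 averaging scheme and stopping size are illustrative assumptions):

```python
import numpy as np

def build_pyramid(image, min_size=64):
    """Build an image pyramid by repeated 2x2 block averaging.
    In a Hadoop deployment each level would be split into storage
    blocks and produced by MapReduce tasks; this single-process
    version only illustrates the pyramid structure."""
    levels = [image]
    while min(levels[-1].shape) > min_size:
        a = levels[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]  # trim odd edges so 2x2 blocks tile exactly
        reduced = (a[0::2, 0::2] + a[1::2, 0::2] +
                   a[0::2, 1::2] + a[1::2, 1::2]) / 4.0
        levels.append(reduced)
    return levels

pyr = build_pyramid(np.ones((256, 256)))
```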

  16. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is presented. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of an image effectively and provides a practical basis for selecting the number of compressive observations. The results also show that, since the number of observations is chosen based on the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
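    The sparsity estimate, i.e. the fraction of 2D-DCT coefficients needed to capture a given share of the image energy, can be sketched as follows (the orthonormal DCT construction and the 0.99 threshold are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def estimate_sparsity(image, energy_threshold=0.99):
    """Fraction of 2D-DCT coefficients needed to retain the given
    share of signal energy: sort normalized coefficient energies in
    descending order and count the dominant ones."""
    d_rows = dct_matrix(image.shape[0])
    d_cols = dct_matrix(image.shape[1])
    coeffs = d_rows @ image @ d_cols.T
    energy = np.sort((coeffs ** 2).ravel())[::-1]
    cum = np.cumsum(energy) / energy.sum()
    k = int(np.searchsorted(cum, energy_threshold) + 1)
    return k / energy.size

# a smooth ramp image is highly compressible, so few coefficients suffice
smooth = np.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
s = estimate_sparsity(smooth)
```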

  17. Airborne imaging spectrometers developed in China

    NASA Astrophysics Data System (ADS)

    Wang, Jianyu; Xue, Yongqi

    1998-08-01

    Airborne imaging spectrometry, a principal means of airborne remote sensing, has developed rapidly both worldwide and in China in recent years. This paper describes the Modular Airborne Imaging Spectrometer (MAIS), the Operational Modular Airborne Imaging Spectrometer (OMAIS) and the Pushbroom Hyperspectral Imager (PHI), which have been developed or are being developed in the Airborne Remote Sensing Lab of the Shanghai Institute of Technical Physics, CAS.

  18. Contrast-based sensorless adaptive optics for retinal imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T.O.; He, Zheng; Metha, Andrew

    2015-01-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes. PMID:26417525

  19. Visual Image Sensor Organ Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensor inputs (e.g., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider-angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
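    The image-to-sound translation, brightness mapped onto a two-dimensional function of audio frequency and time, can be sketched as follows (the frequency range, column-by-column timing, and sine synthesis are assumptions for illustration; the actual VISOR mapping is not specified here):

```python
import numpy as np

def image_to_audio(image, duration=1.0, rate=8000, f_lo=200.0, f_hi=2000.0):
    """Map a 2D brightness image to audio: columns become time
    slices, rows become frequencies, and brightness becomes tone
    amplitude.  A minimal sketch of the sensing-superposition idea;
    all parameters are illustrative assumptions."""
    rows, cols = image.shape
    freqs = np.linspace(f_hi, f_lo, rows)        # top of image = high pitch
    samples_per_col = int(duration * rate / cols)
    t = np.arange(samples_per_col) / rate
    out = []
    for c in range(cols):
        col = image[:, c]
        # superpose one sinusoid per row, weighted by brightness
        tones = col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        out.append(tones.sum(axis=0))
    return np.concatenate(out)

audio = image_to_audio(np.eye(8))   # a diagonal line becomes a pitch sweep
```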

  20. Demonstrating the Value of Near Real-time Satellite-based Earth Observations in a Research and Education Framework

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Hao, X.; Kinter, J. L.; Stearn, G.; Aliani, M.

    2017-12-01

    The launch of the GOES-16 series provides an opportunity to advance near real-time applications in natural hazard detection, monitoring, and warning. This study demonstrates the capability and value of receiving real-time satellite-based Earth observations over fast terrestrial networks and processing high-resolution remote sensing data in a university environment. The demonstration system includes four components: 1) near real-time data receiving and processing; 2) data analysis and visualization; 3) event detection and monitoring; and 4) information dissemination. Various tools are developed and integrated to receive and process GRB data in near real-time, produce images and value-added data products, and detect and monitor extreme weather events such as hurricanes, fires, flooding, fog, and lightning. A web-based application system is developed to disseminate near real-time satellite images and data products. The images are generated in a GIS-compatible format (GeoTIFF) to enable convenient use and integration in various GIS platforms. This study enhances capacities for undergraduate and graduate education in Earth system and climate sciences and related applications, helping students understand the basic principles and technology of real-time applications of remote sensing measurements. It also provides an integrated platform for near real-time monitoring of extreme weather events, which is helpful for various user communities.

  1. Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications

    PubMed Central

    Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun

    2010-01-01

    In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors’ architecture on the basis of the type of electric measurement or imaging functionalities. PMID:28879978

  2. GF-7 Imaging Simulation and Dsm Accuracy Estimate

    NASA Astrophysics Data System (ADS)

    Yue, Q.; Tang, X.; Gao, X.

    2017-05-01

    The GF-7 satellite is a two-line-array stereo imaging satellite for surveying and mapping, scheduled for launch in 2018. Its resolution is about 0.8 m at the subastral point with a swath width of 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for simulation; that is, instead of using a DSM and DOM as basic data (an "ortho-to-stereo" method), we used a "stereo-to-stereo" method, which better reflects the differences in geometry and radiation at different looking angles. The drawback is that geometric error arises from two factors: the different looking angles of the basic and simulated images, and inaccurate or missing ground reference data. We generated a DSM from the WorldView-2 stereo images. This WorldView-2 DSM was used both as the reference DSM for estimating the accuracy of the DSM generated from the simulated GF-7 stereo images, and as "ground truth" for establishing the relationship between WorldView-2 image points and simulated image points. Static MTF was simulated on the instantaneous focal-plane "image" by filtering. SNR was simulated in the electronic sense: the digital value of each WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera. This radiance is converted to an electron count n according to the physical parameters of the GF-7 camera, and a noise electron count n1 is drawn as a random number between -√n and √n. The overall electron count obtained by the TDI CCD is then accumulated and converted to the digital value of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies, and initial phases were used as attitude curves, and geometric installation errors of the CCD tiles were also simulated, considering rotation and translation factors. Finally, an accuracy estimate was made for the DSM generated from the simulated images.
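The electronic-domain SNR simulation described above (radiance converted to a signal electron count n, with a noise electron count drawn between -√n and √n) can be sketched as follows. The gain, full-well capacity, and bit depth are illustrative assumptions, not published GF-7 camera parameters.

```python
import numpy as np

def simulate_snr(radiance, gain=2000.0, full_well=100000.0,
                 bit_depth=12, seed=0):
    """Sketch of the SNR simulation step: radiance is converted to a
    signal electron count n, a noise electron count n1 drawn uniformly
    from [-sqrt(n), sqrt(n)] is added, and the total is quantized to a
    digital number.  gain (electrons per radiance unit), full_well, and
    bit_depth are illustrative assumptions, not GF-7 parameters."""
    rng = np.random.default_rng(seed)
    n = np.asarray(radiance, dtype=float) * gain      # signal electrons
    n1 = rng.uniform(-np.sqrt(n), np.sqrt(n))         # noise electrons
    dn = (n + n1) / full_well * (2 ** bit_depth - 1)  # electrons -> DN
    return np.clip(np.round(dn), 0, 2 ** bit_depth - 1).astype(np.uint16)

dn = simulate_snr(np.full((4, 4), 1.5))  # uniform radiance patch
```

With a uniform input patch, the output digital numbers cluster around the noise-free value, with a spread set by the ±√n noise term.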

  3. Cherenkov imaging and biochemical sensing in vivo during radiation therapy

    NASA Astrophysics Data System (ADS)

    Zhang, Rongxiao

    While Cherenkov emission was discovered more than eighty years ago, the potential applications of imaging it during radiation therapy have only recently been explored. With approximately half of all cancer patients being treated by radiation at some point during their cancer management, there is a constant challenge to ensure optimal treatment efficiency with a maximal tumor-to-normal-tissue therapeutic ratio. To achieve this, the treatment process, as well as the biological information affecting treatment, should ideally be derived directly from the delivery of radiation to the patient. The value of Cherenkov emission imaging was examined here, primarily for treatment monitoring and secondarily for Cherenkov-excited luminescence for biochemical sensing within tissue. Through synchronized gating to the short radiation pulses of a linear accelerator (200 Hz, 3 μs pulses) and a gated intensified camera, the Cherenkov radiation can be captured at near video frame rates (30 frames per second) under dim ambient room lighting. This procedure, sometimes termed Cherenkoscopy, is readily performed without affecting the normal process of external beam radiation therapy. With simulation, phantom, and clinical trial data, three applications of Cherenkoscopy were examined: i) treatment monitoring, ii) patient position monitoring and motion tracking, and iii) superficial dose imaging. The temporal dynamics of the delivered radiation fields can be directly imaged on the patient's surface. Image registration and edge detection of Cherenkov images were used to verify patient positioning during treatment; inter-fraction setup accuracy and intra-fraction patient motion were detectable with better than 1 mm accuracy. Cherenkov emission in tissue also opens up a new field of biochemical sensing within the tissue environment, using luminescent agents that can be activated by this light. 
In the first study of its kind with external beam irradiation, a dendritic platinum-based phosphor (PtG4) was used at micromolar concentrations (~5 μM) to generate Cherenkov-induced luminescent signals that are sensitive to the partial pressure of oxygen. Both tomographic reconstruction methods and linear scanned imaging were investigated to examine the limits of detection. Recovery of optical molecular distributions was shown in tissue phantoms and small animals, with high accuracy (~1 μM), high spatial resolution (~0.2 mm), and deep-tissue detectability (~2 cm for Cherenkov-excited luminescence scanned imaging, CELSI), indicating potential for in vivo and clinical use. In summary, this study specified many of the physical and technological details of Cherenkov imaging and Cherenkov-excited emission imaging.
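The abstract does not give PtG4's calibration model, but phosphorescence-based pO2 sensing of this kind is conventionally described by Stern-Volmer quenching; a minimal sketch with illustrative constants (not PtG4 calibration data):

```python
def po2_from_lifetime(tau, tau0=50e-6, k_q=200.0):
    """Invert the Stern-Volmer relation 1/tau = 1/tau0 + k_q * pO2:
    oxygen quenches the phosphor and shortens its lifetime tau.
    tau0 (unquenched lifetime, s) and k_q (quenching constant,
    1/(s*mmHg)) are illustrative values, not PtG4 calibration data."""
    return (1.0 / tau - 1.0 / tau0) / k_q

# the unquenched lifetime maps to zero oxygen; shorter lifetimes to more
assert po2_from_lifetime(50e-6) == 0.0
assert po2_from_lifetime(25e-6) > 0.0
```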

  4. Advancing spaceborne tools for the characterization of planetary ionospheres and circumstellar environments

    NASA Astrophysics Data System (ADS)

    Douglas, Ewan Streets

    This work explores remote sensing of planetary atmospheres and their circumstellar surroundings. The terrestrial ionosphere is a highly variable space plasma embedded in the thermosphere. Generated by solar radiation and predominantly composed of oxygen ions at high altitudes, the ionosphere is dynamically and chemically coupled to the neutral atmosphere. Variations in ionospheric plasma density impact radio astronomy and communications. Inverting observations of 83.4 nm photons resonantly scattered by singly ionized oxygen holds promise for remotely sensing the ionospheric plasma density. This hypothesis was tested by comparing 83.4 nm limb profiles recorded by the Remote Atmospheric and Ionospheric Detection System aboard the International Space Station to a forward model driven by coincident plasma densities measured independently via ground-based incoherent scatter radar. A comparison study of two separate radar overflights with different limb profile morphologies found agreement between the forward model and measured limb profiles. A new implementation of Chapman parameter retrieval via Markov chain Monte Carlo techniques quantifies the precision of the plasma densities inferred from 83.4 nm emission profiles. This first study demonstrates the utility of 83.4 nm emission for ionospheric remote sensing. Future visible and ultraviolet spectroscopy will characterize the composition of exoplanet atmospheres; therefore, the second study advances technologies for the direct imaging and spectroscopy of exoplanets. Such spectroscopy requires the development of new technologies to separate relatively dim exoplanet light from parent star light. High-contrast observations at short wavelengths require spaceborne telescopes to circumvent atmospheric aberrations. The Planet Imaging Concept Testbed Using a Rocket Experiment (PICTURE) team designed a suborbital sounding rocket payload to demonstrate visible light high-contrast imaging with a visible nulling coronagraph. 
Laboratory operations of the PICTURE coronagraph achieved the high-contrast imaging sensitivity necessary to test for the predicted warm circumstellar belt around Epsilon Eridani. Interferometric wavefront measurements of the calibration target Beta Orionis, recorded during the second test flight in November 2015, demonstrate the first active wavefront sensing with a piezoelectric mirror stage and activation of a micromachined deformable mirror in space. These two studies advance our "close-to-home" knowledge of atmospheres and move exoplanetary studies closer to detailed measurements of atmospheres outside our solar system.
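The Chapman parameters mentioned above (peak density, peak height, scale height) define the standard Chapman layer profile that such retrievals fit to the 83.4 nm emission data; a brief sketch:

```python
import numpy as np

def chapman(h, n_max, h_max, scale_h):
    """Chapman layer profile: N(h) = n_max * exp(0.5*(1 - z - exp(-z)))
    with z = (h - h_max) / scale_h.  The density peaks at h = h_max and
    decays with the scale height scale_h.  Parameter values below are
    illustrative, not retrieved RAIDS values."""
    z = (h - h_max) / scale_h
    return n_max * np.exp(0.5 * (1.0 - z - np.exp(-z)))

h = np.linspace(150.0, 600.0, 451)   # altitude grid, km (1 km steps)
ne = chapman(h, n_max=1.0e12, h_max=300.0, scale_h=50.0)
```

An MCMC retrieval of the kind described would sample (n_max, h_max, scale_h) and compare the forward-modeled limb profile against the measured one.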

  5. Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot.

    PubMed

    Shen, Yajing; Wan, Wenfeng; Zhang, Lijun; Yong, Li; Lu, Haojian; Ding, Weili

    2015-12-15

    Image sensing at small scales is essential in many fields, including microsample observation, defect inspection, and material characterization. However, multi-directional imaging of micro objects remains very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multi-directional image sensing in microscopes based on a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically using the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot through one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical and scanning electron microscopy, and panoramic images of the samples are produced as well. The proposed method paves a new way for microscopy image sensing, and we believe it could have significant impact in many fields, especially sample detection, manipulation, and characterization at small scales.

  6. Assessing groundwater accessibility in the Kharga Basin, Egypt: A remote sensing approach

    NASA Astrophysics Data System (ADS)

    Parks, Shawna; Byrnes, Jeffrey; Abdelsalam, Mohamed G.; Laó Dávila, Daniel A.; Atekwana, Estella A.; Atya, Magdy A.

    2017-12-01

    We used multi-map analysis of remote sensing and ancillary data to identify potentially accessible sites for groundwater resources in the Kharga Basin in the Western Desert of Egypt. This basin is dominated by Cretaceous sandstone formations and lies within the Nubian Sandstone Aquifer. It is dissected by N-S and E-W trending faults that possibly act as conduits for the upward migration of groundwater. Analysis of paleo-drainage using a Digital Elevation Model (DEM) generated from Shuttle Radar Topography Mission (SRTM) data shows that Kharga was a closed basin that might have hosted a paleo-lake, whose water recharged the Nubian Sandstone Aquifer during the wetter Holocene. We generated the following layers for the multi-map analysis: (1) a fracture density map from the interpretation of Landsat Operational Land Imager (OLI), SRTM DEM, and RADARSAT data; (2) a thermal inertia (TI) map (for moisture content imaging) from Moderate Resolution Imaging Spectroradiometer (MODIS) data; (3) a hydraulic conductivity map from lithological units mapped using Landsat OLI and previously published data; and (4) an aquifer thickness map from previously published data. We quantitatively ranked the Kharga Basin by treating regions of high fracture density, high TI, thicker aquifer, and high hydraulic conductivity as having higher potential for groundwater accessibility. Our analysis shows that part of the southern Kharga Basin is suitable for groundwater extraction: this region is where N-S and E-W trending faults intersect, has relatively high TI, and is underlain by a thick aquifer. However, the suitability of this region for groundwater use will be reduced significantly when considering changes in land suitability and the economic depth of groundwater extraction over the next 50 years.
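The quantitative ranking described above can be sketched as a weighted overlay of normalized layers. Equal weights are an assumption (the abstract does not state its weighting scheme), and the data values below are made up for illustration.

```python
import numpy as np

def groundwater_suitability(layers, weights=None):
    """Sketch of the multi-map ranking: each input layer (e.g. fracture
    density, thermal inertia, aquifer thickness, hydraulic conductivity)
    is min-max normalized so that high values favor accessibility, then
    combined into a single suitability score.  Equal weights are an
    assumption; the paper does not state its weighting scheme."""
    layers = [np.asarray(l, dtype=float) for l in layers]
    norm = [(l - l.min()) / (l.max() - l.min()) for l in layers]
    weights = weights or [1.0 / len(norm)] * len(norm)
    return sum(w * n for w, n in zip(weights, norm))

fracture = np.array([[0.1, 0.9], [0.4, 0.7]])   # toy fracture density
thermal  = np.array([[10.0, 30.0], [15.0, 25.0]])  # toy thermal inertia
score = groundwater_suitability([fracture, thermal])
```

The cell that ranks highest in every layer receives the maximum score of 1.0, matching the intersect-the-favorable-regions logic of the analysis.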

  7. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

    The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for developing sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates the outputs of visual and other passive or active sensory instruments into sounds, which become relevant when visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements, and can be developed into a cheap, portable, low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of a dynamic environment. The system operates in real time, delivering the information needed for the particular augmented sensing task. The Sensing Super-position device increases perceived image resolution by providing an auditory representation alongside the visual one. Auditory mapping is performed to distribute an image in time: the three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, and this known capability provided the basic motivation for developing an image-to-sound mapping system.
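An image-to-sound mapping of the general kind described (brightness as a function of frequency and time) can be sketched as follows. The scan direction, frequency range, and scan rate are illustrative assumptions, not the VISOR design values.

```python
import numpy as np

def image_to_sound(image, duration=1.0, fs=8000, f_lo=200.0, f_hi=3000.0):
    """Minimal sketch of an image-to-sound mapping in the spirit of the
    VISOR description: columns are scanned left to right over `duration`
    seconds, each row is assigned a fixed audio frequency (top row =
    highest pitch), and pixel brightness sets that frequency's
    amplitude.  All parameter values are illustrative assumptions."""
    img = np.asarray(image, dtype=float)
    rows, cols = img.shape
    freqs = np.linspace(f_hi, f_lo, rows)              # row -> pitch
    t_col = np.arange(int(duration * fs / cols)) / fs  # time per column
    tones = np.sin(2.0 * np.pi * np.outer(freqs, t_col))
    audio = np.concatenate([img[:, c] @ tones for c in range(cols)])
    return audio / (np.abs(audio).max() + 1e-12)       # normalize

audio = image_to_sound(np.eye(8))  # a diagonal edge becomes a falling sweep
```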

  8. Combining Remote Sensing imagery of both fine and coarse spatial resolution to Estimate Crop Evapotranspiration and quantifying its Influence on Crop Growth Monitoring.

    NASA Astrophysics Data System (ADS)

    Sepulcre-Cantó, Guadalupe; Gellens-Meulenberghs, Françoise; Arboleda, Alirio; Duveiller, Gregory; Piccard, Isabelle; de Wit, Allard; Tychon, Bernard; Bakary, Djaby; Defourny, Pierre

    2010-05-01

    This study was carried out in the framework of the GLOBAM (Global Agricultural Monitoring system by integration of earth observation and modeling techniques) project, whose objective is to fill the methodological gap between the state of the art of local crop monitoring and the operational requirements of global monitoring programs. To achieve this goal, the research aims to develop an integrated approach using remote sensing and crop growth modeling. Evapotranspiration (ET) is a valuable parameter in the crop monitoring context since it provides information on plant water stress, which strongly influences crop development and, by extension, crop yield. To assess crop evapotranspiration over the GLOBAM study areas (300x300 km sites in Northern Europe and Central Ethiopia), a Soil-Vegetation-Atmosphere Transfer (SVAT) model forced with remote sensing and numerical weather prediction data was used. This model runs at a pre-operational level in the framework of the EUMETSAT LSA-SAF (Land Surface Analysis Satellite Application Facility) using SEVIRI and ECMWF data, as well as the ECOCLIMAP database to characterize the vegetation. The model generates ET images at the Meteosat Second Generation (MSG) spatial resolution (3 km at the subsatellite point) with a temporal resolution of 30 min, and monitors the entire MSG disk, which covers Europe, Africa, and part of South America. The SVAT model was run for 2007 using two approaches: the first is the standard pre-operational mode; the second incorporates remote sensing information at various spatial resolutions, ranging from Landsat (30 m) through AWiFS (56 m) and MODIS (250 m) to SEVIRI (3-5 km). The fine spatial resolution data consist of crop type classifications, which enable the identification of areas where pure crop-specific MODIS time series can be compiled and used to derive Leaf Area Index estimates for the most important crops (wheat and maize). The use of this information allowed the vegetation type and its state of development to be characterized more accurately than with the ECOCLIMAP database. Finally, the CASA method was applied, combining the evapotranspiration images with FAPAR (Fraction of Absorbed Photosynthetically Active Radiation) images from LSA-SAF, to obtain Dry Matter Productivity (DMP) and crop yield. The potential of using evapotranspiration obtained from remote sensing in crop growth modeling is studied and discussed. Results of comparing the estimated evapotranspiration with ground truth data are shown, as well as the influence of using high-resolution information to characterize the vegetation in the evapotranspiration estimates. The DMP and yield values obtained with the CASA method are compared with those obtained using crop growth modeling and field data, showing the potential of this simplified remote sensing method for crop monitoring and yield forecasting. The methodology could be applied operationally to the entire MSG disk, allowing continuous crop growth monitoring.

  9. A multi-signal fluorescent probe for simultaneously distinguishing and sequentially sensing cysteine/homocysteine, glutathione, and hydrogen sulfide in living cells

    PubMed Central

    He, Longwei; Yang, Xueling; Xu, Kaixin; Kong, Xiuqi

    2017-01-01

    Biothiols, which share a closely linked network of generation and metabolic pathways, are essential reactive sulfur species (RSS) in cells and play vital roles in human physiology. However, biothiols possess highly similar chemical structures and properties, making it an enormous challenge to discriminate them from each other simultaneously. Herein, we develop a unique fluorescent probe (HMN) for not only simultaneously distinguishing Cys/Hcy, GSH, and H2S from each other, but also sequentially sensing Cys/Hcy/GSH and H2S using a multi-channel fluorescence mode for the first time. When responding to the respective biothiols, the probe exhibits multiple sets of fluorescence signals at three distinct emission bands (blue-green-red). The probe can also sense H2S at different concentration levels through changes of fluorescence at the blue and red emission bands. In addition, HMN is able to discriminate and sequentially sense biothiols in biological environments via three-color fluorescence imaging. We expect that the development of HMN will provide a powerful strategy for designing fluorescent probes for the discrimination and sequential detection of biothiols, and offer a promising tool for exploring the interrelated roles of biothiols in various physiological and pathological conditions. PMID:28989659

  10. Data-intensive multispectral remote sensing of the nighttime Earth for environmental monitoring and emergency response

    NASA Astrophysics Data System (ADS)

    Zhizhin, M.; Poyda, A.; Velikhov, V.; Novikov, A.; Polyakov, A.

    2016-02-01

    Most remote sensing applications rely on daytime visible and infrared images of the Earth's surface. Increases in the number of satellites, their spatial resolution, and the number of simultaneously observed spectral bands ensure a steady growth of data volumes and computational complexity in the remote sensing sciences. Recent advances in nighttime remote sensing stem from the enhanced sensitivity of on-board instruments and from the unique opportunity to observe "pure" emitters in the visible and infrared spectra without contamination from solar heat and reflected light. A candidate set of nighttime emitters observable from low-orbiting and geostationary satellites includes steady states and temporal changes in city and traffic electric lights, fishing boats, high-temperature industrial objects such as steel mills, oil refineries and power plants, forest and agricultural fires, gas flares, volcanic eruptions, and similar catastrophic events. Current satellite instruments can detect ten times more such objects at night than during the day. We present a new data-intensive workflow of nighttime remote sensing algorithms for map-reduce processing of visible and infrared images from the multispectral radiometers flown on the NOAA/NASA Suomi NPP and USGS Landsat 8 satellites; similar radiometers are installed on the new-generation US geostationary GOES-R satellite to be launched in 2016. The new set of algorithms allows us to detect with confidence and track abrupt changes and long-term trends in the energy of city lights and the number of fishing boats, as well as the size, geometry, and temperature of gas flares, and to estimate monthly and yearly flared gas volumes by site or by country. Real-time analysis of nighttime multispectral satellite images with global coverage requires a gigabit network, petabyte data storage, and a parallel compute cluster with more than 20 nodes; to meet these processing requirements, we used the supercomputer at the Kurchatov Institute in Moscow.

  11. Phase and amplitude modification of a laser beam by two deformable mirrors using conventional 4f image encryption techniques

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Rzasa, John Robertson; Davis, Christopher C.

    2017-08-01

    The image encryption and decryption technique using lens components and random phase screens has attracted a great deal of research interest in the past few years. In general, the optical encryption technique can translate a positive image into a nearly white speckle pattern that is impossible to decrypt directly. However, with the right keys, in the form of conjugated random phase screens, the white-noise speckle pattern can be decoded into the original image. We find that the fundamental ideas of image encryption can be borrowed and applied to beam correction through turbulent channels. Based on our detailed analysis, we show that two deformable mirrors, arranged in a fashion similar to the image encryption setup, can generate a large number of controllable phase and amplitude distribution patterns from a collimated Gaussian beam. This result can be further coupled with wavefront sensing techniques to achieve laser beam correction against turbulence distortions. In application, our approach leads to a new type of phase conjugation mirror that could benefit directed energy systems.
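The encryption scheme the abstract builds on is commonly formalized as double random phase encoding in a 4f system: one random phase screen in the input plane and one in the Fourier plane. A sketch of that standard formulation (the deformable-mirror beam shaping itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

def encrypt(img, key1, key2):
    """Double random phase encoding: apply a random phase screen in
    the input plane, then another in the Fourier plane of the 4f
    system, producing a noise-like complex field."""
    return np.fft.ifft2(np.fft.fft2(img * np.exp(2j * np.pi * key1))
                        * np.exp(2j * np.pi * key2))

def decrypt(cipher, key1, key2):
    """Undo the Fourier-plane screen, then the input-plane screen,
    using the conjugate phase keys."""
    field = np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-2j * np.pi * key2))
    return field * np.exp(-2j * np.pi * key1)

img = rng.random((16, 16))                           # toy "positive image"
key1, key2 = rng.random((16, 16)), rng.random((16, 16))
cipher = encrypt(img, key1, key2)
recovered = decrypt(cipher, key1, key2)
```

With matching conjugate keys the recovery is exact up to floating-point error; with wrong keys the output remains speckle-like.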

  12. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    NASA Astrophysics Data System (ADS)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user end. This means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely, and introduce a new family of algorithms for reconstructing HSI from CS measurements with Split Bregman iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager in its Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] configurations, and a hypothetical random sensing model closer to CS theory but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes: an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure the accuracy of the CS models, we compare the scenes reconstructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.
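As a runnable illustration of the CS recovery problem (deliberately simpler than the paper's Split Bregman TV algorithm), a minimal iterative soft-thresholding (ISTA) sketch recovering a synthetic sparse signal from random measurements:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """Recover a sparse x from compressive measurements y = A @ x by
    iterative soft thresholding (ISTA), minimizing
    0.5*||A x - y||^2 + lam*||x||_1.  This is a simple stand-in for
    the Split Bregman scheme discussed in the abstract."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 128)) / np.sqrt(60)   # random CS matrix
x_true = np.zeros(128)
x_true[[5, 40, 77, 120]] = [1.0, -1.0, 0.8, -0.6]  # 4-sparse signal
x_hat = ista(A, A @ x_true)
```

With 60 random measurements of a 4-sparse length-128 signal, the recovery is close to the true signal, illustrating the "fewer measurements than pixels" premise of the abstract.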

  13. QR-decomposition based SENSE reconstruction using parallel architecture.

    PubMed

    Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad

    2018-04-01

    Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time, and implementing advanced MRI algorithms on a parallel architecture (to exploit their inherent parallelism) has great potential to reduce it. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from under-sampled k-space data. At the heart of SENSE lies the inversion of a rectangular encoding matrix. This work presents a novel GPU-based implementation of the SENSE algorithm that employs QR decomposition for this inversion. For a fair comparison, the performance of the proposed GPU-based SENSE reconstruction is evaluated against single-core and multi-core CPU implementations using OpenMP. Several experiments across various acceleration factors (AFs) are performed using multichannel (8, 12, and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that the GPU significantly reduces the computation time of SENSE reconstruction compared to the multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images. Copyright © 2018 Elsevier Ltd. All rights reserved.
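The core inversion step can be sketched in a few lines: for each set of aliased pixel locations, SENSE solves a small rectangular system built from the coil sensitivities, and QR decomposition gives a numerically stable least-squares solution without forming the normal equations. The coil sensitivities and pixel values below are made-up toy numbers.

```python
import numpy as np

def sense_qr(E, y):
    """Solve the SENSE unfolding problem E @ x = y for the unaliased
    pixel vector x via QR decomposition of the rectangular encoding
    matrix, avoiding an explicit (E^H E)^-1.
    E: (n_coils, n_aliased) complex coil-sensitivity encoding matrix,
    y: (n_coils,) folded pixel measurements."""
    Q, R = np.linalg.qr(E)                    # reduced QR: E = Q R
    return np.linalg.solve(R, Q.conj().T @ y)

# toy example: 4 coils unfolding 2 superimposed pixels (AF = 2);
# sensitivities and pixel values are illustrative, not measured data
E = np.array([[1.0, 0.2], [0.8, 0.5], [0.3, 0.9], [0.1, 1.0]],
             dtype=complex)
x_true = np.array([2.0 + 1.0j, -0.5 + 0.3j])
x_hat = sense_qr(E, E @ x_true)
```

In the noise-free full-column-rank case the QR least-squares solution reproduces the true pixel values exactly.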

  14. Shearlet Features for Registration of Remotely Sensed Multitemporal Images

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline

    2015-01-01

    We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.
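As a runnable baseline for the registration step the extracted features feed into (plain Fourier-domain phase correlation, not the shearlet feature matching of the paper):

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer translation that realigns `moving` to
    `ref` by phase correlation: the normalized cross-power spectrum of
    a shifted image pair inverse-transforms to a delta at the shift."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks past the midpoint to negative shifts
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return dy, dx

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                          # synthetic scene
moving = np.roll(ref, shift=(-3, 5), axis=(0, 1))   # known misalignment
shift = phase_correlation_shift(ref, moving)
```

Rolling `moving` by the recovered shift restores alignment with `ref`.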

  15. Simulating optoelectronic systems for remote sensing with SENSOR

    NASA Astrophysics Data System (ADS)

    Boerner, Anko

    2003-04-01

    The consistent end-to-end simulation of airborne and spaceborne remote sensing systems is an important task and sometimes the only way for the adaptation and optimization of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software ENvironment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. It allows the simulation of a wide range of optoelectronic systems for remote sensing. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. Part three consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimization requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and examples of its use are given. The verification of SENSOR is demonstrated.

  16. System design and implementation of digital-image processing using computational grids

    NASA Astrophysics Data System (ADS)

    Shen, Zhanfeng; Luo, Jiancheng; Zhou, Chenghu; Huang, Guangyu; Ma, Weifeng; Ming, Dongping

    2005-06-01

    As a special type of digital image, remotely sensed images play increasingly important roles in our daily lives. Because of the enormous amounts of data involved and the difficulties of data processing and transfer, an important issue for computer and geoscience experts is developing internet technology for rapid remotely sensed image processing. Computational grids can solve this problem effectively: these networks of computer workstations enable the sharing of data and resources, and are used to redress imbalances in network resources and lopsided usage. In China, computational grids combined with spatial-information-processing technology have formed a new technology, spatial-information grids. For remotely sensed images, spatial-information grids work more effectively for network computing, data processing, resource sharing, task cooperation, and so on. This paper focuses mainly on the application of computational grids to digital-image processing. We first describe the architecture of digital-image processing on the basis of computational grids; its implementation is then discussed in detail with respect to middleware technology. The whole network-based intelligent image-processing system is evaluated through experimental analysis of remotely sensed image-processing tasks; the results confirm the feasibility of applying computational grids to digital-image processing.

  17. Quality assessment of remote sensing image fusion using feature-based fourth-order correlation coefficient

    NASA Astrophysics Data System (ADS)

    Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing

    2016-04-01

    In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused images according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was compared with existing widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) and the quality with no reference (QNR) index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. It significantly outperforms the other indices in evaluating fused images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
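
    The exact FFOCC definition follows the paper; as an illustration, a fourth-order correlation coefficient can be built from fourth-order central moments, as in this hedged sketch (the moment-based form below is an assumption, not the paper's formula):

    ```python
    import numpy as np

    def fourth_order_cc(x, y):
        """A plausible fourth-order correlation coefficient from fourth-order
        central moments m_pq = E[(x - mx)^p (y - my)^q]; by Cauchy-Schwarz it
        lies in [0, 1] and equals 1 for identical inputs."""
        x = np.asarray(x, float).ravel()
        y = np.asarray(y, float).ravel()
        dx, dy = x - x.mean(), y - y.mean()
        m22 = np.mean(dx**2 * dy**2)
        m40 = np.mean(dx**4)
        m04 = np.mean(dy**4)
        return m22 / np.sqrt(m40 * m04)
    ```

    In an FFOCC-style assessment, `x` and `y` would be the extracted spatial and spectral feature images of the fused product.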

  18. Hypothesis on human eye perceiving optical spectrum rather than an image

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Szu, Harold

    2015-05-01

    It is common knowledge that we see the world because our eyes perceive an optical image, and a digital camera seems a good analogue of the eye's imaging system. However, signal sensing and imaging on the human retina is very complicated. There are at least five layers of neurons along the signal pathway: photoreceptors (cones and rods), bipolar, horizontal, amacrine and ganglion cells. To sense an optical image, photoreceptors (as sensors) plus ganglion cells (converting to electrical signals for transmission) would seem sufficient, and image sensing would not require a non-uniform distribution of photoreceptors such as the fovea. Some questions are challenging: for example, why do we not perceive the "blind spots" (where nerve fibers exit the eyes)? A similar situation occurs in glaucoma patients, who do not notice their vision loss until 50% or more of the nerves have died. Our hypothesis is that the human retina initially senses the optical (i.e., Fourier) spectrum rather than an optical image. Owing to the symmetry of the Fourier spectrum, the signal lost at a blind spot, or from the dead nerves of glaucoma patients, can be recovered. The eye's logarithmic response to input light intensity is much like displaying Fourier magnitude, and the optics and structure of the human eye satisfy the needs of optical Fourier-spectrum sampling. It remains unclear where and how an inverse Fourier transform is performed in the human visual system to obtain an optical image; phase-retrieval techniques in the compressive sensing domain enable image reconstruction even without phase inputs. A spectrum-based imaging system can potentially tolerate up to 50% bad sensors (pixels), adapt to a large dynamic range (with logarithmic response), etc.

  19. Criteria for the optimal selection of remote sensing optical images to map event landslides

    NASA Astrophysics Data System (ADS)

    Fiorucci, Federica; Giordan, Daniele; Santangelo, Michele; Dutto, Furio; Rossi, Mauro; Guzzetti, Fausto

    2018-01-01

    Landslides leave discernible signs on the land surface, most of which can be captured in remote sensing images. Trained geomorphologists analyse remote sensing images and map landslides through heuristic interpretation of photographic and morphological characteristics. Despite the wide use of remote sensing images for landslide mapping, no attempt has been made to evaluate how image characteristics influence landslide identification and mapping. This paper presents an experiment to determine the effects of optical image characteristics, such as spatial resolution, spectral content and image type (monoscopic or stereoscopic), on landslide mapping. We considered eight maps of the same landslide in central Italy: (i) six maps obtained through expert heuristic visual interpretation of remote sensing images, (ii) one map obtained through a reconnaissance field survey, and (iii) one map obtained through a real-time kinematic (RTK) differential global positioning system (dGPS) survey, which served as a benchmark. The maps were compared pairwise and against the benchmark, and the mismatch between each map pair was quantified by an error index, E. Results show that the map closest to the benchmark delineation of the landslide was obtained from the highest-resolution image where the landslide signature was primarily photographic (in the landslide source and transport area). Conversely, where the landslide signature was mainly morphological (in the landslide deposit), the best mapping result was obtained using the stereoscopic images. Albeit conducted on a single landslide, the experimental results are general and provide useful information for deciding on the optimal imagery for the production of event, seasonal and multi-temporal landslide inventory maps.

  20. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once the image field of view and resolution are chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criterion. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if the target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse, then that sparsity can be used to recover a high-definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling that have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development, before considering the potential impact of, and obstacles to, bringing compressed sensing into routine use in clinical MRI.
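
    The sparsity-based recovery idea can be illustrated with a minimal 1D sketch, assuming an identity sparsifying transform and a simple iterative soft-thresholding scheme; real CS-MRI reconstructions use more elaborate transforms and solvers.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 128
    x_true = np.zeros(n)
    x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5) + 2.0

    # Undersampled "k-space": keep a random half of the Fourier coefficients.
    mask = np.zeros(n, bool)
    mask[rng.choice(n, n // 2, replace=False)] = True
    y = np.fft.fft(x_true) * mask

    def soft(v, t):
        """Soft-thresholding, the proximal operator of the l1 norm."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    # Zero-filled reconstruction (baseline), then iterative soft-thresholding:
    # alternate data consistency in k-space with sparsity in image space.
    x_zf = np.real(np.fft.ifft(y))
    x = x_zf.copy()
    lam = 0.05 * np.abs(x_zf).max()
    for _ in range(300):
        k = np.fft.fft(x)
        k[mask] = y[mask]            # enforce consistency on sampled locations
        x = soft(np.real(np.fft.ifft(k)), lam)

    err_zf = np.linalg.norm(x_zf - x_true)
    err_cs = np.linalg.norm(x - x_true)
    ```

    With a signal that is truly sparse in the image domain, the thresholded reconstruction recovers the spikes far better than simple zero-filling of the missing k-space samples.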

  1. Polarization Imaging Apparatus with Auto-Calibration

    NASA Technical Reports Server (NTRS)

    Zou, Yingyin Kevin (Inventor); Zhao, Hongzhi (Inventor); Chen, Qiushui (Inventor)

    2013-01-01

    A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5 deg, a second variable phase retarder with its optical axis aligned at 45 deg, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders are controlled independently by the computer through a controller unit, which generates a sequence of voltages to control their phase retardations. An auto-calibration procedure is incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as their half-wave voltages. A set of four intensity images, I(sub 0), I(sub 1), I(sub 2) and I(sub 3), of the sample is captured by the imaging sensor when the phase retardations of the VPRs are set at (0,0), (pi,0), (pi,pi) and (pi/2,pi), respectively. The four Stokes components of the Stokes image, S(sub 0), S(sub 1), S(sub 2) and S(sub 3), are then calculated from the four intensity images.
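
    The recovery of the Stokes components from the four intensities can be sketched with standard Mueller calculus. The instrument matrix below is derived from textbook Mueller matrices for the stated retarder angles and an assumed horizontal analyzer; the apparatus' actual calibrated matrix follows the patent.

    ```python
    import numpy as np

    def retarder(theta, delta):
        """Mueller matrix of a linear retarder with fast axis at angle theta
        (rad) and retardance delta (rad)."""
        c, s = np.cos(2 * theta), np.sin(2 * theta)
        cd, sd = np.cos(delta), np.sin(delta)
        return np.array([
            [1, 0, 0, 0],
            [0, c * c + s * s * cd, c * s * (1 - cd), -s * sd],
            [0, c * s * (1 - cd), s * s + c * c * cd, c * sd],
            [0, s * sd, -c * sd, cd],
        ])

    # Horizontal linear polarizer (assumed analyzer orientation).
    POL = 0.5 * np.array([[1, 1, 0, 0], [1, 1, 0, 0],
                          [0, 0, 0, 0], [0, 0, 0, 0]])

    # The four retardation settings (delta1, delta2) quoted in the abstract.
    settings = [(0, 0), (np.pi, 0), (np.pi, np.pi), (np.pi / 2, np.pi)]

    # Each measured intensity is the first Stokes component after the optic
    # train: I_k = row_k . S, with row_k from POL @ VPR2(45deg) @ VPR1(22.5deg).
    A = np.array([(POL @ retarder(np.pi / 4, d2) @ retarder(np.pi / 8, d1))[0]
                  for d1, d2 in settings])

    def stokes_from_intensities(I):
        """Invert the 4x4 instrument matrix to recover (S0, S1, S2, S3)."""
        return np.linalg.solve(A, np.asarray(I, float))
    ```

    Applied per pixel to the four intensity images, this linear inversion yields the four Stokes component images; auto-calibration amounts to correcting `A` for retarder misalignment and half-wave voltage errors.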

  2. Polarization imaging apparatus with auto-calibration

    DOEpatents

    Zou, Yingyin Kevin; Zhao, Hongzhi; Chen, Qiushui

    2013-08-20

    A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5°, a second variable phase retarder with its optical axis aligned at 45°, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller and a computer. The two variable phase retarders are controlled independently by the computer through a controller unit, which generates a sequence of voltages to control their phase retardations. An auto-calibration procedure is incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as their half-wave voltages. A set of four intensity images, I0, I1, I2 and I3, of the sample is captured by the imaging sensor when the phase retardations of the VPRs are set at (0,0), (pi,0), (pi,pi) and (pi/2,pi), respectively. The four Stokes components of the Stokes image, S0, S1, S2 and S3, are then calculated from the four intensity images.

  3. Enhancing Remotely Sensed TIR Data for Public Health Applications: Is West Nile Virus Heat-Related?

    NASA Astrophysics Data System (ADS)

    Weng, Q.; Liu, H.; Jiang, Y.

    2014-12-01

    Public health studies often require thermal infrared (TIR) images at both high temporal and high spatial resolution to retrieve land surface temperature (LST). Currently, however, no single satellite sensor can deliver TIR data at both high temporal and high spatial resolution, a technological limitation that prevents the wide use of remote sensing data in epidemiological studies. To address this, we have developed image fusion techniques to generate highly temporally resolved image data. We downscaled GOES LST data to 15-minute, 1-km resolution to assess community-based heat-related risk in Los Angeles County, California, and simulated ASTER datasets by fusing ASTER and MODIS data to derive biophysical variables, including LST, NDVI, and the normalized difference water index, in order to examine the effects of these environmental characteristics on West Nile virus (WNV) outbreak and dissemination. A spatio-temporal analysis of WNV outbreak and dissemination was conducted by synthesizing the remote sensing variables and mosquito surveillance data, focusing on WNV risk areas in July through September, when mosquito-pool data are sufficient. Moderate- and high-risk areas of WNV infections in mosquitoes were identified for five epidemiological weeks. These identified WNV-risk areas were then collocated in GIS with heat hazard, exposure, and vulnerability maps to answer the question of whether WNV is a heat-related virus. The results show that elevation and built-up conditions were negatively associated with WNV propagation, while LST was positively correlated with viral transmission; NDVI was not significantly associated with WNV transmission. The San Fernando Valley was found to be the most vulnerable to mosquito infections of WNV. This research provides important insights into how high-temporal-resolution remote sensing imagery may be used to study time-dependent events in public health, especially in the operational surveillance and control of vector-borne, water-borne, or other epidemic diseases.

  4. Evaluation of SPOT imagery for the estimation of grassland biomass

    NASA Astrophysics Data System (ADS)

    Dusseux, P.; Hubert-Moy, L.; Corpetti, T.; Vertès, F.

    2015-06-01

    In many regions, a decrease in grasslands and changes in their management, associated with agricultural intensification, have been observed over the last half-century. Such changes in agricultural practices have caused negative environmental effects that include water pollution, soil degradation and biodiversity loss. Moreover, climate-driven changes in grassland productivity could have serious consequences for the profitability of agriculture. The aim of this study was to assess the ability of remotely sensed data with high spatial resolution to estimate grassland biomass in agricultural areas. A vegetation index, the Normalized Difference Vegetation Index (NDVI), and two biophysical variables, the Leaf Area Index (LAI) and the fraction of Vegetation Cover (fCOVER), were computed using five SPOT images acquired during the growing season. In parallel, ground-based information on grassland growth was collected to calculate biomass values. Analysis of the relationship between the variables derived from the remotely sensed data and the biomass observed in the field shows that LAI outperforms NDVI and fCOVER for estimating biomass (R2 values of 0.68 against 0.30 and 0.50, respectively). The squared Pearson correlation coefficient between observed biomass and biomass estimated using LAI derived from SPOT images reached 0.73. Biomass maps generated from the remotely sensed data were then used to estimate grass reserves at the farm scale, with a view to operational monitoring and forecasting.
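
    The NDVI mentioned above follows the standard band-ratio definition; a minimal sketch (band names are generic reflectances, not tied to specific SPOT band numbers):

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-12):
        """Normalized Difference Vegetation Index from near-infrared and red
        reflectances: (NIR - Red) / (NIR + Red). eps guards against division
        by zero over dark pixels; works elementwise on arrays."""
        nir = np.asarray(nir, float)
        red = np.asarray(red, float)
        return (nir - red) / (nir + red + eps)
    ```

    Dense green vegetation (high NIR, low red reflectance) gives values approaching 1, while bare soil and water give values near or below 0.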

  5. Exploration of genetically encoded voltage indicators based on a chimeric voltage sensing domain.

    PubMed

    Mishina, Yukiko; Mutoh, Hiroki; Song, Chenchen; Knöpfel, Thomas

    2014-01-01

    Deciphering how the brain generates cognitive function from patterns of electrical signals is one of the ultimate challenges in neuroscience. To this end, it would be highly desirable to monitor the activities of very large numbers of neurons while an animal engages in complex behaviors. Optical imaging of electrical activity using genetically encoded voltage indicators (GEVIs) has the potential to meet this challenge. Currently prevalent GEVIs are based on the voltage-sensitive fluorescent protein (VSFP) prototypical design or on the voltage-dependent state transitions of microbial opsins. We recently introduced a new VSFP design in which the voltage-sensing domain (VSD) is sandwiched between a fluorescence resonance energy transfer pair of fluorescent proteins (termed VSFP-Butterflies), and we also demonstrated a series of chimeric VSDs in which portions of the VSD of the Ciona intestinalis voltage-sensitive phosphatase are substituted by homologous portions of a voltage-gated potassium channel subunit. These chimeric VSDs had faster sensing kinetics than the native Ci-VSD. Here, we describe a new set of VSFPs that combine chimeric VSDs with the Butterfly structure. We show that these chimeric VSFP-Butterflies can report membrane voltage oscillations of up to 200 Hz in cultured cells and can report sensory-evoked cortical population responses in living mice. This class of GEVIs may be suitable for imaging of brain rhythms in behaving mammals.

  6. New optical sensor systems for high-resolution satellite, airborne and terrestrial imaging systems

    NASA Astrophysics Data System (ADS)

    Eckardt, Andreas; Börner, Anko; Lehmann, Frank

    2007-10-01

    The department of Optical Information Systems (OS) at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) has more than 25 years of experience with high-resolution imaging technology. Technology changes in detector development, together with significant improvements in manufacturing accuracy and ongoing engineering research, define the next generation of spaceborne sensor systems for Earth observation and remote sensing. The combination of large TDI lines, intelligent synchronization control, fast-readable sensors and new focal-plane concepts opens the door to new remote sensing instruments. This class of instruments enables high-resolution sensor systems, in both geometry and radiometry, and data products such as 3D virtual reality. Systemic approaches are essential for the design of such complex sensor systems for dedicated tasks. The system theory of the instrument inside a simulated environment is the starting point of the optimization process for the optical, mechanical and electrical designs. Single modules and the entire system have to be calibrated and verified, and suitable procedures must be defined at component, module and system level for the assembly, test and verification process. This kind of development strategy allows hardware-in-the-loop design. The paper gives an overview of current activities at DLR in the field of innovative sensor systems for photogrammetric and remote sensing purposes.

  7. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  8. Hyperspectral remote sensing image retrieval system using spectral and texture features.

    PubMed

    Zhang, Jing; Geng, Wenhao; Liang, Xi; Li, Jiafeng; Zhuo, Li; Zhou, Qianlan

    2017-06-01

    Although many content-based image retrieval systems have been developed, few studies have focused on hyperspectral remote sensing images. In this paper, a hyperspectral remote sensing image retrieval system based on spectral and texture features is proposed. The main contributions are fourfold: (1) considering the "mixed pixels" in hyperspectral images, endmembers are extracted as spectral features by an improved automatic pixel purity index algorithm, and texture features are extracted with the gray-level co-occurrence matrix; (2) a similarity measurement is designed for the retrieval system, in which the similarity of spectral features is measured with a mixed measure combining spectral information divergence and spectral angle match, and the similarity of texture features is measured with Euclidean distance; (3) considering the limited ability of the human visual system, the retrieval results are returned after synthesizing true-color images based on the hyperspectral image characteristics; (4) the retrieval results are optimized by adjusting the feature weights of the similarity measurements according to the user's relevance feedback. Experimental results on NASA data sets show that our system achieves retrieval performance superior to existing hyperspectral analysis schemes.
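
    The spectral similarity measures named in contribution (2) can be sketched as follows; whether the paper combines SID and SAM exactly as in `sid_sam` below is an assumption (the product form is one mixed measure used in the hyperspectral literature).

    ```python
    import numpy as np

    def sam(x, y):
        """Spectral angle match: the angle (rad) between two spectra."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    def sid(x, y, eps=1e-12):
        """Spectral information divergence: symmetric KL divergence between
        spectra normalized to probability distributions."""
        p = np.asarray(x, float) + eps
        q = np.asarray(y, float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q) + q * np.log(q / p)))

    def sid_sam(x, y):
        """An assumed SID-SAM mixed measure: SID(x, y) * tan(SAM(x, y))."""
        return sid(x, y) * np.tan(sam(x, y))
    ```

    SAM is insensitive to overall brightness (it compares only spectral shape), while SID compares the band-wise probability profiles; combining them sharpens discrimination between similar spectra.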

  9. Automatic Tracking Of Remote Sensing Precipitation Data Using Genetic Algorithm Image Registration Based Automatic Morphing: September 1999 Storm Floyd Case Study

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.

    Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image-processing technique of digital image morphing has been used. However, digital morphing has been applied manually, which is time consuming; to avoid human intervention, an automatic morphing procedure is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, an automatic morphing technique was integrated with a genetic algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. A time series of NOWRAD data from storm Floyd, which occurred in the eastern US on September 16, 1999, was used at 00:00, 01:00, 02:00, 03:00, and 04:00 am. The GRAM technique was applied to the data collected at 00:00 and 04:00 am; these images were also manually morphed. Images at 01:00, 02:00 and 03:00 am were interpolated by GRAM and by manual morphing, and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing: the correlation coefficients for the manually morphed images are 0.905, 0.900, and 0.905 at 01:00, 02:00, and 03:00 am, while the corresponding correlation coefficients for the GRAM technique are 0.946, 0.911, and 0.913, respectively. Index terms: Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD

  10. Crack displacement sensing and measurement in concrete using circular grating moire fringes and pattern matching

    NASA Astrophysics Data System (ADS)

    Chan, H. M.; Yen, K. S.; Ratnam, M. M.

    2008-09-01

    The moire method has been extensively studied in the past and applied in various engineering applications. Several techniques are available for generating moire fringes, including moire interferometry, projection moire, shadow moire and moire deflectometry. Most of these methods use the superposition of linear gratings to generate the moire patterns; the use of non-linear gratings, such as circular, radial and elongated gratings, has received less attention from the research community. The potential of non-linear gratings in engineering measurement has been realized in a limited number of applications, such as rotation measurement, measurement of linear displacement, measurement of expansion coefficients of materials and measurement of strain distribution. In this work, circular gratings of different pitch were applied to the sensing and measurement of crack displacement in concrete structures. Gratings of pitch 0.50 mm and 0.55 mm were generated using computer software and attached to two overlapping acrylic plates bonded to either side of the crack. The resulting moire patterns were captured using a standard digital camera and compared with a set of reference patterns generated using a precision positioning stage. Using several image pre-processing stages, such as filtering and morphological operations, followed by pattern matching, displacements along two orthogonal axes can be detected with a resolution of 0.05 mm.
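
    For linear gratings, superposing pitches p1 and p2 produces beat fringes of pitch p1*p2/|p1 - p2|; the same magnification principle underlies the circular-grating arrangement (the relation below is the standard linear-grating one, not the paper's full analysis of circular fringes).

    ```python
    def moire_pitch(p1, p2):
        """Beat (moire) fringe pitch produced by superposing two parallel
        gratings of pitches p1 and p2 (same units)."""
        return p1 * p2 / abs(p1 - p2)

    # Gratings of 0.50 mm and 0.55 mm pitch, as used in the paper,
    # yield fringes of pitch ~5.5 mm: roughly a 10x geometric magnification
    # of the underlying relative displacement.
    pitch = moire_pitch(0.50, 0.55)
    ```

    This magnification is what lets a standard digital camera resolve crack displacements down to 0.05 mm.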

  11. Linear- and Repetitive Feature Detection Within Remotely Sensed Imagery

    DTIC Science & Technology

    2017-04-01

    applicable to Python or other programming languages with image-processing capabilities. 4.1 Classification machine learning The first methodology uses...remotely sensed images that are in panchromatic or true-color formats. Image-processing techniques, including Hough transforms, machine learning, and...data fusion ... 6.3 Context-based processing

  12. Quantification of Local Warming Trend: A Remote Sensing-Based Approach

    PubMed Central

    Rahaman, Khan Rubayet; Hassan, Quazi K.

    2017-01-01

    Understanding warming trends at the local level is critical, and the development of relevant adaptation and mitigation policies at those levels is quite challenging. Here, our overall goal was to generate a local warming trend map at 1 km spatial resolution by using: (i) Moderate Resolution Imaging Spectroradiometer (MODIS)-based 8-day composite surface temperature data; (ii) weather station-based yearly average air temperature data; and (iii) air temperature normal (i.e., 30-year average) data over the Canadian province of Alberta during the period 1961–2010. We analysed the station-based air temperature data to establish relationships between the air temperature normals and yearly average air temperatures, in order to facilitate the selection of year-specific MODIS-based surface temperature data. These MODIS data, in conjunction with the weather station-based air temperature normals, were then used to model local warming trends. We observed that almost 88% of the area of the province experienced warming trends (i.e., up to 1.5°C). The study concluded that remote sensing technology could be useful for delineating generic trends associated with local warming. PMID:28072857
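
    The paper's trend model combines MODIS surface temperatures with station normals; the core per-pixel least-squares trend can be sketched on a plain annual temperature stack (a simplification of the actual method):

    ```python
    import numpy as np

    def warming_trend(stack, years):
        """Per-pixel linear trend (deg C per year) from an annual temperature
        stack of shape (n_years, rows, cols), via the closed-form
        least-squares slope sum(t * anomaly) / sum(t^2)."""
        t = np.asarray(years, float)
        t = t - t.mean()
        anom = stack - stack.mean(axis=0)      # remove each pixel's mean
        return np.tensordot(t, anom, axes=1) / np.sum(t * t)

    # Synthetic check: every pixel warms at 0.03 deg C/yr over 1961-2010.
    years = np.arange(1961, 2011)
    base = 4.0 + 0.03 * (years - years[0])
    stack = base[:, None, None] * np.ones((1, 2, 2))
    trend = warming_trend(stack, years)
    ```

    Applied to a 1-km stack, the slope image is the warming trend map; the paper additionally anchors the satellite record to station-based normals.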

  13. A Grassroots Remote Sensing Toolkit Using Live Coding, Smartphones, Kites and Lightweight Drones

    PubMed Central

    Anderson, K.; Griffiths, D.; DeBell, L.; Hancock, S.; Duffy, J. P.; Shutler, J. D.; Reinhardt, W. J.; Griffiths, A.

    2016-01-01

    This manuscript describes the development of an android-based smartphone application for capturing aerial photographs and spatial metadata automatically, for use in grassroots mapping applications. The aim of the project was to exploit the plethora of on-board sensors within modern smartphones (accelerometer, GPS, compass, camera) to generate ready-to-use spatial data from lightweight aerial platforms such as drones or kites. A visual coding ‘scheme blocks’ framework was used to build the application (‘app’), so that users could customise their own data capture tools in the field. The paper reports on the coding framework, then shows the results of test flights from kites and lightweight drones and finally shows how open-source geospatial toolkits were used to generate geographical information system (GIS)-ready GeoTIFF images from the metadata stored by the app. Two Android smartphones, a high-specification OnePlus One handset and a lower-cost Acer Liquid Z3 handset, were used to test the operational limits of the app on phones with different sensor sets. We demonstrate that best results were obtained when the phone was attached to a stable single-line kite or to a gliding drone. Results show that engine or motor vibrations from powered aircraft required dampening to ensure capture of high quality images. We demonstrate how the products generated from the open-source processing workflow are easily used in GIS. The app can be downloaded freely from the Google store by searching for ‘UAV toolkit’ (UAV toolkit 2016), and used wherever an Android smartphone and aerial platform are available to deliver rapid spatial data (e.g. in supporting decision-making in humanitarian disaster-relief zones, in teaching or for grassroots remote sensing and democratic mapping). PMID:27144310

  14. A Grassroots Remote Sensing Toolkit Using Live Coding, Smartphones, Kites and Lightweight Drones.

    PubMed

    Anderson, K; Griffiths, D; DeBell, L; Hancock, S; Duffy, J P; Shutler, J D; Reinhardt, W J; Griffiths, A

    2016-01-01

    This manuscript describes the development of an android-based smartphone application for capturing aerial photographs and spatial metadata automatically, for use in grassroots mapping applications. The aim of the project was to exploit the plethora of on-board sensors within modern smartphones (accelerometer, GPS, compass, camera) to generate ready-to-use spatial data from lightweight aerial platforms such as drones or kites. A visual coding 'scheme blocks' framework was used to build the application ('app'), so that users could customise their own data capture tools in the field. The paper reports on the coding framework, then shows the results of test flights from kites and lightweight drones and finally shows how open-source geospatial toolkits were used to generate geographical information system (GIS)-ready GeoTIFF images from the metadata stored by the app. Two Android smartphones, a high-specification OnePlus One handset and a lower-cost Acer Liquid Z3 handset, were used to test the operational limits of the app on phones with different sensor sets. We demonstrate that best results were obtained when the phone was attached to a stable single-line kite or to a gliding drone. Results show that engine or motor vibrations from powered aircraft required dampening to ensure capture of high quality images. We demonstrate how the products generated from the open-source processing workflow are easily used in GIS. The app can be downloaded freely from the Google store by searching for 'UAV toolkit' (UAV toolkit 2016), and used wherever an Android smartphone and aerial platform are available to deliver rapid spatial data (e.g. in supporting decision-making in humanitarian disaster-relief zones, in teaching or for grassroots remote sensing and democratic mapping).

  15. Distributed Compressive Sensing vs. Dynamic Compressive Sensing: Improving the Compressive Line Sensing Imaging System through Their Integration

    DTIC Science & Technology

    2015-01-01

    streak tube imaging Lidar [15]. Nevertheless, instead of a one-dimensional (1D) fan beam, a laser source modulates the digital micromirror device (DMD) and...Trans. Inform. Theory, vol. 52, pp. 1289-1306, 2006. [10] D. Dudley, W. Duncan and J. Slaughter, "Emerging Digital Micromirror Device (DMD) Applications

  16. Remote sensing fire and fuels in southern California

    Treesearch

    Philip Riggan; Lynn Wolden; Bob Tissell; David Weise; J. Coen

    2011-01-01

    Airborne remote sensing at infrared wavelengths has the potential to quantify large-fire properties related to energy release or intensity, residence time, fuel-consumption rate, rate of spread, and soil heating. Remote sensing at a high temporal rate can track fire-line outbreaks and acceleration and spotting ahead of a fire front. Yet infrared imagers and imaging...

  17. Anisotropy of thermal infrared remote sensing over urban areas : assessment from airborne data and modeling approach

    NASA Astrophysics Data System (ADS)

    Hénon, A.; Mestayer, P.; Lagouarde, J.-P.; Lee, J. H.

    2009-09-01

    Due to the morphological complexity of the urban canopy and to the variability in thermal properties of the building materials, the heterogeneity of the surface temperatures generates a strong directional anisotropy of thermal infrared remote sensing signal. Thermal infrared (TIR) data obtained with an airborne FLIR camera over Toulouse (France) city centre during the CAPITOUL experiment (feb. 2004 - feb. 2005) show brightness temperature anisotropies ranging from 3 °C by night to more than 10 °C by sunny days. These data have been analyzed in view of developing a simple approach to correct TIR satellite remote sensing from the canopy-generated anisotropy, and to further evaluate the sensible heat fluxes. The methodology is based on the identification of 6 different classes of surfaces: roofs, walls and grounds, sunlit or shaded, respectively. The thermo-radiative model SOLENE is used to simulate, with a 1 m resolution computational grid, the surface temperatures of an 18000 m² urban district, in the same meteorological conditions as during the observation. A pixel-by-pixel comparison with both hand-held temperature measurements and airborne camera images allows to assess the actual values of the radiative and thermal parameters of the scene elements. SOLENE is then used to simulate a generic street-canyon geometry, whose sizes average the morphological parameters of the actual streets in the district, for 18 different geographical orientations. The simulated temperatures are then integrated for different viewing positions, taking into account shadowing and masking, and directional temperatures are determined for the 6 surface classes. The class ratios in each viewing direction are derived from images of the district generated by using the POVRAY software, and used to weigh the temperatures of each class and to compute the resulting directional brightness temperature at the district scale for a given sun direction (time in the day). 
Simulated and measured anisotropies are finally compared for several flights over Toulouse in summer and winter. An inverse method is further proposed to obtain the surface temperatures from the directional brightness temperatures, which may be extended to deduce the sensible heat fluxes separately from the buildings and from the ground.
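The class-weighting step described in this abstract can be sketched in a few lines: per-class directional temperatures are combined using the fraction of the view occupied by each class. The class names, temperatures, and ratios below are illustrative assumptions, not data from the CAPITOUL experiment or SOLENE.

```python
# Minimal sketch of the class-ratio-weighted directional brightness
# temperature computation. All numeric values are hypothetical.

def directional_brightness_temperature(class_temps, class_ratios):
    """Weight per-class surface temperatures (degrees C) by the fraction
    of the image each class occupies in a given viewing direction."""
    assert abs(sum(class_ratios.values()) - 1.0) < 1e-6, "ratios must sum to 1"
    return sum(class_temps[c] * class_ratios[c] for c in class_temps)

# Six surface classes: roofs, walls, grounds; each sunlit or shaded.
temps = {"roof_sun": 45.0, "roof_shade": 28.0,
         "wall_sun": 38.0, "wall_shade": 26.0,
         "ground_sun": 41.0, "ground_shade": 24.0}
ratios_nadir = {"roof_sun": 0.30, "roof_shade": 0.10,
                "wall_sun": 0.05, "wall_shade": 0.05,
                "ground_sun": 0.35, "ground_shade": 0.15}

print(round(directional_brightness_temperature(temps, ratios_nadir), 2))  # 37.45
```

A fuller treatment would weight radiances rather than temperatures, but the abstract describes a direct weighting of class temperatures, which is what is shown here.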

  18. Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew D.

    Determining optimal imaging settings and best practices for capturing aerial imagery with consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant contemporary literature on the use of consumer-grade DSLR cameras for remote sensing and on the best practices associated with their use. The main radiometric control settings on a DSLR camera, namely exposure value (EV), white balance (WB), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. Testing was conducted from a terrestrial, rather than an airborne, collection platform because of the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the intended application will in part dictate the lowest usable f-stop, allowing the user to select a more optimal shutter speed and ISO. 
The single most important capture variable is exposure bias (EV): a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occur around -0.7 to -0.3 EV of exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on remote sensing image pairs and their influence on image-based change detection.
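The apparent image motion (AIM) blur mentioned above can be estimated to first order as the distance the platform travels during the exposure, divided by the ground sample distance (GSD). The formula and example numbers below are standard first-order assumptions for illustration, not values taken from this study.

```python
# Rough first-order sketch of apparent image motion (AIM) blur
# estimation. All numeric inputs are hypothetical.

def aim_blur_pixels(ground_speed_m_s, exposure_s, gsd_m):
    """Forward-motion blur, in pixels: ground distance travelled during
    the exposure divided by the ground sample distance."""
    return ground_speed_m_s * exposure_s / gsd_m

# e.g. a 50 m/s platform, 1/1000 s shutter, 5 cm GSD gives about
# one pixel of motion blur.
print(aim_blur_pixels(50.0, 1 / 1000, 0.05))
```

Keeping this value well below one pixel is the usual rationale for trading a shorter shutter speed against a higher ISO or wider aperture.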

  19. Self-Referenced Smartphone-Based Nanoplasmonic Imaging Platform for Colorimetric Biochemical Sensing.

    PubMed

    Wang, Xinhao; Chang, Te-Wei; Lin, Guohong; Gartia, Manas Ranjan; Liu, Gang Logan

    2017-01-03

    Colorimetric sensors usually suffer from errors caused by variation in light source intensity, the type of light source, the Bayer filter algorithm, and the sensitivity of the camera to incoming light. Here, we demonstrate a self-referenced portable smartphone-based plasmonic sensing platform that integrates an internal reference sample with an image processing method to perform colorimetric sensing. Two sensing principles based on unique nanoplasmonic phenomena from a nanostructured plasmonic sensor, named nanoLCA (nano Lycurgus cup array), were demonstrated for colorimetric biochemical sensing: liquid refractive index sensing and optical absorbance enhancement sensing. Refractive indices of colorless liquids were measured by simple smartphone imaging and color analysis. Optical absorbance enhancement in the colorimetric biochemical assay was achieved by matching the plasmon resonance wavelength to the chromophore's absorbance peak wavelength. This sensing mechanism improved the limit of detection (LoD) by 100 times in a microplate reader format. Compared with a traditional colorimetric assay such as urine test strips, the smartphone plasmon-enhanced colorimetric sensing system provided a 30-fold improvement in LoD. The platform was applied to simulated urine testing to precisely identify samples with higher protein concentration, demonstrating the potential of the smartphone plasmonic resonance sensing system for point-of-care testing and early detection of kidney disease.
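The self-referencing idea described in this abstract can be illustrated with a simple ratio: dividing the mean RGB of the sample region by the mean RGB of an internal reference region cancels multiplicative variation in illumination and camera response. The region pixel values below are hypothetical, and this sketch is only an illustration of the principle, not the paper's image processing pipeline.

```python
# Illustrative sketch of self-referenced colorimetric normalization.
# Pixel values are hypothetical (R, G, B) tuples.

def mean_rgb(pixels):
    """Average each of the three color channels over a region."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def self_referenced_rgb(sample_pixels, reference_pixels):
    """Channel-wise ratio of sample to reference means; a change in
    overall illumination scales both regions and cancels out."""
    s = mean_rgb(sample_pixels)
    r = mean_rgb(reference_pixels)
    return tuple(si / ri for si, ri in zip(s, r))

sample = [(120, 60, 30), (124, 62, 34)]        # colored assay region
reference = [(200, 200, 200), (204, 204, 204)]  # neutral reference region

print(tuple(round(v, 3) for v in self_referenced_rgb(sample, reference)))
# prints (0.604, 0.302, 0.158)
```

Because both regions are imaged in the same frame, a brighter or dimmer light source multiplies numerator and denominator alike, leaving the ratio unchanged.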

  20. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor)

    2015-01-01

    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented that integrates novel functions and algorithms within a novel hardware architecture, enabling efficient on-chip implementation.
