Sample records for pixel super-resolution algorithm

  1. Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy

    NASA Astrophysics Data System (ADS)

    Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan

    2016-03-01

    Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel-array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in, e.g., mobile phones. Here we report, for the first time, a wavelength scanning based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor-array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength scanning based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited settings.
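
The phase-unwrapping benefit of closely spaced wavelengths can be illustrated with the standard two-wavelength (synthetic-wavelength) relation. The sketch below is a generic illustration of that principle, not the authors' reconstruction code; the function and variable names are hypothetical:

```python
import numpy as np

def two_wavelength_opd(phi1, phi2, lam1, lam2):
    """Estimate optical path difference (OPD) from two wrapped phase maps
    recorded at nearby wavelengths, via the synthetic (beat) wavelength."""
    lam_synth = lam1 * lam2 / abs(lam2 - lam1)     # synthetic wavelength
    dphi = np.angle(np.exp(1j * (phi1 - phi2)))    # re-wrapped phase difference
    return lam_synth * dphi / (2 * np.pi)          # OPD in the units of lam1/lam2
```

The result is unambiguous while the OPD stays within half the synthetic wavelength, which is why small wavelength steps (large synthetic wavelength) make unwrapping robust.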

  2. Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution

    PubMed Central

    Bishara, Waheb; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan

    2010-01-01

    We demonstrate lensfree holographic microscopy on a chip to achieve ~0.6 µm spatial resolution corresponding to a numerical aperture of ~0.5 over a large field-of-view of ~24 mm2. By using partially coherent illumination from a large aperture (~50 µm), we acquire lower resolution lensfree in-line holograms of the objects with unit fringe magnification. For each lensfree hologram, the pixel size at the sensor chip limits the spatial resolution of the reconstructed image. To circumvent this limitation, we implement a sub-pixel shifting based super-resolution algorithm to effectively recover much higher resolution digital holograms of the objects, permitting sub-micron spatial resolution to be achieved across the entire sensor chip active area, which is also equivalent to the imaging field-of-view (24 mm2) due to unit magnification. We demonstrate the success of this pixel super-resolution approach by imaging patterned transparent substrates, blood smear samples, as well as Caenorhabditis elegans. PMID:20588977
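
The core of sub-pixel-shift pixel super-resolution is to place samples from laterally displaced low-resolution frames onto a common finer grid. A minimal nearest-neighbor shift-and-add sketch of that general idea (not the paper's actual algorithm; names and parameters are illustrative):

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, factor):
    """Naive shift-and-add pixel super-resolution.

    lr_frames : list of 2-D arrays (low-resolution frames)
    shifts    : list of (dy, dx) sub-pixel shifts, in LR pixel units
    factor    : integer upsampling factor of the HR grid
    """
    h, w = lr_frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(hi)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # Map each LR sample onto the nearest HR grid position.
        ys = (np.arange(h)[:, None] + dy) * factor
        xs = (np.arange(w)[None, :] + dx) * factor
        yi = np.clip(np.round(ys).astype(int), 0, h * factor - 1)
        xi = np.clip(np.round(xs).astype(int), 0, w * factor - 1)
        yb = np.broadcast_to(yi, (h, w))
        xb = np.broadcast_to(xi, (h, w))
        np.add.at(hi, (yb, xb), frame)       # accumulate samples
        np.add.at(weight, (yb, xb), 1.0)     # count samples per HR cell
    # Average where samples landed; unvisited cells stay zero
    # (a real implementation would interpolate these holes).
    return hi / np.maximum(weight, 1.0)
```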

  3. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The idea behind super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. By calculating pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a non-iterative algorithm, unlike traditional approaches based on computationally intensive iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
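
The coarse-to-fine strategy described above (register each frame, upsample with bi-cubic interpolation, then take pixel-wise medians) can be sketched as follows, assuming the inter-frame shifts have already been estimated by registration; this is an illustrative simplification, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, zoom

def median_sr(frames, shifts, factor):
    """Coarse-to-fine SR: upsample and align every frame, then fuse with a
    pixel-wise median, which is robust to registration outliers."""
    stack = []
    for f, (dy, dx) in zip(frames, shifts):
        up = zoom(f, factor, order=3)                       # bi-cubic upsampling
        aligned = nd_shift(up, (dy * factor, dx * factor), order=3)
        stack.append(aligned)
    return np.median(np.stack(stack), axis=0)               # non-iterative fusion
```

Because the fusion is a single median over the aligned stack, there is no iterative optimization loop, which is the source of the method's efficiency.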

  4. Super-resolved refocusing with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu

    2011-03-01

    This paper presents an approach to enhance the resolution of refocused images by super-resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which defines the super-resolving ability, is mathematically derived by considering the plenoptic camera as an equivalent camera array. We implement a simulation to demonstrate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using maximum a posteriori (MAP) super-resolution algorithms. Without other degradation effects in simulation, the super-resolved image achieves a resolution as high as predicted by the proposed model. We also build an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution. In contrast, we implement the super-resolved refocusing methods and recover an image with more spatial details. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image quality metrics such as peak signal-to-noise ratio (PSNR).
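
The decomposition of a raw plenoptic image into mutually shifted low-resolution angular images can be sketched as below, assuming an idealized sensor with an n-by-n pixel patch under every microlens (a simplification of the paper's camera model; the function name is hypothetical):

```python
import numpy as np

def angular_views(raw, n):
    """Split a raw plenoptic image into n*n low-resolution angular sub-images.

    View (u, v) collects the pixel at offset (u, v) under every microlens,
    i.e. a strided slice of the raw sensor image.
    """
    return [raw[u::n, v::n] for u in range(n) for v in range(n)]
```

For a scene plane at a given depth, neighbouring views exhibit sub-pixel relative shifts; it is exactly this shift structure that a MAP super-resolution step can exploit to reconstruct a high-resolution refocused image.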

  5. Single image super-resolution via regularized extreme learning regression for imagery from microgrid polarimeters

    NASA Astrophysics Data System (ADS)

    Sargent, Garrett C.; Ratliff, Bradley M.; Asari, Vijayan K.

    2017-08-01

    The advantage of division-of-focal-plane imaging polarimeters is their ability to obtain temporally synchronized intensity measurements across a scene; however, they sacrifice spatial resolution in doing so due to the spatially modulated arrangement of their pixel-level polarizers, which often results in aliased imagery. Here, we propose a super-resolution method based upon two previously trained extreme learning machines (ELMs) that attempt to recover missing high-frequency and low-frequency content beyond the spatial resolution of the sensor. This yields a computationally fast and simple way of recovering lost high- and low-frequency content from demosaicked raw microgrid polarimetric imagery. The proposed method outperforms other state-of-the-art single-image super-resolution algorithms in terms of structural similarity and peak signal-to-noise ratio.

  6. Sub-pixel mapping of hyperspectral imagery using super-resolution

    NASA Astrophysics Data System (ADS)

    Sharma, Shreya; Sharma, Shakti; Buddhiraju, Krishna M.

    2016-04-01

    With the development of remote sensing technologies, it has become possible to obtain an overview of landscape elements, which helps in studying changes on the earth's surface due to climatic, geological, geomorphological and human activities. Remote sensing measures the electromagnetic radiation from the earth's surface and matches the observed signature against the known standard signatures of various targets. However, a problem arises when image classification techniques assume pixels to be pure. In hyperspectral imagery, images have high spectral resolution but poor spatial resolution; the spectra obtained are therefore often contaminated by the presence of mixed pixels, which causes misclassification. To utilise this rich spectral information, the spatial resolution has to be enhanced, yet many factors make spatial resolution one of the most expensive and hardest properties to improve in imaging systems. To solve this problem, hyperspectral images are post-processed to retrieve more information from the already acquired data. The class of algorithms that enhances the spatial resolution of images by dividing them into sub-pixels is known as super-resolution, and considerable research has been done in this domain. In this paper, we propose a new method for super-resolution based on ant colony optimization and review the popular methods of sub-pixel mapping of hyperspectral images along with their comparative analysis.

  7. Three-Dimensional Super-Resolution: Theory, Modeling, and Field Tests Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Vincent E.; Hines, Glenn; Pierrottet, Diego; Reisse, Robert

    2014-01-01

    Many flash lidar applications continue to demand higher three-dimensional image resolution beyond the current state-of-the-art technology of the detector arrays and their associated readout circuits. Even with the available number of focal plane pixels, the required number of photons for illuminating all the pixels may impose impractical requirements on the laser pulse energy or the receiver aperture size. Therefore, image resolution enhancement by means of a super-resolution algorithm in near real time presents a very attractive solution for a wide range of flash lidar applications. This paper describes a super-resolution technique and illustrates its performance and merits for generating three-dimensional image frames at a video rate.

  8. Portable and cost-effective pixel super-resolution on-chip microscope for telemedicine applications.

    PubMed

    Bishara, Waheb; Sikora, Uzair; Mudanyali, Onur; Su, Ting-Wei; Yaglidere, Oguzhan; Luckhart, Shirley; Ozcan, Aydogan

    2011-01-01

    We report a field-portable lensless on-chip microscope with a lateral resolution of <1 μm and a large field-of-view of ~24 mm2. This microscope is based on digital in-line holography and a pixel super-resolution algorithm to process multiple lensfree holograms and obtain a single high-resolution hologram. In its compact and cost-effective design, we utilize 23 light emitting diodes butt-coupled to 23 multi-mode optical fibers, and a simple optical filter, with no moving parts. Weighing only ~95 grams, this field-portable microscope demonstrates its performance by imaging various objects including human malaria parasites in thin blood smears.

  9. An Example-Based Super-Resolution Algorithm for Selfie Images

    PubMed Central

    William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep

    2016-01-01

    A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, their fine details are lost. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera, using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine detail and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior which learns the correspondence between the LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, requiring less than 3 seconds to super-resolve an LR selfie, and effective, preserving sharp details without introducing counterfeit fine details. PMID:27064500

  10. Super-resolution for imagery from integrated microgrid polarimeters.

    PubMed

    Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M

    2011-07-04

    Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.
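
A single-channel regularized-least-squares SR solver of the general kind mentioned above can be sketched with plain gradient descent, here using an integer-shift-then-decimate forward model and a simple Tikhonov penalty. The paper's multi-channel, polarimetric formulation is considerably more elaborate; all names and parameters below are illustrative:

```python
import numpy as np

def rls_sr(frames, shifts, factor, lam=0.01, step=0.1, iters=50):
    """Multi-frame SR via regularized least squares, solved by gradient descent.

    Forward model: each LR frame is the HR image circularly shifted by an
    integer number of HR pixels, then decimated by `factor`.
    Objective: sum_k ||A_k x - y_k||^2 + lam * ||x||^2.
    """
    h, w = frames[0].shape
    x = np.zeros((h * factor, w * factor))
    for _ in range(iters):
        grad = 2 * lam * x                                   # Tikhonov term
        for y, (dy, dx) in zip(frames, shifts):
            xs = np.roll(x, (-dy, -dx), axis=(0, 1))         # apply shift
            resid = xs[::factor, ::factor] - y               # decimate, residual
            up = np.zeros_like(x)
            up[::factor, ::factor] = resid                   # adjoint of decimation
            grad += 2 * np.roll(up, (dy, dx), axis=(0, 1))   # adjoint of shift
        x -= step * grad
    return x
```

With shifts covering all sub-pixel phases, the iteration converges to (approximately) the underlying HR image, slightly shrunk by the regularizer.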

  11. A novel super-resolution camera model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming at super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of consecutive low-resolution (LR) images can be captured and stored in real time, reflecting the randomness of the displacements. The low-resolution image sequences carry different redundant information and certain prior information, which makes it possible to restore a super-resolution image faithfully and effectively. A sampling analysis is used to derive the reconstruction principle of super-resolution and the theoretically achievable degree of resolution improvement. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements, modeling the unknown high-resolution image, motion parameters, and unknown model parameters in one hierarchical Bayesian framework. Utilizing sub-pixel registration, a super-resolution image of the scene can be reconstructed. Reconstruction results from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining images with higher resolution at currently available hardware levels.

  12. Pixel-super-resolved lensfree holography using adaptive relaxation factor and positional error correction

    NASA Astrophysics Data System (ADS)

    Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao

    2018-01-01

    Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel-size, unpredictable disturbance during image acquisition, and sub-optimum solution to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of ~29.85 mm2 and achieve half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel-size (1.67 μm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate its promising potential applications in biological imaging.

  13. Super-pixel extraction based on multi-channel pulse coupled neural network

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Super-pixel extraction techniques group pixels to form over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, image description based on super-pixels requires less computation and is easier to interpret perceptually, and has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model, which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel feature information and its contextual spatial-structure information. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm: the image is first divided into blocks of the same size; then, for each image block, the pixels adjacent to each seed with similar color are classified as a group, named a super-pixel; finally, post-processing is applied to those pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision through its parameters, and has good potential for super-pixel extraction.
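
For context, the block-dividing idea borrowed from SLIC can be reduced to a bare-bones sketch: seeds placed on a regular grid, each pixel assigned to the seed minimising a joint color-plus-spatial distance. This one-pass simplification illustrates the grouping principle only and is not the proposed MPCNN algorithm:

```python
import numpy as np

def simple_superpixels(img, s):
    """One-pass SLIC-style assignment: seeds on an s-by-s grid; each pixel
    joins the seed minimising colour distance + normalised spatial distance."""
    h, w, _ = img.shape
    seeds = [(y, x) for y in range(s // 2, h, s) for x in range(s // 2, w, s)]
    yy, xx = np.mgrid[0:h, 0:w]
    labels = np.zeros((h, w), dtype=int)
    best = np.full((h, w), np.inf)
    for k, (sy, sx) in enumerate(seeds):
        dc = np.linalg.norm(img - img[sy, sx], axis=2)   # colour distance
        ds = np.hypot(yy - sy, xx - sx) / s              # spatial distance
        d = dc + ds
        mask = d < best
        labels[mask] = k
        best[mask] = d[mask]
    return labels
```

A real SLIC implementation iterates seed updates and enforces connectivity; the MPCNN method replaces the grouping step with neuron firing dynamics.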

  14. Droplet Image Super Resolution Based on Sparse Representation and Kernel Regression

    NASA Astrophysics Data System (ADS)

    Zou, Zhenzhen; Luo, Xinghong; Yu, Qiang

    2018-02-01

    Microgravity and containerless conditions, which are produced via electrostatic levitation combined with a drop tube, are important when studying the intrinsic properties of new metastable materials. Generally, temperature and image sensors can be used to measure the changes of sample temperature, morphology and volume; the specific heat, surface tension, viscosity changes and sample density can then be obtained. Considering that the falling speed of the material sample droplet is approximately 31.3 m/s when it reaches the bottom of a 50-meter-high drop tube, a high-speed camera with a collection rate of up to 10^6 frames/s is required to image the falling droplet. However, in this high-speed mode, very few pixels, approximately 48-120, are obtained in each exposure time, which results in low image quality. Super-resolution image reconstruction is an algorithm that provides finer details than the sampling grid of a given imaging device by increasing the number of pixels per unit area in the image. In this work, we demonstrate the application of single-image super-resolution reconstruction in microgravity and electrostatic levitation for the first time. Using an image super-resolution method based on sparse representation, a low-resolution droplet image can be reconstructed. Employing Yang's coupled dictionary model, high- and low-resolution image patches were used jointly in dictionary training, and coupled high- and low-resolution dictionaries were obtained. An online double-sparse dictionary training algorithm was used to learn the coupled dictionaries, overcoming the shortcomings of the traditional training algorithm on small image patches. In the image reconstruction stage, kernel regression is added, which effectively mitigates the edge blurring of Yang's method.

  16. Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest

    NASA Astrophysics Data System (ADS)

    Feng, W.; Sui, H.; Chen, X.

    2018-04-01

    Studies based on object-based image analysis (OBIA), representing a paradigm shift in change detection (CD), have achieved remarkable progress in the last decade, with the aim of developing more intelligent interpretation and analysis methods. The prediction accuracy and stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and integrated forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search of interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and the regions are subject to fuzzy c-means (FCM) clustering to obtain the pixel-level pre-classification result, which serves as a prerequisite for superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, the change possibility of each super-pixel is calculated, and the changed and unchanged super-pixels that serve as training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF to these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in the accuracy of CD, and also confirm the feasibility and effectiveness of the proposed approach.

  17. Propagation phasor approach for holographic image reconstruction

    PubMed Central

    Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan

    2016-01-01

    To achieve high resolution and a wide field-of-view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth-products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework yields a five- to seven-fold reduction in the number of raw measurements, while still achieving a competitive resolution and space-bandwidth-product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears. PMID:26964671

  18. Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy.

    PubMed

    Zhang, Jialin; Sun, Jiasong; Chen, Qian; Li, Jiaji; Zuo, Chao

    2017-09-18

    High-resolution, wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel-size, unpredictable disturbance during image acquisition, and sub-optimum solution to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm2) and achieve half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel-size (1.67 μm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate its promising potential applications in biological imaging.

  19. A kind of color image segmentation algorithm based on super-pixel and PCNN

    NASA Astrophysics Data System (ADS)

    Xu, GuangZhu; Wang, YaWen; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun

    2018-04-01

    Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. The PCNN (Pulse Coupled Neural Network) has a biological background; when applied to image segmentation, it can be viewed as a region-based method. However, due to the dynamic properties of the PCNN, many unconnected neurons pulse at the same time, so it is necessary to distinguish the different regions for further processing. The existing region-growing PCNN segmentation algorithm was designed for grayscale images and cannot be used directly for color images. In addition, super-pixels better preserve image edges while reducing the influence of individual pixel differences on segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is converted into a grayscale super-pixel image, which is used to seek seeds among the neurons that have not yet fired. Region growth is then terminated by comparing the average of each color channel over all pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed color image segmentation algorithm is fast, effective, and reasonably accurate.

  20. Dictionary learning based noisy image super-resolution via distance penalty weight model

    PubMed Central

    Han, Yulan; Zhao, Yongping; Wang, Qisong

    2017-01-01

    In this study, we address the problem of noisy image super-resolution. In applications, the acquired low-resolution (LR) image is often noisy, while most existing algorithms assume that the LR image is noise-free. To address this situation, we present an algorithm for noisy image super-resolution that achieves image super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free, and for different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained. For each input LR image patch, the corresponding high-resolution (HR) image patch is reconstructed through a weighted average of similar HR example patches. To reduce computational cost, we use the atoms of a learned sparse dictionary as the examples instead of the original example patches. We propose a distance-penalty model for calculating the weights, which simultaneously performs a second selection among similar atoms. Moreover, LR example patches with the mean pixel value removed, rather than just their gradient features, are also used to learn the dictionary. On this basis, we reconstruct an initial estimated HR image and a denoised LR image; combined with iterative back-projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method achieves better noise robustness. PMID:28759633
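
The reconstruction step described above (a weighted average of the HR atoms paired with LR atoms similar to the input patch, with a distance penalty doubling as a second selection) can be sketched roughly as follows. The exact penalty model in the paper differs, and all names and parameters are hypothetical:

```python
import numpy as np

def weighted_hr_patch(lr_patch, lr_atoms, hr_atoms, h=0.1, k=5):
    """Reconstruct an HR patch as a distance-penalised weighted average of the
    HR atoms paired with the k LR atoms nearest to the input LR patch.

    lr_atoms, hr_atoms : paired dictionaries, one atom per row.
    """
    d = np.linalg.norm(lr_atoms - lr_patch, axis=1)   # distance to every LR atom
    idx = np.argsort(d)[:k]                           # second selection: k nearest
    w = np.exp(-d[idx] ** 2 / h)                      # distance-penalty weights
    w /= w.sum()
    return w @ hr_atoms[idx]                          # weighted average of HR atoms
```

Because the weights decay exponentially with distance, atoms far from the input patch contribute essentially nothing, which is what makes the averaging robust to noisy inputs.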

  1. Super-Resolution Enhancement From Multiple Overlapping Images: A Fractional Area Technique

    NASA Astrophysics Data System (ADS)

    Michaels, Joshua A.

    With the availability of large quantities of relatively low-resolution data from several decades of space borne imaging, methods of creating an accurate, higher-resolution image from the multiple lower-resolution images (i.e. super-resolution), have been developed almost since such imagery has been around. The fractional-area super-resolution technique developed in this thesis has never before been documented. Satellite orbits, like Landsat, have a quantifiable variation, which means each image is not centered on the exact same spot more than once and the overlapping information from these multiple images may be used for super-resolution enhancement. By splitting a single initial pixel into many smaller, desired pixels, a relationship can be created between them using the ratio of the area within the initial pixel. The ideal goal for this technique is to obtain smaller pixels with exact values and no error, yielding a better potential result than those methods that yield interpolated pixel values with consequential loss of spatial resolution. A Fortran 95 program was developed to perform all calculations associated with the fractional-area super-resolution technique. The fractional areas are calculated using traditional trigonometry and coordinate geometry and Linear Algebra Package (LAPACK; Anderson et al., 1999) is used to solve for the higher-resolution pixel values. In order to demonstrate proof-of-concept, a synthetic dataset was created using the intrinsic Fortran random number generator and Adobe Illustrator CS4 (for geometry). To test the real-life application, digital pictures from a Sony DSC-S600 digital point-and-shoot camera with a tripod were taken of a large US geological map under fluorescent lighting. While the fractional-area super-resolution technique works in perfect synthetic conditions, it did not successfully produce a reasonable or consistent solution in the digital photograph enhancement test. 
The prohibitive processing time (up to 60 days for a relatively small enhancement area) severely limits the practical usefulness of fractional-area super-resolution. The technique is also very sensitive to relative input-image co-registration, which must be accurate to a sub-pixel degree. If input conditions permit, however, it could be applied as a "pinpoint" super-resolution technique, restricted to very small areas with very good input image co-registration.
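
The core linear relationship can be illustrated in 1-D; the pixel width, shifts, and box-average model below are hypothetical choices for a synthetic demonstration, not the thesis's actual Landsat geometry:

```python
import numpy as np

def area_rows(n_fine, pix, shift):
    """Rows of the fractional-area matrix: each coarse pixel of width `pix`
    fine cells, offset by `shift` fine cells, averages the fine pixels it
    overlaps, weighted by the overlapped area fraction."""
    rows, start = [], shift
    while start + pix <= n_fine + 1e-9:
        row = np.zeros(n_fine)
        for j in range(n_fine):
            # overlap of fine cell [j, j+1] with coarse pixel [start, start+pix]
            row[j] = max(0.0, min(j + 1, start + pix) - max(j, start)) / pix
        rows.append(row)
        start += pix
    return rows

def super_resolve(coarse_images, n_fine, pix, shifts):
    """Stack the area equations from several shifted coarse images and solve
    the combined system by least squares (LAPACK under the hood)."""
    A = np.vstack([r for s in shifts for r in area_rows(n_fine, pix, s)])
    b = np.concatenate(coarse_images)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

Note that with width-2 pixels and pure box averaging, fractionally shifted rows are linear combinations of integer-shifted ones, so the system stays rank-deficient; this is consistent with the conditioning problems reported above. A non-integer pixel width such as 1.5 makes the synthetic system solvable.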

  2. Field-Portable Pixel Super-Resolution Colour Microscope

    PubMed Central

    Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan

    2013-01-01

    Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm2. This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source-shifting based multi-height pixel super-resolution technique to mitigate ‘rainbow’ like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from the brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope, Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource-poor settings. PMID:24086742
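
The Y/UV separation described above can be sketched as follows, using the BT.601 conversion as an assumed stand-in for the paper's exact colour transform:

```python
import numpy as np

# BT.601 RGB<->YUV matrices (one common convention; the paper's exact
# transform may differ).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def colorize(hr_gray, lr_rgb):
    """Keep fine detail from the super-resolved brightness (Y) channel and
    chrominance (U, V) from a low-resolution colour image of the same scene
    (assumed here to be upsampled to the same shape)."""
    yuv = lr_rgb @ RGB2YUV.T
    yuv[..., 0] = hr_gray            # swap in the high-resolution luminance
    return yuv @ YUV2RGB.T
```

If `hr_gray` happens to equal the Y channel of `lr_rgb`, the round trip returns the input unchanged, which is a useful sanity check.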

  4. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are widely adopted in image super-resolution. In this paper, we propose a new learning-based approach using the contourlet transform and a Markov random field. The proposed algorithm employs the contourlet transform rather than the conventional wavelet to represent image features, and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated by experiments on facial, vehicle plate, and real-scene images. Better visual quality is achieved in terms of peak signal-to-noise ratio and the structural similarity measure.

  5. Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array.

    PubMed

    Bishara, Waheb; Sikora, Uzair; Mudanyali, Onur; Su, Ting-Wei; Yaglidere, Oguzhan; Luckhart, Shirley; Ozcan, Aydogan

    2011-04-07

    We report a portable lensless on-chip microscope that can achieve <1 µm resolution over a wide field-of-view of ∼ 24 mm(2) without the use of any mechanical scanning. This compact on-chip microscope weighs ∼ 95 g and is based on partially coherent digital in-line holography. Multiple fiber-optic waveguides are butt-coupled to light emitting diodes, which are controlled by a low-cost micro-controller to sequentially illuminate the sample. The resulting lensfree holograms are then captured by a digital sensor-array and are rapidly processed using a pixel super-resolution algorithm to generate much higher resolution holographic images (both phase and amplitude) of the objects. This wide-field and high-resolution on-chip microscope, being compact and light-weight, would be important for global health problems such as diagnosis of infectious diseases in remote locations. Toward this end, we validate the performance of this field-portable microscope by imaging human malaria parasites (Plasmodium falciparum) in thin blood smears. Our results constitute the first time that a lensfree on-chip microscope has successfully imaged malaria parasites.
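
The pixel super-resolution step can be sketched as naive shift-and-add, assuming the sub-pixel shifts have already been estimated and quantized to fine-grid units (the actual algorithm operates on complex holograms with non-integer shifts):

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Distribute each low-resolution frame onto a fine grid at its
    (integer, in fine-pixel units) sub-pixel shift and average.

    frames: list of (h, w) arrays; shifts: matching (dy, dx) offsets in
    fine pixels; factor: super-resolution factor.
    """
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(hi)
    for f, (dy, dx) in zip(frames, shifts):
        hi[dy::factor, dx::factor] += f
        cnt[dy::factor, dx::factor] += 1
    cnt[cnt == 0] = 1            # leave unobserved fine pixels at zero
    return hi / cnt
```

When all `factor**2` distinct shifts are observed, every fine pixel is sampled and the high-resolution grid is fully recovered.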

  6. Super resolution for astronomical observations

    NASA Astrophysics Data System (ADS)

    Li, Zhan; Peng, Qingyu; Bhanu, Bir; Zhang, Qingfeng; He, Haifeng

    2018-05-01

    In order to obtain detailed information from multiple telescope observations, a general blind super-resolution (SR) reconstruction approach for astronomical images is proposed in this paper. A pixel-reliability-based SR reconstruction algorithm is described and implemented, where the developed process incorporates flat field correction, automatic star searching and centering, iterative star matching, and sub-pixel image registration. Images captured by the 1-m telescope at Yunnan Observatory are used to test the proposed technique. The results of these experiments indicate that, following SR reconstruction, faint stars are more distinct, bright stars have sharper profiles, and the backgrounds show higher detail; these gains follow from the high-precision star centering and image registration provided by the developed method. Application of the proposed approach not only provides more opportunities for new discoveries from astronomical image sequences, but will also contribute to enhancing the capabilities of most spatial or ground-based telescopes.
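
High-precision star centering of the kind described is often done with an intensity-weighted centroid; the window size and median background estimate below are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def star_centroid(img, x0, y0, r=3):
    """Sub-pixel star centre via intensity-weighted centroid in a window
    around an integer guess (x0, y0), after crude background subtraction."""
    win = img[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1].astype(float)
    win = win - np.median(win)       # crude local background estimate
    win[win < 0] = 0
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    m = win.sum()
    return x0 + (xs * win).sum() / m, y0 + (ys * win).sum() / m
```

On a synthetic Gaussian star the centroid lands within a fraction of a pixel of the true centre, which is the precision the registration step relies on.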

  7. High-resolution reconstruction for terahertz imaging.

    PubMed

    Xu, Li-Min; Fan, Wen-Hui; Liu, Jia

    2014-11-20

    We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology and algorithms. The algorithms used include a projection onto convex sets approach, an iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two HR reconstruction methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images taken with our homemade THz-TDS system under the same experimental conditions with a 1.0 mm pixel size. Using the last two HR reconstruction methods, we transform one relatively low-resolution terahertz image into an HR terahertz image with decreased noise. This indicates the potential of HR reconstruction methods in terahertz imaging with pulsed and continuous-wave terahertz sources.
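
Of the listed methods, iterative backprojection is the simplest to sketch; the box-average imaging model and unit step size here are assumptions for illustration:

```python
import numpy as np

def downsample(x, f):
    """Box-average decimation used as the simulated imaging model."""
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def ibp(lr, f, n_iter=50, step=1.0):
    """Iterative backprojection: refine an HR estimate until its simulated
    LR version matches the measured LR image."""
    hr = np.kron(lr, np.ones((f, f)))          # initial guess: nearest upsample
    for _ in range(n_iter):
        err = lr - downsample(hr, f)            # residual in the LR domain
        hr += step * np.kron(err, np.ones((f, f)))  # backproject the error
    return hr
```

With this particular model and `step=1.0` the LR-domain residual is driven to zero after a single pass; real imaging models (blur, sub-pixel shifts) need many iterations.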

  8. Distance-based over-segmentation for single-frame RGB-D images

    NASA Astrophysics Data System (ADS)

    Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao

    2017-11-01

    Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm partitions an image into regions of perceptually similar pixels, but performs poorly in indoor environments when based on color images alone. Fortunately, RGB-D images can improve performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which achieves full coverage of the image by super-pixels. DBOS fills the holes in depth images to fully utilize the depth information, and applies a SLIC-like framework for fast running. Additionally, depth features such as plane projection distance are extracted to compute the distance measure at the core of SLIC-like frameworks. Experiments on RGB-D images from the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining comparable speed.
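
The distance at the core of a SLIC-like framework, extended with a depth term in the spirit of DBOS, might look like the sketch below; the weights and the raw depth difference (in place of the paper's plane projection distance) are illustrative assumptions:

```python
import numpy as np

def dbos_distance(p, q, w_color=1.0, w_xy=0.5, w_depth=2.0):
    """SLIC-style pixel-to-cluster distance with an added depth term.

    p, q: dicts with 'lab' colour, 'xy' image position and 'd' depth.
    The weights trade off colour, spatial and depth compactness."""
    d_lab = np.linalg.norm(np.subtract(p['lab'], q['lab']))
    d_xy = np.linalg.norm(np.subtract(p['xy'], q['xy']))
    d_depth = abs(p['d'] - q['d'])
    return w_color * d_lab + w_xy * d_xy + w_depth * d_depth
```

Pixels on the far side of a depth discontinuity receive a large distance even when their colours match, which is exactly what colour-only over-segmentation misses indoors.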

  9. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    NASA Astrophysics Data System (ADS)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution to reduce the computational cost. In that method, however, we considered only the enhancement of texture images. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that, in contrast to bicubic interpolation, fine detail of the low-resolution video could be reproduced, and the required bandwidth of the video camera could be reduced to about 1/5. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with processing using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known patch-based super-resolution method of Freeman. Compared with Freeman’s patch-based method, the computational time of our method was reduced to almost 1/10.
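
The PSNR figure of merit used in the comparisons above is standard; a minimal implementation for an 8-bit peak value (the peak is an assumption, since the abstract does not state the bit depth):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction; higher is better, infinite for identical images."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

A 6 dB gain, as reported for trained frames, corresponds to halving the root-mean-square error.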

  10. Resolution enhancement of tri-stereo remote sensing images by super resolution methods

    NASA Astrophysics Data System (ADS)

    Tuna, Caglayan; Akoguz, Alper; Unal, Gozde; Sertel, Elif

    2016-10-01

    Super resolution (SR) refers to the generation of a High Resolution (HR) image from a decimated, blurred, low-resolution (LR) image set, which can be either a single frame or multiple frames containing several images acquired from slightly different views of the same observation area. In this study, we propose a novel application of tri-stereo Remote Sensing (RS) satellite images to the super resolution problem. Since the tri-stereo RS images of the same observation area are acquired from three different viewing angles along the flight path of the satellite, these RS images are well suited to an SR application. We first estimate the registration between the chosen reference LR image and the other LR images to calculate the sub-pixel shifts among them. Then, the warping, blurring and downsampling operators are created as sparse matrices to avoid high memory and computational requirements, which would otherwise make the RS-SR solution impractical. Finally, the overall system matrix, constructed from the obtained operator matrices, is used to obtain the estimated HR image in one step in each iteration of the SR algorithm. Both Laplacian and total variation regularizers are incorporated separately into our algorithm, and the results are presented to demonstrate improved quantitative performance against the standard interpolation method, as well as improved qualitative results according to expert evaluations.
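
The composed observation operator can be illustrated in 1-D; the 3-tap blur, circular boundary, and sizes below are illustrative assumptions, and at real RS image sizes the matrix must be stored in a sparse format, as the abstract stresses:

```python
import numpy as np

def blur_downsample_operator(n, f):
    """System matrix for 1-D blur (3-tap kernel) followed by factor-f
    decimation. Built dense here only for clarity; each row has just a few
    nonzeros, which is why sparse storage scales to RS image sizes."""
    B = np.zeros((n, n))
    for i in range(n):
        for k, w in zip((-1, 0, 1), (0.25, 0.5, 0.25)):
            B[i, (i + k) % n] += w       # circular boundary for simplicity
    D = np.zeros((n // f, n))
    D[np.arange(n // f), np.arange(0, n, f)] = 1.0   # keep every f-th sample
    return D @ B
```

Each row of the composed operator sums to one (a blurred, decimated flat signal stays flat), and only 3 of its n entries are nonzero.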

  11. Saliency detection algorithm based on LSC-RC

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tian, Weiye; Wang, Ding; Luo, Xin; Wu, Yingfei; Zhang, Yu

    2018-02-01

    Saliency detection identifies the most important region of an image, the region that attracts human visual attention and response. Preferentially allocating computational resources to the salient region is of great significance for image analysis and synthesis. As a preprocessing step for other tasks in the image processing field, saliency detection has wide applications in image retrieval and image segmentation. Among these applications, the saliency detection algorithm based on super-pixel segmentation by linear spectral clustering (LSC) has achieved good results. The saliency detection algorithm proposed in this paper improves on region-contrast (RC) saliency detection by replacing its region-formation step with LSC super-pixel segmentation of the image. After combining with recent deep learning methods, the accuracy of salient-region detection is greatly improved. Finally, comparative tests demonstrate the superiority and feasibility of the super-pixel saliency detection algorithm based on linear spectral clustering.

  12. Computational wavelength resolution for in-line lensless holography: phase-coded diffraction patterns and wavefront group-sparsity

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Shevkunov, Igor; Petrov, Nikolay V.; Egiazarian, Karen

    2017-06-01

    In-line lensless holography is considered with random phase modulation at the object plane. The forward wavefront propagation is modelled using the Fourier transform with the angular spectrum transfer function. The multiple intensities (holograms) recorded by the sensor are random due to the random phase modulation, and noisy, with a Poissonian noise distribution. Computational experiments show that high-accuracy reconstructions can be achieved with resolution up to two-thirds of the wavelength; with respect to the sensor pixel size, this is super-resolution by a factor of 32. The algorithm designed for optimal super-resolution phase/amplitude reconstruction from Poissonian data is based on the general methodology developed for phase retrieval with pixel-wise resolution in V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude", http://www.cs.tut.fi/ lasip/DDT/index3.html.
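
The forward model described above (Fourier-domain propagation with the angular spectrum transfer function) can be sketched as follows; the evanescent-wave cutoff is one common convention, not necessarily the paper's:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Free-space propagation of a square complex field over distance z
    using the angular spectrum transfer function."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)               # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))  # evanescent cut
    H = np.exp(1j * kz * z)                    # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because |H| = 1 for propagating components, propagation by z followed by -z returns the original field, a convenient check for the implementation.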

  13. North Twin Peak in super resolution

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This pair of images shows the result of taking a sequence of 25 identical exposures from the Imager for Mars Pathfinder (IMP) of the northern Twin Peak, with small camera motions, and processing them with the Super-Resolution algorithm developed at NASA's Ames Research Center.

    The upper image is a representative input image, scaled up by a factor of five, with the pixel edges smoothed out for a fair comparison. The lower image allows significantly finer detail to be resolved.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    The super-resolution research was conducted by Peter Cheeseman, Bob Kanefsky, Robin Hanson, and John Stutz of NASA's Ames Research Center, Mountain View, CA. More information on this technology is available on the Ames Super Resolution home page at

    http://ic-www.arc.nasa.gov/ic/projects/bayes-group/ group/super-res/

  14. Framework for Detection and Localization of Extreme Climate Event with Pixel Recursive Super Resolution

    NASA Astrophysics Data System (ADS)

    Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.

    2017-12-01

    Deep learning techniques have been successfully applied to solve many problems in climate science and geoscience using massive observed and modeled datasets. For extreme climate event detection, several models based on deep neural networks have recently been proposed and attain performance superior to all previous handcrafted, expert-based methods. The issue, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two deep-neural-network models: (1) convolutional neural networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high-resolution climate data from low-resolution climate data. Based on our preliminary work, we present two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model enhances the resolution of the input to the localization CNNs. We present the best-performing network using the pixel recursive super resolution model, which synthesizes details of tropical cyclones in ground-truth data while enhancing their resolution. This approach not only dramatically reduces human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process used to increase data resolution.

  15. On Super-Resolution and the MUSIC Algorithm,

    DTIC Science & Technology

    1985-05-01

    AUTHOR: G. D. de Villiers. SUMMARY: Simulation results for phased array signal processing using the MUSIC algorithm are presented. The model used is more realistic than previous ones, and it gives an indication of how the algorithm would perform in practice.

  16. Demosaiced pixel super-resolution for multiplexed holographic color imaging

    PubMed Central

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2016-01-01

    To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., in the red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and to digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging under simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress these artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
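
The demultiplexing starts from the fact that each Bayer channel samples different sensor positions; a minimal channel split, assuming an RGGB layout (which may differ from the actual sensor), looks like:

```python
import numpy as np

def bayer_split(raw):
    """Split an RGGB mosaic into per-channel sample maps plus boolean masks.

    The masks record where each colour was physically sampled; D-PSR-style
    processing places these samples onto the fine grid rather than
    interpolating across them."""
    h, w = raw.shape
    mask = {'R': np.zeros((h, w), bool), 'G': np.zeros((h, w), bool),
            'B': np.zeros((h, w), bool)}
    mask['R'][0::2, 0::2] = True
    mask['G'][0::2, 1::2] = True
    mask['G'][1::2, 0::2] = True
    mask['B'][1::2, 1::2] = True
    return {c: np.where(m, raw, 0.0) for c, m in mask.items()}, mask
```

The three masks are disjoint and together cover every sensor pixel, so the raw mosaic is exactly the sum of the channel maps.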

  17. 3D super resolution range-gated imaging for canopy reconstruction and measurement

    NASA Astrophysics Data System (ADS)

    Huang, Hantao; Wang, Xinwei; Sun, Liang; Lei, Pingshun; Fan, Songtao; Zhou, Yan

    2018-01-01

    In this paper, we propose a method of canopy reconstruction and measurement based on 3D super resolution range-gated imaging. In this method, high-resolution 2D intensity images are captured by active gated imaging, and 3D images of the canopy are simultaneously reconstructed by a triangular range-intensity correlation algorithm. A range-gated laser imaging system (RGLIS) was built from an 808 nm diode laser and a gated intensified charge-coupled device (ICCD) camera with 1392×1040 pixels. Proof-of-concept experiments were performed on potted plants located 75 m away and trees located 165 m away. The experiments show that the system can acquire more than 1 million points per frame, with a spatial resolution of about 0.3 mm at a distance of 75 m and a distance accuracy of about 10 cm. This research is beneficial for high-speed acquisition of canopy structure and non-destructive canopy measurement.
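
One common form of triangular range-intensity correlation recovers range from the ratio of two overlapping gated exposures; the linear-ramp assumption and the parameter names below are illustrative, not the paper's exact gate scheme:

```python
import numpy as np

def range_from_gates(i1, i2, r0, dr):
    """Range from two gated intensities i1, i2 whose triangular gate
    profiles overlap over [r0, r0 + dr]: within the overlap the intensity
    ratio varies linearly with range."""
    return r0 + dr * i2 / np.maximum(i1 + i2, 1e-12)
```

A pixel seen only in the first gate maps to the near edge `r0`, only in the second gate to the far edge `r0 + dr`, and equal intensities to the midpoint.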

  18. Demosaiced pixel super-resolution in digital holography for multiplexed computational color imaging on-a-chip (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan

    2017-03-01

    Digital holographic on-chip microscopy achieves large space-bandwidth-products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved by 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves very similar color imaging performance compared to conventional sequential R,G,B illumination, with 3-fold improvement in image acquisition time and data-efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited-settings and point-of-care offices.

  19. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, which has hampered the widespread use of the technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process; precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s), making the approach effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936
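
Once the per-frame sampling phases are known, the passive pixel-SR idea (samples from successive frames landing at different sub-pixel positions) reduces to sorting samples by absolute position; a 1-D sketch with assumed known phases:

```python
import numpy as np

def interleave_lines(lines, phases):
    """Merge line scans recorded at known fractional sampling phases into
    one denser line by sorting samples by absolute position.

    lines: (n_lines, n_samples) values; phases: per-line offsets in units
    of the coarse sampling interval."""
    pos = np.concatenate([np.arange(l.size) + p for l, p in zip(lines, phases)])
    val = np.concatenate(lines)
    order = np.argsort(pos)
    return pos[order], val[order]
```

Two lines offset by half a sample interleave into a line with uniform half-interval spacing, doubling the effective sampling rate without faster hardware.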

  20. Video-rate nanoscopy enabled by sCMOS camera-specific single-molecule localization algorithms

    PubMed Central

    Huang, Fang; Hartwich, Tobias M. P.; Rivera-Molina, Felix E.; Lin, Yu; Duim, Whitney C.; Long, Jane J.; Uchil, Pradeep D.; Myers, Jordan R.; Baird, Michelle A.; Mothes, Walther; Davidson, Michael W.; Toomre, Derek; Bewersdorf, Joerg

    2013-01-01

    Newly developed scientific complementary metal–oxide–semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition in single-molecule switching nanoscopy (SMSN) while simultaneously increasing the effective quantum efficiency. However, sCMOS-intrinsic pixel-dependent readout noise substantially reduces the localization precision and introduces localization artifacts. Here we present algorithms that overcome these limitations and provide unbiased, precise localization of single molecules at the theoretical limit. In combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at up to 32 reconstructed images/second (recorded at 1,600–3,200 camera frames/second) in both fixed and living cells. PMID:23708387
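
A much-simplified illustration of the key idea (downweighting pixels with large pixel-dependent readout variance) using a variance-weighted centroid; this is a stand-in for the paper's sCMOS-specific maximum-likelihood fitter, and the weighting form is an assumption:

```python
import numpy as np

def weighted_centroid(img, var_map):
    """Localize a single emitter, weighting each pixel by an approximate
    signal-to-variance ratio (shot noise ~ signal, plus the pixel-dependent
    sCMOS readout variance in var_map)."""
    w = img / (np.maximum(img, 0) + var_map)   # ~ signal / total variance
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
```

Pixels with large readout variance contribute little, which is the qualitative behaviour the paper's unbiased estimator achieves rigorously.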

  1. Subpixel target detection and enhancement in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Tiwari, K. C.; Arora, M.; Singh, D.

    2011-06-01

    Hyperspectral data, due to the higher information content afforded by its higher spectral resolution, is increasingly being used for various remote sensing applications, including information extraction at the subpixel level. However, matching fine-spatial-resolution data is usually lacking, particularly for target detection applications. Thus, there always exists a tradeoff between spectral and spatial resolution, driven by the type of application, its cost, and the associated analytical and computational complexities. Typically, whenever an object, whether manmade, natural, or any ground cover class (called a target, endmember, component, or class), is spectrally but not spatially resolved, mixed pixels result in the image. Numerous disparate manmade and/or natural substances may thus occur inside such mixed pixels, giving rise to mixed pixel classification or subpixel target detection problems. Various spectral unmixing models, such as Linear Mixture Modeling (LMM), are in vogue to recover the components of a mixed pixel. Spectral unmixing outputs both the endmember spectra and their corresponding abundance fractions inside the pixel. It does not, however, provide the spatial distribution of these abundance fractions within the pixel, which limits the applicability of hyperspectral data for subpixel target detection. In this paper, a new inverse-Euclidean-distance-based super-resolution mapping method is presented that achieves subpixel target detection in hyperspectral images by adjusting the spatial distribution of abundance fractions within a pixel. Results obtained at different resolutions indicate that super-resolution mapping may effectively aid subpixel target detection.
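
The inverse-Euclidean-distance idea can be sketched as computing, for each sub-pixel, an attractiveness score for a class from the abundances of neighbouring coarse pixels; the 8-neighbour window and the exact scoring form are illustrative assumptions:

```python
import numpy as np

def attractiveness(abund, s):
    """Per-subpixel attractiveness of one class: inverse-Euclidean-distance
    weighted sum of that class's abundance in the neighbouring coarse
    pixels. abund: (H, W) abundance fractions; returns an (H*s, W*s) map."""
    H, W = abund.shape
    out = np.zeros((H * s, W * s))
    for i in range(H * s):
        for j in range(W * s):
            ci, cj = i // s, j // s                 # enclosing coarse pixel
            y, x = (i + 0.5) / s, (j + 0.5) / s     # subpixel centre, coarse units
            for ni in range(max(0, ci - 1), min(H, ci + 2)):
                for nj in range(max(0, cj - 1), min(W, cj + 2)):
                    if (ni, nj) == (ci, cj):
                        continue
                    d = np.hypot(y - (ni + 0.5), x - (nj + 0.5))
                    out[i, j] += abund[ni, nj] / d
    return out
```

Within each coarse pixel, the class's known abundance fraction can then be assigned to the sub-pixels with the highest attractiveness, which is how the spatial distribution inside the pixel gets adjusted.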

  2. Measuring the performance of super-resolution reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Dijk, Judith; Schutte, Klamer; van Eekeren, Adam W. M.; Bijl, Piet

    2012-06-01

    For many military operations, situational awareness is of great importance. This situational awareness and related tasks such as Target Acquisition can be acquired using cameras, of which the resolution is an important characteristic. Super-resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order to judge these algorithms and the conditions under which they operate best, performance evaluation methods are necessary. This evaluation, however, is not straightforward for several reasons. First, frequency-based evaluation techniques alone will not provide a correct answer, because they are unable to discriminate between structure-related and noise-related effects. Second, most super-resolution packages perform additional image enhancement such as noise reduction and edge enhancement; as these steps improve the results, they cannot be evaluated separately. Third, a single high-resolution ground truth is rarely available, so evaluating the differences between the estimated high-resolution image and its ground truth is not straightforward. Fourth, super-resolution reconstruction can produce various artifacts that are not known beforehand and hence are difficult to evaluate. In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms. Some of these evaluation techniques are derived from processing on dedicated (synthetic) imagery. Others can be applied to both synthetic and natural images (real camera data). The result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution reconstruction algorithms.

  3. Infrared super-resolution imaging based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Sui, Xiubao; Chen, Qian; Gu, Guohua; Shen, Xuewei

    2014-03-01

    The theoretical basis of traditional infrared super-resolution imaging methods is the Nyquist sampling theorem. Reconstruction assumes that the relative positions of the infrared objects in the low-resolution image sequence remain fixed, and image restoration amounts to the inverse operation of an ill-posed problem without fixed rules. This limits the super-resolution reconstruction capability for infrared images, the algorithms' areas of application, and the stability of the reconstruction. To this end, we propose a super-resolution reconstruction method based on compressed sensing. In this method, we select a Toeplitz matrix as the measurement matrix and realize it by a phase-mask method. We investigate the complementary matching pursuit algorithm and select it as the recovery algorithm. In order to accommodate moving targets and decrease imaging time, we use an area infrared focal-plane array to acquire multiple measurements at one time. Theoretically, the method breaks through the Nyquist sampling limit and can greatly improve the spatial resolution of the infrared image. Image contrast and experimental data indicate that our method is effective in improving the resolution of infrared images and is superior to some traditional super-resolution imaging methods. The compressed-sensing super-resolution method is expected to have wide application prospects.
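
    The sense-then-recover loop behind such a method can be illustrated with a toy sketch. The paper's recovery method is complementary matching pursuit; the sketch below substitutes plain orthogonal matching pursuit (OMP) as a stand-in, and the Toeplitz construction and problem dimensions are illustrative assumptions.

```python
import numpy as np

def toeplitz_measurement(first_col, first_row):
    """Build a Toeplitz measurement matrix from its first column and row."""
    m, n = len(first_col), len(first_row)
    A = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            A[i, j] = first_col[i - j] if i >= j else first_row[j - i]
    return A

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = A @ x."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# toy demo: a 3-sparse signal of length 64 from 32 Toeplitz measurements
rng = np.random.default_rng(0)
n, m, k = 64, 32, 3
A = toeplitz_measurement(rng.standard_normal(m), rng.standard_normal(n))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, k)             # typically recovers x_true exactly
```

The measurement count (here half the signal length) can be far below the Nyquist rate because the sparsity prior, not dense sampling, carries the reconstruction.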

  4. Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar

    PubMed Central

    Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu

    2015-01-01

    Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory: angular super-resolution is realized by solving the corresponding deconvolution problem under the maximum a posteriori (MAP) criterion. The algorithm models the noise as two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, a Laplace distribution is used to represent the prior information about the targets, under the assumption that the radar image of interest can be represented by the dominant scatterers in the scene. Experimental results demonstrate that the proposed deconvolution algorithm achieves higher precision for angular super-resolution than conventional algorithms such as Tikhonov regularization, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871
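
    Of the baselines mentioned, the Richardson–Lucy iteration is the classical Poisson maximum-likelihood deconvolver and is easy to sketch in 1-D. The sketch below is that baseline only, not the authors' mixed Gaussian–Poisson MAP estimator with a Laplace prior; the kernel and signal are illustrative.

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=50):
    """1-D Richardson-Lucy deconvolution (classical Poisson ML baseline).

    y : blurred non-negative measurement; psf : blur kernel (normalised inside).
    Multiplicative updates keep the estimate non-negative throughout.
    """
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                       # adjoint of the blur
    x = np.full_like(y, y.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / np.maximum(blurred, 1e-12)
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x

# toy demo: two point targets blurred by a Gaussian antenna pattern
x_true = np.zeros(32)
x_true[10], x_true[14] = 5.0, 3.0
psf = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)
y = np.convolve(x_true, psf / psf.sum(), mode="same")
x_hat = richardson_lucy(y, psf, n_iter=200)    # concentrates flux back onto the targets
```

The Laplace prior in the paper adds a sparsity-promoting term to this likelihood-only update, which is what favours dominant-scatterer scenes.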

  5. Super-resolution imaging of subcortical white matter using stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI)

    PubMed Central

    Hainsworth, A. H.; Lee, S.; Patel, A.; Poon, W. W.; Knight, A. E.

    2018-01-01

    Aims The spatial resolution of light microscopy is limited by the wavelength of visible light (the ‘diffraction limit’, approximately 250 nm). Resolution of sub-cellular structures smaller than this limit is possible with super-resolution methods such as stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI). We aimed to resolve subcellular structures (axons, myelin sheaths and astrocytic processes) within intact white matter using STORM and SOFI. Methods Standard cryostat-cut sections of subcortical white matter from donated human brain tissue and from adult rat and mouse brain were labelled using standard immunohistochemical markers (neurofilament-H, myelin-associated glycoprotein, glial fibrillary acidic protein, GFAP). Image sequences were processed for STORM (effective pixel size 8–32 nm) and for SOFI (effective pixel size 80 nm). Results In human, rat and mouse subcortical white matter, high-quality images of axonal neurofilaments, myelin sheaths and filamentous astrocytic processes were obtained. In quantitative measurements, STORM consistently underestimated the width of axons and astrocyte processes (compared with electron microscopy measurements). SOFI provided more accurate width measurements, though with somewhat lower spatial resolution than STORM. Conclusions Super-resolution imaging of intact cryo-cut human brain tissue is feasible. For quantitation, STORM can underestimate the diameters of thin fluorescent objects; SOFI is more robust. The greatest limitation for super-resolution imaging in brain sections is imposed by sample preparation. We anticipate that improved strategies to reduce autofluorescence and to enhance fluorophore performance will enable rapid expansion of this approach. PMID:28696566

  6. Super-resolution imaging of subcortical white matter using stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI).

    PubMed

    Hainsworth, A H; Lee, S; Foot, P; Patel, A; Poon, W W; Knight, A E

    2018-06-01

    The spatial resolution of light microscopy is limited by the wavelength of visible light (the 'diffraction limit', approximately 250 nm). Resolution of sub-cellular structures smaller than this limit is possible with super-resolution methods such as stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI). We aimed to resolve subcellular structures (axons, myelin sheaths and astrocytic processes) within intact white matter using STORM and SOFI. Standard cryostat-cut sections of subcortical white matter from donated human brain tissue and from adult rat and mouse brain were labelled using standard immunohistochemical markers (neurofilament-H, myelin-associated glycoprotein, glial fibrillary acidic protein, GFAP). Image sequences were processed for STORM (effective pixel size 8-32 nm) and for SOFI (effective pixel size 80 nm). In human, rat and mouse subcortical white matter, high-quality images of axonal neurofilaments, myelin sheaths and filamentous astrocytic processes were obtained. In quantitative measurements, STORM consistently underestimated the width of axons and astrocyte processes (compared with electron microscopy measurements). SOFI provided more accurate width measurements, though with somewhat lower spatial resolution than STORM. Super-resolution imaging of intact cryo-cut human brain tissue is feasible. For quantitation, STORM can underestimate the diameters of thin fluorescent objects; SOFI is more robust. The greatest limitation for super-resolution imaging in brain sections is imposed by sample preparation. We anticipate that improved strategies to reduce autofluorescence and to enhance fluorophore performance will enable rapid expansion of this approach. © 2017 British Neuropathological Society.

  7. Controlled power delivery for super-resolution imaging of biological samples using digital micromirror device

    NASA Astrophysics Data System (ADS)

    Valiya Peedikakkal, Liyana; Cadby, Ashley

    2017-02-01

    Localization-based super-resolution imaging of biological samples is generally achieved using high-power laser illumination and long exposure times, which unfortunately increases photo-toxicity to the sample, making super-resolution microscopy largely incompatible with live-cell imaging. Photobleaching further reduces the ability to acquire time-lapse images of live biological cells using fluorescence microscopy. Digital Light Processing (DLP) technology can deliver light at grey-scale levels by flickering digital micromirrors at around 290 Hz, enabling highly controlled power delivery to the sample. In this work, a Digital Micromirror Device (DMD) is implemented in an inverse Schiefspiegler telescope setup to control the power and pattern of illumination for super-resolution microscopy. By controlling the DMD pixel by pixel, we achieve spatial and temporal patterning of the illumination, giving control over both the power and the spatial extent of the laser beam. We use this to show that the power delivered to the sample can be reduced to allow longer imaging times in one area, while sub-diffraction STORM imaging is achieved in another using higher power densities.

  8. Portable lensless wide-field microscopy imaging platform based on digital inline holography and multi-frame pixel super-resolution

    PubMed Central

    Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan

    2017-01-01

    In this paper, an irregular-displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide semiconductor (CMOS) imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple low-resolution (LR) frame observations of the scene with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the LR images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the globally optimal solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples, including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm2. PMID:29657866
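
    With the displacements known, the fusion step behind multi-frame pixel super-resolution reduces, in its simplest form, to shift-and-add onto a finer grid. The sketch below assumes the sub-pixel shifts have already been estimated (the paper's registration step) and uses nearest-neighbour deposition; the function name and wrap-around indexing are illustrative assumptions.

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Fuse low-resolution frames with known sub-pixel shifts onto a finer grid.

    frames : list of (h, w) arrays; shifts : matching (dy, dx) in LR pixels.
    Each LR sample is deposited at its shifted location on the scale-times
    finer grid (nearest neighbour); overlapping deposits are averaged.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        iy = np.round((np.arange(h) + dy) * scale).astype(int) % (h * scale)
        ix = np.round((np.arange(w) + dx) * scale).astype(int) % (w * scale)
        acc[np.ix_(iy, ix)] += frame        # deposit this frame's samples
        cnt[np.ix_(iy, ix)] += 1
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

If four frames sample a scene at half-pixel offsets in x and y, they tile the 2x-finer grid exactly, which is why non-redundant shifts carry genuine extra resolution.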

  9. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality, and it has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. We introduce sparse representation theory into the nearest-neighbor selection problem and, building on sparse-representation-based super-resolution image reconstruction, propose a super-resolution reconstruction algorithm based on multi-class dictionaries. This method avoids the redundancy of training a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to address the ill-posed nature of the problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
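
    The sub-dictionary selection step, replacing a plain Euclidean nearest-neighbour rule with a sparse-representation-style criterion, can be sketched as picking the class dictionary whose atoms correlate best with the input patch. This is a hypothetical illustration of the idea, not the paper's exact selection rule.

```python
import numpy as np

def pick_subdictionary(patch, dicts):
    """Pick the class sub-dictionary whose atoms correlate best with the patch.

    dicts : list of (n_features, n_atoms) arrays with unit-norm columns.
    Returns the index of the sub-dictionary with the largest absolute
    correlation to the normalised patch, in place of a plain Euclidean
    nearest-neighbour rule.
    """
    p = np.asarray(patch, dtype=float)
    p = p / max(np.linalg.norm(p), 1e-12)
    scores = [np.max(np.abs(D.T @ p)) for D in dicts]
    return int(np.argmax(scores))
```

Sparse coding of the patch then runs over only the chosen sub-dictionary, which is what keeps each sub-dictionary compact and representative.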

  10. Structure-aware depth super-resolution using Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon

    2015-03-01

    This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is used as a cue for depth super-resolution, under the assumption that pixels with similar colors likely belong to similar depths. This assumption can induce texture copying from the color image into the depth map and edge-blurring artifacts at depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model, in which an estimated depth map is used as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of the constraint derived from the proposed prior. Experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.

  11. Super-resolution optics for virtual reality

    NASA Astrophysics Data System (ADS)

    Grabovičkić, Dejan; Benitez, Pablo; Miñano, Juan C.; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj; Nikolic, Milena I.; Lopez, Jesus; Gorospe, Jorge; Sanchez, Eduardo; Lastres, Carmen; Mohedano, Ruben

    2017-06-01

    In present commercial Virtual Reality (VR) headsets the perceived resolution is still limited, since the VR pixel density (typically 10-15 pixels/deg) is well below what the human eye can resolve (60 pixels/deg). We present novel advanced optical design approaches that dramatically increase the perceived resolution of VR while keeping the large field of view (FoV) required in VR applications. This approach can be applied to a vast number of optical architectures, including advanced configurations such as multichannel designs. All this is done at the optical design stage, and no eye tracker is needed in the headset.

  12. Image reconstructions from super-sampled data sets with resolution modeling in PET imaging.

    PubMed

    Li, Yusheng; Matej, Samuel; Metzler, Scott D

    2014-12-01

    Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution of an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework for reconstructing super-resolution images from super-sampled low-resolution data sets. The authors introduce a super-sampling data acquisition model based on the physical processes, with tomographic, downsampling, and shifting matrices as its building blocks. Based on this model, they extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstruction: separate super-sampling resolution-modeling reconstruction and reconstruction without downsampling, which further improve image quality at the cost of more computation. The authors use simulated reconstructions of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms consistently achieve significantly increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method. The authors also demonstrate that reconstructions from super-sampled data sets using a fine system matrix yield improved image quality compared to reconstructions using a coarse system matrix. Super-sampling reconstructions at different count levels show that more spatial-resolution improvement can be obtained with higher counts at larger iteration numbers. The authors developed a super-sampling reconstruction framework that reconstructs super-resolution images from super-sampled data sets with known acquisition motion. Super-sampling PET acquisition with the proposed algorithms provides an effective and economical way to improve image quality for PET imaging, which has important implications for preclinical and clinical region-of-interest PET imaging applications.
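
    The extended MLEM update can be illustrated on a toy 1-D problem in which two acquisitions, shifted by half a detector bin, are stacked into one system matrix built from downsampling-and-shifting blocks. The geometry, dimensions and wrap-around shift below are illustrative assumptions, not the authors' PET system model.

```python
import numpy as np

def downsample_matrix(n_hr, factor, shift):
    """Each LR sample averages `factor` HR bins, offset by `shift` HR bins."""
    n_lr = n_hr // factor
    D = np.zeros((n_lr, n_hr))
    for i in range(n_lr):
        for j in range(factor):
            D[i, (i * factor + j + shift) % n_hr] = 1.0 / factor
    return D

def mlem(A, y, n_iter=500):
    """ML-EM iterations for y ~ Poisson(A @ x); the estimate stays non-negative."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)        # forward projection
        x = x / sens * (A.T @ (y / proj))      # multiplicative EM update
    return x

# super-sampled system: two acquisitions shifted by half an LR bin
A = np.vstack([downsample_matrix(8, 2, 0), downsample_matrix(8, 2, 1)])
x_true = np.array([0.0, 0.0, 4.0, 4.0, 0.0, 0.0, 2.0, 2.0])
x_hat = mlem(A, A @ x_true)
```

Stacking the shifted acquisitions roughly doubles the number of independent measurements, which is the extra sampling the reconstruction exploits.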

  13. FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data

    NASA Astrophysics Data System (ADS)

    Min, Junhong; Vonesch, Cédric; Kirshner, Hagai; Carlini, Lina; Olivier, Nicolas; Holden, Seamus; Manley, Suliana; Ye, Jong Chul; Unser, Michael

    2014-04-01

    Super-resolution microscopy such as STORM and (F)PALM is now a well-known method for biological studies at the nanometer scale. However, conventional imaging schemes based on sparse activation of photo-switchable fluorescent probes have inherently slow temporal resolution, which is a serious limitation when investigating live-cell dynamics. Here, we present an algorithm for high-density super-resolution microscopy which combines a sparsity-promoting formulation with a Taylor series approximation of the PSF. Our algorithm is designed to provide unbiased localization in continuous space and high recall rates for high-density imaging, and to have orders-of-magnitude shorter run times than previous high-density algorithms. We validated our algorithm on both simulated and experimental data, and demonstrated live-cell imaging with a temporal resolution of 2.5 seconds by recovering fast ER dynamics.

  14. FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data

    PubMed Central

    Min, Junhong; Vonesch, Cédric; Kirshner, Hagai; Carlini, Lina; Olivier, Nicolas; Holden, Seamus; Manley, Suliana; Ye, Jong Chul; Unser, Michael

    2014-01-01

    Super-resolution microscopy such as STORM and (F)PALM is now a well-known method for biological studies at the nanometer scale. However, conventional imaging schemes based on sparse activation of photo-switchable fluorescent probes have inherently slow temporal resolution, which is a serious limitation when investigating live-cell dynamics. Here, we present an algorithm for high-density super-resolution microscopy which combines a sparsity-promoting formulation with a Taylor series approximation of the PSF. Our algorithm is designed to provide unbiased localization in continuous space and high recall rates for high-density imaging, and to have orders-of-magnitude shorter run times than previous high-density algorithms. We validated our algorithm on both simulated and experimental data, and demonstrated live-cell imaging with a temporal resolution of 2.5 seconds by recovering fast ER dynamics. PMID:24694686

  15. A Super-Resolution Algorithm for Enhancement of FLASH LIDAR Data: Flight Test Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse Robert

    2014-01-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high-resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m × 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed from independent measurements for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the six-degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.

  16. A super-resolution algorithm for enhancement of flash lidar data: flight test results

    NASA Astrophysics Data System (ADS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Eric; Reisse, Robert

    2013-03-01

    This paper describes the results of a 3D super-resolution algorithm applied to the range data obtained from a recent Flash Lidar helicopter flight test. The flight test was conducted by NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project over a simulated lunar terrain facility at NASA Kennedy Space Center. ALHAT is developing the technology for safe autonomous landing on the surface of celestial bodies: the Moon, Mars and asteroids. One of the test objectives was to verify the ability of the 3D super-resolution technique to generate high-resolution digital elevation models (DEMs) and to determine time-resolved relative positions and orientations of the vehicle. The 3D super-resolution algorithm was developed earlier and tested in computational modeling, in laboratory experiments, and in a few dynamic experiments using a moving truck. Prior to the helicopter flight test campaign, a 100 m × 100 m hazard field was constructed containing most of the relevant extraterrestrial hazards: slopes, rocks, and craters of different sizes. Data were collected during the flight and then processed by the super-resolution code. A detailed DEM of the hazard field was constructed from independent measurements for comparison. ALHAT navigation system data were used to verify the ability of the super-resolution method to provide accurate relative navigation information; namely, the six-degree-of-freedom state vector of the instrument as a function of time was restored from the super-resolution data. The comparisons show that the super-resolution method can construct high-quality DEMs and allows hazards such as rocks and craters to be identified in accordance with ALHAT requirements.

  17. Land cover mapping at sub-pixel scales

    NASA Astrophysics Data System (ADS)

    Makido, Yasuyo Kato

    One of the biggest drawbacks of land cover mapping from remotely sensed images relates to spatial resolution, which determines the level of spatial detail depicted in an image. Fine spatial resolution images from satellite sensors such as IKONOS and QuickBird are now available. However, these images are not suitable for large-area studies, since a single image covers only a small area, making large-area coverage costly. Much research has focused on extracting land cover types at the sub-pixel scale, but little research has addressed the spatial allocation of land cover types within a pixel. This study is devoted to the development of new algorithms for predicting land cover distribution from remote sensing imagery at the sub-pixel level. The "pixel-swapping" optimization algorithm, proposed by Atkinson for predicting sub-pixel land cover distribution, is investigated in this study. Two limitations of this method, the arbitrary spatial range value and the arbitrary exponential model of spatial autocorrelation, are assessed. Various weighting functions, as alternatives to the exponential model, are evaluated in order to derive the optimal weighting function. Two different simulation models were employed to develop spatially autocorrelated binary class maps. In all tested models (Gaussian, exponential, and IDW), the pixel-swapping method improved classification accuracy compared with the initial random allocation of sub-pixels. However, the results suggested that equal weights could be used instead of these more complex models of spatial structure to increase accuracy and sub-pixel spatial autocorrelation. New algorithms for modeling the spatial distribution of multiple land cover classes at sub-pixel scales are developed and evaluated. Three methods are examined: sequential categorical swapping, simultaneous categorical swapping, and simulated annealing. These three methods are applied to classified Landsat ETM+ data that has been resampled to 210 meters. The results suggest that the simultaneous method is the optimal method in terms of accuracy and computation time. The case study employs remote sensing imagery at the following sites: tropical forests in Brazil and a temperate mosaic of multiple land covers in East China. Sub-areas of both sites are used to examine how the characteristics of the landscape affect the performance of the optimal technique. Three types of measurement, Moran's I, mean patch size (MPS), and patch size standard deviation (STDEV), are used to characterize the landscape. All results suggest that this technique can increase classification accuracy more than traditional hard classification. The methods developed in this study can benefit researchers who employ coarse remote sensing imagery but are interested in detailed landscape information. In many cases, a satellite sensor that provides large spatial coverage has insufficient spatial detail to identify landscape patterns. Application of the super-resolution technique described in this dissertation could potentially solve this problem by providing detailed land cover predictions from coarse-resolution satellite sensor imagery.
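
    Atkinson's pixel-swapping idea, repeatedly moving the least "attracted" target sub-pixel to the most attracted empty location so that spatial autocorrelation rises while class proportions stay fixed, can be sketched as follows. This global variant drops the per-coarse-pixel constraint of the original algorithm for brevity, and the inverse-distance weighting stands in for the various weighting functions the study evaluates.

```python
import numpy as np

def attractiveness(g, radius=3.0):
    """Inverse-distance-weighted count of class-1 sub-pixels near each cell."""
    h, w = g.shape
    ys, xs = np.mgrid[0:h, 0:w]
    att = np.zeros((h, w))
    for y, x in zip(*np.nonzero(g)):
        d = np.hypot(ys - y, xs - x)
        m = (d > 0) & (d <= radius)
        att[m] += 1.0 / d[m]
    return att

def pixel_swap(grid, n_iter=100, radius=3.0):
    """Greedy pixel swapping on a binary sub-pixel map.

    Each step moves the least-attractive 1 to the most-attractive 0, which
    raises spatial autocorrelation while preserving class proportions.
    """
    g = grid.copy()
    for _ in range(n_iter):
        att = attractiveness(g, radius)
        ones, zeros = np.argwhere(g == 1), np.argwhere(g == 0)
        worst = ones[np.argmin(att[g == 1])]
        best = zeros[np.argmax(att[g == 0])]
        if att[tuple(best)] <= att[tuple(worst)]:
            break                              # no swap increases clustering
        g[tuple(worst)], g[tuple(best)] = 0, 1
    return g
```

Isolated target sub-pixels migrate toward existing clusters, which is the behaviour the accuracy comparisons against random sub-pixel allocation measure.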

  18. Supporting lander and rover operation: a novel super-resolution restoration technique

    NASA Astrophysics Data System (ADS)

    Tao, Yu; Muller, Jan-Peter

    2015-04-01

    Higher-resolution imaging data is always desirable for critical rover engineering operations such as landing site selection, path planning, and optical localisation. For current Mars missions, 25 cm HiRISE images have been widely used by the MER & MSL engineering teams for rover path planning and location registration/adjustment. However, 25 cm resolution is not high enough to view individual rocks (≤2 m in size) or to visualise the types of sedimentary features that rover onboard cameras might observe. Nevertheless, due to various physical constraints of the imaging instruments themselves (e.g. telescope size and mass), one needs to trade off spatial resolution against bandwidth, which means future imaging systems are likely to be limited to resolving features larger than 25 cm. We have developed a novel super-resolution algorithm/pipeline able to restore a higher-resolution image from the non-redundant sub-pixel information contained in multiple lower-resolution raw images [Tao & Muller 2015]. We will demonstrate, with experiments performed using 5-10 overlapping 25 cm HiRISE images for MER-A, MER-B & MSL, the resolution of 5-10 cm super-resolution images that can be directly compared to rover imagery taken at a range of 5 metres from the rover cameras, but which in our case can be used to visualise features many kilometres away from the actual rover traverse. We will demonstrate how these super-resolution images, together with image understanding software, can be used to quantify rock size-frequency distributions and to measure sedimentary rock layers for several critical sites, comparing against rover orthorectified image mosaics to demonstrate the value of our super-resolution images in supporting future lander and rover operations. We present the potential of super-resolution for virtual exploration of the ~400 HiRISE areas which have been viewed 5 or more times, and the potential application of this technique to all of the ESA ExoMars Trace Gas Orbiter CaSSiS stereo, multi-angle and colour camera images from 2017 onwards. Acknowledgements: The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No.312377 PRoViDE.

  19. A super resolution framework for low resolution document image OCR

    NASA Astrophysics Data System (ADS)

    Ma, Di; Agam, Gady

    2013-01-01

    Optical character recognition is widely used for converting document images into digital media. Existing OCR algorithms and tools produce good results from high-resolution, good-quality document images. In this paper, we propose a machine-learning-based super-resolution framework for low-resolution document image OCR. Two main techniques are used in our proposed approach: a document page segmentation algorithm and a modified K-means clustering algorithm. Using this approach, by exploiting coherence within the document, we reconstruct a better-resolution image from a low-resolution document image and improve OCR results. Experimental results show substantial gains on low-resolution documents such as those captured from video.

  20. Robust video super-resolution with registration efficiency adaptation

    NASA Astrophysics Data System (ADS)

    Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen

    2010-07-01

    Super-Resolution (SR) is a technique to construct a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of conventional super-resolution techniques, when applied to video sequences, strongly relies on the quality of motion alignment achieved by image registration. Unfortunately, this quality is limited by the motion complexity of the video and the capability of the adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially varying registration quality. The reliability of each reference pixel is measured by its registration error and incorporated into the optimization objective function of the SR reconstruction. This makes the SR reconstruction highly immune to registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism to assign weights according to registration errors. The proposed super-resolution scheme has been tested on various video sequences, and experimental results clearly demonstrate the effectiveness of the proposed method.
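
    The weighting idea, down-weighting observations in proportion to their registration error, can be sketched for pre-aligned frames as a per-pixel weighted average. The Gaussian weight function below is an illustrative choice, not the paper's actual weight-assignment mechanism, and the per-pixel error maps are assumed given.

```python
import numpy as np

def robust_fuse(obs, errs, sigma=1.0):
    """Fuse co-registered frames, down-weighting pixels by registration error.

    obs, errs : (k, h, w) stacks of aligned frames and per-pixel registration
    error estimates. Weights fall off as exp(-(err/sigma)^2), so badly
    registered observations barely contribute to the fused result.
    """
    w = np.exp(-(np.asarray(errs) / sigma) ** 2)
    return (w * obs).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-12)
```

A frame that is misregistered in one region is suppressed only there, while its well-aligned regions still contribute, which is what makes the scheme adaptive rather than discarding whole frames.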

  1. Multiple signal classification algorithm for super-resolution fluorescence microscopy

    PubMed Central

    Agarwal, Krishna; Macháň, Radek

    2016-01-01

    Single-molecule localization techniques are restricted by long acquisition and computational times, or by the need for special fluorophores or biologically toxic photochemical environments. Here we propose a statistical super-resolution technique for wide-field fluorescence microscopy, which we call the multiple signal classification algorithm, that has several advantages. It provides resolution down to at least 50 nm, requires fewer frames and lower excitation power, and works even at high fluorophore concentrations. Further, it works with any fluorophore that exhibits blinking on the timescale of the recording. The multiple signal classification algorithm shows comparable or better performance than single-molecule localization techniques and four contemporary statistical super-resolution methods on experiments with in vitro actin filaments and other independently acquired experimental data sets. We also demonstrate super-resolution at timescales of 245 ms (using 49 frames acquired at 200 frames per second) in samples of live-cell microtubules and live-cell actin filaments imaged without imaging buffers. PMID:27934858

  2. Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.

    PubMed

    Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn

    2016-01-01

    Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSIs with high spatial resolution multispectral images in order to obtain super-resolution HSIs. Most approaches adopt an unmixing or matrix factorization perspective, and the derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem becomes severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank: pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. In each patch the subspace/manifold dimensionality is then low enough that the problem is no longer ill-posed. We propose two alternative approaches to hyperspectral super-resolution through local dictionary learning using endmember induction algorithms, and we explore two alternative ways of defining the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.

  3. Recent advances in the field of super resolved imaging and sensing

    NASA Astrophysics Data System (ADS)

    Zalevsky, Zeev; Borkowski, Amikam; Marom, Emanuel; Javidi, Bahram; Beiderman, Yevgeny; Micó, Vicente; García, Javier

    2011-05-01

    In this paper we first present a recent development in the field of geometric super-resolution. The new approach overcomes the loss of resolution caused by non-ideal sampling of the image, i.e., the spatial averaging performed by each pixel of the sampling array. We then demonstrate a remote super-sensing technique that allows monitoring, from a distance, a patient's heartbeat, blood pulse pressure, and blood glucose level by tracking the trajectory of secondary speckle patterns reflected from the skin of the wrist or from the sclera.

  4. Windowed time-reversal music technique for super-resolution ultrasound imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie; Labyed, Yassin

    Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements.

  5. Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface.

    PubMed

    Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun

    2016-06-01

    Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long-sought goal in many imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging using a spatio-temporally dispersive metasurface and an image reconstruction algorithm. The spatio-temporal dispersion of the metasurface makes the single-shot, single-sensor imager feasible for super- and high-resolution imaging, since it efficiently converts the detailed spatial information of the probed object into a one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the achievable resolution depends on the distance between the object and the metasurface: when the object is placed in the vicinity of the metasurface, super-resolution imaging is realized. The proposed imaging methodology provides a unique means of performing real-time data acquisition and producing high-/super-resolution images without expensive hardware (e.g., mechanical scanners or antenna arrays). We expect that this methodology could lead to breakthroughs in microwave, terahertz, optical, and even ultrasound imaging.

  6. Image inpainting and super-resolution using non-local recursive deep convolutional network with skip connections

    NASA Astrophysics Data System (ADS)

    Liu, Miaofeng

    2017-07-01

    In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most earlier methods, which require prior knowledge of the locations of corrupted pixels, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because existing approaches perform poorly on images with large corruptions, or when inpainting must be combined with super-resolution of a low-resolution image, we also share parameters within local groups of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and that it works especially well when performing super-resolution and image inpainting simultaneously.

  7. Portable microscopy platform for the clinical and environmental monitoring

    NASA Astrophysics Data System (ADS)

    Wang, Weiming; Yu, Yan; Huang, Hui; Ou, Jinping

    2016-04-01

    Light microscopy can address various diagnostic needs, such as detecting aquatic parasites and bacteria (e.g., E. coli) in water, and also provides a method for screening red tides. Traditional smartphone-based microscopes, created by adding external lenses, cannot overcome the trade-off between field-of-view (FOV) and resolution. In this paper, we demonstrate a non-contact, lightweight, and cost-effective microscopy platform that can image highly dense samples with a spatial resolution of ~0.8 μm over a FOV of >1 mm². After capturing the raw images, we apply a pixel super-resolution algorithm to improve the image resolution and overcome hardware limitations. The system could serve as a point-of-care diagnostic solution in resource-limited settings. We validated its performance by imaging resolution test targets, squamous cell carcinoma (SqCC) samples, and green algae, which are relevant to detecting squamous carcinoma and red tides.

  8. Face sketch recognition based on edge enhancement via deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    In this paper, we address the face sketch recognition problem. First, we use the eigenface algorithm to convert a sketch into a synthesized face image. Then, to address the low-level vision problems in the synthesized face image, a super-resolution reconstruction algorithm based on a convolutional neural network (CNN) is employed to improve its visual quality. Specifically, we use a lightweight super-resolution structure that learns a residual mapping instead of directly mapping feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we apply Linear Discriminant Analysis (LDA) to perform face sketch recognition on the synthesized face images both before and after super-resolution. Extensive experiments on the CUHK face sketch database (CUFS) demonstrate that the recognition rate of a Support Vector Machine (SVM) classifier improves from 65% to 69%, and that of LDA from 69% to 75%. Moreover, the super-resolved synthesized face images not only better capture details such as hair, nose, and mouth, but also effectively improve recognition accuracy.

  9. Operating organic light-emitting diodes imaged by super-resolution spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, John T.; Granick, Steve

    Super-resolution stimulated emission depletion (STED) microscopy is adapted here for materials characterization that would not otherwise be possible. With the example of organic light-emitting diodes (OLEDs), spectral imaging with pixel-by-pixel wavelength discrimination allows us to resolve the local-chain environment encoded in the spectral response of the semiconducting polymer, and to correlate chain packing with local electroluminescence by using externally applied current as the excitation source. We observe nanoscopic defects that would be unresolvable by traditional microscopy; they are revealed in electroluminescence maps of operating OLEDs with 50 nm spatial resolution. We find that the brightest emission comes from regions with more densely packed chains. Conventional microscopy of an operating OLED would lack the resolution needed to discriminate these features, while traditional methods to resolve nanoscale features generally cannot be applied while the device is operating. As a result, this work points the way towards real-time analysis of materials design principles in devices as they actually operate.

  10. Operating organic light-emitting diodes imaged by super-resolution spectroscopy

    DOE PAGES

    King, John T.; Granick, Steve

    2016-06-21

    Super-resolution stimulated emission depletion (STED) microscopy is adapted here for materials characterization that would not otherwise be possible. With the example of organic light-emitting diodes (OLEDs), spectral imaging with pixel-by-pixel wavelength discrimination allows us to resolve the local-chain environment encoded in the spectral response of the semiconducting polymer, and to correlate chain packing with local electroluminescence by using externally applied current as the excitation source. We observe nanoscopic defects that would be unresolvable by traditional microscopy; they are revealed in electroluminescence maps of operating OLEDs with 50 nm spatial resolution. We find that the brightest emission comes from regions with more densely packed chains. Conventional microscopy of an operating OLED would lack the resolution needed to discriminate these features, while traditional methods to resolve nanoscale features generally cannot be applied while the device is operating. As a result, this work points the way towards real-time analysis of materials design principles in devices as they actually operate.

  11. A Bayesian Nonparametric Approach to Image Super-Resolution.

    PubMed

    Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid

    2015-02-01

    Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.

  12. Development and Characterization of a Dither-Based Super-Resolution Reconstruction Method for Fiber Imaging Arrays

    NASA Astrophysics Data System (ADS)

    Languirand, Eric Robert

    Chemical imaging is an important tool for providing insight into the function, role, and spatial distribution of analytes. This thesis describes the use of imaging fiber bundles (IFBs) for super-resolution reconstruction using surface-enhanced Raman scattering (SERS), showing a resolution improvement with arrayed bundles for the first time. It also characterizes the IFBs with regard to cross-talk as a function of aperture size. The first part of this thesis characterizes both tapered and untapered bundles in terms of cross-talk, defined as the amount of light leaking from a central fiber element in the imaging fiber bundle to surrounding fiber elements. To make this measurement ubiquitous for all imaging bundles, quantum dots were employed. Untapered and tapered IFBs exhibit cross-talk of 2% or less, with fiber elements down to 32 nm. The second part of this thesis employs a super-resolution reconstruction algorithm using projection onto convex sets for resolution improvement. When using IFB arrays, the point spread function (PSF) of the array can be known accurately if the fiber elements overfill the detector pixel array; therefore, the use of the known PSF was evaluated against a general blurring kernel. Relative increases in resolution of 12% and 2% at the 95% confidence level were found, compared to a reference image, for the general blurring kernel and the PSF, respectively. The third part of this thesis shows for the first time the use of SERS with a dithered IFB array coupled with super-resolution reconstruction. The resolution improvement across a step edge is approximately 20% compared to a reference image. This provides an additional means of increasing the resolution of fiber bundles beyond tapering alone, and a new avenue for nanoscale imaging with these bundles.
    Lastly, synthetic data with varying signal-to-noise ratios (S/N) were employed to explore the relationship between S/N and the reconstruction process. In general, increasing the number of images used in the reconstruction and increasing the S/N improve the reconstruction, yielding larger gains in resolution.
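
    The projection-onto-convex-sets reconstruction used in the second part can be illustrated on a 1-D toy problem (this is a generic POCS sketch, not the thesis code): each low-resolution sample defines an affine constraint on the high-resolution signal, and the estimate is cyclically projected onto each constraint in turn.

```python
def downsample(x, f):
    # box-average every f high-resolution samples into one LR sample
    return [sum(x[i * f:(i + 1) * f]) / f for i in range(len(x) // f)]

def pocs_sr(frames, shifts, factor, hr_len, sweeps=500):
    """Projection onto convex sets: x is cyclically projected onto the
    affine constraint of each LR observation (the box average over its
    contributing HR pixels must equal the observed value)."""
    # crude initial guess: replicate the first frame's samples
    x = [v for v in frames[0] for _ in range(factor)]
    for _ in range(sweeps):
        for y, s in zip(frames, shifts):
            for i, yi in enumerate(y):
                idx = [(i * factor + j + s) % hr_len for j in range(factor)]
                pred = sum(x[m] for m in idx) / factor
                corr = yi - pred  # adding corr to each pixel moves the mean to yi
                for m in idx:
                    x[m] += corr
    return x
```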

  13. Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data

    NASA Astrophysics Data System (ADS)

    Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.

    2015-04-01

    In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high-resolution RapidEye images, and the crucial, most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model, supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic correction module that combines a physical approach with the Minnaert method and uses an anisotropic illumination model; and modules for generating high-level products. Various parts of the chain have also been implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V, and support for the full-frame sensor currently under development by SPACE-SI is planned. This paper focuses on adapting the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy on RapidEye images, was improved with a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy even on very high resolution optical satellite data.
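
    Super-fine positioning against a reference raster chip is, in essence, sub-pixel template matching. A minimal 1-D sketch (illustrative only; the actual STORM sub-module is not described in detail here) combines a normalized cross-correlation scan with parabolic interpolation of the correlation peak:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da > 0 and db > 0 else 0.0

def subpixel_match(ref, tpl):
    """Integer NCC peak plus a parabola fitted through the peak score and
    its two neighbours, giving a sub-pixel matching position."""
    scores = [ncc(ref[o:o + len(tpl)], tpl)
              for o in range(len(ref) - len(tpl) + 1)]
    k = max(range(len(scores)), key=scores.__getitem__)
    if 0 < k < len(scores) - 1:
        l, c, r = scores[k - 1], scores[k], scores[k + 1]
        denom = l - 2 * c + r
        if denom < 0:  # proper local maximum
            return k + 0.5 * (l - r) / denom
    return float(k)
```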

  14. High Resolution Bathymetry Estimation Improvement with Single Image Super-Resolution Algorithm Super-Resolution Forests

    DTIC Science & Technology

    2017-01-26

    Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/5514--17-9692.

  15. Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface

    PubMed Central

    Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun

    2016-01-01

    Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long-sought goal in many imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging using a spatio-temporally dispersive metasurface and an image reconstruction algorithm. The spatio-temporal dispersion of the metasurface makes the single-shot, single-sensor imager feasible for super- and high-resolution imaging, since it efficiently converts the detailed spatial information of the probed object into a one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the achievable resolution depends on the distance between the object and the metasurface: when the object is placed in the vicinity of the metasurface, super-resolution imaging is realized. The proposed imaging methodology provides a unique means of performing real-time data acquisition and producing high-/super-resolution images without expensive hardware (e.g., mechanical scanners or antenna arrays). We expect that this methodology could lead to breakthroughs in microwave, terahertz, optical, and even ultrasound imaging. PMID:27246668

  16. Underwater video enhancement using multi-camera super-resolution

    NASA Astrophysics Data System (ADS)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields such as medicine, communications or satellite, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that permits to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
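
    The two quality metrics used in the comparison are standard. A minimal sketch of PSNR, plus the single-window form of the SSIM statistic (the full SSIM index averages this statistic over local windows), might look like:

```python
import math

def psnr(ref, img, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-size images
    given as flattened lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)
    return float('inf') if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """SSIM statistic computed over a single window covering both images;
    the standard constants c1, c2 stabilise the ratio near zero."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```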

  17. Markov-random-field-based super-resolution mapping for identification of urban trees in VHR images

    NASA Astrophysics Data System (ADS)

    Ardila, Juan P.; Tolpekin, Valentyn A.; Bijker, Wietske; Stein, Alfred

    2011-11-01

    Identification of tree crowns from remote sensing requires detailed spectral information and submeter spatial resolution imagery. Traditional pixel-based classification techniques do not fully exploit the spatial and spectral characteristics of remote sensing datasets. We propose a contextual and probabilistic method for detecting tree crowns in urban areas using a Markov-random-field-based super-resolution mapping (SRM) approach in very high resolution images. Our method defines an objective energy function in terms of the conditional probabilities of the panchromatic and multispectral images and locally optimizes the labeling of tree crown pixels. Energy and model parameter values are estimated from multiple implementations of SRM in tuning areas, and the method is applied to QuickBird images to produce a 0.6 m tree crown map of a city in The Netherlands. The SRM output shows an identification rate of 66%, with commission and omission errors for small trees and in shrub areas. The method outperforms tree crown identification obtained with maximum likelihood, support vector machines, and SRM at nominal resolution (2.4 m).

  18. Pixel decomposition for tracking in low resolution videos

    NASA Astrophysics Data System (ADS)

    Govinda, Vivekanand; Ralph, Jason F.; Spencer, Joseph W.; Goulermas, John Y.; Yang, Lihua; Abbas, Alaa M.

    2008-04-01

    This paper describes a novel set of algorithms that allows indoor activity to be monitored using data from very low resolution imagers and other non-intrusive sensors. The objects are not resolved but activity may still be determined. This allows the use of such technology in sensitive environments where privacy must be maintained. Spectral un-mixing algorithms from remote sensing were adapted for this environment. These algorithms allow the fractional contributions from different colours within each pixel to be estimated and this is used to assist in the detection and monitoring of small objects or sub-pixel motion.
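
    In its simplest linear two-endmember form, spectral un-mixing of this kind reduces to a per-pixel least-squares fit of fractional abundances. A minimal sketch (illustrative; the adapted remote-sensing algorithms handle more endmembers and constraints):

```python
def unmix_two(pixel, e1, e2):
    """Fractional abundance a of endmember spectrum e1 in a pixel modelled
    as  a*e1 + (1-a)*e2 + noise.  Rearranging, pixel - e2 = a*(e1 - e2),
    so a follows by 1-D least squares; the result is clipped to [0, 1]."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    a = num / den if den else 0.0
    return min(1.0, max(0.0, a))
```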

  19. Evaluation of PET Imaging Resolution Using 350 μm Pixelated CZT as a VP-PET Insert Detector

    NASA Astrophysics Data System (ADS)

    Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2014-02-01

    A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied for high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between the CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in a virtual-pinhole PET geometry. The LSO detector is a 20 × 20 array with 1.6 mm pitch and 10 mm thickness. The CZT detector uses a 20 × 20 × 5 mm substrate with 350 μm pitch pixelated anodes and a coplanar cathode. A NEMA NU4 Na-22 point source, 250 μm in diameter, was imaged with this system. Experiments show that the image resolution for single-pixel photopeak events was 590 μm FWHM, while that for double-pixel photopeak events was 640 μm FWHM. Including double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation of the same PET system in GATE (Geant4 Application for Tomographic Emission), defining the LSO detectors as the scanner ring and the 350 μm pixelated CZT detectors as the insert ring. The GATE-simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of the MC-simulated data (which did not account for positron range or acolinearity) was 460 μm FWHM for single-pixel events. The image resolutions from experiment, MC simulation, and theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. An interpolation algorithm for charge-sharing events was also investigated; PET images reconstructed with it show improved resolution compared with those reconstructed without it.

  20. Easy-DHPSF open-source software for three-dimensional localization of single molecules with precision beyond the optical diffraction limit.

    PubMed

    Lew, Matthew D; von Diezmann, Alexander R S; Moerner, W E

    2013-02-25

    Automated processing of double-helix (DH) microscope images of single molecules (SMs) streamlines the protocol required to obtain super-resolved three-dimensional (3D) reconstructions of ultrastructures in biological samples by single-molecule active control microscopy. Here, we present a suite of MATLAB subroutines, bundled with an easy-to-use graphical user interface (GUI), that facilitates 3D localization of single emitters (e.g. SMs, fluorescent beads, or quantum dots) with precisions of tens of nanometers in multi-frame movies acquired using a wide-field DH epifluorescence microscope. The algorithmic approach is based upon template matching for SM recognition and least-squares fitting for 3D position measurement, both of which are computationally expedient and precise. Overlapping images of SMs are ignored, and the precision of least-squares fitting is not as high as maximum likelihood-based methods. However, once calibrated, the algorithm can fit 15-30 molecules per second on a 3 GHz Intel Core 2 Duo workstation, thereby producing a 3D super-resolution reconstruction of 100,000 molecules over a 20×20×2 μm field of view (processing 128×128 pixels × 20000 frames) in 75 min.

  1. SNSMIL, a real-time single molecule identification and localization algorithm for super-resolution fluorescence microscopy

    PubMed Central

    Tang, Yunqing; Dai, Luru; Zhang, Xiaoming; Li, Junbai; Hendriks, Johnny; Fan, Xiaoming; Gruteser, Nadine; Meisenberg, Annika; Baumann, Arnd; Katranidis, Alexandros; Gensch, Thomas

    2015-01-01

    Single-molecule localization based super-resolution fluorescence microscopy offers significantly higher spatial resolution than predicted by Abbe's resolution limit for far-field optical microscopy. Such super-resolution images are reconstructed from wide-field or total internal reflection single-molecule fluorescence recordings. Discriminating the emission of single fluorescent molecules from background noise fluctuations remains a great challenge in current data analysis. Here we present a real-time and robust single-molecule identification and localization algorithm, SNSMIL (Shot Noise based Single Molecule Identification and Localization). The algorithm is based on the intrinsic nature of noise, i.e., its Poisson (shot noise) characteristics, and a new identification criterion, QSNSMIL, is defined. SNSMIL improves the identification accuracy of single fluorescent molecules in experimental and simulated datasets with high and inhomogeneous background. The implementation of SNSMIL relies on a graphics processing unit (GPU), making real-time analysis feasible, as shown for real experimental and simulated datasets. PMID:26098742
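
    The core intuition, that a genuine single-molecule signal must stand out against the shot-noise fluctuations of the background, can be sketched as a simple k-sigma test under Poisson statistics (an illustration of the idea, not the actual QSNSMIL criterion):

```python
import math

def detect_emitters(frame, background, k=4.0):
    """Flag pixels whose counts exceed the expected background by more than
    k standard deviations, where std = sqrt(background) because photon
    counting is Poisson (shot-noise) limited."""
    hits = []
    for idx, (c, b) in enumerate(zip(frame, background)):
        if c > b + k * math.sqrt(b):
            hits.append(idx)
    return hits
```

    Passing a per-pixel background estimate (rather than a single scalar) is what lets a test like this cope with the inhomogeneous backgrounds the abstract mentions.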

  2. Maximum likelihood positioning and energy correction for scintillation detectors

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-01

    An algorithm for determining the crystal pixel and the gamma-ray energy in scintillation detectors for PET is presented. The algorithm uses Likelihood Maximisation (ML) and is therefore inherently robust to missing data caused by defective or paralysed photo-detector pixels. We tested the algorithm on a highly integrated, MRI-compatible small-animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital silicon photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and a length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count-rate performance, and image noise, and compared these values to those obtained with center-of-gravity (CoG) positioning methods for different scintillation-light trigger thresholds and energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage of the ML method was observed when comparing the scanner's overall singles and prompt detection efficiency, image noise, and energy resolution to the CoG-based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that achieved the highest energy resolution, count-rate performance, and spatial resolution at the same time.
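
    The robustness to dead photo-detector pixels follows directly from how a Poisson likelihood is maximized: missing observations simply contribute no term. A toy sketch of ML crystal-pixel positioning (illustrative; a real detector's light-response tables are measured, not hard-coded):

```python
import math

def ml_crystal(counts, lrf):
    """Pick the crystal pixel whose expected light distribution (LRF) best
    explains the observed photo-detector counts under a Poisson model.
    Dead or paralysed detector pixels are passed as None and skipped,
    which is what makes ML positioning robust to missing data."""
    best, best_ll = None, -math.inf
    for crystal, expected in lrf.items():
        ll = 0.0
        for n, lam in zip(counts, expected):
            if n is None:
                continue  # missing observation: omit its likelihood term
            ll += n * math.log(lam) - lam  # log(n!) dropped, constant in crystal
        if ll > best_ll:
            best, best_ll = crystal, ll
    return best
```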

  3. Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.

    PubMed

    Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban

    2015-07-20

    In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.

  4. Performance improvements of wavelength-shifting-fiber neutron detectors using high-resolution positioning algorithms

    DOE PAGES

    Wang, C. L.

    2016-05-17

    On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods are proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the nonlinear relation of photon-number profiles vs. x-pixels is linearized and neutron positions are determined. The proposed algorithms give an average position error of 0.03-0.08 pixels, much smaller than that (0.29 pixels) of a traditional maximum-photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and equivalent or better instrument resolution in powder diffraction than the MPA. These characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
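
    A linearization of this kind is easy to demonstrate: for a Gaussian light-response function the log of the photon counts is a parabola in pixel position, so three samples around the maximum give the peak position in closed form (a generic sketch in the spirit of, but not identical to, the proposed algorithms):

```python
import math

def gaussian_peak_position(counts):
    """Sub-pixel peak location from a photon-count profile, assuming a
    Gaussian LRF: ln(counts) is then exactly a parabola in pixel index,
    so the vertex of a parabola through the maximum and its two
    neighbours recovers the position in closed form."""
    k = max(range(len(counts)), key=counts.__getitem__)
    if k == 0 or k == len(counts) - 1:
        return float(k)  # peak at the edge: no neighbours to interpolate
    l, c, r = (math.log(counts[k + d]) for d in (-1, 0, 1))
    denom = l - 2 * c + r
    return k + 0.5 * (l - r) / denom if denom else float(k)
```

    For noise-free Gaussian data the recovery is exact; with Poisson noise the estimate degrades gracefully, which is the regime the paper's error figures quantify.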

  5. Time reversal and phase coherent music techniques for super-resolution ultrasound imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lianjie; Labyed, Yassin

    Systems and methods for super-resolution ultrasound imaging using a windowed and generalized TR-MUSIC algorithm that divides the imaging region into overlapping sub-regions and applies the TR-MUSIC algorithm to the windowed backscattered ultrasound signals corresponding to each sub-region. The algorithm is also structured to account for the ultrasound attenuation in the medium and the finite-size effects of ultrasound transducer elements. A modified TR-MUSIC imaging algorithm is used to account for ultrasound scattering from both density and compressibility contrasts. The phase response of ultrasound transducer elements is accounted for in a PC-MUSIC system.

  6. Giga-pixel lensfree holographic microscopy and tomography using color image sensors.

    PubMed

    Isikman, Serhan O; Greenbaum, Alon; Luo, Wei; Coskun, Ahmet F; Ozcan, Aydogan

    2012-01-01

    We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm². This constitutes a digital image with ~0.7 billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ±50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm, in x, y and z, respectively, creating an effective voxel size of ~0.03 µm³ across a sample volume of ~5 mm³, which is equivalent to >150 billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.

  7. Super-Resolution Imaging Strategies for Cell Biologists Using a Spinning Disk Microscope

    PubMed Central

    Hosny, Neveen A.; Song, Mingying; Connelly, John T.; Ameer-Beg, Simon; Knight, Martin M.; Wheeler, Ann P.

    2013-01-01

    In this study we use a spinning disk confocal microscope (SD) to generate super-resolution images of multiple cellular features from any plane in the cell. We obtain super-resolution images by using stochastic intensity fluctuations of biological probes, combining Photoactivated Localization Microscopy (PALM)/Stochastic Optical Reconstruction Microscopy (STORM) methodologies. We compared different image analysis algorithms for processing super-resolution data to identify the most suitable for analysis of particular cell structures. SOFI was chosen for X and Y and achieved a resolution of ca. 80 nm; however, higher resolution (down to ca. 30 nm) was possible, dependent on the super-resolution image analysis algorithm used. Our method uses low laser power and fluorescent probes which are available either commercially or through the scientific community, and is therefore gentle enough for biological imaging. Through comparative studies with structured illumination microscopy (SIM) and widefield epifluorescence imaging we identified that our methodology was advantageous for imaging cellular structures which are not immediately at the cell-substrate interface, including the nuclear architecture and mitochondria. We have shown that it is possible to obtain two-colour images, which highlights the potential this technique has for high-content screening, imaging of multiple epitopes and live cell imaging. PMID:24130668
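
The SOFI analysis mentioned above can be sketched in a few lines: second-order SOFI replaces each pixel with the variance of its intensity fluctuations over the movie, so a constant background cancels while blinking emitters survive and the effective PSF narrows. A minimal toy example with simulated blinking rather than real probe data (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# One emitter with a Gaussian PSF on a 32x32 field
y, x = np.mgrid[0:32, 0:32]
psf = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2 * 3.0 ** 2))

# Movie: the emitter blinks on/off frame to frame; background is constant
movie = np.array([
    state * psf + 5.0
    for state in rng.integers(0, 2, 400)
], dtype=float)

# Second-order SOFI image = per-pixel temporal variance
sofi2 = movie.var(axis=0)
# the flat background is constant in time, so it vanishes from sofi2,
# while the blinking emitter remains bright at the centre
```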

  8. Super-Resolution Algorithm in Cumulative Virtual Blanking

    NASA Astrophysics Data System (ADS)

    Montillet, J. P.; Meng, X.; Roberts, G. W.; Woolfson, M. S.

    2008-11-01

    The proliferation of mobile devices and the emergence of wireless location-based services have generated consumer demand for precise location. In this paper, the MUSIC super-resolution algorithm is applied to time delay estimation for positioning purposes in cellular networks. The goal is to position a Mobile Station with UMTS technology. The problem of Base-Station hearability is solved using Cumulative Virtual Blanking. A simple simulator is presented using DS-SS signals. The results show that the MUSIC algorithm improves the time delay estimation in both cases, i.e., with and without Cumulative Virtual Blanking.
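
A minimal frequency-domain MUSIC sketch for time-delay estimation conveys the idea: build a sample covariance across snapshots, split off the noise subspace, and scan a delay grid for pseudospectrum peaks. This illustrates the principle only; the paper's UMTS/DS-SS simulator and the Cumulative Virtual Blanking step are not reproduced, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.arange(32) * 1e5            # 32 frequency bins, 100 kHz apart
true_delays = [1.0e-6, 2.5e-6]         # two propagation paths (seconds)

def steering(tau):
    """Frequency-domain delay signature exp(-j 2*pi*f*tau)."""
    return np.exp(-2j * np.pi * freqs * tau)

# Sample covariance from noisy snapshots with random path amplitudes
snapshots = []
for _ in range(200):
    amps = rng.normal(size=2) + 1j * rng.normal(size=2)
    xvec = sum(a * steering(t) for a, t in zip(amps, true_delays))
    xvec += 0.05 * (rng.normal(size=32) + 1j * rng.normal(size=32))
    snapshots.append(xvec)
X = np.array(snapshots).T
R = X @ X.conj().T / X.shape[1]

# Noise subspace: all eigenvectors beyond the two signal eigenvalues
_, V = np.linalg.eigh(R)               # eigenvalues sorted ascending
En = V[:, :-2]

# MUSIC pseudospectrum peaks where the steering vector is orthogonal
# to the noise subspace, i.e. at the true delays
grid = np.linspace(0, 5e-6, 501)
p = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
est = grid[np.argmax(p)]               # delay of the strongest path
```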

  9. Introduction to the virtual special issue on super-resolution imaging techniques

    NASA Astrophysics Data System (ADS)

    Cao, Liangcai; Liu, Zhengjun

    2017-12-01

    Until quite recently, the resolution of optical imaging instruments, including telescopes, cameras and microscopes, was considered to be limited by the diffraction of light and by image sensors. In the past few years, many exciting super-resolution approaches have emerged that demonstrate intriguing ways to bypass the classical limit in optics and detectors. More and more research groups are engaged in the study of advanced super-resolution schemes, devices, algorithms, systems, and applications [1-6]. Super-resolution techniques involve new methods in science and engineering of optics [7,8], measurements [9,10], chemistry [11,12] and information [13,14]. Promising applications, particularly in biomedical research and semiconductor industry, have been successfully demonstrated.

  10. DMD-based LED-illumination super-resolution and optical sectioning microscopy.

    PubMed

    Dan, Dan; Lei, Ming; Yao, Baoli; Wang, Wen; Winterhalder, Martin; Zumbusch, Andreas; Qi, Yujiao; Xia, Liang; Yan, Shaohui; Yang, Yanlong; Gao, Peng; Ye, Tong; Zhao, Wei

    2013-01-01

    Super-resolution three-dimensional (3D) optical microscopy has incomparable advantages over other high-resolution microscopic technologies, such as electron microscopy and atomic force microscopy, in the study of biological molecules, pathways and events in live cells and tissues. We present a novel approach of structured illumination microscopy (SIM) by using a digital micromirror device (DMD) for fringe projection and a low-coherence LED light for illumination. A lateral resolution of 90 nm and an optical sectioning depth of 120 μm were achieved. The maximum acquisition speed for 3D imaging in the optical sectioning mode was 1.6×10⁷ pixels/second, which was mainly limited by the sensitivity and speed of the CCD camera. In contrast to other SIM techniques, the DMD-based LED-illumination SIM is cost-effective, easily switchable between multiple wavelengths, and speckle-noise-free. The 2D super-resolution and 3D optical sectioning modalities can be easily switched and applied to either fluorescent or non-fluorescent specimens.

  11. DMD-based LED-illumination Super-resolution and optical sectioning microscopy

    PubMed Central

    Dan, Dan; Lei, Ming; Yao, Baoli; Wang, Wen; Winterhalder, Martin; Zumbusch, Andreas; Qi, Yujiao; Xia, Liang; Yan, Shaohui; Yang, Yanlong; Gao, Peng; Ye, Tong; Zhao, Wei

    2013-01-01

    Super-resolution three-dimensional (3D) optical microscopy has incomparable advantages over other high-resolution microscopic technologies, such as electron microscopy and atomic force microscopy, in the study of biological molecules, pathways and events in live cells and tissues. We present a novel approach of structured illumination microscopy (SIM) by using a digital micromirror device (DMD) for fringe projection and a low-coherence LED light for illumination. A lateral resolution of 90 nm and an optical sectioning depth of 120 μm were achieved. The maximum acquisition speed for 3D imaging in the optical sectioning mode was 1.6×10⁷ pixels/second, which was mainly limited by the sensitivity and speed of the CCD camera. In contrast to other SIM techniques, the DMD-based LED-illumination SIM is cost-effective, easily switchable between multiple wavelengths, and speckle-noise-free. The 2D super-resolution and 3D optical sectioning modalities can be easily switched and applied to either fluorescent or non-fluorescent specimens. PMID:23346373

  12. Evaluation of position-estimation methods applied to CZT-based photon-counting detectors for dedicated breast CT

    PubMed Central

    Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J.

    2015-01-01

    Abstract. Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100 μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200 μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. Current experimental breast CT systems typically use detectors with a pixel size of 194 μm, with 2×2 binning during the acquisition giving an effective pixel size of 388 μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors. PMID:26158095
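
As a toy illustration of using shared charge for sub-pixel positioning, a simple charge-weighted centroid over the 3×3 neighbourhood of the maximum pixel already identifies the quadrant of the hit. The paper's maximum-likelihood and ASIC-oriented estimators are more sophisticated; the charge values below are invented.

```python
import numpy as np

PITCH = 200.0  # pixel pitch in micrometres (matching the abstract)

def quadrant(charges):
    """charges: 3x3 array around the max pixel (row 0 = top).

    Returns (qx, qy), each 0 or 1, from the sign of the
    charge-weighted centroid offset in x and y.
    """
    c = np.asarray(charges, dtype=float)
    idx = np.arange(-1, 2) * PITCH
    dx = (c.sum(axis=0) * idx).sum() / c.sum()  # x offset from column sums
    dy = (c.sum(axis=1) * idx).sum() / c.sum()  # y offset from row sums
    return int(dx >= 0), int(dy >= 0)

# A hit offset toward the right and bottom neighbours shares more
# charge with them than with the left and top ones.
shared = [[1,  2,  4],
          [2, 80, 12],
          [1,  6,  8]]
qx, qy = quadrant(shared)
```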

  13. Evaluation of position-estimation methods applied to CZT-based photon-counting detectors for dedicated breast CT.

    PubMed

    Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J

    2015-04-01

    Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100 μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200 μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. Current experimental breast CT systems typically use detectors with a pixel size of 194 μm, with 2×2 binning during the acquisition giving an effective pixel size of 388 μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors.

  14. Pixel super resolution using wavelength scanning

    DTIC Science & Technology

    2016-04-08

    the light source is adjusted to ~20 μW. The image sensor chip is a color CMOS sensor chip with a pixel size of 1.12 μm manufactured for cellphone...pitch (that is, ~ 1 μm in Figure 3a, using a CMOS sensor that has a 1.12-μm pixel pitch). For the same configuration depicted in Figure 3, utilizing...section). The a Lens-free raw holograms captured by 1.12 μm CMOS image sensor Field of view ≈ 20.5 mm2 Angle change directions for synthetic aperture

  15. Quantitative super-resolution single molecule microscopy dataset of YFP-tagged growth factor receptors.

    PubMed

    Lukeš, Tomáš; Pospíšil, Jakub; Fliegel, Karel; Lasser, Theo; Hagen, Guy M

    2018-03-01

    Super-resolution single molecule localization microscopy (SMLM) is a method for achieving resolution beyond the classical limit in optical microscopes (approx. 200 nm laterally). Yellow fluorescent protein (YFP) has been used for super-resolution single molecule localization microscopy, but less frequently than other fluorescent probes. Working with YFP in SMLM is a challenge because a lower number of photons are emitted per molecule compared with organic dyes, which are more commonly used. Publicly available experimental data can facilitate development of new data analysis algorithms. Four complete, freely available single molecule super-resolution microscopy datasets on YFP-tagged growth factor receptors expressed in a human cell line are presented, including both raw and analyzed data. We report methods for sample preparation, for data acquisition, and for data analysis, as well as examples of the acquired images. We also analyzed the SMLM datasets using a different method: super-resolution optical fluctuation imaging (SOFI). The two modes of analysis offer complementary information about the sample. A fifth single molecule super-resolution microscopy dataset acquired with the dye Alexa 532 is included for comparison purposes. This dataset has potential for extensive reuse. Complete raw data from SMLM experiments have typically not been published. The YFP data exhibit low signal-to-noise ratios, making data analysis a challenge. These datasets will be useful to investigators developing their own algorithms for SMLM, SOFI, and related methods. The data will also be useful for researchers investigating growth factor receptors such as ErbB3.

  16. Development of High Resolution Mirrors and Cd-Zn-Te Detectors for Hard X-ray Astronomy

    NASA Technical Reports Server (NTRS)

    Ramsey, Brian D.; Speegle, Chet O.; Gaskin, Jessica; Sharma, Dharma; Engelhaupt, Darell; Six, N. Frank (Technical Monitor)

    2002-01-01

    We describe the fabrication and implementation of a high-resolution conical, grazing-incidence, hard X-ray (20-70 keV) telescope. When flown aboard stratospheric balloons, these mirrors are used to image cosmic sources such as supernovae, neutron stars, and quasars. The fabrication process involves generating super-polished mandrels, mirror shell electroforming, and mirror testing. The cylindrical mandrels consist of two conical segments; each segment is approximately 305 mm long. These mandrels are first precision-ground to within approx. 1.0 micron straightness along each conical segment, then lapped and polished to less than 0.5 micron straightness. Each mandrel segment is then super-polished to an average surface roughness of approx. 3.25 angstrom rms. Through mirror shell replication, this combination of good figure and low surface roughness has enabled us to achieve 15 arcsec resolution, confirmed by X-ray measurements in the Marshall Space Flight Center 102 meter test facility. Imaging the focused X-rays requires a focal plane detector with appropriate spatial resolution. For 15 arcsec optics of 6 meter focal length, this resolution must be around 200 microns. In addition, the detector must have high efficiency, relatively high energy resolution, and low background. We are currently developing Cadmium-Zinc-Telluride fine-pixel detectors for this purpose. The detectors under study consist of a 16x16 pixel array with a pixel pitch of 300 microns and are 1 mm and 2 mm thick. At 60 keV, the measured energy resolution is around 2%.
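
The detector requirement quoted above follows from small-angle geometry: 15 arcsec at a 6 m focal length subtends about 436 μm, and sampling that blur at roughly half its width motivates the ~200 μm figure (the half-width sampling interpretation is ours). A quick check:

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)      # radians per arcsecond

focal_length_m = 6.0
# Linear size subtended by the 15 arcsec optics resolution: s = f * theta
hpd_m = focal_length_m * 15.0 * ARCSEC   # about 436 micrometres

# The 300-micron CZT pixel pitch from the abstract, as an angle on the sky
pixel_pitch_um = 300.0
pixel_angle_arcsec = (pixel_pitch_um * 1e-6 / focal_length_m) / ARCSEC
```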

  17. High resolution laboratory grating-based x-ray phase-contrast CT

    NASA Astrophysics Data System (ADS)

    Viermetz, Manuel P.; Birnbacher, Lorenz J. B.; Fehringer, Andreas; Willner, Marian; Noel, Peter B.; Pfeiffer, Franz; Herzen, Julia

    2017-03-01

    Grating-based phase-contrast computed tomography (gbPC-CT) is a promising method for imaging soft-tissue contrast without the need for any contrast agent. The focus of this study is an increase in spatial resolution without loss in sensitivity, to allow visualization of pathologies comparable to the convincing results obtained at the synchrotron. To improve the effective pixel size, a super-resolution reconstruction based on subpixel shifts, involving a deconvolution of the image, is applied to differential phase-contrast data. In our study we achieved an effective pixel size of 28 μm without any drawback in terms of sensitivity or the ability to measure quantitative data.
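
The subpixel-shift idea can be illustrated with the simplest shift-and-add scheme: low-resolution frames with known half-pixel offsets interleave exactly onto a finer grid. The study's method additionally deconvolves the detector blur, which this toy omits; the data are synthetic.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Interleave sub-pixel-shifted LR frames onto a factor-x finer grid."""
    h, w = frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    for img, (dy, dx) in zip(frames, shifts):
        hr[dy::factor, dx::factor] = img   # each frame fills one phase
    return hr

# Ground truth on the fine grid, sampled four times with half-pixel offsets
hr_truth = np.arange(16.0).reshape(4, 4)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [hr_truth[dy::2, dx::2] for dy, dx in shifts]

recovered = shift_and_add(frames, shifts)
# the four 2x2 frames tile the 4x4 grid exactly, recovering hr_truth
```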

  18. Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction.

    PubMed

    Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom

    2018-06-01

    Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality, with a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the obtained reconstructed image. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.

  19. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Chengguang; Drinkwater, Bruce W.

    In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.

  20. Multiple-image hiding using super resolution reconstruction in high-frequency domains

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Wei; Zhao, Wu-Xiang; Wang, Jun; Wang, Qiong-Hua

    2017-12-01

    In this paper, a robust multiple-image hiding method using computer-generated integral imaging and a modified super-resolution reconstruction algorithm is proposed. In our work, the host image is first transformed into frequency domains by cellular automata (CA); to preserve the quality of the stego-image, the secret images are embedded into the CA high-frequency domains. The proposed method has the following advantages: (1) robustness to geometric attacks because of the memory-distributed property of elemental images, and (2) improved quality of the reconstructed secret images, as the scheme utilizes the modified super-resolution reconstruction algorithm. The simulation results show that the proposed multiple-image hiding method outperforms other similar hiding methods and is robust to several attacks, e.g., Gaussian noise and JPEG compression attacks.

  1. Chandra ACIS Sub-pixel Resolution

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.

    2011-05-01

    We investigate how to achieve the best possible ACIS spatial resolution by binning in ACIS sub-pixels and applying an event repositioning algorithm after removing pixel randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point source sizes and (2) detecting faint point sources. The size of a bright (but not piled-up), on-axis point source can be reduced by about 20-30%. With the improved resolution, we detect 20% more faint sources when they are embedded in extended, diffuse emission in a crowded field. We further discuss the false source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure by an aliasing effect for dithered observations and does not worsen the positional accuracy.

  2. Super-Resolution in Plenoptic Cameras Using FPGAs

    PubMed Central

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-01-01

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes. PMID:24841246

  3. Super-resolution in plenoptic cameras using FPGAs.

    PubMed

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  4. Light-sheet Bayesian microscopy enables deep-cell super-resolution imaging of heterochromatin in live human embryonic stem cells.

    PubMed

    Hu, Ying S; Zhu, Quan; Elkins, Keri; Tse, Kevin; Li, Yu; Fitzpatrick, James A J; Verma, Inder M; Cang, Hu

    2013-01-01

    Heterochromatin in the nucleus of human embryonic cells plays an important role in the epigenetic regulation of gene expression. The architecture of heterochromatin and its dynamic organization remain elusive because of the lack of fast and high-resolution deep-cell imaging tools. We enable this task by advancing instrumental and algorithmic implementation of the localization-based super-resolution technique. We present light-sheet Bayesian super-resolution microscopy (LSBM). We adapt light-sheet illumination for super-resolution imaging by using a novel prism-coupled condenser design to illuminate a thin slice of the nucleus with high signal-to-noise ratio. Coupled with a Bayesian algorithm that resolves overlapping fluorophores from high-density areas, we show, for the first time, nanoscopic features of the heterochromatin structure in both fixed and live human embryonic stem cells. The enhanced temporal resolution allows capturing the dynamic change of heterochromatin with a lateral resolution of 50-60 nm on a time scale of 2.3 s. Light-sheet Bayesian microscopy opens up broad new possibilities for probing nanometer-scale nuclear structures, real-time sub-cellular processes, and other previously difficult-to-access intracellular regions of living cells at the single-molecule and single-cell level.

  5. Light-sheet Bayesian microscopy enables deep-cell super-resolution imaging of heterochromatin in live human embryonic stem cells

    PubMed Central

    Hu, Ying S; Zhu, Quan; Elkins, Keri; Tse, Kevin; Li, Yu; Fitzpatrick, James A J; Verma, Inder M; Cang, Hu

    2016-01-01

    Background Heterochromatin in the nucleus of human embryonic cells plays an important role in the epigenetic regulation of gene expression. The architecture of heterochromatin and its dynamic organization remain elusive because of the lack of fast and high-resolution deep-cell imaging tools. We enable this task by advancing instrumental and algorithmic implementation of the localization-based super-resolution technique. Results We present light-sheet Bayesian super-resolution microscopy (LSBM). We adapt light-sheet illumination for super-resolution imaging by using a novel prism-coupled condenser design to illuminate a thin slice of the nucleus with high signal-to-noise ratio. Coupled with a Bayesian algorithm that resolves overlapping fluorophores from high-density areas, we show, for the first time, nanoscopic features of the heterochromatin structure in both fixed and live human embryonic stem cells. The enhanced temporal resolution allows capturing the dynamic change of heterochromatin with a lateral resolution of 50–60 nm on a time scale of 2.3 s. Conclusion Light-sheet Bayesian microscopy opens up broad new possibilities for probing nanometer-scale nuclear structures, real-time sub-cellular processes, and other previously difficult-to-access intracellular regions of living cells at the single-molecule and single-cell level. PMID:27795878

  6. Example-based super-resolution for single-image analysis from the Chang'e-1 Mission

    NASA Astrophysics Data System (ADS)

    Wu, Fan-Lu; Wang, Xiang-Jun

    2016-11-01

    Due to the low spatial resolution of images taken from the Chang'e-1 (CE-1) orbiter, the details of the lunar surface are blurred and lost. Considering the limited spatial resolution of image data obtained by a CCD camera on CE-1, an example-based super-resolution (SR) algorithm is employed to obtain high-resolution (HR) images. SR reconstruction is important for the application of image data to increase the resolution of images. In this article, a novel example-based algorithm is proposed to implement SR reconstruction by single-image analysis, and the computational cost is reduced compared to other example-based SR methods. The results show that this method can enhance the resolution of images using SR and recover detailed information about the lunar surface. Thus it can be used for surveying HR terrain and geological features. Moreover, the algorithm is significant for the HR processing of remotely sensed images obtained by other imaging systems.

  7. Super-resolution reconstruction of MR image with a novel residual learning network algorithm

    NASA Astrophysics Data System (ADS)

    Shi, Jun; Liu, Qingping; Wang, Chaofeng; Zhang, Qi; Ying, Shihui; Xu, Haoyu

    2018-04-01

    Spatial resolution is one of the key parameters of magnetic resonance imaging (MRI). The image super-resolution (SR) technique offers an alternative approach to improve the spatial resolution of MRI due to its simplicity. Convolutional neural networks (CNN)-based SR algorithms have achieved state-of-the-art performance, in which the global residual learning (GRL) strategy is now commonly used due to its effectiveness for learning image details for SR. However, the partial loss of image details usually happens in a very deep network due to the degradation problem. In this work, we propose a novel residual learning-based SR algorithm for MRI, which combines both multi-scale GRL and shallow network block-based local residual learning (LRL). The proposed LRL module works effectively in capturing high-frequency details by learning local residuals. One simulated MRI dataset and two real MRI datasets have been used to evaluate our algorithm. The experimental results show that the proposed SR algorithm achieves superior performance to all of the other compared CNN-based SR algorithms in this work.

  8. Super Resolution and Interference Suppression Technique applied to SHARAD Radar Data

    NASA Astrophysics Data System (ADS)

    Raguso, M. C.; Mastrogiuseppe, M.; Seu, R.; Piazzo, L.

    2017-12-01

    We will present a super-resolution and interference suppression technique applied to the data acquired by the SHAllow RADar (SHARAD) on board NASA's 2005 Mars Reconnaissance Orbiter (MRO) mission, currently operating around Mars [1]. The algorithms improve the range resolution roughly by a factor of 3 and the Signal to Noise Ratio (SNR) by several decibels. Range compression algorithms usually adopt conventional Fourier transform techniques, which are limited in resolution by the transmitted signal bandwidth, analogous to Rayleigh's criterion in optics. In this work, we investigate a super-resolution method based on autoregressive models and linear prediction techniques [2]. Starting from the estimation of the linear prediction coefficients from the spectral data, the algorithm performs radar bandwidth extrapolation (BWE), thereby improving the range resolution of the pulse-compressed coherent radar data. Moreover, EMIs (ElectroMagnetic Interferences) are detected and the spectrum is interpolated in order to reconstruct an interference-free spectrum, thereby improving the SNR. The algorithm can be applied to the single complex-look image after synthetic aperture (SAR) processing. We apply the proposed algorithm to simulated as well as real radar data. We will demonstrate the effective enhancement in vertical resolution with respect to the classical spectral estimator. We will show that the imaging of the subsurface layered structures observed in radargrams is improved, allowing additional insights for the scientific community in the interpretation of the SHARAD radar data, which will help to further our understanding of the formation and evolution of known geological features on Mars. References: [1] Seu et al. 2007, Science, 317, 1715-1718. [2] K.M. Cuomo, "A Bandwidth Extrapolation Technique for Improved Range Resolution of Coherent Radar Data", Project Report CJP-60, Revision 1, MIT Lincoln Laboratory (4 Dec. 1992).
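
The bandwidth extrapolation step can be sketched as follows: spectral samples of point scatterers obey an autoregressive model, so linear-prediction coefficients fitted in-band extend the spectrum beyond the measured band, which sharpens the compressed pulse. This toy follows the spirit of the Cuomo technique cited above, with invented numbers and none of the SHARAD-specific processing.

```python
import numpy as np

def ar_fit(x, order):
    """Least-squares forward linear-prediction coefficients."""
    rows = [x[i:i + order][::-1] for i in range(len(x) - order)]
    A, b = np.array(rows), x[order:]
    return np.linalg.lstsq(A, b, rcond=None)[0]

def extrapolate(x, coeffs, n_extra):
    """Extend the sequence by repeatedly applying the AR recursion."""
    x = list(x)
    for _ in range(n_extra):
        x.append(np.dot(coeffs, x[-len(coeffs):][::-1]))
    return np.array(x)

# Spectrum of two point scatterers: a sum of complex exponentials
k = np.arange(64)
spectrum = np.exp(2j * np.pi * 0.11 * k) + 0.7 * np.exp(2j * np.pi * 0.23 * k)
measured = spectrum[:40]             # only 40 of 64 frequency bins measured

coeffs = ar_fit(measured, order=2)   # two scatterers -> order-2 AR model
extended = extrapolate(measured, coeffs, n_extra=24)
err = np.max(np.abs(extended[40:] - spectrum[40:]))
# noise-free data extrapolates the unmeasured band essentially exactly
```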

  9. An asynchronous data-driven readout prototype for CEPC vertex detector

    NASA Astrophysics Data System (ADS)

    Yang, Ping; Sun, Xiangming; Huang, Guangming; Xiao, Le; Gao, Chaosong; Huang, Xing; Zhou, Wei; Ren, Weiping; Li, Yashu; Liu, Jianchao; You, Bihui; Zhang, Li

    2017-12-01

    The Circular Electron Positron Collider (CEPC) is proposed as a Higgs boson and/or Z boson factory for high-precision measurements of the Higgs boson. The precision of the secondary-vertex impact parameter plays an important role in such measurements, which typically rely on flavor tagging. Silicon CMOS Pixel Sensors (CPS) are therefore the most promising technology candidate for the CEPC vertex detector, as they can feature high position resolution, low power consumption and fast readout simultaneously. For the R&D of the CEPC vertex detector, we have developed a prototype, MIC4, in the TowerJazz 180 nm CMOS Image Sensor (CIS) process. We have proposed and implemented a new architecture of asynchronous zero-suppressed data-driven readout inside the matrix, combined with a binary front-end inside the pixel. The matrix contains 128 rows and 64 columns with a small pixel pitch of 25 μm. The readout architecture combines the traditional OR-gate chain inside a super pixel with a priority arbiter tree between the super pixels, reading out only hit pixels. The MIC4 architecture is described in more detail in this paper. The chip will be taped out in May and characterized when it comes back from fabrication.

  10. Circuit for high resolution decoding of multi-anode microchannel array detectors

    NASA Technical Reports Server (NTRS)

    Kasle, David B. (Inventor)

    1995-01-01

    A circuit for high-resolution decoding of multi-anode microchannel array detectors is described, consisting of input registers accepting transient inputs from the anode array; anode encoding logic circuits connected to the input registers; midpoint pipeline registers connected to the anode encoding logic circuits; and pixel decoding logic circuits connected to the midpoint pipeline registers. A high-resolution algorithm circuit operates in parallel with the pixel decoding logic circuit and computes a high-resolution least significant bit to enhance the detector's spatial resolution, halving the pixel size and doubling the number of pixels in each axis of the anode array. A multiplexer connected to the pixel decoding logic circuit allows a user-selectable pixel address output according to the actual anode array size. An output register concatenates the high-resolution least significant bit onto the standard ten-bit pixel address to provide an eleven-bit pixel address, and also stores the full eleven-bit address. A timing and control state machine connected to the input registers, the anode encoding logic circuits, and the output register manages the overall operation of the circuit.
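    The address concatenation described above amounts to a one-bit left shift of the ten-bit address followed by OR-ing in the high-resolution bit. A hypothetical sketch of the output-register arithmetic (the function name and values are illustrative, not from the patent):

```python
def full_address(pixel_addr_10bit, hires_lsb):
    """Concatenate the high-resolution least significant bit onto a
    standard 10-bit pixel address, yielding an 11-bit address."""
    assert 0 <= pixel_addr_10bit < 1 << 10
    assert hires_lsb in (0, 1)
    return (pixel_addr_10bit << 1) | hires_lsb

# Pixel 513 with the half-pixel bit set maps to address 1027.
print(full_address(513, 1))  # 1027
```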

  11. Super-Resolution of Multi-Pixel and Sub-Pixel Images for the SDI

    DTIC Science & Technology

    1993-06-08

    where the phase of the transmitted signal is not needed. The Wigner-Ville distribution (WVD) of a real signal s(t), associated with the complex analytic signal z(t), is a time-frequency distribution defined as W(t, f) = ∫ z(t + τ/2) z*(t − τ/2) exp(−i2πfτ) dτ. (45) Note that the WVD is the double Fourier... B. Boashash, O. P. Kenny and H. J. Whitehouse, "Radar imaging using the Wigner-Ville distribution", in Real-Time Signal Processing, J. P. Letellier...
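    A direct discretization of the Wigner-Ville distribution quoted above can be computed as in the sketch below. This is a generic textbook-style implementation, not the report's; the symmetric lag range and normalization are common conventional choices.

```python
import numpy as np

def wigner_ville(z):
    """Discrete Wigner-Ville distribution of an analytic signal z.

    W[k, n] = sum_m z[n+m] * conj(z[n-m]) * exp(-i 4*pi*k*m / N),
    i.e. a DFT over the symmetric lag variable m at each time n."""
    z = np.asarray(z, dtype=complex)
    n_samp = len(z)
    W = np.zeros((n_samp, n_samp))
    for n in range(n_samp):
        half = min(n, n_samp - 1 - n)      # largest symmetric lag at time n
        m = np.arange(-half, half + 1)
        kernel = z[n + m] * np.conj(z[n - m])
        phases = np.exp(-4j * np.pi * np.outer(np.arange(n_samp), m) / n_samp)
        W[:, n] = np.real(phases @ kernel)  # WVD is real by construction
    return W

# A linear chirp concentrates along its instantaneous frequency.
t = np.arange(128)
chirp = np.exp(1j * 2 * np.pi * (0.1 * t + 0.001 * t ** 2))
W = wigner_ville(chirp)
print(W.shape)  # (128, 128)
```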

  12. Lensfree super-resolution holographic microscopy using wetting films on a chip

    NASA Astrophysics Data System (ADS)

    Mudanyali, Onur; Bishara, Waheb; Ozcan, Aydogan

    2011-08-01

    We investigate the use of wetting films to significantly improve the imaging performance of lensfree pixel super-resolution on-chip microscopy, achieving < 1 μm spatial resolution over a large imaging area of ~24 mm2. Formation of an ultra-thin wetting film over the specimen effectively creates a micro-lens effect over each object, which significantly improves the signal-to-noise ratio and therefore the resolution of our lensfree images. We validate the performance of this approach through lensfree on-chip imaging of various objects having fine morphological features (with dimensions of e.g., ≤0.5 μm), such as Escherichia coli (E. coli), human sperm, Giardia lamblia trophozoites, polystyrene micro-beads, as well as red blood cells. These results are especially important for the development of highly sensitive field-portable microscopic analysis tools for resource-limited settings.

  13. Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    The emergence of spectral pixel-level color filters has enabled the development of hyperspectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. This new class of hyperspectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time, while simultaneously providing an operator the benefit of enhanced-discrimination-color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation that provides essential spectral content analysis, e.g., detection or classification. The second is presentation of the video to an operator, offering the best display of the content depending on the task at hand, e.g., spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel, or they can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several concepts of operation, including detection and tracking. We also compare the demosaicking results to those of multi-frame super-resolution, as well as to combined multi-frame and multi-band processing.

  14. Nanoscale Spatiotemporal Diffusion Modes Measured by Simultaneous Confocal and Stimulated Emission Depletion Nanoscopy Imaging.

    PubMed

    Schneider, Falk; Waithe, Dominic; Galiani, Silvia; Bernardino de la Serna, Jorge; Sezgin, Erdinc; Eggeling, Christian

    2018-06-19

    The diffusion dynamics in the cellular plasma membrane provide crucial insights into molecular interactions, organization, and bioactivity. Beam-scanning fluorescence correlation spectroscopy combined with super-resolution stimulated emission depletion nanoscopy (scanning STED-FCS) measures such dynamics with high spatial and temporal resolution. It reveals nanoscale diffusion characteristics by measuring the molecular diffusion in conventional confocal mode and super-resolved STED mode sequentially for each pixel along the scanned line. However, to directly link the spatial and the temporal information, a method that simultaneously measures the diffusion in confocal and STED modes is needed. Here, to overcome this problem, we establish an advanced STED-FCS measurement method, line interleaved excitation scanning STED-FCS (LIESS-FCS), that discloses the molecular diffusion modes at different spatial positions with a single measurement. It relies on fast beam-scanning along a line with alternating laser illumination that yields, for each pixel, the apparent diffusion coefficients for two different observation spot sizes (conventional confocal and super-resolved STED). We demonstrate the potential of the LIESS-FCS approach with simulations and experiments on lipid diffusion in model and live cell plasma membranes. We also apply LIESS-FCS to investigate the spatiotemporal organization of glycosylphosphatidylinositol-anchored proteins in the plasma membrane of live cells, which, interestingly, show multiple diffusion modes at different spatial positions.

  15. Model-free uncertainty estimation in stochastical optical fluctuation imaging (SOFI) leads to a doubled temporal resolution

    PubMed Central

    Vandenberg, Wim; Duwé, Sam; Leutenegger, Marcel; Moeyaert, Benjamien; Krajnik, Bartosz; Lasser, Theo; Dedecker, Peter

    2016-01-01

    Stochastic optical fluctuation imaging (SOFI) is a super-resolution fluorescence imaging technique that makes use of stochastic fluctuations in the emission of the fluorophores. During a SOFI measurement, multiple fluorescence images are acquired from the sample, followed by the calculation of the spatiotemporal cumulants of the intensities observed at each position. Compared to other techniques, SOFI works well under conditions of low signal-to-noise ratio, high background, or high emitter densities. However, it can be difficult to unambiguously determine the reliability of images produced by any super-resolution imaging technique. In this work we present a strategy that enables the estimation of the variance or uncertainty associated with each pixel in the SOFI image. In addition to estimating the image quality or reliability, we show that this can be used to optimize the signal-to-noise ratio (SNR) of SOFI images by including multiple pixel combinations in the cumulant calculation. We present an algorithm to perform this optimization, which automatically takes all relevant instrumental, sample, and probe parameters into account. Depending on the optical magnification of the system, this strategy can be used to improve the SNR of a SOFI image by 40% to 90%. This gain in information is entirely free, in the sense that it does not require additional efforts or complications. Alternatively, our approach can be applied to reduce the number of fluorescence images required to meet a particular quality level by about 30% to 50%, strongly improving the temporal resolution of SOFI imaging. PMID:26977356
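    The core cumulant computation is simple at second order: the auto-cumulant of each pixel's intensity trace is just its temporal variance, which is large only where genuine blinking occurs. A toy sketch of that basic principle (not the paper's optimized multi-pixel-combination estimator; the emitter geometry and intensities are made up):

```python
import numpy as np

def sofi2(stack):
    """Second-order auto-cumulant SOFI image from a movie stack of
    shape (frames, height, width): the per-pixel temporal variance
    of the intensity fluctuations."""
    stack = np.asarray(stack, dtype=float)
    delta = stack - stack.mean(axis=0)   # fluctuations about the mean
    return (delta ** 2).mean(axis=0)     # 2nd-order cumulant = variance

rng = np.random.default_rng(0)
# One blinking emitter at (8, 8) over a constant background:
frames = np.full((200, 16, 16), 10.0)
on = rng.random(200) < 0.5               # stochastic on/off switching
frames[on, 8, 8] += 50.0
img = sofi2(frames + rng.normal(0, 1, frames.shape))
print(img[8, 8] > img[0, 0])  # True: blinking dominates the noise floor
```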

  16. Image super-resolution via sparse representation.

    PubMed

    Yang, Jianchao; Wright, John; Huang, Thomas S; Ma, Yi

    2010-11-01

    This paper presents a new approach to single-image super-resolution, based on sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low resolution and high resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low resolution image patch can be applied with the high resolution image patch dictionary to generate a high resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle super-resolution with noisy inputs in a more unified framework.
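    The patch-wise reconstruction can be sketched with coupled dictionaries: a sparse code computed against the low-resolution dictionary is applied to the high-resolution dictionary. In the sketch below the toy dictionaries are random stand-ins (in the paper both are learned jointly from patch pairs), and a greedy orthogonal matching pursuit replaces the paper's sparse-coding solver:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse code of y over D."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[support] = coef
    return alpha

rng = np.random.default_rng(1)
Dh = rng.normal(size=(16, 40))                  # high-res patch atoms (4x4)
Dl = Dh[::2] + 0.01 * rng.normal(size=(8, 40))  # degraded low-res versions
Dl /= np.linalg.norm(Dl, axis=0)

y_low = 2.0 * Dl[:, 3] - 1.0 * Dl[:, 17]        # a low-res input patch
alpha = omp(Dl, y_low, n_nonzero=2)             # sparse code w.r.t. Dl
y_high = Dh @ alpha                             # transfer the code to Dh
print(y_high.shape)
```

    The key assumption, as in the paper, is that a low-resolution patch and its high-resolution counterpart share the same sparse code with respect to their respective dictionaries.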

  17. Further developments of 8μm pitch MCT pixels at Finmeccanica (formerly Selex ES)

    NASA Astrophysics Data System (ADS)

    Jeckells, David; McEwen, R. Kennedy; Bains, Sudesh; Herbert, Martin

    2016-05-01

    Finmeccanica (formerly Selex ES) introduced high performance mercury cadmium telluride (MCT) infrared detectors on an 8μm pitch in 2015 with their SuperHawk device which builds on standard production processes already used for the manufacture of 24μm, 20μm, 16μm and 12μm pitch devices. The flexibility of the proprietary Finmeccanica designed diode structure, used in conjunction with the mature production Metal Organic Vapour Phase Epitaxy (MOVPE) MCT growth process at Finmeccanica, enables fine control of diode electrical and optical structure including free choice of cut-off wavelength. The mesa pixel design inherently provides major system performance benefits by reducing blurring mechanisms, including optical scattering, inter-pixel cross-talk and carrier diffusion, to negligible levels. The SuperHawk detector has demonstrated unrivalled MTF and NETD performance, even when operating at temperatures in excess of 120K. The SuperHawk Integrated Detector Cooler Assembly (IDCA) benefits from recent dewar developments at Finmeccanica, which have improved thermal efficiencies while maintaining mechanical integrity over a wide range of applications, enabling use of smaller cryo-coolers to reduce system SWAP-C. Performance and qualification results are presented together with example imagery. SuperHawk provides an easy high resolution upgrade for systems currently based on standard definition 16μm and 15μm infrared detector formats. The paper also addresses further work to increase the operating temperature of the established 8μm process, exploiting High Operating Temperature (HOT) MCT at Finmeccanica, as well as options for LWIR variants of the SuperHawk device.

  18. Evaluating an image-fusion algorithm with synthetic-image-generation tools

    NASA Astrophysics Data System (ADS)

    Gross, Harry N.; Schott, John R.

    1996-06-01

    An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high-spatial-resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low-resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low-resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high-resolution pixels. The constraints mirror the low-resolution constraints and maintain consistency with the low-resolution fraction results. The algorithm can use one or more higher-resolution sharpening images to locate the endmembers with high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG-developed image can be used to control the various error sources that are likely to impair the algorithm's performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high-resolution sharpening images will produce a higher spatial resolution land cover or material map.
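    The partially constrained mixing step described above has a closed form: solve the unconstrained least squares, then apply a Lagrange-multiplier correction so the fractions sum to one. A numpy sketch with made-up endmember spectra (the fully constrained variant additionally restricts fractions to [0, 1] and generally needs an iterative solver):

```python
import numpy as np

def unmix(E, pixel, sum_to_one=True):
    """Linear spectral unmixing of one low-resolution pixel.

    E: (bands, endmembers) matrix of endmember spectra.
    Returns endmember fractions; with sum_to_one=True the unconstrained
    solution is corrected via a Lagrange multiplier so fractions sum
    to one (the 'partially constrained' model)."""
    f, *_ = np.linalg.lstsq(E, pixel, rcond=None)   # unconstrained
    if sum_to_one:
        C = np.linalg.inv(E.T @ E)
        ones = np.ones(E.shape[1])
        f = f + C @ ones * (1.0 - ones @ f) / (ones @ C @ ones)
    return f

# Three synthetic endmembers over 6 bands; a mixed pixel at 50/30/20%.
rng = np.random.default_rng(2)
E = rng.random((6, 3))
truth = np.array([0.5, 0.3, 0.2])
f = unmix(E, E @ truth)
print(np.round(f, 3))  # [0.5 0.3 0.2]
```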

  19. LAI inversion algorithm based on directional reflectance kernels.

    PubMed

    Tang, S; Chen, J M; Zhu, Q; Li, X; Chen, M; Sun, R; Zhou, Y; Deng, F; Xie, D

    2007-11-01

    Leaf area index (LAI) is an important ecological and environmental parameter. A new LAI algorithm is developed using the principles of ground LAI measurement based on canopy gap fraction. First, the relationship between LAI and gap fraction at various zenith angles is derived from the definition of LAI. Then, the directional gap fraction is acquired from a remote sensing bidirectional reflectance distribution function (BRDF) product, using a kernel-driven model and a large-scale directional gap fraction algorithm. The algorithm has been applied to estimate an LAI distribution in China in mid-July 2002. Ground data acquired from two field experiments, in Changbai Mountain and Qilian Mountain, were used to validate the algorithm. To resolve the scale discrepancy between high-resolution ground observations and low-resolution remote sensing data, two TM images with a resolution approaching the size of the ground plots were used to relate the coarse-resolution LAI map to the ground measurements. First, an empirical relationship between the measured LAI and a vegetation index was established. Next, a high-resolution LAI map was generated using this relationship. The LAI value of a low-resolution pixel was then calculated as the area-weighted sum of the high-resolution LAIs composing that pixel. The results of this comparison showed that the inversion algorithm has an accuracy of 82%. Factors that may influence the accuracy are also discussed in this paper.
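    The scale-matching step above, in which a low-resolution pixel's LAI is the area-weighted sum of the high-resolution LAIs it contains, reduces to a block mean when all high-resolution pixels have equal area. A minimal sketch (block size and values are illustrative):

```python
import numpy as np

def aggregate_lai(lai_hi, block):
    """Area-weighted aggregation: each low-resolution pixel's LAI is
    the mean of the block x block high-resolution LAIs it covers
    (equal-area high-resolution pixels assumed)."""
    h, w = lai_hi.shape
    assert h % block == 0 and w % block == 0
    return lai_hi.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

hi = np.arange(16, dtype=float).reshape(4, 4)   # synthetic TM-scale LAI map
lo = aggregate_lai(hi, 2)
print(lo)  # [[ 2.5  4.5]
           #  [10.5 12.5]]
```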

  20. Fast and efficient molecule detection in localization-based super-resolution microscopy by parallel adaptive histogram equalization.

    PubMed

    Li, Yiming; Ishitsuka, Yuji; Hedde, Per Niklas; Nienhaus, G Ulrich

    2013-06-25

    In localization-based super-resolution microscopy, individual fluorescent markers are stochastically photoactivated and subsequently localized within a series of camera frames, yielding a final image with a resolution far beyond the diffraction limit. Yet, before localization can be performed, the subregions within the frames where the individual molecules are present have to be identified, oftentimes in the presence of high background. In this work, we address the importance of reliable molecule identification for the quality of the final reconstructed super-resolution image. We present a fast and robust algorithm (a-livePALM) that vastly improves the molecule detection efficiency while minimizing false assignments that can lead to image artifacts.

  1. Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ

    PubMed Central

    Müller, Marcel; Mönkemöller, Viola; Hennig, Simon; Hübner, Wolfgang; Huser, Thomas

    2016-01-01

    Super-resolved structured illumination microscopy (SR-SIM) is an important tool for fluorescence microscopy. SR-SIM microscopes perform multiple image acquisitions with varying illumination patterns and reconstruct them into a super-resolved image. In its most frequent, linear implementation, SR-SIM doubles the spatial resolution. The reconstruction is performed numerically on the acquired wide-field image data, and thus relies on a software implementation of specific SR-SIM image reconstruction algorithms. We present fairSIM, an easy-to-use plugin that provides SR-SIM reconstructions for a wide range of SR-SIM platforms directly within ImageJ. For research groups developing their own implementations of super-resolution structured illumination microscopy, fairSIM takes away the hurdle of generating yet another implementation of the reconstruction algorithm. For users of commercial microscopes, it offers an additional, in-depth analysis option for their data, independent of specific operating systems. As a modular, open-source solution, fairSIM can easily be adapted, automated and extended as the field of SR-SIM progresses. PMID:26996201

  2. Local structure-based image decomposition for feature extraction with applications to face recognition.

    PubMed

    Qian, Jianjun; Yang, Jian; Xu, Yong

    2013-09-01

    This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
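    The local representation step of IDLS can be sketched as a small ridge regression per pixel: stack the neighbours' macro-pixels as columns and solve for the coefficients that reconstruct the central macro-pixel. The patch size, neighbourhood, and regularization strength below are illustrative, not the paper's settings:

```python
import numpy as np

def local_coeffs(center_patch, neighbor_patches, lam=0.1):
    """Represent the central macro-pixel (patch) as a linear combination
    of its neighbours' patches via ridge regression; returns the
    representation coefficients."""
    X = np.column_stack([p.ravel() for p in neighbor_patches])
    y = center_patch.ravel()
    k = X.shape[1]
    # Closed-form ridge solution: (X^T X + lam I)^-1 X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

rng = np.random.default_rng(3)
neighbors = [rng.random((3, 3)) for _ in range(8)]   # 8-neighbourhood
center = 0.6 * neighbors[0] + 0.4 * neighbors[5]     # locally linear patch
w = local_coeffs(center, neighbors, lam=1e-6)
print(np.round(w[[0, 5]], 2))  # [0.6 0.4]
```

    Collecting such coefficient vectors over the image yields the structure images that are then down-sampled and concatenated into the super-vector.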

  3. Super-resolution processing for multi-functional LPI waveforms

    NASA Astrophysics Data System (ADS)

    Li, Zhengzheng; Zhang, Yan; Wang, Shang; Cai, Jingxiao

    2014-05-01

    Super-resolution (SR) is a radar processing technique closely related to pulse compression (or the correlation receiver). Many super-resolution algorithms have been developed for improved range resolution and reduced sidelobe contamination. Traditionally, the waveforms used for SR have been either phase-coded (such as the LKP3 code or Barker code) or frequency modulated (chirp, or nonlinear frequency modulation). There is, however, an important class of waveforms that are either random in nature (such as the random noise waveform) or randomly modulated for multi-function operation (such as the ADS-B radar signals in [1]). These waveforms have the advantage of low probability of intercept (LPI). If the existing SR techniques can be applied to these waveforms, there will be much more flexibility for using them in actual sensing missions. Also, SR usually has the great advantage that the final output (as an estimate of the ground truth) is largely independent of the waveform. Such benefits are attractive to many important primary radar applications. In this paper, a general introduction to SR algorithms is provided first, and some implementation considerations are discussed. The selected algorithms are then applied to typical LPI waveforms, and the results are discussed. It is observed that SR algorithms can be reliably used for LPI waveforms; on the other hand, practical considerations should be kept in mind in order to obtain optimal estimation results.

  4. Microstructural analysis of aluminum high pressure die castings

    NASA Astrophysics Data System (ADS)

    David, Maria Diana

    Microstructural analysis of aluminum high pressure die castings (HPDC) is challenging and time consuming. Automating the stereology method is an efficient way of obtaining quantitative data; however, validating the accuracy of this technique can also pose some challenges. In this research, a semi-automated algorithm to quantify microstructural features in aluminum HPDC was developed. Analysis was done near the casting surface, where the microstructure is fine. Optical images and secondary electron (SE) and backscattered electron (BSE) SEM images were taken to characterize the features in the casting. Image processing steps applied to the SEM and optical micrographs included median and range filters, dilation, erosion, and a hole-closing function. Measurements were done at image pixel resolutions ranging from 3 to 35 pixels/μm. Pixel resolutions below 6 px/μm were too low for the algorithm to distinguish the phases from each other. At resolutions higher than 6 px/μm, the volume fraction of primary α-Al and the line intercept count curves plateaued. Within this range, comparable results were obtained, validating the assumption that there is a range of image pixel resolution, relative to the size of the casting features, at which stereology measurements become independent of the image resolution. The volume fraction within this curve plateau was consistent with the manual measurements, while the line intercept count was significantly higher with the computerized technique at all resolutions. This was attributed to the ragged edges of some primary α-Al; hence, the algorithm still needs some improvement. Further validation of the code using other castings or alloys with known phase amounts and sizes may also be beneficial.

  5. Iterative algorithm for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution

    NASA Astrophysics Data System (ADS)

    Quan, Haiyang; Wu, Fan; Hou, Xi

    2015-10-01

    A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified Successive Over-Relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy, as has been verified by real experimental results.
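    The acceleration described above can be illustrated on a generic linear system: with relaxation factor ω = 1 the sweep is exactly Gauss-Seidel, while a well-chosen ω between 1 and 2 converges in far fewer sweeps. A sketch on a 1-D Poisson test matrix (the matrix and ω are illustrative, not the paper's surface-reconstruction system):

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b.

    omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 over-relaxes
    each update and can greatly accelerate convergence."""
    x = np.zeros_like(b, dtype=float)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it + 1
    return x, max_iter

# 1-D Poisson system: the over-relaxed solve needs far fewer sweeps.
n = 30
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_gs, it_gs = sor(A, b, omega=1.0)     # Gauss-Seidel
x_sor, it_sor = sor(A, b, omega=1.82)  # near-optimal relaxation factor
print(it_sor < it_gs)  # True
```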

  6. Comparing the imaging performance of computed super resolution and magnification tomosynthesis

    NASA Astrophysics Data System (ADS)

    Maidment, Tristan D.; Vent, Trevor L.; Ferris, William S.; Wurtele, David E.; Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2017-03-01

    Computed super-resolution (SR) is a method of reconstructing images with pixels that are smaller than the detector element size; superior spatial resolution is achieved through the elimination of aliasing and alteration of the sampling function imposed by the reconstructed pixel aperture. By comparison, magnification mammography is a method of projection imaging that uses geometric magnification to increase spatial resolution. This study explores the development and application of magnification digital breast tomosynthesis (MDBT). Four different acquisition geometries are compared in terms of various image metrics. High-contrast spatial resolution was measured in various axes using a lead star pattern. A modified Defrise phantom was used to determine the low-frequency spatial resolution. An anthropomorphic phantom was used to simulate clinical imaging. Each experiment was conducted at three different magnifications: contact (1.04x), MAG1 (1.3x), and MAG2 (1.6x). All images were taken on our next generation tomosynthesis system, an in-house solution designed to optimize SR. It is demonstrated that both computed SR and MDBT (MAG1 and MAG2) provide improved spatial resolution over non-SR contact imaging. To achieve the highest resolution, SR and MDBT should be combined. However, MDBT is adversely affected by patient motion at higher magnifications. In addition, MDBT requires more radiation dose and delays diagnosis, since MDBT would be conducted upon recall. By comparison, SR can be conducted with the original screening data. In conclusion, this study demonstrates that computed SR and MDBT are both viable methods of imaging the breast.

  7. Point target detection utilizing super-resolution strategy for infrared scanning oversampling system

    NASA Astrophysics Data System (ADS)

    Wang, Longguang; Lin, Zaiping; Deng, Xinpu; An, Wei

    2017-11-01

    To improve the resolution of remote sensing infrared images, an infrared scanning oversampling system is employed, quadrupling the amount of information and aiding target detection. Generally, the image data from the double-line detector of an infrared scanning oversampling system are shuffled into a whole oversampled image for post-processing, but aliasing between neighboring pixels degrades the image, with a great impact on target detection. This paper formulates a point target detection method utilizing a super-resolution (SR) strategy for infrared scanning oversampling systems, with an accelerated SR strategy proposed to realize fast de-aliasing of the oversampled image and an adaptive MRF-based regularization designed to preserve and aggregate target energy. Extensive experiments demonstrate the superior detection performance, robustness and efficiency of the proposed method compared with other state-of-the-art approaches.

  8. A multi-emitter fitting algorithm for potential live cell super-resolution imaging over a wide range of molecular densities.

    PubMed

    Takeshima, T; Takahashi, T; Yamashita, J; Okada, Y; Watanabe, S

    2018-05-25

    Multi-emitter fitting algorithms have been developed to improve the temporal resolution of single-molecule switching nanoscopy, but the molecular density range they can analyse is narrow and the computation required is intensive, significantly limiting their practical application. Here, we propose a computationally fast method, wedged template matching (WTM), an algorithm that uses a template matching technique to localise molecules at any overlapping molecular density, from sparse to ultrahigh density, with subdiffraction resolution. WTM achieves the localization of overlapping molecules at densities up to 600 molecules μm⁻² with high detection sensitivity and fast computational speed. WTM also shows localization precision comparable with that of DAOSTORM (an algorithm for high-density super-resolution microscopy) at densities up to 20 molecules μm⁻², and better than DAOSTORM at higher molecular densities. The application of WTM to a high-density biological sample image demonstrated that it resolved protein dynamics from live cell images with subdiffraction resolution and a temporal resolution of several hundred milliseconds or less, through a significant reduction in the number of camera images required for a high-density reconstruction. The WTM algorithm is a computationally fast multi-emitter fitting algorithm that can analyse a wide range of molecular densities. The algorithm is available through the website: https://doi.org/10.17632/bf3z6xpn5j.1. © 2018 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.

  9. Particle tracking and extended object imaging by interferometric super resolution microscopy

    NASA Astrophysics Data System (ADS)

    Gdor, Itay; Yoo, Seunghwan; Wang, Xiaolei; Daddysman, Matthew; Wilton, Rosemarie; Ferrier, Nicola; Hereld, Mark; Cossairt, Oliver (Ollie); Katsaggelos, Aggelos; Scherer, Norbert F.

    2018-02-01

    An interferometric fluorescence microscope and a novel theoretical image reconstruction approach were developed and used to obtain super-resolution images of live biological samples and to enable dynamic real-time tracking. The tracking utilizes the information stored in the interference patterns of both the illuminating incoherent light and the emitted light. By periodically shifting the interferometer phase and applying a phase retrieval algorithm, we obtain information that allows localization with sub-2 nm axial resolution at 5 Hz.

  10. Enhancing Deep-Water Low-Resolution Gridded Bathymetry Using Single Image Super-Resolution

    NASA Astrophysics Data System (ADS)

    Elmore, P. A.; Nock, K.; Bonanno, D.; Smith, L.; Ferrini, V. L.; Petry, F. E.

    2017-12-01

    We present research that employs single-image super-resolution (SISR) algorithms to enhance knowledge of the seafloor using the 1-minute GEBCO 2014 grid when 100 m grids from high-resolution sonar systems are available for training. We performed numerical ×15 upscaling experiments on the GEBCO grid in three areas of the Eastern Pacific Ocean along mid-ocean ridge systems where we have these 100 m gridded bathymetry data sets, which we accept as ground truth. We show that four SISR algorithms can enhance this low-resolution knowledge of bathymetry versus bicubic or Spline-In-Tension algorithms through upscaling under these conditions: 1) rough topography is present in both training and testing areas and 2) the range of depths and features in the training area contains the range of depths in the enhancement area. We judged SISR enhancement successful versus bicubic interpolation when Student's t-tests showed significant improvement of the root-mean-square error (RMSE) between upscaled bathymetry and 100 m gridded ground-truth bathymetry at p < 0.05. In addition, we found evidence that random-forest-based SISR methods may provide more robust enhancements than non-forest-based SISR algorithms.
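
    The evaluation described (RMSE against ground truth, judged significant at p < 0.05) can be sketched as below; the paired-sample layout and helper names are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy import stats

def rmse(pred, truth):
    """Root-mean-square error between an upscaled grid and ground truth."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def significantly_better(errors_a, errors_b, alpha=0.05):
    """Paired Student's t-test over matched test tiles: is method A's
    error significantly lower than method B's at level alpha?"""
    t, p = stats.ttest_rel(errors_a, errors_b)
    return bool(t < 0 and p < alpha)
```
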

  11. Cygnus A super-resolved via convex optimization from VLA data

    NASA Astrophysics Data System (ADS)

    Dabbech, A.; Onose, A.; Abdulaziz, A.; Perley, R. A.; Smirnov, O. M.; Wiaux, Y.

    2018-05-01

    We leverage the Sparsity Averaging Re-weighted Analysis approach for interferometric imaging, which is based on convex optimization, for the super-resolution of Cyg A from observations at the frequencies 8.422 and 6.678 GHz with the Karl G. Jansky Very Large Array (VLA). The associated average sparsity and positivity priors enable image reconstruction beyond instrumental resolution. An adaptive preconditioned primal-dual algorithmic structure is developed for imaging in the presence of unknown noise levels and calibration errors. We demonstrate the superior performance of the algorithm with respect to conventional CLEAN-based methods, reflected in super-resolved images with high fidelity. The high-resolution features of the recovered images are validated by referring to maps of Cyg A at higher frequencies, more precisely 17.324 and 14.252 GHz. We also confirm the recent discovery of a radio transient in Cyg A, revealed in the recovered images of the investigated data sets. Our MATLAB code is available online on GitHub.

  12. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
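
    The dictionary is trained on features from four wavelet subbands; the record does not name the wavelet used, so as an illustration here is a single-level 2-D Haar split into the approximation band plus horizontal, vertical and diagonal detail, together with its exact inverse:

```python
import numpy as np

def haar_subbands(img):
    """One level of a 2-D Haar transform: (LL, LH, HL, HH) subbands.
    img must have even dimensions."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0          # low-frequency approximation
    lh = (a + b - c - d) / 2.0          # detail along rows (horizontal edges)
    hl = (a - b + c - d) / 2.0          # detail along columns (vertical edges)
    hh = (a - b - c + d) / 2.0          # diagonal detail
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Exact inverse of haar_subbands (orthonormal Haar, same /2 scaling)."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll + lh - hl - hh) / 2.0
    c = (ll - lh + hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    out = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2, 0::2] = a; out[0::2, 1::2] = b
    out[1::2, 0::2] = c; out[1::2, 1::2] = d
    return out
```

In the paper's pipeline the sparse dictionary would be trained on patches drawn from these four subbands rather than on raw image patches.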

  13. Hierarchical Object-based Image Analysis approach for classification of sub-meter multispectral imagery in Tanzania

    NASA Astrophysics Data System (ADS)

    Chung, C.; Nagol, J. R.; Tao, X.; Anand, A.; Dempewolf, J.

    2015-12-01

    Increasing agricultural production while at the same time preserving the environment has become a challenging task. There is a need for new approaches to the use of multi-scale and multi-source remote sensing data, as well as ground-based measurements, for mapping and monitoring crop and ecosystem state to support decision making by governmental and non-governmental organizations for sustainable agricultural development. High-resolution sub-meter imagery plays an important role in such an integrative framework of landscape monitoring. It helps link the ground-based data to more easily available coarser resolution data, facilitating calibration and validation of derived remote sensing products. Here we present a hierarchical Object-Based Image Analysis (OBIA) approach to classify sub-meter imagery. The primary reason for choosing OBIA is to accommodate pixel sizes smaller than the object or class of interest. Especially in the non-homogeneous savannah regions of Tanzania, this is an important concern, and the traditional pixel-based spectral signature approach often fails. Ortho-rectified, calibrated, pan-sharpened 0.5 m resolution data acquired from DigitalGlobe's WorldView-2 satellite sensor were used for this purpose. Multi-scale hierarchical segmentation was performed using a multi-resolution segmentation approach to facilitate the use of texture, neighborhood context, and the relationship between super- and sub-objects for training and classification. eCognition, a commonly used OBIA software program, was used for this purpose. Both decision tree and random forest approaches to classification were tested. The Kappa index of agreement for both algorithms surpassed 85%. The results demonstrate that hierarchical OBIA can effectively and accurately discriminate classes even at the LCCS-3 legend level.

  14. Super-resolution reconstruction for 4D computed tomography of the lung via the projections onto convex sets approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yu, E-mail: yuzhang@smu.edu.cn, E-mail: qianjinfeng08@gmail.com; Wu, Xiuxiu; Yang, Wei

    2014-11-01

    Purpose: The use of 4D computed tomography (4D-CT) of the lung is important in lung cancer radiotherapy for tumor localization and treatment planning. Sometimes, dense sampling is not acquired along the superior–inferior direction. This disadvantage results in an interslice thickness that is much greater than in-plane voxel resolutions. Isotropic resolution is necessary for multiplanar display, but the commonly used interpolation operation blurs images. This paper presents a super-resolution (SR) reconstruction method to enhance 4D-CT resolution. Methods: The authors assume that the low-resolution images of different phases at the same position can be regarded as input “frames” to reconstruct high-resolution images. The SR technique is used to recover high-resolution images. Specifically, the Demons deformable registration algorithm is used to estimate the motion field between different “frames.” Then, the projection onto convex sets approach is implemented to reconstruct high-resolution lung images. Results: The performance of the SR algorithm is evaluated using both simulated and real datasets. Their method can generate clearer lung images and enhance image structure compared with cubic spline interpolation and the back projection (BP) method. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 40.8% relative to cubic spline interpolation and 10.2% versus BP. Conclusions: A new algorithm has been developed to improve the resolution of 4D-CT. The algorithm outperforms the cubic spline interpolation and BP approaches by producing images with markedly improved structural clarity and greatly reduced artifacts.
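
    Leaving aside the Demons motion estimation, the data-consistency projection at the heart of a POCS-style reconstruction can be sketched with a toy block-averaging acquisition model (the operator, sizes, and iteration count are illustrative assumptions, not the paper's forward model):

```python
import numpy as np

def downsample(hr, f):
    """Toy acquisition operator: f-by-f block averaging."""
    h, w = hr.shape
    return hr.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def pocs_sr(lr, f, n_iter=20):
    """Repeatedly project an HR estimate onto the data-consistency set,
    i.e. correct it so its block average reproduces the LR observation."""
    hr = np.kron(lr, np.ones((f, f)))              # nearest-neighbour initial guess
    for _ in range(n_iter):
        residual = lr - downsample(hr, f)          # LR-domain mismatch
        hr = hr + np.kron(residual, np.ones((f, f)))  # back-project the residual
    return hr
```

A full multi-frame POCS method would fuse several motion-compensated LR “frames,” each contributing its own consistency constraint.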

  15. Automatic Organ Segmentation for CT Scans Based on Super-Pixel and Convolutional Neural Networks.

    PubMed

    Liu, Xiaoming; Guo, Shuxu; Yang, Bingtao; Ma, Shuzhi; Zhang, Huimao; Li, Jing; Sun, Changjian; Jin, Lanyi; Li, Xueyan; Yang, Qi; Fu, Yu

    2018-04-20

    Accurate segmentation of specific organs from computed tomography (CT) scans is a basic and crucial task for accurate diagnosis and treatment. To avoid time-consuming manual optimization and to help physicians distinguish diseases, an automatic organ segmentation framework is presented. The framework utilizes convolutional neural networks (CNNs) to classify pixels. To reduce the redundant inputs, the simple linear iterative clustering (SLIC) of super-pixels and the support vector machine (SVM) classifier are introduced. To establish a precise boundary of organs at the single-pixel level, the pixels need to be classified step by step. First, SLIC is used to cut an image into grids and extract their digital signatures. Next, the signatures are classified by the SVM, and rough edges are acquired. Finally, a precise boundary is obtained by the CNN, which operates on patches around each pixel point. The framework is applied to abdominal CT scans of livers and high-resolution computed tomography (HRCT) scans of lungs. The experimental CT scans are derived from two public datasets (Sliver07 and a Chinese local dataset). Experimental results show that the proposed method can precisely and efficiently detect the organs. This method consumes 38 s/slice for liver segmentation. The Dice coefficient of the liver segmentation results reaches 97.43%. For lung segmentation, the Dice coefficient is 97.93%. This finding demonstrates that the proposed framework is a favorable method for lung segmentation of HRCT scans.

  16. Photon-efficient super-resolution laser radar

    NASA Astrophysics Data System (ADS)

    Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.

    2017-08-01

    The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.

  17. Scene-based nonuniformity correction and enhancement: pixel statistics and subpixel motion.

    PubMed

    Zhao, Wenyi; Zhang, Chao

    2008-07-01

    We propose a framework for scene-based nonuniformity correction (NUC) and nonuniformity correction and enhancement (NUCE) that is required for focal-plane array-like sensors to obtain clean and enhanced-quality images. The core of the proposed framework is a novel registration-based nonuniformity correction super-resolution (NUCSR) method that is bootstrapped by statistical scene-based NUC methods. Based on a comprehensive imaging model and an accurate parametric motion estimation, we are able to remove severe/structured nonuniformity and, in the presence of subpixel motion, simultaneously improve image resolution. One important feature of our NUCSR method is the adoption of a parametric motion model that allows us to (1) handle many practical scenarios where parametric motions are present and (2) carry out perfect super-resolution in principle by exploiting available subpixel motions. Experiments with real data demonstrate the efficiency of the proposed NUCE framework and the effectiveness of the NUCSR method.
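
    The scene-based NUCSR method itself requires motion estimation and cannot be reproduced from the abstract. For contrast, the classic calibration-based two-point correction that scene-based NUC aims to replace is easy to sketch (the calibration-frame model here is the textbook one, not this paper's):

```python
import numpy as np

def two_point_nuc(raw, dark, flat):
    """Classic two-point nonuniformity correction from calibration frames.

    Assumes each pixel responds as raw = gain*scene + offset; the dark frame
    gives the offset, and a uniform flat-field frame gives per-pixel gain,
    normalised so the corrected image keeps the mean response.
    """
    gain = np.mean(flat - dark) / (flat - dark)
    return gain * (raw - dark)
```

Scene-based methods such as the one in this record estimate equivalent gain/offset maps from image sequences alone, without dark or flat calibration frames.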

  18. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
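
    The core of the method is the Viterbi algorithm run over a Markov chain whose states are interpolation functions. A generic log-domain Viterbi is sketched below; the state semantics and scores are placeholders, not the paper's trellis:

```python
import numpy as np

def viterbi(log_prior, log_trans, log_emit):
    """Most probable state sequence of a Markov chain.

    log_prior: (S,) initial log-probabilities
    log_trans: (S, S) log transition matrix, row = from-state
    log_emit:  (T, S) per-step log-likelihood of each state
    Returns the optimal state index sequence of length T.
    """
    n_steps, n_states = log_emit.shape
    score = log_prior + log_emit[0]
    back = np.zeros((n_steps, n_states), dtype=int)
    for t in range(1, n_steps):
        cand = score[:, None] + log_trans            # cand[i, j]: come from i into j
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(n_states)] + log_emit[t]
    path = [int(np.argmax(score))]
    for t in range(n_steps - 1, 0, -1):              # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

In the paper, step t corresponds to a missing HR pixel position and state j to choosing the j-th (e.g. directional) interpolation function there.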

  19. Efficient phase contrast imaging in STEM using a pixelated detector. Part 1: Experimental demonstration at atomic resolution

    DOE PAGES

    Pennycook, Timothy J.; Lupini, Andrew R.; Yang, Hao; ...

    2014-10-15

    In this paper, we demonstrate a method to achieve high efficiency phase contrast imaging in aberration corrected scanning transmission electron microscopy (STEM) with a pixelated detector. The pixelated detector is used to record the Ronchigram as a function of probe position which is then analyzed with ptychography. Ptychography has previously been used to provide super-resolution beyond the diffraction limit of the optics, alongside numerically correcting for spherical aberration. Here we rely on a hardware aberration corrector to eliminate aberrations, but use the pixelated detector data set to utilize the largest possible volume of Fourier space to create high efficiency phase contrast images. The use of ptychography to diagnose the effects of chromatic aberration is also demonstrated. In conclusion, the four dimensional dataset is used to compare different bright field detector configurations from the same scan for a sample of bilayer graphene. Our method of high efficiency ptychography produces the clearest images, while annular bright field produces almost no contrast for an in-focus aberration-corrected probe.

  20. Construction of pixel-level resolution DEMs from monocular images by shape and albedo from shading constrained with low-resolution DEM

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Liu, Wai Chung; Grumpe, Arne; Wöhler, Christian

    2018-06-01

    Lunar Digital Elevation Models (DEMs) are important for successful lunar landing and exploration missions. Lunar DEMs are typically generated by photogrammetry or laser altimetry approaches. Photogrammetric methods require multiple stereo images of the region of interest and may not be applicable in cases where stereo coverage is not available. In contrast, reflectance-based shape reconstruction techniques, such as shape from shading (SfS) and shape and albedo from shading (SAfS), use monocular images to generate DEMs with pixel-level resolution. We present a novel hierarchical SAfS method that refines a lower-resolution DEM to pixel-level resolution given a monocular image with a known light source. We also estimate the corresponding pixel-wise albedo map in the process and use it to regularize the pixel-level shape reconstruction constrained by the low-resolution DEM. In this study, a Lunar-Lambertian reflectance model is applied to estimate the albedo map. Experiments were carried out using monocular images from the Lunar Reconnaissance Orbiter Narrow Angle Camera (LRO NAC), with a spatial resolution of 0.5-1.5 m per pixel, constrained by the Selenological and Engineering Explorer and LRO Elevation Model (SLDEM), with a spatial resolution of 60 m. The results indicate that local details are well recovered by the proposed algorithm, with plausible albedo estimation. The low-frequency topographic consistency depends on the quality of the low-resolution DEM and the resolution difference between the image and the low-resolution DEM.

  1. Calibration of the VENμS super-spectral camera

    NASA Astrophysics Data System (ADS)

    Topaz, Jeremy; Sprecher, Tuvia; Tinto, Francesc; Echeto, Pierre; Hagolle, Olivier

    2017-11-01

    A high-resolution super-spectral camera is being developed by Elbit Systems in Israel for the joint CNES-Israel Space Agency satellite, VENμS (Vegetation and Environment monitoring on a new Micro-Satellite). This camera will have 12 narrow spectral bands in the visible/NIR region and will give images with 5.3 m resolution from an altitude of 720 km, with an orbit which allows a two-day revisit interval for a number of selected sites distributed over some two-thirds of the earth's surface. The swath width will be 27 km at this altitude. To ensure the high radiometric and geometric accuracy needed to fully exploit such multiple data sampling, careful attention is given in the design to maximize characteristics such as signal-to-noise ratio (SNR), spectral band accuracy, stray light rejection, inter-band pixel-to-pixel registration, etc. For the same reasons, accurate calibration of all the principal characteristics is essential, and this presents some major challenges. The methods planned to achieve the required level of calibration are presented following a brief description of the system design. A fuller description of the system design is given in [2], [3] and [4].

  2. SuperHERO: Design of a New Hard X-Ray Focusing Telescope

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Elsner, Ronald; Ramsey, Brian; Wilson-Hodge, Colleen; Tennant, Allyn; Christe, Steven; Shih, Albert; Kiranmayee, Kilaru; Swartz, Douglas; Seller, Paul

    2015-01-01

    SuperHERO is a hard x-ray (20-75 keV) balloon-borne telescope, currently in its proposal phase, that will utilize high angular-resolution grazing-incidence optics, coupled to novel CdTe multi-pixel, fine-pitch (250 micrometers) detectors. The high-resolution electroformed-nickel, grazing-incidence optics were developed at MSFC, and the detectors were developed at the Rutherford Appleton Laboratory in the UK, and are being readied for flight at GSFC. SuperHERO will use two active pointing systems; one for carrying out astronomical observations and another for solar observations during the same flight. The telescope will reside on a light-weight, carbon-composite structure that will integrate the Wallops Arc Second Pointer into its frame, for arcsecond or better pointing. This configuration will allow for Long Duration Balloon flights that can last up to 4 weeks. This next generation design, which is based on the High Energy Replicated Optics (HERO) and HERO to Explore the Sun (HEROES) payloads, will be discussed, with emphasis on the core telescope components.

  3. Sparse super-resolution reconstructions of video from mobile devices in digital TV broadcast applications

    NASA Astrophysics Data System (ADS)

    Boon, Choong S.; Guleryuz, Onur G.; Kawahara, Toshiro; Suzuki, Yoshinori

    2006-08-01

    We consider the mobile service scenario where video programming is broadcast to low-resolution wireless terminals. In such a scenario, broadcasters utilize simultaneous data services and bi-directional communications capabilities of the terminals in order to offer substantially enriched viewing experiences to users by allowing user participation and user tuned content. While users immediately benefit from this service when using their phones in mobile environments, the service is less appealing in stationary environments where a regular television provides competing programming at much higher display resolutions. We propose a fast super-resolution technique that allows the mobile terminals to show a much enhanced version of the broadcast video on nearby high-resolution devices, extending the appeal and usefulness of the broadcast service. The proposed single frame super-resolution algorithm uses recent sparse recovery results to provide high quality and high-resolution video reconstructions based solely on individual decoded frames provided by the low-resolution broadcast.

  4. Micrometer-resolution imaging using MÖNCH: towards G2-less grating interferometry

    PubMed Central

    Cartier, Sebastian; Kagias, Matias; Bergamaschi, Anna; Wang, Zhentian; Dinapoli, Roberto; Mozzanica, Aldo; Ramilli, Marco; Schmitt, Bernd; Brückner, Martin; Fröjdh, Erik; Greiffenberg, Dominic; Mayilyan, Davit; Mezza, Davide; Redford, Sophie; Ruder, Christian; Schädler, Lukas; Shi, Xintian; Thattil, Dhanya; Tinti, Gemma; Zhang, Jiaguo; Stampanoni, Marco

    2016-01-01

    MÖNCH is a 25 µm-pitch charge-integrating detector aimed at exploring the limits of current hybrid silicon detector technology. The small pixel size makes it ideal for high-resolution imaging. With an electronic noise of about 110 eV r.m.s., it opens new perspectives for many synchrotron applications where currently the detector is the limiting factor, e.g. inelastic X-ray scattering, Laue diffraction and soft X-ray or high-resolution color imaging. Due to the small pixel pitch, the charge cloud generated by absorbed X-rays is shared between neighboring pixels for most of the photons. Therefore, at low photon fluxes, interpolation algorithms can be applied to determine the absorption position of each photon with a resolution of the order of 1 µm. In this work, the characterization results of one of the MÖNCH prototypes are presented under low-flux conditions. A custom interpolation algorithm is described and applied to the data to obtain high-resolution images. Images obtained in grating interferometry experiments without the use of the absorption grating G2 are shown and discussed. Perspectives for the future developments of the MÖNCH detector are also presented. PMID:27787252
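
    The custom interpolation algorithm is not specified in the abstract. The simplest sub-pixel position estimate from charge sharing is a centre-of-gravity over the hit cluster, sketched here as a stand-in (real detectors typically need an eta-style correction on top of this):

```python
import numpy as np

def center_of_gravity(cluster, top_left):
    """Sub-pixel hit position from charge shared over a small pixel cluster.

    cluster:  2-D array of charge values around a photon hit
    top_left: (row, col) pixel coordinates of cluster[0, 0]
    Returns the charge-weighted (row, col) position in pixel units.
    """
    rows, cols = np.indices(cluster.shape)
    q = cluster.sum()
    r = top_left[0] + (rows * cluster).sum() / q
    c = top_left[1] + (cols * cluster).sum() / q
    return r, c
```

At low flux, applying this per photon and histogramming the positions yields an image sampled finer than the 25 µm pixel pitch.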

  5. The Super-Linear Slope Of The Spatially-resolved Star Formation Law In NGC 3521 And NGC 5194 (m51a)

    NASA Astrophysics Data System (ADS)

    Liu, Guilin; Koda, J.; Calzetti, D.; Fukuhara, M.; Momose, R.

    2011-01-01

    We have conducted interferometric observations with CARMA and OTF mapping with the 45-m telescope at NRO in the CO (1-0) emission line of NGC 3521. Combining these new data with similar data for M51a and archival SINGS H-alpha, 24um, THINGS H I and GALEX FUV data for both galaxies, we investigate the empirical scaling law that connects the surface densities of star formation rate (SFR) and cold gas (the Schmidt-Kennicutt law) on a spatially-resolved basis, and find a super-linear slope when carefully subtracting the background emission in the SFR image. We argue that plausibly deriving SFR maps of nearby galaxies requires the diffuse stellar/dust background emission to be carefully subtracted (especially in the mid-IR). An approach to complete this task is presented and applied in our pixel-by-pixel analysis of both galaxies, showing that the controversy over whether the molecular S-K law is super-linear or basically linear comes down to removing or preserving the local background. In both galaxies, the power index of the molecular S-K law is super-linear (1.5-1.9) at the highest available resolution (230 pc), and decreases monotonically with decreasing resolution; the scatter (mainly intrinsic) increases as the resolution becomes higher, indicating a trend for which the S-K law breaks down below some scale. Both quantities are systematically larger in M51a than in NGC 3521, but when plotted against the de-projected scale, they become highly consistent between the two galaxies, tentatively suggesting that the sub-kpc molecular S-K law in spiral galaxies depends only on the scale being considered, without varying amongst spiral galaxies. By fitting to the M51a data we obtain slope = -1.1[log(scale/kpc)] + 1.4 and scatter = -0.2[scale/kpc] + 0.7, which describe both galaxies impressively well on sub-kpc scales. However, a larger sample of galaxies with better sensitivity, resolution and a broader FoV is required to test these results.
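
    The quoted S-K slope (1.5-1.9) comes from fitting Σ_SFR ∝ Σ_gas^N over pixels. A minimal log-log least-squares fit is sketched below; this is only the slope-fitting step, not the authors' background-subtraction pipeline:

```python
import numpy as np

def sk_slope(sigma_gas, sigma_sfr):
    """Fit log10(Sigma_SFR) = N*log10(Sigma_gas) + log10(A) pixel-by-pixel.
    Returns (N, log10_A): the power-law index and normalisation."""
    n, log_a = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)
    return float(n), float(log_a)
```
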

  6. SuperHERO: the next generation hard x-ray HEROES telescope

    NASA Astrophysics Data System (ADS)

    Gaskin, Jessica A.; Christe, Steven D.; Elsner, Ronald F.; Kilaru, Kiranmayee; Ramsey, Brian D.; Seller, Paul; Shih, Albert Y.; Stuchlik, David W.; Swartz, Douglas A.; Tennant, Allyn F.; Weddendorf, Bruce; Wilson, Matthew D.; Wilson-Hodge, Colleen A.

    2014-07-01

    SuperHERO is a new high-resolution, Long Duration Balloon-capable, hard-x-ray (20-75 keV) focusing telescope for making novel astrophysics and heliophysics observations. The SuperHERO payload, currently in its proposal phase, is being developed jointly by the Astrophysics Office at NASA Marshall Space Flight Center and the Solar Physics Laboratory and the Wallops Flight Facility at NASA Goddard Space Flight Center. SuperHERO is a follow-on payload to the High Energy Replicated Optics to Explore the Sun (HEROES) balloon-borne telescope that recently flew from Fort Sumner, NM in September of 2013, and will utilize many of the same features. Significant enhancements to the HEROES payload will be made, including the addition of optics, novel solid-state multi-pixel CdTe detectors, integration of the Wallops Arc-Second Pointer and a significantly lighter gondola suitable for Long Duration Flights.

  7. Single image super-resolution reconstruction algorithm based on edge selection

    NASA Astrophysics Data System (ADS)

    Zhang, Yaolan; Liu, Yijun

    2017-05-01

    Super-resolution (SR) has become more important because it can generate high-quality high-resolution (HR) images from low-resolution (LR) input images. At present, much work concentrates on developing sophisticated image priors to improve image quality, while paying much less attention to estimating and incorporating the blur model, which can also impact the reconstruction results. We present a new reconstruction method based on edge selection. This method takes full account of the factors that affect blur kernel estimation and accurately estimates the blur process. Compared with state-of-the-art methods, our method achieves comparable performance.

  8. Real-time bacterial microcolony counting using on-chip microscopy

    NASA Astrophysics Data System (ADS)

    Jung, Jae Hee; Lee, Jung Eun

    2016-02-01

    Observing microbial colonies is the standard method for determining the microbe titer and investigating the behaviors of microbes. Here, we report an automated, real-time bacterial microcolony-counting system implemented on a wide field-of-view (FOV), on-chip microscopy platform, termed ePetri. Using sub-pixel sweeping microscopy (SPSM) with a super-resolution algorithm, this system offers the ability to dynamically track individual bacterial microcolonies over a wide FOV of 5.7 mm × 4.3 mm without requiring a moving stage or lens. As a demonstration, we obtained high-resolution time-series images of S. epidermidis at 20-min intervals. We implemented an image-processing algorithm to analyze the spatiotemporal distribution of microcolonies, the development of which could be observed from a single bacterial cell. Test bacterial colonies with a minimum diameter of 20 μm could be enumerated within 6 h. We showed that our approach not only provides results that are comparable to conventional colony-counting assays but also can be used to monitor the dynamics of colony formation and growth. This microcolony-counting system using on-chip microscopy represents a new platform that substantially reduces the detection time for bacterial colony counting. It uses chip-scale image acquisition and is a simple and compact solution for the automation of colony-counting assays and microbe behavior analysis with applications in antibacterial drug discovery.
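
    The colony-enumeration step can be illustrated with plain connected-component labelling on a thresholded frame. This is a minimal stand-in, not the ePetri pipeline's actual detector, and the threshold and size cutoff are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def count_colonies(image, threshold, min_pixels=1):
    """Count bright connected blobs in a reconstructed frame.

    Pixels above `threshold` are grouped into connected components;
    components smaller than `min_pixels` are discarded as noise.
    """
    mask = image > threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(sizes >= min_pixels))
```

Tracking colonies over the 20-min time-series frames would additionally require matching labels between consecutive frames.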


  10. Pixel-based OPC optimization based on conjugate gradients.

    PubMed

    Ma, Xu; Arce, Gonzalo R

    2011-01-31

    Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the required computational complexity of the optimization process, and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method which exhibits much faster convergence than the SD algorithm. The imaging formation process is represented by the Fourier series expansion model which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, a MRC penalty is proposed to enlarge the linear size of the sub-resolution assistant features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
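
    The CG method the paper adopts in place of steepest descent can be shown on the generic quadratic problem it solves. The cost here is a plain symmetric positive-definite system, a stand-in for the Fourier-series image-fidelity objective, not the PBOPC cost itself:

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=50, tol=1e-10):
    """Minimise 0.5*x^T A x - b^T x for symmetric positive-definite A,
    i.e. solve A x = b. Converges in at most dim(b) steps in exact
    arithmetic, far faster than steepest descent on ill-conditioned A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual = negative gradient
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x
```

In PBOPC the unknown x would be the vector of mask pixel values and the gradient would come from the coherent-system decomposition, with MRC and complexity penalties added to the cost.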

  11. Retrieval of Cloud Properties for Partially Cloud-Filled Pixels During CRYSTAL-FACE

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Minnis, P.; Smith, W. L.; Khaiyer, M. M.; Heck, P. W.; Sun-Mack, S.; Uttal, T.; Comstock, J.

    2003-12-01

    Partially cloud-filled pixels can be a significant problem for remote sensing of cloud properties. The optical depth and effective particle size are generally too small or too large, respectively, when derived from radiances that are assumed to be overcast but contain radiation from both clear and cloudy areas within the satellite imager field of view. This study presents a method for reducing the impact of such partially cloud-filled pixels by estimating the cloud fraction within each pixel using higher resolution visible (VIS, 0.65 µm) imager data. Although the nominal resolutions for most channels on the Geostationary Operational Environmental Satellite (GOES) imager and the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra are 4 and 1 km, respectively, both instruments also take VIS channel data at 1 km and 0.25 km, respectively. Thus, it may be possible to obtain an improved estimate of cloud fraction within the lower resolution pixels by using the information contained in the higher resolution VIS data. GOES and MODIS multi-spectral data, taken during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment (CRYSTAL-FACE), are analyzed with the algorithm used for the Atmospheric Radiation Measurement Program (ARM) and the Clouds and the Earth's Radiant Energy System (CERES) to derive cloud amount, temperature, height, phase, effective particle size, optical depth, and water path. Normally, the algorithm assumes that each pixel is either entirely clear or cloudy. In this study, a threshold method is applied to the higher resolution VIS data to estimate the partial cloud fraction within each low-resolution pixel. The cloud properties are then derived from the observed low-resolution radiances using the cloud cover estimate to properly extract the radiances due only to the cloudy part of the scene. 
This approach is applied to both GOES and MODIS data to estimate the improvement in the retrievals for each resolution. Results are compared with the radar reflectivity techniques employed by the NOAA ETL MMCR and the PARSL 94 GHz radars located at the CRYSTAL-FACE Eastern & Western Ground Sites, respectively. This technique is most likely to yield improvements for low and midlevel layer clouds that have little thermal variability in cloud height.
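The two-step retrieval described above can be sketched as follows. The reflectance threshold, radiance values, and function names are illustrative assumptions, not the study's actual thresholds:

```python
# Hedged sketch of the idea in the abstract: (1) estimate sub-pixel cloud
# fraction by thresholding the high-resolution VIS reflectances that fall inside
# one low-resolution pixel, then (2) remove the clear-sky contribution from the
# observed low-resolution radiance. Threshold and radiances are made-up values.

def cloud_fraction(vis_reflectances, threshold=0.3):
    """Fraction of high-res VIS sub-pixels whose reflectance exceeds a cloud threshold."""
    cloudy = sum(1 for r in vis_reflectances if r > threshold)
    return cloudy / len(vis_reflectances)

def cloudy_radiance(observed, clear_sky, fraction):
    """Invert L_obs = f*L_cloud + (1 - f)*L_clear for the cloud-only radiance."""
    return (observed - (1.0 - fraction) * clear_sky) / fraction

# 16 high-res VIS samples inside one low-resolution pixel; 4 are bright (cloudy).
f = cloud_fraction([0.1] * 12 + [0.6] * 4)                       # -> 0.25
L_cloud = cloudy_radiance(observed=50.0, clear_sky=20.0, fraction=f)  # -> 140.0
```

The cloud-only radiance `L_cloud` would then feed the standard overcast retrieval, rather than the mixed-scene radiance that biases optical depth and particle size.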

  12. Aircraft target detection algorithm based on high resolution spaceborne SAR imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing

    2018-03-01

    In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with a Markov random field (MRF): the Gamma mixture model produces the initial classification result, which is then optimized by the MRF technique using the spatial correlation between pixels. Additionally, morphological methods are employed to extract the airport region of interest (ROI), in which suspected aircraft target samples are classified to reduce false alarms and improve detection performance. Finally, this paper presents aircraft target detection results, which have been verified by simulation tests.

  13. Super-resolution for scanning light stimulation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bitzer, L. A.; Neumann, K.; Benson, N., E-mail: niels.benson@uni-due.de

    Super-resolution (SR) is a technique used in digital image processing to overcome the resolution limitation of imaging systems. In this process, a single high resolution image is reconstructed from multiple low resolution images. SR is commonly used for CCD and CMOS (Complementary Metal-Oxide-Semiconductor) sensor images, as well as for medical applications, e.g., magnetic resonance imaging. Here, we demonstrate that super-resolution can be applied with scanning light stimulation (LS) systems, which are commonly used to obtain space-resolved electro-optical parameters of a sample. For our purposes, the Projection Onto Convex Sets (POCS) algorithm was chosen and modified to suit the needs of LS systems. To demonstrate the SR adaptation, an Optical Beam Induced Current (OBIC) LS system was used. The POCS algorithm was optimized by means of OBIC short circuit current measurements on a multicrystalline solar cell, resulting in a mean square error reduction of up to 61% and improved image quality.
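As a hedged illustration of the POCS principle (not the authors' modified algorithm for LS systems), a toy 1-D reconstruction can project a high-resolution estimate onto the constraint set of each low-resolution sample in turn:

```python
# Toy 1-D POCS-style super-resolution (illustrative only). Each low-resolution
# sample constrains the mean of the two high-resolution pixels it covers;
# cycling orthogonal projections onto these constraint hyperplanes drives the
# estimate toward consistency with every shifted LR frame.

def pocs_sr(lr_frames, hr_len, iters=50):
    """lr_frames: list of (shift, samples); each sample is the mean of 2 HR pixels."""
    hr = [0.0] * hr_len
    for _ in range(iters):
        for shift, samples in lr_frames:
            for k, s in enumerate(samples):
                i = shift + 2 * k                  # HR pixels covered by this sample
                if i + 1 >= hr_len:
                    continue
                residual = s - (hr[i] + hr[i + 1]) / 2.0
                hr[i] += residual                  # adding the residual to both pixels
                hr[i + 1] += residual              # is the orthogonal projection onto
                                                   # the hyperplane x_i + x_{i+1} = 2s
    return hr

truth = [0.0, 0.0, 2.0, 2.0, 0.0, 0.0, 2.0, 2.0]
frames = [(0, [0.0, 2.0, 0.0, 2.0]),   # unshifted LR frame (pairwise means of truth)
          (1, [1.0, 1.0, 1.0])]        # frame shifted by one HR pixel
hr = pocs_sr(frames, 8)                # converges to the truth signal above
```

Real POCS implementations add further convex constraints (non-negativity, amplitude bounds, blur models); this sketch keeps only the data-consistency projections to show the mechanism.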

  14. Spatial super-resolution of colored images by micro mirrors

    NASA Astrophysics Data System (ADS)

    Dahan, Daniel; Yaacobi, Ami; Pinsky, Ephraim; Zalevsky, Zeev

    2018-06-01

    In this paper, we present two methods of dealing with the geometric resolution limit of color imaging sensors. It is possible to overcome the pixel size limit by adding a digital micro-mirror device component on the intermediate image plane of an optical system, and adapting its pattern in a computerized manner before sampling each frame. The full RGB image can be reconstructed from the Bayer camera by building a dedicated optical design, or by adjusting the demosaicing process to the special format of the enhanced image.

  15. Recent advances in time series InSAR

    NASA Astrophysics Data System (ADS)

    Hooper, Andrew; Bekaert, David; Spaans, Karsten

    2010-05-01

    Despite the multiple successes of InSAR at measuring surface displacement, in many instances the signal over much of an image either decorrelates too quickly to be useful or is swamped by atmospheric noise. Time series InSAR methods seek to address these issues by essentially increasing the signal-to-noise ratio (SNR) through the use of more data. These techniques are particularly useful for applications where the strain rates detected at the surface are low, such as postseismic/interseismic motion, magma/fluid movement, landslides and reservoir exploitation. Our previous developments in this field have included a persistent scatterer algorithm based on spatial correlation, a full resolution small baseline approach based on the same strategy, and a procedure for combining the two [Hooper, GRL, 2008]. This combined method works well on small areas (up to one frame) at ERS or Envisat strip-map resolution. However, in applying it to larger areas, such as the Guerrero region of Mexico and western Anatolia in Turkey, or when processing data at higher resolution, e.g. from TerraSAR-X, computer resource problems can arise. We have therefore altered the processing strategy to involve smarter use of computer memory. Further improvement is achieved by the resampling of the selected pixels (whether persistent scatterers or distributed scatterers) to a coarser resolution - usually we do not require a resolution on the scale of individual resolution cells for geophysical applications. Aliasing is avoided by summing the phase of nearby selected pixels, weighted according to their estimated SNR. This is akin to smart multilooking, but note that better results can be achieved than by starting the analysis with low-resolution (multilooked) data. Another development concerns selecting pixels only in images where they appear reliable. 
This allows for resolution cells that become correlated/decorrelated either in a temporary fashion, e.g., due to snow cover, or in a permanent way due to the appearance or removal of scatterers. The detection algorithm relies on the degree of spatial correlation for the pixel of interest in each image. We have also modified our 3-D phase-unwrapping algorithms to allow for the resulting differing combinations of coherent pixels in every interferogram. We demonstrate our improved techniques on volcanoes in Iceland and the 2006 slow-slip event in Guerrero, Mexico.
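The SNR-weighted phase combination described above can be sketched by summing unit complex vectors. The phases, weights, and function name below are illustrative assumptions, not values from the cited processing chain:

```python
# Sketch of SNR-weighted "smart multilooking": phases of nearby selected pixels
# are combined as unit complex vectors weighted by their estimated SNR, so a
# noisy low-SNR pixel contributes little to the coarse-resolution phase.
import cmath

def weighted_multilook(phases, snr_weights):
    """Combine pixel phases (radians) into one coarse-resolution phase estimate."""
    acc = sum(w * cmath.exp(1j * p) for p, w in zip(phases, snr_weights))
    return cmath.phase(acc)   # argument of the SNR-weighted complex sum

# three consistent pixels near 0.1 rad plus one down-weighted outlier at 3.0 rad
phi = weighted_multilook([0.10, 0.12, 0.08, 3.0], [1.0, 1.0, 1.0, 0.05])
# phi stays close to 0.1 rad; the outlier barely moves the estimate
```

Averaging in the complex domain rather than on raw phase values also sidesteps 2π wrapping problems, which is why multilooking is conventionally done this way.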

  16. TestSTORM: Simulator for optimizing sample labeling and image acquisition in localization based super-resolution microscopy

    PubMed Central

    Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós

    2014-01-01

    Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813

  17. Reconstruction of 2D PET data with Monte Carlo generated system matrix for generalized natural pixels

    NASA Astrophysics Data System (ADS)

    Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.

    2006-06-01

    In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Due to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block row needs to be stored. Data were generated using a fast Monte Carlo simulator using ray tracing. The proposed method was compared to a listmode MLEM algorithm, which used ray tracing for doing forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed the proposed method exhibited increased performance as compared to a standard listmode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. 
It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).

  18. Challenges, constraints, and results of lens design for 17 micron-bolometer focal plane arrays in 8-12 micron waveband

    NASA Astrophysics Data System (ADS)

    Schuster, Norbert; Franks, John

    2011-06-01

    In the 8-12 micron waveband Focal Plane Arrays (FPA) are available with a 17 micron pixel pitch in different array sizes (e.g. 512 x 480 pixels and 320 x 240 pixels) and with excellent electrical properties. Many applications become possible using this new type of IR-detector, which will become the future standard in uncooled technology. Lenses with an f-number faster than f/1.5 minimize the diffraction impact on the spatial resolution and guarantee a high thermal resolution for uncooled cameras. Both effects will be quantified. The distinction between Traditional f-number (TF) and Radiometric f-number (RF) is discussed. Lenses with different focal lengths are required for applications in a variety of markets. They are classified by their Horizontal field of view (HFOV). Considering the requirements of high-volume markets, several two-lens solutions will be discussed. A commonly accepted parameter of spatial resolution is the Modulation Transfer Function (MTF) value at the Nyquist frequency of the detector (here 30 cy/mm). This parameter of resolution will be presented versus field of view. Wide Angle and Super Wide Angle lenses are susceptible to low relative illumination in the corner of the detector. Measures to reduce this drop to an acceptable value are presented.

  19. Development of a Real-Time Pulse Processing Algorithm for TES-Based X-Ray Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Tan, Hui; Hennig, Wolfgang; Warburton, William K.; Doriese, W. Bertrand; Kilbourne, Caroline A.

    2011-01-01

    We report here a real-time pulse processing algorithm for superconducting transition-edge sensor (TES) based x-ray microcalorimeters. TES-based microcalorimeters offer ultra-high energy resolutions, but the small volume of each pixel requires that large arrays of identical microcalorimeter pixels be built to achieve sufficient detection efficiency. That in turn requires that as much pulse processing as possible be performed at the front end of the readout electronics to avoid transferring large amounts of data to a host computer for post-processing. Therefore, a real-time pulse processing algorithm that not only can be implemented in the readout electronics but also achieves satisfactory energy resolutions is desired. We have developed an algorithm that can be easily implemented in hardware. We then tested the algorithm offline using several data sets acquired with an 8 x 8 Goddard TES x-ray calorimeter array and a 2 x 16 NIST time-division SQUID multiplexer. We obtained an average energy resolution of close to 3.0 eV at 6 keV for the multiplexed pixels while preserving over 99% of the events in the data sets.
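One common building block for front-end pulse processing of this kind is a matched (optimal) filter, which reduces each record to a single dot product. The sketch below is a generic illustration under an assumed pulse shape and template, not the specific algorithm of the paper:

```python
# Hedged sketch of matched filtering for microcalorimeter pulses: subtract the
# pre-trigger baseline, then take a dot product with a fixed unit-shape template.
# One multiply-accumulate per sample makes this cheap enough for front-end
# hardware. Template, record, and scaling below are invented example values.

def pulse_amplitude(record, template, baseline_samples=4):
    """Estimate pulse amplitude by matched filtering a baseline-subtracted record."""
    baseline = sum(record[:baseline_samples]) / baseline_samples
    norm = sum(t * t for t in template)          # template self-energy
    return sum((r - baseline) * t for r, t in zip(record, template)) / norm

template = [0.0, 0.0, 0.0, 0.0, 1.0, 0.8, 0.6, 0.4, 0.2, 0.1]
record = [2.0, 2.0, 2.0, 2.0, 5.0, 4.4, 3.8, 3.2, 2.6, 2.3]   # amplitude-3 pulse on baseline 2
amp = pulse_amplitude(record, template)   # -> 3.0
```

In practice the template would be derived from the average pulse and noise spectrum; the point here is only that the per-record arithmetic is simple enough to live in the readout electronics.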

  20. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.

  1. Adaptive block online learning target tracking based on super pixel segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Yue; Li, Jianzeng

    2018-04-01

    Video target tracking technology has made great progress through sustained research, but many problems remain unsolved. This paper proposes a new target tracking algorithm based on image segmentation. First, we divide the selected region into superpixels using the simple linear iterative clustering (SLIC) algorithm; we then group them into sub-blocks with an improved density-based spatial clustering of applications with noise (DBSCAN) clustering algorithm. Each sub-block independently trains a classifier and is tracked; the algorithm then discards sub-blocks whose tracking fails and reintegrates the remaining sub-blocks into the tracking box to complete the target tracking. Experimental results show that, compared with current mainstream algorithms, our algorithm works effectively under occlusion, rotation, scale change and many other challenges in target tracking.

  2. Detection of Multi-Layer and Vertically-Extended Clouds Using A-Train Sensors

    NASA Technical Reports Server (NTRS)

    Joiner, J.; Vasilkov, A. P.; Bhartia, P. K.; Wind, G.; Platnick, S.; Menzel, W. P.

    2010-01-01

    The detection of multiple cloud layers using satellite observations is important for retrieval algorithms as well as climate applications. In this paper, we describe a relatively simple algorithm to detect multiple cloud layers and distinguish them from vertically-extended clouds. The algorithm can be applied to coincident passive sensors that derive both cloud-top pressure from the thermal infrared observations and an estimate of solar photon pathlength from UV, visible, or near-IR measurements. Here, we use data from the A-train afternoon constellation of satellites: cloud-top pressure, cloud optical thickness, the multi-layer flag from the Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) and the optical centroid cloud pressure from the Aura Ozone Monitoring Instrument (OMI). For the first time, we use data from the CloudSat radar to evaluate the results of a multi-layer cloud detection scheme. The cloud classification algorithms applied with different passive sensor configurations compare well with each other as well as with data from CloudSat. We compute monthly mean fractions of pixels containing multi-layer and vertically-extended clouds for January and July 2007 at the OMI spatial resolution (12 km x 24 km at nadir) and at the 5 km x 5 km MODIS resolution used for infrared cloud retrievals. There are seasonal variations in the spatial distribution of the different cloud types. The fraction of cloudy pixels containing distinct multi-layer cloud is a strong function of the pixel size. Globally averaged, these fractions are approximately 20% and 10% for OMI and MODIS, respectively. These fractions may be significantly higher or lower depending upon location. There is a much smaller resolution dependence for fractions of pixels containing vertically-extended clouds (approximately 20% for OMI and slightly less for MODIS globally), suggesting larger spatial scales for these clouds. 
We also find higher fractions of vertically-extended clouds over land as compared with ocean, particularly in the tropics and summer hemisphere.
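The pairing of thermal-IR cloud-top pressure with a photon-pathlength-derived optical centroid pressure suggests a decision rule of the following shape. The thresholds and function name are invented for illustration and are not the paper's classification scheme:

```python
# Illustrative sketch of the detection idea: the IR cloud-top pressure sits near
# the top of the highest cloud layer, while the optical centroid pressure sits
# deep inside the cloud. A centroid far below the cloud top indicates either a
# vertically-extended cloud (if optically thick) or distinct layers (if thin).
# The 400 hPa gap and tau = 20 split are made-up stand-in thresholds.

def classify_cloud(top_pressure_hpa, centroid_pressure_hpa, optical_thickness,
                   gap_hpa=400.0, thick_tau=20.0):
    gap = centroid_pressure_hpa - top_pressure_hpa   # centroid lies at higher pressure
    if gap < gap_hpa:
        return "single-layer"
    # large gap: optically thick -> one deep cloud; optically thin -> distinct layers
    return "vertically-extended" if optical_thickness >= thick_tau else "multi-layer"

c1 = classify_cloud(250.0, 800.0, 5.0)    # thin cloud, big gap -> "multi-layer"
c2 = classify_cloud(250.0, 750.0, 40.0)   # thick cloud, big gap -> "vertically-extended"
c3 = classify_cloud(300.0, 450.0, 10.0)   # small gap -> "single-layer"
```

The real algorithm combines these quantities with the MODIS multi-layer flag and per-pixel uncertainties, but this captures why two pressures derived from different physics carry layering information that neither carries alone.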

  3. Super Typhoon Halong off Taiwan

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On July 14, 2002, Super Typhoon Halong was east of Taiwan (left edge) in the western Pacific Ocean. At the time this image was taken the storm was a Category 4 hurricane, with maximum sustained winds of 115 knots (132 miles per hour), but as recently as July 12, winds were at 135 knots (155 miles per hour). Halong has moved northwards and pounded Okinawa, Japan, with heavy rain and high winds, just days after tropical Storm Chataan hit the country, creating flooding and killing several people. The storm is expected to be a continuing threat on Monday and Tuesday. This image was acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite on July 14, 2002. Please note that the high-resolution scene provided here is 500 meters per pixel. For a copy of the scene at the sensor's fullest resolution, visit the MODIS Rapid Response Image Gallery. Image courtesy Jacques Descloitres, MODIS Land Rapid Response Team at NASA GSFC

  4. Quantitative evaluation of software packages for single-molecule localization microscopy.

    PubMed

    Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael

    2015-08-01

    The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.

  5. Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy.

    PubMed

    Huang, Xiaoshuai; Fan, Junchao; Li, Liuju; Liu, Haosen; Wu, Runlong; Wu, Yi; Wei, Lisi; Mao, Heng; Lal, Amit; Xi, Peng; Tang, Liqiang; Zhang, Yunfeng; Liu, Yanmei; Tan, Shan; Chen, Liangyi

    2018-06-01

    To increase the temporal resolution and maximal imaging time of super-resolution (SR) microscopy, we have developed a deconvolution algorithm for structured illumination microscopy based on Hessian matrixes (Hessian-SIM). It uses the continuity of biological structures in multiple dimensions as a priori knowledge to guide image reconstruction and attains artifact-minimized SR images with less than 10% of the photon dose used by conventional SIM while substantially outperforming current algorithms at low signal intensities. Hessian-SIM enables rapid imaging of moving vesicles or loops in the endoplasmic reticulum without motion artifacts and with a spatiotemporal resolution of 88 nm and 188 Hz. Its high sensitivity allows the use of sub-millisecond excitation pulses followed by dark recovery times to reduce photobleaching of fluorescent proteins, enabling hour-long time-lapse SR imaging of actin filaments in live cells. Finally, we observed the structural dynamics of mitochondrial cristae and structures that, to our knowledge, have not been observed previously, such as enlarged fusion pores during vesicle exocytosis.

  6. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a long-standing problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  7. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a long-standing problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.

  8. Super-resolution Time-Lapse Seismic Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Ovcharenko, O.; Kazei, V.; Peter, D. B.; Alkhalifah, T.

    2017-12-01

    Time-lapse seismic waveform inversion is a technique which allows tracking changes in reservoirs over time. Such monitoring is relatively computationally expensive, and therefore it is barely feasible to perform it on the fly. Most of the expense relates to the numerous FWI iterations at high temporal frequencies, which are inevitable since the low-frequency components cannot resolve fine-scale features of a velocity model. Inverted velocity changes are also blurred when there is noise in the data, so the problem of low-resolution images is widely known. One of the problems intensively tackled by the computer vision research community is recovering high-resolution images from their low-resolution versions. Using artificial neural networks to achieve super-resolution from a single downsampled image is one of the leading solutions to this problem. Each pixel of the upscaled image is affected by all the pixels of its low-resolution version, which enables the workflow to recover features that are likely to occur in the corresponding environment. In the present work, we adopt a machine learning image enhancement technique to improve the resolution of time-lapse full-waveform inversion. We first invert the baseline model with conventional FWI. Then we run a few iterations of FWI on a set of the monitoring data to find the desired model changes. These changes are blurred, and we enhance their resolution by using a deep neural network. The network is trained to map low-resolution model updates predicted by FWI into the real perturbations of the baseline model. For supervised training of the network, we generate a set of random perturbations in the baseline model and perform FWI on the noisy data from the perturbed models. We test the approach on a realistic perturbation of the Marmousi II model and demonstrate that it outperforms conventional convolution-based deblurring techniques.

  9. Shape and Albedo from Shading (SAfS) for Pixel-Level dem Generation from Monocular Images Constrained by Low-Resolution dem

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian

    2016-06-01

    Lunar topographic information, e.g., a lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry, of which the photogrammetric methods require multiple stereo images of an area. DEMs generated by these methods usually rely on various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of computer vision, has been introduced for pixel-level-resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are to be estimated simultaneously, this becomes a SAfS (Shape and Albedo from Shading) problem, which is under-determined without additional information. Previous works show strong statistical regularities in the albedo of natural objects, and this holds even more strongly for the lunar surface, whose albedo is less complex than the Earth's. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the covered area with a known light source; at the same time, we estimate the corresponding pixel-wise albedo map. We regulate the behaviour of the albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. 
Experiments are carried out using the monocular images from Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) (0.5 m spatial resolution), constrained by the SELENE and LRO Elevation Model (SLDEM 2015) of 60 m spatial resolution. The results indicate that local details are largely recovered by the algorithm while low frequency topographic consistency is affected by the low-resolution DEM.

  10. Enhancing multi-spot structured illumination microscopy with fluorescence difference

    NASA Astrophysics Data System (ADS)

    Ward, Edward N.; Torkelsen, Frida H.; Pal, Robert

    2018-03-01

    Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested.

  11. IMPROVING THE ACCURACY OF HISTORIC SATELLITE IMAGE CLASSIFICATION BY COMBINING LOW-RESOLUTION MULTISPECTRAL DATA WITH HIGH-RESOLUTION PANCHROMATIC DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Getman, Daniel J

    2008-01-01

    Many attempts to observe changes in terrestrial systems over time would be significantly enhanced if it were possible to improve the accuracy of classifications of low-resolution historic satellite data. In an effort to examine improving the accuracy of historic satellite image classification by combining satellite and air photo data, two experiments were undertaken in which low-resolution multispectral data and high-resolution panchromatic data were combined and then classified using the ECHO spectral-spatial image classification algorithm and the Maximum Likelihood technique. The multispectral data consisted of 6 multispectral channels (30-meter pixel resolution) from Landsat 7. These data were augmented with panchromatic data (15-meter pixel resolution) from Landsat 7 in the first experiment, and with a mosaic of digital aerial photography (1-meter pixel resolution) in the second. The addition of the Landsat 7 panchromatic data provided a significant improvement in the accuracy of classifications made using the ECHO algorithm. Although the inclusion of aerial photography provided an improvement in accuracy, this improvement was only statistically significant at a 40-60% level. These results suggest that once error levels associated with combining aerial photography and multispectral satellite data are reduced, this approach has the potential to significantly enhance the precision and accuracy of classifications made using historic remotely sensed data, as a way to extend the time range of efforts to track temporal changes in terrestrial systems.

  12. Fast myopic 2D-SIM super resolution microscopy with joint modulation pattern estimation

    NASA Astrophysics Data System (ADS)

    Orieux, François; Loriette, Vincent; Olivo-Marin, Jean-Christophe; Sepulveda, Eduardo; Fragola, Alexandra

    2017-12-01

    Super-resolution in structured illumination microscopy (SIM) is obtained through de-aliasing of modulated raw images, in which high frequencies are measured indirectly inside the optical transfer function. Usual approaches that use 9 or 15 images are often too slow for dynamic studies. Moreover, as experimental conditions change with time, modulation parameters must be estimated within the images. This paper tackles the problem of image reconstruction for fast super resolution in SIM, where the number of available raw images is reduced to four instead of nine or fifteen. Within an optimization framework, the solution is inferred via a joint myopic criterion for image and modulation (or acquisition) parameters, leading to what is frequently called a myopic or semi-blind inversion problem. The estimate is chosen as the minimizer of the nonlinear criterion, numerically calculated by means of a block coordinate optimization algorithm. The effectiveness of the proposed method is demonstrated for simulated and experimental examples. The results show precise estimation of the modulation parameters jointly with the reconstruction of the super resolution image. The method also shows its effectiveness for thick biological samples.

  13. A super-resolution ultrasound method for brain vascular mapping

    PubMed Central

    O'Reilly, Meaghan A.; Hynynen, Kullervo

    2013-01-01

    Purpose: High-resolution vascular imaging has not been achieved in the brain due to limitations of current clinical imaging modalities. The authors present a method for transcranial ultrasound imaging of single micrometer-size bubbles within a tube phantom. Methods: Emissions from single bubbles within a tube phantom were mapped through an ex vivo human skull using a sparse hemispherical receiver array and a passive beamforming algorithm. Noninvasive phase and amplitude correction techniques were applied to compensate for the aberrating effects of the skull bone. The positions of the individual bubbles were estimated beyond the diffraction limit of ultrasound to produce a super-resolution image of the tube phantom, which was compared with microcomputed tomography (micro-CT). Results: The resulting super-resolution ultrasound image is comparable to results obtained via the micro-CT for small tissue specimen imaging. Conclusions: This method provides superior resolution to deep-tissue contrast ultrasound and has the potential to be extended to provide complete vascular network imaging in the brain. PMID:24320408

  14. Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2008-01-01

    We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive, transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 237]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field-of-view with the fewest number of read-out channels. In this contribution we extend the theoretical correlated energy position optimal filter (CEPOF) algorithm (originally developed for 2-TES continuous-absorber PoSTs) to investigate the practical implementation on multi-pixel single-TES PoSTs or Hydras. We use numerically simulated data for a nine-absorber device, which includes realistic detector noise, to demonstrate an iterative scheme that enables convergence on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of the random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that full-width-at-half-maximum energy resolution of < 8 eV coupled with position sensitivity down to a few hundred eV should be achievable for a fully optimized device.

  15. Data Processing for a High Resolution Preclinical PET Detector Based on Philips DPC Digital SiPMs

    NASA Astrophysics Data System (ADS)

    Schug, David; Wehner, Jakob; Goldschmidt, Benjamin; Lerche, Christoph; Dueppenbecker, Peter Michael; Hallen, Patrick; Weissler, Bjoern; Gebhardt, Pierre; Kiessling, Fabian; Schulz, Volkmar

    2015-06-01

    In positron emission tomography (PET) systems, light sharing techniques are commonly used to read out scintillator arrays consisting of scintillation elements that are smaller than the optical sensors. The scintillating element is then identified by evaluating the signal heights in the readout channels using statistical algorithms, the center of gravity (COG) algorithm being the simplest and most widely used one. We propose a COG algorithm with a fixed number of input channels in order to guarantee a stable calculation of the position. The algorithm is implemented and tested with the raw detector data obtained with the Hyperion-II D preclinical PET insert, which uses Philips Digital Photon Counting's (PDPC) digital SiPMs. The gamma detectors use LYSO scintillator arrays with 30 × 30 crystals of 1 × 1 × 12 mm³ in size coupled to 4 × 4 PDPC DPC 3200-22 sensors (DPC) via a 2-mm-thick light guide. These self-triggering sensors are made up of 2 × 2 pixels, resulting in a total of 64 readout channels. We restrict the COG calculation to a main pixel, which captures most of the scintillation light from a crystal, and its (direct and diagonal) neighboring pixels, and reject single events in which these data are not fully available. This results in stable COG positions for a crystal element and enables high spatial image resolution. Due to the sensor layout, for some crystals it is very likely that a single diagonal neighbor pixel is missing as a result of the low light level on the corresponding DPC. This leads to a loss of sensitivity if these events are rejected. An enhancement of the COG algorithm is proposed which handles the potentially missing pixel separately, both for the crystal identification and the energy calculation. Using this advancement, we show that the sensitivity of the Hyperion-II D insert using the described scintillator configuration can be improved by 20-100% for practically useful readout thresholds of a single DPC pixel ranging from 17-52 photons. Furthermore, we show that the energy resolution of the scanner is superior for all readout thresholds, by 0-1.6% (relative difference), if singles with a single missing pixel are accepted and correctly handled, compared to the COG method that accepts only singles with all neighbors present. The presented methods can not only be applied to gamma detectors employing DPC sensors, but can be generalized to other similarly structured, self-triggering detectors using light sharing techniques as well.
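
    The restricted center-of-gravity calculation described above can be sketched roughly as follows. This is a simplified illustration assuming a 3 × 3 neighborhood around the main pixel and a plain photon-count array; names and interface are hypothetical, and the missing-pixel enhancement is omitted:

```python
import numpy as np

def cog_position(channels, main):
    """Center-of-gravity position restricted to the fixed 3x3
    neighborhood of the main (brightest) pixel.

    channels: 2D array of photon counts per sensor pixel.
    main: (row, col) index of the main pixel.
    Returns None when a neighbor is unavailable (event rejected),
    mimicking the fixed-input-channel rule described above.
    """
    r, c = main
    rows, cols = channels.shape
    # Reject events whose 3x3 neighborhood falls off the sensor.
    if r - 1 < 0 or c - 1 < 0 or r + 1 >= rows or c + 1 >= cols:
        return None
    patch = channels[r - 1:r + 2, c - 1:c + 2]
    total = patch.sum()
    if total <= 0:
        return None
    ys, xs = np.mgrid[r - 1:r + 2, c - 1:c + 2]
    return (float((ys * patch).sum() / total),
            float((xs * patch).sum() / total))
```

Because the set of input channels is fixed, the computed position does not jump when faint outer pixels drop below threshold, which is the stability property the abstract emphasizes.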

  16. Low-light-level image super-resolution reconstruction based on iterative projection photon localization algorithm

    NASA Astrophysics Data System (ADS)

    Ying, Changsheng; Zhao, Peng; Li, Ye

    2018-01-01

    The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by ICCD suffer from low spatial resolution and contrast, and the target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. The dispersion in the double-proximity-focused image intensifier is the main factor that leads to a reduction in image resolution and contrast. We divide the integration time into subintervals that are short enough to get photon images, so the overlapping effect and overstacking effect of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under the constraints of regularity. The accurate position information of the incident photons in the reconstructed SR image is obtained by the weighted centroids calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.

  17. The imaging performance of the SRC on Mars Express

    USGS Publications Warehouse

    Oberst, J.; Schwarz, G.; Behnke, T.; Hoffmann, H.; Matz, K.-D.; Flohrer, J.; Hirsch, H.; Roatsch, T.; Scholten, F.; Hauber, E.; Brinkmann, B.; Jaumann, R.; Williams, D.; Kirk, R.; Duxbury, T.; Leu, C.; Neukum, G.

    2008-01-01

    The Mars Express spacecraft carries the pushbroom scanner high-resolution stereo camera (HRSC) and its added imaging subsystem, the super resolution channel (SRC). The SRC is equipped with its own optical system and a 1024 × 1024 framing sensor. SRC produces snapshots with 2.3 m ground pixel size from the nominal spacecraft pericenter height of 250 km, which are typically embedded in the central part of the large HRSC scenes. The salient features of the SRC are its light-weight optics, a reliable CCD detector, and high-speed read-out electronics. The quality and effective visibility of details in the SRC images unfortunately fall short of what was expected. In cases where thermal balance cannot be reached, artifacts such as blurring and "ghost features" are observed in the images. In addition, images show large numbers of blemish pixels and are plagued by electronic noise. As a consequence, we have developed various image-improving algorithms, which are discussed in this paper. While results are encouraging, further studies of image restoration by dedicated processing appear worthwhile. The SRC has obtained more than 6940 images at the time of writing (1 September 2007), which often show fascinating details in surface morphology. SRC images are highly useful for a variety of applications in planetary geology, for studies of the Mars atmosphere, and for astrometric observations of the Martian satellites. This paper will give a full account of the design philosophy, technical concept, calibration, operation, integration with HRSC, and performance, as well as science accomplishments of the SRC. © 2007 Elsevier Ltd. All rights reserved.

  18. Super-resolved all-refocused image with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Li, Lin; Hou, Guangqi

    2015-12-01

    This paper proposes an approach to produce super-resolution all-refocused images with a plenoptic camera. A plenoptic camera can be produced by putting a micro-lens array between the lens and the sensor of a conventional camera. This kind of camera captures both the angular and spatial information of the scene in a single shot. A sequence of digitally refocused images, each focused at a different depth, can be produced by processing the 4D light field captured by the plenoptic camera. The number of pixels in a refocused image equals the number of micro-lenses in the array, so the limited micro-lens count yields low-resolution refocused images in which few details survive. These lost details, which are often high-frequency information, are important for the in-focus part of the refocused image; we therefore super-resolve these in-focus parts. An image segmentation method based on random walks, operating on the depth map produced from the 4D light field data, is used to separate the foreground and background in the refocused image, and a focus evaluation function determines which refocused image has the clearest foreground and which has the clearest background. Subsequently, we employ a single-image super-resolution method based on sparse signal representation to process the in-focus parts of these selected refocused images. Eventually, we obtain the super-resolved all-focus image by digitally merging the in-focus background and foreground parts, so that more spatial detail is kept in the output images. Our method enhances the resolution of the refocused image, and only the refocused images with the clearest foreground and background need to be super-resolved.

  19. On the creation of high spatial resolution imaging spectroscopy data from multi-temporal low spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Yao, Wei; van Aardt, Jan; Messinger, David

    2017-05-01

    The Hyperspectral Infrared Imager (HyspIRI) mission aims to provide global imaging spectroscopy data to benefit ecosystem studies in particular. The onboard spectrometer will collect radiance spectra from the visible to short wave infrared (VSWIR) regions (400-2500 nm). The mission calls for fine spectral resolution (10 nm band width) and as such will enable scientists to perform material characterization, species classification, and even sub-pixel mapping. However, the global coverage requirement results in a relatively low spatial resolution (GSD 30 m), which restricts applications to objects of similar scales. We therefore have focused on the assessment of sub-pixel vegetation structure from spectroscopy data in past studies. In this study, we investigate the development or reconstruction of higher spatial resolution imaging spectroscopy data via fusion of multi-temporal data sets to address the drawbacks implicit in low spatial resolution imagery. The projected temporal resolution of the HyspIRI VSWIR instrument is 15 days, which implies that we have access to as many as six data sets for an area over the course of a growth season. Previous studies have shown that select vegetation structural parameters, e.g., leaf area index (LAI) and gross ecosystem production (GEP), are relatively constant in summer and winter for temperate forests; we therefore consider the data sets collected in summer to be from a similar, stable forest structure. The first step, prior to fusion, involves registration of the multi-temporal data. A data fusion algorithm then can be applied to the pre-processed data sets. The approach hinges on an algorithm that has been widely applied to fuse RGB images. Ideally, if we have four images of a scene that all meet the following requirements: (i) they are captured with the same camera configuration; (ii) the pixel size of each image is x; and (iii) at least r² images are aligned on a grid of x/r, then a high-resolution image with a pixel size of x/r can be reconstructed from the multi-temporal set. The algorithm was applied to data from NASA's classic Airborne Visible and Infrared Imaging Spectrometer (AVIRIS-C; GSD 18 m), collected between 2013-2015 (summer and fall) over our study area (NEON's Southwest Pacific Domain; Fresno, CA) to generate higher spatial resolution imagery (GSD 9 m). The reconstructed data set was validated via comparison to NEON's imaging spectrometer (NIS) data (GSD 1 m). The results showed that the algorithm worked well with the AVIRIS-C data and could be applied to the HyspIRI data.
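
    The grid-alignment requirement above can be illustrated with a toy reconstruction: if r² LR images sample an x/r grid exactly, interleaving them recovers the fine grid. This is a sketch under that idealized assumption, not the authors' fusion algorithm; the function name and shift convention are hypothetical:

```python
import numpy as np

def interleave_lr(images, r):
    """Reconstruct an r-times finer grid from r*r low-resolution
    images whose sampling grids are offset by 1/r-pixel steps.

    images: dict mapping subpixel shift (dy, dx) in HR-pixel units
            (0 <= dy, dx < r) to a 2D LR array; all LR arrays share
            one shape. Each LR image fills one phase of the HR grid.
    """
    h, w = next(iter(images.values())).shape
    hr = np.empty((h * r, w * r))
    for (dy, dx), img in images.items():
        hr[dy::r, dx::r] = img
    return hr
```

With noisy, imperfectly registered data the real algorithm must interpolate rather than interleave, but the counting argument (r² images for an r-fold resolution gain) is the same.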

  20. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    PubMed

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-03-12

    This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations, leading to high resolution, while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated, which has to be compared with the 168 µm measurement range.
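
    The twin-grid principle above (fine phase from one grid, ambiguity removal from the period difference) can be sketched in a few lines. This is an idealized 1D illustration with noise-free cosine profiles, not the authors' released software; function names and the shift convention are hypothetical:

```python
import numpy as np

def grid_phase(profile, period):
    """Phase (rad) of the spatial `period` component of a 1D
    intensity profile, via projection onto one Fourier frequency."""
    n = np.arange(profile.size)
    return np.angle(np.sum(profile * np.exp(-2j * np.pi * n / period)))

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return np.angle(np.exp(1j * a))

def twin_grid_displacement(sig, ref, p1, p2):
    """Displacement (pixels) of `sig` relative to `ref` for a target
    carrying twin grids of periods p1 < p2. The phase of grid 1 gives
    a fine, mod-p1 estimate; the phase difference between the grids,
    with synthetic period p1*p2/(p2 - p1), removes the ambiguity."""
    d1 = wrap(grid_phase(sig, p1) - grid_phase(ref, p1))
    d2 = wrap(grid_phase(sig, p2) - grid_phase(ref, p2))
    fine = -d1 / (2 * np.pi) * p1                     # mod-p1 estimate
    coarse = -wrap(d1 - d2) / (2 * np.pi) * (p1 * p2 / (p2 - p1))
    k = np.round((coarse - fine) / p1)                # period counter
    return fine + k * p1
```

The coarse estimate only needs to be accurate to within half a fine period, which is what makes the large range-to-resolution ratio possible.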

  1. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    PubMed Central

    Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun

    2017-01-01

    To solve the problem of inaccurate estimation of the point spread function (PSF) of the ideal original image in traditional projections onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation from low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality; therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important in order to estimate the PSF of the HR image from multiple LR images. In this study, a linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which improves the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher-quality reconstructed images than the blind SR method and the bicubic interpolation method. PMID:28208837
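
    For reference, the data-consistency projection at the heart of POCS SR can be sketched as below, under a deliberately simplified observation model (pure box-average PSF and integer shifts); the actual paper estimates the PSF rather than assuming it, so this is only an illustration of the projection step:

```python
import numpy as np

def pocs_data_projection(hr, lr, shift, scale, delta=0.0):
    """One POCS projection onto the set of HR images consistent with a
    single LR observation, assuming the LR image is the HR image shifted
    by `shift` (in HR pixels) and box-averaged over scale x scale blocks.
    Residuals larger than `delta` are spread uniformly over the
    contributing HR pixels, restoring consistency for that LR pixel."""
    hr = hr.astype(float).copy()
    dy, dx = shift
    h, w = lr.shape
    for i in range(h):
        for j in range(w):
            block = hr[dy + i * scale:dy + (i + 1) * scale,
                       dx + j * scale:dx + (j + 1) * scale]
            r = lr[i, j] - block.mean()
            if abs(r) > delta:
                block += r   # view into hr: back-project the residual
    return hr
```

Cycling this projection over all LR observations (and PSFs) is what drives the POCS iteration toward a mutually consistent HR estimate.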

  2. Super-Resolution for “Jilin-1” Satellite Video Imagery via a Convolutional Network

    PubMed Central

    Wang, Zhongyuan; Wang, Lei; Ren, Yexian

    2018-01-01

    Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method’s practicality. Experimental results on “Jilin-1” satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods. PMID:29652838

  3. Super-Resolution for "Jilin-1" Satellite Video Imagery via a Convolutional Network.

    PubMed

    Xiao, Aoran; Wang, Zhongyuan; Wang, Lei; Ren, Yexian

    2018-04-13

    Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method's practicality. Experimental results on "Jilin-1" satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods.

  4. A novel weighted-direction color interpolation

    NASA Astrophysics Data System (ADS)

    Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng

    2013-08-01

    A digital camera captures images by covering the sensor surface with a color filter array (CFA), obtaining only one color sample at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution image. In this paper, a new algorithm based on edge adaptivity and direction-dependent weighting factors is proposed. Our method can effectively suppress undesirable artifacts. Experimental results on Kodak images show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.
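
    An edge-adaptive, weighted-direction interpolation of the kind described can be sketched for the green channel as follows. This is a generic illustration, not the authors' exact weighting scheme; the function name and weight formula are hypothetical:

```python
import numpy as np

def green_at(cfa, r, c):
    """Edge-adaptive estimate of the missing green value at a
    red/blue CFA site: weight the horizontal and vertical neighbor
    averages inversely to the local gradient in each direction, so
    interpolation follows edges instead of crossing them."""
    gh = 0.5 * (cfa[r, c - 1] + cfa[r, c + 1])   # horizontal average
    gv = 0.5 * (cfa[r - 1, c] + cfa[r + 1, c])   # vertical average
    dh = abs(cfa[r, c - 1] - cfa[r, c + 1])      # horizontal gradient
    dv = abs(cfa[r - 1, c] - cfa[r + 1, c])      # vertical gradient
    wh, wv = 1.0 / (1.0 + dh), 1.0 / (1.0 + dv)  # direction weights
    return (wh * gh + wv * gv) / (wh + wv)
```

Near a vertical edge the vertical gradient is large, so the estimate leans on the horizontal neighbors, which is what suppresses the zipper artifacts mentioned above.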

  5. Enhancing multi-spot structured illumination microscopy with fluorescence difference

    PubMed Central

    Torkelsen, Frida H.

    2018-01-01

    Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested. PMID:29657751

  6. Automated Sargassum Detection for Landsat Imagery

    NASA Astrophysics Data System (ADS)

    McCarthy, S.; Gallegos, S. C.; Armstrong, D.

    2016-02-01

    We implemented a system to automatically detect Sargassum, a floating seaweed, in 30-meter LANDSAT-8 Operational Land Imager (OLI) imagery. Our algorithm for Sargassum detection is an extended form of Hu's approach to deriving a floating algae index (FAI) [1]. Hu's algorithm was developed for Moderate Resolution Imaging Spectroradiometer (MODIS) data, but we extended it for use with the OLI bands centered at 655, 865, and 1609 nm, which are comparable to the MODIS bands located at 645, 859, and 1640 nm. We also developed a high-resolution true color product to mask cloud pixels in the OLI scene by applying a threshold to top-of-the-atmosphere (TOA) radiances in the red (655 nm), green (561 nm), and blue (443 nm) wavelengths, as well as a method for removing false positive identifications of Sargassum in the imagery. Hu's algorithm derives an FAI value for each pixel; our algorithm is currently set to flag the presence of Sargassum in an OLI pixel by classifying any pixel with FAI > 0.0 as Sargassum. Additionally, our system geo-locates the flagged Sargassum pixels identified in the OLI imagery into the U.S. Navy Global HYCOM model grid. One element of the model grid covers an area of 0.125 degrees of latitude by 0.125 degrees of longitude. To resolve the differences in spatial coverage between Landsat and HYCOM, a scheme was developed to calculate the percentage of flagged pixels within a grid element; if this percentage is above a threshold, the grid element is flagged as Sargassum. This work is part of a larger system, sponsored by the NASA Applied Science and Technology Project at J.C. Stennis Space Center, to forecast when and where Sargassum will land on shore. The focus area of this work is currently the Texas coast. Plans call for extending our efforts into the Caribbean. References: [1] Hu, Chuanmin. A novel ocean color index to detect floating algae in the global oceans. Remote Sensing of Environment 113 (2009) 2118-2129.
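
    The FAI computation and the FAI > 0.0 decision rule described above can be sketched as follows, using the linear-baseline form of Hu's index with the OLI band centers quoted in the abstract. The reflectance inputs and function names are illustrative, not the project's actual code:

```python
def fai(r_red, r_nir, r_swir,
        lam_red=655.0, lam_nir=865.0, lam_swir=1609.0):
    """Floating algae index (after Hu, 2009) for the Landsat-8 OLI
    bands used above: NIR reflectance minus a baseline linearly
    interpolated between the red and SWIR bands."""
    frac = (lam_nir - lam_red) / (lam_swir - lam_red)
    baseline = r_red + (r_swir - r_red) * frac
    return r_nir - baseline

def is_sargassum(r_red, r_nir, r_swir):
    """Flag a pixel as Sargassum when FAI > 0.0, per the rule above."""
    return fai(r_red, r_nir, r_swir) > 0.0
```

A floating-vegetation pixel has a strong NIR peak above the red-SWIR baseline and is flagged; open water sits below the baseline and is not.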

  7. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    NASA Astrophysics Data System (ADS)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for post-event imagery of earthquake damage. Our results show an improvement against both pixel-based and object-based methods for classifying earthquake damage in high resolution, post-event imagery.

  8. Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing

    NASA Astrophysics Data System (ADS)

    Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.

    2018-01-01

    Pulsed-neutron imaging is an attractive technique in the field of energy-resolved neutron radiography; RANS (RIKEN) and RADEN (J-PARC/JAEA) are small and large accelerator-driven pulsed-neutron facilities for such imaging, respectively. To overcome the insufficient spatial resolution of counting-type imaging detectors such as the μNID, nGEM, and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, a center-of-gravity technique was applied to spot images obtained by a CCD camera, and the technique was confirmed to be effective for improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used, a super-resolution technique was applied, and the spatial resolution was further improved.

  9. Improved Wallis Dodging Algorithm for Large-Scale Super-Resolution Reconstruction Remote Sensing Images.

    PubMed

    Fan, Chong; Chen, Xushuai; Zhong, Lei; Zhou, Min; Shi, Yun; Duan, Yulin

    2017-03-18

    A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, sub-block SR images can hardly achieve seamless image mosaicking because of the uneven distribution of brightness and contrast among the sub-blocks. An improved weighted Wallis dodging algorithm is proposed, tailored to the characteristics of SR-reconstructed images: gray-scale images of equal size with overlapping regions. This algorithm achieves consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam-line elimination method distributes the partial dislocation along the seam line across the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove uneven illumination from 900 SR-reconstructed images of ZY-3. The overlapping image mosaic method is then adopted to accomplish a seamless image mosaic based on the optimal seam line.
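
    For orientation, the classic (unweighted) Wallis transform underlying the dodging algorithm can be sketched as follows; the paper's weighted, sequenced variant is more elaborate, so this shows only the core gray-level mapping:

```python
import numpy as np

def wallis(img, target_mean, target_std, b=1.0, c=1.0):
    """Classic Wallis transform: remap a gray sub-block so its mean
    and standard deviation approach the target values. b and c in
    [0, 1] control how strongly brightness and contrast are pulled
    toward the targets (b = c = 1 forces an exact match)."""
    m, s = img.mean(), img.std()
    gain = c * target_std / (c * s + (1.0 - c) * target_std)
    return (img - m) * gain + b * target_mean + (1.0 - b) * m
```

Applying the transform with shared targets across all sub-blocks is what evens out the brightness and contrast before mosaicking.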

  10. Super-resolution structured illumination in optically thick specimens without fluorescent tagging

    NASA Astrophysics Data System (ADS)

    Hoffman, Zachary R.; DiMarzio, Charles A.

    2017-11-01

    This research extends the work of Hoffman et al. to provide both sectioning and super-resolution using random patterns within thick specimens. Two methods of processing structured illumination in reflectance have been developed without the need for a priori knowledge of either the optical system or the modulation patterns. We explore the use of two deconvolution algorithms that assume either Gaussian or sparse priors. This paper will show that while both methods accomplish their intended objective, the sparse priors method provides superior resolution and contrast against all tested targets, providing anywhere from ~1.6× to ~2× resolution enhancement. The methods developed here can reasonably be implemented to work without a priori knowledge about the patterns or point spread function. Further, all experiments are run using an incoherent light source, unknown random modulation patterns, and without the use of fluorescent tagging. These additional modifications are challenging, but the generalization of these methods makes them prime candidates for clinical application, providing super-resolved noninvasive sectioning in vivo.
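
    Deconvolution under a Gaussian prior, one of the two priors compared above, reduces to a Wiener-style frequency-domain filter. Below is a minimal sketch assuming a known, periodic PSF, which the paper explicitly does not require; it illustrates the prior, not the authors' blind method:

```python
import numpy as np

def wiener_deconvolve(img, psf, nsr=0.01):
    """Frequency-domain deconvolution with a quadratic (Gaussian-prior)
    regularizer: divide by the optical transfer function, damped by a
    noise-to-signal ratio `nsr`. `psf` must have the same shape as
    `img` and be centered at index [0, 0] (wrapped)."""
    otf = np.fft.fft2(psf)
    filt = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * filt))
```

A sparse prior replaces the quadratic penalty with an L1-type term, which has no closed-form filter and is solved iteratively, at the cost the abstract trades for the extra resolution.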

  11. Location and Geologic Setting for the Three U.S. Mars Landers

    NASA Technical Reports Server (NTRS)

    Parker, T. J.; Kirk, R. L.

    1999-01-01

    Super resolution of the horizon at both Viking landing sites has revealed "new" features we use for triangulation, similar to the approach used during the Mars Pathfinder mission. We propose alternative landing site locations for both landers, for which we believe the confidence is very high. Super resolution of VL-1 images also reveals some of the drift material at the site to consist of gravel-size deposits. Since our proposed location for VL-2 is NOT on the Mie ejecta blanket, the blocky surface around the lander may represent the meter-scale texture of "smooth plains" in the region. The Viking Lander panchromatic images typically offer more repeat coverage than does the IMP on Mars Pathfinder, due to the longer duration of these landed missions. Sub-pixel offsets, necessary for super resolution to work, appear to be attributable to thermal effects on the lander and settling of the lander over time. Due to the greater repeat coverage (particularly in the near and mid-fields) and all-panchromatic images, the gain in resolution by super resolution processing is better for Viking than it is with most IMP image sequences. This enhances the study of textural details near the lander and enables the identification of rock and surface textures at greater distances from the lander. Discernment of stereo in super resolution images is possible to great distances from the lander, but is limited by the non-rotating baseline between the two cameras and the shorter height of the cameras above the ground compared to IMP. With super resolution, details of horizon features, such as blockiness and crater rim shapes, may be better correlated with Orbiter images. A number of horizon features (craters and ridges) were identified at VL-1 during the mission, and a few hills and subtle ridges were identified at VL-2. We have added a few "new" horizon features for triangulation at the VL-2 landing site in Utopia Planitia. These features were used for independent triangulation with features visible in Viking Orbiter and MGS MOC images, though the actual location of VL-1 lies in a data dropout in the MOC image of the area. Additional information is contained in the original extended abstract.

  12. High Resolution Image Reconstruction from Projection of Low Resolution Images Differing in Subpixel Shifts

    NASA Technical Reports Server (NTRS)

    Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome

    2016-01-01

    In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is effective in both accuracy and time efficiency. A number of spatial interpolation techniques used in the projection - nearest neighbor, inverse-distance weighted averages, Radial Basis Functions (RBF), etc. - yield comparable results. For best accuracy, reconstructing an SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with a reference image and projecting them onto the high resolution grid, based on the shift of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a Posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
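
    As a minimal sketch of step (ii) under simplifying assumptions (shifts already known from registration, nearest-neighbor placement rather than the fancier interpolators the abstract mentions; function names are ours, not the authors'), the shift-and-project idea looks like:

```python
import numpy as np

def shift_and_project_sr(lr_images, shifts, factor=2):
    """Project sub-pixel-shifted LR images onto an HR grid using
    nearest-neighbor placement (one simple interpolation choice)."""
    h, w = lr_images[0].shape
    hr = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hr)
    for img, (dy, dx) in zip(lr_images, shifts):
        # each LR pixel lands at its shifted position on the HR grid
        ys = (np.arange(h) * factor + round(dy * factor)) % (h * factor)
        xs = (np.arange(w) * factor + round(dx * factor)) % (w * factor)
        hr[np.ix_(ys, xs)] += img
        count[np.ix_(ys, xs)] += 1
    return hr / np.maximum(count, 1)
```

    With four LR images at the four independent half-pixel shifts, every HR grid cell receives exactly one sample, which is why that configuration gives the best 2x reconstruction.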

  13. Meteosat SEVIRI Fire Radiative Power (FRP) products from the Land Surface Analysis Satellite Applications Facility (LSA SAF) - Part 1: Algorithms, product contents and analysis

    NASA Astrophysics Data System (ADS)

    Wooster, M. J.; Roberts, G.; Freeborn, P. H.; Xu, W.; Govaerts, Y.; Beeby, R.; He, J.; Lattanzio, A.; Mullen, R.

    2015-06-01

    Characterising changes in landscape scale fire activity at very high temporal resolution is best achieved using thermal observations of actively burning fires made from geostationary Earth observation (EO) satellites. Over the last decade or more, a series of research and/or operational "active fire" products have been developed from these types of geostationary observations, often with the aim of supporting the generation of data related to biomass burning fuel consumption and trace gas and aerosol emission fields. The Fire Radiative Power (FRP) products generated by the Land Surface Analysis Satellite Applications Facility (LSA SAF) from data collected by the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI) are one such set of products, and are freely available in both near real-time and archived form. Every 15 min, the algorithms used to generate these products identify and map the location of new SEVIRI observations containing actively burning fires, and characterise their individual rates of radiative energy release (fire radiative power; FRP), which is believed to be proportional to rates of biomass consumption and smoke emission. The FRP-PIXEL product contains the highest spatial resolution FRP dataset, delivered for all of Europe, northern and southern Africa, and part of South America at a spatial resolution of 3 km (decreasing away from the West African sub-satellite point) at the full 15 min temporal resolution. The FRP-GRID product is an hourly summary of the FRP-PIXEL data, produced at a 5° grid cell size and including simple bias adjustments for meteorological cloud cover and for the regional underestimation of FRP caused, primarily, by the non-detection of low FRP fire pixels at SEVIRI's relatively coarse pixel size. 
    Here we describe the enhanced geostationary Fire Thermal Anomaly (FTA) algorithm used to detect the SEVIRI active fire pixels, and detail methods used to deliver atmospherically corrected FRP information together with the per-pixel uncertainty metrics. Using scene simulations and analysis of real SEVIRI data, including from a period of Meteosat-8 "special operations", we describe some of the sensor and data pre-processing characteristics influencing fire detection and FRP uncertainty. We show that the FTA algorithm is able to discriminate actively burning fires covering as little as 10^-4 of a pixel, and is more sensitive to fire than algorithms used within many other widely exploited active fire products. We also find that artefacts arising from the digital filtering and geometric resampling strategies used to generate level 1.5 SEVIRI data can significantly increase FRP uncertainties in the SEVIRI active fire products, and recommend that the processing chains used for the forthcoming Meteosat Third Generation attempt to minimise the impact of these types of operations. Finally, we illustrate the information contained within the current Meteosat FRP-PIXEL and FRP-GRID products, providing example analyses for both individual fires and multi-year regional-scale fire activity. A companion paper (Roberts et al., 2015) provides a full product performance evaluation for both products, along with examples of their use for prescribing fire smoke emissions within atmospheric modelling components of the Copernicus Atmosphere Monitoring Service (CAMS).
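
    Per-pixel FRP retrieval from geostationary MIR observations is commonly done with the MIR radiance method; a minimal sketch follows, with the caveat that the coefficient `A_MIR` below is illustrative of the form of the method, not the actual LSA SAF SEVIRI value:

```python
# Sketch of the MIR radiance method often used for FRP retrieval.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
A_MIR = 3.0e-9          # sensor-specific MIR coefficient (assumed value)

def frp_mw(pixel_area_m2, l_fire, l_background):
    """FRP in MW from fire-pixel and ambient-background MIR spectral
    radiance (W m^-2 sr^-1 um^-1): the radiance excess over the
    background scales linearly with radiative power."""
    return pixel_area_m2 * SIGMA / A_MIR * (l_fire - l_background) / 1e6
```

    The background radiance subtraction is why the non-detection of low-FRP fire pixels, mentioned in the abstract, biases regional FRP totals low: sub-threshold radiance excesses contribute nothing.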

  14. 4K x 2K pixel color video pickup system

    NASA Astrophysics Data System (ADS)

    Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou

    1998-12-01

    This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a sufficient output data rate for super-high-definition images. The present study is an attempt to fill this gap. The authors solve the problem with a new imaging method in which four HDTV sensors are attached to a new color-separation optics so that their pixel sampling patterns form a checkerboard. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no single image sensor exists for such images.

  15. Super-resolution method for face recognition using nonlinear mappings on coherent features.

    PubMed

    Huang, Hua; He, Huiting

    2011-01-01

    The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates with nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. A nonlinear mapping between HR/LR features can then be built by radial basis functions (RBFs), with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately from the trained RBF model, and face identity can be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for a single LR image in terms of both recognition rate and robustness to facial variations in pose and expression.
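
    A minimal numpy-only stand-in for the RBF regression step might look as follows. It maps LR feature vectors to HR feature vectors by kernel regression; the real method first projects PCA features into the CCA-coherent subspaces, which this sketch omits, and the function name and regularizer are ours:

```python
import numpy as np

def rbf_map(train_lr, train_hr, query_lr, gamma=1.0):
    """Fit an RBF (Gaussian-kernel) regression from LR-feature space
    to HR-feature space and apply it to new LR features."""
    # kernel between queries and training LR features
    d2 = ((query_lr[:, None, :] - train_lr[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    # kernel among training LR features, lightly regularized
    d2t = ((train_lr[:, None, :] - train_lr[None, :, :]) ** 2).sum(-1)
    Kt = np.exp(-gamma * d2t) + 1e-6 * np.eye(len(train_lr))
    W = np.linalg.solve(Kt, train_hr)  # kernel regression weights
    return K @ W                       # super-resolved features
```

    The abstract's point is that this regression has lower error when `train_lr`/`train_hr` live in the CCA-coherent subspaces rather than raw PCA space.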

  16. Characterization of Urban Landscape Using Super-Resolution UAS Data, Multiple Textural Scales and Data-Mining Techniques

    NASA Astrophysics Data System (ADS)

    Voss, M.; Blundell, B.

    2015-12-01

    Characterization of urban environments is a high priority for the U.S. Army as battlespaces have transitioned from the predominantly open spaces of the 20th century to urban areas where soldiers have reduced situational awareness due to the diversity and density of their surroundings. Creating high-resolution urban terrain geospatial information will improve mission planning and soldier effectiveness. In this effort, super-resolution true-color imagery was collected with an Altivan NOVA unmanned aerial system over the Muscatatuck Urban Training Center near Butlerville, Indiana on September 16, 2014. Multispectral texture analysis using different algorithms was conducted for urban surface characterization at a variety of scales. Training samples were extracted from the true-color and texture images. These data were processed using a variety of meta-algorithms with a decision tree classifier to create a high-resolution urban features map. In addition to improving accuracy over traditional image classification methods, this technique allowed the determination of the most significant textural scales for creating urban terrain maps for tactical exploitation.

  17. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimation, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore data acquisition in DECT. 
We show that SPIR-based DECT using one full scan and a second 10-view scan can provide DECT images and electron density maps as high-quality and accurate as conventional two-full-scan DECT.
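
    The similarity-weighted estimate that the L2 penalty compares each pixel against can be sketched as follows; this toy version is dense and O(N^2) over all pixel pairs, which a practical implementation would restrict to a neighborhood, and `sigma` is an assumed parameter name:

```python
import numpy as np

def similarity_estimate(first_scan, second_scan, sigma=0.1):
    """For each pixel of the second scan, the similarity-weighted
    average of the other pixels, with exponential similarity weights
    taken from the first (full) scan as in the abstract."""
    f = np.asarray(first_scan, dtype=float).ravel()
    s = np.asarray(second_scan, dtype=float).ravel()
    w = np.exp(-((f[:, None] - f[None, :]) ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)  # exclude the pixel itself
    return (w @ s) / w.sum(axis=1)
```

    SPIR then minimizes the squared difference between `second_scan` and this estimate, subject to data fidelity, so pixels of the same material (as judged from the full scan) are pulled toward a common value even though the two scans' pixel values differ.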

  18. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded from a given pixel by a low resolution hyperspectral remote sensor, even setting aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is a frontier topic in the remote sensing field. Geometry-based unmixing algorithms have become popular since the hyperspectral image possesses abundant spectral information and the mixing model is easy to understand. However, most of these algorithms rely on the pure-pixel assumption, and since the nonlinear mixing model is complex, it is hard to obtain the optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions together, and the final unmixing result is closer to reality and has better accuracy. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data, and the results indicate that the proposed method can obtain the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
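
    For context, the linear mixing model underlying such geometry-based methods, with a crude projection onto the abundance constraints (this is a baseline illustration, not the authors' MVSAC optimization; the function name is ours), can be sketched as:

```python
import numpy as np

def unmix_pixel(endmembers, pixel):
    """Least-squares abundances for the linear mixing model
    pixel ~ endmembers @ a (endmember spectra as columns),
    crudely projected onto the abundance constraints."""
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    a = np.clip(a, 0.0, None)  # abundance nonnegativity
    return a / a.sum()         # abundance sum-to-one
```

    Minimum-volume methods like MVSAC estimate the `endmembers` matrix itself, by shrinking the simplex spanned by its columns around the data cloud, rather than assuming pure pixels supply the columns directly.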

  19. Impact of defective pixels in AMLCDs on the perception of medical images

    NASA Astrophysics Data System (ADS)

    Kimpe, Tom; Sneyders, Yuri

    2006-03-01

    With LCD displays, each pixel has its own individual transistor that controls the transmittance of that pixel. Occasionally, these individual transistors short or otherwise malfunction, resulting in a defective pixel that always shows the same brightness. With the ever-increasing resolution of displays, the number of defective pixels per display increases accordingly. State-of-the-art processes are capable of producing displays with no more than one faulty transistor out of 3 million. A five-megapixel medical LCD panel contains 15 million individual sub-pixels (3 sub-pixels per pixel), each having an individual transistor. This means that a five-megapixel display on average will have 5 failing sub-pixels. This paper investigates the visibility of defective pixels and analyzes their possible impact on the perception of medical images. JND simulations were done to study the effect of defective pixels on medical images. Our results indicate that defective LCD pixels can mask subtle features in medical images in an unexpectedly broad area around the defect and therefore may reduce the quality of diagnosis for specific high-demanding areas such as mammography. As a second contribution, an innovative solution is proposed: a specialized image processing algorithm can make defective pixels completely invisible and, moreover, recover the information at the defect so that the radiologist perceives the medical image correctly. This correction algorithm has been validated with both JND simulations and psychovisual tests.

  20. A New Object-Based Framework to Detect Shadows in High-Resolution Satellite Imagery Over Urban Areas

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    In this paper a new object-based framework to detect shadow areas in high resolution satellite images is proposed. To produce a shadow map at the pixel level, state-of-the-art supervised machine learning algorithms are employed. Automatic ground truth generation, based on Otsu thresholding of shadow and non-shadow indices, is used to train the classifiers. This is followed by segmenting the image scene to create image objects. To detect shadow objects, a majority vote on the pixel-based shadow detection result is designed. A GeoEye-1 multispectral image over an urban area in the Qom city of Iran is used in the experiments. Results show the superiority of our proposed method over traditional pixel-based methods, both visually and quantitatively.
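
    The object-level majority-voting step can be sketched in a few lines (our illustrative version; the segmentation and pixel classifier that produce its inputs are assumed to exist upstream):

```python
import numpy as np

def object_shadow_vote(pixel_shadow, segments):
    """Assign each segment the majority shadow/non-shadow label of
    its pixels: pixel_shadow is a binary map, segments an integer
    label image from the segmentation step."""
    out = np.zeros_like(pixel_shadow, dtype=bool)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        # a segment is shadow if more than half its pixels are
        out[mask] = pixel_shadow[mask].mean() > 0.5
    return out
```

    Voting per object suppresses isolated pixel-classifier errors, which is the main advantage the abstract claims over purely pixel-based detection.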

  1. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    NASA Astrophysics Data System (ADS)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.
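
    The coherence constraint the design iterates against is typically the mutual coherence of the sensing matrix; a minimal sketch of that quantity (our helper, not the authors' full gradient-descent design loop):

```python
import numpy as np

def mutual_coherence(A):
    """Maximum absolute normalized inner product between distinct
    columns of a sensing matrix: the quantity a coded-aperture
    design drives down to improve compressed-sensing recovery."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)      # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)   # ignore self-correlations
    return G.max()
```

    In the paper's setting, the matrix is induced by the coded aperture and the low-resolution detector geometry, and the descent steps modify the aperture pattern subject to this coherence (and a homogeneity) constraint.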

  2. Improving multiphoton STED nanoscopy with separation of photons by LIfetime Tuning (SPLIT)

    NASA Astrophysics Data System (ADS)

    Coto Hernández, Iván.; Lanzano, Luca; Castello, Marco; Jowett, Nate; Tortarolo, Giorgio; Diaspro, Alberto; Vicidomini, Giuseppe

    2018-02-01

    Stimulated emission depletion (STED) microscopy is a powerful bio-imaging technique since it provides molecular-scale spatial resolution whilst preserving the most important assets of fluorescence microscopy. When combined with two-photon excitation (2PE) microscopy (2PE-STED), the sub-diffraction imaging ability of STED microscopy can be achieved also on thick biological samples. The most straightforward implementation of 2PE-STED microscopy is obtained by introducing a STED beam operating in continuous wave (CW) into a conventional Ti:Sapphire based 2PE microscope (2PE-CW-STED). In this implementation, an effective resolution enhancement is mainly obtained by implementing a time-gated detection scheme, which however can drastically reduce the signal-to-noise/background ratio of the final image. Herein, we combine the separation of photons by lifetime tuning (SPLIT) approach with 2PE-CW-STED to overcome this limitation. The SPLIT approach is employed to discard fluorescence photons lacking super-resolution information, by means of a pixel-by-pixel phasor approach. Combining the SPLIT approach with image deconvolution further optimizes the signal-to-noise/background ratio.
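
    The pixel-by-pixel phasor transform that SPLIT relies on maps each pixel's photon-arrival histogram to first-harmonic coordinates; a minimal sketch (function and parameter names are ours) is:

```python
import numpy as np

def phasor(decay, dt):
    """First-harmonic phasor coordinates (g, s) of a photon-arrival
    histogram sampled at interval dt; SPLIT separates pixels by
    their position in this (g, s) plane."""
    t = np.arange(len(decay)) * dt
    T = len(decay) * dt
    w = 2 * np.pi / T            # first-harmonic angular frequency
    total = decay.sum()
    g = (decay * np.cos(w * t)).sum() / total
    s = (decay * np.sin(w * t)).sum() / total
    return g, s
```

    Photons emitted from the doughnut periphery (long effective lifetime under CW-STED) and from the doughnut center separate along this plane, letting the periphery contribution, which lacks super-resolution information, be discarded.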

  3. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees

    PubMed Central

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-01-01

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory access points directly affects the time consumption of the labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods. PMID:26393597
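
    For reference, the classic two-pass, 4-connectivity union-find labeling that block-based methods accelerate can be written as follows (our plain-Python baseline, not the paper's block-mask, decision-tree algorithm):

```python
import numpy as np

def label_4conn(binary):
    """Two-pass connected-component labeling with union-find.
    Pass 1 assigns provisional labels and records equivalences;
    pass 2 flattens them to final labels."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    nxt = 1
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            if up and left:
                labels[y, x] = find(up)
                parent[find(left)] = find(up)  # union the two runs
            elif up or left:
                labels[y, x] = find(up or left)
            else:
                parent.append(nxt)             # new provisional label
                labels[y, x] = nxt
                nxt += 1
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```

    The paper's block-based scan mask cuts the per-pixel `up`/`left` reads, which dominate the cost here, by deciding labels for 2x2 blocks at a time via a binary decision tree.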

  4. Super-resolution fusion of complementary panoramic images based on cross-selection kernel regression interpolation.

    PubMed

    Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu

    2014-03-20

    A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. To extend this research, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementarity of sampling resolution in the tangential and radial directions between the inner and outer images, the horizontal gradients in the expected panoramic image are estimated from the scattered neighboring pixels mapped from the outer image, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels used in the next interpolation step of kernel regression are also selected based on the comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.

  5. Theory of compressive modeling and simulation

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith

    2013-05-01

    Modeling and Simulation (M&S) has been evolving along two general directions: (i) a data-rich approach suffering the curse of dimensionality and (ii) an equation-rich approach suffering from computing power and turnaround time. We suggest a third approach, which we call (iii) compressive M&S (CM&S), because the basic Minimum Free-Helmholtz Energy (MFE) underlying CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural Network (LCNN) algorithm. MFE-based CM&S can generalize LCNN to second order as a nonlinear augmented LCNN. For example, during sunset we can avoid the reddish bias of sunlight illumination due to long-range Rayleigh scattering over the horizon: with CM&S we can use a night-vision camera instead of a day camera. We decomposed the long-wave infrared (LWIR) band with a filter into two vector components (8-10 μm and 10-12 μm) and used LCNN to find, pixel by pixel, the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then we up-shifted consistently, according to the de-mixed source map, to a sub-micron RGB color image. Moreover, night-vision imaging can also be down-shifted to Passive Millimeter Wave (PMMW) imaging, which suffers less blur from scattering by dusty smoke and enjoys the apparent smoothness of the surface reflectivity of man-made objects below the Rayleigh resolution. One loses three orders of magnitude in spatial Rayleigh resolution, but gains two orders of magnitude in reflectivity and another two orders in propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, CM&S can reduce unnecessary measurements and their associated cost and computation in the sense of super-saving CS: measuring one and getting its neighborhood free.

  6. Subpixelic Measurement of Large 1D Displacements: Principle, Processing Algorithms, Performances and Software

    PubMed Central

    Guelpa, Valérian; Laurent, Guillaume J.; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-01-01

    This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations—leading to high resolution—while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 μs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated which has to be compared with the 168 μm measurement range. PMID:24625736
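
    The two ingredients, Fourier-phase displacement within one grid period and twin-period disambiguation, can be sketched under simplifying assumptions (1-D cosine fringes, periods in pixels; function names are ours, and the published source files implement the real version):

```python
import numpy as np

def phase_displacement(signal, period):
    """Displacement (modulo one period) of a periodic fringe signal,
    from the phase of its Fourier component at the grid frequency."""
    x = np.arange(len(signal))
    c = np.sum(signal * np.exp(-2j * np.pi * x / period))
    return (-np.angle(c)) * period / (2 * np.pi)

def unwrap_twin(d1, p1, d2, p2, max_range):
    """Resolve the phase ambiguity: choose the integer number of
    periods of grid 1 that best agrees with grid 2 (vernier idea
    behind the twin slightly-different periods)."""
    k = np.arange(int(max_range / p1) + 1)
    candidates = d1 + k * p1
    err = np.abs((candidates - d2 + p2 / 2) % p2 - p2 / 2)
    return candidates[np.argmin(err)]
```

    The phase measurement gives the high resolution; the twin periods give the large unambiguous range, hence the high range-to-resolution ratio the abstract reports.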

  7. Color image guided depth image super resolution using fusion filter

    NASA Astrophysics Data System (ADS)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras are currently playing an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images. Color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm which uses an HR color image as a guide and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality HR depth images both numerically and visually.
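
    The joint bilateral half of such a fusion can be sketched as follows, assuming a naively upsampled depth map and a single-channel guide for brevity (the paper's guide is a color image and its filter is edge-based; this is only the generic form):

```python
import numpy as np

def joint_bilateral_depth(depth_up, guide, radius=2, s_sigma=2.0, r_sigma=0.1):
    """Joint bilateral filter: smooth the upsampled depth while
    weighting neighbors by their color similarity in the HR guide,
    so depth edges snap to guide edges."""
    h, w = guide.shape
    out = np.zeros_like(depth_up, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            gy, gx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((gy - y) ** 2 + (gx - x) ** 2)
                             / (2 * s_sigma ** 2))
            rng = np.exp(-(guide[y0:y1, x0:x1] - guide[y, x]) ** 2
                         / (2 * r_sigma ** 2))
            wgt = spatial * rng
            out[y, x] = (wgt * depth_up[y0:y1, x0:x1]).sum() / wgt.sum()
    return out
```

    Fusing this with a guided-filter output, as the abstract describes, trades the joint bilateral filter's edge fidelity against the guided filter's resistance to texture-copying.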

  8. Sub-pixel flood inundation mapping from multispectral remotely sensed images based on discrete particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Li, Linyi; Chen, Yun; Yu, Xin; Liu, Rui; Huang, Chang

    2015-03-01

    The study of flood inundation is significant to human life and social economy. Remote sensing technology has provided an effective way to study the spatial and temporal characteristics of inundation. Remotely sensed images with high temporal resolutions are widely used in mapping inundation. However, mixed pixels do exist due to their relatively low spatial resolutions. One of the most popular approaches to resolve this issue is sub-pixel mapping. In this paper, a novel discrete particle swarm optimization (DPSO) based sub-pixel flood inundation mapping (DPSO-SFIM) method is proposed to achieve an improved accuracy in mapping inundation at a sub-pixel scale. The evaluation criterion for sub-pixel inundation mapping is formulated. The DPSO-SFIM algorithm is developed, including particle discrete encoding, fitness function designing and swarm search strategy. The accuracy of DPSO-SFIM in mapping inundation at a sub-pixel scale was evaluated using Landsat ETM + images from study areas in Australia and China. The results show that DPSO-SFIM consistently outperformed the four traditional SFIM methods in these study areas. A sensitivity analysis of DPSO-SFIM was also carried out to evaluate its performances. It is hoped that the results of this study will enhance the application of medium-low spatial resolution images in inundation detection and mapping, and thereby support the ecological and environmental studies of river basins.

  9. Automated Detection of Synapses in Serial Section Transmission Electron Microscopy Image Stacks

    PubMed Central

    Kreshuk, Anna; Koethe, Ullrich; Pax, Elizabeth; Bock, Davi D.; Hamprecht, Fred A.

    2014-01-01

    We describe a method for fully automated detection of chemical synapses in serial electron microscopy images with highly anisotropic axial and lateral resolution, such as images taken on transmission electron microscopes. Our pipeline starts from classification of the pixels based on 3D pixel features, which is followed by segmentation with an Ising model MRF and another classification step, based on object-level features. Classifiers are learned on sparse user labels; a fully annotated data subvolume is not required for training. The algorithm was validated on a set of 238 synapses in 20 serial 7197×7351 pixel images (4.5×4.5×45 nm resolution) of mouse visual cortex, manually labeled by three independent human annotators and additionally re-verified by an expert neuroscientist. The error rate of the algorithm (12% false negative, 7% false positive detections) is better than state-of-the-art, even though, unlike the state-of-the-art method, our algorithm does not require a prior segmentation of the image volume into cells. The software is based on the ilastik learning and segmentation toolkit and the vigra image processing library and is freely available on our website, along with the test data and gold standard annotations (http://www.ilastik.org/synapse-detection/sstem). PMID:24516550

  10. Developing a CCD camera with high spatial resolution for RIXS in the soft X-ray range

    NASA Astrophysics Data System (ADS)

    Soman, M. R.; Hall, D. J.; Tutt, J. H.; Murray, N. J.; Holland, A. D.; Schmitt, T.; Raabe, J.; Schmitt, B.

    2013-12-01

    The Super Advanced X-ray Emission Spectrometer (SAXES) at the Swiss Light Source contains a high resolution Charge-Coupled Device (CCD) camera used for Resonant Inelastic X-ray Scattering (RIXS). Using the current CCD-based camera system, the energy-dispersive spectrometer has an energy resolution (E/ΔE) of approximately 12,000 at 930 eV. A recent study predicted that through an upgrade to the grating and camera system, the energy resolution could be improved by a factor of 2. In order to achieve this goal in the spectral domain, the spatial resolution of the CCD must be improved to better than 5 μm from the current 24 μm spatial resolution (FWHM). The 400 eV-1600 eV X-rays detected by this spectrometer primarily interact within the field-free region of the CCD, producing electron clouds which diffuse isotropically until they reach the depleted region and buried channel. This diffusion of the charge leads to events which are split across several pixels. Through analysis of the charge distribution across the pixels, various centroiding techniques can be used to pinpoint the spatial location of the X-ray interaction to the sub-pixel level, greatly improving the spatial resolution achieved. Using the PolLux soft X-ray microspectroscopy endstation at the Swiss Light Source, a beam of X-rays of energies from 200 eV to 1400 eV can be focused down to a spot size of approximately 20 nm. Scanning this spot across the 16 μm square pixels allows the sub-pixel response to be investigated. Previous work has demonstrated the potential improvement in spatial resolution achievable by centroiding events in a standard CCD. An Electron-Multiplying CCD (EM-CCD) has been used to improve the signal to effective readout noise ratio, resulting in worst-case spatial resolution measurements of 4.5±0.2 μm and 3.9±0.1 μm at 530 eV and 680 eV, respectively. 
A method is described that allows the contribution of the X-ray spot size to be deconvolved from these worst-case resolution measurements, estimating the spatial resolution to be approximately 3.5 μm and 3.0 μm at 530 eV and 680 eV, well below the resolution limit of 5 μm required to improve the spectral resolution by a factor of 2.
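
    The simplest of the centroiding techniques mentioned above is the centre-of-gravity of the charge collected in a small pixel neighborhood; a minimal sketch (our function, with the 16 μm pitch from the abstract as a default) is:

```python
import numpy as np

def centroid_event(pixels, origin=(0.0, 0.0), pitch=16.0):
    """Sub-pixel X-ray interaction position (in the same units as
    pitch, e.g. um) from the charge of a split event, via the
    centre-of-gravity of the pixel signals."""
    charge = np.asarray(pixels, dtype=float)
    total = charge.sum()
    ys, xs = np.mgrid[0:charge.shape[0], 0:charge.shape[1]]
    y = origin[0] + pitch * (ys * charge).sum() / total
    x = origin[1] + pitch * (xs * charge).sum() / total
    return y, x
```

    Readout noise perturbs the measured charges and hence the centroid, which is why the EM-CCD's improved signal to effective readout noise ratio translates directly into better sub-pixel spatial resolution.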

  11. Two-photon speckle illumination for super-resolution microscopy.

    PubMed

    Negash, Awoke; Labouesse, Simon; Chaumet, Patrick C; Belkebir, Kamal; Giovannini, Hugues; Allain, Marc; Idier, Jérôme; Sentenac, Anne

    2018-06-01

    We present a numerical study of a microscopy setup in which the sample is illuminated with uncontrolled speckle patterns and the two-photon excitation fluorescence is collected on a camera. We show that, using a simple deconvolution algorithm for processing the speckle low-resolution images, this wide-field imaging technique exhibits resolution significantly better than that of two-photon excitation scanning microscopy or one-photon excitation bright-field microscopy.
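    The abstract specifies only "a simple deconvolution algorithm". As a generic illustration of how a known PSF can be inverted on a wide-field image, here is a standard FFT-based Wiener deconvolution; this is a sketch of the general technique, not the authors' method, and the Gaussian PSF and noise-to-signal value are assumptions:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Normalized Gaussian PSF centred in an array of the given shape."""
    ys, xs = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(image, psf, nsr=1e-3):
    """Wiener deconvolution in the Fourier domain: conj(H)/(|H|^2 + nsr)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + nsr) * G))

# Blur a point source with the PSF, then restore it.
shape = (32, 32)
psf = gaussian_psf(shape, sigma=2.0)
point = np.zeros(shape)
point[16, 16] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(point) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

The `nsr` term regularizes frequencies where the PSF transfer function is small; without it, division by near-zero values of H would amplify noise.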

  12. Application of dot-matrix illumination of liquid crystal phase spatial light modulator in 3D imaging of APD array

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Sun, Huayan; Guo, Huichao

    2018-01-01

    To address the problem of beam scanning for low-resolution APD arrays in three-dimensional imaging, a method of beam scanning with a liquid crystal phase spatial light modulator is proposed to realize high-resolution imaging with a low-resolution APD array. First, the liquid crystal phase spatial light modulator is used to generate a beam array, which is then scanned. Since the divergence angle of each sub-beam in the array is smaller than the field angle of a single pixel in the APD array, each APD pixel responds only to the three-dimensional information at the beam illumination position. By scanning the beam array, a single pixel collects the target's three-dimensional information multiple times, thereby improving the effective resolution of the APD detector. Finally, the algorithm is simulated in MATLAB using two-dimensional scalar diffraction theory, realizing beam splitting and scanning with a resolution of 5 x 5, and its feasibility is verified theoretically.

  13. Three-dimensional super-resolved live cell imaging through polarized multi-angle TIRF.

    PubMed

    Zheng, Cheng; Zhao, Guangyuan; Liu, Wenjie; Chen, Youhua; Zhang, Zhimin; Jin, Luhong; Xu, Yingke; Kuang, Cuifang; Liu, Xu

    2018-04-01

    Measuring three-dimensional nanoscale cellular structures is challenging, especially when the structure is dynamic. Owing to the informative total internal reflection fluorescence (TIRF) imaging under varied illumination angles, multi-angle (MA) TIRF has been shown to offer nanoscale axial and subsecond temporal resolution. However, conventional MA-TIRF still suffers from limited lateral resolution and fails to characterize the depth image in densely distributed regions. Here, we improve the lateral resolution of MA-TIRF by introducing polarization modulation into the illumination procedure. Equipped with a sparsity-based accelerated proximal algorithm, we recover a more precise 3D sample structure than previous methods, enabling live-cell imaging with a temporal resolution of 2 s and resolving mitochondrial fission and fusion processes at high resolution. We also share the recovery program, which is, to the best of our knowledge, the first open-source recovery code for MA-TIRF.

  14. qSR: a quantitative super-resolution analysis tool reveals the cell-cycle dependent organization of RNA Polymerase I in live human cells.

    PubMed

    Andrews, J O; Conway, W; Cho, W -K; Narayanan, A; Spille, J -H; Jayanth, N; Inoue, T; Mullen, S; Thaler, J; Cissé, I I

    2018-05-09

    We present qSR, an analytical tool for the quantitative analysis of single-molecule-based super-resolution data. The software is created as an open-source platform integrating multiple algorithms for rigorous spatial and temporal characterization of protein clusters in super-resolution data of living cells. First, we illustrate qSR using sample live-cell data of RNA Polymerase II (Pol II) as an example of highly dynamic sub-diffractive clusters. Then we utilize qSR to investigate the organization and dynamics of endogenous RNA Polymerase I (Pol I) in live human cells throughout the cell cycle. Our analysis reveals a previously uncharacterized transient clustering of Pol I. Both stable and transient populations of Pol I clusters co-exist in individual living cells, and their relative fractions vary during the cell cycle in a manner correlating with global gene expression. Thus, qSR facilitates the study of protein organization and dynamics with very high spatial and temporal resolution directly in live cells.

  15. CMEIAS color segmentation: an improved computing technology to process color images for quantitative microbial ecology studies at single-cell resolution.

    PubMed

    Gross, Colin A; Reddy, Chandan K; Dazzo, Frank B

    2010-02-01

    Quantitative microscopy and digital image analysis are underutilized in microbial ecology largely because of the laborious task to segment foreground object pixels from background, especially in complex color micrographs of environmental samples. In this paper, we describe an improved computing technology developed to alleviate this limitation. The system's uniqueness is its ability to edit digital images accurately when presented with the difficult yet commonplace challenge of removing background pixels whose three-dimensional color space overlaps the range that defines foreground objects. Image segmentation is accomplished by utilizing algorithms that address color and spatial relationships of user-selected foreground object pixels. Performance of the color segmentation algorithm evaluated on 26 complex micrographs at single-pixel resolution had an overall pixel classification accuracy of 99+%. Several applications illustrate how this improved computing technology can successfully resolve numerous challenges of complex color segmentation in order to produce images from which quantitative information can be accurately extracted, thereby gaining new perspectives on the in situ ecology of microorganisms. Examples include improvements in the quantitative analysis of (1) microbial abundance and phylotype diversity of single cells classified by their discriminating color within heterogeneous communities, (2) cell viability, (3) spatial relationships and intensity of bacterial gene expression involved in cellular communication between individual cells within rhizoplane biofilms, and (4) biofilm ecophysiology based on ribotype-differentiated radioactive substrate utilization. The stand-alone executable file plus user manual and tutorial images for this color segmentation computing application are freely available at http://cme.msu.edu/cmeias/.
This improved computing technology opens new opportunities for imaging applications where discriminating colors matter most, thereby strengthening quantitative microscopy-based approaches to advance microbial ecology in situ at single-cell resolution.

  16. Wavelet Filter Banks for Super-Resolution SAR Imaging

    NASA Technical Reports Server (NTRS)

    Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess

    2011-01-01

    This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields, such as deformation, ecosystem structure, dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric nature of these methods, their resolution limitations, and their observation-time dependence, the use of spectral estimation together with wavelet-based signal pre- and post-processing techniques to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.

  17. Dynamic placement of plasmonic hotspots for super-resolution surface-enhanced Raman scattering.

    PubMed

    Ertsgaard, Christopher T; McKoskey, Rachel M; Rich, Isabel S; Lindquist, Nathan C

    2014-10-28

    In this paper, we demonstrate dynamic placement of locally enhanced plasmonic fields using holographic laser illumination of a silver nanohole array. To visualize these focused "hotspots", the silver surface was coated with various biological samples for surface-enhanced Raman spectroscopy (SERS) imaging. Due to the large field enhancements, blinking behavior of the SERS hotspots was observed and processed using a stochastic optical reconstruction microscopy algorithm enabling super-resolution localization of the hotspots to within 10 nm. These hotspots were then shifted across the surface in subwavelength (<100 nm for a wavelength of 660 nm) steps using holographic illumination from a spatial light modulator. This created a dynamic imaging and sensing surface, whereas static illumination would only have produced stationary hotspots. Using this technique, we also show that such subwavelength shifting and localization of plasmonic hotspots has potential for imaging applications. Interestingly, illuminating the surface with randomly shifting SERS hotspots was sufficient to completely fill in a wide field of view for super-resolution chemical imaging.

  18. Enhancing spatial resolution of (18)F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine.

    PubMed

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I; Shi, Kuangyu

    2015-07-07

    Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by (18)F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [(18)F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
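    The classification step above can be sketched with a minimal linear SVM trained by sub-gradient descent. This is only an illustration of the technique's shape: the paper uses topological and energy features learned from Geant4-simulated positron tracks, whereas the 2-D features and cluster positions below are entirely hypothetical, and the bias term is omitted for brevity:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Minimal linear SVM trained with Pegasos-style sub-gradient
    descent; labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            w *= (1.0 - eta * lam)          # regularization shrink
            if y[i] * (X[i] @ w) < 1.0:     # hinge-loss violation
                w += eta * y[i] * X[i]
    return w

# Hypothetical 2-D features standing in for the paper's topological and
# energy descriptors of fired pixels; two well-separated classes
# ("primary" vs "non-primary" pixels).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([1.0, 1.0], 0.2, size=(50, 2)),
               rng.normal([-1.0, -1.0], 0.2, size=(50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
w = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w) == y).mean())
```

In practice a kernel SVM (or an off-the-shelf library) would likely be used; the point is that the primary-pixel decision is a supervised binary classification over per-pixel features.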

  19. Enhancing spatial resolution of 18F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I.; Shi, Kuangyu

    2015-07-01

    Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by 18F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [18F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.

  20. Ghost detection and removal based on super-pixel grouping in exposure fusion

    NASA Astrophysics Data System (ADS)

    Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun

    2014-09-01

    A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts are introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosts in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The ZNCC is calculated at the super-pixel level, and super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown directly on conventional display devices, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high-quality images which have fewer ghost artifacts and provide better visual quality than previous approaches.
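    The ZNCC similarity measure at the heart of the method has a compact closed form. A minimal sketch (the patch values are invented; the paper computes this per super-pixel and then suppresses the fusion weights of low-correlation super-pixels):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross correlation between two equally sized
    patches (e.g. the pixels of one super-pixel in two exposures)."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0.0 else 0.0

ref = np.array([[10.0, 20.0], [30.0, 40.0]])
same = 2.0 * ref + 5.0                          # same scene, different exposure
ghost = np.array([[40.0, 10.0], [20.0, 30.0]])  # scene content moved
```

Because ZNCC is invariant to affine intensity changes, a static super-pixel keeps a score near 1 across exposures, while a moving object drops the score, which is what makes it a useful ghost indicator without knowing the camera response function.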

  1. A novel ship CFAR detection algorithm based on adaptive parameter enhancement and wake-aided detection in SAR images

    NASA Astrophysics Data System (ADS)

    Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun

    2018-03-01

    Synthetic aperture radar (SAR) is an indispensable and useful method for marine monitoring. With the increasing number of SAR sensors, high-resolution images can be acquired that contain more target structure information, such as finer spatial details. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets, operating entirely on APT domain values. Firstly, the image is mapped into the new transform domain. Secondly, false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels, and the enhanced image is processed with the Niblack algorithm to obtain a binary wake image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image, as verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform enhances the target structure and improves the contrast of the image. The algorithm performs well in both ship and ship wake detection.
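    The CFAR thresholding idea can be illustrated in one dimension with a basic cell-averaging detector. This is only the generic CFAR mechanism; the paper's detector operates on its APT-domain values with its own adaptive parameters, and the guard/training sizes and scale factor below are assumptions:

```python
import numpy as np

def ca_cfar(x, guard=2, train=8, scale=4.0):
    """1-D cell-averaging CFAR: a cell is declared a detection when it
    exceeds `scale` times the mean of the surrounding training cells
    (guard cells next to the cell under test are excluded)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    hits = np.zeros(n, dtype=bool)
    for i in range(n):
        left = [x[j] for j in range(i - guard - train, i - guard) if 0 <= j < n]
        right = [x[j] for j in range(i + guard + 1, i + guard + train + 1) if j < n]
        cells = left + right
        if cells and x[i] > scale * np.mean(cells):
            hits[i] = True
    return hits

sea = np.ones(64)     # homogeneous sea clutter
sea[40] = 30.0        # one bright "ship" pixel
hits = ca_cfar(sea)
```

Estimating the clutter level locally is what keeps the false alarm rate constant as the background statistics vary across the scene.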

  2. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    PubMed

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  3. Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging.

    PubMed

    Hardie, Russell C; Barnard, Kenneth J; Ordonez, Raul

    2011-12-19

    Fast nonuniform interpolation based super-resolution (SR) has traditionally been limited to applications with translational interframe motion. This is in part because such methods are based on an underlying assumption that the warping and blurring components in the observation model commute. For translational motion this is the case, but it is not true in general. This presents a problem for applications such as airborne imaging where translation may be insufficient. Here we present a new Fourier domain analysis to show that, for many image systems, an affine warping model with limited zoom and shear approximately commutes with the point spread function when diffraction effects are modeled. Based on this important result, we present a new fast adaptive Wiener filter (AWF) SR algorithm for non-translational motion and study its performance with affine motion. The fast AWF SR method employs a new smart observation window that allows us to precompute all the needed filter weights for any type of motion without sacrificing much of the full performance of the AWF. We evaluate the proposed algorithm using simulated data and real infrared airborne imagery that contains a thermal resolution target allowing for objective resolution analysis.

  4. DEPFET pixel detector for future e-e+ experiments

    NASA Astrophysics Data System (ADS)

    Boronat, M.; DEPFET Collaboration

    2016-04-01

    The DEPFET Collaboration develops highly granular, ultra-thin pixel detectors for outstanding vertex reconstruction at future e+e- collider experiments. A DEPFET sensor provides, simultaneously, position-sensitive detection capability and in-pixel amplification through the integration of a field effect transistor on a fully depleted silicon bulk. The characterization of the latest DEPFET prototypes has proven that a comfortable signal-to-noise ratio and excellent single-point resolution can be achieved for a sensor thickness of 50 μm. A complete detector concept is being developed for the Belle II experiment at the new Japanese super flavor factory. The near-final auxiliary ASICs have been produced and shown to operate a DEPFET pixel detector of the latest generation at the read-out speed required for Belle II. DEPFET is not only the technology of choice for the Belle II vertex detector, but also a solid candidate for the International Linear Collider (ILC). Therefore, in this paper, the status of the DEPFET R&D project is reviewed in light of the requirements of the vertex detector at a future e+e- collider.

  5. Resolution enhancement of pump-probe microscope with an inverse-annular filter

    NASA Astrophysics Data System (ADS)

    Kobayashi, Takayoshi; Kawasumi, Koshi; Miyazaki, Jun; Nakata, Kazuaki

    2018-04-01

    Optical pump-probe microscopy can provide images by detecting changes in probe light intensity induced by stimulated emission, photoinduced absorbance change, or photothermal-induced refractive index change in either transmission or reflection mode. Photothermal microscopy, one type of optical pump-probe microscopy, has intrinsic super-resolution capability due to the bilinear dependence of the signal intensity on pump and probe. We introduce new techniques for further resolution enhancement and fast imaging in photothermal microscopy. First, we introduce a new pupil filter, an inverse-annular pupil filter, in a pump-probe photothermal microscope, which provides resolution enhancement in three dimensions. The resolution improvement in the lateral and axial directions is demonstrated by imaging experiments using 20-nm gold nanoparticles. The improvements in X (perpendicular to the common pump and probe polarization direction), Y (parallel to the polarization direction), and Z (the axial direction) are 15 ± 6, 8 ± 8, and 21 ± 2% relative to the resolution without a pupil filter. The resolution enhancement is even better than the vector-field calculation, which predicts corresponding enhancements of 11, 8, and 6%; possible explanations for these unexpected results are discussed. We also demonstrate photothermal imaging of thick biological samples (cells from rabbit intestine and kidney) stained with hematoxylin and eosin dye with the inverse-annular filter. Second, a fast, high-sensitivity photothermal microscope is developed by implementing a spatially segmented balanced detection scheme in a laser scanning microscope using a Galvano mirror. We confirm a 4.9-fold improvement in signal-to-noise ratio with spatially segmented balanced detection compared with conventional detection. The system demonstrates simultaneous bi-modal photothermal and confocal fluorescence imaging of transgenic mouse brain tissue with a pixel dwell time of 20 µs.
The fluorescence image visualizes neurons expressing yellow fluorescent protein, while the photothermal signal detects endogenous chromophores in the mouse brain, allowing 3D visualization of various features such as blood cells and fine structures most probably due to lipids. This imaging modality was constructed using compact and cost-effective laser diodes, and will thus be widely useful in the life and medical sciences. Third, we have further improved the resolution of high-sensitivity laser scanning photothermal microscopy by applying non-linear detection. With a second-order non-linear scheme, the method achieves super resolution with 61 and 42% enhancement over the diffraction-limited values at the probe and pump wavelengths, respectively, together with a high frame rate in a laser scanning microscope. From the photothermal signal of gold nanoparticles, the maximum resolution is determined to be 160 nm in the second-order non-linear detection mode and 270 nm in the linear detection mode. The pixel dwell time and frame time for a 300 × 300 pixel image are 50 µs and 4.5 s, respectively, much shorter than the 1 ms and 100 s of the piezo-driven stage system.

  6. Algorithms for image recovery calculation in extended single-shot phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Hasegawa, Shin-ya; Hirata, Ryo

    2018-04-01

    The single-shot phase-shifting method of image recovery using an inclined reference wave has the advantages of reducing the effects of vibration, being capable of operating in real time, and affording low-cost sensing. This method requires relatively low reference angles compared with the conventional method, which uses a phase shift between three or four pixels. We propose an extended single-shot phase-shifting technique that uses a multiple-step phase-shifting algorithm over a number of pixels equal to the period of the interference fringe. We have verified the theory underlying this recovery method by means of Fourier spectral analysis, and its effectiveness by evaluating image visibility with a high-resolution pattern. Finally, we have experimentally demonstrated high-contrast image recovery using a resolution chart. This method can be used in a variety of applications, such as color holographic interferometry.

  7. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects, such as trees, vehicles, and shadows of buildings or trees. To achieve precise road extraction, the method comprises three stages: classification of images based on a maximum likelihood algorithm to categorize images into the classes of interest; modification of the classified images by connected-component and morphological operators to extract pixels of desired objects while removing undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. To evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as reference. Evaluation on representative test images shows completeness values ranging between 77% and 93%.
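    The RANSAC line-extraction stage can be sketched in a few lines: repeatedly hypothesize a line from two random points and keep the hypothesis with the most inliers. This is the generic RANSAC scheme only, with invented toy data (collinear "road" pixels plus outliers), not the paper's full pipeline:

```python
import numpy as np

def ransac_line(points, iters=200, tol=0.05, seed=0):
    """Minimal RANSAC line fit: repeatedly draw two points, build the
    line through them, and keep the hypothesis with the most inliers
    (perpendicular distance below `tol`)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        dx, dy = q - p
        norm = np.hypot(dx, dy)
        if norm == 0.0:
            continue
        # 2-D cross product gives the distance of each point to line p-q.
        dist = np.abs(dx * (pts[:, 1] - p[1]) - dy * (pts[:, 0] - p[0])) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# 30 collinear "road" pixels on y = 2x plus 10 random outliers.
rng = np.random.default_rng(2)
xs = np.linspace(0.0, 1.0, 30)
road = np.column_stack([xs, 2.0 * xs])
outliers = rng.uniform(0.0, 2.0, size=(10, 2))
inliers = ransac_line(np.vstack([road, outliers]))
```

RANSAC's robustness to the outlier pixels left over from classification is exactly why it suits centerline fitting on noisy road masks.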

  8. Fully convolutional network with cluster for semantic segmentation

    NASA Astrophysics Data System (ADS)

    Ma, Xiao; Chen, Zhongbi; Zhang, Jianlin

    2018-04-01

    At present, image semantic segmentation has been an active research topic in computer vision and artificial intelligence. In particular, extensive research on deep neural networks for image recognition has greatly promoted the development of semantic segmentation. This paper puts forward a method based on a fully convolutional network combined with k-means clustering. The clustering, which uses the image's low-level features and initializes its cluster centers from a super-pixel segmentation, corrects points with low reliability, which are likely to be misclassified, using the points with high reliability in each cluster region. This method refines the segmentation of the target contour and improves the accuracy of the image segmentation.
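    The clustering step amounts to plain k-means with non-random initial centres. A minimal sketch under that assumption (the feature vectors and centre values are invented; in the paper the centres come from a super-pixel segmentation of low-level image features):

```python
import numpy as np

def kmeans(X, centers, iters=20):
    """Plain k-means with caller-supplied initial centres; the paper
    initializes the centres from a super-pixel segmentation rather
    than at random."""
    centers = np.asarray(centers, dtype=float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign every sample to its nearest centre.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned samples.
        for k in range(len(centers)):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers

# Two toy clusters of low-level features (e.g. colour/position vectors).
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
labels, centers = kmeans(X, centers=[[1.0, 1.0], [9.0, 9.0]])
```

Seeding the centres from super-pixels, rather than randomly, ties each cluster to a spatially coherent region, which is what lets high-reliability points correct the low-reliability ones within the same region.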

  9. Enhancing the image resolution in a single-pixel sub-THz imaging system based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Alkus, Umit; Ermeydan, Esra Sengun; Sahin, Asaf Behzat; Cankaya, Ilyas; Altan, Hakan

    2018-04-01

    Compressed sensing (CS) techniques allow for faster imaging when combined with scan architectures, which typically suffer from speed. This technique, when implemented with a subterahertz (sub-THz) single detector scan imaging system, provides images whose resolution is only limited by the pixel size of the pattern used to scan the image plane. To overcome this limitation, the image of the target can be oversampled; however, this results in slower imaging rates, especially if it is done in two dimensions across the image plane. We show that by implementing a one-dimensional (1-D) scan of the image plane, a modified approach to CS theory applied with an appropriate reconstruction algorithm allows for successful reconstruction of the reflected oversampled image of a target placed in standoff configuration from the source. The experiments are done in reflection mode configuration, where the operating frequency is 93 GHz and the corresponding wavelength is λ = 3.2 mm. To reconstruct the image with fewer samples, CS theory is applied using masks where the pixel size is 5 mm × 5 mm, and each mask covers an image area of 5 cm × 5 cm, meaning that the basic image is resolved as 10 × 10 pixels. To enhance the resolution, the information between two consecutive pixels is used, and oversampling along 1-D coupled with a modification of the masks in CS theory allowed oversampled images to be reconstructed rapidly in 20 × 20 and 40 × 40 pixel formats. These are then compared using two different reconstruction algorithms, TVAL3 and ℓ1-MAGIC. The performance of these methods is compared for both simulated signals and real signals. It is found that the modified CS theory approach coupled with the TVAL3 reconstruction process, even when scanning along only 1-D, allows for rapid, precise reconstruction of the oversampled target.
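    The CS reconstruction principle can be illustrated with a basic ℓ1 sparse-recovery solver. TVAL3 and ℓ1-MAGIC are far more sophisticated; this is only the generic iterative soft-thresholding (ISTA) machinery underneath CS imaging, on an invented toy problem:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Iterative soft-thresholding (ISTA) for the l1 recovery problem
    min 0.5*||Ax - y||^2 + lam*||x||_1 -- the generic sparse solver
    behind CS imaging (a sketch, not TVAL3 or l1-MAGIC themselves)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L    # gradient descent step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# Recover a 3-sparse 64-"pixel" scene from 32 random-mask measurements.
rng = np.random.default_rng(3)
A = rng.normal(size=(32, 64)) / np.sqrt(32.0)
x_true = np.zeros(64)
x_true[[5, 20, 41]] = [1.0, -1.0, 2.0]
y = A @ x_true
x_hat = ista(A, y)
```

The point of the sketch is the CS trade: far fewer measurements than pixels, compensated by a sparsity prior enforced through the ℓ1 penalty.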

  10. Fast, large-scale hologram calculation in wavelet domain

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi

    2018-04-01

    We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.

  11. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    NASA Astrophysics Data System (ADS)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
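    When the candidate temperatures are fixed on a grid, the measured multispectral signal is linear in the temperature area fractions, so the inversion idea can be sketched with ordinary least squares. The paper solves the harder ill-posed problem with an improved Levenberg-Marquardt algorithm; the temperature grid, channel count, and arbitrary radiance units below are assumptions for illustration only:

```python
import numpy as np

def planck(lam_um, T):
    """Planck spectral radiance (arbitrary units) at wavelength lam_um
    (micrometres) and temperature T (kelvin)."""
    c2 = 14388.0  # second radiation constant, um*K
    return 1.0 / (lam_um ** 5 * (np.exp(c2 / (lam_um * T)) - 1.0))

# Candidate temperatures spanning the paper's 500-800 K range, and
# channels sampling the pyrometer's 8.0-13.0 um spectral range.
Ts = np.array([500.0, 650.0, 800.0])
lams = np.linspace(8.0, 13.0, 6)
B = np.array([[planck(l, T) for T in Ts] for l in lams])

f_true = np.array([0.2, 0.5, 0.3])  # temperature area fractions
y = B @ f_true                      # simulated multispectral intensities
f_hat, *_ = np.linalg.lstsq(B, y, rcond=None)
```

With noisy data and a finer temperature grid the columns of B become nearly collinear and the problem turns ill-posed, which is why the paper needs a regularized iterative solver rather than a plain least-squares solve.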

  12. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    NASA Astrophysics Data System (ADS)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary mixed pixels, ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, linear subpixel features are pre-determined from the hyperspectral characteristics, the remaining mixed pixels are detected based on maximum linearization index analysis, and the classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  13. Dense soft tissue 3D reconstruction refined with super-pixel segmentation for robotic abdominal surgery.

    PubMed

    Penza, Veronica; Ortiz, Jesús; Mattos, Leonardo S; Forgione, Antonello; De Momi, Elena

    2016-02-01

Single-incision laparoscopic surgery decreases postoperative infections, but introduces limitations in the surgeon's maneuverability and in the surgical field of view. This work aims at enhancing intra-operative surgical visualization by exploiting 3D information about the surgical site. An interactive guidance system is proposed wherein the pose of preoperative tissue models is updated online. A critical process is the intra-operative acquisition of tissue surfaces, which can be achieved using stereoscopic imaging and 3D reconstruction techniques. This work contributes to this process by proposing new methods for improved dense 3D reconstruction of soft tissues, which allow more accurate deformation identification and facilitate the registration process. Two methods for soft-tissue 3D reconstruction are proposed: Method 1 follows the traditional approach of the block matching algorithm. Method 2 performs a nonparametric modified census transform to be more robust to illumination variation. The simple linear iterative clustering (SLIC) super-pixel algorithm is exploited for disparity refinement by filling holes in the disparity images. The methods were validated using two video datasets from the Hamlyn Centre, achieving an accuracy of 2.95 and 1.66 mm, respectively. A comparison with ground-truth data demonstrated that the disparity refinement procedure (1) increases the number of reconstructed points by up to 43% and (2) does not significantly affect the accuracy of the 3D reconstructions. Both methods give results that compare favorably with state-of-the-art methods. Their computational time constrains real-time applicability, but it can be greatly improved by a GPU implementation.
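The super-pixel disparity refinement can be sketched as hole filling by segment statistics. Here the label map stands in for the output of a SLIC segmentation (e.g. from an image-processing library), and filling holes with the segment median is a simplified stand-in for the paper's refinement procedure:

```python
import numpy as np

def refine_disparity(disp, labels):
    """Fill invalid disparities (NaN) with the median of the valid
    disparities inside the same super-pixel segment."""
    out = disp.copy()
    for lab in np.unique(labels):
        seg = labels == lab
        valid = disp[seg & ~np.isnan(disp)]
        if valid.size:
            out[seg & np.isnan(disp)] = np.median(valid)
    return out

# A toy disparity map with two holes and a two-segment label map.
disp = np.array([[1.0, 1.0, np.nan],
                 [1.0, np.nan, 5.0]])
labels = np.array([[0, 0, 0],
                   [0, 1, 1]])
refined = refine_disparity(disp, labels)
```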

  14. Geometrical superresolved imaging using nonperiodic spatial masking.

    PubMed

    Borkowski, Amikam; Zalevsky, Zeev; Javidi, Bahram

    2009-03-01

    The resolution of every imaging system is limited either by the F-number of its optics or by the geometry of its detection array. The geometrical limitation is caused by lack of spatial sampling points as well as by the shape of every sampling pixel that generates spectral low-pass filtering. We present a novel approach to overcome the low-pass filtering that is due to the shape of the sampling pixels. The approach combines special algorithms together with spatial masking placed in the intermediate image plane and eventually allows geometrical superresolved imaging without relation to the actual shape of the pixels.

  15. Super-resolution depth information from a short-wave infrared laser gated-viewing system by using correlated double sampling

    NASA Astrophysics Data System (ADS)

    Göhler, Benjamin; Lutzmann, Peter

    2017-10-01

    Primarily, a laser gated-viewing (GV) system provides range-gated 2D images without any range resolution within the range gate. By combining two GV images with slightly different gate positions, 3D information within a part of the range gate can be obtained. The depth resolution is higher (super-resolution) than the minimal gate shift step size in a tomographic sequence of the scene. For a state-of-the-art system with a typical frame rate of 20 Hz, the time difference between the two required GV images is 50 ms which may be too long in a dynamic scenario with moving objects. Therefore, we have applied this approach to the reset and signal level images of a new short-wave infrared (SWIR) GV camera whose read-out integrated circuit supports correlated double sampling (CDS) actually intended for the reduction of kTC noise (reset noise). These images are extracted from only one single laser pulse with a marginal time difference in between. The SWIR GV camera consists of 640 x 512 avalanche photodiodes based on mercury cadmium telluride with a pixel pitch of 15 μm. A Q-switched, flash lamp pumped solid-state laser with 1.57 μm wavelength (OPO), 52 mJ pulse energy after beam shaping, 7 ns pulse length and 20 Hz pulse repetition frequency is used for flash illumination. In this paper, the experimental set-up is described and the operating principle of CDS is explained. The method of deriving super-resolution depth information from a GV system by using CDS is introduced and optimized. Further, the range accuracy is estimated from measured image data.

  16. DEPFET detectors for future electron-positron colliders

    NASA Astrophysics Data System (ADS)

    Marinas, C.

    2015-11-01

The DEPFET Collaboration develops highly granular, ultra-thin pixel detectors for outstanding vertex reconstruction at future electron-positron collider experiments. A DEPFET sensor, by integrating a field-effect transistor on a fully depleted silicon bulk, provides simultaneous position-sensitive detection capabilities and in-pixel amplification. The characterization of the latest DEPFET prototypes has proven that an adequate signal-to-noise ratio and excellent single-point resolution can be achieved for a sensor thickness of 50 micrometers. The close-to-final auxiliary ASICs have been produced and shown to operate a DEPFET pixel detector of the latest generation at the required read-out speed. A complete detector concept is being developed for the Belle II experiment at the new Japanese super flavor factory. DEPFET is not only the technology of choice for the Belle II vertex detector, but also a prime candidate for the ILC. Therefore, in this contribution, the status of the DEPFET R&D project is reviewed in the light of the requirements of the vertex detector at a future electron-positron collider.

  17. SeaWiFS technical report series. Volume 4: An analysis of GAC sampling algorithms. A case study

    NASA Technical Reports Server (NTRS)

    Yeh, Eueng-Nan (Editor); Hooker, Stanford B. (Editor); Mccain, Charles R. (Editor); Fu, Gary (Editor)

    1992-01-01

The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) instrument will sample at approximately 1 km resolution at nadir, which will be broadcast for reception by real-time ground stations. However, the global data set will comprise coarser, four-kilometer data, which will be recorded and broadcast to the SeaWiFS Project for processing. Several algorithms for degrading the one-kilometer data to four-kilometer data are examined using imagery from the Coastal Zone Color Scanner (CZCS), in an effort to determine which algorithm best preserves the statistical characteristics of the derived products generated from the one-kilometer data. Of the algorithms tested, subsampling based on a fixed pixel within a 4 x 4 pixel array is judged to yield the most consistent results when compared to the one-kilometer data products.
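The preferred subsampling scheme, keeping one fixed pixel from each 4 x 4 array, is straightforward to express. This is a minimal sketch, not the SeaWiFS processing code:

```python
import numpy as np

def gac_subsample(img, block=4, row=0, col=0):
    """Keep one fixed pixel (row, col) from each non-overlapping
    block x block window of the full-resolution image."""
    h = img.shape[0] - img.shape[0] % block
    w = img.shape[1] - img.shape[1] % block
    return img[row:h:block, col:w:block]

full = np.arange(64, dtype=float).reshape(8, 8)
coarse = gac_subsample(full)  # 8x8 degraded to 2x2
```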

  18. Complementary aspects of spatial resolution and signal-to-noise ratio in computational imaging

    NASA Astrophysics Data System (ADS)

    Gureyev, T. E.; Paganin, D. M.; Kozlov, A.; Nesterets, Ya. I.; Quiney, H. M.

    2018-05-01

    A generic computational imaging setup is considered which assumes sequential illumination of a semitransparent object by an arbitrary set of structured coherent illumination patterns. For each incident illumination pattern, all transmitted light is collected by a photon-counting bucket (single-pixel) detector. The transmission coefficients measured in this way are then used to reconstruct the spatial distribution of the object's projected transmission. It is demonstrated that the square of the spatial resolution of such a setup is usually equal to the ratio of the image area to the number of linearly independent illumination patterns. If the noise in the measured transmission coefficients is dominated by photon shot noise, then the ratio of the square of the mean signal to the noise variance is proportional to the ratio of the mean number of registered photons to the number of illumination patterns. The signal-to-noise ratio in a reconstructed transmission distribution is always lower if the illumination patterns are nonorthogonal, because of spatial correlations in the measured data. Examples of imaging methods relevant to the presented analysis include conventional imaging with a pixelated detector, computational ghost imaging, compressive sensing, super-resolution imaging, and computed tomography.
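The bucket-detector measurement and reconstruction model can be sketched with orthogonal (Hadamard) illumination patterns, for which inversion reduces to a scaled transpose; the negative pattern values would in practice require differential measurements, and nonorthogonal patterns would lower the signal-to-noise ratio as the abstract describes:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 16                                    # 4 x 4 object, N pixels
obj = np.random.default_rng(0).random(N)  # projected transmission

H = hadamard(N)      # one illumination pattern per row (+/-1 values)
meas = H @ obj       # one bucket-detector reading per pattern

# With orthogonal patterns (H @ H.T = N * I), reconstruction is a
# scaled transpose of the pattern matrix.
recon = (H.T @ meas) / N
```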

  19. A smartphone-based chip-scale microscope using ambient illumination.

    PubMed

    Lee, Seung Ah; Yang, Changhuei

    2014-08-21

    Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone's camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the image resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
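Pixel super-resolution from sub-pixel-shifted low-resolution frames can be illustrated with a basic shift-and-add sketch (not the authors' reconstruction algorithm); when all factor-squared integer sub-pixel shifts are present, the fine grid is tiled exactly:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Place each low-res frame onto a factor-times-finer grid at its
    integer sub-pixel shift and average overlapping samples."""
    h, w = frames[0].shape
    num = np.zeros((h * factor, w * factor))
    den = np.zeros_like(num)
    for frame, (dy, dx) in zip(frames, shifts):
        num[dy::factor, dx::factor] += frame
        den[dy::factor, dx::factor] += 1.0
    den[den == 0] = 1.0  # leave never-sampled fine pixels at zero
    return num / den

rng = np.random.default_rng(1)
hr = rng.random((8, 8))                    # ground-truth fine image
factor = 2
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]  # all factor**2 shifts
frames = [hr[dy::factor, dx::factor] for dy, dx in shifts]
sr = shift_and_add(frames, shifts, factor)
```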

  20. A smartphone-based chip-scale microscope using ambient illumination

    PubMed Central

    Lee, Seung Ah; Yang, Changhuei

    2014-01-01

Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone’s camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the imaging resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system. PMID:24964209

  1. Marine Boundary Layer Cloud Property Retrievals from High-Resolution ASTER Observations: Case Studies and Comparison with Terra MODIS

    NASA Technical Reports Server (NTRS)

    Werner, Frank; Wind, Galina; Zhang, Zhibo; Platnick, Steven; Di Girolamo, Larry; Zhao, Guangyu; Amarasinghe, Nandana; Meyer, Kerry

    2016-01-01

A research-level retrieval algorithm for cloud optical and microphysical properties is developed for the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) aboard the Terra satellite. It is based on the operational MODIS algorithm. This paper documents the technical details of this algorithm and evaluates the retrievals for selected marine boundary layer cloud scenes through comparisons with the operational MODIS Data Collection 6 (C6) cloud product. The newly developed, ASTER-specific cloud masking algorithm is evaluated through comparison with an independent algorithm reported in Zhao and Di Girolamo (2006). To validate and evaluate the cloud optical thickness (tau) and cloud effective radius (r(sub eff)) from ASTER, the high-spatial-resolution ASTER observations are first aggregated to the same 1000m resolution as MODIS. Subsequently, tau(sub aA) and r(sub eff, aA) retrieved from the aggregated ASTER radiances are compared with the collocated MODIS retrievals. For overcast pixels, the two data sets agree very well with Pearson's product-moment correlation coefficients of R greater than 0.970. However, for partially cloudy pixels there are significant differences between r(sub eff, aA) and the MODIS results which can exceed 10 micrometers. Moreover, it is shown that the numerous delicate cloud structures in the example marine boundary layer scenes, resolved by the high-resolution ASTER retrievals, are smoothed by the MODIS observations. The overall good agreement between the research-level ASTER results and the operational MODIS C6 products proves the feasibility of MODIS-like retrievals from ASTER reflectance measurements and provides the basis for future studies concerning the scale dependency of satellite observations and three-dimensional radiative effects.

  2. Marine boundary layer cloud property retrievals from high-resolution ASTER observations: case studies and comparison with Terra MODIS

    NASA Astrophysics Data System (ADS)

    Werner, Frank; Wind, Galina; Zhang, Zhibo; Platnick, Steven; Di Girolamo, Larry; Zhao, Guangyu; Amarasinghe, Nandana; Meyer, Kerry

    2016-12-01

A research-level retrieval algorithm for cloud optical and microphysical properties is developed for the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) aboard the Terra satellite. It is based on the operational MODIS algorithm. This paper documents the technical details of this algorithm and evaluates the retrievals for selected marine boundary layer cloud scenes through comparisons with the operational MODIS Data Collection 6 (C6) cloud product. The newly developed, ASTER-specific cloud masking algorithm is evaluated through comparison with an independent algorithm reported in Zhao and Di Girolamo (2006). To validate and evaluate the cloud optical thickness (τ) and cloud effective radius (reff) from ASTER, the high-spatial-resolution ASTER observations are first aggregated to the same 1000 m resolution as MODIS. Subsequently, τaA and reff, aA retrieved from the aggregated ASTER radiances are compared with the collocated MODIS retrievals. For overcast pixels, the two data sets agree very well with Pearson's product-moment correlation coefficients of R > 0.970. However, for partially cloudy pixels there are significant differences between reff, aA and the MODIS results which can exceed 10 µm. Moreover, it is shown that the numerous delicate cloud structures in the example marine boundary layer scenes, resolved by the high-resolution ASTER retrievals, are smoothed by the MODIS observations. The overall good agreement between the research-level ASTER results and the operational MODIS C6 products proves the feasibility of MODIS-like retrievals from ASTER reflectance measurements and provides the basis for future studies concerning the scale dependency of satellite observations and three-dimensional radiative effects.

  3. Low-Light Image Enhancement Using Adaptive Digital Pixel Binning

    PubMed Central

    Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki

    2015-01-01

    This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor. PMID:26121609
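A brightness-adaptive binning step of the kind described, with more binning where the scene is dark and full resolution where it is bright, might look like the following simplified sketch. The `gain` parameter is hypothetical, and the actual ISP algorithm additionally handles context, noise level, and anti-saturation:

```python
import numpy as np

def adaptive_binning(img, gain=4.0):
    """Blend each pixel with its 3x3 neighborhood average: dark regions
    get more binning (noise suppression), bright regions keep full
    resolution. Intensities are assumed to lie in [0, 1]."""
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    binned = sum(padded[r:r + h, c:c + w]
                 for r in range(3) for c in range(3)) / 9.0
    weight = np.clip(img * gain, 0.0, 1.0)  # 1 where bright enough
    return weight * img + (1.0 - weight) * binned

flat = np.full((4, 4), 0.5)   # bright, uniform region: unchanged
dark = np.zeros((8, 8))
dark[4, 4] = 0.08             # isolated noise spike in a dark region
```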

  4. Spatial scaling of net primary productivity using subpixel landcover information

    NASA Astrophysics Data System (ADS)

    Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.

    2008-10-01

Gridding the land surface into coarse homogeneous pixels may cause important biases in ecosystem model estimations of carbon budget components at local, regional and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is made on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP obtained from calculations made at coarse spatial resolutions is multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to estimates made by a coupled carbon-hydrology model (BEPS-TerrainLab) at 1-km resolution over a watershed (the Baohe River Basin) located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as its spatial variability.
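The correction idea, multiplying coarse-resolution NPP by a simple function of subpixel land-cover fractions, can be sketched as follows; the per-cover NPP rates are hypothetical:

```python
# Hypothetical per-cover-type NPP rates (gC m^-2 yr^-1).
NPP_RATE = {'forest': 800.0, 'grass': 300.0, 'crop': 500.0}

def corrected_npp(dominant_cover, subpixel_fractions):
    """Correct a coarse-pixel NPP estimate, computed as if the pixel
    were purely its dominant cover type, with a multiplicative factor
    built from the subpixel land-cover fractions."""
    coarse = NPP_RATE[dominant_cover]
    fine = sum(NPP_RATE[c] * f for c, f in subpixel_fractions.items())
    factor = fine / coarse    # the simple correction function
    return coarse * factor

# Coarse pixel labeled 'forest' but actually 60% forest, 40% grass.
npp = corrected_npp('forest', {'forest': 0.6, 'grass': 0.4})
```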

  5. Segment fusion of ToF-SIMS images.

    PubMed

    Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A

    2016-06-08

The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism of the silver nanoparticles into the plant tissue, giving new understanding of the uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using eCognition, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution for the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.

  6. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from a low-resolution image coupled with some prior knowledge as a regularization term. Classic methods regularize the image by total variation (TV) and/or wavelet or other transforms, which introduce some artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key of the model is that the adaptive filter first removes the spatial relevance among pixels, and then only the high-frequency (HF) part, which is sparser in the TV and transform domains, is used as the regularization term. Concretely, by transforming the original model, the SR problem can be solved by two alternating iteration sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image are obtained by solving the first and second sub-problems, respectively. In the experimental part, a set of remote sensing images captured by Landsat satellites is tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with state-of-the-art methods.

  7. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Zhang, Zhibo; Ackerman, Steven A.; Maddux, Brent

    2012-01-01

The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1 km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges as well as ocean pixels with partly cloudy elements in the 250 m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250 m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1-D cloud model. While the optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  8. Computational-optical microscopy for 3D biological imaging beyond the diffraction limit

    NASA Astrophysics Data System (ADS)

    Grover, Ginni

In recent years, super-resolution imaging has become an important fluorescence microscopy tool. It has enabled imaging of structures smaller than the optical diffraction limit with resolution below 50 nm. Extension to high-resolution volume imaging has been achieved by integration with various optical techniques. In this thesis, development of a fluorescence microscope enabling high-resolution, extended-depth, three-dimensional (3D) imaging is discussed, achieved by integrating computational methods with optical systems. In the first part of the thesis, point spread function (PSF) engineering for volume imaging is discussed. A class of PSFs, referred to as double-helix (DH) PSFs, is generated. The PSFs exhibit two focused spots in the image plane which rotate about the optical axis, encoding depth in the rotation of the image. These PSFs extend the depth-of-field by up to a factor of ~5. Precision performance of the DH-PSFs, based on an information-theoretical analysis, is compared with other 3D methods, with the conclusion that the DH-PSFs provide the best precision and the longest depth-of-field. Out of various possible DH-PSFs, a suitable PSF is obtained for super-resolution microscopy. The DH-PSFs are implemented in imaging systems, such as a microscope, with a special phase modulation at the pupil plane. Surface-relief elements which are polarization-insensitive and ~90% light efficient are developed for phase modulation. The photon-efficient DH-PSF microscopes thus developed are used, along with optimal position estimation algorithms, for tracking and super-resolution imaging in 3D. Imaging at depths-of-field of up to 2.5 μm is achieved without focus scanning. Microtubules were imaged with a 3D resolution of (6, 9, 39) nm, which is in close agreement with the theoretical limit. A quantitative study of the co-localization of two proteins in volume was conducted in live bacteria.
In the last part of the thesis, practical aspects of the DH-PSF microscope are discussed. A method is developed to stabilize it for extended periods of time with 3-4 nm precision in 3D, and drift-free 3D super-resolution is demonstrated. A PSF correction algorithm is demonstrated to improve the characteristics of the DH-PSF in an experiment, where it is implemented with a polarization-insensitive liquid crystal spatial light modulator.

  9. Effects of speckle/pixel size ratio on temporal and spatial speckle-contrast analysis of dynamic scattering systems: Implications for measurements of blood-flow dynamics.

    PubMed

    Ramirez-San-Juan, J C; Mendez-Aguilar, E; Salazar-Hermenegildo, N; Fuentes-Garcia, A; Ramos-Garcia, R; Choi, B

    2013-01-01

Laser Speckle Contrast Imaging (LSCI) is an optical technique used to generate blood flow maps with high spatial and temporal resolution. It is well known that in LSCI the speckle size must exceed the Nyquist criterion to maximize the speckle pattern's contrast. In this work, we experimentally study the effect of the speckle/pixel size ratio not only on dynamic speckle contrast, but also on the calculation of the relative flow speed for temporal and spatial analysis. Our data suggest that the temporal LSCI algorithm is more accurate at assessing relative changes in flow speed than the spatial algorithm.
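The two contrast estimators compared in the study can be written down directly. For fully developed speckle, whose intensity is exponentially distributed, both should approach K = 1; the synthetic data below stand in for real speckle frames:

```python
import numpy as np

def temporal_contrast(stack):
    """Per-pixel K = std/mean along the time axis (stack: T x H x W)."""
    return stack.std(axis=0, ddof=1) / stack.mean(axis=0)

def spatial_contrast(frame):
    """Global K = std/mean over the pixels of a single frame."""
    return frame.std(ddof=1) / frame.mean()

# Fully developed speckle has exponentially distributed intensity,
# for which the contrast approaches K = 1.
rng = np.random.default_rng(0)
stack = rng.exponential(1.0, size=(2000, 16, 16))
K_t = temporal_contrast(stack)    # one value per pixel
K_s = spatial_contrast(stack[0])  # one value for the first frame
```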

  10. Large scale tracking algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett

    2015-01-01

Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but will unfortunately also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  11. The formation of quantum images and their transformation and super-resolution reading

    NASA Astrophysics Data System (ADS)

    Balakin, D. A.; Belinsky, A. V.

    2016-05-01

Images formed by light with suppressed photon fluctuations are interesting objects for studies aimed at increasing their limiting information capacity and quality. Light in this sub-Poissonian state can be prepared in a resonator filled with a medium with Kerr nonlinearity, in which self-phase modulation takes place. Spatially and temporally multimode light beams are studied, and the production of spatial-frequency spectra of suppressed photon fluctuations is described. Efficient operation regimes of the system are found. A particular schematic solution is described that allows one to realize to a maximum degree the potential of forming squeezed states of light during self-phase modulation in a resonator, for the maximal suppression of amplitude quantum noise in two-dimensional imaging. The efficiency of using light with suppressed quantum fluctuations for computer image processing is studied. An algorithm is described for interpreting measurements to increase the resolution beyond the geometrical resolution. A mathematical model that characterizes the measurement scheme is constructed, and the problem of image reconstruction is solved. The algorithm for the interpretation of images is verified. Conditions are found for the efficient application of sub-Poissonian light for super-resolution imaging. It is found that the image should have low contrast and be maximally transparent.

  12. Super resolution PLIF demonstrated in turbulent jet flows seeded with I2

    NASA Astrophysics Data System (ADS)

    Xu, Wenjiang; Liu, Ning; Ma, Lin

    2018-05-01

    Planar laser induced fluorescence (PLIF) represents an indispensable tool for flow and flame imaging. However, the PLIF technique suffers from limited spatial resolution or blurring in many situations, which restricts its applicability and capability. This work describes a new method, named SR-PLIF (super-resolution PLIF), to overcome these limitations and enhance the capability of PLIF. The method uses PLIF images captured simultaneously from two (or more) orientations to reconstruct a final PLIF image with resolution enhanced or blurring removed. This paper reports the development of the reconstruction algorithm, and the experimental demonstration of the SR-PLIF method both with controlled samples and with turbulent flows seeded with iodine vapor. Using controlled samples with two cameras, the spatial resolution in the best case was improved from 0.06 mm in the projections to 0.03 mm in the SR image, in terms of the spreading width of a sharp edge. With turbulent flows, an image sharpness measure was developed to quantify the spatial resolution, and SR reconstruction with two cameras can effectively improve the spatial resolution compared to the projections in terms of the sharpness measure.
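A gradient-based sharpness measure of the general kind used to quantify spatial resolution can be sketched as follows; the paper's exact measure is not specified here, so this mean-squared-gradient score is an illustrative assumption:

```python
import numpy as np

def sharpness(img):
    """Mean squared gradient magnitude; blurring lowers the score, so
    the measure can rank a reconstruction against its projections."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(gx ** 2 + gy ** 2)

# A vertical step edge, and the same edge after a 1x3 box blur.
edge = np.zeros((16, 16))
edge[:, 8:] = 1.0
padded = np.pad(edge, ((0, 0), (1, 1)), mode='edge')
blurred = (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0
```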

  13. An algorithm to detect fire activity using Meteosat: fine tuning and quality assessment

    NASA Astrophysics Data System (ADS)

    Amraoui, M.; DaCamara, C. C.; Ermida, S. L.

    2012-04-01

Hot spot detection by means of sensors on-board geostationary satellites allows studying wildfire activity at hourly and even sub-hourly intervals, an advantage that cannot be met by polar orbiters. Since 1997, the Satellite Application Facility for Land Surface Analysis has been running an operational procedure that allows detecting active fires based on information from Meteosat-8/SEVIRI. This is the so-called Fire Detection and Monitoring (FD&M) product; the procedure takes advantage of the temporal resolution of SEVIRI (one image every 15 min), and relies on information from SEVIRI channels (namely 0.6, 0.8, 3.9, 10.8 and 12.0 μm) together with information on illumination angles. The method is based on heritage from contextual algorithms designed for polar, sun-synchronous instruments, namely NOAA/AVHRR and MODIS/TERRA-AQUA. A potential fire pixel is compared with the neighboring ones and the decision is made based on relative thresholds as derived from the pixels in the neighborhood. Generally speaking, the observed fire incidence compares well against hot spots extracted from the global daily active fire product developed by the MODIS Fire Team. However, values of probability of detection (POD) tend to be quite low, a result that may be partially explained by the finer resolution of MODIS. The aim of the present study is to make a systematic assessment of the impacts on POD and False Alarm Ratio (FAR) of the several parameters that are set in the algorithms. Such parameters range from the threshold values of brightness temperature in the IR3.9 and 10.8 channels that are used to select potential fire pixels up to the extent of the background grid and thresholds used to statistically characterize the radiometric departures of a potential pixel from the respective background. The impact of different criteria to identify pixels contaminated by clouds, smoke and sun glint is also evaluated.
Finally, the advantages that may be brought to the algorithm by adding contextual tests in the time domain are discussed. The study lays the groundwork for the development of improved quality flags that will be integrated into the FD&M product in the near future.
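The contextual test described in the abstract (absolute pre-selection followed by relative thresholds against a background window) can be sketched as follows; the window size and all threshold values here are illustrative placeholders, not the operational FD&M settings:

```python
import statistics

def detect_fires(bt39, bt108, window=2, abs_thresh=310.0, k=3.0):
    """Toy contextual fire detection: a pixel is a candidate if its 3.9 um
    brightness temperature (BT) exceeds an absolute threshold, and is
    confirmed if it stands out from the statistics of its background
    window (relative thresholds) and from the 10.8 um channel."""
    rows, cols = len(bt39), len(bt39[0])
    fires = []
    for i in range(rows):
        for j in range(cols):
            if bt39[i][j] < abs_thresh:          # absolute pre-selection
                continue
            # collect background pixels in the surrounding window
            bg = [bt39[r][c]
                  for r in range(max(0, i - window), min(rows, i + window + 1))
                  for c in range(max(0, j - window), min(cols, j + window + 1))
                  if (r, c) != (i, j)]
            mean = statistics.mean(bg)
            sd = statistics.pstdev(bg)
            # relative tests: exceed background mean by k sigma,
            # and show a strong 3.9-minus-10.8 um BT difference
            if bt39[i][j] > mean + k * sd and bt39[i][j] - bt108[i][j] > 10.0:
                fires.append((i, j))
    return fires
```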

  14. Moving object detection in top-view aerial videos improved by image stacking

    NASA Astrophysics Data System (ADS)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data come from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
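A minimal sketch of pixel-wise stacking over already-registered frames; the paper registers on the moving objects themselves, but the per-pixel redundancy it exploits is the same. A robust statistic such as the median both suppresses noise and rejects transient outliers:

```python
import statistics

def stack_median(frames):
    """Pixel-wise median across a stack of aligned frames (lists of rows).
    Each output pixel is the median of that pixel's samples in the stack."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[statistics.median(f[i][j] for f in frames) for j in range(cols)]
            for i in range(rows)]
```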

  15. Improving Nocturnal Fire Detection with the VIIRS Day-Night Band

    NASA Technical Reports Server (NTRS)

    Polivka, Thomas N.; Wang, Jun; Ellison, Luke T.; Hyer, Edward J.; Ichoku, Charles M.

    2016-01-01

Building on existing techniques for satellite remote sensing of fires, this paper takes advantage of the day-night band (DNB) aboard the Visible Infrared Imaging Radiometer Suite (VIIRS) to develop the Firelight Detection Algorithm (FILDA), which characterizes fire pixels based on both visible-light and infrared (IR) signatures at night. By adjusting the fire pixel selection criteria to include visible-light signatures, FILDA allows for significantly improved detection of pixels with smaller and/or cooler subpixel hotspots than the operational Interface Data Processing System (IDPS) algorithm. VIIRS scenes with near-coincident Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) overpasses are examined after applying the operational VIIRS fire product algorithm and a modified "candidate fire pixel selection" approach from FILDA that lowers the 4-µm brightness temperature (BT) threshold but requires a minimum DNB radiance. FILDA is shown to be effective in detecting gas flares and characterizing fire lines during large forest fires (such as the Rim Fire in California and the High Park fire in Colorado). Compared with the operational VIIRS fire algorithm for the study period, FILDA shows a large increase (up to 90%) in the number of detected fire pixels that can be verified with the finer-resolution (90 m) ASTER data. Part (30%) of this increase is likely due to the combined use of the DNB and lower 4-µm BT thresholds for fire detection in FILDA. Although further studies are needed, quantitative use of the DNB to improve fire detection could lead to reduced response times to wildfires and better estimates of fire characteristics (smoldering and flaming) at night.
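The two-criterion candidate selection can be illustrated schematically; all threshold values below are invented for the example and are not FILDA's operational settings:

```python
def candidate_fire_pixels(bt4, dnb, bt4_op=320.0, bt4_low=305.0, dnb_min=5e-9):
    """Sketch of the FILDA idea with made-up thresholds: the operational-style
    test keeps only pixels hot enough at 4 um; the DNB-aided test lowers the
    4-um BT threshold but demands a minimum visible-light radiance, so
    smaller/cooler subpixel hotspots that glow visibly are still caught."""
    operational = [i for i, t in enumerate(bt4) if t > bt4_op]
    filda = [i for i, (t, r) in enumerate(zip(bt4, dnb))
             if t > bt4_low and r > dnb_min]
    return operational, filda
```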

  16. Determination of target detection limits in hyperspectral data using band selection and dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Gross, W.; Boehler, J.; Twizer, K.; Kedem, B.; Lenz, A.; Kneubuehler, M.; Wellig, P.; Oechslin, R.; Schilling, H.; Rotman, S.; Middelmann, W.

    2016-10-01

Hyperspectral remote sensing data can be used for civil and military applications to robustly detect and classify target objects. The high spectral resolution of hyperspectral data can compensate for its comparatively low spatial resolution, allowing detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, affecting and limiting data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection algorithms and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data are used in combination with linear target simulation procedures to create a representative range of target-to-background ratios for the evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate the transferability of band selection between sensors. The same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels of the target materials are extracted and used to simulate mixed pixels with selected background materials. Target signatures are linearly combined with different background materials in varying ratios. The commonly used Adaptive Coherence Estimator (ACE) classification algorithm is used to compare the detection limit for the original data with several band selection and data reduction strategies. The evaluation of the classification results is done by assuming a fixed false alarm ratio and calculating the mean target-to-background ratio of correctly detected pixels. The results allow drawing conclusions about specific band combinations for certain target and background combinations.
Additionally, generally useful wavelength ranges are determined and the optimal number of principal components is analyzed.
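A toy version of the ACE statistic and of the linear target-mixing simulation; for brevity it assumes the data have already been whitened by the background covariance, in which case ACE reduces to the squared cosine of the angle between pixel and target signature:

```python
def ace(x, s):
    """Adaptive Coherence Estimator for whitened pixel x and target
    signature s: (s.x)^2 / (|x|^2 |s|^2), i.e. squared cosine similarity."""
    dot = sum(a * b for a, b in zip(x, s))
    nx = sum(a * a for a in x)
    ns = sum(b * b for b in s)
    return dot * dot / (nx * ns)

def mix(target, background, alpha):
    """Linear mixing used to simulate a subpixel target of fill fraction alpha."""
    return [alpha * t + (1 - alpha) * b for t, b in zip(target, background)]
```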

  17. Online Mapping and Perception Algorithms for Multi-robot Teams Operating in Urban Environments

    DTIC Science & Technology

    2015-01-01

    each method on a 2.53 GHz Intel i5 laptop. All our algorithms are hand-optimized, implemented in Java and single threaded. To determine which algorithm...approach would be to label all the pixels in the image with an x, y, z point. However, the angular resolution of the camera is finer than that of the...edge criterion. That is, each edge is either present or absent. In [42], edge existence is further screened by a fixed threshold for angular

  18. A proposed STAR microvertex detector using Active Pixel Sensors with some relevant studies on APS performance

    NASA Astrophysics Data System (ADS)

    Kleinfelder, S.; Li, S.; Bieser, F.; Gareus, R.; Greiner, L.; King, J.; Levesque, J.; Matis, H. S.; Oldenburg, M.; Ritter, H. G.; Retiere, F.; Rose, A.; Schweda, K.; Shabetai, A.; Sichtermann, E.; Thomas, J. H.; Wieman, H. H.; Bichsel, H.

    2006-09-01

A vertex detector that can measure particles with charm or bottom quarks would dramatically expand the physics capability of the STAR detector at RHIC. To accomplish this, we are proposing to build the Heavy Flavor Tracker (HFT) using 2×2 cm Active Pixel Sensors (APS). Ten of these APS chips will be arranged on a ladder (0.28% of a radiation length) at radii of 1.5 and 5.0 cm. We have examined several properties of APS chips so that we can characterize the performance of this detector. Using 1.5 GeV/c electrons, we have measured the charge collected and compared it to the expected charge. To achieve high efficiency, we have considered two different cluster-finding algorithms and found that the choice of algorithm depends on the noise level. We have demonstrated that a Scanning Electron Microscope can probe properties of an APS chip. In particular, we studied several position resolution algorithms. Finally, we studied the properties of pixel pitches from 5 to 30 μm.
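A simple seed-plus-centroid cluster finder of the kind compared in such studies; the 3×3 window and the threshold are illustrative assumptions, not the paper's actual algorithms:

```python
def find_clusters(frame, seed_thresh):
    """Seed-based cluster finder for a pixel sensor: any pixel above
    seed_thresh starts a 3x3 cluster, and the hit position is estimated by
    the charge-weighted centroid of the cluster (one of several possible
    position-resolution algorithms; the best choice depends on noise)."""
    hits = []
    rows, cols = len(frame), len(frame[0])
    for i in range(rows):
        for j in range(cols):
            if frame[i][j] <= seed_thresh:
                continue
            q = ci = cj = 0.0
            for r in range(max(0, i - 1), min(rows, i + 2)):
                for c in range(max(0, j - 1), min(cols, j + 2)):
                    q += frame[r][c]
                    ci += r * frame[r][c]
                    cj += c * frame[r][c]
            hits.append((ci / q, cj / q, q))     # (row, col, total charge)
    return hits
```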

  19. Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution

    PubMed Central

    Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin

    2016-01-01

The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied to numerous complex visual analyses in wild environments, such as visual surveillance and object recognition. However, the captured images/videos are often of low resolution and noisy, and such visual data cannot be used directly for advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using the expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves the upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior, and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit and implicit priors by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM, and visual perception. PMID:26927114
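A schematic of the E-step/M-step alternation only, with crude stand-ins (mean-filter denoising, block-mean back-projection) for the paper's learned non-local and geometric priors:

```python
def em_style_sr(lr, scale=2, iters=5):
    """Alternate a denoising 'M-step' with a data-consistency 'E-step' on a
    high-res estimate, so upscaling and denoising inform each other.
    Both steps are deliberately simplistic stand-ins for JPISR's priors."""
    h, w = len(lr) * scale, len(lr[0]) * scale
    # initial upscale by pixel replication
    hr = [[lr[i // scale][j // scale] for j in range(w)] for i in range(h)]
    for _ in range(iters):
        # "M-step" stand-in: 3x3 mean-filter denoising
        sm = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                nb = [hr[r][c]
                      for r in range(max(0, i - 1), min(h, i + 2))
                      for c in range(max(0, j - 1), min(w, j + 2))]
                sm[i][j] = sum(nb) / len(nb)
        hr = sm
        # "E-step" stand-in: shift each block so its mean matches the LR input
        for bi in range(len(lr)):
            for bj in range(len(lr[0])):
                cells = [(bi * scale + r, bj * scale + c)
                         for r in range(scale) for c in range(scale)]
                m = sum(hr[i][j] for i, j in cells) / len(cells)
                for i, j in cells:
                    hr[i][j] += lr[bi][bj] - m
    return hr
```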

  20. Example-Based Super-Resolution Fluorescence Microscopy.

    PubMed

    Jia, Shu; Han, Boran; Kutz, J Nathan

    2018-04-23

Capturing biological dynamics with high spatiotemporal resolution demands advances in imaging technologies. Super-resolution fluorescence microscopy offers spatial resolution surpassing the diffraction limit to resolve near-molecular-level details. While various strategies have been reported to improve the temporal resolution of super-resolution imaging, all super-resolution techniques are still fundamentally limited by the trade-off whereby longer image acquisition times are needed to obtain higher spatial information. Here, we demonstrate an example-based, computational method that aims to obtain super-resolution images from conventional imaging without increasing the imaging time. Given a low-resolution image input, the method provides an estimate of its super-resolution counterpart based on an example database that contains super- and low-resolution image pairs of biological structures of interest. The computational imaging of cellular microtubules agrees approximately with experimental super-resolution STORM results. This new approach may offer potential improvements in temporal resolution for experimental super-resolution fluorescence microscopy and provide a new path for large-data-aided biomedical imaging.
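The example-database lookup can be illustrated in one dimension; matching patches by squared distance is an assumption made for this sketch, and a real implementation would also blend overlapping high-resolution patches rather than just collecting them:

```python
def example_sr_patches(lr, pairs, patch=3):
    """Toy 1-D example-based super-resolution: each low-res patch of the
    input is matched against a database of (low-res, high-res) example
    pairs, and the high-res patch of its nearest example is selected."""
    chosen = []
    for i in range(len(lr) - patch + 1):
        window = lr[i:i + patch]
        best = min(pairs, key=lambda p: sum((a - b) ** 2
                                            for a, b in zip(p[0], window)))
        chosen.append(best[1])
    return chosen
```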

  1. Single-Image Super Resolution for Multispectral Remote Sensing Data Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Liebel, L.; Körner, M.

    2016-06-01

In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations (e.g., segmentation or feature extraction) can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, which make use of deep learning techniques such as convolutional neural networks (CNNs), can successfully be applied to remote sensing data. With a large amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable by conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to those of competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
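A pure-Python forward pass with the three-layer SRCNN-style shapes (9×9 patch extraction, 1×1 non-linear mapping, 5×5 reconstruction) often used for single-image super resolution; the constant weights below are placeholders, since real weights come from end-to-end training:

```python
def conv_valid(channels, banks, relu=True):
    """'valid' 2-D multi-channel convolution; banks[k][c] is the 2-D kernel
    that output filter k applies to input channel c."""
    h, w = len(channels[0]), len(channels[0][0])
    kh, kw = len(banks[0][0]), len(banks[0][0][0])
    out = []
    for bank in banks:
        plane = []
        for i in range(h - kh + 1):
            row = []
            for j in range(w - kw + 1):
                s = sum(bank[c][a][b] * channels[c][i + a][j + b]
                        for c in range(len(channels))
                        for a in range(kh) for b in range(kw))
                row.append(max(s, 0.0) if relu else s)
            plane.append(row)
        out.append(plane)
    return out

def srcnn_forward(img):
    """Forward pass with SRCNN layer shapes; placeholder constant weights."""
    k9 = [[[[1.0 / 81.0] * 9 for _ in range(9)]] for _ in range(2)]
    f1 = conv_valid([img], k9)                       # 2 feature maps, 9x9
    k1 = [[[[0.5]], [[0.5]]] for _ in range(2)]
    f2 = conv_valid(f1, k1)                          # 1x1 non-linear mapping
    k5 = [[[[1.0 / 50.0] * 5 for _ in range(5)] for _ in range(2)]]
    return conv_valid(f2, k5, relu=False)[0]         # 5x5 reconstruction
```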

  2. Imaging resolution and properties analysis of super resolution microscopy with parallel detection under different noise, detector and image restoration conditions

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu

    2018-06-01

Parallel detection, which can use the additional information of a pinhole-plane image taken at every excitation scan position, could be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and with different image restoration methods applied to parallel detection, in order to quantitatively compare imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution offers high efficiency and the best performance under all tested conditions, and is therefore expected to be of use for future routine biomedical research.
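A one-dimensional Richardson-Lucy iteration as a generic sketch of the deconvolution step (not the paper's exact pipeline): the estimate is repeatedly rescaled by the back-projected ratio between the data and the re-blurred estimate.

```python
def richardson_lucy(blurred, psf, iters=50):
    """1-D Richardson-Lucy deconvolution with a centred, normalized PSF."""
    n = len(blurred)
    est = [1.0] * n

    def convolve(signal, kernel):
        half = len(kernel) // 2
        return [sum(kernel[k] * signal[i + k - half]
                    for k in range(len(kernel))
                    if 0 <= i + k - half < n)
                for i in range(n)]

    psf_rev = psf[::-1]
    for _ in range(iters):
        reblur = convolve(est, psf)
        ratio = [d / r if r > 1e-12 else 0.0 for d, r in zip(blurred, reblur)]
        corr = convolve(ratio, psf_rev)      # back-projection of the ratio
        est = [e * c for e, c in zip(est, corr)]
    return est
```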

  3. A Model-Based Approach for the Measurement of Eye Movements Using Image Processing

    NASA Technical Reports Server (NTRS)

    Sung, Kwangjae; Reschke, Millard F.

    1997-01-01

This paper describes a video eye-tracking algorithm that searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as droopy eyelids and light reflections, while maintaining the measurement resolution available from the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search over pupil candidates using pixel-coordinate reference lookup tables optimizes the processing requirements for a least-squares fit of the circular disk model. This paper includes quantitative analyses and simulation results for the resolution and robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system that can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.
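The least-squares fit of a circular pupil model can be sketched with the algebraic (Kasa) method; this is a generic formulation for edge points on the pupil boundary, not necessarily the paper's exact estimator:

```python
import math

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method): solve the linear
    system from x^2 + y^2 = a*x + b*y + c, then recover the centre
    (a/2, b/2) and radius sqrt(c + a^2/4 + b^2/4)."""
    sxx = sxy = syy = sx = sy = sd = sxd = syd = 0.0
    n = len(points)
    for x, y in points:
        d = x * x + y * y
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        sxd += x * d; syd += y * d; sd += d
    # normal equations, solved by 3x3 Gauss-Jordan elimination with pivoting
    m = [[sxx, sxy, sx, sxd],
         [sxy, syy, sy, syd],
         [sx,  sy,  float(n), sd]]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    a, b, c = (m[i][3] / m[i][i] for i in range(3))
    return a / 2, b / 2, math.sqrt(c + a * a / 4 + b * b / 4)
```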

  4. Studying the Surfaces of the Icy Galilean Satellites With JIMO

    NASA Astrophysics Data System (ADS)

    Prockter, L.; Schenk, P.; Pappalardo, R.

    2003-12-01

    The Geology subgroup of the Jupiter Icy Moons Orbiter (JIMO) Science Definition Team (SDT) has been working with colleagues within the planetary science community to determine the key outstanding science goals that could be met by the JIMO mission. Geological studies of the Galilean satellites will benefit from the spacecraft's long orbital periods around each satellite, lasting from one to several months. This mission plan allows us to select the optimal viewing conditions to complete global compositional and morphologic mapping at high resolution, and to target geologic features of key scientific interest at very high resolution. Community input to this planning process suggests two major science objectives, along with corresponding measurements proposed to meet them. Objective 1: Determine the origins of surface features and their implications for geological history and evolution. This encompasses investigations of magmatism (intrusion, extrusion, and diapirism), tectonism (isostatic compensation, and styles of faulting, flexure and folding), impact cratering (morphology and distribution), and gradation (erosion and deposition) processes (impact gardening, sputtering, mass wasting and frosts). Suggested measurements to meet this goal include (1) two dimensional global topographic mapping sufficient to discriminate features at a spatial scale of 10 m, and with better than or equal to 1 m relative vertical accuracy, (2) nested images of selected target areas at a range of resolutions down to the submeter pixel scale, (3) global (albedo) mapping at better than or equal to 10 m/pixel, and (4) multispectral global mapping in at least 3 colors at better than or equal to 100 m/pixel, with some subsets at better than 30 m/pixel. Objective 2. Identify and characterize potential landing sites for future missions. 
A primary component to the success of future landed missions is full characterization of potential sites in terms of their relative age, geological interest, and engineering safety. Measurement requirements suggested to meet this goal (in addition to the requirements of Objective 1) include the acquisition of super-high resolution images of selected target areas (with intermediate context imaging) down to 25 cm/pixel scale. The Geology subgroup passed these recommendations to the full JIMO Science Definition Team, to be incorporated into the final science recommendations for the JIMO mission.

  5. Development of Multi-Sensor Global Cloud and Radiance Composites for DSCOVR EPIC Imager with Subpixel Definition

    NASA Technical Reports Server (NTRS)

    Khlopenkov, Konstantin V.; Duda, David; Thieman, Mandana; Sun-mack, Szedung; Su, Wenying; Minnis, Patrick; Bedka, Kristopher

    2017-01-01

    The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). EPIC delivers adequate spatial resolution imagery but only in shortwave bands (317-780 nm), while NISTAR measures the top-of-atmosphere (TOA) whole-disk radiance in shortwave and longwave broadband windows. Accurate calculation of albedo and outgoing longwave flux requires a high-resolution scene identification such as the radiance observations and cloud properties retrievals from low earth orbit (LEO, including NASA Terra and Aqua MODIS, Suomi-NPP VIIRS, and NOAA AVHRR) and geosynchronous (GEO, including GOES east and west, METEOSAT, INSAT-3D, MTSAT-2, and Himawari-8) satellite imagers. The cloud properties are derived using the Clouds and the Earth's Radiant Energy System (CERES) mission Cloud Subsystem group algorithms. These properties have to be co-located with EPIC pixels to provide the scene identification and to select anisotropic directional models (ADMs), which are then used to adjust the NISTAR-measured radiance and subsequently obtain the global daytime shortwave and longwave fluxes. This work presents an algorithm for optimal merging of selected radiance and cloud property parameters derived from multiple satellite imagers to obtain seamless global hourly composites at 5-km resolution. Selection of satellite data for each 5-km pixel is based on an aggregated rating that incorporates five parameters: nominal satellite resolution, pixel time relative to the EPIC time, viewing zenith angle, distance from day/night terminator, and probability of sun glint. 
To provide a smoother transition in the merged output, in regions where candidate pixel data from two satellite sources have comparable aggregated rating, the selection decision is defined by the cumulative function of the normal distribution so that abrupt changes in the visual appearance of the composite data are avoided. Higher spatial accuracy in the composite product is achieved by using the inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling.
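A toy version of the rating-and-blending logic: the penalty normalizations and weights are invented for the sketch, but the smooth hand-off uses the cumulative normal distribution as the abstract describes:

```python
import math

def aggregated_rating(res_km, dt_min, vza_deg, term_deg, glint_prob,
                      weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Toy aggregated rating over the five parameters named in the text
    (nominal resolution, time offset from EPIC, viewing zenith angle,
    distance from the terminator, sun-glint probability). Higher is better;
    the normalizations are illustrative, not the operational ones."""
    penalties = (res_km / 5.0, abs(dt_min) / 60.0, vza_deg / 90.0,
                 max(0.0, 1.0 - term_deg / 30.0), glint_prob)
    return -sum(w * p for w, p in zip(weights, penalties))

def selection_weight(rating_a, rating_b, sigma=0.5):
    """Weight given to source A: the normal CDF of the rating difference,
    so nearly equal ratings blend ~50/50 while large differences select
    one source outright, avoiding abrupt seams in the composite."""
    z = (rating_a - rating_b) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))
```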

  6. The Super-linear Slope of the Spatially Resolved Star Formation Law in NGC 3521 and NGC 5194 (M51a)

    NASA Astrophysics Data System (ADS)

    Liu, Guilin; Koda, Jin; Calzetti, Daniela; Fukuhara, Masayuki; Momose, Rieko

    2011-07-01

We have conducted interferometric observations with the Combined Array for Research in Millimeter Astronomy (CARMA) and an on-the-fly mapping with the 45 m telescope at Nobeyama Radio Observatory (NRO45) in the CO (J = 1-0) emission line of the nearby spiral galaxy NGC 3521. Using the new combined CARMA + NRO45 data of NGC 3521, together with similar data for NGC 5194 (M51a) and archival SINGS Hα, 24 μm, THINGS H I, and Galaxy Evolution Explorer/Far-UV (FUV) data for these two galaxies, we investigate the empirical scaling law that connects the surface density of star formation rate (SFR) and cold gas (known as the Schmidt-Kennicutt law or S-K law) on a spatially resolved basis and find a super-linear slope for the S-K law when carefully subtracting the background emission in the SFR image. We argue that plausibly deriving SFR maps of nearby galaxies requires the diffuse stellar and dust background emission to be subtracted carefully (especially in the mid-infrared and, to a lesser extent, in the FUV). Applying this approach, we perform a pixel-by-pixel analysis on both galaxies and quantitatively show that the controversy over whether the molecular S-K law (expressed as Σ_SFR ∝ Σ_H2^γ_H2) is super-linear or basically linear comes down to removing or preserving the local background. In both galaxies, the power index of the molecular S-K law is super-linear (γ_H2 ≳ 1.5) at the highest available resolution (~230 pc) and decreases monotonically with decreasing resolution. We also find in both galaxies that the scatter of the molecular S-K law (σ_H2) monotonically increases as the resolution becomes higher, indicating a trend for the S-K law to break down below some scale.
Both γ_H2 and σ_H2 are systematically larger in M51a than in NGC 3521, but when plotted against the de-projected scale (δ_dp), both quantities become highly consistent between the two galaxies, tentatively suggesting that the sub-kpc molecular S-K law in spiral galaxies depends only on the scale being considered, without varying among spiral galaxies. A logarithmic function γ_H2 = -1.1 log[δ_dp/kpc] + 1.4 and a linear relation σ_H2 = -0.2 [δ_dp/kpc] + 0.7 are obtained by fitting to the M51a data; both describe the two galaxies impressively well on sub-kpc scales. A larger sample of galaxies with better sensitivity, resolution, and a broader field of view is required to test the general applicability of these relations.
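The power-law index γ is, at heart, a slope in log-log space; a minimal pixel-by-pixel fit (with the background subtraction and uncertainty weighting of the actual analysis omitted) looks like:

```python
import math

def sk_slope(sigma_gas, sigma_sfr):
    """Least-squares slope of log10(Sigma_SFR) vs log10(Sigma_gas), i.e. the
    power-law index gamma of Sigma_SFR ∝ Sigma_gas^gamma, fitted over a set
    of pixel surface densities."""
    xs = [math.log10(g) for g in sigma_gas]
    ys = [math.log10(s) for s in sigma_sfr]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```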

  7. Experimental Study of Super-Resolution Using a Compressive Sensing Architecture

    DTIC Science & Technology

    2015-03-01

    Intelligence 24(9), 1167–1183 (2002). [3] Lin, Z. and Shum, H.-Y., “Fundamental limits of reconstruction-based superresolution algorithms under local...IEEE Transactions on 52, 1289–1306 (April 2006). [9] Marcia, R. and Willett, R., “Compressive coded aperture superresolution image reconstruction,” in

  8. Temporally flickering nanoparticles for compound cellular imaging and super resolution

    NASA Astrophysics Data System (ADS)

    Ilovitsh, Tali; Danan, Yossef; Meir, Rinat; Meiri, Amihai; Zalevsky, Zeev

    2016-03-01

This work presents the use of flickering nanoparticles for imaging biological samples. The method has high noise immunity and enables the detection of overlapping types of gold nanoparticles (GNPs) at significantly sub-diffraction distances, making it attractive for super-resolving localization microscopy techniques. The method utilizes a lock-in technique in which the sample is imaged with laser beams time-modulated at distinct frequencies, one per type of GNP labeling the sample, exciting temporal flickering of the scattered light at known temporal frequencies. The final image, in which the GNPs are spatially separated, is obtained by post-processing that extracts the spectral components corresponding to the different modulation frequencies. This allows the simultaneous super-resolved imaging of multiple types of GNPs that label targets of interest within biological samples. Additionally, applying the K-factor image decomposition algorithm as a post-processing step can further improve the performance of the proposed approach.
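The lock-in idea for one pixel's recorded intensity trace can be sketched as follows (a toy stand-in for the paper's spectral post-processing): correlating against sine/cosine references at each known modulation frequency recovers the contribution of each GNP type.

```python
import math

def demodulate(signal, freqs, fs):
    """Lock-in style demodulation: project a sampled intensity trace onto
    sine/cosine references at each modulation frequency and return the
    recovered amplitude per frequency (exact when the trace spans an
    integer number of cycles)."""
    n = len(signal)
    amps = []
    for f in freqs:
        c = sum(s * math.cos(2 * math.pi * f * k / fs)
                for k, s in enumerate(signal))
        q = sum(s * math.sin(2 * math.pi * f * k / fs)
                for k, s in enumerate(signal))
        amps.append(2.0 * math.hypot(c, q) / n)
    return amps
```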

  9. Quantifying the Uncertainty in High Spatial and Temporal Resolution Synthetic Land Surface Reflectance at Pixel Level Using Ground-Based Measurements

    NASA Astrophysics Data System (ADS)

    Kong, J.; Ryu, Y.

    2017-12-01

Algorithms for fusing high-temporal-frequency and high-spatial-resolution satellite images are widely used to develop dense time-series land surface observations. While many studies have shown that synthesized high-spatial-resolution images at high frequency can be successfully applied to vegetation mapping and monitoring, validation and correction of fused images have received less attention than they deserve. To evaluate the precision of a fused image at the pixel level, in-situ reflectance measurements that can account for pixel-level heterogeneity are necessary. In this study, synthetic images of land surface reflectance were predicted from coarse high-frequency images acquired from MODIS and high-spatial-resolution images from Landsat-8 OLI using the Flexible Spatiotemporal Data Fusion (FSDAF) method. Ground-based reflectance was measured with a JAZ spectrometer (Ocean Optics, Dunedin, FL, USA) over a rice paddy during five main growth stages in Cheorwon-gun, Republic of Korea, where landscape heterogeneity changes through the growing season. After analyzing the spatial heterogeneity and seasonal variation of land surface reflectance based on the ground measurements, the uncertainties of the fused images were quantified at the pixel level. Finally, this relationship was applied to correct the fused reflectance images and build a seasonal time series of rice paddy surface reflectance. This dataset could be significant for rice planting area extraction, phenological stage detection, and variable estimation.

  10. Performance of In-Pixel Circuits for Photon Counting Arrays (PCAs) Based on Polycrystalline Silicon TFTs

    PubMed Central

    Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua; Street, Robert A.; Lu, Jeng Ping

    2017-01-01

    Photon counting arrays (PCAs), defined as pixelated imagers which measure the absorbed energy of x-ray photons individually and record this information digitally, are of increasing clinical interest. A number of PCA prototypes with a 1 mm pixel-to-pixel pitch have recently been fabricated with polycrystalline silicon (poly-Si) — a thin-film technology capable of creating monolithic imagers of a size commensurate with human anatomy. In this study, analog and digital simulation frameworks were developed to provide insight into the influence of individual poly-Si transistors on pixel circuit performance — information that is not readily available through empirical means. The simulation frameworks were used to characterize the circuit designs employed in the prototypes. The analog framework, which determines the noise produced by individual transistors, was used to estimate energy resolution, as well as to identify which transistors contribute the most noise. The digital framework, which analyzes how well circuits function in the presence of significant variations in transistor properties, was used to estimate how fast a circuit can produce an output (referred to as output count rate). In addition, an algorithm was developed and used to estimate the minimum pixel pitch that could be achieved for the pixel circuits of the current prototypes. The simulation frameworks predict that the analog component of the PCA prototypes could have energy resolution as low as 8.9% FWHM at 70 keV; and the digital components should work well even in the presence of significant TFT variations, with the fastest component having output count rates as high as 3 MHz. Finally, based on conceivable improvements in the underlying fabrication process, the algorithm predicts that the 1 mm pitch of the current PCA prototypes could be reduced significantly, potentially to between ~240 and 290 μm. PMID:26878107

  11. Performance of in-pixel circuits for photon counting arrays (PCAs) based on polycrystalline silicon TFTs.

    PubMed

    Liang, Albert K; Koniczek, Martin; Antonuk, Larry E; El-Mohri, Youcef; Zhao, Qihua; Street, Robert A; Lu, Jeng Ping

    2016-03-07

    Photon counting arrays (PCAs), defined as pixelated imagers which measure the absorbed energy of x-ray photons individually and record this information digitally, are of increasing clinical interest. A number of PCA prototypes with a 1 mm pixel-to-pixel pitch have recently been fabricated with polycrystalline silicon (poly-Si)-a thin-film technology capable of creating monolithic imagers of a size commensurate with human anatomy. In this study, analog and digital simulation frameworks were developed to provide insight into the influence of individual poly-Si transistors on pixel circuit performance-information that is not readily available through empirical means. The simulation frameworks were used to characterize the circuit designs employed in the prototypes. The analog framework, which determines the noise produced by individual transistors, was used to estimate energy resolution, as well as to identify which transistors contribute the most noise. The digital framework, which analyzes how well circuits function in the presence of significant variations in transistor properties, was used to estimate how fast a circuit can produce an output (referred to as output count rate). In addition, an algorithm was developed and used to estimate the minimum pixel pitch that could be achieved for the pixel circuits of the current prototypes. The simulation frameworks predict that the analog component of the PCA prototypes could have energy resolution as low as 8.9% full width at half maximum (FWHM) at 70 keV; and the digital components should work well even in the presence of significant thin-film transistor (TFT) variations, with the fastest component having output count rates as high as 3 MHz. Finally, based on conceivable improvements in the underlying fabrication process, the algorithm predicts that the 1 mm pitch of the current PCA prototypes could be reduced significantly, potentially to between ~240 and 290 μm.

  12. Super-resolution Doppler beam sharpening method using fast iterative adaptive approach-based spectral estimation

    NASA Astrophysics Data System (ADS)

    Mao, Deqing; Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu

    2018-01-01

Doppler beam sharpening (DBS) is a critical technology for airborne radar ground mapping in the forward-squint region. In conventional DBS technology, the narrow-band Doppler filter groups formed by the fast Fourier transform (FFT) method suffer from low spectral resolution and high side lobe levels. The iterative adaptive approach (IAA), based on weighted least squares (WLS), is applied to DBS imaging applications, forming narrower Doppler filter groups than the FFT with lower side lobe levels. Regrettably, the IAA is iterative and requires matrix multiplication and inversion when forming the covariance matrix and its inverse and when traversing the WLS estimate for each sampling point, resulting in a notably high, cubic-time computational complexity. We propose a fast IAA (FIAA)-based super-resolution DBS imaging method that takes advantage of the rich matrix structures of classical narrow-band filtering. First, we formulate the covariance matrix via the FFT instead of the conventional matrix multiplication operation, based on the typical Fourier structure of the steering matrix. Then, by exploiting the Gohberg-Semencul representation, the inverse of the Toeplitz covariance matrix is computed by the celebrated Levinson-Durbin (LD) and Toeplitz-vector algorithms. Finally, the FFT and the fast Toeplitz-vector algorithm are further used to traverse the WLS estimates based on data-dependent trigonometric polynomials. The method uses the Hermitian structure of the echo autocorrelation matrix R to achieve its fast solution and the Toeplitz structure of R to realize its fast inversion. The proposed method enjoys a lower computational complexity without performance loss compared with the conventional IAA-based super-resolution DBS imaging method. Results based on simulations and measured data verify the imaging performance and the operational efficiency.
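The Toeplitz structure is what buys the speed-up; for instance, the Durbin recursion solves a symmetric Toeplitz (Yule-Walker) system in O(n²) rather than the O(n³) of a general solver. This is a sketch of that general principle, separate from the paper's exact Gohberg-Semencul machinery:

```python
def durbin(r):
    """Durbin recursion: given autocorrelations r = [r0, r1, ..., rn],
    solve T y = -[r1, ..., rn] for y, where T is the symmetric
    positive-definite Toeplitz matrix with first column [r0, ..., r(n-1)].
    Runs in O(n^2) time by exploiting the Toeplitz structure."""
    r = [v / r[0] for v in r]            # normalize so r0 = 1 (y unchanged)
    y = [-r[1]]
    beta, alpha = 1.0, -r[1]
    for k in range(1, len(r) - 1):
        beta *= 1.0 - alpha * alpha
        alpha = -(r[k + 1] + sum(r[k - i] * y[i] for i in range(k))) / beta
        y = [y[i] + alpha * y[k - 1 - i] for i in range(k)] + [alpha]
    return y
```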

  13. The NASA/GEWEX Surface Radiation Budget Release 4 Integrated Product: An Assessment of Improvements in Algorithms and Inputs

    NASA Astrophysics Data System (ADS)

    Stackhouse, P. W., Jr.; Cox, S. J.; Mikovitz, J. C.; Zhang, T.; Gupta, S. K.

    2016-12-01

    The NASA/GEWEX Surface Radiation Budget (SRB) project produces, validates and analyzes shortwave and longwave surface and top of atmosphere radiative fluxes for the 1983-near present time period. The current release 3.0/3.1 consists of 1x1 degree radiative fluxes (available at gewex-srb.larc.nasa.gov) and is produced using the International Satellite Cloud Climatology Project (ISCCP) DX product for pixel level radiance and cloud information. This ISCCP DX product is subsampled to 30 km. ISCCP is currently recalibrating and reprocessing its entire data series, to be released as the H product series, with its highest resolution at 10 km pixel resolution. The nine-fold increase in the number of pixels will allow SRB to produce a higher resolution gridded product (e.g. 0.5 degree or higher), as well as the production of pixel-level fluxes. Other key input improvements include a detailed aerosol history using the Max Planck Institute Aerosol Climatology (MAC), temperature and moisture profiles from HIRS, and new topography, surface type, and snow/ice maps. Here we present results for the improved GEWEX Shortwave and Longwave algorithms (GSW and GLW) with new ISCCP data (for at least 5 years, 2005-2009), various other improved input data sets and incorporation of many additional internal SRB model improvements. We assess the radiative fluxes from the new SRB products and contrast these at various resolutions. All these fluxes are compared to both surface measurements and to CERES SYN1Deg and EBAF data products for assessment of the effect of improvements. The SRB data produced will be released as part of the Release 4.0 Integrated Product that shares key input and output quantities with other GEWEX global products providing estimates of the Earth's global water and energy cycle (i.e., ISCCP, SeaFlux, LandFlux, NVAP, etc.).

  14. Smart sensors II; Proceedings of the Seminar, San Diego, CA, July 31, August 1, 1980

    NASA Astrophysics Data System (ADS)

    Barbe, D. F.

    1980-01-01

    Topics discussed include technology for smart sensors, smart sensors for tracking and surveillance, and techniques and algorithms for smart sensors. Papers are presented on the application of very large scale integrated circuits to smart sensors, imaging charge-coupled devices for deep-space surveillance, ultra-precise star tracking using charge coupled devices, and automatic target identification of blurred images with super-resolution features. Attention is also given to smart sensors for terminal homing, algorithms for estimating image position, and the computational efficiency of multiple image registration algorithms.

  15. Advanced Topics in Space Situational Awareness

    DTIC Science & Technology

    2007-11-07

    ...super-resolution." Such optical superresolution is characteristic of many model-based image processing algorithms, and reflects the incorporation of... Sampling Theorem," J. Opt. Soc. Am. A, vol. 24, 311-325 (2007). [39] S. Prasad, "Digital and Optical Superresolution of Low-Resolution Image Sequences," ...wavefront coding for the specific application of extension of image depth well beyond what is possible in a standard imaging system. The problem of optical

  16. Optimized multiple linear mappings for single image super-resolution

    NASA Astrophysics Data System (ADS)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized as an effective way for example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m-nearest neighbors in the training set. Thorough experimental results carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
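The core idea of multiple-linear-mappings SR can be sketched in a few lines (a toy illustration under assumed shapes, not the paper's EM-optimized pipeline): each cluster owns one linear LR-to-HR regressor, and the patch is mapped by the regressor whose cluster it falls into.

```python
import numpy as np

def apply_mlm_sr(lr_patch, centers, mappings):
    """Toy multiple-linear-mappings upscaling: pick the linear regressor
    whose cluster center is closest to the LR patch, then map LR -> HR.
    centers: (K, d_lr) cluster centers; mappings: (K, d_hr, d_lr)
    per-cluster regression matrices. Shapes are illustrative assumptions."""
    k = np.argmin(np.linalg.norm(centers - lr_patch, axis=1))
    return mappings[k] @ lr_patch
```

In the paper the cluster choice is refined further by accumulating reconstruction errors over m-nearest neighbors rather than a single nearest-center rule.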

  17. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Zhang, Z.; Ackerman, S. A.; Maddux, B. C.

    2012-12-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges (defined by immediate adjacency to "clear" MOD/MYD35 pixels) as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  18. Measurements with MÖNCH, a 25 μm pixel pitch hybrid pixel detector

    NASA Astrophysics Data System (ADS)

    Ramilli, M.; Bergamaschi, A.; Andrae, M.; Brückner, M.; Cartier, S.; Dinapoli, R.; Fröjdh, E.; Greiffenberg, D.; Hutwelker, T.; Lopez-Cuenca, C.; Mezza, D.; Mozzanica, A.; Ruat, M.; Redford, S.; Schmitt, B.; Shi, X.; Tinti, G.; Zhang, J.

    2017-01-01

    MÖNCH is a hybrid silicon pixel detector based on charge integration and with analog readout, featuring a pixel size of 25×25 μm2. The latest working prototype consists of an array of 400×400 identical pixels for a total active area of 1×1 cm2. Its design is optimized for the single photon regime. An exhaustive characterization of this large area prototype has been carried out in the past months, and it confirms an ENC in the order of 35 electrons RMS and a dynamic range of ~4×12 keV photons in high gain mode, which increases to ~100×12 keV photons with the lowest gain setting. The low noise levels of MÖNCH make it a suitable candidate for X-ray detection at energies around 1 keV and below. Imaging applications in particular can benefit significantly from the use of MÖNCH: due to its extremely small pixel pitch, the detector intrinsically offers excellent position resolution. Moreover, in low flux conditions, charge sharing between neighboring pixels allows the use of position interpolation algorithms which grant a resolution at the micrometer-level. Its energy reconstruction and imaging capabilities have been tested for the first time at a low energy beamline at PSI, with photon energies between 1.75 keV and 3.5 keV, and results will be shown.

  19. High spatiotemporal resolution measurement of regional lung air volumes from 2D phase contrast x-ray images.

    PubMed

    Leong, Andrew F T; Fouras, Andreas; Islam, M Sirajul; Wallace, Megan J; Hooper, Stuart B; Kitchen, Marcus J

    2013-04-01

    Described herein is a new technique for measuring regional lung air volumes from two-dimensional propagation-based phase contrast x-ray (PBI) images at very high spatial and temporal resolution. Phase contrast dramatically increases lung visibility and the outlined volumetric reconstruction technique quantifies dynamic changes in respiratory function. These methods can be used for assessing pulmonary disease and injury and for optimizing mechanical ventilation techniques for preterm infants using animal models. The volumetric reconstruction combines the algorithms of temporal subtraction and single image phase retrieval (SIPR) to isolate the image of the lungs from the thoracic cage in order to measure regional lung air volumes. The SIPR algorithm was used to recover the change in projected thickness of the lungs on a pixel-by-pixel basis (pixel dimensions ≈ 16.2 μm). The technique has been validated using numerical simulation and compared results of measuring regional lung air volumes with and without the use of temporal subtraction for removing the thoracic cage. To test this approach, a series of PBI images of newborn rabbit pups mechanically ventilated at different frequencies was employed. Regional lung air volumes measured from PBI images of newborn rabbit pups showed on average an improvement of at least 20% in 16% of pixels within the lungs in comparison to that measured without the use of temporal subtraction. The majority of pixels that showed an improvement was found to be in regions occupied by bone. Applying the volumetric technique to sequences of PBI images of newborn rabbit pups, it is shown that lung aeration at birth can be highly heterogeneous. This paper presents an image segmentation technique based on temporal subtraction that has successfully been used to isolate the lungs from PBI chest images, allowing the change in lung air volume to be measured over regions as small as the pixel size. 
Using this technique, it is possible to measure changes in regional lung volume at high spatial and temporal resolution during breathing at much lower x-ray dose than would be required using computed tomography.

  20. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    PubMed Central

    Wang, Guizhou; Liu, Jianbo; He, Guojin

    2013-01-01

    This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
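The spatial mapping step with the area dominant principle can be sketched compactly (an illustrative reimplementation, not the authors' code; array shapes and the threshold default are assumptions):

```python
import numpy as np

def map_classes_to_segments(pixel_classes, segments, threshold=0.6,
                            unclassified=-1):
    """Area-dominant spatial mapping: each segment takes the majority
    pixel-level class, unless the majority class's area proportion falls
    below `threshold`, in which case the segment is left unclassified
    (to be reclassified later, e.g. by minimum distance to mean)."""
    out = np.full_like(pixel_classes, unclassified)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        labels, counts = np.unique(pixel_classes[mask], return_counts=True)
        if counts.max() / mask.sum() >= threshold:
            out[mask] = labels[counts.argmax()]
    return out
```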

  1. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak Poyneer (2003); Löfdahl (2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors Sjödahl (1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed by using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola Poyneer (2003); quadratic polynomial Löfdahl (2010); threshold center of gravity Bailey (2003); Gaussian Nobach & Honkanen (2005) and Pyramid Bailey (2003). The systematic error study reveals that the pyramid fit is the most robust to pixel locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed, in which the image sampling is increased prior to the actual correlation matching.
The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel level grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The generation of these sub-pixel grid based region-of-interest images is achieved with bi-cubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography Sjödahl (1994); this technique is applied here to solar wavefront sensing. A large dynamic range and a better accuracy in the measurements are achieved by combining original-pixel-grid correlation matching over a large field of view with sub-pixel interpolated-grid correlation matching within a small field of view. The results revealed that the proposed method outperforms all the different peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5 times improved image sampling is used. This is achieved at the expense of twice the computational cost. With the 5 times improved image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing an appropriate increase of image sampling as a trade-off between the computational speed limitation and the aimed sub-pixel image shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source; a laser guide star; a Galactic Center extended scene). 
The results are planned for submission to the Optics Express journal.
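The two-step estimate described above (integer-pixel correlation peak, then a peak-finding refinement) can be illustrated in one dimension; this is a generic sketch using the parabola peak fit, not the authors' 2-D bi-cubic interpolation pipeline:

```python
import numpy as np

def subpixel_shift(ref, tgt):
    """Estimate the shift of `tgt` relative to `ref` (1-D illustration):
    step 1, integer-pixel peak of the full cross-correlation;
    step 2, three-point parabolic fit around the peak for sub-pixel
    precision (one of the peak-finding algorithms discussed above)."""
    n = len(ref)
    corr = np.correlate(tgt, ref, mode="full")   # lags -(n-1) .. (n-1)
    peak = corr.argmax()
    lag = peak - (n - 1)
    if 0 < peak < len(corr) - 1:
        c_m, c_0, c_p = corr[peak - 1], corr[peak], corr[peak + 1]
        denom = c_m - 2 * c_0 + c_p
        if denom != 0:
            lag = lag + 0.5 * (c_m - c_p) / denom  # parabolic refinement
    return float(lag)
```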

  2. Resolution enhancement of low-quality videos using a high-resolution frame

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain [6] because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structure vector quantization [16] and coherence search [1] are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.

  3. Mixing geometric and radiometric features for change classification

    NASA Astrophysics Data System (ADS)

    Fournier, Alexandre; Descombes, Xavier; Zerubia, Josiane

    2008-02-01

    Most basic change detection algorithms use a pixel-based approach. Whereas such an approach is quite well suited for monitoring large-area changes (such as urban growth) in low resolution images, an object-based approach seems more relevant when change detection is specifically aimed at targets (such as small buildings and vehicles). In this paper, we present an approach that mixes radiometric and geometric features to qualify the changed zones. The goal is to establish links (appearance, disappearance, substitution ...) between the detected changes and the underlying objects. We proceed by first clustering the change map (containing each pixel's bitemporal radiometry) into different classes using the entropy-kmeans algorithm. Assuming that most man-made objects have a polygonal shape, a polygonal approximation algorithm is then used to characterize the resulting zone shapes, allowing us to refine the initial rough classification by integrating the polygon orientations into the state space. Tests are currently being conducted on Quickbird data.

  4. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies

    NASA Astrophysics Data System (ADS)

    Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad

    2018-06-01

    Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the result of further processing in urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadow and water bodies. The pixel-based results are then further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate this new approach. The results show the superiority of the proposed method over several state-of-the-art shadow detection methods, with an average F-measure of 96%.

  5. Vertex shading of the three-dimensional model based on ray-tracing algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Ray tracing is one of the research hotspots in photorealistic graphics. It is an important light and shadow technology in many industries working with three-dimensional (3D) structure, such as aerospace, games, video and so on. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented that colors and renders vertices of the 3D model directly. Rendering results depend on the degree of subdivision of the 3D model. A good light and shade effect is achieved by using a quad-tree data structure to adaptively subdivide a triangle according to the brightness difference of its vertices. The uniform grid algorithm is adopted to improve rendering efficiency. Besides, the rendering time is independent of the screen resolution. In theory, as long as the subdivision of a model is adequate, effects equal to those of pixel shading will be obtained. In practical applications, a compromise can be struck between efficiency and effectiveness.

  6. Non-heuristic automatic techniques for overcoming low signal-to-noise-ratio bias of localization microscopy and multiple signal classification algorithm.

    PubMed

    Agarwal, Krishna; Macháň, Radek; Prasad, Dilip K

    2018-03-21

    Localization microscopy and the multiple signal classification algorithm use a temporal stack of image frames of sparse emissions from fluorophores to provide super-resolution images. Localization microscopy localizes emissions in each image independently and later collates the localizations from all the frames, giving the same weight to each frame irrespective of its signal-to-noise ratio. This results in a bias towards frames with low signal-to-noise ratio and causes a cluttered background in the super-resolved image. User-defined heuristic computational filters are employed to remove a set of localizations in an attempt to overcome this bias. Multiple signal classification performs eigen-decomposition of the entire stack, irrespective of the relative signal-to-noise ratios of the frames, and uses a threshold to classify eigenimages into signal and null subspaces. This results in under-representation of frames with low signal-to-noise ratio in the signal space and over-representation in the null space. Thus, the multiple signal classification algorithm is biased against frames with low signal-to-noise ratio, resulting in suppression of the corresponding fluorophores. This paper presents techniques to automatically debias localization microscopy and the multiple signal classification algorithm of these biases without compromising their resolution and without employing heuristic, user-defined criteria. The effect of debiasing is demonstrated through five datasets of in vitro and fixed cell samples.

  7. A weighted optimization approach to time-of-flight sensor fusion.

    PubMed

    Schwarz, Sebastian; Sjostrom, Marten; Olsson, Roger

    2014-01-01

    Acquiring scenery depth is a fundamental task in computer vision, with many applications in manufacturing, surveillance, or robotics relying on accurate scenery information. Time-of-flight cameras can provide depth information in real-time and overcome shortcomings of traditional stereo analysis. However, they provide limited spatial resolution, and sophisticated upscaling algorithms are sought after. In this paper, we present a sensor fusion approach to time-of-flight super resolution, based on the combination of depth and texture sources. Unlike other texture guided approaches, we interpret the depth upscaling process as a weighted energy optimization problem. Three different weights are introduced, employing different available sensor data. The individual weights address object boundaries in depth, depth sensor noise, and temporal consistency. Applied in consecutive order, they form three weighting strategies for time-of-flight super resolution. Objective evaluations show advantages in depth accuracy and for depth image based rendering compared with state-of-the-art depth upscaling. Subjective view synthesis evaluation shows a significant increase in viewer preference, by a factor of four, in stereoscopic viewing conditions. To the best of our knowledge, this is the first extensive subjective test performed on time-of-flight depth upscaling. Objective and subjective results prove the suitability of our time-of-flight super resolution approach for depth scenery capture.

  8. Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique

    NASA Technical Reports Server (NTRS)

    Wargo, M. J.; Witt, A. F.

    1992-01-01

    A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.

  9. Automatic determination of the artery vein ratio in retinal images

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; van Ginneken, Bram; Abràmoff, Michael D.

    2010-03-01

    A lower ratio between the width of the arteries and veins (Arteriolar-to-Venular diameter Ratio, AVR) on the retina is well established to be predictive of stroke and other cardiovascular events in adults, as well as an increased risk of retinopathy of prematurity in premature infants. This work presents an automatic method that detects the location of the optic disc, determines the appropriate region of interest (ROI), classifies the vessels in the ROI into arteries and veins, measures their widths and calculates the AVR. After vessel segmentation and vessel width determination the optic disc is located and the system eliminates all vessels outside the AVR measurement ROI. The remaining vessels are thinned, and vessel crossing and bifurcation points are removed, leaving a set of vessel segments containing centerline pixels. Features are extracted from each centerline pixel and used to assign it a soft label indicating the likelihood that the pixel is part of a vein. As all centerline pixels in a connected segment should be of the same type, the median soft label is assigned to each centerline pixel in the segment. Next, artery-vein pairs are matched using an iterative algorithm, and the widths of the vessels are used to calculate the AVR. We train and test the algorithm using a set of 25 high resolution digital color fundus photographs and a reference standard that indicates, for the major vessels in the images, whether they are an artery or a vein. We compared the AVR values produced by our system with those determined using a computer assisted method in 15 high resolution digital color fundus photographs and obtained a correlation coefficient of 0.881.
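The median soft-label vote and the final ratio can be sketched as follows (a deliberately simplified toy, assuming one scalar width per segment; the paper's actual method matches artery-vein pairs iteratively rather than averaging all widths):

```python
import numpy as np

def segment_vessel_labels(soft_labels):
    """Assign each connected segment the median of its per-pixel soft
    'vein-ness' labels, as the abstract describes."""
    return {seg: float(np.median(vals)) for seg, vals in soft_labels.items()}

def avr(widths, soft_labels, vein_threshold=0.5):
    """Toy AVR: mean artery width / mean vein width after the median
    soft-label vote. `vein_threshold` is an assumed cut-off."""
    seg_label = segment_vessel_labels(soft_labels)
    arteries = [widths[s] for s, l in seg_label.items() if l <= vein_threshold]
    veins = [widths[s] for s, l in seg_label.items() if l > vein_threshold]
    return float(np.mean(arteries) / np.mean(veins))
```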

  10. Pre-launch Performance Assessment of the VIIRS Ice Surface Temperature Algorithm

    NASA Astrophysics Data System (ADS)

    Ip, J.; Hauss, B.

    2008-12-01

    The VIIRS Ice Surface Temperature (IST) environmental data product provides the surface temperature of sea-ice at VIIRS moderate resolution (750m) during both day and night. To predict the IST, the retrieval algorithm utilizes a split-window approach with Long-wave Infrared (LWIR) channels at 10.76 μm (M15) and 12.01 μm (M16) to correct for atmospheric water vapor. The split-window approach using these LWIR channels is AVHRR and MODIS heritage, where the MODIS formulation has a slightly modified functional form. The algorithm relies on the VIIRS Cloud Mask IP for identifying cloudy and ocean pixels, the VIIRS Ice Concentration IP for identifying ice pixels, and the VIIRS Aerosol Optical Thickness (AOT) IP for excluding pixels with AOT greater than 1.0. In this paper, we will report the pre-launch performance assessment of the IST retrieval. We have taken two separate approaches to perform this assessment, one based on global synthetic data and the other based on proxy data from Terra MODIS. Results of the split-window algorithm have been assessed by comparison either to synthetic "truth" or results of the MODIS retrieval. We will also show that the results of the assessment with proxy data are consistent with those obtained using the global synthetic data.
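The AVHRR/MODIS-heritage split-window form referred to above is, generically, IST = a + b·T11 + c·(T11 − T12), where T11 and T12 are the two LWIR brightness temperatures. A minimal sketch follows; the coefficients here are placeholders, not the operational regression values, which are regime-dependent fits:

```python
def split_window_ist(t11, t12, coeffs=(1.0, 1.0, 2.0)):
    """Generic split-window surface temperature retrieval:
    IST = a + b*T11 + c*(T11 - T12), in kelvin.
    The brightness-temperature difference term corrects for atmospheric
    water vapor. The default coefficients are illustrative placeholders."""
    a, b, c = coeffs
    return a + b * t11 + c * (t11 - t12)
```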

  11. Assessing the impact of background spectral graph construction techniques on the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.; Albano, James A.; Basener, William F.

    2012-06-01

    Anomaly detection algorithms have historically been applied to hyperspectral imagery in order to identify pixels whose material content is incongruous with the background material in the scene. Typically, the application involves extracting man-made objects from natural and agricultural surroundings. A large challenge in designing these algorithms is determining which pixels initially constitute the background material within an image. The topological anomaly detection (TAD) algorithm constructs a graph theory-based, fully non-parametric topological model of the background in the image scene, and uses codensity to measure deviation from this background. In TAD, the initial graph theory structure of the image data is created by connecting an edge between any two pixel vertices x and y if the Euclidean distance between them is less than some resolution r. While this type of proximity graph is among the most well-known approaches to building a geometric graph based on a given set of data, there is a wide variety of different geometrically-based techniques. In this paper, we present a comparative test of the performance of TAD across four different constructs of the initial graph: mutual k-nearest neighbor graph, sigma-local graph for two different values of σ > 1, and the proximity graph originally implemented in TAD.
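The proximity graph construction described above is straightforward to sketch (a brute-force O(n^2) illustration over pixel spectra, not the TAD implementation itself):

```python
import numpy as np

def proximity_graph_edges(pixels, r):
    """Connect an edge between two pixel vectors whenever their Euclidean
    distance is below the resolution r, as in TAD's initial graph.
    pixels: (n, d) array of pixel spectra. Returns undirected edges (i, j)."""
    d = np.linalg.norm(pixels[:, None, :] - pixels[None, :, :], axis=-1)
    upper = np.triu(np.ones_like(d, dtype=bool), k=1)  # i < j only
    i, j = np.where((d < r) & upper)
    return list(zip(i.tolist(), j.tolist()))
```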

  12. Mutual information registration of multi-spectral and multi-resolution images of DigitalGlobe's WorldView-3 imaging satellite

    NASA Astrophysics Data System (ADS)

    Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio

    2017-05-01

    WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR, band to band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
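The bin-width question raised above sits inside the standard histogram estimate of mutual information. A minimal sketch of that estimate, with the bin widths supplied by the caller (e.g. scaled to an estimate of instrument shot noise, as the abstract proposes; the function and its defaults are illustrative, not DigitalGlobe's implementation):

```python
import numpy as np

def mutual_information(x, y, bin_width_x, bin_width_y):
    """Histogram-based mutual information between two co-registered bands.
    Bin edges are built from the caller-supplied widths; MI is in nats."""
    bx = np.arange(x.min(), x.max() + 1.5 * bin_width_x, bin_width_x)
    by = np.arange(y.min(), y.max() + 1.5 * bin_width_y, bin_width_y)
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=[bx, by])
    pxy /= pxy.sum()                       # joint probability
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0
    return float(np.sum(pxy[nz] *
                        np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

For registration, one band is shifted over trial alignments and the alignment maximizing MI is kept; wider bins smooth the MI surface, narrower bins sharpen it but admit noise.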

  13. LSA SAF Meteosat FRP products - Part 1: Algorithms, product contents, and analysis

    NASA Astrophysics Data System (ADS)

    Wooster, M. J.; Roberts, G.; Freeborn, P. H.; Xu, W.; Govaerts, Y.; Beeby, R.; He, J.; Lattanzio, A.; Fisher, D.; Mullen, R.

    2015-11-01

    Characterizing changes in landscape fire activity at better than hourly temporal resolution is achievable using thermal observations of actively burning fires made from geostationary Earth Observation (EO) satellites. Over the last decade or more, a series of research and/or operational "active fire" products have been developed from geostationary EO data, often with the aim of supporting biomass burning fuel consumption and trace gas and aerosol emission calculations. Such Fire Radiative Power (FRP) products are generated operationally from Meteosat by the Land Surface Analysis Satellite Applications Facility (LSA SAF) and are available freely every 15 min in both near-real-time and archived form. These products map the location of actively burning fires and characterize their rates of thermal radiative energy release (FRP), which is believed proportional to rates of biomass consumption and smoke emission. The FRP-PIXEL product contains the full spatio-temporal resolution FRP data set derivable from the SEVIRI (Spinning Enhanced Visible and Infrared Imager) imager onboard Meteosat at a 3 km spatial sampling distance (decreasing away from the west African sub-satellite point), whilst the FRP-GRID product is an hourly summary at 5° grid resolution that includes simple bias adjustments for meteorological cloud cover and regional underestimation of FRP caused primarily by underdetection of low FRP fires. Here we describe the enhanced geostationary Fire Thermal Anomaly (FTA) detection algorithm used to deliver these products and detail the methods used to generate the atmospherically corrected FRP and per-pixel uncertainty metrics. 
Using SEVIRI scene simulations and real SEVIRI data, including from a period of Meteosat-8 "special operations", we describe certain sensor and data pre-processing characteristics that influence SEVIRI's active fire detection and FRP measurement capability, and use these to specify parameters in the FTA algorithm and to make recommendations for the forthcoming Meteosat Third Generation operations in relation to active fire measures. We show that the current SEVIRI FTA algorithm is able to discriminate actively burning fires covering down to 10⁻⁴ of a pixel and that it appears more sensitive to fire than other algorithms used to generate many widely exploited active fire products. Finally, we briefly illustrate the information contained within the current Meteosat FRP-PIXEL and FRP-GRID products, providing example analyses for both individual fires and multi-year regional-scale fire activity; the companion paper (Roberts et al., 2015) provides a full product performance evaluation and a demonstration of product use within components of the Copernicus Atmosphere Monitoring Service (CAMS).

  14. Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.

    2016-12-01

    Unmanned Aerial Systems (UAS) have been used widely as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS was equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve the clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate the spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset, generated using a structure-from-motion (SfM) algorithm, together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions and is also able to differentiate object features on the surface.
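
    The two-stage idea (super-pixels first, then clustering on per-super-pixel features) can be sketched in plain numpy. This is only an illustration under stated simplifications: regular 8×8 blocks stand in for the paper's agglomerative super-pixels, a simple 2-means loop stands in for the second-stage clustering, and the scene is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 64x64 RGB+DEM scene: left half "vegetation" (green, low
# elevation), right half bare ground (red-brown, higher elevation).
h = w = 64
rgb_dem = np.zeros((h, w, 4))
rgb_dem[:, : w // 2] = [0.2, 0.7, 0.2, 1.0]
rgb_dem[:, w // 2 :] = [0.6, 0.4, 0.3, 3.0]
rgb_dem += 0.05 * rng.standard_normal(rgb_dem.shape)   # salt-and-pepper-ish noise

# Stage 1: form "super-pixels" by averaging features over 8x8 blocks,
# suppressing per-pixel noise before any clustering happens.
B = 8
blocks = rgb_dem.reshape(h // B, B, w // B, B, 4).mean(axis=(1, 3))
feats = blocks.reshape(-1, 4)        # one feature vector per super-pixel

# Stage 2: binary clustering of the super-pixel features (2-means).
centers = feats[[0, -1]].copy()
for _ in range(10):
    labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([feats[labels == k].mean(axis=0) for k in (0, 1)])

label_map = labels.reshape(h // B, w // B)
# The two halves of the scene land in different clusters.
assert (label_map[:, : w // (2 * B)] == label_map[0, 0]).all()
assert (label_map[:, w // (2 * B) :] == 1 - label_map[0, 0]).all()
```

    Averaging over regions before clustering is what buys the robustness the abstract describes: the per-pixel noise that would fragment a pixel-based clustering is washed out at the super-pixel level.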

  15. SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.

    PubMed

    Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A

    2016-11-01

    Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame-to-frame, and characterize the cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well-suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame-to-frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact mediated phenomena. This package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events, mother, daughter and neighbouring cells, and computing statistics on cellular fluorescence, the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single cell and population level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag phase growth with single cell resolution. © 2016 John Wiley & Sons Ltd.

  16. Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.

    PubMed

    Steimers, A; Farnung, W; Kohl-Bareis, M

    2016-01-01

    We demonstrate an efficient algorithm for the temporal- and spatial-based calculation of speckle contrast for the imaging of blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and makes temporal or spatial resolution independent of SNR. The new algorithm was evaluated for both spatial and temporal based analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core and many-core code.
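
    The quantity being computed is the speckle contrast K = σ/μ over a sliding window. One standard way to make this cheap, sketched below with numpy, is to obtain the window sums of I and I² from integral images so that the cost per pixel is constant regardless of window size; this illustrates the kind of complexity reduction the paper targets, though the authors' exact algorithm may differ.

```python
import numpy as np

def spatial_speckle_contrast(img, win=7):
    """Speckle contrast K = sigma/mu over a sliding win x win window,
    computed from integral images of I and I^2 (constant cost per pixel)."""
    pad = np.pad(img, ((1, 0), (1, 0)))
    S1 = pad.cumsum(0).cumsum(1)            # integral image of I
    S2 = (pad ** 2).cumsum(0).cumsum(1)     # integral image of I^2

    def box(S):                             # window sums via 4 lookups
        return (S[win:, win:] - S[:-win, win:]
                - S[win:, :-win] + S[:-win, :-win])

    n = win * win
    mean = box(S1) / n
    var = box(S2) / n - mean ** 2
    return np.sqrt(np.maximum(var, 0)) / mean

rng = np.random.default_rng(2)
# Fully developed speckle has K close to 1; a constant field has K = 0.
speckle = rng.exponential(1.0, (256, 256))
K = spatial_speckle_contrast(speckle)
assert 0.8 < K.mean() < 1.1
assert spatial_speckle_contrast(np.ones((32, 32))).max() < 1e-6
```

    The same box-sum trick applies along the time axis for temporal LASCA, and the four-lookup window sum parallelizes trivially across pixels, which is what makes multi-core and many-core mappings attractive.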

  17. Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-01-01

    With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But owing to limits on radiative power, there is always a trade-off between spatial and spectral resolution in the image captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than each of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.
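
    A wavelet-based fusion step of this kind can be sketched with a one-level Haar transform: keep the spectral band's coarse approximation and inject the panchromatic image's detail subbands. This is a generic detail-substitution scheme written for illustration, not the paper's full pipeline (which additionally applies false color mapping), and the arrays below are synthetic.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: approximation + 3 detail subbands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse(spectral_band, pan):
    """Spectral band's approximation + panchromatic detail subbands."""
    a_s, _, _, _ = haar2(spectral_band)
    _, h_p, v_p, d_p = haar2(pan)
    return ihaar2(a_s, h_p, v_p, d_p)

rng = np.random.default_rng(3)
pan = rng.random((8, 8))
band = rng.random((8, 8))
fused = fuse(band, pan)
# Sanity check: the transform pair is lossless.
assert np.allclose(ihaar2(*haar2(pan)), pan)
assert fused.shape == pan.shape
```

    Because the low-frequency approximation comes from the spectral band, the material signature is broadly preserved, while the injected detail subbands carry the panchromatic image's spatial sharpness.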

  18. Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography

    NASA Technical Reports Server (NTRS)

    Xu, Feng; Deshpande, Manohar

    2012-01-01

    Low frequency electromagnetic tomography such as electrical capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high-resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularization method. More importantly, in each iteration it enforces the known permittivity of the two phases on any pixels which exceed the reasonable range of permittivity. This strategy not only stabilizes the convergence process but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
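
    The iterate-then-constrain loop can be illustrated on a toy problem. Everything below is an assumption for illustration: a random linear map stands in for the FEM forward model, the permittivity bounds (1 for gas, 80 for liquid) are placeholder values, and the update is a generic Tikhonov-damped step with box projection rather than the authors' INTAC implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear stand-in for the FEM forward model: capacitances c = A @ x,
# where x holds per-pixel permittivities of a two-phase (gas/liquid) field.
n_meas, n_pix = 28, 64
A = rng.random((n_meas, n_pix))
x_true = np.where(rng.random(n_pix) < 0.3, 80.0, 1.0)
c_meas = A @ x_true

lam = 1e-3
x = np.full(n_pix, 40.0)             # start between the two phases
r0 = np.linalg.norm(A @ x - c_meas)  # initial data misfit
for _ in range(50):
    # Tikhonov-regularized update: minimize |A dx - r|^2 + lam |dx|^2
    dx = np.linalg.solve(A.T @ A + lam * np.eye(n_pix),
                         A.T @ (c_meas - A @ x))
    x = x + dx
    # The constraint step: snap out-of-range pixels back to the
    # physically admissible permittivity range of the two phases.
    x = np.clip(x, 1.0, 80.0)

assert np.linalg.norm(A @ x - c_meas) < 0.1 * r0
assert x.min() >= 1.0 and x.max() <= 80.0
```

    The projection onto the admissible range is what sharpens the reconstruction: pixels are repeatedly pushed toward the two known phase values instead of drifting to unphysical intermediate permittivities.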

  19. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used in many scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is cross-correlation, taking the position of the highest value in the correlation image. The shift is then resolved in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique as before, but the memory needed by the system is significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Subpixel shifts of the original images can be achieved by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is highly time-consuming because checking each candidate shift requires new calculations. The algorithm, however, is highly parallelizable and very suitable for high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became popular more than ten years ago because GPUs pack hundreds of computational cores into a reasonably cheap card. In our case, we register the shift between two images, making a first approach by FFT-based correlation and then refining the subpixel estimate using the technique described before; we consider it a `brute force' method. We therefore present a benchmark of the algorithm consisting of a first approach at pixel resolution followed by subpixel refinement, decreasing the shifting step in every loop to achieve high resolution in few steps. The program was executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
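
    The linear-phase subpixel shift and the `brute force' search over candidate shifts can be sketched in one dimension. This is a minimal numpy illustration under our own parameter choices (step 0.05 px, search span ±3 px, a smooth Gaussian test signal), not the benchmarked GPU code.

```python
import numpy as np

def subpixel_shift(x, s):
    """Shift a 1-D signal by a (possibly fractional) amount s by
    multiplying its DFT with a linear phase exp(-2*pi*i*f*s)."""
    f = np.fft.fftfreq(x.size)                  # frequencies, cycles/sample
    return np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * s)).real

def register(a, b, step=0.05, span=3.0):
    """Brute-force search: try candidate shifts and keep the one that
    best aligns b back onto a."""
    cands = np.arange(-span, span + step, step)
    errs = [np.abs(subpixel_shift(b, -s) - a).sum() for s in cands]
    return cands[int(np.argmin(errs))]

n = 256
t = np.arange(n)
a = np.exp(-0.5 * ((t - 100) / 6.0) ** 2)   # smooth, band-limited peak
b = subpixel_shift(a, 1.35)                 # ground-truth shift of 1.35 px
assert abs(register(a, b) - 1.35) < 0.05 + 1e-9
```

    Each candidate shift requires a fresh inverse FFT and comparison, which is exactly the per-candidate cost the abstract calls time-consuming, and also why the candidates map so naturally onto independent GPU threads.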

  20. A method for generating high resolution satellite image time series

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-10-01

    There is an increasing demand for satellite remote sensing data with both high spatial and temporal resolution in many applications. But it is still a challenge to simultaneously improve spatial resolution and temporal frequency due to the technical limits of current satellite observation systems. To this end, years of R&D effort have led to some successes, roughly in two areas. The first includes super-resolution, pan-sharpening and similar methods, which can effectively enhance spatial resolution and generate good visual effects, but hardly preserve spectral signatures and thus offer inadequate analytical value. The second, time interpolation, is a straightforward way to increase temporal frequency, but in fact adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution images. Our method starts with a pair of high and low resolution data sets, and then performs a spatial registration by introducing an LDA model to map high and low resolution pixels to each other. Afterwards, temporal change information is captured through a comparison of the low resolution time series data, then projected onto the high resolution data plane and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally the simulated high resolution data is generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that the use of costly high resolution data can be reduced as much as possible; it presents a highly effective way to build an economically operational monitoring solution for applications such as agriculture, forestry, land use investigation, and the environment.

  1. Unsupervised motion-based object segmentation refined by color

    NASA Astrophysics Data System (ADS)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low complexity solution. For still images, several approaches exist based on colour, but these lack both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation with many segments covering each single physical object. Other colour segmentation approaches exist which somehow limit the number of segments to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real world object segmentation, because real world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because of the lack of sufficient resolution of efficient motion estimators, like the 3DRS block matcher, the resulting segmentation is not at pixel resolution, but at block resolution. Existing pixel resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less well to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. 
On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance a block is unique and thus decrease the chance of the wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge few methods exist which adopt this approach. One example is [meshrefine]. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, the method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects. NEW METHOD As mentioned above we start with motion segmentation and refine the edges of this segmentation with a pixel resolution colour segmentation method afterwards. There are several reasons for this approach: + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous. + This approach restricts the computationally expensive pixel resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity. 
+ The motion cue alone is often enough to reliably distinguish objects from one another and the background. To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known for its ability to estimate motion vectors which closely resemble the true motion. BLOCK-BASED MOTION SEGMENTATION As mentioned above we start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known K-means segmentation method [K-means]. Several other methods (e.g. [kmeansc]) adapt K-means for connectedness by adding a weighted shape-error. This adds the additional difficulty of finding the correct weights for the shape-parameters. Also, these methods often bias one particular pre-defined shape. The presented method, which we call K-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation method to such a degree that it allows the method to use least squares for the robust fitting of affine motion models for each segment. Contrary to [parmkm], the segmentation step still operates on vectors instead of model parameters. To make sure the segmentation is temporally consistent, the segmentation of the previous frame will be used as initialisation for every new frame. We also present a scheme which makes the algorithm independent of the initially chosen amount of segments. COLOUR-BASED INTRA-BLOCK SEGMENTATION The block resolution motion-based segmentation forms the starting point for the pixel resolution segmentation. The pixel resolution segmentation is obtained from the block resolution segmentation by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in either one of two neighbouring blocks that belong to different clusters. 
This assumption allows us to do the pixel resolution segmentation on each pair of such neighbouring blocks separately. Because of the local nature of the segmentation, it largely avoids problems with heterogeneously coloured areas. Because no new segments are introduced in this step, it also does not suffer from oversegmentation problems. The presented method has no problems with bifurcations. For the pixel resolution segmentation itself we reclassify pixels such that we optimize an error norm which favour similarly coloured regions and straight edges. SEGMENTATION MEASURE To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we decided to define a ground truth output which we find desirable for a given input. We define the measure for the segmentation quality as being how different the segmentation is from the ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation seperately. Also, it allows us to evaluate which parts of a frame suffer from oversegmentation or undersegmentation. The proposed algorithm has been tested on several typical sequences. CONCLUSIONS In this abstract we presented a new video segmentation method which performs well in the segmentation of multiple independently moving foreground objects from each other and the background. It combines the strong points of both colour and motion segmentation in the way we expected. One of the weak points is that the segmentation method suffers from undersegmentation when adjacent objects display similar motion. In sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.
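
    The least-squares fitting of an affine motion model to a segment's block vectors, which the K-regions constraint makes possible, can be sketched as follows. The helper name and the synthetic block data are ours; only the model form v = M·p + t is taken from the abstract.

```python
import numpy as np

def fit_affine_motion(positions, vectors):
    """Least-squares fit of an affine motion model v = M p + t to a
    segment's block motion vectors.

    positions: (N, 2) block centres; vectors: (N, 2) motion vectors.
    Returns a (3, 2) parameter matrix: two rows for the linear part M
    (transposed) and one row for the translation t.
    """
    P = np.column_stack([positions, np.ones(len(positions))])
    params, *_ = np.linalg.lstsq(P, vectors, rcond=None)
    return params

rng = np.random.default_rng(5)
pos = rng.uniform(0, 100, (40, 2))
M_true = np.array([[1.02, 0.01], [-0.01, 0.98]]) - np.eye(2)  # slight zoom/rotation
t_true = np.array([2.0, -1.0])                                 # panning component
vec = pos @ M_true.T + t_true + 0.01 * rng.standard_normal((40, 2))

params = fit_affine_motion(pos, vec)
pred = np.column_stack([pos, np.ones(40)]) @ params
assert np.abs(pred - vec).max() < 0.05        # model explains the vectors
```

    In the segmentation loop, each segment's fitted model predicts a vector per block; blocks at segment edges are then reassigned to whichever neighbouring segment's model predicts their measured vector best.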

  2. Stable and accurate methods for identification of water bodies from Landsat series imagery using meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Gamshadzaei, Mohammad Hossein; Rahimzadegan, Majid

    2017-10-01

    Identification of water extents in Landsat images is challenging due to surfaces with reflectance similar to that of water. The objective of this study is to provide stable and accurate methods for identifying water extents in Landsat images based on meta-heuristic algorithms. To this end, seven Landsat images were selected from various environmental regions in Iran. Training of the algorithms was performed using 40 water pixels and 40 nonwater pixels in Operational Land Imager images of Chitgar Lake (one of the study regions). Moreover, high-resolution images from Google Earth were digitized to evaluate the results. Two approaches were considered: index-based and artificial intelligence (AI) algorithms. In the first approach, nine common water spectral indices were investigated. AI algorithms were utilized to acquire coefficients of optimal band combinations to extract water extents. Among the AI algorithms, the artificial neural network algorithm as well as the ant colony optimization, genetic algorithm, and particle swarm optimization (PSO) meta-heuristic algorithms were implemented. Index-based methods showed different performances in various regions. Among the AI methods, PSO had the best performance, with an average overall accuracy and kappa coefficient of 93% and 98%, respectively. The results indicated the applicability of the acquired band combinations to accurately and stably extract water extents in Landsat imagery.
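
    As a concrete example of the index-based approach, McFeeters' NDWI is one of the commonly used water spectral indices of this kind (the abstract does not list its nine indices, so treating NDWI as representative is our assumption; the reflectance values below are invented for illustration):

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR).
    Water pixels, bright in green and dark in NIR, tend to have NDWI > 0."""
    return (green - nir) / (green + nir + 1e-12)

# Toy per-pixel reflectances: two water pixels, two vegetation pixels.
green = np.array([0.10, 0.09, 0.06, 0.05])
nir   = np.array([0.03, 0.02, 0.30, 0.35])
water_mask = ndwi(green, nir) > 0.0
assert water_mask.tolist() == [True, True, False, False]
```

    The AI approach in the study generalizes exactly this pattern: instead of a fixed two-band ratio with a fixed threshold, the meta-heuristics search for the band-combination coefficients that best separate the labelled water and nonwater training pixels.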

  3. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds. 
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.

  4. Parametric boundary reconstruction algorithm for industrial CT metrology application.

    PubMed

    Yin, Zhye; Khare, Kedar; De Man, Bruno

    2009-01-01

    High-energy X-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring metrology information directly. CT systems, on the other hand, generate a sinogram, which is mathematically transformed into pixel-based images. The dimensional information of the scanned object is then extracted by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation of the CT images, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the object boundaries resulting from edge detection then fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits to the proposed approach. First, since the boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced. The iterative approach, which can be computationally intensive, becomes practical with parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by the boundary parameters instead of the image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and process time can be improved. 
Third, since the parametric reconstruction approach shares its boundary representation with other conventional metrology modalities such as CMM, boundary information from other modalities can be directly incorporated as prior knowledge to improve the convergence of the iterative approach. In this paper, the feasibility of the parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to experimental industrial CT system data.

  5. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement

    PubMed Central

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-01-01

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptive-weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893
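
    The iterative back-projection (IBP) technique the abstract relies on can be sketched in a few lines. The average-pooling observation model below is our own simplifying assumption (real IBP uses the sensor's blur and decimation model), but the loop structure is the standard one: push the low-resolution residual back onto the high-resolution estimate until the estimate reproduces the observation.

```python
import numpy as np

def downsample(img, f=2):
    """Average-pool by factor f -- a stand-in observation model."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f=2):
    """Nearest-neighbour upsampling (the back-projection kernel here)."""
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

def ibp(low, f=2, iters=50):
    """Iterative back-projection: repeatedly add the back-projected
    low-resolution residual to the high-resolution estimate."""
    high = upsample(low, f)
    for _ in range(iters):
        residual = low - downsample(high, f)
        high = high + upsample(residual, f)
    return high

rng = np.random.default_rng(7)
truth = rng.random((16, 16))
low = downsample(truth)
high = ibp(low)
# IBP enforces consistency with the observation model.
assert np.abs(downsample(high) - low).max() < 1e-10
```

    Consistency with the observation is all IBP guarantees; the missing high-frequency content must come from elsewhere, which in AMDE-SR is the role of the multi-scale detail layers fused on top of the IBP-expanded layers.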

  6. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement.

    PubMed

    Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming

    2018-02-07

    There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L₀ gradient minimization model, and the non-redundant information is processed by difference calculation and expanding non-redundant layers and the redundant layer by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptive-weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements.

  7. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    NASA Astrophysics Data System (ADS)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, this paper proposes a new algorithm focused on improving LiveWire's speed. First, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the resulting low-resolution image. Second, the LiveWire shortest path is computed with a direction search over the control-point set, exploiting the spatial relationship between the two control points that the user provides in real time. Third, the search order of the points adjacent to the starting node is fixed in advance, and an ordinary FIFO queue, instead of a priority queue, is used as the storage pool for points whose shortest-path values are being optimized, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward-projection method based on neighborhood pixel polling converts the dual-pixel boundary of the reconstructed image into a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform, whose decomposition and reconstruction are fast and consistent with the texture features of the image, with those of optimal path searching based on control-point-set direction search, which reduces the time complexity of the original algorithm. The algorithm therefore speeds up interactive boundary extraction while reflecting the boundary information of the image more comprehensively. Together, these methods substantially improve the execution efficiency and robustness of the algorithm.
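    The queue substitution in the third step can be sketched as an SPFA-style relaxation over a 2D cost map using a plain FIFO queue; the grid layout, 4-connectivity, and function name are illustrative assumptions, not the paper's implementation:

```python
from collections import deque

def livewire_costs(cost, seed):
    """Shortest-path costs from a seed pixel over a 2D cost map, relaxing
    edges via an ordinary FIFO queue instead of a priority queue."""
    h, w = len(cost), len(cost[0])
    dist = [[float("inf")] * w for _ in range(h)]
    sr, sc = seed
    dist[sr][sc] = 0.0
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = dist[r][c] + cost[nr][nc]
                if nd < dist[nr][nc]:   # relax and re-enqueue the improved pixel
                    dist[nr][nc] = nd
                    q.append((nr, nc))
    return dist
```

    Each dequeue is O(1), which is the source of the speedup over a heap-based pool; pixels may be re-enqueued, but on typical image cost maps the revisit count stays small.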

  8. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    NASA Astrophysics Data System (ADS)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high-resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, generating 3D data cubes in which each spatial pixel gathers the full spectral reflectance information. As a result, each image comprises a large volume of data, which makes its processing a challenge as performance requirements continue to tighten; for instance, new HI applications demand real-time responses. Parallel processing therefore becomes a necessity, and the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies parallelization by mapping the different blocks onto different processing units. The spatial-spectral classification approach refines previously obtained classification results using a K-Nearest Neighbors (KNN) filtering process in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. The spatial-spectral classification algorithm is thus divided into three stages: a Principal Component Analysis (PCA) algorithm that computes the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, with a speedup of 2.69x when the blocks are mapped over 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
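    The KNN filtering stage can be sketched as a brute-force majority vote in a joint (intensity, row, column) feature space; the spatial weight `lam` and the exhaustive neighbour search are simplifications for illustration, not the paper's optimized dataflow implementation:

```python
def knn_filter(band, labels, k=3, lam=1.0):
    """Refine a pixel-wise classification map: each pixel takes the majority
    label of its k nearest neighbours in a joint (intensity, x, y) space.
    `lam` weights the spatial coordinates against the one-band intensity."""
    h, w = len(band), len(band[0])
    pts = [(band[r][c], lam * r, lam * c, labels[r][c])
           for r in range(h) for c in range(w)]
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            me = (band[r][c], lam * r, lam * c)
            near = sorted(pts, key=lambda p: sum((a - b) ** 2
                          for a, b in zip(p[:3], me)))[:k]
            votes = {}
            for p in near:
                votes[p[3]] = votes.get(p[3], 0) + 1
            out[r][c] = max(votes, key=votes.get)
    return out
```

    Isolated misclassified pixels are overruled by spectrally and spatially similar neighbours, which is the refinement effect the paper exploits after the PCA and SVM stages.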

  9. Research on compression performance of ultrahigh-definition videos

    NASA Astrophysics Data System (ADS)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos keeps increasing, along with the data volume, and storage and transmission cannot be handled merely by expanding hard-disk capacity and upgrading transmission devices. Making full use of the High Efficiency Video Coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance for a single image and for frame I. Then, using the same idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Finally, the reconstructed video quality is further improved with super-resolution reconstruction technology. Experiments show that, for both a single image (frame I) and video sequences, the proposed compression method outperforms HEVC in low-bit-rate environments.

  10. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  11. Method for hyperspectral imagery exploitation and pixel spectral unmixing

    NASA Technical Reports Server (NTRS)

    Lin, Ching-Fang (Inventor)

    2003-01-01

    An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. Using a Kalman filter, the abundance estimate for a pixel can be obtained in a single iteration, which is much faster than the genetic algorithm. The output of the robust filter is fed back to the genetic algorithm to derive an accurate abundance estimate for the current pixel; using the robust-filter solution as the starting point speeds up the evolution of the genetic algorithm. After the accurate abundance estimate is obtained, the procedure moves to the next pixel, using the genetic algorithm's output as the previous state estimate for the robust filter, and the genetic algorithm again refines the robust-filter solution efficiently. This iteration continues until all pixels in the hyperspectral image cube have been processed.
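    The warm-start pattern at the heart of this pipeline, with the genetic algorithm replaced here by a simple projected-gradient refinement purely for illustration, might look like:

```python
def unmix(E, pixel, a0, steps=200, lr=0.01):
    """Estimate abundances a >= 0 with sum(a) = 1 such that E @ a approximates
    the pixel spectrum, by projected gradient descent warm-started from a0
    (the previous pixel's estimate, mimicking the filter/GA hand-off)."""
    m, p = len(E), len(E[0])          # m bands, p endmembers
    a = list(a0)
    for _ in range(steps):
        resid = [sum(E[i][j] * a[j] for j in range(p)) - pixel[i]
                 for i in range(m)]
        grad = [sum(E[i][j] * resid[i] for i in range(m)) for j in range(p)]
        a = [max(0.0, a[j] - lr * grad[j]) for j in range(p)]
        s = sum(a) or 1.0
        a = [x / s for x in a]        # crude projection back onto the simplex
    return a
```

    Because neighbouring pixels have similar abundances, the warm start lets each refinement converge in far fewer iterations than a cold start, which is the efficiency argument the abstract makes for the filter/GA alternation.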

  12. Single image super resolution algorithm based on edge interpolation in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhang, Mengqun; Zhang, Wei; He, Xinyu

    2017-11-01

    In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by the NSCT, yielding the directional sub-band coefficients of the transform domain. According to the scale factor, the high-frequency sub-band coefficients are amplified to the desired resolution by an interpolation method based on the edge direction. For high-frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to compute the threshold; coefficients below the threshold are classified as noise or signal, and de-noised accordingly, based on the correlation among sub-bands of the same scale. An anisotropic diffusion filter then effectively enhances weak targets in regions of low contrast between target and background. Finally, the low-frequency sub-band is amplified to the desired resolution by bilinear interpolation and combined with the high-frequency sub-band coefficients after de-noising and small-target enhancement, and the inverse NSCT yields the image at the desired resolution. To verify the effectiveness of the proposed algorithm, it was tested alongside several common image reconstruction methods on synthetic, motion-blurred, and hyperspectral images. The experimental results show that, compared with traditional single-frame super-resolution algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved and noise is suppressed to some extent.
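    A minimal sketch of the thresholding step: a BayesShrink-style threshold estimated from the sub-band coefficients, followed by soft shrinkage. The median-based noise estimate is a common convention in the wavelet-shrinkage literature, not necessarily the paper's exact rule:

```python
import math

def bayes_shrink_threshold(coeffs):
    """BayesShrink-style threshold sigma_n**2 / sigma_x, with the noise level
    estimated from the median absolute coefficient of the sub-band."""
    med = sorted(abs(c) for c in coeffs)[len(coeffs) // 2]
    sigma_n = med / 0.6745                       # robust noise estimate
    var_y = sum(c * c for c in coeffs) / len(coeffs)
    sigma_x = math.sqrt(max(var_y - sigma_n ** 2, 1e-12))
    return sigma_n ** 2 / sigma_x

def soft_threshold(c, t):
    """Shrink a coefficient toward zero by t, preserving its sign."""
    return math.copysign(max(abs(c) - t, 0.0), c)
```

    Coefficients that survive the shrinkage carry edge and texture energy; in the paper, the inter-sub-band correlation check additionally protects weak-target coefficients from being zeroed.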

  13. The Effect of Shadow Area on Sgm Algorithm and Disparity Map Refinement from High Resolution Satellite Stereo Images

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.

    2017-09-01

    The Semi-Global Matching (SGM) algorithm is known as a high-performance and reliable stereo-matching algorithm in the photogrammetry community. However, it faces challenges for high-resolution satellite stereo images over urban areas, particularly images containing shadow. The SGM algorithm computes highly noisy disparity values in the shadows cast by tall neighboring buildings, due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method integrates panchromatic and multispectral image data to detect shadow areas at the object level; in addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. Results on a GeoEye-1 stereo pair captured over the city of Qom, Iran, show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
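    The shadow-masking part of the refinement can be sketched with a crude intensity-threshold detector on the panchromatic band followed by one binary dilation; the threshold rule and 4-connectivity are illustrative assumptions, standing in for the paper's object-level pan/multispectral fusion:

```python
def dilate(mask):
    """One step of 4-neighbour binary dilation on a 2D 0/1 mask."""
    h, w = len(mask), len(mask[0])
    out = [[mask[r][c] for c in range(w)] for r in range(h)]
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w:
                        out[nr][nc] = 1
    return out

def shadow_mask(pan, thresh):
    """Crude shadow detector: dark panchromatic pixels, then one dilation to
    close small gaps before disparity values inside the mask are refined."""
    return dilate([[1 if v < thresh else 0 for v in row] for row in pan])
```

    Disparities inside the mask would then be replaced by values from RANSAC-fitted planes rather than trusted SGM output.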

  14. The Belle II Pixel Detector Data Acquisition and Background Suppression System

    NASA Astrophysics Data System (ADS)

    Lautenbach, K.; Deschamps, B.; Dingfelder, J.; Getzkow, D.; Geßler, T.; Konorov, I.; Kühn, W.; Lange, S.; Levit, D.; Liu, Z.-A.; Marinas, C.; Münchow, D.; Rabusov, A.; Reiter, S.; Spruck, B.; Wessel, C.; Zhao, J.

    2017-06-01

    The Belle II experiment at the future SuperKEKB collider in Tsukuba, Japan, features a design luminosity of 8 × 10³⁵ cm⁻²s⁻¹, a factor of 40 larger than that of its predecessor Belle. The pixel detector (PXD), with about 8 million pixels, is based on the DEPFET technology and will improve the vertex resolution in the beam direction by a factor of 2. With an estimated trigger rate of 30 kHz, the PXD is expected to generate a data rate of 20 GBytes/s, about 10 times more data than all other Belle II subdetectors combined. Due to the large beam-related background, the PXD requires a data acquisition system with high-bandwidth data links and real-time background reduction by a factor of 30. To achieve this, the Belle II pixel DAQ uses an FPGA-based computing platform with high-speed serial links implemented in the ATCA (Advanced Telecommunications Computing Architecture) standard. The architecture and performance of the data acquisition system and the data reduction of the PXD will be presented. In April 2016 and February 2017, a prototype PXD DAQ system was operated in test beam campaigns, delivering data with the whole readout chain under realistic high-rate conditions. Final results from the beam tests will be presented.

  15. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    USGS Publications Warehouse

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, John L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. The Landsat Data Continuity Mission (LDCM) will carry a thermal instrument, the Thermal Infrared Sensor, but it may not be available during all OLI collections; this illustrates the need for a cloud-cover assessment (CCA) for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10⁹ pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine-learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms correctly classified pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.
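    The pixel-level validation against the manual cloud masks reduces to a simple agreement count over cloudy/clear labels, sketched here:

```python
def accuracy(truth, pred):
    """Fraction of pixels whose cloudy/clear label matches the manual mask."""
    total = correct = 0
    for t_row, p_row in zip(truth, pred):
        for t, p in zip(t_row, p_row):
            total += 1
            correct += (t == p)
    return correct / total
```

    The 79.9%, 88.5%, and 89.7% figures in the abstract are exactly this statistic computed over the 3.95 × 10⁹-pixel test subset.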

  16. Fundamental techniques for resolution enhancement of average subsampled images

    NASA Astrophysics Data System (ADS)

    Shen, Day-Fann; Chiu, Chui-Wen

    2012-07-01

    Although single-image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still present, in the LR version produced by the fundamental averaging-and-subsampling process. We therefore propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as pre- and post-processing steps, respectively, around an interpolation algorithm, to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods: bilinear, bicubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple to implement and generally applicable to all average-subsampled LR images; separately or together, they can be integrated into most interpolation methods that attempt to restore the original HR content. Finally, the idea behind MLF and ICP can also be applied to average-subsampled one-dimensional signals.
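    A hypothetical stand-in for the MLF pre-process: a 4-neighbour Laplacian boost applied to the LR image before interpolation, amplifying the high-frequency content attenuated by averaging-and-subsampling. The kernel and gain `alpha` are illustrative; the paper's actual filter coefficients may differ:

```python
def laplacian_boost(img, alpha=0.5):
    """Pre-interpolation sharpening: subtract a scaled 4-neighbour Laplacian
    (with clamped borders) to boost attenuated high frequencies."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            nb = [img[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            lap = sum(nb) - 4 * img[r][c]
            out[r][c] = img[r][c] - alpha * lap
    return out
```

    Flat regions pass through unchanged (the Laplacian is zero there), while edges are over-shot slightly so that interpolation lands closer to the original HR transition.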

  17. Heterogeneous CPU-GPU moving targets detection for UAV video

    NASA Astrophysics Data System (ADS)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving-target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. The pixels belonging to moving targets in HD video taken by a UAV are always a small minority of the frame, and the background is usually moving because of the UAV's own motion. The high computational cost of detection algorithms prevents running them at the full resolution of the frame. Hence, to detect moving targets in UAV video, we propose a heterogeneous CPU-GPU algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for the task.
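    The registration-then-difference step can be sketched as follows, assuming the global camera motion has already been estimated as an integer pixel shift (a simplification of the paper's background registration):

```python
def detect_moving(prev, curr, shift, thresh):
    """Frame differencing after background registration: shift the previous
    frame by the estimated global (camera) motion, then flag pixels whose
    intensity change exceeds `thresh` as moving-target candidates."""
    dr, dc = shift
    h, w = len(curr), len(curr[0])
    mask = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            pr, pc = r - dr, c - dc        # where this pixel was last frame
            if 0 <= pr < h and 0 <= pc < w:
                if abs(curr[r][c] - prev[pr][pc]) > thresh:
                    mask[r][c] = 1
    return mask
```

    The per-pixel independence of this loop is what makes the method a natural fit for the GPU side of the heterogeneous framework.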

  18. GIFTS SM EDU Data Processing and Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high-resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage, which can be subdivided into three stages. In the pre-calibration stage, a phase-correction algorithm is applied to the decimated and filtered complex interferogram; the resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random-noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise-equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra and implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments, along with a random pixel-selection scheme based on the pixel performance evaluation.
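    The radiometric calibration stage follows the standard two-point (ambient/hot blackbody) scheme, sketched here per spectral sample; the Planck radiances of the two blackbody references are assumed given, and the linear-detector model is an illustrative simplification:

```python
def calibrate(scene, abb, hbb, planck_abb, planck_hbb):
    """Two-point radiometric calibration per spectral sample: derive the
    responsivity from the ambient/hot blackbody views, then map scene counts
    to radiance by linear rescaling."""
    out = []
    for s, a, h, ba, bh in zip(scene, abb, hbb, planck_abb, planck_hbb):
        resp = (h - a) / (bh - ba)          # counts per radiance unit
        out.append(ba + (s - a) / resp)     # scene radiance
    return out
```

    The NESR estimate of the post-processing stage then follows from the residual noise in the calibrated ABB and HBB spectra.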

  19. Dual Super-Systolic Core for Real-Time Reconstructive Algorithms of High-Resolution Radar/SAR Imaging Systems

    PubMed Central

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is itself conceptualized as another SA at the bit level. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging from radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed into their parallel representation and then mapped onto an efficient high-performance embedded computing (HPEC) architecture on reconfigurable Xilinx field-programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was integrated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such a dual SSA core drastically reduces the computational load of complex RS regularization techniques, achieving the required real-time operational mode. PMID:22736964

  20. Pixelation Effects in Weak Lensing

    NASA Technical Reports Server (NTRS)

    High, F. William; Rhodes, Jason; Massey, Richard; Ellis, Richard

    2007-01-01

    Weak gravitational lensing can be used to investigate both dark matter and dark energy but requires accurate measurements of the shapes of faint, distant galaxies. Such measurements are hindered by the finite resolution and pixel scale of digital cameras. We investigate the optimum choice of pixel scale for a space-based mission, using the engineering model and survey strategy of the proposed Supernova Acceleration Probe as a baseline. We do this by simulating realistic astronomical images containing a known input shear signal and then attempting to recover the signal using the Rhodes, Refregier, and Groth algorithm. We find that the quality of shear measurement is always improved by smaller pixels. However, in practice, telescopes are usually limited to a finite number of pixels and a finite operational life span, so the total area of a survey increases with pixel size. We therefore fix the survey lifetime and the number of pixels in the focal plane while varying the pixel scale, thereby effectively varying the survey size. In a pure trade-off of image resolution versus survey area, we find that measurements of the matter power spectrum would have minimum statistical error with a pixel scale of 0.09' for a 0.14' FWHM point-spread function (PSF). The pixel scale could be increased to 0.16' if images dithered by exactly half-pixel offsets were always available. Some of our results do depend on our adopted shape measurement method and should be regarded as an upper limit: future pipelines may require smaller pixels to overcome systematic floors not yet accessible, and, in certain circumstances, measuring the shape of the PSF might be more difficult than that of galaxies. However, the relative trends in our analysis are robust, especially those of the surface density of resolved galaxies. Our approach thus provides a snapshot of the potential of available technology and a practical counterpart to analytic studies of pixelation, which necessarily assume an idealized shape-measurement method.

  1. Influence of the Pixel Sizes of Reference Computed Tomography on Single-photon Emission Computed Tomography Image Reconstruction Using Conjugate-gradient Algorithm.

    PubMed

    Okuda, Kyohei; Sakimoto, Shota; Fujii, Susumu; Ida, Tomonobu; Moriyama, Shigeru

    The use of the computed tomography (CT) coordinate system as the frame of reference in single-photon emission computed tomography (SPECT) reconstruction is one of the advanced characteristics of the xSPECT reconstruction system. The aim of this study was to reveal the influence of this high-resolution frame of reference on xSPECT reconstruction. A 99mTc line-source phantom and a National Electrical Manufacturers Association (NEMA) image-quality phantom were scanned using a SPECT/CT system. xSPECT reconstructions were performed with reference CT images of different display field-of-view (DFOV) and pixel sizes. The pixel size of the reconstructed xSPECT images remained close to 2.4 mm, as originally acquired in the projection data, even when the reference CT resolution was varied. The full width at half maximum (FWHM) of the line source, the absolute recovery coefficient, and the background variability of the image-quality phantom were all independent of the DFOV size of the reference CT images. These results reveal that the image quality of reconstructed xSPECT images is not influenced by the resolution of the frame of reference used in SPECT reconstruction.
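    The FWHM measurement on the line-source profile can be sketched with linear interpolation at the half-maximum crossings; the 1D profile input and pixel-pitch parameter are generic assumptions:

```python
def fwhm(profile, spacing=1.0):
    """FWHM of a 1D line-spread profile, with linear interpolation at the
    half-maximum crossings; `spacing` is the pixel pitch (e.g. in mm)."""
    peak = max(profile)
    half = peak / 2.0
    left = right = None
    for i in range(1, len(profile)):
        if left is None and profile[i - 1] < half <= profile[i]:
            left = i - 1 + (half - profile[i - 1]) / (profile[i] - profile[i - 1])
        if profile[i - 1] >= half > profile[i]:
            right = i - 1 + (profile[i - 1] - half) / (profile[i - 1] - profile[i])
    return (right - left) * spacing
```

    Applied to line-source profiles reconstructed with different reference-CT DFOVs, a constant FWHM is exactly the invariance the study reports.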

  2. Development of an imaging method for quantifying a large digital PCR droplet

    NASA Astrophysics Data System (ADS)

    Huang, Jen-Yu; Lee, Shu-Sheng; Hsu, Yu-Hsiang

    2017-02-01

    Portable devices have been recognized as the future link between end-users and lab-on-a-chip devices: they offer user-friendly interfaces and provide apps to drive headphones, cameras, communication channels, etc. In particular, the digital cameras installed in smartphones and pads already offer high imaging resolution with large pixel counts. This unique feature has triggered research into integrating optical fixtures with smartphones to provide microscopic imaging capabilities. In this paper, we report our study on developing a portable diagnostic tool based on the imaging system of a smartphone and a digital PCR biochip. A computational algorithm is developed to process optical images of a digital PCR biochip taken with a smartphone in a black box. Each reaction droplet is recorded in pixels and analyzed in the sRGB (red, green, blue) color space. A multi-step filtering algorithm and an auto-threshold algorithm are adopted to minimize background noise contributed by the camera sensor and to rule out false-positive droplets, respectively. Finally, a size-filtering method is applied to identify the number of positive droplets and quantify the target's concentration, and statistical analysis is performed for diagnostic purposes. This process can be packaged in an app and can provide a user-friendly interface requiring no professional training.
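    Once positive droplets have been counted, the concentration follows from Poisson statistics, as in standard digital PCR quantification (the function name and units are illustrative):

```python
import math

def dpcr_concentration(n_positive, n_total, droplet_volume_ul):
    """Poisson estimate of target concentration (copies/uL) from the fraction
    of positive droplets in a digital PCR run: some positive droplets hold
    more than one copy, so the mean occupancy is -ln(1 - p), not p."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)        # mean copies per droplet
    return lam / droplet_volume_ul
```

    For example, with half the droplets positive the mean occupancy is ln 2 ≈ 0.693 copies per droplet, noticeably above the naive 0.5.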

  3. Multi-year monitoring of paddy rice planting area in Northeast China using MODIS time series data.

    PubMed

    Shi, Jing-jing; Huang, Jing-feng; Zhang, Feng

    2013-10-01

    The objective of this study was to investigate the spatio-temporal distribution of paddy rice in Northeast China using moderate resolution imaging spectroradiometer (MODIS) data. We developed an algorithm for detection and estimation of the transplanting and flooding periods of paddy rice using a combination of the enhanced vegetation index (EVI) and the land surface water index with a central wavelength of 2130 nm (LSWI2130). At two intensive sites in Northeast China, fine-resolution satellite imagery was used to validate the performance of the algorithm at the pixel and 3×3-pixel-window levels, respectively. The commission and omission errors at both intensive sites were below approximately 20%. Based on the algorithm, the annual distribution of paddy rice in Northeast China from 2001 to 2009 was mapped and analyzed. The results demonstrated that the MODIS-derived area was highly correlated with published agricultural statistics, with a coefficient of determination (R²) of 0.847. They also revealed a sharp decline in 2003, especially in the Sanjiang Plain in the northeast of Heilongjiang Province, due to the oversupply and price decline of rice in 2002. These results suggest that the approach enables accurate and reliable monitoring of rice-cultivated areas and their variation on a large scale.
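    The per-pixel flooding test combining EVI and LSWI can be sketched as below; the margin value is illustrative, patterned on published paddy-rice mapping algorithms, and is not necessarily the threshold used in this study:

```python
def is_flooded(evi, lswi, margin=0.05):
    """Transplanting/flooding test for one pixel-date: standing water raises
    LSWI relative to EVI. The 0.05 margin is an illustrative convention."""
    return lswi + margin >= evi

def flooding_dates(evi_series, lswi_series, dates):
    """Return the observation dates on which a pixel appears flooded."""
    return [d for e, l, d in zip(evi_series, lswi_series, dates)
            if is_flooded(e, l)]
```

    Pixels whose flooding dates fall in the transplanting season, followed by a vegetation green-up, are the ones mapped as paddy rice.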

  5. Fractional Snowcover Estimates from Earth Observing System (EOS) Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS)

    NASA Technical Reports Server (NTRS)

    Salomonson, Vincent V.

    2002-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) on the NASA Earth Observing System (EOS) Terra and Aqua missions has shown considerable capability for mapping snow cover. The typical approach uses, along with other criteria, the Normalized Difference Snow Index (NDSI), which takes the difference between 500-meter observations at 1.64 micrometers (MODIS band 6) and 0.555 micrometers (MODIS band 4) over the sum of these observations to determine whether MODIS pixels are snow-covered when mapping the extent of snow cover. For many hydrological and climate studies using remote sensing of snow cover, it is desirable to enhance the MODIS observations by providing the fraction of snow cover in each MODIS observation (pixel). To this end, studies have been conducted to assess whether there is sufficient signal in the NDSI parameter to provide useful estimates of fractional snow cover in each 500-meter MODIS pixel. High-spatial-resolution (30-meter) Landsat snow-cover observations were co-registered with MODIS 500-meter pixels; the NDSI approach was used to decide whether each Landsat pixel was snow-covered, and the number of snow-covered Landsat pixels within a MODIS pixel then gave the fraction of snow cover within that MODIS pixel. These results were used to develop statistical relationships between the NDSI value of each 500-meter MODIS pixel and its snow-cover fraction. Such studies were conducted for three widely different areas covered by Landsat scenes in Alaska, Russia, and the province of Quebec in Canada. The statistical relationships indicate that an accuracy of 10 percent can be attained, and the variability of the relationship across the three areas was remarkably small (-0.02 mean error; less than 0.01 mean absolute error and standard deviation). Independent tests, in which the fractional-snow-cover-to-NDSI relationship from one area (e.g., Alaska) was applied to the other two areas (e.g., Russia and Quebec), again showed that fractional snow cover can be estimated to 10 percent. The results have been shown to have advantages over other published fractional snow-cover algorithms applied to MODIS data. Most recently, the fractional snow-cover algorithm was applied to 500-meter observations over the state of Colorado for a period spanning 25 days, and the results mapped the spatial and temporal variability of snow cover over that period well. Overall, these studies indicate that robust estimates of fractional snow cover can be attained using the NDSI parameter over areas ranging in size from watersheds relatively large compared to MODIS pixels up to the global land cover. Other refinements to this approach, as well as different approaches, are being examined for mapping fractional snow cover with MODIS observations.
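    The NDSI and a linear NDSI-to-fraction mapping can be sketched as follows; the regression coefficients shown are placeholders for the fitted values from these studies:

```python
def ndsi(band4, band6):
    """Normalized Difference Snow Index from MODIS band 4 (0.555 um) and
    band 6 (1.64 um) reflectances: snow is bright at 0.555 um and dark at 1.64 um."""
    return (band4 - band6) / (band4 + band6)

def snow_fraction(n, slope=1.45, intercept=-0.01):
    """Linear NDSI-to-fraction mapping, clipped to [0, 1]. The slope and
    intercept here are illustrative placeholder coefficients."""
    return min(1.0, max(0.0, slope * n + intercept))
```

    The regression coefficients are what the Landsat/MODIS co-registration exercise fits, one set per study area; the reported result is that the fitted relationships barely differ between Alaska, Russia, and Quebec.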

  6. An Algorithm of an X-ray Hit Allocation to a Single Pixel in a Cluster and Its Test-Circuit Implementation

    DOE PAGES

    Deptuch, Grzegorz W.; Fahim, Farah; Grybos, Pawel; ...

    2017-06-28

    An on-chip implementable algorithm for allocating an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. A proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparison of the peak pulse amplitudes within an active neighborhood, and latching of the results of these comparisons constitute the three procedural steps of the algorithm. Its actuators are the grouping of pixels into one virtual pixel, which recovers composite signals, and event-driven strobes, which control the comparisons of fractional signals between neighboring pixels. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals exchanged between pixels. A test-circuit implementation of the algorithm was realized as a small array of 32 × 32 pixels, and the device was exposed to an 8 keV X-ray beam highly collimated to a diameter of 3 μm. The results of these tests, assessing the physical implementation of the algorithm, are given in this paper.
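    The compare-and-latch step amounts to winner-take-all allocation within an active neighbourhood, sketched here in software for illustration (the paper's implementation is asynchronous on-chip circuitry, and the 8-neighbourhood and threshold convention are assumptions):

```python
def allocate_hits(amps, thresh):
    """Winner-take-all hit allocation: a pixel above threshold claims the hit
    only if its peak amplitude beats every 8-connected neighbour, so a photon
    whose charge is shared across a cluster is assigned to exactly one pixel."""
    h, w = len(amps), len(amps[0])
    hits = []
    for r in range(h):
        for c in range(w):
            if amps[r][c] <= thresh:
                continue
            nb = [amps[r + dr][c + dc]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr or dc) and 0 <= r + dr < h and 0 <= c + dc < w]
            if all(amps[r][c] > v for v in nb):
                hits.append((r, c))
    return hits
```

    A shared-charge cluster thus yields a single hit at its amplitude maximum instead of several sub-threshold or double-counted entries.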

  7. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
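Per-pixel sharpness selection is one simple way to merge a focal stack into an extended-depth-of-field image. The sketch below assumes a gradient-magnitude sharpness measure and ignores the magnification variations that the camera's actual processing must handle:

```python
import numpy as np

def extended_dof(stack):
    """Merge a focal stack into one extended-depth-of-field image by
    selecting, per pixel, the slice with the largest local gradient
    magnitude (a crude sharpness measure)."""
    stack = np.asarray(stack, dtype=float)
    gy, gx = np.gradient(stack, axis=(1, 2))
    sharpness = gy ** 2 + gx ** 2
    best = np.argmax(sharpness, axis=0)          # sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Two slices, each "in focus" (textured) in a different half of the frame:
rng = np.random.default_rng(0)
a = np.zeros((8, 8)); a[:, :4] = rng.random((8, 4))
b = np.zeros((8, 8)); b[:, 4:] = rng.random((8, 4))
fused = extended_dof([a, b])
```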

  8. Improved localization accuracy in stochastic super-resolution fluorescence microscopy by K-factor image deshadowing

    PubMed Central

    Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G.; Menon, Rajesh; Gerton, Jordan M.; Jorgensen, Erik M.; Zalevsky, Zeev

    2013-01-01

Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key merit in localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies of single emitters that can reach an order of magnitude lower than the conventional resolving capabilities of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, resulting in a longer time needed for the construction of the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when implemented on raw data prior to localization, can improve the localization accuracy of standard existing methods, and also enable the localization of overlapping particles, allowing the use of increased fluorophore activation density, and thereby increased data collection speed. Numerical simulations of fluorescence data with random probe positions, and especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in the localization precision compared to single fitting techniques. Implementing the proposed concept on experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a decrease of 42% in the collection time of super-resolution data with the same resolution. PMID:24466491

  9. Human vision-based algorithm to hide defective pixels in LCDs

    NASA Astrophysics Data System (ADS)

    Kimpe, Tom; Coulier, Stefaan; Van Hoey, Gert

    2006-02-01

Producing displays without pixel defects, or repairing defective pixels, is technically not possible at this moment. This paper presents a new approach to solve this problem: defects are made invisible to the user by using image processing algorithms based on characteristics of the human eye. The performance of this new algorithm has been evaluated using two different methods. First, the theoretical response of the human eye to a series of images was analyzed, both before and after applying the defective-pixel compensation algorithm. These results show that it is indeed possible to mask a defective pixel. A second method was to perform a psycho-visual test where users were asked whether or not a defective pixel could be perceived. The results of these user tests also confirm the value of the new algorithm. Our "defective pixel correction" algorithm can be implemented very efficiently and cost-effectively as a pixel-data-processing algorithm inside the display, for instance in an FPGA, a DSP or a microprocessor. The described techniques are also valid for both monochrome and color displays, ranging from high-quality medical displays to consumer LCD TV applications.
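One way such a compensation can work, sketched under the assumption of a stuck-black pixel and an illustrative 3×3 blur model of the eye (not the paper's measured human-vision model): redistribute the lost luminance over the neighbors so the locally perceived brightness is preserved:

```python
import numpy as np

# Illustrative 3x3 weights for the eye's low-pass blur over the neighbors
# of a defect; the center is 0 because the defect itself stays black.
KERNEL = np.array([[1., 2., 1.],
                   [2., 0., 2.],
                   [1., 2., 1.]])
KERNEL /= KERNEL.sum()

def mask_defect(image, r, c):
    """Hide a stuck-black pixel at (r, c) by adding its intended luminance
    to the 8 neighbors, so the blurred (perceived) local brightness is
    approximately preserved."""
    out = image.astype(float).copy()
    lost = out[r, c]                      # luminance the defect cannot show
    out[r, c] = 0.0                       # the defective pixel shows black
    out[r - 1:r + 2, c - 1:c + 2] += lost * KERNEL
    return np.clip(out, 0.0, 1.0)

img = np.full((5, 5), 0.5)
comp = mask_defect(img, 2, 2)
```

The 3×3 neighborhood carries the same total light after compensation, so a low-pass observer sees roughly the original brightness.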

  10. Effects of vegetation heterogeneity and surface topography on spatial scaling of net primary productivity

    NASA Astrophysics Data System (ADS)

    Chen, J. M.; Chen, X.; Ju, W.

    2013-03-01

    Due to the heterogeneous nature of the land surface, spatial scaling is an inevitable issue in the development of land models coupled with low-resolution Earth system models (ESMs) for predicting land-atmosphere interactions and carbon-climate feedbacks. In this study, a simple spatial scaling algorithm is developed to correct errors in net primary productivity (NPP) estimates made at a coarse spatial resolution based on sub-pixel information of vegetation heterogeneity and surface topography. An eco-hydrological model BEPS-TerrainLab, which considers both vegetation and topographical effects on the vertical and lateral water flows and the carbon cycle, is used to simulate NPP at 30 m and 1 km resolutions for a 5700 km2 watershed with an elevation range from 518 m to 3767 m in the Qinling Mountain, Shaanxi Province, China. Assuming that the NPP simulated at 30 m resolution represents the reality and that at 1 km resolution is subject to errors due to sub-pixel heterogeneity, a spatial scaling index (SSI) is developed to correct the coarse resolution NPP values pixel by pixel. The agreement between the NPP values at these two resolutions is improved considerably from R2 = 0.782 to R2 = 0.884 after the correction. The mean bias error (MBE) in NPP modeled at the 1 km resolution is reduced from 14.8 g C m-2 yr-1 to 4.8 g C m-2 yr-1 in comparison with NPP modeled at 30 m resolution, where the mean NPP is 668 g C m-2 yr-1. The range of spatial variations of NPP at 30 m resolution is larger than that at 1 km resolution. Land cover fraction is the most important vegetation factor to be considered in NPP spatial scaling, and slope is the most important topographical factor for NPP spatial scaling especially in mountainous areas, because of its influence on the lateral water redistribution, affecting water table, soil moisture and plant growth. 
Other factors including leaf area index (LAI), elevation and aspect have small and additive effects on improving the spatial scaling between these two resolutions.
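The spirit of the spatial scaling correction can be illustrated with a toy nonlinear NPP model: a coarse pixel computed from mean inputs is biased relative to the mean of the sub-pixel values, and a multiplicative index corrects it pixel by pixel. The model function and all numbers are hypothetical stand-ins for BEPS-TerrainLab and the study's SSI, which is built from sub-pixel land cover and slope information rather than a full fine-resolution rerun:

```python
import numpy as np

def npp_model(lai):
    """Hypothetical saturating NPP response to leaf area index; a stand-in
    for the far more detailed BEPS-TerrainLab model."""
    return 1000.0 * (1.0 - np.exp(-0.5 * lai))

def spatial_scaling_index(subpixel_lai):
    """Ratio of the mean of sub-pixel NPP to the NPP of the mean input.
    For a nonlinear model this differs from 1 (a Jensen's-inequality-style
    bias), and multiplying the coarse estimate by it removes the bias."""
    fine = npp_model(subpixel_lai).mean()
    coarse = npp_model(subpixel_lai.mean())
    return fine / coarse

# A heterogeneous 1 km pixel: half dense forest, half sparse cover.
lai = np.array([6.0, 6.0, 0.5, 0.5])
ssi = spatial_scaling_index(lai)
corrected = ssi * npp_model(lai.mean())   # matches the sub-pixel mean by construction
```

Because the toy model is concave, the coarse estimate overshoots and the index falls below 1.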

  11. Effects of vegetation heterogeneity and surface topography on spatial scaling of net primary productivity

    NASA Astrophysics Data System (ADS)

    Chen, J. M.; Chen, X.; Ju, W.

    2013-07-01

Due to the heterogeneous nature of the land surface, spatial scaling is an inevitable issue in the development of land models coupled with low-resolution Earth system models (ESMs) for predicting land-atmosphere interactions and carbon-climate feedbacks. In this study, a simple spatial scaling algorithm is developed to correct errors in net primary productivity (NPP) estimates made at a coarse spatial resolution based on sub-pixel information of vegetation heterogeneity and surface topography. An eco-hydrological model BEPS-TerrainLab, which considers both vegetation and topographical effects on the vertical and lateral water flows and the carbon cycle, is used to simulate NPP at 30 m and 1 km resolutions for a 5700 km2 watershed with an elevation range from 518 m to 3767 m in the Qinling Mountain, Shaanxi Province, China. Assuming that the NPP simulated at 30 m resolution represents the reality and that at 1 km resolution is subject to errors due to sub-pixel heterogeneity, a spatial scaling index (SSI) is developed to correct the coarse resolution NPP values pixel by pixel. The agreement between the NPP values at these two resolutions is improved considerably from R2 = 0.782 to R2 = 0.884 after the correction. The mean bias error (MBE) in NPP modelled at the 1 km resolution is reduced from 14.8 g C m-2 yr-1 to 4.8 g C m-2 yr-1 in comparison with NPP modelled at 30 m resolution, where the mean NPP is 668 g C m-2 yr-1. The range of spatial variations of NPP at 30 m resolution is larger than that at 1 km resolution. Land cover fraction is the most important vegetation factor to be considered in NPP spatial scaling, and slope is the most important topographical factor for NPP spatial scaling especially in mountainous areas, because of its influence on the lateral water redistribution, affecting water table, soil moisture and plant growth. 
Other factors including leaf area index (LAI) and elevation have small and additive effects on improving the spatial scaling between these two resolutions.

  12. Experimental assessment and analysis of super-resolution in fluorescence microscopy based on multiple-point spread function fitting of spectrally demultiplexed images

    NASA Astrophysics Data System (ADS)

    Nishimura, Takahiro; Kimura, Hitoshi; Ogura, Yusuke; Tanida, Jun

    2018-06-01

This paper presents an experimental assessment and analysis of super-resolution microscopy based on multiple-point-spread-function fitting of spectrally demultiplexed images, using a designed DNA structure as a test target. For this purpose, a DNA structure was designed to have binding sites at a certain interval that is smaller than the diffraction limit. The structure was labeled with several types of quantum dots (QDs) to acquire their spatial information as spectrally encoded images. The obtained images are analyzed with a point-spread-function multifitting algorithm to determine the QD locations that indicate the binding site positions. The experimental results show that the labeled locations can be observed beyond the diffraction-limited resolution using three-colored fluorescence images obtained with a confocal fluorescence microscope. Numerical simulations show that labeling with eight types of QDs enables the positions aligned at 27.2-nm pitches on the DNA structure to be resolved with high accuracy.
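Multiple-PSF fitting can be sketched in one dimension: when candidate positions lie on a grid, the model is linear in the amplitudes, and a least-squares solve separates emitters closer than the PSF width. A noise-free toy with an assumed Gaussian PSF, not the paper's 2-D multifitting algorithm:

```python
import numpy as np

def psf(x, center, sigma=0.5):
    """Gaussian stand-in for the microscope's point spread function."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def fit_positions(signal, x, candidates, sigma=0.5, thresh=0.1):
    """Model the 1-D signal as a sum of PSFs at candidate grid positions;
    solve for the amplitudes by least squares and keep the candidates
    carrying significant amplitude."""
    A = np.stack([psf(x, c, sigma) for c in candidates], axis=1)
    amps, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return [float(c) for c, a in zip(candidates, amps) if a > thresh]

x = np.linspace(0.0, 10.0, 201)
# Two emitters separated by 2*sigma: their blurred peaks merge into a
# single broad lobe, so naive peak-finding sees only one spot.
signal = psf(x, 4.0) + 0.8 * psf(x, 5.0)
found = fit_positions(signal, x, candidates=np.arange(0.0, 10.5, 0.5))
```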

  13. Interferometric study of Betelgeuse in H band

    NASA Astrophysics Data System (ADS)

    Haubois, X.; Perrin, G.; Lacour, S.; Schuller, P. A.; Monnier, J. D.; Berger, J.-P.; Ridgway, S. T.; Millan-Gabet, R.; Pedretti, E.; Traub, W. A.

    2006-06-01

We present three-telescope interferometric observations of the supergiant star Betelgeuse (Alpha Ori, M2Iab) using the IOTA/IONIC interferometer (Whipple Observatory, Arizona) in early October 2005. Since IOTA is a three-telescope interferometer, we were able to make closure phase measurements, which allow us to image the star with several pixels across the disk. We discuss the fundamental parameters of Betelgeuse such as diameter, limb darkening and effective temperature. For the first time at this spatial resolution in the H band, closure phases provide interesting insights on the features of the object, since we detect a spot corresponding to 0.5% of the total received flux.
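The closure phase is the sum of the measured phases around the telescope triangle; per-telescope phase errors (e.g., atmospheric piston) enter baseline ij as e_i - e_j and cancel in the sum, which is what lets a three-telescope array constrain asymmetric structure such as the detected spot. A minimal sketch with assumed numbers:

```python
import numpy as np

def closure_phase(phi12, phi23, phi31):
    """Sum of baseline phases around the triangle, wrapped to (-pi, pi].
    Telescope-based phase errors cancel in this sum."""
    return float(np.angle(np.exp(1j * (phi12 + phi23 + phi31))))

# True object phases on the three baselines:
p12, p23, p31 = 0.3, -0.1, 0.25
# Corrupt each baseline with per-telescope (e.g. atmospheric) errors:
e1, e2, e3 = 0.7, -1.2, 0.4
measured = closure_phase(p12 + e1 - e2, p23 + e2 - e3, p31 + e3 - e1)
```

The corrupted measurement still yields the object's closure phase exactly.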

  14. Hexagonal Pixels and Indexing Scheme for Binary Images

    NASA Technical Reports Server (NTRS)

    Johnson, Gordon G.

    2004-01-01

A scheme for resampling binary-image data from a rectangular grid to a regular hexagonal grid and an associated tree-structured pixel-indexing scheme keyed to the level of resolution have been devised. This scheme could be utilized in conjunction with appropriate image-data-processing algorithms to enable automated retrieval and/or recognition of images. For some purposes, this scheme is superior to a prior scheme that relies on rectangular pixels: one example of such a purpose is recognition of fingerprints, which can be approximated more closely by use of line segments along hexagonal axes than by line segments along rectangular axes. This scheme could also be combined with algorithms for query-image-based retrieval of images via the Internet. A binary image on a rectangular grid is generated by raster scanning or by sampling on a stationary grid of rectangular pixels. In either case, each pixel (each cell in the rectangular grid) is denoted as either bright or dark, depending on whether the light level in the pixel is above or below a prescribed threshold. The binary data on such an image are stored in a matrix form that lends itself readily to searches for line segments aligned with either or both of the perpendicular coordinate axes. The first step in resampling onto a regular hexagonal grid is to make the resolution of the hexagonal grid fine enough to capture all the binary-image detail from the rectangular grid. In practice, this amounts to choosing a hexagonal-cell width equal to or less than a third of the rectangular-cell width. Once the data have been resampled onto the hexagonal grid, the image can readily be checked for line segments aligned with the hexagonal coordinate axes, which typically lie at angles of 30°, 90°, and 150° with respect to, say, the horizontal rectangular coordinate axis. 
Optionally, one can then rotate the rectangular image by 90°, then again sample onto the hexagonal grid and check for line segments at angles of 0°, 60°, and 120° to the original horizontal coordinate axis. The net result is that one has checked for line segments at angular intervals of 30°. For even finer angular resolution, one could, for example, then rotate the rectangular-grid image ±45° before sampling to perform checking for line segments at angular intervals of 15°.
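The resampling step can be sketched as nearest-pixel sampling at hexagon centers, with alternate rows offset by half a cell and row spacing of cell width times sqrt(3)/2 (standard hexagonal-grid geometry; boundary handling here is an assumption):

```python
import numpy as np

def resample_to_hex(binary_img, hex_w):
    """Resample a binary image (one unit per pixel, pixel centers at
    integer coordinates) onto a regular hexagonal grid by nearest-pixel
    sampling at each hexagon center. Per the text, hex_w of at most 1/3
    of the pixel width captures all the binary detail."""
    h, w = binary_img.shape
    dy = hex_w * np.sqrt(3.0) / 2.0       # vertical spacing of center rows
    cells = {}
    for row in range(int(h / dy)):
        y = row * dy
        x0 = 0.5 * hex_w if row % 2 else 0.0   # odd rows offset half a cell
        for col in range(int((w - x0) / hex_w)):
            x = x0 + col * hex_w
            yi = min(int(round(y)), h - 1)
            xi = min(int(round(x)), w - 1)
            cells[(row, col)] = int(binary_img[yi, xi])
    return cells

# A one-pixel-wide vertical line segment on a 9x9 rectangular grid:
img = np.zeros((9, 9), dtype=int)
img[:, 3] = 1
hex_img = resample_to_hex(img, hex_w=1.0 / 3.0)
```

The dark hexagons then trace the same vertical segment, ready for line checks along the hexagonal axes.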

  15. Bringing the Coastal Zone into Finer Focus

    NASA Astrophysics Data System (ADS)

    Guild, L. S.; Hooker, S. B.; Kudela, R. M.; Morrow, J. H.; Torres-Perez, J. L.; Palacios, S. L.; Negrey, K.; Dungan, J. L.

    2015-12-01

Measurements over extents from submeter to tens of meters are critical science requirements for the design and integration of remote sensing instruments for coastal zone research. Various coastal ocean phenomena operate at different scales (e.g., meters to kilometers). For example, river plumes and algal blooms have typical extents of tens of meters and therefore can be resolved with satellite data; however, shallow benthic ecosystem (e.g., coral, seagrass, and kelp) biodiversity and change are best studied at resolutions of submeter to meter, below the pixel size of typical satellite products. Natural phenomena do not fit neatly into gridded pixels, and the coastal zone is complicated by mixed pixels at the land-sea interface with a range of bio-optical signals from terrestrial and water components. In many standard satellite products, these coastal mixed pixels are masked out because they confound algorithms for the ocean color parameter suite. Finer spatial resolution satellite data can be obtained at the land/sea interface, but at the cost of spectral resolution. This remote sensing resolution challenge thwarts the advancement of research in the coastal zone. Further, remote sensing of benthic ecosystems and shallow sub-surface phenomena is challenged by the requirement to sense through the sea surface and through a water column with varying light conditions from the open ocean to the water's edge. For coastal waters, >80% of the remote sensing signal is scattered or absorbed by atmospheric constituents, sun glint from the sea surface, and water column components. 
In addition to in-water measurements from various platforms (e.g., ship, glider, mooring, and divers), low altitude aircraft outfitted with high quality bio-optical radiometer sensors and targeted channels matched with in-water sensors and higher altitude platform sensors for ocean color products, bridge the sea-truth measurements to the pixels acquired from satellite and high altitude platforms. We highlight a novel NASA airborne calibration, validation, and research capability for addressing the coastal remote sensing resolution challenge.

  16. Image quality improvement in cone-beam CT using the super-resolution technique.

    PubMed

    Oyama, Asuka; Kumagai, Shinobu; Arai, Norikazu; Takata, Takeshi; Saikawa, Yusuke; Shiraishi, Kenshiro; Kobayashi, Takenori; Kotoku, Jun'ichi

    2018-04-05

This study was conducted to improve cone-beam computed tomography (CBCT) image quality using the super-resolution technique, a method of inferring a high-resolution image from a low-resolution image. This technique uses two matrices, so-called dictionaries, constructed respectively from high-resolution and low-resolution image bases. For this study, a CBCT image, as a low-resolution image, is represented as a linear combination of atoms, the image bases in the low-resolution dictionary. The corresponding super-resolution image was inferred by multiplying the coefficients into the high-resolution dictionary atoms extracted from planning CT images. To evaluate the proposed method, we computed the root mean square error (RMSE) and structural similarity (SSIM). The RMSE between the super-resolution images and the planning CT images was as much as 0.81 times that obtained without the super-resolution technique, and the SSIM was as much as 1.29 times better. The super-resolution technique thus improved CBCT image quality.
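The coupled-dictionary inference can be sketched with a toy pair of dictionaries: represent the low-resolution patch in the low-res dictionary, then apply the same coefficients to the paired high-res atoms. Ordinary least squares stands in here for the sparse-coding solver, and the tiny dictionaries are invented for illustration:

```python
import numpy as np

def super_resolve(patch_lo, D_lo, D_hi):
    """Coupled-dictionary super-resolution sketch: solve for coefficients
    in the low-res dictionary, then synthesize with the paired high-res
    dictionary."""
    coeffs, *_ = np.linalg.lstsq(D_lo, patch_lo, rcond=None)
    return D_hi @ coeffs

# Toy paired dictionaries: two 4-sample high-res atoms and their
# 2x-downsampled (pair-averaged) low-res counterparts.
D_hi = np.array([[1., 0.],
                 [1., 0.],
                 [0., 1.],
                 [0., 1.]])
D_lo = D_hi.reshape(2, 2, 2).mean(axis=1)

patch_lo = np.array([0.5, 0.2])
patch_hi = super_resolve(patch_lo, D_lo, D_hi)
```

In the study, the high-res atoms come from planning CT images, so the inferred image inherits their detail.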

  17. Impact of spatial resolution on cirrus infrared satellite retrievals in the presence of cloud heterogeneity

    NASA Astrophysics Data System (ADS)

    Fauchez, T.; Platnick, S. E.; Meyer, K.; Zhang, Z.; Cornet, C.; Szczap, F.; Dubuisson, P.

    2015-12-01

Cirrus clouds are an important part of the Earth radiation budget, but an accurate assessment of their role remains highly uncertain. Cirrus optical properties such as Cloud Optical Thickness (COT) and ice crystal effective particle size are often retrieved with a combination of Visible/Near InfraRed (VNIR) and ShortWave-InfraRed (SWIR) reflectance channels. Alternatively, Thermal InfraRed (TIR) techniques, such as the Split Window Technique (SWT), have demonstrated better accuracy for retrievals of thin cirrus with small effective radii. However, current global operational algorithms for both retrieval methods assume that cloudy pixels are horizontally homogeneous (the Plane Parallel Approximation (PPA)) and independent (the Independent Pixel Approximation (IPA)). The impact of these approximations on ice cloud retrievals needs to be understood and, as far as possible, corrected. Horizontal heterogeneity effects in the TIR spectrum are dominated mainly by the PPA bias, which depends primarily on the COT subpixel heterogeneity; for solar reflectance channels, in addition to the PPA bias, the IPA can lead to significant retrieval errors due to significant photon horizontal transport between cloudy columns, as well as brightening and shadowing effects that are more difficult to quantify. The TIR range is thus particularly relevant for characterizing, as accurately as possible, thin cirrus clouds. Heterogeneity effects in the TIR are evaluated as a function of spatial resolution in order to estimate the optimal spatial resolution for TIR retrieval applications. 
These investigations are performed using a cirrus 3D cloud generator (3DCloud), a 3D radiative transfer code (3DMCPOL), and two retrieval algorithms, namely the operational MODIS retrieval algorithm (MOD06) and a research-level SWT algorithm.

  18. Field-portable lensfree tomographic microscope†

    PubMed Central

    Isikman, Serhan O.; Bishara, Waheb; Sikora, Uzair; Yaglidere, Oguzhan; Yeah, John; Ozcan, Aydogan

    2011-01-01

    We present a field-portable lensfree tomographic microscope, which can achieve sectional imaging of a large volume (~20 mm3) on a chip with an axial resolution of <7 μm. In this compact tomographic imaging platform (weighing only ~110 grams), 24 light-emitting diodes (LEDs) that are each butt-coupled to a fibre-optic waveguide are controlled through a cost-effective micro-processor to sequentially illuminate the sample from different angles to record lensfree holograms of the sample that is placed on the top of a digital sensor array. In order to generate pixel super-resolved (SR) lensfree holograms and hence digitally improve the achievable lateral resolution, multiple sub-pixel shifted holograms are recorded at each illumination angle by electromagnetically actuating the fibre-optic waveguides using compact coils and magnets. These SR projection holograms obtained over an angular range of ~50° are rapidly reconstructed to yield projection images of the sample, which can then be back-projected to compute tomograms of the objects on the sensor-chip. The performance of this compact and light-weight lensfree tomographic microscope is validated by imaging micro-beads of different dimensions as well as a Hymenolepis nana egg, which is an infectious parasitic flatworm. Achieving a decent three-dimensional spatial resolution, this field-portable on-chip optical tomographic microscope might provide a useful toolset for telemedicine and high-throughput imaging applications in resource-poor settings. PMID:21573311

  19. Super-resolution imaging applied to moving object tracking

    NASA Astrophysics Data System (ADS)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. Visual quality and precise localization of the tracked target are highly desired in modern tracking systems. The fact that the tracked object does not always appear clear makes the tracking result less precise. The reasons are low-quality video, system noise, small object size, and other factors. In order to improve the precision of tracking, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence; this is done by cropping several frames or all of the frames. The second step is tracking on the resulting super-resolution images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on the HSV color histogram, which handles conditions where the color of the object varies. The computational complexity and large memory requirements needed for the implementation of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and in good light conditions.
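The two-step pipeline (upscale, then track) can be sketched with deliberate stand-ins: pixel replication in place of the single-frame super-resolution model, and an intensity-centroid mean shift in place of Camshift's HSV histogram back-projection. All sizes and positions below are invented for illustration:

```python
import numpy as np

def upscale2x(frame):
    """2x upscaling by pixel replication -- a crude stand-in for the
    single-frame super-resolution step (a real method would add
    plausible detail via a learned or edge-aware model)."""
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)

def track_window(frame, window, iters=5):
    """Mean-shift-style tracking: repeatedly move a fixed-size window to
    the intensity centroid of the pixels it contains. Camshift would
    additionally adapt the window size and use histogram back-projection
    instead of raw intensity."""
    r, c, h, w = window
    for _ in range(iters):
        patch = frame[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.indices(patch.shape)
        r = int(round(r + (ys * patch).sum() / total - (h - 1) / 2))
        c = int(round(c + (xs * patch).sum() / total - (w - 1) / 2))
    return r, c, h, w

# A small bright object with its top-left corner at (10, 14):
frame = np.zeros((20, 20))
frame[10:12, 14:16] = 1.0
big = upscale2x(frame)                   # object now spans rows 20-23, cols 28-31
win = track_window(big, (16, 22, 8, 8))  # start the search window nearby
```

After upscaling, the small object covers four times as many pixels, so the window centers on it more precisely.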

  20. Color lensless digital holographic microscopy with micrometer resolution.

    PubMed

    Garcia-Sucerquia, Jorge

    2012-05-15

    Color digital lensless holographic microscopy with micrometer resolution is presented. Multiwavelength illumination of a biological sample and a posteriori color composition of the amplitude images individually reconstructed are used to obtain full-color representation of the microscopic specimen. To match the sizes of the reconstructed holograms for each wavelength, a reconstruction algorithm that allows for choosing the pixel size at the reconstruction plane independently of the wavelength and the reconstruction distance is used. The method is illustrated with experimental results.

  1. Resolution Enhancement of MODIS-derived Water Indices for Studying Persistent Flooding

    NASA Astrophysics Data System (ADS)

    Underwood, L. W.; Kalcic, M. T.; Fletcher, R. M.

    2012-12-01

Monitoring coastal marshes for persistent flooding and salinity stress is a high-priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offer timely and relatively accurate information for aiding wetland vegetation management. Monitoring accuracy is often limited by mixed pixels, which occur when the area represented by a pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within a 250 m MODIS pixel. Fusion of MODIS and Landsat can improve both spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, a method for estimating sub-pixel values from a MODIS time series of a Normalized Difference Water Index (NDWI), using temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily, NDWI computed from daily 250 m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30-m data. Each MODIS pixel was disaggregated into a mixture of the eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. 
A new daily time series was produced using both the reconstructed 250-m MODIS, with enhanced features, and the approximated daily 30-m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike, to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering flooding events by Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High-resolution images were formed for all days in 2008 between the first cloud-free Landsat scene and the last cloud-free Landsat scene. To refine and validate flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels adjusted to marsh to optimize thresholds for MODIS-derived time series of NDWI. Seasonal fluctuations were adjusted by subtracting the ten-year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and a combination of indices were compared. Flooding persistence mapped with higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is that improved mapping of the extent and duration of inundation can be provided.
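The exponential temporal weighting of the bracketing Landsat dates, followed by summing over the sub-pixel cover fractions, can be sketched as follows. The e-folding constant and all numbers are assumed for illustration; the study's exact weighting is not given:

```python
import numpy as np

def fused_ndwi(frac, ndwi_before, ndwi_after, days_before, days_after, tau=16.0):
    """Reconstruct one MODIS pixel's NDWI from Landsat-derived per-cover-type
    NDWI means bracketing the MODIS date. Each Landsat date is weighted
    exponentially by closeness in time (tau is an assumed e-folding
    constant), and the weighted class values are summed over the sub-pixel
    cover fractions."""
    wb = np.exp(-days_before / tau)
    wa = np.exp(-days_after / tau)
    per_class = (wb * ndwi_before + wa * ndwi_after) / (wb + wa)
    return float(frac @ per_class)

# Three cover types inside one 250 m pixel: open water, marsh, forest.
frac = np.array([0.2, 0.5, 0.3])        # classified sub-pixel fractions
before = np.array([0.8, 0.3, -0.1])     # Landsat class means, 4 days earlier
after = np.array([0.6, 0.1, -0.2])      # Landsat class means, 12 days later
value = fused_ndwi(frac, before, after, days_before=4, days_after=12)
```

The result lies between the two bracketing composites and leans toward the nearer Landsat date.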

  2. Resolution Enhancement of MODIS-Derived Water Indices for Studying Persistent Flooding

    NASA Technical Reports Server (NTRS)

    Underwood, L. W.; Kalcic, Maria; Fletcher, Rose

    2012-01-01

Monitoring coastal marshes for persistent flooding and salinity stress is a high-priority issue in Louisiana. Remote sensing can identify environmental variables that can be indicators of marsh habitat conditions, and offer timely and relatively accurate information for aiding wetland vegetation management. Monitoring accuracy is often limited by mixed pixels, which occur when the area represented by a pixel encompasses more than one cover type. Mixtures of marsh grasses and open water in 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data can impede flood area estimation. Flood mapping of such mixtures requires finer spatial resolution data to better represent the cover type composition within a 250 m MODIS pixel. Fusion of MODIS and Landsat can improve both spectral and temporal resolution of time series products to resolve rapid changes from forcing mechanisms like hurricane winds and storm surge. For this study, a method for estimating sub-pixel values from a MODIS time series of a Normalized Difference Water Index (NDWI), using temporal weighting, was implemented to map persistent flooding in Louisiana coastal marshes. Ordinarily, NDWI computed from daily 250 m MODIS pixels represents a mixture of fragmented marshes and water. Here, sub-pixel NDWI values were derived for MODIS data using Landsat 30-m data. Each MODIS pixel was disaggregated into a mixture of the eight cover types according to the classified image pixels falling inside the MODIS pixel. The Landsat pixel means for each cover type inside a MODIS pixel were computed for the Landsat data preceding the MODIS image in time and for the Landsat data succeeding the MODIS image. The Landsat data were then weighted exponentially according to closeness in date to the MODIS data. The reconstructed MODIS data were produced by summing the product of fractional cover type with estimated NDWI values within each cover type. 
A new daily time series was produced using both the reconstructed 250-m MODIS, with enhanced features, and the approximated daily 30-m high-resolution image based on Landsat data. The algorithm was developed and tested over the Calcasieu-Sabine Basin, which was heavily inundated by storm surge from Hurricane Ike, to study the extent and duration of flooding following the storm. Time series for 2000-2009, covering flooding events by Hurricane Rita in 2005 and Hurricane Ike in 2008, were derived. High-resolution images were formed for all days in 2008 between the first cloud-free Landsat scene and the last cloud-free Landsat scene. To refine and validate flooding maps, each time series was compared to Louisiana Coastwide Reference Monitoring System (CRMS) station water levels adjusted to marsh to optimize thresholds for MODIS-derived time series of NDWI. Seasonal fluctuations were adjusted by subtracting the ten-year average NDWI for marshes, excluding the hurricane events. Results from different NDWI indices and a combination of indices were compared. Flooding persistence mapped with higher-resolution data showed some improvement over the original MODIS time series estimates. The advantage of this novel technique is that improved mapping of the extent and duration of inundation can be provided.

  3. Toward an Objective Enhanced-V Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Brunner, Jason; Feltz, Wayne; Moses, John; Rabin, Robert; Ackerman, Steven

    2007-01-01

The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather in previous studies. This study describes an algorithmic approach to objectively detect enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology consists of cross-correlation statistics of pixels and thresholds of enhanced-V quantitative parameters. The effectiveness of the enhanced-V detection method will be examined using Geostationary Operational Environmental Satellite, MODerate-resolution Imaging Spectroradiometer, and Advanced Very High Resolution Radiometer image data from case studies in the 2003-2006 seasons. The main goal of this study is to develop an objective enhanced-V detection algorithm for future implementation into operations with future sensors, such as GOES-R.
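The cross-correlation step can be sketched by sliding a synthetic V-shaped template over a cold-anomaly field (background 0, colder pixels larger) and thresholding the normalized cross-correlation. The template shape, threshold, and field are all illustrative, not the study's parameters:

```python
import numpy as np

def v_template(size=7):
    """A synthetic V of cold pixels (1 = cold), vertex at the left with
    arms opening at 45 degrees; purely illustrative."""
    t = np.zeros((size, size))
    mid = size // 2
    for j in range(mid + 1):
        t[mid - j, j] = 1.0
        t[mid + j, j] = 1.0
    return t

def ncc_detect(field, template, thresh=0.8):
    """Return offsets where the normalized cross-correlation between the
    template and the local window exceeds thresh -- a toy version of an
    objective cross-correlation screening step."""
    th, tw = template.shape
    tz = template - template.mean()
    tz = tz / np.linalg.norm(tz)
    hits = []
    for r in range(field.shape[0] - th + 1):
        for c in range(field.shape[1] - tw + 1):
            win = field[r:r + th, c:c + tw]
            wz = win - win.mean()
            norm = np.linalg.norm(wz)
            if norm > 0 and float((tz * wz).sum()) / norm > thresh:
                hits.append((r, c))
    return hits

# Embed one V-shaped cold signature in an otherwise quiet field:
field = np.zeros((15, 15))
field[4:11, 5:12] += v_template()
found = ncc_detect(field, v_template())
```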

  4. Simulations of radiation-damaged 3D detectors for the Super-LHC

    NASA Astrophysics Data System (ADS)

    Pennicard, D.; Pellegrini, G.; Fleta, C.; Bates, R.; O'Shea, V.; Parkes, C.; Tartoni, N.

    2008-07-01

    Future high-luminosity colliders, such as the Super-LHC at CERN, will require pixel detectors capable of withstanding extremely high radiation damage. In this article, the performance of various 3D detector structures is simulated with radiation damage of up to 1×10^16 1 MeV n_eq/cm^2. The simulations show that 3D detectors have higher collection efficiency and lower depletion voltages than planar detectors due to their small electrode spacing. When designing a 3D detector with a large pixel size, such as an ATLAS sensor, different electrode column layouts are possible. Using a small number of n+ readout electrodes per pixel leads to higher depletion voltages and lower collection efficiency, due to the larger electrode spacing. Conversely, using more electrodes increases both the insensitive volume occupied by the electrode columns and the capacitive noise. Overall, the best performance after 1×10^16 1 MeV n_eq/cm^2 damage is achieved by using 4-6 n+ electrodes per pixel.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deptuch, G. W.; Fahim, F.; Grybos, P.

    An on-chip implementable algorithm for allocation of an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. Its proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparisons of peak amplitudes of pulses within an active neighborhood, and finally latching of the results of these comparisons constitute the three procedural steps of the algorithm. A grouping of pixels into one virtual pixel that recovers composite signals, and event-driven strobes that control comparisons of fractional signals between neighboring pixels, are the actuators of the algorithm. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals that are exchanged between pixels. A test-circuit implementation of the algorithm was achieved with a small array of 32×32 pixels, and the device was exposed to an 8 keV X-ray beam collimated to a diameter of 3 μm. The results of these tests, assessing the physical implementation of the algorithm, are given in the paper.
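
    The three procedural steps named above (activation, amplitude comparison, latching) can be sketched in software, keeping in mind that the real implementation is asynchronous on-chip circuitry; the data layout, threshold, and function names here are illustrative:

```python
# Hedged sketch of the three steps: activate the pixels in a neighborhood
# that see charge, compare peak amplitudes within the active group, and
# latch the result so the hit is allocated to exactly one pixel. Summing
# the group recovers the composite signal of the "virtual pixel".

def allocate_hit(amplitudes, activation_threshold=0.1):
    """Allocate a shared-charge hit to the pixel with the largest amplitude.

    amplitudes: dict mapping (row, col) -> peak pulse amplitude within the
    neighborhood activated by one photon.
    Returns (winning_pixel, composite_signal).
    """
    # Step 1: activation - keep only pixels above the per-pixel threshold.
    active = {p: a for p, a in amplitudes.items() if a > activation_threshold}
    # Step 2: comparison - find the largest peak amplitude in the group.
    winner = max(active, key=active.get)
    # Step 3: latching - the winner is assigned the full composite signal.
    composite = sum(active.values())
    return winner, composite

# One photon whose charge spreads over a 2x2 pixel group:
cluster = {(0, 0): 0.55, (0, 1): 0.30, (1, 0): 0.12, (1, 1): 0.03}
pixel, signal = allocate_hit(cluster)
```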

  7. Evaluation of fluorophores for optimal performance in localization-based super-resolution imaging

    PubMed Central

    Dempsey, Graham T.; Vaughan, Joshua C.; Chen, Kok Hao; Bates, Mark; Zhuang, Xiaowei

    2011-01-01

    One approach to super-resolution fluorescence imaging uses sequential activation and localization of individual fluorophores to achieve high spatial resolution. Essential to this technique is the choice of fluorescent probes — the properties of the probes, including photons per switching event, on/off duty cycle, photostability, and number of switching cycles, largely dictate the quality of super-resolution images. While many probes have been reported, a systematic characterization of the properties of these probes and their impact on super-resolution image quality has been described in only a few cases. Here, we quantitatively characterized the switching properties of 26 organic dyes and directly related these properties to the quality of super-resolution images. This analysis provides a set of guidelines for characterization of super-resolution probes and a resource for selecting probes based on performance. Our evaluation identified several photoswitchable dyes with good to excellent performance in four independent spectral ranges, with which we demonstrated low crosstalk, four-color super-resolution imaging. PMID:22056676

  8. Hybrid Optimization of Object-Based Classification in High-Resolution Images Using Continuous Ant Colony Algorithm with Emphasis on Building Detection

    NASA Astrophysics Data System (ADS)

    Tamimi, E.; Ebadi, H.; Kiani, A.

    2017-09-01

    Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Because HSR images offer only a limited number of spectral bands, adding other features can improve accuracy; however, adding features also raises the probability of including mutually dependent ones, which degrades accuracy. In addition, several parameters must be set for Support Vector Machine (SVM) classification. It is therefore necessary to determine the classification parameters and select independent features simultaneously, according to the image type, and an optimization algorithm is an efficient way to solve this problem. Moreover, pixel-based classification faces several challenges, such as salt-and-pepper results and high computational cost for high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying a continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high level of automation, independence from image scene and type, reduced post-processing for building edge reconstruction, and improved accuracy. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. Compared with optimized pixel-based SVM classification, the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively; its Kappa coefficient was 6% higher than that of RF classification. Processing time was relatively low because the unit of analysis was the image object. These results show the superiority of the proposed method in terms of both time and accuracy.
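
    The joint optimization the abstract describes (classifier parameters and a feature subset chosen together against a cross-validated score) can be sketched with an exhaustive toy search. A real implementation would use continuous ACO and an actual SVM; here a stand-in score function makes the selection loop itself runnable, and every name and value is illustrative:

```python
# Minimal sketch of jointly choosing SVM parameters (C, gamma) and a
# feature subset by scoring each candidate. toy_cv_score is a stand-in
# for cross-validated accuracy; it prefers mid-range parameters and
# penalizes a correlated (dependent) feature pair, mimicking the accuracy
# drop the abstract attributes to dependent features.
import itertools

def toy_cv_score(c, gamma, features):
    score = 1.0 - abs(c - 10.0) / 100.0 - abs(gamma - 0.1)
    if 0 in features and 2 in features:   # correlated pair -> accuracy drop
        score -= 0.2
    return score - 0.01 * len(features)   # mild parsimony pressure

def joint_search(c_grid, gamma_grid, all_features):
    """Exhaustive stand-in for the ACO search over parameters + features."""
    best, best_score = None, float("-inf")
    for c, g in itertools.product(c_grid, gamma_grid):
        for r in range(1, len(all_features) + 1):
            for subset in itertools.combinations(all_features, r):
                s = toy_cv_score(c, g, subset)
                if s > best_score:
                    best, best_score = (c, g, subset), s
    return best, best_score

best, score = joint_search([1.0, 10.0, 100.0], [0.01, 0.1, 1.0], [0, 1, 2])
```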

  9. Status and Construction of the Belle II DEPFET pixel system

    NASA Astrophysics Data System (ADS)

    Lütticke, Florian

    2014-06-01

    DEpleted P-channel Field Effect Transistor (DEPFET) active pixel detectors combine detection with a first amplification stage in a fully depleted detector, resulting in a superb signal-to-noise ratio even for thin sensors. Two layers of thin (75 micron) silicon DEPFET pixels will be used as the innermost vertex system, very close to the beam pipe in the Belle II detector at the SuperKEKB facility. The status of the 8-million-pixel DEPFET detector, the latest developments, and current system tests will be discussed.

  10. Tracking subpixel targets in domestic environments

    NASA Astrophysics Data System (ADS)

    Govinda, V.; Ralph, J. F.; Spencer, J. W.; Goulermas, J. Y.; Smith, D. H.

    2006-05-01

    In recent years, closed circuit cameras have become a common feature of urban life. There are environments however where the movement of people needs to be monitored but high resolution imaging is not necessarily desirable: rooms where privacy is required and the occupants are not comfortable with the perceived intrusion. Examples might include domiciliary care environments, prisons and other secure facilities, and even large open plan offices. This paper discusses algorithms that allow activity within this type of sensitive environment to be monitored using data from low resolution cameras (ones where all objects of interest are sub-pixel and cannot be resolved) and other non-intrusive sensors. The algorithms are based on techniques originally developed for wide area reconnaissance and surveillance applications. Of particular importance is determining the minimum spatial resolution that is required to provide a specific level of coverage and reliability.

  11. Super-resolution convolutional neural network for the improvement of the image quality of magnified images in chest radiographs

    NASA Astrophysics Data System (ADS)

    Umehara, Kensuke; Ota, Junko; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki

    2017-02-01

    Single-image super-resolution (SR) methods can generate a high-resolution (HR) image from a low-resolution (LR) image by enhancing image resolution. In medical imaging, HR images are expected to provide more accurate diagnoses with the practical application of HR displays. In recent years, the super-resolution convolutional neural network (SRCNN), one of the state-of-the-art deep-learning-based SR methods, has been proposed in computer vision. In this study, we applied and evaluated the SRCNN scheme to improve the image quality of magnified images in chest radiographs. For evaluation, a total of 247 chest X-rays were sampled from the JSRT database and divided into 93 training cases without nodules and 152 test cases with lung nodules. The SRCNN was trained using the training dataset, and with the trained SRCNN, the HR image was reconstructed from the LR one. We compared the image quality of the SRCNN against conventional image interpolation methods: nearest-neighbor, bilinear, and bicubic interpolation. For quantitative evaluation, we measured two image quality metrics: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). With the SRCNN scheme, PSNR and SSIM were significantly higher than with the three interpolation methods (p<0.001). Visual assessment confirmed that the SRCNN produced much sharper edges than conventional interpolation methods, without any obvious artifacts. These preliminary results indicate that the SRCNN scheme significantly outperforms conventional interpolation algorithms for enhancing image resolution, and that its use can substantially improve the image quality of magnified chest radiographs.
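
    Of the two metrics used above, PSNR has a closed form worth spelling out: PSNR = 10·log10(MAX²/MSE), with MAX = 255 for 8-bit images (SSIM additionally compares local luminance, contrast, and structure). A small worked sketch with made-up pixel values:

```python
# PSNR between a reference image and an estimate, for 8-bit data.
# Images are given here as flat lists of pixel values for simplicity.
import math

def psnr(reference, estimate, max_val=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / len(reference)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [52, 55, 61, 59, 70, 61, 76, 61]
est = [52, 54, 61, 60, 70, 62, 76, 60]   # off by at most 1 per pixel
value = psnr(ref, est)                   # MSE = 0.5 -> about 51.1 dB
```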

  12. Validation of Suomi-NPP VIIRS sea ice concentration with very high-resolution satellite and airborne camera imagery

    NASA Astrophysics Data System (ADS)

    Baldwin, Daniel; Tschudi, Mark; Pacifici, Fabio; Liu, Yinghui

    2017-08-01

    Two independent VIIRS-based Sea Ice Concentration (SIC) products are validated against SIC as estimated from Very High Spatial Resolution Imagery for several VIIRS overpasses. The 375 m resolution VIIRS SIC from the Interface Data Processing Segment (IDPS) SIC algorithm is compared against estimates made from 2 m DigitalGlobe (DG) WorldView-2 imagery and also against estimates created from 10 cm Digital Mapping System (DMS) camera imagery. The 750 m VIIRS SIC from the Enterprise SIC algorithm is compared against DG imagery. The IDPS vs. DG comparisons reveal that, due to algorithm issues, many of the IDPS SIC retrievals were falsely assigned ice-free values when the pixel was clearly over ice. These false values increased the validation bias and RMS statistics. The IDPS vs. DMS comparisons were largely over ice-covered regions and did not demonstrate the false retrieval issue. The validation results show that products from both the IDPS and Enterprise algorithms were within or very close to the 10% accuracy (bias) specifications in both the non-melting and melting conditions, but only products from the Enterprise algorithm met the 25% specifications for the uncertainty (RMS).

  13. An Integrated Retrieval Framework for AMSR2: Implications for Light Precipitation and Sea Ice Edge Detectability

    NASA Astrophysics Data System (ADS)

    Duncan, D.; Kummerow, C. D.; Meier, W.

    2016-12-01

    Over the lifetime of AMSR-E, operational retrieval algorithms were developed and run for precipitation, ocean suite (SST, wind speed, cloud liquid water path, and column water vapor over ocean), sea ice, snow water equivalent, and soil moisture. With a separate algorithm for each group, the retrievals were never interactive or integrated in any way despite many co-sensitivities. AMSR2, the follow-on mission to AMSR-E, retrieves the same parameters at a slightly higher spatial resolution. We have combined the operational algorithms for AMSR2 in a way that facilitates sharing information between the retrievals. Difficulties that arose were mainly related to calibration, spatial resolution, coastlines, and order of processing. The integration of all algorithms for AMSR2 has numerous benefits, including better detection of light precipitation and sea ice, fewer screened out pixels, and better quality flags. Integrating the algorithms opens up avenues for investigating the limits of detectability for precipitation from a passive microwave radiometer and the impact of spatial resolution on sea ice edge detection; these are investigated using CloudSat and MODIS coincident observations from the A-Train constellation.

  14. A novel method to improve MODIS AOD retrievals in cloudy pixels using an analog ensemble approach

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Raman, A.; Delle Monache, L.; Alessandrini, S.; Cheng, W. Y. Y.; Gaubert, B.; Arellano, A. F.

    2016-12-01

    Particulate matter (PM) concentrations are one of the fundamental indicators of air quality. Earth-orbiting satellite platforms acquire column aerosol abundance that can in turn provide information about PM concentrations. A serious limitation of column aerosol retrievals from low-Earth-orbiting satellites is that the algorithms rest on clear-sky assumptions and do not retrieve AOD in cloudy pixels. After filtering cloudy pixels, these algorithms also remove the brightest and darkest 25% of the remaining pixels over ocean, and the brightest and darkest 50% over land, to filter any residual contamination from clouds. This becomes a critical issue in regions that experience a monsoon, such as Asia and North America. In North America, the monsoon season brings a wide variety of extreme air quality events, such as fires in California and dust storms in Arizona, and assessing these episodic events warrants frequent aerosol observations from remote sensing retrievals. In this study, we demonstrate a method to fill in cloudy pixels in Moderate Resolution Imaging Spectroradiometer (MODIS) AOD retrievals based on ensembles generated using an analog-based approach (AnEn). It provides a probabilistic distribution of AOD in cloudy pixels using historical records of model simulations of meteorological predictors such as AOD, relative humidity, and wind speed, together with past observational records of MODIS AOD at a given target site. We use simulations from the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) run at a resolution comparable to MODIS AOD. Analogs selected from the summer months (June, July) of 2011-2013 from the model and corresponding observations are used as a training dataset. Then, missing AOD retrievals in cloudy pixels in the last 31 days of the selected period are estimated. Here, we use AERONET stations as target sites to facilitate comparison against in-situ measurements.
We use two approaches to evaluate the estimated AOD: 1) by comparing against reanalysis AOD, 2) by inverting AOD to PM10 concentrations and then comparing those with measured PM10. AnEn is an efficient approach to generate an ensemble as it involves only one model run and provides an estimate of uncertainty that complies with the physical and chemical state of the atmosphere.
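
    The core of the analog-ensemble idea can be sketched directly: for a cloudy target time, find the k historical times whose model predictors best match the model forecast at the target, and take the satellite AODs observed at those analog times as the ensemble. The predictor set, standardization, and data below are illustrative, not the study's configuration:

```python
# Sketch of the AnEn search: rank historical predictor vectors
# (model AOD, relative humidity, wind speed) by standardized Euclidean
# distance to the target-time forecast and return the k nearest
# observations as the ensemble.
import math

def analog_ensemble(target_pred, hist_preds, hist_obs, k=3):
    """Return the k observed values whose predictor vectors are closest
    (per-predictor standardized Euclidean distance) to the target."""
    n = len(target_pred)
    # Standardize each predictor by its historical spread.
    spreads = [max(1e-9, max(p[j] for p in hist_preds) -
                         min(p[j] for p in hist_preds)) for j in range(n)]
    def dist(p):
        return math.sqrt(sum(((p[j] - target_pred[j]) / spreads[j]) ** 2
                             for j in range(n)))
    ranked = sorted(range(len(hist_preds)), key=lambda i: dist(hist_preds[i]))
    return [hist_obs[i] for i in ranked[:k]]

# Predictors: (model AOD, relative humidity %, wind speed m/s)
history = [(0.20, 60, 3.0), (0.80, 90, 1.0), (0.22, 65, 3.5),
           (0.50, 80, 2.0), (0.21, 58, 2.8)]
observed_aod = [0.18, 0.75, 0.25, 0.48, 0.20]
ensemble = analog_ensemble((0.21, 62, 3.1), history, observed_aod, k=3)
```

The spread of the returned ensemble is what supplies the probabilistic AOD estimate for the cloudy pixel.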

  15. Towards real-time image deconvolution: application to confocal and STED microscopy

    PubMed Central

    Zanella, R.; Zanghirati, G.; Cavicchioli, R.; Zanni, L.; Boccacci, P.; Bertero, M.; Vicidomini, G.

    2013-01-01

    Although deconvolution can improve the quality of any type of microscope, the high computational time required has so far limited its widespread adoption. Here we demonstrate the ability of the scaled-gradient-projection (SGP) method to provide accelerated versions of the algorithms most used in microscopy. To achieve further increases in efficiency, we also consider implementations on graphics processing units (GPUs). We test the proposed algorithms on both synthetic and real data from confocal and STED microscopy. Combining the SGP method with the GPU implementation, we achieve a speed-up factor of about 25 to 690 with respect to the conventional algorithm. The excellent results obtained on STED microscopy images demonstrate the synergy between super-resolution techniques and image deconvolution. Furthermore, real-time processing preserves one of the most important properties of STED microscopy, i.e., the ability to provide fast sub-diffraction-resolution recordings. PMID:23982127
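
    The iterative schemes that SGP accelerates include maximum-likelihood deconvolution of the Richardson-Lucy type. As a point of reference, here is a plain, unaccelerated 1-D Richardson-Lucy iteration (SGP adds scaled gradient projections and step-length rules on top of this basic structure); the blur kernel and signal are toy values:

```python
# Plain 1-D Richardson-Lucy deconvolution on a toy point source,
# illustrating the kind of iteration that SGP accelerates.

def convolve(signal, kernel):
    """'Same'-size 1-D convolution with a symmetric, normalized kernel."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def richardson_lucy(observed, kernel, iterations=50):
    estimate = [1.0] * len(observed)
    for _ in range(iterations):
        blurred = convolve(estimate, kernel)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, kernel)   # kernel is symmetric
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

kernel = [0.25, 0.5, 0.25]                  # symmetric blur (PSF)
truth = [0.0, 0.0, 10.0, 0.0, 0.0]          # a point source
observed = convolve(truth, kernel)          # blurred measurement
restored = richardson_lucy(observed, kernel, iterations=200)
```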

  16. Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves

    NASA Astrophysics Data System (ADS)

    Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua

    2017-09-01

    In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is well suited to multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation results in a banded-sparse matrix equation. Since the coefficient matrix remains unchanged throughout the simulation, the lower-upper (LU) decomposition of the matrix needs to be performed only once, at the beginning of the calculation. Moreover, the reverse Cuthill-McKee (RCM) technique, an effective preprocessing technique for bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
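
    The efficiency argument (LU-factor the unchanging coefficient matrix once, then reuse the factors at every time step) can be shown in miniature. A real solver would use a banded or sparse LU with RCM reordering; this tiny dense Doolittle LU only illustrates the factor-once, solve-many pattern:

```python
# Factor the coefficient matrix once, then reuse the L and U factors for
# many right-hand sides (one per "time step"). Doolittle LU without
# pivoting, adequate for this well-conditioned demo matrix.

def lu_decompose(a):
    n = len(a)
    lower = [[0.0] * n for _ in range(n)]
    upper = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            upper[i][j] = a[i][j] - sum(lower[i][k] * upper[k][j]
                                        for k in range(i))
        lower[i][i] = 1.0
        for j in range(i + 1, n):
            lower[j][i] = (a[j][i] - sum(lower[j][k] * upper[k][i]
                                         for k in range(i))) / upper[i][i]
    return lower, upper

def lu_solve(lower, upper, b):
    """Forward/back substitution using the precomputed factors."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(lower[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(upper[i][k] * x[k]
                           for k in range(i + 1, n))) / upper[i][i]
    return x

a = [[4.0, 1.0], [1.0, 3.0]]
l_fac, u_fac = lu_decompose(a)            # done once, before time stepping
x1 = lu_solve(l_fac, u_fac, [1.0, 2.0])   # cheap solve at each "time step"
x2 = lu_solve(l_fac, u_fac, [5.0, 5.0])
```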

  17. Implementation theory of distortion-invariant pattern recognition for optical and digital signal processing systems

    NASA Astrophysics Data System (ADS)

    Lhamon, Michael Earl

    A pattern recognition system that uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This increases the computational burden but also exposes a higher level of parallelism that common computing platforms fail to exploit. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation, and they map readily onto parallel digital architectures, which in turn suggest new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern whose orientation is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for applying other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulties implementing full complex filter structures.
Typically, optical systems (like the 4f correlators) are limited to phase-only implementation, with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible, and it has the advantage of time-averaging the entire filter bank at real-time rates. Time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.

  18. Adaptive Markov Random Fields for Example-Based Super-resolution of Faces

    NASA Astrophysics Data System (ADS)

    Stephenson, Todd A.; Chen, Tsuhan

    2006-12-01

    Image enhancement of low-resolution images can be done through methods such as interpolation, super-resolution using multiple video frames, and example-based super-resolution. Example-based super-resolution, in particular, is suited to images that have a strong prior (for those frameworks that work on only a single image, it is more like image restoration than traditional, multiframe super-resolution). For example, hallucination and Markov random field (MRF) methods use examples drawn from the same domain as the image being enhanced to determine what the missing high-frequency information is likely to be. We propose to use even stronger prior information by extending MRF-based super-resolution to use adaptive observation and transition functions, that is, to make these functions region-dependent. We show with face images how we can adapt the modeling for each image patch so as to improve the resolution.

  19. Co-registration of Laser Altimeter Tracks with Digital Terrain Models and Applications in Planetary Science

    NASA Technical Reports Server (NTRS)

    Glaeser, P.; Haase, I.; Oberst, J.; Neumann, G. A.

    2013-01-01

    We have derived algorithms and techniques to precisely co-register laser altimeter profiles with gridded Digital Terrain Models (DTMs), typically derived from stereo images. The algorithm consists of an initial grid search followed by a least-squares matching and yields the translation parameters at sub-pixel level needed to align the DTM and the laser profiles in 3D space. This software tool was primarily developed and tested for co-registration of laser profiles from the Lunar Orbiter Laser Altimeter (LOLA) with DTMs derived from the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) stereo images. Data sets can be co-registered with positional accuracy between 0.13 m and several meters depending on the pixel resolution and amount of laser shots, where rough surfaces typically result in more accurate co-registrations. Residual heights of the data sets are as small as 0.18 m. The software can be used to identify instrument misalignment, orbit errors, pointing jitter, or problems associated with reference frames being used. Also, assessments of DTM effective resolutions can be obtained. From the correct position between the two data sets, comparisons of surface morphology and roughness can be made at laser footprint- or DTM pixel-level. The precise co-registration allows us to carry out joint analysis of the data sets and ultimately to derive merged high-quality data products. Examples of matching other planetary data sets, like LOLA with LRO Wide Angle Camera (WAC) DTMs or Mars Orbiter Laser Altimeter (MOLA) with stereo models from the High Resolution Stereo Camera (HRSC) as well as Mercury Laser Altimeter (MLA) with Mercury Dual Imaging System (MDIS) are shown to demonstrate the broad science applications of the software tool.
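
    The two-stage matching strategy described above (a coarse grid search followed by a refinement that recovers the translation at sub-pixel level) can be illustrated in one dimension. Here a smooth synthetic "terrain" stands in for the DTM, and the refinement is a parabolic fit through the cost minimum, our simplified stand-in for the paper's least-squares matching:

```python
# 1-D sketch of laser-profile/DTM co-registration: integer grid search
# over trial offsets, then parabolic refinement of the squared-residual
# cost to sub-pixel precision.
import math

def terrain(x):
    return 10.0 * math.sin(0.3 * x) + 0.5 * x   # synthetic height profile

def cost(profile, offset, xs):
    """Sum of squared height residuals for a trial along-track offset."""
    return sum((h - terrain(x + offset)) ** 2 for h, x in zip(profile, xs))

def coregister(profile, xs, search=range(-5, 6)):
    # Stage 1: coarse grid search over integer offsets.
    best = min(search, key=lambda s: cost(profile, s, xs))
    # Stage 2: sub-pixel refinement from the parabola through the cost at
    # best-1, best, best+1 (vertex of the fit).
    c0, c1, c2 = (cost(profile, best + d, xs) for d in (-1, 0, 1))
    denom = c0 - 2.0 * c1 + c2
    return best + (0.5 * (c0 - c2) / denom if denom else 0.0)

xs = [float(i) for i in range(40)]
true_shift = 2.4
laser_profile = [terrain(x + true_shift) for x in xs]   # "laser shots"
recovered = coregister(laser_profile, xs)               # close to 2.4
```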

  20. High Resolution Trichromatic Road Surface Scanning with a Line Scan Camera and Light Emitting Diode Lighting for Road-Kill Detection.

    PubMed

    Lopes, Gil; Ribeiro, A Fernando; Sillero, Neftalí; Gonçalves-Seco, Luís; Silva, Cristiano; Franch, Marc; Trigueiros, Paulo

    2016-04-19

    This paper presents a road surface scanning system that operates with a trichromatic line scan camera and light-emitting-diode (LED) lighting, achieving road surface resolution under a millimeter. It was part of a project named Roadkills-Intelligent systems for surveying mortality of amphibians in Portuguese roads, sponsored by the Portuguese Science and Technology Foundation. A trailer was developed to accommodate the complete system, with standalone power generation; computer image capture and recording; controlled lighting to operate day or night without disturbance; an incremental encoder with 5000 pulses per revolution attached to one of the trailer wheels; sub-meter Global Positioning System (GPS) localization; and compatibility with any vehicle with a trailer towing system, the focus being a complete low-cost solution. The paper describes the system architecture of the developed prototype, its calibration procedure, the experiments performed, and some of the results obtained, along with a discussion and comparison with existing systems. Sustained trailer speeds of up to 30 km/h are achievable without loss of quality at a 4096-pixel image width (1 m of road surface) with 250 µm/pixel resolution. Higher scanning speeds can be achieved by lowering the image resolution (120 km/h at 1 mm/pixel). Computer vision algorithms are under development to operate on the captured images in order to automatically detect road-killed amphibians.
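
    The speed/resolution trade-off quoted above follows from simple arithmetic: at a given along-track pixel size, the camera must capture speed/pixel_size lines per second, so coarsening the resolution permits proportionally higher vehicle speed at the same line rate. A worked check of the two figures in the abstract:

```python
# Line-scan rate needed so that each scan line advances exactly one pixel
# along the road. Both operating points quoted in the abstract
# (30 km/h at 250 um/pixel and 120 km/h at 1 mm/pixel) demand the same
# line rate of about 33,300 lines/s.

def required_line_rate(speed_kmh, pixel_size_m):
    """Lines per second for one scan line per along-track pixel."""
    speed_ms = speed_kmh / 3.6
    return speed_ms / pixel_size_m

rate_fine = required_line_rate(30.0, 250e-6)      # 30 km/h, 250 um/pixel
rate_coarse = required_line_rate(120.0, 1e-3)     # 120 km/h, 1 mm/pixel
```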

  1. Dynamically re-configurable CMOS imagers for an active vision system

    NASA Technical Reports Server (NTRS)

    Yang, Guang (Inventor); Pain, Bedabrata (Inventor)

    2005-01-01

    A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.

  2. Mortality risk score prediction in an elderly population using machine learning.

    PubMed

    Rose, Sherri

    2013-03-01

    Standard practice for prediction often relies on parametric regression methods. Interesting new methods from the machine learning literature have been introduced in epidemiologic studies, such as random forest and neural networks. However, a priori, an investigator will not know which algorithm to select and may wish to try several. Here I apply the super learner, an ensembling machine learning approach that combines multiple algorithms into a single algorithm and returns a prediction function with the best cross-validated mean squared error. Super learning is a generalization of stacking methods. I used super learning in the Study of Physical Performance and Age-Related Changes in Sonomans (SPPARCS) to predict death among 2,066 residents of Sonoma, California, aged 54 years or more during the period 1993-1999. The super learner for predicting death (risk score) improved upon all single algorithms in the collection of algorithms, although its performance was similar to that of several algorithms. Super learner outperformed the worst algorithm (neural networks) by 44% with respect to estimated cross-validated mean squared error and had an R2 value of 0.201. The improvement of super learner over random forest with respect to R2 was approximately 2-fold. Alternatives for risk score prediction include the super learner, which can provide improved performance.
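
    The super learner idea sketched above, evaluating each candidate algorithm by cross-validation and combining them to minimize cross-validated mean squared error, can be shown with a toy. The two "base learners" here (training-set minimum and maximum) and the discrete weight grid are our illustrative stand-ins for real algorithms and the usual constrained regression step:

```python
# Toy super learner: choose the convex combination of two base learners
# that minimizes cross-validated mean squared error.

def cv_mse(y, folds, w):
    """CV mean squared error of the blend w*learner_lo + (1-w)*learner_hi."""
    errs = []
    for test_idx in folds:
        train = [y[i] for i in range(len(y)) if i not in test_idx]
        lo, hi = min(train), max(train)     # two toy base learners
        blended = w * lo + (1.0 - w) * hi
        errs.extend((y[i] - blended) ** 2 for i in test_idx)
    return sum(errs) / len(errs)

def super_learn(y, folds, grid=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the convex weight with the best cross-validated MSE."""
    return min(grid, key=lambda w: cv_mse(y, folds, w))

y = [1.0, 3.0, 1.0, 3.0, 2.0, 2.0]
folds = [{0, 1}, {2, 3}, {4, 5}]
best_w = super_learn(y, folds)      # a blend beats either learner alone
```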

  3. Changes to the Spectral Extraction Algorithm at the Third COS FUV Lifetime Position

    NASA Astrophysics Data System (ADS)

    Taylor, Joanna M.; Azalee Bostroem, K.; Debes, John H.; Ely, Justin; Hernandez, Svea; Hodge, Philip E.; Jedrzejewski, Robert I.; Lindsay, Kevin; Lockwood, Sean A.; Massa, Derck; Oliveira, Cristina M.; Penton, Steven V.; Proffitt, Charles R.; Roman-Duval, Julia; Sahnow, David J.; Sana, Hugues; Sonnentrucker, Paule

    2015-01-01

    Due to the effects of gain sag on the COS FUV microchannel plate detector, the COS FUV spectra will be moved in February 2015 to a pristine location on the detector, from Lifetime Position 2 (LP2) to LP3. The spectra will be shifted in the cross-dispersion (XD) direction by -2.5", about -31 pixels, from the original LP1. In contrast, LP2 was shifted by +3.5", about 41 pixels, from LP1. By reducing the LP3-LP1 separation compared to the LP2-LP1 separation, we achieve maximal spectral resolution at LP3 while preserving more detector area for future lifetime positions. In the current version of the COS boxcar extraction algorithm, flux is summed within a box of fixed height that is larger than the PSF, and bad pixels located anywhere within the extraction box cause the entire column to be discarded. At the new LP3 position the current extraction box will overlap with LP1 regions of low gain (pixels which have lost >5% of their sensitivity). As a result, large portions of spectra will be discarded, even though these flagged pixels will be located in the wings of the profiles and contain a negligible fraction of the total source flux. To avoid unnecessarily discarding columns affected by such pixels, an algorithm is needed that can judge whether the effects of gain-sagged pixels on the extracted flux are significant. The "two-zone" solution adopted for pipeline use was tailored specifically to the characteristics of COS FUV data: First, using a library of 1-D spectral centroid ("trace") locations, residual geometric distortions in the XD direction are removed. Next, 2-D template profiles are aligned with the observed spectral image. Encircled-energy contours are calculated, and an inner zone that contains 80% of the flux is defined, as well as an outer zone that contains 99% of the flux. With this approach, only pixels flagged as bad in the inner 80% zone will cause columns to be discarded, while flagged pixels in the outer zones do not affect extraction.
Finally, all good columns are summed in the XD direction to obtain a 1-D extracted spectrum. We present examples of the trace and profile libraries that are used in the two-zone extraction and compare the performance of the two-zone and boxcar algorithms.
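
    The column-rejection rule of the two-zone scheme can be sketched directly: a flagged (gain-sagged) pixel discards a column only when it falls inside the inner zone holding ~80% of the flux, while flags in the outer wings are tolerated because they carry a negligible share of the source flux. The array sizes and zone bounds below are illustrative:

```python
# Two-zone column rejection: discard a column only if a bad pixel lies
# within the inner (80%-flux) zone; bad pixels in the outer wings are
# tolerated.

def good_columns(flags, inner_zone):
    """flags: 2-D list [row][col], True where a pixel is bad.
    inner_zone: (row_lo, row_hi) inclusive bounds of the 80%-flux zone.
    Returns the column indices kept for the 1-D extraction."""
    row_lo, row_hi = inner_zone
    keep = []
    for col in range(len(flags[0])):
        bad_in_core = any(flags[row][col] for row in range(row_lo, row_hi + 1))
        if not bad_in_core:
            keep.append(col)
    return keep

# 5 detector rows x 6 columns; rows 1..3 form the inner (80%) zone.
flags = [
    [False, True,  False, False, False, False],   # row 0: outer wing
    [False, False, False, True,  False, False],   # row 1: inner zone
    [False, False, False, False, False, False],   # row 2: inner zone
    [False, False, False, False, False, False],   # row 3: inner zone
    [False, False, True,  False, False, False],   # row 4: outer wing
]
kept = good_columns(flags, inner_zone=(1, 3))
```

Under the boxcar scheme, columns 1, 2, and 3 would all be discarded; the two-zone rule discards only column 3, whose flagged pixel sits in the inner zone.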

  4. Development of multi-sensor global cloud and radiance composites for earth radiation budget monitoring from DSCOVR

    NASA Astrophysics Data System (ADS)

    Khlopenkov, Konstantin; Duda, David; Thieman, Mandana; Minnis, Patrick; Su, Wenying; Bedka, Kristopher

    2017-10-01

    The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low earth orbit and geostationary satellite imagers have to be co-located with EPIC pixels to provide scene identification in order to select anisotropic directional models needed to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-km resolution. An aggregated rating is employed to incorporate several factors and to select the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into EPIC-view domain by convolving composite pixels with the EPIC point spread function defined with a half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day with a temporal lag of under 2 hours in over 95% of the globe.
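
    The aggregated-rating merge can be sketched as a scoring function over candidate imager observations; the factors below (time difference, viewing zenith, pixel size) and their weights are illustrative assumptions, not the published rating:

    ```python
    def select_best_observation(candidates, epic_time):
        """Sketch of an aggregated-rating merge. Each candidate is a dict with:
          'time'       : observation time (seconds)
          'view_zenith': viewing zenith angle (deg); lower is better
          'resolution' : nominal pixel size (km); lower is better
        Returns the candidate with the highest aggregated rating.
        """
        def rating(c):
            # Penalize time difference from the EPIC measurement, oblique views,
            # and coarse pixels; the weights are purely illustrative.
            dt = abs(c['time'] - epic_time) / 3600.0  # hours
            return -(2.0 * dt + 0.05 * c['view_zenith'] + 0.5 * c['resolution'])
        return max(candidates, key=rating)
    ```

    In the real composite, a rating of this kind is evaluated per 5-km grid cell so that the observation nearest in time (and best in geometry) wins at every location.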

  5. Development of Multi-Sensor Global Cloud and Radiance Composites for Earth Radiation Budget Monitoring from DSCOVR

    NASA Technical Reports Server (NTRS)

    Khlopenkov, Konstantin; Duda, David; Thieman, Mandana; Minnis, Patrick; Su, Wenying; Bedka, Kristopher

    2017-01-01

    The Deep Space Climate Observatory (DSCOVR) enables analysis of the daytime Earth radiation budget via the onboard Earth Polychromatic Imaging Camera (EPIC) and National Institute of Standards and Technology Advanced Radiometer (NISTAR). Radiance observations and cloud property retrievals from low earth orbit and geostationary satellite imagers have to be co-located with EPIC pixels to provide scene identification in order to select anisotropic directional models needed to calculate shortwave and longwave fluxes. A new algorithm is proposed for optimal merging of selected radiances and cloud properties derived from multiple satellite imagers to obtain seamless global hourly composites at 5-kilometer resolution. An aggregated rating is employed to incorporate several factors and to select the best observation at the time nearest to the EPIC measurement. Spatial accuracy is improved using inverse mapping with gradient search during reprojection and bicubic interpolation for pixel resampling. The composite data are subsequently remapped into EPIC-view domain by convolving composite pixels with the EPIC point spread function (PSF) defined with a half-pixel accuracy. PSF-weighted average radiances and cloud properties are computed separately for each cloud phase. The algorithm has demonstrated contiguous global coverage for any requested time of day with a temporal lag of under 2 hours in over 95 percent of the globe.

  6. Super resolution reconstruction of μ-CT image of rock sample using neighbour embedding algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhu; Rahman, Sheik S.; Arns, Christoph H.

    2018-03-01

    X-ray computed tomography (μ-CT) is considered to be the most effective way to obtain the inner structure of a rock sample without destruction. However, its limited resolution hampers its ability to probe sub-micron structures, which are critical for flow transport in rock samples. In this study, we propose an innovative methodology to improve the resolution of a μ-CT image using a neighbour embedding algorithm, where low-frequency information is provided by the μ-CT image itself while high-frequency information is supplemented by a high-resolution scanning electron microscopy (SEM) image. To obtain a prior for reconstruction, a large number of image patch pairs, containing corresponding high- and low-resolution patches, are extracted from the Gaussian image pyramid generated from the SEM image. These image patch pairs contain abundant information about the tomographic evolution of local porous structures across resolution spaces. Relying on the assumption of self-similarity of the porous structure, this prior information can be used to effectively supervise the reconstruction of the high-resolution μ-CT image. The experimental results show that the proposed method achieves state-of-the-art performance.
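
    The core of the neighbour-embedding step can be sketched as a nearest-neighbour lookup in a patch library (a minimal 1-NN illustration; published neighbour embedding typically combines several neighbours with reconstruction weights):

    ```python
    import numpy as np

    def neighbour_embedding_sr(lr_patches, library_lr, library_hr):
        """Minimal sketch of neighbour-embedding super-resolution (1-NN case).

        lr_patches : (n, d) array of flattened low-resolution input patches
        library_lr : (m, d) array of low-res patches from the SEM image pyramid
        library_hr : (m, D) array of the corresponding high-res patches
        Returns (n, D) reconstructed high-res patches.
        """
        out = np.empty((lr_patches.shape[0], library_hr.shape[1]))
        for i, p in enumerate(lr_patches):
            # Nearest neighbour in the low-resolution patch space ...
            j = np.argmin(((library_lr - p) ** 2).sum(axis=1))
            # ... supplies the matching high-frequency detail.
            out[i] = library_hr[j]
        return out
    ```

    The self-similarity assumption is what justifies reusing SEM-derived patch pairs to supervise μ-CT reconstruction.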

  7. Ion-ion coincidence imaging at high event rate using an in-vacuum pixel detector.

    PubMed

    Long, Jingming; Furch, Federico J; Durá, Judith; Tremsin, Anton S; Vallerga, John; Schulz, Claus Peter; Rouzée, Arnaud; Vrakking, Marc J J

    2017-07-07

    A new ion-ion coincidence imaging spectrometer based on a pixelated complementary metal-oxide-semiconductor detector has been developed for the investigation of molecular ionization and fragmentation processes in strong laser fields. Used as a part of a velocity map imaging spectrometer, the detection system comprises a set of microchannel plates and a Timepix detector. A fast time-to-digital converter (TDC) is used to enhance the ion time-of-flight resolution by correlating timestamps registered separately by the Timepix detector and the TDC. In addition, sub-pixel spatial resolution (<6 μm) is achieved by the use of a center-of-mass centroiding algorithm. This performance is achieved while retaining a high event rate (10^4 per s). The spectrometer was characterized and used in a proof-of-principle experiment on strong field dissociative double ionization of carbon dioxide molecules (CO2), using a 400 kHz repetition rate laser system. The experimental results demonstrate that the spectrometer can detect multiple ions in coincidence, making it a valuable tool for studying the fragmentation dynamics of molecules in strong laser fields.
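
    The center-of-mass centroiding that yields the sub-pixel resolution can be sketched in a few lines (a generic intensity-weighted centroid over one event cluster, not the instrument code):

    ```python
    import numpy as np

    def com_centroid(cluster):
        """Center-of-mass centroiding over a small pixel cluster (sketch).

        cluster: 2-D array of per-pixel charge values (e.g. Timepix time-over-
        threshold) for one event. Returns the (row, col) centroid with
        sub-pixel precision.
        """
        total = cluster.sum()
        rows, cols = np.indices(cluster.shape)
        return (rows * cluster).sum() / total, (cols * cluster).sum() / total
    ```

    Weighting each pixel by its collected charge places the hit position between pixel centers, which is how resolution well below the physical pixel pitch becomes possible.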

  8. Ion-ion coincidence imaging at high event rate using an in-vacuum pixel detector

    NASA Astrophysics Data System (ADS)

    Long, Jingming; Furch, Federico J.; Durá, Judith; Tremsin, Anton S.; Vallerga, John; Schulz, Claus Peter; Rouzée, Arnaud; Vrakking, Marc J. J.

    2017-07-01

    A new ion-ion coincidence imaging spectrometer based on a pixelated complementary metal-oxide-semiconductor detector has been developed for the investigation of molecular ionization and fragmentation processes in strong laser fields. Used as a part of a velocity map imaging spectrometer, the detection system comprises a set of microchannel plates and a Timepix detector. A fast time-to-digital converter (TDC) is used to enhance the ion time-of-flight resolution by correlating timestamps registered separately by the Timepix detector and the TDC. In addition, sub-pixel spatial resolution (<6 μm) is achieved by the use of a center-of-mass centroiding algorithm. This performance is achieved while retaining a high event rate (10^4 per s). The spectrometer was characterized and used in a proof-of-principle experiment on strong field dissociative double ionization of carbon dioxide molecules (CO2), using a 400 kHz repetition rate laser system. The experimental results demonstrate that the spectrometer can detect multiple ions in coincidence, making it a valuable tool for studying the fragmentation dynamics of molecules in strong laser fields.

  9. Detector motion method to increase spatial resolution in photon-counting detectors

    NASA Astrophysics Data System (ADS)

    Lee, Daehee; Park, Kyeongjin; Lim, Kyung Taek; Cho, Gyuseong

    2017-03-01

    Medical imaging requires high spatial resolution to identify fine lesions. Photon-counting detectors in medical imaging have recently been rapidly replacing energy-integrating detectors due to the former's high spatial resolution, high efficiency and low noise. Spatial resolution in a photon-counting image is determined by the pixel size; therefore, the smaller the pixel size, the higher the spatial resolution that can be obtained in an image. However, reducing the pixel size requires a detector redesign, and an expensive fine process is needed to integrate the signal processing unit into the reduced pixel area. Furthermore, as the pixel size decreases, charge sharing severely deteriorates spatial resolution. To increase spatial resolution, we propose a detector motion method using a large-pixel detector that is less affected by charge sharing. To verify the proposed method, we utilized a UNO-XRI photon-counting detector (1-mm CdTe, Timepix chip) at a maximum X-ray tube voltage of 80 kVp. A spatial resolution similar to that of a 55-μm-pixel image was achieved by applying the proposed method to a 110-μm-pixel detector, with a higher signal-to-noise ratio. The proposed method could be a way to increase spatial resolution without a pixel redesign when pixels suffer severely from charge sharing as pixel size is reduced.
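
    The essence of the detector motion method, sampling the scene at half-pixel offsets and merging the acquisitions onto a denser grid, can be sketched in one dimension (a simple shift-and-add interleave, an assumption rather than the paper's exact reconstruction):

    ```python
    import numpy as np

    def interleave_shifted(img_a, img_b):
        """Combine two acquisitions taken with the detector shifted by half a
        pixel into a grid with twice the sampling density (1-D sketch).
        """
        out = np.empty(img_a.size * 2)
        out[0::2] = img_a   # samples at integer pixel positions
        out[1::2] = img_b   # samples at half-pixel positions
        return out
    ```

    The large physical pixel still averages over its full aperture, but the doubled sampling density recovers spatial frequencies that a static large-pixel acquisition would alias.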

  10. Axial resolution improvement in spectral domain optical coherence tomography using a depth-adaptive maximum-a-posterior framework

    NASA Astrophysics Data System (ADS)

    Boroomand, Ameneh; Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka

    2015-03-01

    The axial resolution of Spectral Domain Optical Coherence Tomography (SD-OCT) images degrades with scanning depth due to the limited number of pixels and the pixel size of the camera, any aberrations in the spectrometer optics, and wavelength-dependent scattering and absorption in the imaged object [1]. Here we propose a novel algorithm which compensates for the blurring effect of these factors, modeled as a depth-dependent axial Point Spread Function (PSF), in SD-OCT images. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model. The aim is to compensate for the depth-dependent axial blur in SD-OCT images and simultaneously suppress the speckle noise which is inherent to all OCT images. Applying the proposed depth-dependent axial resolution enhancement technique to an OCT image of a cucumber considerably improved the axial resolution of the image, especially at higher imaging depths, and allowed for better visualization of cellular membranes and nuclei. Comparing the result of our proposed method with the conventional Lucy-Richardson deconvolution algorithm clearly demonstrates the efficiency of our proposed technique in better visualization and preservation of fine details and structures in the imaged sample, as well as better speckle noise suppression. This illustrates the potential usefulness of our proposed technique as a replacement for hardware approaches, which are often very costly and complicated.
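
    The Lucy-Richardson baseline the authors compare against can be sketched in one dimension (a generic shift-invariant implementation; the proposed MAP/SFCRF method instead handles a depth-dependent PSF):

    ```python
    import numpy as np

    def richardson_lucy(observed, psf, iterations=50):
        """1-D Richardson-Lucy deconvolution with a shift-invariant PSF.

        Multiplicative updates keep the estimate non-negative while iteratively
        sharpening it toward the deblurred signal.
        """
        estimate = np.full_like(observed, observed.mean())
        psf_flip = psf[::-1]
        for _ in range(iterations):
            blurred = np.convolve(estimate, psf, mode='same')
            ratio = observed / np.maximum(blurred, 1e-12)
            estimate = estimate * np.convolve(ratio, psf_flip, mode='same')
        return estimate
    ```

    Because a single PSF is assumed everywhere, this baseline cannot adapt to the depth-dependent blur that motivates the paper's approach.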

  11. Development of adaptive noise reduction filter algorithm for pediatric body images in a multi-detector CT

    NASA Astrophysics Data System (ADS)

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki

    2008-03-01

    Recently, several kinds of post-processing image filters which reduce the noise of computed tomography (CT) images have been proposed. However, these image filters are mostly designed for adults. Because they are not very effective for small (<20 cm) display fields of view (FOV), they cannot be used for pediatric body images (e.g., premature babies and infant children). We have developed a new noise reduction filter algorithm for pediatric body CT images. This algorithm is based on 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. Because the algorithm requires no in-plane (axial plane) processing, the in-plane spatial resolution does not change. In phantom studies, our algorithm reduced the SD by up to 40% without affecting the spatial resolution of the x-y plane or the z-axis, and improved the CNR by up to 30%. This newly developed filter algorithm will be useful for diagnosis and radiation dose reduction in pediatric body CT imaging.
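
    The z-only processing idea, filtering along z while leaving the axial plane untouched, can be illustrated with a stand-in filter (a median along z; the paper's actual nonlinear interpolation is not specified in the abstract):

    ```python
    import numpy as np

    def z_direction_filter(volume, radius=1):
        """Illustrative z-only noise filter: each output voxel is a median over
        its z-neighbours, so in-plane (x-y) resolution is untouched.

        volume: 3-D array indexed (z, y, x).
        """
        z = volume.shape[0]
        out = np.empty(volume.shape, dtype=float)
        for k in range(z):
            lo, hi = max(0, k - radius), min(z, k + radius + 1)
            out[k] = np.median(volume[lo:hi], axis=0)
        return out
    ```

    Any filter restricted to the z-axis shares the key property claimed in the abstract: the axial-plane point spread is left exactly as acquired.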

  12. MODIS Snow and Sea Ice Products

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Riggs, George A.; Salomonson, Vincent V.

    2004-01-01

    In this chapter, we describe the suite of Earth Observing System (EOS) Moderate-Resolution Imaging Spectroradiometer (MODIS) Terra and Aqua snow and sea ice products. Global, daily products, developed at Goddard Space Flight Center, are archived and distributed through the National Snow and Ice Data Center at various resolutions and on different grids useful for different communities. Snow products include binary snow cover, snow albedo, and, in the near future, the fraction of snow in a 500-m pixel. Sea ice products include ice extent determined with two different algorithms, and sea ice surface temperature. The algorithms used to develop these products are described. Both the snow and sea ice products, available since February 24, 2000, are useful for modelers. Validation of the products is also discussed.

  13. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; hide

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded-mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel-illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the light curves is not preserved.

  14. Super-resolved Parallel MRI by Spatiotemporal Encoding

    PubMed Central

    Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio

    2016-01-01

    Recent studies described an alternative “ultrafast” scanning method based on spatiotemporal encoding (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. An important capability that SPEN still needs in order to provide a competitive acquisition alternative is the exploitation of parallel imaging algorithms, without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple, partial fields-of-view, together with a new algorithm merging a Super-Resolved SPEN image reconstruction with SENSE multiple-receiver methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromising the ensuing spatial resolution, SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms was explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case upon single-scan imaging near tissue/air interfaces. PMID:24120293

  15. Deep learning massively accelerates super-resolution localization microscopy.

    PubMed

    Ouyang, Wei; Aristov, Andrey; Lelek, Mickaël; Hao, Xian; Zimmer, Christophe

    2018-06-01

    The speed of super-resolution microscopy methods based on single-molecule localization, for example, PALM and STORM, is limited by the need to record many thousands of frames with a small number of observed molecules in each. Here, we present ANNA-PALM, a computational strategy that uses artificial neural networks to reconstruct super-resolution views from sparse, rapidly acquired localization images and/or widefield images. Simulations and experimental imaging of microtubules, nuclear pores, and mitochondria show that high-quality, super-resolution images can be reconstructed from up to two orders of magnitude fewer frames than usually needed, without compromising spatial resolution. Super-resolution reconstructions are even possible from widefield images alone, though adding localization data improves image quality. We demonstrate super-resolution imaging of >1,000 fields of view containing >1,000 cells in ∼3 h, yielding an image spanning spatial scales from ∼20 nm to ∼2 mm. The drastic reduction in acquisition time and sample irradiation afforded by ANNA-PALM enables faster and gentler high-throughput and live-cell super-resolution imaging.

  16. A cloud mask methodology for high resolution remote sensing data combining information from high and medium resolution optical sensors

    NASA Astrophysics Data System (ADS)

    Sedano, Fernando; Kempeneers, Pieter; Strobl, Peter; Kucera, Jan; Vogt, Peter; Seebach, Lucia; San-Miguel-Ayanz, Jesús

    2011-09-01

    This study presents a novel cloud masking approach for high resolution remote sensing images in the context of land cover mapping. As an advantage over traditional methods, the approach does not rely on thermal bands and it is applicable to images from most high resolution earth observation remote sensing sensors. The methodology couples pixel-based seed identification and object-based region growing. The seed identification stage relies on pixel value comparison between high resolution images and cloud free composites at lower spatial resolution from almost simultaneously acquired dates. The methodology was tested taking SPOT4-HRVIR, SPOT5-HRG and IRS-LISS III as high resolution images and cloud free MODIS composites as reference images. The selected scenes included a wide range of cloud types and surface features. The resulting cloud masks were evaluated through visual comparison. They were also compared with ad-hoc independently generated cloud masks and with the automatic cloud cover assessment algorithm (ACCA). In general the results showed an agreement in detected clouds higher than 95% for clouds larger than 50 ha. The approach produced consistent results identifying and mapping clouds of different type and size over various land surfaces including natural vegetation, agriculture land, built-up areas, water bodies and snow.

  17. The super-resolution debate

    NASA Astrophysics Data System (ADS)

    Won, Rachel

    2018-05-01

    In the quest for nanoscopy with super-resolution, consensus from the imaging community is that super-resolution is not always needed and that scientists should choose an imaging technique based on their specific application.

  18. Segmentation of remotely sensed data using parallel region growing

    NASA Technical Reports Server (NTRS)

    Tilton, J. C.; Cox, S. C.

    1983-01-01

    The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
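
    A sequential pixel-based version of the region growing described above can be sketched as follows (using only the region mean in the similarity test; the paper's criterion also uses region variance):

    ```python
    import numpy as np
    from collections import deque

    def grow_region(image, seed, max_dev=1.0):
        """Minimal pixel-based region growing: a 4-connected neighbour joins
        the region if it lies within max_dev of the current region mean.
        Returns a boolean mask of the grown region.
        """
        h, w = image.shape
        mask = np.zeros((h, w), bool)
        mask[seed] = True
        total, count = float(image[seed]), 1
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                    # Similarity test against the running region mean.
                    if abs(image[nr, nc] - total / count) <= max_dev:
                        mask[nr, nc] = True
                        total += float(image[nr, nc])
                        count += 1
                        queue.append((nr, nc))
        return mask
    ```

    The parallel formulation in the paper instead lets all candidate merges be evaluated simultaneously across the processor array at each iteration.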

  19. Global satellite composites - 20 years of evolution

    NASA Astrophysics Data System (ADS)

    Kohrs, Richard A.; Lazzara, Matthew A.; Robaidek, Jerrold O.; Santek, David A.; Knuth, Shelley L.

    2014-01-01

    For two decades, the University of Wisconsin Space Science and Engineering Center (SSEC) and the Antarctic Meteorological Research Center (AMRC) have been creating global, regional and hemispheric satellite composites. These composites have proven useful in research, operational forecasting, commercial applications and educational outreach. Using the Man computer Interactive Data Access System (McIDAS) software developed at SSEC, infrared window composites were created by combining Geostationary Operational Environmental Satellite (GOES) and polar orbiting data from the SSEC Data Center with polar data acquired at McMurdo and Palmer stations, Antarctica. Increased computer processing speed has allowed for more advanced algorithms to address the decision-making process for co-located pixels. The algorithms have evolved from a simplistic maximum brightness temperature to those that account for distance from the sub-satellite point, parallax displacement, pixel time and resolution. The composites are a state-of-the-art means for merging/mosaicking satellite imagery.

  20. Urban Density Indices Using Mean Shift-Based Upsampled Elevation Data

    NASA Astrophysics Data System (ADS)

    Charou, E.; Gyftakis, S.; Bratsolis, E.; Tsenoglou, T.; Papadopoulou, Th. D.; Vassilas, N.

    2015-04-01

    Urban density is an important factor in several fields, e.g. urban design, planning and land management. Modern remote sensors deliver ample information for the estimation of specific urban land classification classes (2D indicators), and the height of urban land classification objects (3D indicators) within an Area of Interest (AOI). In this research, two of these indicators, Building Coverage Ratio (BCR) and Floor Area Ratio (FAR), are numerically and automatically derived from high-resolution airborne RGB orthophotos and LiDAR data. In the pre-processing step the low-resolution elevation data are fused with the high-resolution optical data through a mean-shift based discontinuity-preserving smoothing algorithm. The outcome, an improved normalized digital surface model (nDSM), is upsampled elevation data with considerable improvement regarding region filling and the "straightness" of elevation discontinuities. In a following step, a Multilayer Feedforward Neural Network (MFNN) is used to classify all pixels of the AOI into building or non-building categories. For the total surface of the block and the buildings we consider the number of their pixels and the surface of the unit pixel. Comparison of the automatically derived BCR and FAR indicators with manually derived ones shows the applicability and effectiveness of the proposed methodology.
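
    The BCR/FAR computation from pixel counts can be sketched as below; the floor height used to turn mean building height into a floor count is an assumed value, not taken from the paper:

    ```python
    def density_indices(building_pixels, block_pixels, pixel_area,
                        mean_height, floor_height=3.0):
        """Sketch of BCR/FAR from pixel counts.

        BCR = building footprint area / block area
        FAR = BCR * number of floors, with floors estimated from the mean
        building height and an assumed floor height (3 m here).
        """
        building_area = building_pixels * pixel_area
        block_area = block_pixels * pixel_area
        bcr = building_area / block_area
        floors = max(1, round(mean_height / floor_height))
        return bcr, bcr * floors
    ```

    The footprint pixel counts come from the MFNN building/non-building classification, while the mean height comes from the upsampled nDSM.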

  1. Using triple gamma coincidences with a pixelated semiconductor Compton-PET scanner: a simulation study

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; Chmeissani, M.

    2016-01-01

    The Voxel Imaging PET (VIP) Pathfinder project presents a novel design using pixelated semiconductor detectors for nuclear medicine applications to achieve the intrinsic image quality limits set by physics. The conceptual design can be extended to a Compton gamma camera. The use of a pixelated CdTe detector with voxel sizes of 1 × 1 × 2 mm^3 guarantees optimal energy and spatial resolution. However, the limited time resolution of semiconductor detectors makes it impossible to use Time Of Flight (TOF) with VIP PET. TOF is used in order to improve the signal to noise ratio (SNR) by using only the most probable portion of the Line-Of-Response (LOR) instead of its entire length. To overcome the limitation of CdTe time resolution, we present in this article a simulation study using β+-γ emitting isotopes with a Compton-PET scanner. When the β+ annihilates with an electron it produces two gammas which produce a LOR in the PET scanner, while the additional gamma, when scattered in the scatter detector, provides a Compton cone that intersects with the aforementioned LOR. The intersection indicates, within a few mm of uncertainty along the LOR, the origin of the beta-gamma decay. Hence, one can limit the part of the LOR used by the image reconstruction algorithm.

  2. Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.

    PubMed

    Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas

    2016-03-01

    Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.

  3. A coarse-to-fine approach for medical hyperspectral image classification with sparse representation

    NASA Astrophysics Data System (ADS)

    Chang, Lan; Zhang, Mengmeng; Li, Wei

    2017-10-01

    A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique with different scales is employed to exploit the edges of the input image, where coarse super-pixel patches provide global classification information while fine ones further provide detail. Unlike a common RGB image, a hyperspectral image has multiple bands, allowing the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns a label to the testing samples in one local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distributions of medical hyperspectral imagery. Finally, the classification results for different sizes of super-pixel are fused by a fusion strategy, offering at least two benefits: (1) the final result is clearly superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.
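
    The residual-based decision rule of SRC can be sketched with a simplification: instead of a global l1 sparse coding over all training samples, each class reconstructs the test spectrum by least squares and the smallest residual wins:

    ```python
    import numpy as np

    def src_classify(test, train, labels):
        """Simplified SRC decision rule (per-class least-squares residual;
        the full method solves an l1-sparse coding problem over all classes).

        test   : (d,) test spectrum
        train  : (d, n) matrix of training spectra (columns)
        labels : (n,) class label per training column
        """
        best, best_res = None, np.inf
        for cls in np.unique(labels):
            A = train[:, labels == cls]
            # Reconstruct the test spectrum from this class's samples only.
            coef, *_ = np.linalg.lstsq(A, test, rcond=None)
            res = np.linalg.norm(test - A @ coef)
            if res < best_res:
                best, best_res = cls, res
        return best
    ```

    In the paper this per-patch decision is made for every super-pixel at each segmentation scale, and the per-scale label maps are then fused.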

  4. Ambiguity of Quality in Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Leptoukh, Greg

    2010-01-01

    This slide presentation reviews some of the issues in quality of remote sensing data. Data "quality" is used in several different contexts in remote sensing data, with quite different meanings. At the pixel level, quality typically refers to a quality control process exercised by the processing algorithm, not an explicit declaration of accuracy or precision. File level quality is usually a statistical summary of the pixel-level quality but is of doubtful use for scenes covering large areal extents. Quality at the dataset or product level, on the other hand, usually refers to how accurately the dataset is believed to represent the physical quantities it purports to measure. This assessment often bears but an indirect relationship at best to pixel level quality. In addition to ambiguity at different levels of granularity, ambiguity is endemic within levels. Pixel-level quality terms vary widely, as do recommendations for use of these flags. At the dataset/product level, quality for low-resolution gridded products is often extrapolated from validation campaigns using high spatial resolution swath data, a suspect practice at best. Making use of quality at all levels is complicated by the dependence on application needs. We will present examples of the various meanings of quality in remote sensing data and possible ways forward toward a more unified and usable quality framework.

  5. Resolution recovery for Compton camera using origin ensemble algorithm.

    PubMed

    Andreyev, A; Celler, A; Ozsahin, I; Sitek, A

    2016-08-01

    Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. 
The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.

  6. GIFTS SM EDU Level 1B Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) SensorModule (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are first corrected for the detector nonlinearity distortion, followed by the complex filtering and decimation procedure. In the second block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue with the calibration, we compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We now can estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. The correction schemes that compensate for the fore-optics offsets and off-axis effects are also implemented. In the third block, we developed an efficient method of generating pixel performance assessments. 
In addition, a random pixel selection scheme is designed based on the pixel performance evaluation. Finally, in the fourth block, the single pixel algorithms are applied to the entire FPA.
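
    The phase-correction step in the second block can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (the phase is estimated from the full-resolution spectrum rather than from a short double-sided interferogram segment, as a practical Mertz-style correction would do):

```python
import numpy as np

def phase_correct(complex_ifg):
    """Minimal phase-correction sketch: rotate each spectral sample by
    its estimated phase so the signal lands in the real part and the
    imaginary part retains only noise. (Simplified: a practical Mertz
    correction estimates a low-resolution phase from a short
    double-sided interferogram segment.)"""
    spectrum = np.fft.fft(complex_ifg)
    phase = np.angle(spectrum)              # phase estimate
    return spectrum * np.exp(-1j * phase)   # rotate onto the real axis

# Synthetic example: a Gaussian spectral band with a smooth phase error.
n = 256
nu = np.arange(n)
mag = np.exp(-((nu - 64) / 8.0) ** 2)            # true spectral magnitude
phase_err = 0.3 * np.sin(2 * np.pi * nu / n)     # instrument phase error
ifg = np.fft.ifft(mag * np.exp(1j * phase_err))  # complex interferogram
corrected = phase_correct(ifg)
```

    After correction the real part recovers the magnitude spectrum and the imaginary part vanishes up to numerical noise, mirroring the abstract's observation that the imaginary part of the corrected spectrum contains only the noise component.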

  7. Pre-Launch Performance Assessment of the VIIRS Land Surface Temperature Environmental Data Record

    NASA Astrophysics Data System (ADS)

    Hauss, B.; Ip, J.; Agravante, H.

    2009-12-01

    The Visible/Infrared Imager Radiometer Suite (VIIRS) Land Surface Temperature (LST) Environmental Data Record (EDR) provides the surface temperature of land surface including coastal and inland-water pixels at VIIRS moderate resolution (750m) during both day and night. To predict the LST under optimal conditions, the retrieval algorithm utilizes a dual split-window approach with both Short-wave Infrared (SWIR) channels at 3.70 µm (M12) and 4.05 µm (M13), and Long-wave Infrared (LWIR) channels at 10.76 µm (M15) and 12.01 µm (M16) to correct for atmospheric water vapor. Under less optimal conditions, the algorithm uses a fallback split-window approach with M15 and M16 channels. By comparison, the MODIS generalized split-window algorithm only uses the LWIR bands in the retrieval of surface temperature because of the concern for both solar contamination and large emissivity variations in the SWIR bands. In this paper, we assess whether these concerns are real and whether there is an impact on the precision and accuracy of the LST retrieval. The algorithm relies on the VIIRS Cloud Mask IP for identifying cloudy and ocean pixels, the VIIRS Surface Type EDR for identifying the IGBP land cover type for the pixels, and the VIIRS Aerosol Optical Thickness (AOT) IP for excluding pixels with AOT greater than 1.0. In this paper, we will report the pre-launch performance assessment of the LST EDR based on global synthetic data and proxy data from Terra MODIS. Results of both the split-window and dual split-window algorithms will be assessed by comparison either to synthetic "truth" or results of the MODIS retrieval. We will also show that the results of the assessment with proxy data are consistent with those obtained using the global synthetic data.
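
    The fallback split-window retrieval has the classic linear form, sketched below. The coefficients are illustrative placeholders only (operationally they are regressed per surface type against radiative-transfer simulations), and the dual split-window variant adds analogous M12/M13 SWIR terms:

```python
import numpy as np

def split_window_lst(t15, t16, coeffs=(1.0, 1.0, 2.0)):
    """Split-window land surface temperature sketch.

    t15, t16 : brightness temperatures (K) in the M15 (10.76 um) and
               M16 (12.01 um) LWIR bands.
    coeffs   : placeholder regression coefficients (a0, a1, a2); the
               operational VIIRS values are surface-type dependent.
    """
    a0, a1, a2 = coeffs
    t15, t16 = np.asarray(t15, float), np.asarray(t16, float)
    # The M15-M16 difference term corrects for atmospheric water vapour.
    return a0 + a1 * t15 + a2 * (t15 - t16)
```

    The design rationale: a moister atmosphere depresses the 12 µm brightness temperature more than the 11 µm one, so the band difference restores the absorbed surface signal.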

  8. Mediterranean Land Use and Land Cover Classification Assessment Using High Spatial Resolution Data

    NASA Astrophysics Data System (ADS)

    Elhag, Mohamed; Boteva, Silvena

    2016-10-01

    Landscape fragmentation is widespread in Mediterranean regions and substantially complicates several satellite image classification methods. To some extent, high spatial resolution data are able to overcome such complications. To improve classification performance in Land Use Land Cover (LULC) mapping, the current research compares different classification methods using Sentinel-2 imagery as a source of high spatial resolution data. Both pixel-based and object-based classification algorithms were assessed: the pixel-based approach employs the Maximum Likelihood (ML), Artificial Neural Network (ANN), and Support Vector Machine (SVM) algorithms, while the object-based classification uses the Nearest Neighbour (NN) classifier. A Stratified Masking Process (SMP), which ranks the classes by the spectral fluctuation of the combined training and testing sites, was also implemented. An analysis of the overall and individual accuracies of the classification results of all four methods reveals that the SVM classifier was the most efficient overall, distinguishing most of the classes with the highest accuracy. The NN classifier handled the artificial surface classes well, while the agriculture, forest and semi-natural area classes were segregated most successfully by SVM. Furthermore, a comparative analysis indicates that the conventional classification method yielded better overall accuracy than the SMP method with both classifiers tested, ML and SVM.

  9. A hybrid MLP-CNN classifier for very fine resolution remotely sensed image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Ce; Pan, Xin; Li, Huapeng; Gardiner, Andy; Sargent, Isabel; Hare, Jonathon; Atkinson, Peter M.

    2018-06-01

    The contextual-based convolutional neural network (CNN) with deep architecture and the pixel-based multilayer perceptron (MLP) with shallow structure are well-recognized neural network algorithms, representing the state-of-the-art deep learning method and the classical non-parametric machine learning approach, respectively. The two algorithms, which have very different behaviours, were integrated in a concise and effective way using a rule-based decision fusion approach for the classification of very fine spatial resolution (VFSR) remotely sensed imagery. The decision fusion rules, designed primarily based on the classification confidence of the CNN, reflect the generally complementary patterns of the individual classifiers. Consequently, the proposed ensemble classifier MLP-CNN harvests the complementary results acquired from the CNN based on deep spatial feature representation and from the MLP based on spectral discrimination. Meanwhile, limitations of the CNN due to the adoption of convolutional filters, such as the uncertainty in object boundary partition and the loss of useful fine spatial resolution detail, were compensated for. The effectiveness of the ensemble MLP-CNN classifier was tested in both urban and rural areas using aerial photography together with an additional satellite sensor dataset. The MLP-CNN classifier achieved promising performance, consistently outperforming the pixel-based MLP, the spectral and textural-based MLP, and the contextual-based CNN in terms of classification accuracy. This research paves the way to effectively address the complicated problem of VFSR image classification.
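
    The fusion idea can be illustrated with a single hypothetical rule (the paper's actual rule set is more elaborate; the confidence threshold below is an assumption for illustration): trust the CNN where its class confidence is high, and fall back to the spectral MLP elsewhere.

```python
import numpy as np

def fuse(cnn_probs, mlp_probs, conf_threshold=0.8):
    """Rule-based decision fusion sketch (illustrative threshold rule).

    cnn_probs, mlp_probs : (n_pixels, n_classes) class probabilities.
    Where the CNN's maximum probability clears the threshold, keep its
    contextual prediction; otherwise use the pixel-based MLP label.
    """
    cnn_conf = cnn_probs.max(axis=-1)
    cnn_label = cnn_probs.argmax(axis=-1)
    mlp_label = mlp_probs.argmax(axis=-1)
    return np.where(cnn_conf >= conf_threshold, cnn_label, mlp_label)

# Two pixels: the CNN is confident on the first, uncertain on the second.
cnn = np.array([[0.9, 0.1], [0.55, 0.45]])
mlp = np.array([[0.2, 0.8], [0.3, 0.7]])
fused = fuse(cnn, mlp)
```

    The first pixel keeps the CNN's contextual label; the second falls back to the MLP's spectral label, which is how the ensemble recovers fine detail near object boundaries where the CNN is uncertain.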

  10. Combined multi-plane phase retrieval and super-resolution optical fluctuation imaging for 4D cell microscopy

    NASA Astrophysics Data System (ADS)

    Descloux, A.; Grußmayer, K. S.; Bostan, E.; Lukes, T.; Bouwens, A.; Sharipov, A.; Geissbuehler, S.; Mahul-Mellier, A.-L.; Lashuel, H. A.; Leutenegger, M.; Lasser, T.

    2018-03-01

    Super-resolution fluorescence microscopy provides unprecedented insight into cellular and subcellular structures. However, going `beyond the diffraction barrier' comes at a price, since most far-field super-resolution imaging techniques trade temporal for spatial super-resolution. We propose the combination of a novel label-free white light quantitative phase imaging with fluorescence to provide high-speed imaging and spatial super-resolution. The non-iterative phase retrieval relies on the acquisition of single images at each z-location and thus enables straightforward 3D phase imaging using a classical microscope. We realized multi-plane imaging using a customized prism for the simultaneous acquisition of eight planes. This allowed us to not only image live cells in 3D at up to 200 Hz, but also to integrate fluorescence super-resolution optical fluctuation imaging within the same optical instrument. The 4D microscope platform unifies the sensitivity and high temporal resolution of phase imaging with the specificity and high spatial resolution of fluorescence microscopy.

  11. Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations

    PubMed Central

    Gustafsson, Nils; Culley, Siân; Ashdown, George; Owen, Dylan M.; Pereira, Pedro Matos; Henriques, Ricardo

    2016-01-01

    Despite significant progress, high-speed live-cell super-resolution studies remain limited to specialized optical setups, generally requiring intense phototoxic illumination. Here, we describe a new analytical approach, super-resolution radial fluctuations (SRRF), provided as a fast graphics processing unit-enabled ImageJ plugin. In the most challenging data sets for super-resolution, such as those obtained in low-illumination live-cell imaging with GFP, we show that SRRF is generally capable of achieving resolutions better than 150 nm. Meanwhile, for data sets similar to those obtained in PALM or STORM imaging, SRRF achieves resolutions approaching those of standard single-molecule localization analysis. The broad applicability of SRRF and its performance at low signal-to-noise ratios allows super-resolution using modern widefield, confocal or TIRF microscopes with illumination orders of magnitude lower than methods such as PALM, STORM or STED. We demonstrate this by super-resolution live-cell imaging over timescales ranging from minutes to hours. PMID:27514992

  12. Multiple Acquisition InSAR Analysis: Persistent Scatterer and Small Baseline Approaches

    NASA Astrophysics Data System (ADS)

    Hooper, A.

    2006-12-01

    InSAR techniques that process data from multiple acquisitions enable us to form time series of deformation and also allow us to reduce error terms present in single interferograms. There are currently two broad categories of methods that deal with multiple images: persistent scatterer methods and small baseline methods. The persistent scatterer approach relies on identifying pixels whose scattering properties vary little with time and look angle. Pixels that are dominated by a single scatterer best meet these criteria; therefore, images are processed at full resolution to both increase the chance of there being only one dominant scatterer present, and to reduce the contribution from other scatterers within each pixel. In images where most pixels contain multiple scatterers of similar strength, even at the highest possible resolution, the persistent scatterer approach is less optimal, as the scattering characteristics of these pixels vary substantially with look angle. In this case, an approach that forms interferograms only from pairs of images for which the difference in look angle is small makes better sense, and resolution can be sacrificed to reduce the effects of the look angle difference by band-pass filtering. This is the small baseline approach. Existing small baseline methods depend on forming a series of multilooked interferograms and unwrapping each one individually. This approach fails to take advantage of two of the benefits of processing multiple acquisitions, however, which are usually embodied in persistent scatterer methods: the ability to find and extract the phase for single-look pixels with good signal-to-noise ratio that are surrounded by noisy pixels, and the ability to unwrap more robustly in three dimensions, the third dimension being that of time. We have developed, therefore, a new small baseline method to select individual single-look pixels that behave coherently in time, so that isolated stable pixels may be found.
After correction for various error terms, the phase values of the selected pixels are unwrapped using a new three-dimensional algorithm. We apply our small baseline method to an area in southern Iceland that includes Katla and Eyjafjallajökull volcanoes, and retrieve a time series of deformation that shows transient deformation due to intrusion of magma beneath Eyjafjallajökull. We also process the data using the Stanford method for persistent scatterers (StaMPS) for comparison.
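
    The selection of temporally coherent single-look pixels can be sketched with a simple temporal-coherence measure (a simplified stand-in for the estimator actually used in StaMPS-style processing; the threshold is illustrative):

```python
import numpy as np

def temporal_coherence(phase_residuals):
    """phase_residuals : (n_ifgs, n_pixels) residual phase (radians)
    left after removing spatially correlated contributions. A value
    near 1 means the pixel's phase is stable across the time series."""
    return np.abs(np.exp(1j * phase_residuals).mean(axis=0))

def select_stable_pixels(phase_residuals, gamma_min=0.8):
    """Keep pixels whose temporal coherence clears an illustrative
    threshold, so isolated stable pixels in noisy surroundings are
    retained rather than averaged away by multilooking."""
    return temporal_coherence(phase_residuals) >= gamma_min
```

    A pixel with near-zero residuals scores close to 1, while a pixel whose residual phase wanders over the full circle scores close to 0 and is rejected.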

  13. Solving for the Surface: An Automated Approach to THEMIS Atmospheric Correction

    NASA Astrophysics Data System (ADS)

    Ryan, A. J.; Salvatore, M. R.; Smith, R.; Edwards, C. S.; Christensen, P. R.

    2013-12-01

    Here we present the initial results of an automated atmospheric correction algorithm for the Thermal Emission Imaging System (THEMIS) instrument, whereby high spectral resolution Thermal Emission Spectrometer (TES) data are queried to generate numerous atmospheric opacity values for each THEMIS infrared image. While the pioneering methods of Bandfield et al. [2004] also used TES spectra to atmospherically correct THEMIS data, the algorithm presented here is a significant improvement because of the reduced dependency on user-defined inputs for individual images. Additionally, this technique is particularly useful for correcting THEMIS images that have captured a range of atmospheric conditions and/or surface elevations, issues that have been difficult to correct for using previous techniques. Thermal infrared observations of the Martian surface can be used to determine the spatial distribution and relative abundance of many common rock-forming minerals. This information is essential to understanding the planet's geologic and climatic history. However, the Martian atmosphere also has absorptions in the thermal infrared which complicate the interpretation of infrared measurements obtained from orbit. TES has sufficient spectral resolution (143 bands at 10 cm-1 sampling) to linearly unmix and remove atmospheric spectral end-members from the acquired spectra. THEMIS has the benefit of higher spatial resolution (~100 m/pixel vs. 3x5 km/TES-pixel) but has lower spectral resolution (8 surface sensitive spectral bands). As such, it is not possible to isolate the surface component by unmixing the atmospheric contribution from the THEMIS spectra, as is done with TES. Bandfield et al. [2004] developed a technique using atmospherically corrected TES spectra as tie-points for constant radiance offset correction and surface emissivity retrieval. 
This technique is the primary method used to correct THEMIS but is highly susceptible to inconsistent results if great care in the selection of TES spectra is not exercised. Our algorithm uses a newly populated TES database created with PostgreSQL/PostGIS geospatial software. TES pixels that meet user-defined quality criteria and that intersect a THEMIS observation of interest may be quickly retrieved using this new database. The THEMIS correction process [Bandfield et al. 2004] is then run using all TES pixels that pass an additional set of TES-THEMIS relational quality checks. The result is a spatially correlated set of atmospheric opacity values, determined from the difference between each atmospherically corrected TES pixel and the overlapping portion of the THEMIS image. The dust and ice contributions to the atmospheric opacity are estimated using known dust and ice spectral dependencies [Smith et al. 2003]. These opacity values may be used to determine atmospheric variation across the scene, from which the topography- and temperature-scaled atmospheric contribution may be calculated and removed. References: Bandfield, JL et al. [2004], JGR 109, E10008. Smith, MD et al. [2003], JGR 108, E11, 5115.

  14. Super-resolution photoacoustic microscopy using joint sparsity

    NASA Astrophysics Data System (ADS)

    Burgholzer, P.; Haltmeier, M.; Berer, T.; Leiss-Holzinger, E.; Murray, T. W.

    2017-07-01

    We present an imaging method that uses the random optical speckle patterns that naturally emerge as light propagates through strongly scattering media as a structured illumination source for photoacoustic imaging. Our approach, termed blind structured illumination photoacoustic microscopy (BSIPAM), was inspired by recent work in fluorescence microscopy where super-resolution imaging was demonstrated using multiple unknown speckle illumination patterns. We extend this concept to the multiple scattering domain using photoacoustics (PA), with the speckle pattern serving to generate ultrasound. The optical speckle pattern that emerges as light propagates through diffuse media provides structured illumination to an object placed behind a scattering wall. The photoacoustic signal produced by such illumination is detected using a focused ultrasound transducer. We demonstrate, through both simulation and experiment, that by acquiring multiple photoacoustic images, each produced by a different random and unknown speckle pattern, an image of an absorbing object can be reconstructed with a spatial resolution far exceeding that of the ultrasound transducer. We experimentally and numerically demonstrate a gain in resolution of more than a factor of two by using multiple speckle illuminations. The variations in the photoacoustic signals generated with random speckle patterns are utilized in BSIPAM using a novel reconstruction algorithm. Exploiting joint sparsity, this algorithm is capable of reconstructing the absorbing structure from measured PA signals with a resolution close to the speckle size. Another way to generate random excitation for photoacoustic imaging is to use small absorbing particles, including contrast agents, which flow through small vessels. For such a set-up, the joint sparsity arises from the fact that all the particles move in the same vessels. In that case, structured illumination is not necessary.

  15. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
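
    A solar-band threshold cloud test of the kind described (no thermal bands) can be sketched as follows; the band choice and thresholds here are hypothetical, not the paper's actual test sequence:

```python
import numpy as np

def simple_cloud_mask(r_blue, r_nir, t_bright=0.3, t_ratio=2.0):
    """Illustrative reflectance-only cloud test: clouds are bright in
    the blue band and spectrally flat, so a high NIR/blue ratio
    (typical of vegetation) argues against cloud. Thresholds are
    placeholders, not the paper's values."""
    ratio = r_nir / np.maximum(r_blue, 1e-6)
    return (r_blue > t_bright) & (ratio < t_ratio)

def mask_agreement(mask, truth):
    """Pixel-level agreement with a manual (visual) cloud mask."""
    return (mask == truth).mean()
```

    The letter's evaluation metrics map directly onto such a sketch: the scene cloud fraction is the per-scene mean of the mask, and the pixel-level score is the agreement fraction computed above.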

  16. Comparison of satellite reflectance algorithms for estimating ...

    EPA Pesticide Factsheets

    We analyzed 10 established and 4 new satellite reflectance algorithms for estimating chlorophyll-a (Chl-a) in a temperate reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense water truth collected within one hour of image acquisition to develop simple proxies for algal blooms and to facilitate portability between multispectral satellite imagers for regional algal bloom monitoring. Narrow band hyperspectral aircraft images were upscaled spectrally and spatially to simulate 5 current and near future satellite imaging systems. Established and new Chl-a algorithms were then applied to the synthetic satellite images and then compared to calibrated Chl-a water truth measurements collected from 44 sites within one hour of aircraft acquisition of the imagery. Masks based on the spatial resolution of the synthetic satellite imagery were then applied to eliminate mixed pixels including vegetated shorelines. Medium-resolution Landsat and finer resolution data were evaluated against 29 coincident water truth sites. Coarse-resolution MODIS and MERIS-like data were evaluated against 9 coincident water truth sites. Each synthetic satellite data set was then evaluated for the performance of a variety of spectrally appropriate algorithms with regard to the estimation of Chl-a concentrations against the water truth data set. The goal is to inform water resource decisions on the appropriate satellite data acquisition and processing for the es
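
    Many established reflectance algorithms in this family are simple band ratios. A minimal two-band NIR/red sketch is shown below; the gain and offset are hypothetical, since in practice they are calibrated against coincident water-truth Chl-a samples as in the study:

```python
def chl_a_band_ratio(r_nir, r_red, gain=50.0, offset=-20.0):
    """Two-band NIR/red chlorophyll-a proxy for turbid inland waters.

    r_nir, r_red : reflectances in a NIR and a red band.
    gain, offset : placeholder calibration constants; real values are
                   regressed against in-situ Chl-a measurements.
    Returns an estimated Chl-a concentration (e.g., ug/L)."""
    return gain * (r_nir / r_red) + offset
```

    The NIR/red ratio works because phytoplankton absorption depresses red reflectance while the NIR peak grows with algal biomass, so the ratio rises with Chl-a.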

  17. GENIE: a hybrid genetic algorithm for feature classification in multispectral images

    NASA Astrophysics Data System (ADS)

    Perkins, Simon J.; Theiler, James P.; Brumby, Steven P.; Harvey, Neal R.; Porter, Reid B.; Szymanski, John J.; Bloch, Jeffrey J.

    2000-10-01

    We consider the problem of pixel-by-pixel classification of a multispectral image using supervised learning. Conventional supervised classification techniques such as maximum likelihood classification and less conventional ones such as neural networks, typically base such classifications solely on the spectral components of each pixel. It is easy to see why: the color of a pixel provides a nice, bounded, fixed dimensional space in which these classifiers work well. It is often the case, however, that spectral information alone is not sufficient to correctly classify a pixel. Maybe spatial neighborhood information is required as well. Or maybe the raw spectral components do not themselves make for easy classification, but some arithmetic combination of them would. In either of these cases we have the problem of selecting suitable spatial, spectral or spatio-spectral features that allow the classifier to do its job well. The number of all possible such features is extremely large. How can we select a suitable subset? We have developed GENIE, a hybrid learning system that combines a genetic algorithm that searches a space of image processing operations for a set that can produce suitable feature planes, and a more conventional classifier which uses those feature planes to output a final classification. In this paper we show that the use of a hybrid GA provides significant advantages over using either a GA alone or more conventional classification methods alone. We present results using high-resolution IKONOS data, looking for regions of burned forest and for roads.

  18. New DTM Extraction Approach from Airborne Images Derived Dsm

    NASA Astrophysics Data System (ADS)

    Mousa, Y. A.; Helmholz, P.; Belton, D.

    2017-05-01

    In this work, a new filtering approach is proposed for fully automatic Digital Terrain Model (DTM) extraction from Digital Surface Models (DSMs) derived from very high resolution airborne images. Our approach enhances the existing Multi-directional and Slope Dependent (MSD) DTM extraction algorithm by proposing parameters that are more reliable for the selection of ground pixels and the pixelwise classification. To achieve this, four main steps are implemented: Firstly, 8 well-distributed scanlines are used to search for local minima as ground points within a pre-defined filtering window. These selected ground points are stored with their positions on a 2D surface to create a network of ground points. Then, an initial DTM is created using an interpolation method to fill the gaps in the 2D surface. Afterwards, a pixel-to-pixel comparison between the initial DTM and the original DSM is performed, classifying ground and non-ground pixels by applying a vertical height threshold. Finally, the pixels classified as non-ground are removed and the remaining holes are filled. The approach is evaluated using the Vaihingen benchmark dataset provided by the ISPRS working group III/4. The evaluation includes the comparison of our approach, denoted as the Network of Ground Points (NGPs) algorithm, with the DTM created based on MSD as well as a reference DTM generated from LiDAR data. The results show that our proposed approach outperforms the MSD approach.
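
    The four steps can be condensed into a sketch (a deliberate simplification of the NGPs pipeline: one local minimum per window stands in for the 8-scanline search, and nearest-neighbour fill stands in for the interpolation):

```python
import numpy as np

def extract_dtm(dsm, window=3, height_thresh=2.0):
    """Condensed DTM-filtering sketch, not the full NGPs pipeline.

    1. Take the local minimum of each window as a ground seed.
    2. Fill the initial DTM from the seeds (nearest-neighbour here).
    3. Flag pixels rising more than height_thresh above it as
       non-ground (buildings, vegetation)."""
    h, w = dsm.shape
    initial = np.empty_like(dsm, dtype=float)
    for i in range(0, h, window):
        for j in range(0, w, window):
            block = dsm[i:i + window, j:j + window]
            initial[i:i + window, j:j + window] = block.min()
    non_ground = (dsm - initial) > height_thresh
    return initial, non_ground
```

    On a flat DSM with one raised pixel, only that pixel is flagged as non-ground; in the full algorithm, the flagged pixels are then removed and the holes re-interpolated.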

  19. High-resolution three-dimensional imaging radar

    NASA Technical Reports Server (NTRS)

    Cooper, Ken B. (Inventor); Chattopadhyay, Goutam (Inventor); Siegel, Peter H. (Inventor); Dengler, Robert J. (Inventor); Schlecht, Erich T. (Inventor); Mehdi, Imran (Inventor); Skalare, Anders J. (Inventor)

    2010-01-01

    A three-dimensional imaging radar operating at high frequency, e.g., 670 GHz, is disclosed. The active target illumination inherent in radar solves the problem of low signal power and narrow-band detection by using submillimeter heterodyne mixer receivers. A submillimeter imaging radar may use low phase-noise synthesizers and a fast chirper to generate a frequency-modulated continuous-wave (FMCW) waveform. Three-dimensional images are generated through range information derived for each pixel scanned over a target. A peak finding algorithm may be used in processing for each pixel to differentiate material layers of the target. Improved focusing is achieved through a compensation signal sampled from a point source calibration target and applied to received signals from active targets prior to FFT-based range compression to extract and display high-resolution target images. Such an imaging radar has particular application in detecting concealed weapons or contraband.
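
    The per-pixel range processing reduces to finding the beat-note frequency of the de-chirped FMCW return, which maps linearly to range. A sketch with illustrative parameters (assumptions, not the instrument's actual values):

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_signal, bandwidth, chirp_time, fs):
    """FMCW range-compression sketch: FFT the de-chirped signal, pick
    the spectral peak (the 'peak finding' step), and convert the beat
    frequency to range via R = f_beat * c * T / (2 * B)."""
    n = len(beat_signal)
    spectrum = np.abs(np.fft.rfft(beat_signal * np.hanning(n)))
    f_beat = np.fft.rfftfreq(n, d=1.0 / fs)[spectrum.argmax()]
    return f_beat * C * chirp_time / (2.0 * bandwidth)

# A target at 2 m with a 30 GHz chirp over 1 ms produces a 400 kHz beat.
fs, T, B = 1.0e6, 1.0e-3, 30.0e9
t = np.arange(int(fs * T)) / fs
beat = np.cos(2 * np.pi * 400.0e3 * t)
```

    A layered target yields several beat-frequency peaks per pixel, which is how a peak-finding step can separate the material layers mentioned in the abstract.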

  20. Edge detection for optical synthetic aperture based on deep neural network

    NASA Astrophysics Data System (ADS)

    Tan, Wenjie; Hui, Mei; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin

    2017-09-01

    Synthetic aperture optics systems can meet the demand for next-generation space telescopes that are lighter, larger and foldable. However, the boundaries of segmented aperture systems are much more complex than those of a whole aperture. More edge regions mean more imaging edge pixels, which are often mixed and discretized. In order to achieve high-resolution imaging, it is necessary to identify the gaps between the sub-apertures and the edges of the projected fringes. In this work, we introduce a Deep Neural Network algorithm for the edge detection of optical synthetic aperture imaging. According to the detection needs, we constructed image sets by experiments and simulations. Based on MatConvNet, a MATLAB toolbox, we ran the neural network, trained it on the training image set and tested its performance on the validation set. The training was stopped when the test error on the validation set stopped declining. Given an input image, it is scanned pixel by pixel, and the neighborhood area around each pixel is fed through the trained multi-hidden-layer network. The network output judges whether the center of the input block lies on an edge of the fringes. We experimented with various pre-processing and post-processing techniques to reveal their influence on edge detection performance. Compared with the traditional algorithms or their improvements, our method makes its decision on a much larger neighborhood, and is more global and comprehensive. Experiments on more than 2,000 images are also given to prove that our method outperforms classical algorithms in optical image-based edge detection.

  1. Vectorial mask optimization methods for robust optical lithography

    NASA Astrophysics Data System (ADS)

    Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.

    2012-10-01

    Continuous shrinkage of critical dimension in an integrated circuit impels the development of resolution enhancement techniques for low k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for the process variations. However, the lithography systems with larger-NA (NA>0.6) are predominant for current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.
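
    The steepest-descent loop at the core of such algorithms can be sketched in one dimension, with a scalar blur kernel standing in for the full vector imaging model (a deliberate simplification; the paper's forward model is vectorial and folds in defocus, dose and aberration variations):

```python
import numpy as np

def optimize_mask(target, psf, steps=200, lr=0.5):
    """Steepest-descent mask-optimization sketch.

    Minimizes ||psf * m - target||^2 over the mask m, where convolution
    with `psf` is a scalar stand-in for the lithographic imaging model.
    The flipped-kernel convolution applies the adjoint of the forward
    operator, giving the exact gradient direction."""
    m = target.astype(float).copy()
    for _ in range(steps):
        aerial = np.convolve(m, psf, mode='same')             # forward model
        residual = aerial - target
        grad = np.convolve(residual, psf[::-1], mode='same')  # adjoint
        m -= lr * grad                                        # descent step
    return m
```

    Starting from the target pattern itself, the loop pre-distorts the mask so that the blurred aerial image matches the target better than an unoptimized mask would.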

  2. Efficient Spatiotemporal Clutter Rejection and Nonlinear Filtering-based Dim Resolved and Unresolved Object Tracking Algorithms

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.

    2013-09-01

    We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. 
We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.

  3. Pluto Close-up, Now in Color

    NASA Image and Video Library

    2015-12-10

    This enhanced color mosaic combines some of the sharpest views of Pluto that NASA's New Horizons spacecraft obtained during its July 14 flyby. The pictures are part of a sequence taken near New Horizons' closest approach to Pluto, with resolutions of about 250-280 feet (77-85 meters) per pixel -- revealing features smaller than half a city block on Pluto's surface. Lower resolution color data (at about 2,066 feet, or 630 meters, per pixel) were added to create this new image. The images form a strip 50 miles (80 kilometers) wide, trending (top to bottom) from the edge of "badlands" northwest of the informally named Sputnik Planum, across the al-Idrisi mountains, onto the shoreline of Pluto's "heart" feature, and just into its icy plains. They combine pictures from the telescopic Long Range Reconnaissance Imager (LORRI) taken approximately 15 minutes before New Horizons' closest approach to Pluto -- from a range of only 10,000 miles (17,000 kilometers) -- with color data (in near-infrared, red and blue) gathered by the Ralph/Multispectral Visible Imaging Camera (MVIC) 25 minutes before the LORRI pictures. The wide variety of cratered, mountainous and glacial terrains seen here gives scientists and the public alike a breathtaking, super-high-resolution color window into Pluto's geology. The border between the relatively smooth Sputnik Planum ice sheet and the pitted area is visible, with a series of hills forming slightly inside this unusual "shoreline." http://photojournal.jpl.nasa.gov/catalog/PIA20213

  4. Field-portable lensfree tomographic microscope.

    PubMed

    Isikman, Serhan O; Bishara, Waheb; Sikora, Uzair; Yaglidere, Oguzhan; Yeah, John; Ozcan, Aydogan

    2011-07-07

We present a field-portable lensfree tomographic microscope, which can achieve sectional imaging of a large volume (∼20 mm³) on a chip with an axial resolution of <7 μm. In this compact tomographic imaging platform (weighing only ∼110 grams), 24 light-emitting diodes (LEDs) that are each butt-coupled to a fibre-optic waveguide are controlled through a cost-effective micro-processor to sequentially illuminate the sample from different angles to record lensfree holograms of the sample that is placed on the top of a digital sensor array. In order to generate pixel super-resolved (SR) lensfree holograms and hence digitally improve the achievable lateral resolution, multiple sub-pixel shifted holograms are recorded at each illumination angle by electromagnetically actuating the fibre-optic waveguides using compact coils and magnets. These SR projection holograms obtained over an angular range of ±50° are rapidly reconstructed to yield projection images of the sample, which can then be back-projected to compute tomograms of the objects on the sensor-chip. The performance of this compact and light-weight lensfree tomographic microscope is validated by imaging micro-beads of different dimensions as well as a Hymenolepis nana egg, which is an infectious parasitic flatworm. Achieving a decent three-dimensional spatial resolution, this field-portable on-chip optical tomographic microscope might provide a useful toolset for telemedicine and high-throughput imaging applications in resource-poor settings. This journal is © The Royal Society of Chemistry 2011
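The record's holographic reconstruction pipeline is far more involved than any toy example, but the core idea behind pixel super-resolution from sub-pixel shifted frames can be sketched in its simplest form (shift-and-add interleaving, with a simulated scene rather than holograms):

```python
import numpy as np

def shift_and_add(frames, offsets, factor):
    """Interleave sub-pixel-shifted low-res frames onto a high-res
    grid: in this idealized setting each LR frame samples one
    distinct HR sub-grid, so interleaving recovers the HR scene."""
    h, w = frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    for frame, (dy, dx) in zip(frames, offsets):
        hr[dy::factor, dx::factor] = frame
    return hr

# Simulate: a known HR scene decimated at the 4 sub-pixel offsets.
rng = np.random.default_rng(0)
scene = rng.random((8, 8))
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [scene[dy::2, dx::2] for dy, dx in offsets]
recovered = shift_and_add(frames, offsets, factor=2)
```

Real pixel-SR methods, including the one in this record, must additionally handle non-integer shifts, sensor blur, and noise, typically by iterative optimization rather than direct interleaving.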

  5. Automated quantification of surface water inundation in wetlands using optical satellite imagery

    USGS Publications Warehouse

    DeVries, Ben; Huang, Chengquan; Lang, Megan W.; Jones, John W.; Huang, Wenli; Creed, Irena F.; Carroll, Mark L.

    2017-01-01

    We present a fully automated and scalable algorithm for quantifying surface water inundation in wetlands. Requiring no external training data, our algorithm estimates sub-pixel water fraction (SWF) over large areas and long time periods using Landsat data. We tested our SWF algorithm over three wetland sites across North America, including the Prairie Pothole Region, the Delmarva Peninsula and the Everglades, representing a gradient of inundation and vegetation conditions. We estimated SWF at 30-m resolution with accuracies ranging from a normalized root-mean-square-error of 0.11 to 0.19 when compared with various high-resolution ground and airborne datasets. SWF estimates were more sensitive to subtle inundated features compared to previously published surface water datasets, accurately depicting water bodies, large heterogeneously inundated surfaces, narrow water courses and canopy-covered water features. Despite this enhanced sensitivity, several sources of errors affected SWF estimates, including emergent or floating vegetation and forest canopies, shadows from topographic features, urban structures and unmasked clouds. The automated algorithm described in this article allows for the production of high temporal resolution wetland inundation data products to support a broad range of applications.
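The record does not reproduce the SWF estimator itself; a common baseline consistent with sub-pixel fraction estimation is two-endmember linear spectral unmixing. A minimal sketch, with made-up endmember reflectances, alongside the normalized RMSE metric quoted above:

```python
import numpy as np

def sub_pixel_water_fraction(pixel, water, land):
    """Estimate the water fraction f of a mixed pixel under a
    two-endmember linear mixing model:
        pixel ≈ f * water + (1 - f) * land
    solved in closed form by least squares over the bands."""
    d = water - land
    f = np.dot(pixel - land, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

def nrmse(est, ref):
    """Root-mean-square error normalized by the reference range."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((est - ref) ** 2)) / (ref.max() - ref.min()))

# Hypothetical 3-band reflectances for the two endmembers:
water = np.array([0.02, 0.01, 0.005])
land  = np.array([0.10, 0.25, 0.30])
mixed = 0.4 * water + 0.6 * land        # a 40%-inundated pixel
f = sub_pixel_water_fraction(mixed, water, land)
```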

  6. Multispectral image enhancement processing for microsat-borne imager

    NASA Astrophysics Data System (ADS)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

With the rapid development of remote sensing imaging technology, the microsatellite, a class of tiny spacecraft, has emerged over the past few years, and a good many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, microsatellites weigh less than 100 kilograms, sometimes less than 50 kilograms, making them roughly the size of a common miniature refrigerator. However, the limited room and weight budget of such satellites makes it hard to perfect the optical system design, and in most cases the unprocessed data captured by the imager on the microsatellite cannot meet application needs. Spatial resolution is the key problem: in remote sensing applications, the higher the spatial resolution of the images we obtain, the wider the range of fields in which we can apply them. Consequently, how to utilize super-resolution (SR) and image fusion to enhance image quality deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper presents a multispectral image enhancement framework for space-borne imagery that joins pan-sharpening and super-resolution techniques to address the spatial resolution shortcoming of microsatellites. We test the approach on remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.
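The record names pan-sharpening without specifying the method; as a hedged illustration, one classic scheme (the Brovey transform) can be sketched with toy data. This is not necessarily what the authors used:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-transform pan-sharpening: scale each upsampled
    multispectral band by pan / intensity, so spatial detail comes
    from the pan band while the band ratios are preserved."""
    intensity = ms.mean(axis=0)                  # per-pixel band mean
    return ms * (pan / (intensity + 1e-12))      # epsilon avoids /0

# ms: 3 bands upsampled (nearest-neighbour) to the pan grid.
ms_lowres = np.array([[[0.2, 0.4],
                       [0.6, 0.8]]] * 3)                 # 3 bands, 2x2
ms = ms_lowres.repeat(2, axis=1).repeat(2, axis=2)        # -> 3 x 4 x 4
rng = np.random.default_rng(1)
pan = ms[0] + 0.05 * rng.standard_normal((4, 4))          # sharper pan band
sharp = brovey_pansharpen(ms, pan)
```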

  7. PixelLearn

    NASA Technical Reports Server (NTRS)

    Mazzoni, Dominic; Wagstaff, Kiri; Bornstein, Benjamin; Tang, Nghia; Roden, Joseph

    2006-01-01

PixelLearn is an integrated user-interface computer program for classifying pixels in scientific images. Heretofore, training a machine-learning algorithm to classify pixels in images has been tedious and difficult. PixelLearn provides a graphical user interface that makes it faster and more intuitive, leading to more interactive exploration of image data sets. PixelLearn also provides image-enhancement controls to make it easier to see subtle details in images. PixelLearn opens images or sets of images in a variety of common scientific file formats and enables the user to interact with several supervised or unsupervised machine-learning pixel-classifying algorithms while the user continues to browse through the images. The machine-learning algorithms in PixelLearn use advanced clustering and classification methods that enable accuracy much higher than is achievable by most other software previously available for this purpose. PixelLearn is written in portable C++ and runs natively on computers running Linux, Windows, or Mac OS X.

  8. Fiber pixelated image database

    NASA Astrophysics Data System (ADS)

    Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke

    2016-08-01

Imaging of physically inaccessible parts of the body such as the colon at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on the imaging fiber bundle are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited their usage and adaptability universally. A database of pixelated images is the current requirement to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help for researchers working on comb structure removal algorithms.

  9. Development of a spatio-temporal disaggregation method (DisNDVI) for generating a time series of fine resolution NDVI images

    NASA Astrophysics Data System (ADS)

    Bindhu, V. M.; Narasimhan, B.

    2015-03-01

    Normalized Difference Vegetation Index (NDVI), a key parameter in understanding the vegetation dynamics, has high spatial and temporal variability. However, continuous monitoring of NDVI is not feasible at fine spatial resolution (<60 m) owing to the long revisit time needed by the satellites to acquire the fine spatial resolution data. Further, the study attains significance in the case of humid tropical regions of the earth, where the prevailing atmospheric conditions restrict availability of fine resolution cloud free images at a high temporal frequency. As an alternative to the lack of high resolution images, the current study demonstrates a novel disaggregation method (DisNDVI) which integrates the spatial information from a single fine resolution image and temporal information in terms of crop phenology from time series of coarse resolution images to generate estimates of NDVI at fine spatial and temporal resolution. The phenological variation of the pixels captured at the coarser scale provides the basis for relating the temporal variability of the pixel with the NDVI available at fine resolution. The proposed methodology was tested over a 30 km × 25 km spatially heterogeneous study area located in the south of Tamil Nadu, India. The robustness of the algorithm was assessed by an independent comparison of the disaggregated NDVI and observed NDVI obtained from concurrent Landsat ETM+ imagery. The results showed good spatial agreement across the study area dominated with agriculture and forest pixels, with a root mean square error of 0.05. The validation done at the coarser scale showed that disaggregated NDVI spatially averaged to 240 m compared well with concurrent MODIS NDVI at 240 m (R2 > 0.8). The validation results demonstrate the effectiveness of DisNDVI in improving the spatial and temporal resolution of NDVI images for utility in fine scale hydrological applications such as crop growth monitoring and estimation of evapotranspiration.
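The DisNDVI model itself is not reproduced in this record; as a hedged sketch, one simple way to project coarse-pixel temporal change onto a fine grid is a multiplicative change factor per parent coarse pixel (the scale factor, values, and the multiplicative model are illustrative assumptions):

```python
import numpy as np

def disaggregate_ndvi(fine_t0, coarse_t0, coarse_t, scale):
    """Project the temporal NDVI change observed at coarse scale
    onto the fine grid: each fine pixel keeps its t0 spatial
    pattern but follows the phenological trajectory of its parent
    coarse pixel (a simple multiplicative-change model)."""
    change = coarse_t / coarse_t0                     # per coarse pixel
    change_fine = change.repeat(scale, 0).repeat(scale, 1)
    return fine_t0 * change_fine

fine_t0 = np.array([[0.2, 0.3],
                    [0.4, 0.5]])                      # fine NDVI at t0
coarse_t0 = np.array([[fine_t0.mean()]])              # one parent pixel
coarse_t  = coarse_t0 * 1.5                           # NDVI grew 50% by t
fine_t = disaggregate_ndvi(fine_t0, coarse_t0, coarse_t, scale=2)
```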

  10. Compositing multitemporal remote sensing data sets

    USGS Publications Warehouse

    Qi, J.; Huete, A.R.; Hood, J.; Kerr, Y.

    1993-01-01

To eliminate cloud- and atmosphere-affected pixels, multitemporal remote sensing data sets are composited by selecting the maximum value of the normalized difference vegetation index (NDVI) within a compositing period. The NDVI classifier, however, is strongly affected by surface type and anisotropic properties, sensor viewing geometries, and atmospheric conditions; consequently, the composited multitemporal remote sensing data contain substantial noise from these external effects. To improve the accuracy of compositing products, two key approaches can be taken: one is to refine the compositing classifier (NDVI) and the other is to improve existing compositing algorithms. In this project, an alternative classifier was developed and an alternative pixel selection criterion was proposed for compositing. The new classifier and the alternative compositing algorithm were applied to an advanced very high resolution radiometer data set of different biome types in the United States. The results were compared with the maximum value compositing and the best index slope extraction algorithms. The new approaches greatly reduced the high-frequency noise related to the external factors and retained more reliable data. The results suggest that the geometric-optical canopy properties of specific biomes may need to be considered in compositing. Limitations of the new approaches include the dependency of pixel selection on the length of the composite period and data discontinuity.
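The maximum-value compositing (MVC) baseline the record compares against can be sketched directly (the three-date NDVI stack below is toy data, with one "cloudy" date):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

def max_value_composite(ndvi_stack):
    """Maximum-value compositing: for each pixel keep the date with
    the highest NDVI, which tends to reject clouds and poor
    atmospheric conditions within the compositing period."""
    stack = np.asarray(ndvi_stack)
    best = stack.argmax(axis=0)                      # winning date per pixel
    composite = np.take_along_axis(stack, best[None], axis=0)[0]
    return composite, best

# Three dates over a 2x2 area; date 1 is "cloudy" (depressed NDVI):
stack = np.array([[[0.6, 0.5], [0.4, 0.7]],
                  [[0.1, 0.1], [0.1, 0.1]],
                  [[0.5, 0.6], [0.5, 0.2]]])
composite, dates = max_value_composite(stack)
```

The record's point is precisely that this per-pixel max over NDVI is a noisy classifier, motivating the alternative classifier and selection criterion.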

  11. Study of sub-pixel position resolution with time-correlated transient signals in 3D pixelated CdZnTe detectors with varying pixel sizes

    NASA Astrophysics Data System (ADS)

    Ocampo Giraldo, L.; Bolotnikov, A. E.; Camarda, G. S.; De Geronimo, G.; Fried, J.; Gul, R.; Hodges, D.; Hossain, A.; Ünlü, K.; Vernon, E.; Yang, G.; James, R. B.

    2018-03-01

We evaluated the sub-pixel position resolution achievable in large-volume CdZnTe pixelated detectors with conventional pixel patterns and for several different pixel sizes: 2.8 mm, 1.72 mm, 1.4 mm and 0.8 mm. Achieving position resolution below the physical dimensions of pixels (sub-pixel resolution) is a practical path for making high-granularity position-sensitive detectors, <100 μm, using a limited number of pixels dictated by the mechanical constraints and multi-channel readout electronics. High position sensitivity is important for improving the imaging capability of CZT gamma cameras. It also allows for making more accurate corrections of response non-uniformities caused by crystal defects, thus enabling use of standard-grade (unselected) and less expensive CZT crystals for producing large-volume position-sensitive CZT detectors feasible for many practical applications. We analyzed the digitized charge signals from 9 representative pixels and the cathode, generated using a pulsed-laser light beam focused down to 10 μm (650 nm) to scan over a selected 3 × 3 pixel area. We applied our digital pulse processing technique to the time-correlated signals captured from adjacent pixels to achieve and evaluate the capability for sub-pixel position resolution. As an example, we also demonstrated an application of 3D corrections to improve the energy resolution and positional information of the events for the tested detectors.
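The paper's transient-signal processing is more elaborate than any toy example; as context only, a signal-weighted centroid is the simplest way that charge shared over a 3 × 3 pixel neighbourhood yields a position below the pixel pitch (the signal values and pitch here are illustrative):

```python
import numpy as np

def sub_pixel_centroid(signals, pitch):
    """Estimate the interaction position below the pixel pitch from
    the amplitudes seen on a 3x3 pixel neighbourhood, using a
    signal-weighted centroid over the pixel-centre coordinates."""
    s = np.asarray(signals, float)
    ys, xs = np.mgrid[-1:2, -1:2] * pitch     # pixel-centre coords (mm)
    total = s.sum()
    return float((s * xs).sum() / total), float((s * ys).sum() / total)

# Symmetric charge sharing -> event at the central pixel's centre:
signals = [[0.1, 0.2, 0.1],
           [0.2, 1.0, 0.2],
           [0.1, 0.2, 0.1]]
x, y = sub_pixel_centroid(signals, pitch=1.72)    # 1.72 mm pixels
```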

  12. Study of sub-pixel position resolution with time-correlated transient signals in 3D pixelated CdZnTe detectors with varying pixel sizes

    DOE PAGES

    Giraldo, L. Ocampo; Bolotnikov, A. E.; Camarda, G. S.; ...

    2017-12-18

Here, we evaluated the sub-pixel position resolution achievable in large-volume CdZnTe pixelated detectors with conventional pixel patterns and for several different pixel sizes: 2.8 mm, 1.72 mm, 1.4 mm and 0.8 mm. Achieving position resolution below the physical dimensions of pixels (sub-pixel resolution) is a practical path for making high-granularity position-sensitive detectors, <100 μm, using a limited number of pixels dictated by the mechanical constraints and multi-channel readout electronics. High position sensitivity is important for improving the imaging capability of CZT gamma cameras. It also allows for making more accurate corrections of response non-uniformities caused by crystal defects, thus enabling use of standard-grade (unselected) and less expensive CZT crystals for producing large-volume position-sensitive CZT detectors feasible for many practical applications. We analyzed the digitized charge signals from 9 representative pixels and the cathode, generated using a pulsed-laser light beam focused down to 10 μm (650 nm) to scan over a selected 3×3 pixel area. We applied our digital pulse processing technique to the time-correlated signals captured from adjacent pixels to achieve and evaluate the capability for sub-pixel position resolution. As an example, we also demonstrated an application of 3D corrections to improve the energy resolution and positional information of the events for the tested detectors.

  13. Study of sub-pixel position resolution with time-correlated transient signals in 3D pixelated CdZnTe detectors with varying pixel sizes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giraldo, L. Ocampo; Bolotnikov, A. E.; Camarda, G. S.

Here, we evaluated the sub-pixel position resolution achievable in large-volume CdZnTe pixelated detectors with conventional pixel patterns and for several different pixel sizes: 2.8 mm, 1.72 mm, 1.4 mm and 0.8 mm. Achieving position resolution below the physical dimensions of pixels (sub-pixel resolution) is a practical path for making high-granularity position-sensitive detectors, <100 μm, using a limited number of pixels dictated by the mechanical constraints and multi-channel readout electronics. High position sensitivity is important for improving the imaging capability of CZT gamma cameras. It also allows for making more accurate corrections of response non-uniformities caused by crystal defects, thus enabling use of standard-grade (unselected) and less expensive CZT crystals for producing large-volume position-sensitive CZT detectors feasible for many practical applications. We analyzed the digitized charge signals from 9 representative pixels and the cathode, generated using a pulsed-laser light beam focused down to 10 μm (650 nm) to scan over a selected 3×3 pixel area. We applied our digital pulse processing technique to the time-correlated signals captured from adjacent pixels to achieve and evaluate the capability for sub-pixel position resolution. As an example, we also demonstrated an application of 3D corrections to improve the energy resolution and positional information of the events for the tested detectors.

  14. High-resolution LIDAR and ground observations of snow cover in a complex forested terrain in the Sierra Nevada - implications for optical remote sensing of seasonal snow.

    NASA Astrophysics Data System (ADS)

    Kostadinov, T. S.; Harpold, A.; Hill, R.; McGwire, K.

    2017-12-01

    Seasonal snow cover is a key component of the hydrologic regime in many regions of the world, especially those in temperate latitudes with mountainous terrain and dry summers. Such regions support large human populations which depend on the mountain snowpack for their water supplies. It is thus important to quantify snow cover accurately and continuously in these regions. Optical remote-sensing methods are able to detect snow and leverage space-borne spectroradiometers with global coverage such as MODIS to produce global snow cover maps. However, snow is harder to detect accurately in mountainous forested terrain, where topography influences retrieval algorithms, and importantly - forest canopies complicate radiative transfer and obfuscate the snow. Current satellite snow cover algorithms assume that fractional snow-covered area (fSCA) under the canopy is the same as the fSCA in the visible portion of the pixel. In-situ observations and first principles considerations indicate otherwise, therefore there is a need for improvement of the under-canopy correction of snow cover. Here, we leverage multiple LIDAR overflights and in-situ observations with a distributed fiber-optic temperature sensor (DTS) to quantify snow cover under canopy as opposed to gap areas at the Sagehen Experimental Forest in the Northern Sierra Nevada, California, USA. Snow-off LIDAR overflights from 2014 are used to create a baseline high-resolution digital elevation model and classify pixels at 1 m resolution as canopy-covered or gap. Low canopy pixels are excluded from the analysis. Snow-on LIDAR overflights conducted by the Airborne Snow Observatory in 2016 are then used to classify all pixels as snow-covered or not and quantify fSCA under canopies vs. in gap areas over the Sagehen watershed. DTS observations are classified as snow-covered or not based on diel temperature fluctuations and used as validation for the LIDAR observations. 
LIDAR- and DTS-derived fSCA is also compared with retrievals from hyperspectral imaging spectroradiometer (AVIRIS) data. Initial evidence suggests that fSCA was generally lower under canopy and that overall snow cover was consequently overestimated. Implications for a canopy correction applicable to coarser-resolution sensors like MODIS are discussed, as are topography and view angle effects.
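The core comparison described above (fSCA under canopy vs. in gaps) reduces to class-conditional means over the 1 m binary masks; a minimal sketch with made-up masks:

```python
import numpy as np

def fsca_by_class(snow, canopy):
    """Fractional snow-covered area computed separately for
    canopy-covered and gap pixels from two boolean 1 m masks."""
    snow, canopy = np.asarray(snow, bool), np.asarray(canopy, bool)
    fsca_canopy = snow[canopy].mean() if canopy.any() else np.nan
    fsca_gap = snow[~canopy].mean() if (~canopy).any() else np.nan
    return float(fsca_canopy), float(fsca_gap)

# Toy 2 x 4 masks: snow-covered pixels and canopy-covered pixels.
snow   = np.array([[1, 1, 0, 1],
                   [0, 1, 1, 1]], dtype=bool)
canopy = np.array([[1, 1, 1, 0],
                   [1, 0, 0, 0]], dtype=bool)
under, gap = fsca_by_class(snow, canopy)
```

A satellite algorithm that assumes under-canopy fSCA equals visible-gap fSCA would report `gap` for both classes; the study's point is that `under` is systematically lower.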

  15. Validation of aerosol optical depth uncertainties within the ESA Climate Change Initiative

    NASA Astrophysics Data System (ADS)

    Stebel, Kerstin; Povey, Adam; Popp, Thomas; Capelle, Virginie; Clarisse, Lieven; Heckel, Andreas; Kinne, Stefan; Klueser, Lars; Kolmonen, Pekka; de Leeuw, Gerrit; North, Peter R. J.; Pinnock, Simon; Sogacheva, Larisa; Thomas, Gareth; Vandenbussche, Sophie

    2017-04-01

Uncertainty is a vital component of any climate data record as it provides the context with which to understand the quality of the data and compare it to other measurements. Therefore, pixel-level uncertainties are provided for all aerosol products that have been developed in the framework of the Aerosol_cci project within ESA's Climate Change Initiative (CCI). Validation of these estimated uncertainties is necessary to demonstrate that they provide a useful representation of the distribution of error. We propose a technique for the statistical validation of AOD (aerosol optical depth) uncertainty by comparison to high-quality ground-based observations and present results for ATSR (Along Track Scanning Radiometer) and IASI (Infrared Atmospheric Sounding Interferometer) data records. AOD at 0.55 µm and its uncertainty were calculated with three AOD retrieval algorithms using data from the ATSR instruments (ATSR-2 (1995-2002) and AATSR (2002-2012)). Pixel-level uncertainties were calculated through error propagation (ADV/ASV, ORAC algorithms) or parameterization of the error's dependence on the geophysical retrieval conditions (SU algorithm). Level 2 data are given as super-pixels of 10 km × 10 km. As validation data, we use direct-sun observations of AOD from the AERONET (AErosol RObotic NETwork) and MAN (Maritime Aerosol Network) sun-photometer networks, which are substantially more accurate than satellite retrievals. Neglecting the uncertainty in AERONET observations and possible issues with their ability to represent a satellite pixel area, the error in the retrieval can be approximated by the difference between the satellite and AERONET retrievals (herein referred to as "error"). To evaluate how well the pixel-level uncertainty represents the observed distribution of error, we look at the distribution of the ratio D between the "error" and the ATSR uncertainty. 
If uncertainties are well represented, D should be normally distributed and 68.3% of values should fall within the range [-1, +1]. A non-zero mean of D indicates the presence of residual systematic errors. If the fraction is smaller than 68%, uncertainties are underestimated; if it is larger, uncertainties are overestimated. For the three ATSR algorithms, we provide statistics and an evaluation at a global scale (separately for land and ocean/coastal regions), for high/low AOD regimes, and seasonal and regional statistics (e.g. Europe, N-Africa, East-Asia, N-America). We assess the long-term stability of the uncertainty estimates over the 17-year time series, and the consistency between ATSR-2 and AATSR results (during their period of overlap). Furthermore, we explore adapting the uncertainty validation concept to the IASI datasets. Ten-year data records (2007-2016) of dust AOD have been generated with four algorithms using IASI observations over the greater Sahara region [80°W - 120°E, 0°N - 40°N]. For validation, the coarse mode AOD at 0.55 μm from the AERONET direct-sun spectral deconvolution algorithm (SDA) product may be used as a proxy for desert dust. The uncertainty validation results for IASI are still tentative, as larger IASI pixel sizes and the conversion of the IASI AOD values from infrared to visible wavelengths for comparison to ground-based observations introduce large uncertainties.
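The validation statistic described above has a compact form in code; a sketch with synthetic Gaussian errors (the data are simulated, not from the record, and stand in for AERONET-matched retrievals):

```python
import numpy as np

def validate_uncertainty(satellite, reference, uncertainty):
    """Check whether stated pixel-level uncertainties describe the
    observed errors: D = (satellite - reference) / uncertainty
    should be ~N(0, 1), i.e. about 68.3% of |D| <= 1."""
    d = ((np.asarray(satellite, float) - np.asarray(reference, float))
         / np.asarray(uncertainty, float))
    frac_within_1 = float(np.mean(np.abs(d) <= 1.0))
    return d, frac_within_1

# Simulated "AERONET" AOD and satellite retrievals whose errors
# exactly match the stated uncertainty sigma:
rng = np.random.default_rng(42)
reference = rng.uniform(0.05, 0.5, 100_000)
sigma = 0.05
satellite = reference + rng.normal(0.0, sigma, reference.size)
d, frac = validate_uncertainty(satellite, reference,
                               np.full(reference.size, sigma))
```

With well-calibrated uncertainties, `frac` lands near 0.683; values well below that would indicate underestimated uncertainties, and values above it overestimated ones.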

  16. DHS Internship Paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreyer, J

    2007-09-18

During my internship at Lawrence Livermore National Laboratory I worked with microcalorimeter gamma-ray and fast-neutron detectors based on superconducting Transition Edge Sensors (TESs). These instruments are being developed for fundamental science and nuclear non-proliferation applications because of their extremely high energy resolution; however, this comes at the expense of a small pixel size and slow decay times. The small pixel sizes are being addressed by developing detector arrays, while the low count rate is being addressed by developing Digital Signal Processors (DSPs) that allow higher throughput than traditional pulse processing algorithms. Traditionally, low-temperature microcalorimeter pulses have been processed off-line with optimum filtering routines based on the measured spectral characteristics of the signal and the noise. These optimum filters rely on the spectral content of the signal being identical for all events, and therefore require capturing the entire pulse signal without pile-up. In contrast, the DSP algorithm being developed is based on differences in signal levels before and after a trigger event, and therefore does not require the waveform to fully decay, or even the signal level to be close to the baseline. The readout system allows for real-time data acquisition and analysis at count rates exceeding 100 Hz for pulses with decay times of several milliseconds, with minimal loss of energy resolution. Originally developed for gamma-ray analysis with HPGe detectors, the system's hardware and firmware were modified to accommodate the slower TES signals, and the parameters of the filtering algorithm were optimized to maximize either resolution or throughput. The following presents an overview of the digital signal processing hardware and discusses the results of characterization measurements made to determine the system's performance.

  17. A New Sparse Representation Framework for Reconstruction of an Isotropic High Spatial Resolution MR Volume From Orthogonal Anisotropic Resolution Scans.

    PubMed

    Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K

    2017-05-01

    In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.
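The full sparse-representation pipeline is beyond a short sketch, but the error-backprojection step named above has a compact generic form; a minimal sketch in which block-mean decimation and nearest-neighbour upsampling stand in for the true imaging and backprojection operators:

```python
import numpy as np

def backproject(hr_est, lr_obs, factor, steps=50):
    """Iterative error backprojection: repeatedly simulate the
    low-res acquisition from the current high-res estimate,
    upsample the residual against the observed low-res image,
    and add it back to the estimate."""
    hr = hr_est.astype(float).copy()
    for _ in range(steps):
        # Simulate LR acquisition: average each factor x factor block.
        sim = hr.reshape(hr.shape[0] // factor, factor,
                         hr.shape[1] // factor, factor).mean(axis=(1, 3))
        residual = lr_obs - sim
        hr += residual.repeat(factor, 0).repeat(factor, 1)
    return hr

lr = np.array([[0.2, 0.8],
               [0.4, 0.6]])            # observed low-res image
hr0 = np.zeros((4, 4))                 # crude initial HR estimate
hr = backproject(hr0, lr, factor=2)
```

After convergence the HR estimate is consistent with the LR observation under the assumed forward model; in the paper this step refines the sparse-coding/wavelet-fusion result rather than a zero image.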

  18. Glacier Melt Detection in Complex Terrain Using New AMSR-E Calibrated Enhanced Daily EASE-Grid 2.0 Brightness Temperature (CETB) Earth System Data Record

    NASA Astrophysics Data System (ADS)

    Ramage, J. M.; Brodzik, M. J.; Hardman, M.

    2016-12-01

Passive microwave (PM) 18 GHz and 36 GHz horizontally- and vertically-polarized brightness temperatures (Tb) channels from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) have been important sources of information about snow melt status in glacial environments, particularly at high latitudes. PM data are sensitive to the changes in near-surface liquid water that accompany melt onset, melt intensification, and refreezing. Overpasses are frequent enough that in most areas multiple (2-8) observations per day are possible, yielding the potential for determining the dynamic state of the snow pack during transition seasons. AMSR-E Tb data have been used effectively to determine melt onset and melt intensification using daily Tb and diurnal amplitude variation (DAV) thresholds. Due to mixed pixels in historically coarse spatial resolution Tb data, melt analysis has been impractical in ice-marginal zones where pixels may be only fractionally snow/ice covered, and in areas where the glacier is near large bodies of water: even small regions of open water in a pixel severely impact the microwave signal. We use the new enhanced-resolution Calibrated Passive Microwave Daily EASE-Grid 2.0 Brightness Temperature (CETB) Earth System Data Record product's twice daily observations to test and update existing snow melt algorithms by determining appropriate melt thresholds for both Tb and DAV for the CETB 18 and 36 GHz channels. We use the enhanced resolution data to evaluate melt characteristics along glacier margins and melt transition zones during the melt seasons in locations spanning a wide range of melt scenarios, including the Patagonian Andes, the Alaskan Coast Range, and the Russian High Arctic icecaps. We quantify how improvement of spatial resolution from the original 12.5 - 25 km-scale pixels to the enhanced resolution of 3.125 - 6.25 km improves the ability to evaluate melt timing across boundaries and transition zones in diverse glacial environments.
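A melt flag from twice-daily Tb using Tb and DAV thresholds, as described above, can be sketched as follows (the threshold values and Tb series are illustrative, not the study's calibrated ones):

```python
import numpy as np

def detect_melt(tb_series, tb_threshold=246.0, dav_threshold=10.0):
    """Flag melt days from twice-daily brightness temperatures:
    melt is declared when either the daily maximum Tb exceeds a
    threshold or the diurnal amplitude variation
    DAV = |ascending - descending| does."""
    tb = np.asarray(tb_series, float)            # shape (days, 2)
    dav = np.abs(tb[:, 0] - tb[:, 1])
    return (tb.max(axis=1) > tb_threshold) | (dav > dav_threshold)

# Three illustrative days of 36 GHz Tb (K), two overpasses each:
tb = np.array([[230.0, 232.0],    # cold, dry snow
               [238.0, 252.0],    # strong diurnal cycle: melt onset
               [255.0, 256.0]])   # warm, wet snow
melt = detect_melt(tb)
```

In practice such thresholds are tuned per channel and region, which is exactly what the study does for the enhanced-resolution CETB channels.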

  19. Satellite image time series simulation for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Guo, Tao

    2014-11-01

The performance of environmental monitoring depends heavily on the availability of consecutive observation data, and there is growing demand in the remote sensing community for satellite imagery of sufficient resolution in both the spatial and the temporal dimensions, requirements that conflict and are hard to trade off. Multiple constellations could be a solution if cost were no concern, so it remains interesting but very challenging to develop a method that can simultaneously improve both spatial and temporal detail. Research efforts have approached the problem from various angles. One class of approaches enhances spatial resolution using techniques such as super-resolution and pan-sharpening, which can produce good visual effects but mostly cannot preserve spectral signatures, so the results lose analytical value. Another class fills temporal gaps by time interpolation, which adds no new informative content at all. In this paper we present a novel method to generate satellite images with higher spatial and temporal detail, which in turn enables satellite image time series simulation. Our method starts with a paired high/low-resolution data set, and a spatial registration is performed using an LDA model to map high- and low-resolution pixels to each other. Temporal change information is then captured by comparing the low-resolution time series data; the change is projected onto the high-resolution data plane and assigned to each high-resolution pixel according to predefined temporal change patterns for each type of ground object, generating a simulated high-resolution image. A preliminary experiment shows that our method can simulate a high-resolution image with good accuracy. 
We consider the contribution of our method to be enabling timely monitoring of temporal changes through analysis of low-resolution image time series alone, so that the use of costly high-resolution data can be reduced as much as possible. This presents an efficient, cost-effective solution for building an economically operational monitoring service for environment, agriculture, forestry, land use investigation, and other applications.
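The projection of coarse-series temporal change onto fine pixels by ground-object type can be sketched in heavily simplified form (the class labels, change factors, and multiplicative model below are illustrative assumptions, not the paper's LDA-based scheme):

```python
import numpy as np

def simulate_fine_image(fine_t0, labels, coarse_change):
    """Project per-class temporal change observed in the low-res
    series onto the fine image: every fine pixel of class c is
    scaled by that class's observed change factor."""
    n_classes = len(coarse_change)
    change = np.asarray([coarse_change[c] for c in range(n_classes)])
    return fine_t0 * change[labels]          # fancy-index by class map

fine_t0 = np.array([[0.3, 0.3],
                    [0.7, 0.7]])             # fine image at t0
labels = np.array([[0, 0],
                   [1, 1]])                  # class 0 = crop, 1 = forest
coarse_change = {0: 1.2, 1: 1.0}             # crops greened up; forest stable
fine_t1 = simulate_fine_image(fine_t0, labels, coarse_change)
```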

  20. How Small Can Impact Craters Be Detected at Large Scale by Automated Algorithms?

    NASA Astrophysics Data System (ADS)

    Bandeira, L.; Machado, M.; Pina, P.; Marques, J. S.

    2013-12-01

The last decade has seen widespread publication of crater detection algorithms (CDA) with increasing detection performance. The adaptive nature of some of the algorithms [1] has permitted their use in the construction or update of global catalogues for Mars and the Moon. Nevertheless, the smallest craters detected in these settings by CDA are 10 pixels in diameter (about 2 km in MOC-WA images) [2], or down to 16 pixels or 200 m in HRSC imagery [3]. The availability of Martian images with metric (HRSC and CTX) and centimetric (HiRISE) resolutions is unveiling craters not perceived before, so automated approaches seem a natural way of detecting the myriad of these structures. In this study we present our efforts, based on our previous algorithms [2-3] and new training strategies, to push automated crater detection to a dimensional threshold as close as possible to the detail that can be perceived in the images, something that has not yet been addressed in a systematic way. The approach is based on the selection of candidate regions of the images (portions containing crescent highlight and shadow shapes that indicate the possible presence of a crater) using mathematical morphology operators (connected operators of different sizes), followed by the extraction of texture features (Haar-like) and classification by AdaBoost into crater and non-crater. This is a supervised approach, meaning that a training phase, in which manually labelled samples are provided, is necessary so the classifier can learn what crater and non-crater structures look like. The algorithm is tested intensively on Martian HiRISE images from different locations on the planet, in order to cover the widest range of surface types from the geological point of view (different ages and crater densities) and from the imaging or textural perspective (different degrees of smoothness/roughness).
The quality of the detections obtained depends clearly on the size of the craters to be detected: the lower this limit, the higher the false detection rate. A detailed evaluation with results broken down by crater size and by image or surface type shows that automated detection in large HiRISE datasets at 25 cm/pixel resolution can be performed successfully (high correct and low false detection rates) down to a crater diameter of about 8-10 m, or 32-40 pixels. [1] Martins L, Pina P, Marques JS, Silveira M, 2009, Crater detection by a boosting approach. IEEE Geoscience and Remote Sensing Letters 6: 127-131. [2] Salamuniccar G, Loncaric S, Pina P, Bandeira L, Saraiva J, 2011, MA130301GT catalogue of Martian impact craters and advanced evaluation of crater detection algorithms using diverse topography and image datasets. Planetary and Space Science 59: 111-131. [3] Bandeira L, Ding W, Stepinski T, 2012, Detection of sub-kilometer craters in high resolution planetary images using shape and texture features. Advances in Space Research 49: 64-74.
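As a sketch of the classification stage only (the morphological candidate-region selection is omitted), the following trains AdaBoost on a few simple Haar-like features; the synthetic patch generator, feature set, and all parameters are invented stand-ins, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def haar_features(patch):
    """Three simple Haar-like responses: left-right, top-bottom and
    centre-surround mean-intensity differences."""
    h, w = patch.shape
    lr = patch[:, : w // 2].mean() - patch[:, w // 2 :].mean()
    tb = patch[: h // 2, :].mean() - patch[h // 2 :, :].mean()
    c = patch[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    return np.array([lr, tb, c.mean() - patch.mean()])

def synthetic_patch(is_crater, size=16):
    """Fake a crater as a highlighted/shadowed half-moon pair; otherwise noise."""
    patch = rng.normal(0.5, 0.05, (size, size))
    if is_crater:
        patch[:, : size // 2] += 0.3   # sun-facing rim
        patch[:, size // 2 :] -= 0.3   # shadowed interior
    return patch

X = np.array([haar_features(synthetic_patch(i % 2 == 0)) for i in range(400)])
y = (np.arange(400) % 2 == 0).astype(int)

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

On real imagery the feature pool is far larger and a held-out test set is needed; here the synthetic classes are trivially separable, so training accuracy is near 1.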

  1. Estimating Daily Evapotranspiration Based on A Model of Evapotranspiration Fraction (EF) for Mixed Pixels

    NASA Astrophysics Data System (ADS)

    Xin, X.; Li, F.; Peng, Z.; Qinhuo, L.

    2017-12-01

Land surface heterogeneities significantly affect the reliability and accuracy of remotely sensed evapotranspiration (ET), and the problem worsens at lower resolutions. At the same time, temporal extrapolation of the instantaneous latent heat flux (LE) at satellite overpass time to daily ET is crucial for applications of such remote sensing products. The purpose of this paper is to propose a simple but efficient model for estimating daytime evapotranspiration that accounts for the heterogeneity of mixed pixels. To do so, an equation for the evapotranspiration fraction (EF) of mixed pixels was derived from two key assumptions. Assumption 1: the available energy (AE) of each sub-pixel is approximately equal to that of the other sub-pixels in the same mixed pixel, within an acceptable margin of bias, and to the AE of the mixed pixel itself. This assumption serves only to simplify the equation, and its uncertainties and the resulting errors in estimated ET are very small. Assumption 2: the EF of each sub-pixel equals the EF of the nearest pure pixel(s) of the same land cover type. The resulting equation is intended to correct the spatial-scale error of the mixed-pixel EF and can be used to calculate daily ET from daily AE data. The model was applied to an artificial oasis in the midstream of the Heihe River. HJ-1B satellite data were used to estimate the lumped fluxes at a scale of 300 m, after resampling the 30-m resolution datasets to 300 m resolution, which carried out the key step of the model. The results before and after correction were compared with each other and validated using site data from eddy-correlation systems. The results indicate that the new model improves the accuracy of daily ET estimation relative to the lumped method.
Validation at 12 eddy-correlation sites over 9 days of HJ-1B overpasses showed that R² increased from 0.62 to 0.82, the RMSE decreased from 2.47 MJ/m² to 1.60 MJ/m², and the MBE decreased from 1.92 MJ/m² to 1.18 MJ/m², a significant improvement. The model is easy to apply, and the module for inhomogeneous surfaces is independent and easy to embed in traditional remote sensing algorithms for heat fluxes to obtain daily ET, algorithms that were mainly designed to calculate LE or ET under unsaturated conditions and did not consider land surface heterogeneity.
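Under the two assumptions, the EF of a mixed pixel reduces to an area-weighted sum of the EFs of the nearest pure pixels, and daily ET follows by multiplying with daily AE. A tiny numerical sketch with invented area fractions and EF values:

```python
import numpy as np

# Area fractions of the land-cover types inside one mixed pixel and the EF of
# the nearest pure pixel of each type (all values hypothetical).
fractions = np.array([0.5, 0.3, 0.2])     # e.g. cropland, water, bare soil
ef_pure   = np.array([0.75, 0.90, 0.30])

# Assumption 1 makes AE uniform within the mixed pixel, so the mixed-pixel
# latent heat is the area-weighted sum of sub-pixel LE = EF_k * AE, giving:
ef_mixed = np.sum(fractions * ef_pure)

# Daily ET from daily available energy (MJ/m^2/day, hypothetical value).
ae_daily = 12.0
et_daily = ef_mixed * ae_daily
print(ef_mixed, et_daily)
```

The correction relative to the lumped method amounts to replacing a single EF retrieved at the coarse scale by this weighted combination of pure-pixel EFs.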

  2. Low complexity pixel-based halftone detection

    NASA Astrophysics Data System (ADS)

    Ok, Jiheon; Han, Seong Wook; Jarno, Mielikainen; Lee, Chulhee

    2011-10-01

With the rapid advances of the internet and other multimedia technologies, the digital document market has been growing steadily. Since most digital documents use halftone technologies, quality degradation occurs when they are scanned and reprinted. It is therefore necessary to extract the halftone areas to produce high-quality prints. In this paper, we propose a low-complexity pixel-based halftone detection algorithm. For each pixel, we considered a surrounding block. If the block contained any flat background regions, text, thin lines, or continuous or non-homogeneous regions, the pixel was classified as a non-halftone pixel. After excluding those non-halftone pixels, the remaining pixels were considered halftone pixels. Finally, documents were classified as picture or photo documents by calculating the halftone pixel ratio. The proposed algorithm proved to be memory-efficient, required little computation, and was easily implemented on a GPU.
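A toy per-pixel sketch of this idea (the block size, thresholds, and the transition-count heuristic are invented; the paper's actual tests for text, thin lines, and continuous regions are not reproduced):

```python
import numpy as np

def is_halftone_pixel(img, y, x, block=8, flat_thresh=4.0, trans_thresh=0.3):
    """Heuristic: a pixel is a halftone candidate when its surrounding block is
    not flat (low variance) yet shows the dense on/off transitions typical of
    halftone dot patterns."""
    h, w = img.shape
    y0, y1 = max(0, y - block // 2), min(h, y + block // 2)
    x0, x1 = max(0, x - block // 2), min(w, x + block // 2)
    region = img[y0:y1, x0:x1].astype(float)
    if region.std() < flat_thresh:           # flat background -> non-halftone
        return False
    binary = region > region.mean()
    trans = np.abs(np.diff(binary.astype(int), axis=1)).mean()
    return bool(trans > trans_thresh)        # many transitions -> halftone dots

# Synthetic 16x16 image: left half a checkerboard "halftone", right half flat.
img = np.zeros((16, 16))
img[:, :8] = np.indices((16, 8)).sum(axis=0) % 2 * 255
print(is_halftone_pixel(img, 8, 4), is_halftone_pixel(img, 8, 12))
```

The document-level decision would then threshold the fraction of pixels flagged as halftone.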

  3. Super-resolution fluorescence microscopy by stepwise optical saturation

    PubMed Central

    Zhang, Yide; Nallathamby, Prakash D.; Vigil, Genevieve D.; Khan, Aamir A.; Mason, Devon E.; Boerckel, Joel D.; Roeder, Ryan K.; Howard, Scott S.

    2018-01-01

Super-resolution fluorescence microscopy is an important tool in biomedical research for its ability to discern features smaller than the diffraction limit. However, due to its difficult implementation and high cost, super-resolution microscopy is not feasible in many applications. In this paper, we propose and demonstrate a saturation-based super-resolution fluorescence microscopy technique that can be easily implemented and requires neither additional hardware nor complex post-processing. The method is based on the principle of stepwise optical saturation (SOS), where M steps of raw fluorescence images are linearly combined to generate an image with a √M-fold increase in resolution compared with conventional diffraction-limited images. For example, linearly combining (scaling and subtracting) two images obtained at regular powers extends the resolution by a factor of 1.4 beyond the diffraction limit. The resolution improvement in SOS microscopy is theoretically unlimited but in practice is limited by the signal-to-noise ratio. We perform simulations and experimentally demonstrate super-resolution microscopy with both one-photon (confocal) and multiphoton excitation fluorescence. We show that with the multiphoton modality, SOS microscopy can provide super-resolution imaging deep in scattering samples. PMID:29675306
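The two-step combination can be checked in one dimension. Assuming a weakly saturating fluorescence response F(I) ≈ a1·I + a2·I² (coefficients invented for illustration), imaging a Gaussian PSF at powers P and 2P and forming F2 − 2·F1 cancels the linear term, leaving a profile proportional to PSF², which is √2 narrower:

```python
import numpy as np

x = np.linspace(-2, 2, 2001)
sigma = 0.5
psf = np.exp(-x**2 / (2 * sigma**2))      # diffraction-limited Gaussian PSF

a1, a2 = 1.0, -0.05                       # hypothetical saturation coefficients
F1 = a1 * psf + a2 * psf**2               # image at power P
F2 = a1 * (2 * psf) + a2 * (2 * psf)**2   # image at power 2P
sos = F2 - 2 * F1                         # = 2*a2*psf**2: linear term cancels

def fwhm(profile):
    p = np.abs(profile)
    idx = np.where(p >= p.max() / 2)[0]
    return x[idx[-1]] - x[idx[0]]

print(fwhm(psf) / fwhm(sos))              # ~1.41, i.e. a sqrt(2) improvement
```

Higher SOS orders combine more power steps to cancel successive polynomial terms, at the cost of amplifying noise in the subtraction.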

  4. Optimized random phase only holograms.

    PubMed

    Zea, Alejandro Velez; Barrera Ramirez, John Fredy; Torroba, Roberto

    2018-02-15

We propose a simple and efficient technique capable of generating Fourier phase-only holograms with a reconstruction quality similar to the results obtained with the Gerchberg-Saxton (G-S) algorithm. Our proposal is to use the traditional G-S algorithm to optimize a random phase pattern for the resolution, pixel size, and target size of the general optical system, without any specific amplitude data. This produces an optimized random phase (ORAP), which is then used for fast generation of phase-only holograms of arbitrary amplitude targets. The ORAP needs to be generated only once for a given optical system, avoiding costly iterative algorithms for each new target. We show numerical and experimental results confirming the validity of the proposal.
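For reference, a bare-bones G-S loop of the kind the ORAP optimization builds on (Fourier phase-only hologram of a hypothetical binary target; the array size, iteration count, and target are arbitrary choices for this sketch):

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=200, seed=0):
    """Iterate between the hologram plane (unit amplitude, free phase) and the
    image plane (target amplitude, free phase)."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(n_iter):
        img = np.fft.fft2(field)
        img = target_amp * np.exp(1j * np.angle(img))  # impose target amplitude
        field = np.fft.ifft2(img)
        field = np.exp(1j * np.angle(field))           # keep phase only
    return np.angle(field)

# Hypothetical 32x32 target: a bright square on a black background.
target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0

phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
corr = np.corrcoef(recon.ravel(), target.ravel())[0, 1]
print(corr)
```

The ORAP idea replaces the per-target iteration above with a one-time optimization of the random starting phase for the optical system, after which new targets need no iterative refinement.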

  5. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all computerized analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate the move from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region-growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects into region classes. No other practical, operational image segmentation approach has this tight integration of region-growing object finding with region classification. The integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image sections are small enough to mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region-growing object finding and region classification is what enables the high spatial fidelity of its segmentations. This presentation provides an overview of the RHSEG algorithm and describes how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites in remotely sensed data.

  6. SRRF: Universal live-cell super-resolution microscopy.

    PubMed

    Culley, Siân; Tosheva, Kalina L; Matos Pereira, Pedro; Henriques, Ricardo

    2018-08-01

Super-resolution microscopy techniques break the diffraction limit of conventional optical microscopy to achieve resolutions approaching tens of nanometres. The major advantage of such techniques is that they provide resolutions close to those obtainable with electron microscopy while maintaining the benefits of light microscopy such as a wide palette of high-specificity molecular labels, straightforward sample preparation and live-cell compatibility. Despite this, the application of super-resolution microscopy to dynamic, living samples has thus far been limited and often requires specialised, complex hardware. Here we demonstrate how a novel analytical approach, Super-Resolution Radial Fluctuations (SRRF), is able to make live-cell super-resolution microscopy accessible to a wider range of researchers. We show its applicability to live samples expressing GFP using commercial confocal as well as laser- and LED-based widefield microscopes, with the latter achieving long-term timelapse imaging with minimal photobleaching. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Sparse representation-based volumetric super-resolution algorithm for 3D CT images of reservoir rocks

    NASA Astrophysics Data System (ADS)

    Li, Zhengji; Teng, Qizhi; He, Xiaohai; Yue, Guihua; Wang, Zhengyong

    2017-09-01

The parameter evaluation of reservoir rocks helps us identify components and calculate permeability and other parameters, and it plays an important role in the petroleum industry. Computed tomography (CT) remains an irreplaceable way to acquire the microstructure of reservoir rocks. During evaluation and analysis, large samples and high-resolution images are required in order to obtain accurate results. Owing to the inherent limitations of CT, however, a large field of view yields low-resolution images, while high-resolution images entail a smaller field of view. Our method is a promising solution to these data collection limitations. In this study, a framework for sparse representation-based 3D volumetric super-resolution is proposed to enhance the resolution of 3D voxel images of reservoirs scanned with CT. A single reservoir structure and its downgraded model are divided into a large number of 3D cubes of voxel pairs, and these cube pairs are used to compute two overcomplete dictionaries and the sparse-representation coefficients in order to estimate the high-frequency component. Furthermore, to improve the results, a new feature extraction method combining BM4D with a Laplacian filter is introduced. In addition, we conducted a visual evaluation of the method and used the PSNR and FSIM to evaluate it quantitatively.

  8. Algorithm for Detecting a Bright Spot in an Image

    NASA Technical Reports Server (NTRS)

    2009-01-01

An algorithm processes the pixel intensities of a digitized image to detect and locate a circular bright spot whose approximate size is known in advance. The algorithm is used to find images of the Sun in cameras aboard the Mars Exploration Rovers. (The images are used in estimating the orientations of the Rovers relative to the direction to the Sun.) The algorithm can also be adapted to track circular bright targets in other, diverse applications. The first step in the algorithm is to calculate a dark-current ramp, a correction necessitated by the scheme that governs the readout of pixel charges in the charge-coupled-device camera in the original Mars Exploration Rover application. In this scheme, the fraction of each frame period during which dark current accumulates in a given pixel (and, hence, the dark-current contribution to the pixel intensity reading) is proportional to the pixel row number. For the purpose of the algorithm, the dark-current contribution to the intensity reading of each pixel is assumed to equal the average of the intensity readings of all pixels in the same row, and the factor of proportionality is estimated on the basis of this assumption. The product of the row number and the factor of proportionality is then subtracted from each pixel's reading to obtain a dark-current-corrected intensity reading. The next step in the algorithm is to determine the best location, within the overall image, for a window of N × N pixels (where N is an odd number) large enough to contain the bright spot of interest plus a small margin. (In the original application, the overall image contains 1,024 by 1,024 pixels, the image of the Sun is about 22 pixels in diameter, and N is chosen to be 29.)
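A small sketch of these two steps on a synthetic frame (the frame size, ramp slope, noise level, spot position, and N are invented; the original uses a 1,024×1,024 frame and N = 29):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 64x64 frame: a row-proportional dark-current ramp plus a bright
# disc of roughly known size (standing in for the Sun); all values invented.
H = W = 64
rows = np.arange(H, dtype=float)[:, None]
frame = 0.5 * rows + rng.normal(0.0, 1.0, (H, W))       # true ramp slope 0.5
yy, xx = np.mgrid[:H, :W]
frame[(yy - 40) ** 2 + (xx - 22) ** 2 <= 25] += 50.0    # spot centred at (40, 22)

# Step 1: estimate the ramp slope from per-row means (least squares through
# the origin) and subtract the row-proportional dark current.
row_means = frame.mean(axis=1)
slope = np.sum(rows.ravel() * row_means) / np.sum(rows.ravel() ** 2)
corrected = frame - slope * rows

# Step 2: slide an N x N window (N odd, spot plus margin) and keep the centre
# of the window with the largest summed intensity.
N = 15
best, best_pos = -np.inf, None
for y in range(H - N + 1):
    for x in range(W - N + 1):
        s = corrected[y : y + N, x : x + N].sum()
        if s > best:
            best, best_pos = s, (y + N // 2, x + N // 2)
print(best_pos)
```

An integral image would make the window search O(1) per position; the brute-force loop is kept here for clarity.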

  9. Super-resolution chemical imaging with dynamic placement of plasmonic hotspots

    NASA Astrophysics Data System (ADS)

    Olson, Aeli P.; Ertsgaard, Christopher T.; McKoskey, Rachel M.; Rich, Isabel S.; Lindquist, Nathan C.

    2015-08-01

    We demonstrate dynamic placement of plasmonic "hotspots" for super-resolution chemical imaging via Surface Enhanced Raman Spectroscopy (SERS). A silver nanohole array surface was coated with biological samples and illuminated with a laser. Due to the large plasmonic field enhancements, blinking behavior of the SERS hotspots was observed and processed using a Stochastic Optical Reconstruction Microscopy (STORM) algorithm enabling localization to within 10 nm. However, illumination of the sample with a single static laser beam (i.e., a slightly defocused Gaussian beam) only produced SERS hotspots in fixed locations on the surface, leaving noticeable gaps in any final image. But, by using a spatial light modulator (SLM), the illumination profile of the beam could be altered, shifting any hotspots across the nanohole array surface in sub-wavelength steps. Therefore, by properly structuring an illuminating light field with the SLM, we show the possibility of positioning plasmonic hotspots over a metallic nanohole surface on-the-fly. Using this and our SERS-STORM imaging technique, we show potential for high-resolution chemical imaging without the noticeable gaps that were present with static laser illumination. Interestingly, even illuminating the surface with randomly shifting SLM phase profiles was sufficient to completely fill in a wide field of view for super-resolution SERS imaging of a single strand of 100-nm thick collagen protein fibrils. Images were then compared to those obtained with a scanning electron microscope (SEM). Additionally, we explored alternative methods of phase shifting other than holographic illumination through the SLM to create localization of hotspots necessary for SERS-STORM imaging.

  10. An Analysis of Periodic Components in BL Lac Object S5 0716 +714 with MUSIC Method

    NASA Astrophysics Data System (ADS)

    Tang, J.

    2012-01-01

The multiple signal classification (MUSIC) algorithm is introduced for estimating the variability periods of BL Lac objects. The principle of the MUSIC spectral analysis method and a theoretical analysis of its frequency resolution on analog signals are included. From the literature, we collected extensive observation data of the BL Lac object S5 0716+714 in the V, R, and I bands from 1994 to 2008. The variability periods of S5 0716+714 were obtained by means of the MUSIC spectral analysis method and the periodogram spectral analysis method. Two major periods exist for all bands: (3.33±0.08) years and (1.24±0.01) years. The period estimate based on the MUSIC spectral analysis method is compared with that based on the periodogram method. MUSIC is a super-resolution algorithm that works with small data lengths and can be used to detect the variability periods of weak signals.
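A minimal MUSIC sketch on a synthetic light curve with the two reported periods (the sampling rate, noise level, and subspace sizes are invented, and real photometry is unevenly sampled, which this toy ignores):

```python
import numpy as np

def music_spectrum(xs, p, freqs, m):
    """MUSIC pseudospectrum from a covariance matrix built with a sliding
    window; p = signal-subspace dimension (2 per real sinusoid)."""
    n = len(xs)
    X = np.array([xs[i : i + m] for i in range(n - m + 1)])
    R = X.T @ X / len(X)
    _, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = V[:, : m - p]                  # noise subspace
    t = np.arange(m)
    out = []
    for f in freqs:                     # f in cycles per sample
        a = np.exp(2j * np.pi * f * t)
        out.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(out)

# Synthetic light curve: periods 3.33 and 1.24 "years", 20 samples per year.
fs = 20.0
t = np.arange(0.0, 15.0, 1.0 / fs)
xs = np.sin(2 * np.pi * t / 3.33) + 0.6 * np.sin(2 * np.pi * t / 1.24)
xs += 0.1 * np.random.default_rng(2).normal(size=t.size)

freqs = np.linspace(0.05, 1.5, 2000) / fs    # candidate frequencies, cyc/sample
spec = music_spectrum(xs - xs.mean(), p=4, freqs=freqs, m=100)
peak_period = 1.0 / (freqs[np.argmax(spec)] * fs)
print(peak_period)                           # lands near one of the two periods
```

The pseudospectrum has sharp nulls of the noise-subspace projection at the true frequencies, which is what gives MUSIC its super-resolution behaviour at short data lengths.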

  11. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high resolution Google Earth imagery. The ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. To overcome the limitations of ICA and FastICA and make them purposeful, we developed a method involving three main steps: (1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and simple proposed rules that split the image into three regions: shadow + vegetation, bare soil + roads, and buildings; and (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters, followed by simple morphological operations to remove noise. Evaluation of the results shows that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.6% and 85.5% overall pixel-based and object-based precision, respectively.

  12. Single-snapshot DOA estimation by using Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin

    2014-12-01

This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS), i.e., classical ℓ1 minimization (the Least Absolute Shrinkage and Selection Operator, LASSO), fast smooth ℓ0 minimization, the Sparse Iterative Covariance-based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES), are analyzed, and their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that, unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of adaptive algorithms (e.g., Capon and MUSIC) even in the single-snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
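As an illustration of the single-snapshot ℓ1 approach only (not the paper's exact solvers), the following runs LASSO on an angle grid via a plain ISTA loop, with an invented 16-element half-wavelength array and two invented sources:

```python
import numpy as np

def steering(m, theta_deg, d=0.5):
    """Steering matrix of an m-element ULA with half-wavelength spacing."""
    th = np.deg2rad(np.asarray(theta_deg, dtype=float))
    return np.exp(2j * np.pi * d * np.outer(np.arange(m), np.sin(th)))

m = 16
grid = np.arange(-90.0, 91.0, 1.0)       # candidate DOAs, degrees
A = steering(m, grid)

# Single snapshot: two sources at -10 and 12 degrees plus complex noise.
rng = np.random.default_rng(3)
y = steering(m, [-10.0, 12.0]) @ np.array([1.0, 0.8])
y = y + 0.05 * (rng.normal(size=m) + 1j * rng.normal(size=m))

def ista(A, y, lam=0.5, n_iter=2000):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - A.conj().T @ (A @ x - y) / L
        mag = np.abs(g)
        x = g * np.maximum(mag - lam / L, 0.0) / np.maximum(mag, 1e-12)
    return x

x_hat = ista(A, y)
peaks = grid[np.abs(x_hat) > 0.3 * np.abs(x_hat).max()]
print(peaks)
```

The sparse spectrum concentrates near the true DOAs even from one snapshot, which is the super-resolution behaviour the paper contrasts with the Fourier beamformer.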

  13. Super-Resolution Scanning Laser Microscopy Based on Virtually Structured Detection

    PubMed Central

    Zhi, Yanan; Wang, Benquan; Yao, Xincheng

    2016-01-01

    Light microscopy plays a key role in biological studies and medical diagnosis. The spatial resolution of conventional optical microscopes is limited to approximately half the wavelength of the illumination light as a result of the diffraction limit. Several approaches—including confocal microscopy, stimulated emission depletion microscopy, stochastic optical reconstruction microscopy, photoactivated localization microscopy, and structured illumination microscopy—have been established to achieve super-resolution imaging. However, none of these methods is suitable for the super-resolution ophthalmoscopy of retinal structures because of laser safety issues and inevitable eye movements. We recently experimentally validated virtually structured detection (VSD) as an alternative strategy to extend the diffraction limit. Without the complexity of structured illumination, VSD provides an easy, low-cost, and phase artifact–free strategy to achieve super-resolution in scanning laser microscopy. In this article we summarize the basic principles of the VSD method, review our demonstrated single-point and line-scan super-resolution systems, and discuss both technical challenges and the potential of VSD-based instrumentation for super-resolution ophthalmoscopy of the retina. PMID:27480461

  14. Digital cinema system using JPEG2000 movie of 8-million pixel resolution

    NASA Astrophysics Data System (ADS)

    Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu

    2003-05-01

We have developed a prototype digital cinema system that can store, transmit, and display extra-high-quality movies of 8-million-pixel resolution using the JPEG2000 coding algorithm. The image resolution is 4 times that of HDTV, enabling digital cinema archives to replace conventional film. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed with JPEG2000 and stored in advance. Coded streams of 300-500 Mbps can be transmitted continuously from the PC server using TCP/IP. The decoder performs real-time decompression at 24/48 frames per second using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5 Gbps raw video signals. The prototype LCD projector uses three 3840×2048-pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens on a 300-inch screen. The refresh rate is set to 96 Hz to eliminate flicker entirely while preserving compatibility with cinema movies of 24 frames per second.

  15. Global Long-Term SeaWiFS Deep Blue Aerosol Products available at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Sayer, A. M.; Bettenhausen, Corey; Wei, Jennifer C.; Ostrenga, Dana M.; Vollmer, Bruce E.; Hsu, Nai-Yung; Kempler, Steven J.

    2012-01-01

Long-term climate data records of aerosols are needed to improve understanding of air quality and radiative forcing, and for many other applications. The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) provides a well-calibrated global 13-year (1997-2010) record of top-of-atmosphere radiance, suitable for retrieval of atmospheric aerosol optical depth (AOD). Recently, global aerosol products derived from SeaWiFS with the Deep Blue algorithm (SWDB) have become available for the entire mission, as part of the NASA Making Earth System data records for Use in Research Environments (MEaSUREs) program. The latest Deep Blue algorithm retrieves aerosol properties not only over bright desert surfaces but also over vegetated surfaces, oceans, and inland water bodies. Comparisons with AERONET observations have shown that the data are suitable for quantitative scientific use [1],[2]. The resolution of Level 2 pixels is 13.5 × 13.5 km² at the center of the swath. Level 3 daily and monthly data are composed of best-quality Level 2 pixels at resolutions of both 0.5° × 0.5° and 1.0° × 1.0°. Focusing on the southwest Asia region, this presentation shows seasonal variations of AOD and the results of comparing 5 years (2003-2007) of AOD from SWDB (Version 3) and MODIS Aqua (Version 5.1) Dark Target (MYD-DT) and Deep Blue (MYD-DB) algorithms.

  16. Functional Analysis of Internal Moving Organs Using Super-Resolution Echography

    NASA Astrophysics Data System (ADS)

    Masuda, Kohji; Ishihara, Ken; Nagakura, Toshiaki; Tsuda, Takao; Furukawa, Toshiyuki; Maeda, Hajime; Kumagai, Sadatoshi; Kodama, Shinzo

    1994-05-01

We have developed super-resolution echography to visualize the instantaneous velocity and acceleration of internal organs from time-series echograms recorded by a high-frame-rate echograph. The algorithm involves subtracting two echograms, dividing the difference by the brightness gradient of the first echogram, and normalizing the result by the time interval between the two echograms. Velocity or acceleration is mapped to a suitable color scale and superimposed on the original B-mode image. Functional diagnosis of moving organs can be made by visualizing instantaneous velocity. In the case of the heart, hypokinesis can be distinguished from a normal heart by the value and variation of the colored regions representing instantaneous velocity. The method can also be applied to the liver to observe pulsatile motion. By visualizing instantaneous acceleration, increases or decreases of velocity can be detected, and the throb timing and location of an arrhythmia in the heart can be observed. This method has the potential to contribute to noninvasive functional and characteristic evaluation.
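In one dimension the estimate reads v ≈ −(I₂ − I₁) / (∂I₁/∂x · Δt), the brightness-constancy relation that the subtract-divide-normalize recipe above applies per pixel. A sketch with an invented Gaussian echo profile (the sign convention is assumed; the original may differ):

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1001)
dt = 0.01                                   # high frame rate: 100 frames/s
v_true = 2.0                                # tissue velocity, units/s

profile = lambda x0: np.exp(-((x - x0) ** 2) / 0.5)   # bright echo structure
I1 = profile(4.0)                            # first echogram
I2 = profile(4.0 + v_true * dt)              # second echogram, shifted

# Subtract the echograms, divide by the brightness gradient of the first,
# normalise by the frame interval; estimate only where the gradient is usable.
grad = np.gradient(I1, x)
mask = np.abs(grad) > 0.1
v_est = -(I2 - I1)[mask] / (grad[mask] * dt)
print(np.median(v_est))
```

Acceleration follows by applying the same relation to successive velocity maps; regions with near-zero gradient must be masked out, as division there is ill-conditioned.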

  17. View-sharing PROPELLER with pixel-based optimal blade selection: application on dynamic contrast-enhanced imaging.

    PubMed

    Chuang, Tzu-Chao; Huang, Hsuan-Hung; Chang, Hing-Chiu; Wu, Ming-Ting

    2014-06-01

To achieve better spatial and temporal resolution in dynamic contrast-enhanced MR imaging, the concept of k-space data sharing, or view sharing, can be implemented for PROPELLER acquisition. As with other view-sharing methods, view-sharing PROPELLER (VS-Prop) can lose high-resolution dynamics due to the temporal smoothing effect. The degradation can be more severe when a narrow blade with fewer phase-encoding steps is chosen to increase the frame rate. In this study, an iterative algorithm termed pixel-based optimal blade selection (POBS) is proposed to allow spatially dependent selection of the rotating blades and to generate high-resolution dynamic images with minimal reconstruction artifacts. In VS-Prop reconstruction, the central k-space, which dominates image contrast, is provided only by the target blade, with the peripheral k-space contributed by a minimal number of consecutive rotating blades. To reduce reconstruction artifacts, the POBS algorithm picks the set of neighboring blades whose image contrast is closest to that of the target blade. Numerical simulations and phantom experiments were conducted to investigate the dynamic response and spatial profiles of images generated with the proposed method. In addition, dynamic contrast-enhanced cardiovascular imaging of healthy subjects was performed to demonstrate its feasibility and advantages. The simulation results show that POBS VS-Prop provides a timely dynamic response to rapid signal changes, especially for a small region of interest or with narrow blades. The POBS algorithm also captures nonsimultaneous signal changes over the entire FOV. In addition, both phantom and in vivo experiments show that the temporal smoothing effect can be avoided by means of POBS, leading to a higher wash-in slope of contrast enhancement after bolus injection.
With the satisfactory reconstruction quality provided by the POBS algorithm, the VS-Prop acquisition technique may find useful clinical applications in DCE MR imaging studies where both spatial and temporal resolution play important roles.

  18. The benefit of limb cloud imaging for tropospheric infrared limb sounding

    NASA Astrophysics Data System (ADS)

    Adams, S.; Spang, R.; Preusse, P.; Heinemann, G.

    2009-03-01

    Advances in detector technology enable a new generation of infrared limb sounders to measure 2-D images of the atmosphere. A proposed limb cloud imager (LCI) mode will measure clouds with very high spatial resolution. For the inference of temperature and trace gas distributions, detector pixels of the LCI have to be combined into super-pixels which provide the required signal-to-noise ratio and information content for the retrievals. This study examines the extent to which tropospheric coverage can be improved in comparison to limb sounding using a fixed field of view with the size of the super-pixels, as in conventional limb sounders. The study is based on cloud topographies derived from (a) IR brightness temperatures (BT) of geostationary weather satellites in conjunction with ECMWF temperature profiles and (b) ice and liquid water content data of the Consortium for Small-scale Modeling-Europe (COSMO-EU) of the German Weather Service. Limb cloud images are simulated by matching the cloud topography with the limb sounding line of sight (LOS). The analysis of the BT data shows that the reduction of the spatial sampling along the track has hardly any effect on the gain in information. The comparison between BT and COSMO-EU data identifies the strength of both data sets, which are the representation of the horizontal cloud extent for the BT data and the reproduction of the cloud amount for the COSMO-EU data. The results of the analysis of both data sets show the great advantage of the cloud imager. However, because both cloud data sets do not present the complete fine structure of the real cloud fields in the atmosphere it is assumed that the results tend to underestimate the increase in information. In conclusion, real measurements by such an instrument may result in an even higher benefit for tropospheric limb retrievals.

  19. The benefit of limb cloud imaging for infrared limb sounding of tropospheric trace gases

    NASA Astrophysics Data System (ADS)

    Adams, S.; Spang, R.; Preusse, P.; Heinemann, G.

    2009-06-01

    Advances in detector technology enable a new generation of infrared limb sounders to measure 2-D images of the atmosphere. A proposed limb cloud imager (LCI) mode will detect clouds with a spatial resolution unprecedented for limb sounding. For the inference of temperature and trace gas distributions, detector pixels of the LCI have to be combined into super-pixels which provide the required signal-to-noise ratio and information content for the retrievals. This study examines the extent to which tropospheric coverage can be improved in comparison to limb sounding using a fixed field of view with the size of the super-pixels, as in conventional limb sounders. The study is based on cloud topographies derived from (a) IR brightness temperatures (BT) of geostationary weather satellites in conjunction with ECMWF temperature profiles and (b) ice and liquid water content data of the Consortium for Small-scale Modeling-Europe (COSMO-EU) of the German Weather Service. Limb cloud images are simulated by matching the cloud topography with the limb sounding line of sight (LOS). The analysis of the BT data shows that reducing the spatial sampling along the track has hardly any effect on the gain in information. The comparison between BT and COSMO-EU data identifies the strengths of the two data sets: the BT data better represent the horizontal cloud extent, while the COSMO-EU data better reproduce the cloud amount. The results for both data sets show the great advantage of the cloud imager. However, because neither cloud data set captures the complete fine structure of real cloud fields in the atmosphere, the results probably underestimate the increase in information. In conclusion, real measurements by such an instrument may yield an even greater benefit for tropospheric limb retrievals.

  20. Analyzing blinking effects in super resolution localization microscopy with single-photon SPAD imagers

    NASA Astrophysics Data System (ADS)

    Antolovic, Ivan Michel; Burri, Samuel; Bruschini, Claudio; Hoebe, Ron; Charbon, Edoardo

    2016-02-01

    For many scientific applications, electron multiplying charge coupled devices (EMCCDs) have been the sensor of choice because of their high quantum efficiency and built-in electron amplification. Lately, many researchers have introduced scientific complementary metal-oxide semiconductor (sCMOS) imagers in their instrumentation, so as to take advantage of faster readout and the absence of excess noise. Alternatively, single-photon avalanche diode (SPAD) imagers can provide even faster frame rates and zero readout noise. SwissSPAD is a 1-bit 512×128 SPAD imager, one of the largest of its kind, featuring a frame duration of 6.4 μs. Additionally, a gating mechanism enables photosensitive windows as short as 5 ns with a skew better than 150 ps across the entire array. The SwissSPAD photon detection efficiency (PDE) uniformity is very high, thanks both to photon-to-digital conversion and to a reduced fraction of "hot pixels" or "screamers", which would pollute the image with noise. A low native fill factor was recovered to a large extent using a microlens array, leading to a maximum PDE increase of 12×. This enabled us to detect single fluorophores, as required by ground state depletion followed by individual molecule return imaging microscopy (GSDIM). We show the first super-resolution results obtained with a SPAD imager, with an estimated localization uncertainty of 30 nm and resolution of 100 nm. The high time resolution of 6.4 μs can be utilized to explore the dye's photophysics or for dye optimization. We also present the methodology for the blinking analysis on experimental data.

  1. Spatial Upscaling of Long-term In Situ LAI Measurements from Global Network Sites for Validation of Remotely Sensed Products

    NASA Astrophysics Data System (ADS)

    Xu, B.; Jing, L.; Qinhuo, L.; Zeng, Y.; Yin, G.; Fan, W.; Zhao, J.

    2015-12-01

    Leaf area index (LAI) is a key parameter in terrestrial ecosystem models, and a series of global LAI products have been derived from satellite data. To apply these LAI products effectively, it is necessary to evaluate their accuracy reasonably. Long-term LAI measurements from global network sites are an important supplement to the product validation dataset. However, the spatial scale mismatch between the site measurements and the pixel grid hinders the use of these measurements in LAI product validation. In this study, a pragmatic approach based on Bayesian linear regression between long-term LAI measurements and high-resolution images is presented for upscaling the point-scale measurements to the pixel scale. The algorithm was evaluated using high-resolution LAI reference maps provided by the VALERI project at the Järvselja site and was then applied to upscale the long-term LAI measurements at the global network sites. Results indicate that the spatial scaling algorithm reduces the root mean square error (RMSE) from 0.42 before upscaling to 0.21 after upscaling, compared with the aggregated LAI reference maps at the pixel scale. Meanwhile, the algorithm shows better reliability and robustness than the ordinary least squares (OLS) method for upscaling LAI measurements acquired at dates without high-resolution images. The upscaled LAI measurements were employed to validate three global LAI products: MODIS, GLASS and GEOV1. Results indicate that (i) GLASS and GEOV1 show consistent temporal profiles over most sites, while MODIS exhibits temporal instability over a few forest sites. The RMSE of seasonality between the products and the upscaled LAI measurements is 0.25-1.72 for MODIS, 0.17-1.29 for GLASS and 0.36-1.35 for GEOV1 across sites. (ii) Product uncertainty varies by month: the lowest and highest uncertainties are 0.67 (March) and 1.53 (August) for MODIS, 0.67 (November) and 0.99 (July) for GLASS, and 0.61 (March) and 1.23 (August) for GEOV1. (iii) The overall uncertainty for MODIS, GLASS and GEOV1 is 1.36, 0.90 and 0.99, respectively. According to this study, long-term LAI measurements can be used to validate time series remote sensing products by spatial upscaling from the point scale to the pixel scale.
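    The posterior-mean step of such an upscaling can be sketched as a linear-Gaussian (Bayesian ridge) regression of site LAI on a high-resolution predictor, whose predictions are then averaged over the coarse pixel. The use of NDVI as the predictor and the prior/noise variances below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def bayesian_linear_upscale(ndvi_sites, lai_sites, ndvi_in_pixel,
                            prior_var=10.0, noise_var=0.1):
    """Fit LAI ~ a + b*NDVI with a zero-mean Gaussian prior on (a, b),
    then average the posterior-mean predictions over the high-resolution
    cells inside the coarse pixel to get a pixel-scale LAI estimate."""
    X = np.column_stack([np.ones_like(ndvi_sites), ndvi_sites])
    # Posterior covariance: inverse of (X'X / s_noise^2 + I / s_prior^2)
    S = np.linalg.inv(X.T @ X / noise_var + np.eye(2) / prior_var)
    w = S @ X.T @ lai_sites / noise_var           # posterior-mean weights
    Xp = np.column_stack([np.ones_like(ndvi_in_pixel), ndvi_in_pixel])
    return float(np.mean(Xp @ w))                 # pixel-scale LAI estimate
```

    Unlike plain OLS, the prior keeps the fit stable when only a few site measurements are available, which is the regime the abstract describes.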

  2. Generalized pixel profiling and comparative segmentation with application to arteriovenous malformation segmentation.

    PubMed

    Babin, D; Pižurica, A; Bellens, R; De Bock, J; Shang, Y; Goossens, B; Vansteenkiste, E; Philips, W

    2012-07-01

    Extraction of structural and geometric information from 3-D images of blood vessels is a well-known and widely addressed segmentation problem. The segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, particularly in diagnostics and surgery on arteriovenous malformations (AVM). However, techniques addressing the segmentation of the AVM inner structure are rare. In this work we present a novel method of pixel profiling with application to the segmentation of 3-D angiography AVM images. Our algorithm performs well on low-resolution images with high variability of pixel intensity. Another advantage of our method is that its parameters are set automatically, requiring little manual intervention. The results on phantoms and real data demonstrate its effectiveness and potential for fine delineation of the AVM structure. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. SMV⊥: Simplex of maximal volume based upon the Gram-Schmidt process

    NASA Astrophysics Data System (ADS)

    Salazar-Vazquez, Jairo; Mendez-Vazquez, Andres

    2015-10-01

    In recent years, different algorithms for Hyperspectral Image (HI) analysis have been introduced. The high spectral resolution of these images allows the development of algorithms for target detection, material mapping, and material identification, with applications in agriculture, security and defense, industry, etc. Therefore, from the computer science point of view, there is a fertile field of research for improving and developing algorithms in HI analysis. In some applications, the spectral pixels of a HI can be classified using laboratory spectral signatures. For many others, however, not enough prior information or spectral signatures are available, making any analysis a difficult task. One of the most popular algorithms for HI analysis is the N-FINDR, because it is easy to understand and provides a way to unmix the original HI into the respective material compositions. However, the N-FINDR is computationally expensive and its performance depends on a random initialization process. This paper proposes a novel way to reduce the complexity of the N-FINDR by implementing a bottom-up approach based on an observation from linear algebra and the use of the Gram-Schmidt process. The resulting Simplex of Maximal Volume Perpendicular (SMV⊥) algorithm is proposed for fast endmember extraction in hyperspectral imagery. This novel algorithm has complexity O(n) with respect to the number of pixels. In addition, the evidence shows that SMV⊥ finds a larger volume, and has lower computational time complexity than other popular algorithms, on both synthetic and real scenarios.
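    The bottom-up idea can be illustrated with a greedy, Gram-Schmidt-based endmember search that is linear in the number of pixels: repeatedly pick the pixel with the largest component perpendicular to the span of the endmembers already chosen. This is a sketch of the general principle (close in spirit to ATGP), not the paper's exact SMV⊥ algorithm:

```python
import numpy as np

def gram_schmidt_endmembers(pixels, k):
    """Greedy endmember extraction: at each step, select the pixel whose
    residual (the part perpendicular to the span of already-chosen
    endmembers, computed via Gram-Schmidt) has the largest norm, then
    project that direction out of every pixel."""
    chosen = []
    residual = pixels.astype(float).copy()
    for _ in range(k):
        norms = np.linalg.norm(residual, axis=1)
        i = int(np.argmax(norms))
        chosen.append(i)
        q = residual[i] / norms[i]                # new orthonormal direction
        residual -= np.outer(residual @ q, q)     # remove it from all pixels
    return chosen
```

    Each pass touches every pixel once, so k endmembers cost O(n·k) pixel operations, avoiding N-FINDR's exhaustive volume comparisons and its random initialization.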

  4. Bayesian aerosol retrieval algorithm for MODIS AOD retrieval over land

    NASA Astrophysics Data System (ADS)

    Lipponen, Antti; Mielonen, Tero; Pitkänen, Mikko R. A.; Levy, Robert C.; Sawyer, Virginia R.; Romakkaniemi, Sami; Kolehmainen, Ville; Arola, Antti

    2018-03-01

    We have developed a Bayesian aerosol retrieval (BAR) algorithm for the retrieval of aerosol optical depth (AOD) over land from the Moderate Resolution Imaging Spectroradiometer (MODIS). In the BAR algorithm, we simultaneously retrieve all dark land pixels in a granule, utilize spatial correlation models for the unknown aerosol parameters, use a statistical prior model for the surface reflectance, and take into account the uncertainties due to fixed aerosol models. The retrieved parameters are total AOD at 0.55 µm, fine-mode fraction (FMF), and surface reflectances at four different wavelengths (0.47, 0.55, 0.64, and 2.1 µm). The accuracy of the new algorithm is evaluated by comparing the AOD retrievals to Aerosol Robotic Network (AERONET) AOD. The results show that the BAR significantly improves the accuracy of AOD retrievals over the operational Dark Target (DT) algorithm. A reduction of about 29 % in the AOD root mean square error and a decrease of about 80 % in the median bias of AOD were found globally when the BAR was used instead of the DT algorithm. Furthermore, the fraction of AOD retrievals inside the ±(0.05+15 %) expected error envelope increased from 55 % to 76 %. In addition to retrieving the values of AOD, FMF, and surface reflectance, the BAR also gives pixel-level posterior uncertainty estimates for the retrieved parameters. The BAR algorithm always results in physical, non-negative AOD values, and the average computation time for a single granule was less than a minute on a modern personal computer.
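    For intuition, the joint retrieval with a spatial-correlation prior and pixel-level posterior uncertainties can be sketched for a toy linear-Gaussian case (identity forward model, exponential spatial covariance); the prior and noise parameters below are illustrative assumptions, not the BAR algorithm's actual radiative-transfer formulation:

```python
import numpy as np

def bayesian_aod(y, coords, noise_std=0.05, prior_mean=0.15,
                 prior_std=0.2, corr_len=50.0):
    """Retrieve all pixels of a granule jointly under an exponential
    spatial-correlation prior; return the posterior mean and the
    per-pixel posterior standard deviation (uncertainty estimate)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    C = prior_std**2 * np.exp(-d / corr_len)      # spatial prior covariance
    R = noise_std**2 * np.eye(len(y))             # observation-noise covariance
    K = C @ np.linalg.inv(C + R)                  # Gaussian-conditioning gain
    mean = prior_mean + K @ (y - prior_mean)
    cov = C - K @ C
    std = np.sqrt(np.maximum(np.diag(cov), 0.0))  # guard tiny negatives
    return np.maximum(mean, 0.0), std             # AOD kept non-negative
```

    The spatial prior lets neighbouring pixels share information, which is why the posterior uncertainty at every pixel falls below the single-observation noise level.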

  5. High throughput on-chip analysis of high-energy charged particle tracks using lensfree imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Wei; Shabbir, Faizan; Gong, Chao

    2015-04-13

    We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processing units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.
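    Source-shifting pixel super-resolution rests on combining under-sampled frames taken at known sub-pixel shifts. A minimal shift-and-add sketch (the published reconstruction is more sophisticated, e.g. iterative and hologram-aware) looks like this:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Pixel super-resolution by shift-and-add: each low-resolution frame
    is a decimated view of the scene taken at a known sub-pixel (dy, dx)
    shift; its samples are placed on a `factor`-times finer grid and
    averaged wherever fine-grid bins receive multiple samples."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        iy = int(round(dy * factor))          # sub-pixel shift -> fine offset
        ix = int(round(dx * factor))
        acc[iy::factor, ix::factor][:h, :w] += frame
        cnt[iy::factor, ix::factor][:h, :w] += 1
    out = np.zeros_like(acc)
    np.divide(acc, cnt, out=out, where=cnt > 0)
    return out
```

    With shifts covering every sub-pixel phase, the fine grid is fully populated and the under-sampled frames recombine into the higher-resolution image.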

  6. Clouds and the Earth's Radiant Energy System (CERES) Algorithm Theoretical Basis Document. Volume 3; Cloud Analyses and Determination of Improved Top of Atmosphere Fluxes (Subsystem 4)

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 3 details the advanced CERES methods for performing scene identification and inverting each CERES scanner radiance to a top-of-the-atmosphere (TOA) flux. CERES determines cloud fraction, height, phase, effective particle size, layering, and thickness from high-resolution, multispectral imager data. CERES derives cloud properties for each pixel of the Tropical Rainfall Measuring Mission (TRMM) visible and infrared scanner and the Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer. Cloud properties for each imager pixel are convolved with the CERES footprint point spread function to produce average cloud properties for each CERES scanner radiance. The mean cloud properties are used to determine an angular distribution model (ADM) to convert each CERES radiance to a TOA flux. The TOA fluxes are used in simple parameterizations to derive surface radiative fluxes. This state-of-the-art cloud-radiation product will be used to substantially improve our understanding of the complex relationship between clouds and the radiation budget of the Earth-atmosphere system.
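    The convolution step can be sketched as a PSF-weighted average of imager-pixel cloud properties over the footprint; `psf` here is a hypothetical callable standing in for the actual CERES point spread function:

```python
import numpy as np

def footprint_average(values, px, py, psf):
    """Convolve per-imager-pixel cloud properties into one footprint
    value: a point-spread-function-weighted average over the imager
    pixels at locations (px, py)."""
    w = psf(px, py)
    return float(np.sum(w * values) / np.sum(w))
```

    For example, with a Gaussian stand-in PSF, a uniform cloud fraction averages to itself regardless of the weighting, while non-uniform fields are weighted toward the footprint centre.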

  7. Alerts of forest disturbance from MODIS imagery

    NASA Astrophysics Data System (ADS)

    Hammer, Dan; Kraft, Robin; Wheeler, David

    2014-12-01

    This paper reports the methodology and computational strategy for a forest cover disturbance alerting system. Analytical techniques from time series econometrics are applied to imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor to detect temporal instability in vegetation indices. The characteristics from each MODIS pixel's spectral history are extracted and compared against historical data on forest cover loss to develop a geographically localized classification rule that can be applied across the humid tropical biome. The final output is a probability of forest disturbance for each 500 m pixel that is updated every 16 days. The primary objective is to provide high-confidence alerts of forest disturbance, while minimizing false positives. We find that the alerts serve this purpose exceedingly well in Pará, Brazil, with high probability alerts garnering a user accuracy of 98 percent over the training period and 93 percent after the training period (2000-2005) when compared against the PRODES deforestation data set, which is used to assess spatial accuracy. Implemented in Clojure and Java on the Hadoop distributed data processing platform, the algorithm is a fast, automated, and open source system for detecting forest disturbance. It is intended to be used in conjunction with higher-resolution imagery and data products that cannot be updated as quickly as MODIS-based data products. By highlighting hotspots of change, the algorithm and associated output can focus high-resolution data acquisition and aid local forest conservation enforcement.
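    A minimal per-pixel temporal-instability test in the same spirit (learn a seasonal baseline from a training period, flag large negative deviations afterwards) might look like the following; the period length, training window, and z-score rule are illustrative assumptions, not the paper's econometric method:

```python
import numpy as np

def disturbance_alerts(ndvi, period=23, train_cycles=3, z_thresh=3.0):
    """Learn a mean seasonal NDVI profile from the first `train_cycles`
    annual cycles (23 composites/year at 16-day cadence), then flag
    later composites whose deviation from the expected seasonal value
    exceeds z_thresh sigmas in the negative (vegetation-loss) direction."""
    ndvi = np.asarray(ndvi, dtype=float)
    train = ndvi[:period * train_cycles].reshape(train_cycles, period)
    seasonal = train.mean(axis=0)
    sigma = max(float(train.std(axis=0).mean()), 1e-3)  # floor avoids /0
    test = ndvi[period * train_cycles:]
    expected = np.tile(seasonal, len(test) // period + 1)[:len(test)]
    return (test - expected) / sigma < -z_thresh        # True = alert fires
```

    Thresholding on a one-sided z-score keeps the alert rule conservative, mirroring the paper's goal of high-confidence alerts with few false positives.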

  8. Toward global crop type mapping using a hybrid machine learning approach and multi-sensor imagery

    NASA Astrophysics Data System (ADS)

    Wang, S.; Le Bras, S.; Azzari, G.; Lobell, D. B.

    2017-12-01

    Current global scale datasets on agricultural land use do not have sufficient spatial or temporal resolution to meet the needs of many applications. The recent rapid increase in public availability of fine- to moderate-resolution satellite imagery from Landsat OLI and Copernicus Sentinel-2 provides a unique opportunity to improve agricultural land use datasets. This project leverages these new satellite data streams, existing census data, and a novel training approach to develop global, annual maps that indicate the presence of (i) cropland and (ii) specific crops at a 20m resolution. Our machine learning methodology consists of two steps. The first is a supervised classifier trained with explicitly labelled data to distinguish between crop and non-crop pixels, creating a binary mask. For ground truth, we use labels collected by previous mapping efforts (e.g. IIASA's crowdsourced data (Fritz et al. 2015) and AFSIS's geosurvey data) in combination with new data collected manually. The crop pixels output by the binary mask are input to the second step: a semi-supervised clustering algorithm to resolve different crop types and generate a crop type map. We do not use field-level information on crop type to train the algorithm, making this approach scalable spatially and temporally. We instead incorporate size constraints on clusters based on aggregated agricultural land use statistics and other, more generalizable domain knowledge. We employ field-level data from the U.S., Southern Europe, and Eastern Africa to validate crop-to-cluster assignments.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreyev, A.

    Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. 
Conclusions: The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.

  10. Mapping paddy rice planting area in cold temperate climate region through analysis of time series Landsat 8 (OLI), Landsat 7 (ETM+) and MODIS imagery

    NASA Astrophysics Data System (ADS)

    Qin, Yuanwei; Xiao, Xiangming; Dong, Jinwei; Zhou, Yuting; Zhu, Zhe; Zhang, Geli; Du, Guoming; Jin, Cui; Kou, Weili; Wang, Jie; Li, Xiangping

    2015-07-01

    Accurate and timely rice paddy field maps with a fine spatial resolution would greatly improve our understanding of the effects of paddy rice agriculture on greenhouse gases emissions, food and water security, and human health. Rice paddy field maps were developed using optical images with high temporal resolution and coarse spatial resolution (e.g., Moderate Resolution Imaging Spectroradiometer (MODIS)) or low temporal resolution and high spatial resolution (e.g., Landsat TM/ETM+). In the past, the accuracy and efficiency for rice paddy field mapping at fine spatial resolutions were limited by the poor data availability and image-based algorithms. In this paper, time series MODIS and Landsat ETM+/OLI images, and the pixel- and phenology-based algorithm are used to map paddy rice planting area. The unique physical features of rice paddy fields during the flooding/open-canopy period are captured with the dynamics of vegetation indices, which are then used to identify rice paddy fields. The algorithm is tested in the Sanjiang Plain (path/row 114/27) in China in 2013. The overall accuracy of the resulting map of paddy rice planting area generated by both Landsat ETM+ and OLI is 97.3%, when evaluated with areas of interest (AOIs) derived from geo-referenced field photos. The paddy rice planting area map also agrees reasonably well with the official statistics at the level of state farms (R2 = 0.94). These results demonstrate that the combination of fine spatial resolution images and the phenology-based algorithm can provide a simple, robust, and automated approach to map the distribution of paddy rice agriculture in a year.
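    The flooding signal at the heart of phenology-based rice mapping is commonly expressed as LSWI + 0.05 ≥ EVI during the transplanting window, followed by canopy closure. A simplified sketch of that two-part test (the thresholds and window length are illustrative):

```python
import numpy as np

def flooding_signal(evi, lswi, threshold=0.05):
    """Flooding/transplanting test: standing water raises LSWI relative
    to EVI, so a composite is flagged when LSWI + threshold >= EVI."""
    return np.asarray(lswi, float) + threshold >= np.asarray(evi, float)

def is_paddy_rice(evi_series, lswi_series, window=5):
    """Label a pixel as paddy rice if a flooding signal is followed by
    canopy growth (EVI reaching half its seasonal maximum within
    `window` composites), loosely following the published logic."""
    flooded = flooding_signal(evi_series, lswi_series)
    half_max = 0.5 * np.max(evi_series)
    for i in np.flatnonzero(flooded):
        if np.any(evi_series[i:i + window + 1] >= half_max):
            return True
    return False
```

    The follow-up canopy-growth condition is what separates transplanted rice from other transiently wet surfaces such as natural wetlands.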

  11. Mapping paddy rice planting area in cold temperate climate region through analysis of time series Landsat 8 (OLI), Landsat 7 (ETM+) and MODIS imagery.

    PubMed

    Qin, Yuanwei; Xiao, Xiangming; Dong, Jinwei; Zhou, Yuting; Zhu, Zhe; Zhang, Geli; Du, Guoming; Jin, Cui; Kou, Weili; Wang, Jie; Li, Xiangping

    2015-07-01

    Accurate and timely rice paddy field maps with a fine spatial resolution would greatly improve our understanding of the effects of paddy rice agriculture on greenhouse gases emissions, food and water security, and human health. Rice paddy field maps were developed using optical images with high temporal resolution and coarse spatial resolution (e.g., Moderate Resolution Imaging Spectroradiometer (MODIS)) or low temporal resolution and high spatial resolution (e.g., Landsat TM/ETM+). In the past, the accuracy and efficiency for rice paddy field mapping at fine spatial resolutions were limited by the poor data availability and image-based algorithms. In this paper, time series MODIS and Landsat ETM+/OLI images, and the pixel- and phenology-based algorithm are used to map paddy rice planting area. The unique physical features of rice paddy fields during the flooding/open-canopy period are captured with the dynamics of vegetation indices, which are then used to identify rice paddy fields. The algorithm is tested in the Sanjiang Plain (path/row 114/27) in China in 2013. The overall accuracy of the resulting map of paddy rice planting area generated by both Landsat ETM+ and OLI is 97.3%, when evaluated with areas of interest (AOIs) derived from geo-referenced field photos. The paddy rice planting area map also agrees reasonably well with the official statistics at the level of state farms (R2 = 0.94). These results demonstrate that the combination of fine spatial resolution images and the phenology-based algorithm can provide a simple, robust, and automated approach to map the distribution of paddy rice agriculture in a year.

  12. Microsphere-aided optical microscopy and its applications for super-resolution imaging

    NASA Astrophysics Data System (ADS)

    Upputuri, Paul Kumar; Pramanik, Manojit

    2017-12-01

    The spatial resolution of a standard optical microscope (SOM) is limited by diffraction. In the visible spectrum, a SOM can provide ∼200 nm resolution. To break the diffraction limit, several approaches have been developed, including scanning near-field microscopy, metamaterial super-lenses, nanoscale solid immersion lenses, super-oscillatory lenses, confocal fluorescence microscopy, and techniques that exploit the non-linear response of fluorophores, such as stimulated emission depletion microscopy and stochastic optical reconstruction microscopy. Recently, the photonic nanojet generated by a dielectric microsphere was used to break the diffraction limit. The microsphere approach is simple, cost-effective and can be implemented under a standard microscope; hence it has gained enormous attention for super-resolution imaging. In this article, we briefly review the microsphere approach and its applications for super-resolution imaging in various optical imaging modalities.

  13. Integrated Arrays on Silicon at Terahertz Frequencies

    NASA Technical Reports Server (NTRS)

    Chattopadhayay, Goutam; Lee, Choonsup; Jung, Cecil; Lin, Robert; Peralta, Alessandro; Mehdi, Imran; Llombert, Nuria; Thomas, Bertrand

    2011-01-01

    In this paper we explore various receiver front-end and antenna architectures for use in integrated arrays at terahertz frequencies. We report the development of a wafer-level integrated terahertz receiver front-end using advanced semiconductor fabrication technologies, together with novel integrated antennas built with silicon micromachining. A novel stacking of micromachined silicon wafers allows the 3-dimensional integration of various terahertz receiver components in extremely small packages, which leads naturally to 2-dimensional multi-pixel receiver front-ends in the terahertz frequency range. We also report an integrated micro-lens antenna that accompanies the silicon-micromachined front-end. The micro-lens antenna is fed by a waveguide that excites a silicon lens antenna through a leaky-wave or electromagnetic band gap (EBG) resonant cavity. We utilized advanced semiconductor nanofabrication techniques to design, fabricate, and demonstrate a super-compact, low-mass submillimeter-wave heterodyne front-end. When the micro-lens antenna is integrated with the receiver front-end, we will be able to assemble integrated heterodyne array receivers for applications such as multi-pixel high-resolution spectrometers and imaging radar at terahertz frequencies.

  14. Ultrafast photon counting applied to resonant scanning STED microscopy.

    PubMed

    Wu, Xundong; Toro, Ligia; Stefani, Enrico; Wu, Yong

    2015-01-01

    To take full advantage of fast resonant scanning in super-resolution stimulated emission depletion (STED) microscopy, we have developed an ultrafast photon counting system based on a multi-gigasample-per-second analogue-to-digital conversion chip that delivers an unprecedented 450 MHz pixel clock (2.2 ns pixel dwell time in each scan). The system achieves a large field of view (∼50 × 50 μm) with fast scanning that reduces photobleaching, and advances time-gated continuous-wave STED technology to resonant scanning with hardware-based time-gating. The assembled system provides a superb signal-to-noise ratio and highly linear quantification of light, resulting in superior image quality. The system design also allows great flexibility in processing photon signals to further improve the dynamic range. In conclusion, we have constructed a state-of-the-art photon counting image acquisition system with an ultrafast readout rate and excellent counting linearity, capable of realizing resonant-scanning continuous-wave STED microscopy with online time-gated detection. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
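    Conceptually, hardware time-gating amounts to binning photon arrivals into pixel-clock slots and keeping only photons that arrive after a gate delay within each slot. The toy sketch below assumes photon timestamps in nanoseconds; the gate value and framing are illustrative, not the actual hardware implementation:

```python
import numpy as np

def time_gated_counts(arrival_ns, pixel_clock_hz=450e6, gate_ns=1.0):
    """Bin photon timestamps into pixel-clock slots (~2.2 ns at 450 MHz)
    and count only photons arriving at least gate_ns into each slot;
    early photons carry fluorescence not yet depleted by the STED beam."""
    dwell_ns = 1e9 / pixel_clock_hz
    pixel = (arrival_ns // dwell_ns).astype(int)    # which pixel slot
    phase = arrival_ns % dwell_ns                   # time within the slot
    kept = pixel[phase >= gate_ns]
    n_pixels = int(arrival_ns.max() // dwell_ns) + 1
    return np.bincount(kept, minlength=n_pixels)
```

    Doing this rejection in hardware, per pixel clock, is what lets the gate keep up with a 450 MHz pixel rate.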

  15. VizieR Online Data Catalog: New planetary nebulae in LMC (Reid+, 2006)

    NASA Astrophysics Data System (ADS)

    Reid, W. A.; Parker, Q. A.

    2006-05-01

    Over the last few years, we have specially constructed additional deep, homogeneous, narrow-band Hα and matching broad-band 'SR' (Short Red) maps of the entire central 25deg2 of the LMC. These unique maps were obtained by co-adding 12 well-matched UKST 2-h Hα exposures and six 15-min equivalent SR-band exposures of the same field taken on high-resolution Tech-Pan film. The 'SuperCOSMOS' plate-measuring machine at the Royal Observatory Edinburgh (Hambly et al., 2001MNRAS.326.1279) has scanned, co-added and pixel-matched these exposures, creating 10-µm (0.67-arcsec) pixel data which go 1.35 and 1 mag deeper than the individual exposures, achieving the full canonical Poissonian depth gain, e.g. Bland-Hawthorn, Shopbell & Malin (1993AJ....106.2154B). This gives a depth of ~21.5 for the SR images and R-equivalent ~22 for Hα (4.5x10-17erg/cm2/s/{AA}), which is at least 1 mag deeper than the best wide-field narrow-band LMC images currently available. (2 data files).
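    The quoted depth gains follow directly from the canonical Poissonian (sqrt-N) improvement when co-adding N matched exposures, i.e. dm = 2.5 log10(sqrt(N)):

```python
import math

def coadd_depth_gain(n_exposures):
    """Magnitude gain from co-adding n matched exposures, assuming the
    full canonical Poissonian sqrt(n) signal-to-noise improvement."""
    return 2.5 * math.log10(math.sqrt(n_exposures))
```

    coadd_depth_gain(12) evaluates to about 1.35 mag for the twelve co-added Hα exposures and coadd_depth_gain(6) to about 0.97 mag for the six SR exposures, consistent with the "1.35 and 1 mag deeper" figures quoted above.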

  16. [Improvement of Digital Capsule Endoscopy System and Image Interpolation].

    PubMed

    Zhao, Shaopeng; Yan, Guozheng; Liu, Gang; Kuang, Shuai

    2016-01-01

    Traditional endoscopy capsules collect and transmit analog images, with weak anti-interference ability, a low frame rate and low resolution. This paper presents a new digital image capsule, which collects and transmits digital images at a frame rate of up to 30 frames/s and a pixel resolution of 400 × 400. The image is compressed inside the capsule and transmitted outside the capsule for decompression and interpolation. A new interpolation algorithm, based on the relationship between the image planes, is proposed to obtain higher-quality colour images. Keywords: capsule endoscopy; digital image; SCCB protocol; image interpolation.

  17. Smithsonian Astrophysical Observatory Ozone Mapping and Profiler Suite (SAO OMPS) formaldehyde retrieval

    NASA Astrophysics Data System (ADS)

    González Abad, Gonzalo; Vasilkov, Alexander; Seftor, Colin; Liu, Xiong; Chance, Kelly

    2016-07-01

    This paper presents our new formaldehyde (H2CO) retrievals, obtained from spectra recorded by the nadir instrument of the Ozone Mapping and Profiler Suite (OMPS) flown on board NASA's Suomi National Polar-orbiting Partnership (SUOMI-NPP) satellite. Our algorithm is similar to the one currently in place for the production of NASA's Ozone Monitoring Instrument (OMI) operational H2CO product. We are now able to produce a set of long-term data from two different instruments that share a similar concept and a similar retrieval approach. The ongoing overlap period between OMI and OMPS offers a perfect opportunity to study the consistency between the two data sets. The instruments' different spatial and spectral resolutions are a source of discrepancy in the retrievals despite the similar physical assumptions of the algorithms. We have concluded that the reduced spectral resolution of OMPS in comparison with OMI is not a significant obstacle to obtaining good-quality retrievals. Indeed, the improved signal-to-noise ratio of OMPS with respect to OMI helps to reduce the noise of the retrievals performed using OMPS spectra. However, the size of OMPS spatial pixels limits the capability to distinguish particular features of H2CO that are discernible with OMI. With root mean square (RMS) residuals ~5 × 10-4 for individual pixels, we estimate the detection limit to be about 7.5 × 1015 molecules cm-2. Total vertical column density (VCD) errors for individual pixels range from 40 % for pixels with high concentrations to 100 % or more for pixels with concentrations at or below the detection limit. We compare different OMI products (SAO OMI v3.0.2 and BIRA OMI v14) with our OMPS product using 1 year of data, between September 2012 and September 2013. The seasonality of the retrieved slant columns is captured similarly by all products, but there are discrepancies in the values of the VCDs. For eight selected regions, the mean biases are 23 % between OMI SAO and OMPS SAO and 28 % between OMI BIRA and OMPS SAO.

  18. Enhancing Analytical Separations Using Super-Resolution Microscopy

    NASA Astrophysics Data System (ADS)

    Moringo, Nicholas A.; Shen, Hao; Bishop, Logan D. C.; Wang, Wenxiao; Landes, Christy F.

    2018-04-01

    Super-resolution microscopy is becoming an invaluable tool to investigate structure and dynamics driving protein interactions at interfaces. In this review, we highlight the applications of super-resolution microscopy for quantifying the physics and chemistry that occur between target proteins and stationary-phase supports during chromatographic separations. Our discussion concentrates on the newfound ability of super-resolved single-protein spectroscopy to inform theoretical parameters via quantification of adsorption-desorption dynamics, protein unfolding, and nanoconfined transport.

  19. 3D imaging of optically cleared tissue using a simplified CLARITY method and on-chip microscopy

    PubMed Central

    Zhang, Yibo; Shin, Yoonjung; Sung, Kevin; Yang, Sam; Chen, Harrison; Wang, Hongda; Teng, Da; Rivenson, Yair; Kulkarni, Rajan P.; Ozcan, Aydogan

    2017-01-01

    High-throughput sectioning and optical imaging of tissue samples using traditional immunohistochemical techniques can be costly and inaccessible in resource-limited areas. We demonstrate three-dimensional (3D) imaging and phenotyping in optically transparent tissue using lens-free holographic on-chip microscopy as a low-cost, simple, and high-throughput alternative to conventional approaches. The tissue sample is passively cleared using a simplified CLARITY method and stained using 3,3′-diaminobenzidine to target cells of interest, enabling bright-field optical imaging and 3D sectioning of thick samples. The lens-free computational microscope uses pixel super-resolution and multi-height phase recovery algorithms to digitally refocus throughout the cleared tissue and obtain a 3D stack of complex-valued images of the sample, containing both phase and amplitude information. We optimized the tissue-clearing and imaging system by finding the optimal illumination wavelength, tissue thickness, sample preparation parameters, and the number of heights of the lens-free image acquisition and implemented a sparsity-based denoising algorithm to maximize the imaging volume and minimize the amount of the acquired data while also preserving the contrast-to-noise ratio of the reconstructed images. As a proof of concept, we achieved 3D imaging of neurons in a 200-μm-thick cleared mouse brain tissue over a wide field of view of 20.5 mm2. The lens-free microscope also achieved more than an order-of-magnitude reduction in raw data compared to a conventional scanning optical microscope imaging the same sample volume. Being low cost, simple, high-throughput, and data-efficient, we believe that this CLARITY-enabled computational tissue imaging technique could find numerous applications in biomedical diagnosis and research in low-resource settings. PMID:28819645

  20. Image resolution enhancement via image restoration using neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Shuangteng; Lu, Yihong

    2011-04-01

    Image super-resolution aims to obtain a high-quality image at a resolution higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into account point spread function blurring as well as additive noise, and therefore generates high-resolution images with more preserved or restored image detail. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and are visually more pleasing.
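    The inverse-problem formulation above can be illustrated with a small sketch. The snippet below is not the paper's Hopfield-network solver; it is a minimal 1D gradient-descent stand-in, assuming a known blur H (circular convolution with the PSF) and a decimation operator D, showing how minimizing ||y − DHx||² yields a high-resolution estimate. All function names are illustrative.

```python
import numpy as np

def make_otf(psf, n):
    """Transfer function of the blur H (circular convolution of length n)."""
    return np.fft.fft(psf, n)

def apply(x, otf):
    """H x: blur via multiplication in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(x) * otf))

def apply_adj(x, otf):
    """H^T x: adjoint blur (conjugate transfer function)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(otf)))

def super_resolve(y, psf, factor, n_iter=300, lr=0.5):
    """Minimise ||y - D H x||^2 over the high-resolution estimate x,
    where D keeps every `factor`-th sample (the detector sampling)."""
    n = len(y) * factor
    otf = make_otf(psf, n)
    x = np.zeros(n)
    x[::factor] = y                          # crude initial estimate
    for _ in range(n_iter):
        r = apply(x, otf)[::factor] - y      # residual  D H x - y
        up = np.zeros(n)
        up[::factor] = r                     # D^T r
        x -= lr * apply_adj(up, otf)         # gradient step: H^T D^T r
    return x
```

    The paper additionally regularizes this data-fidelity term and minimizes it with a Hopfield network rather than explicit gradient steps; the sketch only shows the shared observation model.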

  1. Transiting Planet Search in the Kepler Pipeline

    NASA Technical Reports Server (NTRS)

    Jenkins, Jon M.; Chandrasekaran, Hema; McCauliff, Sean D.; Caldwell, Douglas A.; Tenenbaum, Peter; Li, Jie; Klaus, Todd C.; Cote, Miles T.; Middour, Christopher

    2010-01-01

    The Kepler Mission simultaneously measures the brightness of more than 160,000 stars every 29.4 minutes over a 3.5-year mission to search for transiting planets. Detecting transits is a signal-detection problem where the signal of interest is a periodic pulse train and the predominant noise source is a non-white, non-stationary (1/f)-type process of stellar variability. Many stars also exhibit coherent or quasi-coherent oscillations. The detection algorithm first identifies and removes strong oscillations, then applies an adaptive, wavelet-based matched filter. We discuss how we obtain super-resolution detection statistics and the effectiveness of the algorithm for Kepler flight data.
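    The matched-filter idea above can be sketched in a few lines, assuming white noise for simplicity (the Kepler pipeline first whitens the 1/f stellar variability with wavelets, which this sketch omits). Function names and parameters are illustrative, not the pipeline's API.

```python
import numpy as np

def pulse_train(n, period, duration, depth=1.0):
    """Periodic box-shaped transit template: `depth` flux drop, with the
    given period and duration (all quantities in units of samples)."""
    s = np.zeros(n)
    for start in range(0, n - duration + 1, period):
        s[start:start + duration] = -depth
    return s

def matched_filter_stat(flux, template, sigma):
    """Detection statistic for a known template in white noise of std `sigma`:
    S = <d, s> / (sigma * ||s||); S ~ N(0, 1) when no signal is present."""
    return float(flux @ template) / (sigma * float(np.linalg.norm(template)))
```

    Sweeping the template's period, epoch, and duration and keeping the maximum statistic gives the single-event/multiple-event search; the statistic at the true period greatly exceeds that of mismatched periods.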

  2. A Knowledge Discovery Approach to Diagnosing Intracranial Hematomas on Brain CT: Recognition, Measurement and Classification

    NASA Astrophysics Data System (ADS)

    Liao, Chun-Chih; Xiao, Furen; Wong, Jau-Min; Chiang, I.-Jen

    Computed tomography (CT) of the brain is the preferred study in neurological emergencies. Physicians use CT to diagnose various types of intracranial hematomas, including epidural, subdural and intracerebral hematomas, according to their locations and shapes. We propose a novel method that can automatically diagnose intracranial hematomas by combining machine vision and knowledge discovery techniques. The skull on the CT slice is located and the depth of each intracranial pixel is labeled. After normalization of the pixel intensities by their depth, the hyperdense area of an intracranial hematoma is segmented with multi-resolution thresholding and region growing. We then apply the C4.5 algorithm to construct a decision tree using the features of the segmented hematoma and the diagnoses made by physicians. The algorithm was evaluated on 48 pathological images treated in a single institute. The two discovered rules closely resemble those used by human experts, and are able to make correct diagnoses in all cases.
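    The region-growing step can be illustrated with a minimal sketch (not the authors' implementation): starting from a seed pixel, 4-connected neighbours whose intensity exceeds a hyperdensity threshold are added to the segmented region. Names and threshold values are illustrative.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, threshold):
    """Grow a region from `seed`: 4-connected neighbours whose intensity is
    at or above `threshold` (e.g. hyperdense blood on CT) join the region."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and img[nr, nc] >= threshold:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

    In the paper the threshold is chosen per resolution level after depth normalization; here it is a fixed constant for illustration.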

  3. Research on compressive sensing reconstruction algorithm based on total variation model

    NASA Astrophysics Data System (ADS)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the limit set by the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures based on compressed sensing theory, not only is the storage space reduced, but the demand on detector resolution is also greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressed sensing and largely determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and better preserves edge information. To verify the performance and stability of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes, and we compare typical reconstruction algorithms under the same coding mode. Building on the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with the traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages: it recovers the target image quickly and accurately even at low measurement rates.
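    As an illustration of TV-regularized compressive reconstruction, the sketch below minimizes a data-fidelity term plus a smoothed total-variation penalty by plain gradient descent on a 1D signal. It is not the augmented-Lagrangian/alternating-direction solver described above, just a minimal stand-in showing how the TV prior recovers a piecewise-constant signal from a small number of random measurements. All parameter values are illustrative.

```python
import numpy as np

def tv_smooth_grad(x, lam, eps):
    """Gradient of the smoothed total variation lam * sum sqrt((Dx)^2 + eps)."""
    d = np.diff(x)                       # forward differences D x
    w = d / np.sqrt(d * d + eps)         # derivative of each smoothed |d|
    g = np.zeros_like(x)
    g[:-1] -= w                          # accumulate D^T w
    g[1:] += w
    return lam * g

def recover_tv(A, y, lam=0.02, eps=0.01, lr=0.05, n_iter=5000):
    """Minimise 0.5*||Ax - y||^2 + lam*TV_eps(x) by plain gradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= lr * (A.T @ (A @ x - y) + tv_smooth_grad(x, lam, eps))
    return x
```

    The alternating direction method replaces these slow gradient steps with exact alternating subproblem solves, which is why it converges much faster in practice; the prior being minimized is the same.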

  4. Characterization of Pixelated Cadmium-Zinc-Telluride Detectors for Astrophysical Applications

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Sharma, Dharma; Ramsey, Brian; Seller, Paul

    2003-01-01

    Comparisons of charge sharing and charge loss measurements between two pixelated Cadmium-Zinc-Telluride (CdZnTe) detectors are discussed. These properties, along with the detector geometry, help to define the limiting energy resolution and spatial resolution of the detector in question. The first detector consists of a 1-mm-thick piece of CdZnTe sputtered with a 4x4 array of pixels with a pixel pitch of 750 microns (inter-pixel gap is 100 microns). Signal readout is via discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap is 50 microns). This crystal is bonded to a custom-built readout chip (ASIC) providing all front-end electronics to each of the 256 independent pixels. These detectors act as precursors to the detector that will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs a spatial resolution of around 200 microns in order to take full advantage of the HERO angular resolution. We discuss the degree to which charge sharing degrades energy resolution while improving spatial resolution through position interpolation.
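    Position interpolation from shared charge, mentioned in the last sentence, is commonly done with a charge-weighted centroid over the pixel neighbourhood that collected the event. The sketch below is a generic illustration of that idea, not the flight readout's method; the numbers in the usage are illustrative.

```python
import numpy as np

def event_centroid(charges, pitch):
    """Sub-pixel event position from charge shared across a pixel
    neighbourhood: the charge-weighted centroid, returned in the same
    units as `pitch` (e.g. microns)."""
    rows, cols = np.indices(charges.shape)
    total = charges.sum()
    return (float((charges * rows).sum() / total) * pitch,
            float((charges * cols).sum() / total) * pitch)
```

    For a 300-micron-pitch detector, an event depositing most of its charge in the central pixel with some spill into neighbours interpolates to a position between pixel centres, which is how charge sharing can improve spatial resolution even as it degrades spectroscopy.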

  5. NASA/GEWEX Surface Radiation Budget: Integrated Data Product With Reprocessed Radiance, Cloud, and Meteorology Inputs, and New Surface Albedo Treatment

    NASA Technical Reports Server (NTRS)

    Cox, Stephen J.; Stackhouse, Paul W., Jr.; Gupta, Shashi K.; Mikovitz, J. Colleen; Zhang, Taiping

    2016-01-01

    The NASA/GEWEX Surface Radiation Budget (SRB) project produces shortwave and longwave surface and top of atmosphere radiative fluxes for the 1983-near present time period. Spatial resolution is 1 degree. The current release 3.0 (available at gewex-srb.larc.nasa.gov) uses the International Satellite Cloud Climatology Project (ISCCP) DX product for pixel level radiance and cloud information. This product is subsampled to 30 km. ISCCP is currently recalibrating and recomputing their entire data series, to be released as the H product, at 10 km resolution. The ninefold increase in pixel number will allow SRB a higher resolution gridded product (e.g. 0.5 degree), as well as the production of pixel-level fluxes. In addition to the input data improvements, several important algorithm improvements have been made. Most notable has been the adaptation of Angular Distribution Models (ADMs) from CERES to improve the initial calculation of shortwave TOA fluxes, from which the surface flux calculations follow. Other key input improvements include a detailed aerosol history using the Max Planck Institut Aerosol Climatology (MAC), temperature and moisture profiles from HIRS, and new topography, surface type, and snow/ice. Here we present results for the improved GEWEX Shortwave and Longwave algorithm (GSW and GLW) with new ISCCP data, the various other improved input data sets and the incorporation of many additional internal SRB model improvements. As of the time of abstract submission, results from 2007 have been produced, with ISCCP H availability the limiting factor. More SRB data will be produced as ISCCP reprocessing continues. The SRB data produced will be released as part of the Release 4.0 Integrated Product, recognizing the interdependence of the radiative fluxes with other GEWEX products providing estimates of the Earth's global water and energy cycle (i.e., ISCCP, SeaFlux, LandFlux, NVAP, etc.).

  6. Computational microscopy: illumination coding and nonlinear optimization enables gigapixel 3D phase imaging

    NASA Astrophysics Data System (ADS)

    Tian, Lei; Waller, Laura

    2017-05-01

    Microscope lenses can have either a large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a Gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a second-order quasi-Newton method, combined with a novel phase initialization scheme. To further extend the Gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scatterings. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both the geometric-optics and first-order phase effects. The result is robust reconstructions of Gigapixel 3D phase images having both wide FOV and super resolution in all three dimensions. Experimental results are demonstrated using an LED array microscope.

  7. Restoration of motion blurred image with Lucy-Richardson algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jing; Liu, Zhao Hui; Zhou, Liang

    2015-10-01

    Images will be blurred by relative motion between the camera and the object of interest. In this paper, we analyze the formation of motion-blurred images and demonstrate a restoration method based on the Lucy-Richardson algorithm. The blur extent and angle can be estimated by the Radon transform and the auto-correlation function, respectively, from which the point spread function (PSF) of the motion-blurred image is obtained. With the help of the obtained PSF, the Lucy-Richardson restoration algorithm is then applied in an experimental analysis of motion-blurred images with different blur extents, spatial resolutions and signal-to-noise ratios (SNRs). Its effectiveness is also evaluated by the structural similarity (SSIM) index. Further studies show that, first, for an image with a spatial frequency of 0.2 per pixel, the modulation transfer function (MTF) of the restored images remains above 0.7 when the blur extent is no larger than 13 pixels; that is, the method compensates the low-frequency information of the image while attenuating the high-frequency information. Second, we found that the method is more effective when the product of the blur extent and the spatial frequency is smaller than 3.75. Finally, by calculating the MTF of the restored image, the Lucy-Richardson algorithm is found to be insensitive to Gaussian noise whose variance is no greater than 0.1.
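    The Lucy-Richardson update itself is compact enough to sketch. The version below is a generic 1D implementation with circular boundaries, not the paper's code: the multiplicative update est ← est · ((b / (est ⊛ p)) ⋆ p) preserves non-negativity and, for a PSF that sums to one, conserves total flux.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=200):
    """Lucy-Richardson deconvolution (1D, circular boundaries).
    `blurred` and `psf` must be non-negative; the PSF should sum to one."""
    otf = np.fft.fft(psf, len(blurred))
    conv = lambda u, tf: np.real(np.fft.ifft(np.fft.fft(u) * tf))
    est = np.full_like(blurred, blurred.mean())   # flat initial estimate
    for _ in range(n_iter):
        ratio = blurred / np.maximum(conv(est, otf), 1e-12)
        est = est * conv(ratio, np.conj(otf))     # correlate with the PSF
    return est
```

    For motion blur, the PSF is the estimated line segment (extent and angle) rather than the Gaussian used in this toy example.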

  8. MKID digital readout tuning with deep learning

    NASA Astrophysics Data System (ADS)

    Dodkins, R.; Mahashabde, S.; O'Brien, K.; Thatte, N.; Fruitwala, N.; Walter, A. B.; Meeker, S. R.; Szypryt, P.; Mazin, B. A.

    2018-04-01

    Microwave Kinetic Inductance Detector (MKID) devices offer inherent spectral resolution, simultaneous readout of thousands of pixels, and photon-limited sensitivity at optical wavelengths. Before taking observations, the readout power and frequency of each pixel must be individually tuned, and if the equilibrium state of the pixels changes, then the readout must be retuned. This process has previously been performed through manual inspection, and typically takes one hour per 500 resonators (20 h for a ten-kilo-pixel array). We present an algorithm based on a deep convolutional neural network (CNN) architecture to determine the optimal bias power for each resonator. The bias point classifications from this CNN model, and those from alternative automated methods, are compared to those from human decisions, and the accuracy of each method is assessed. On a test feed-line dataset, the CNN achieves an accuracy of 90% within 1 dB of the designated optimal value, which is equivalent to the accuracy of a randomly selected human operator, and superior to the highest scoring alternative automated method by 10%. On a full ten-kilo-pixel array, the CNN performs the characterization in a matter of minutes - paving the way for future mega-pixel MKID arrays.

  9. Direct measurement and calibration of the Kepler CCD Pixel Response Function for improved photometry and astrometry

    NASA Astrophysics Data System (ADS)

    Ninkov, Zoran

    Stellar images taken with telescopes and detectors in space are usually undersampled, and to correct for this, an accurate pixel response function is required. The standard approach for HST and KEPLER has been to measure the telescope PSF combined ("convolved") with the actual pixel response function, super-sampled by taking into account dithered or offset observed images of many stars (Lauer [1999]). This combined response function has been called the "PRF" (Bryson et al. [2011]). However, using such results has not allowed astrometry from KEPLER to reach its full potential (Monet et al. [2010], [2014]). Given the precision of KEPLER photometry, it should be feasible to use a pre-determined detector pixel response function (PRF) and an optical point spread function (PSF) as separable quantities to more accurately correct photometry and astrometry for undersampling. Wavelength (i.e. stellar color) and instrumental temperature should affect each of these differently. Discussion of the PRF in the "KEPLER Instrument Handbook" is limited to an ad-hoc extension of earlier measurements on a quite different CCD. It is known that the KEPLER PSF typically has a sharp spike in the middle, and the main bulk of the PSF is still small enough to be undersampled, so that any substructure in the pixel may interact significantly with the optical PSF. Both the PSF and PRF are probably asymmetric. We propose to measure the PRF for an example of the CCD sensors used on KEPLER at sufficient sampling resolution to allow significant improvement of KEPLER photometry and astrometry, in particular allowing PSF fitting techniques to be used on the data archive.

  10. Automatic SAR/optical cross-matching for GCP monograph generation

    NASA Astrophysics Data System (ADS)

    Nutricato, Raffaele; Morea, Alberto; Nitti, Davide Oscar; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio; Chiaradia, Maria Teresa

    2016-10-01

    Ground Control Points (GCP), automatically extracted from Synthetic Aperture Radar (SAR) images through 3D stereo analysis, can be effectively exploited for an automatic orthorectification of optical imagery if they can be robustly located in the basic optical images. The present study outlines a SAR/Optical cross-matching procedure that allows a robust alignment of radar and optical images, and consequently the automatic derivation of the corresponding sub-pixel position of the GCPs in the input optical image, expressed as fractional pixel/line image coordinates. The cross-matching is performed in two successive steps, in order to gradually achieve better precision. The first step is based on the Mutual Information (MI) maximization between optical and SAR chips, while the second uses the Normalized Cross-Correlation as the similarity metric. This work outlines the designed algorithmic solution and discusses the results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight stereo images with different beams and passes are available. The experimental analysis involves different satellite images, in order to evaluate the performance of the algorithm w.r.t. the optical spatial resolution. An assessment of the performance of the algorithm has been carried out, and errors are computed by measuring the distance between the GCP pixel/line position in the optical image, automatically estimated by the tool, and the "true" position of the GCP, visually identified by an expert user in the optical images.
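    The second matching step, normalized cross-correlation, can be sketched generically as an exhaustive sub-window search (this is an illustration, not the authors' tool). Zero-mean NCC is invariant to gain and offset differences between the reference chip and the search image, which is what makes it usable across modalities once the images are roughly aligned.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalised cross-correlation of two equal-size chips."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0.0 else float((a * b).sum() / denom)

def match_template(image, template):
    """Exhaustive sub-window search; returns ((row, col), best NCC score)."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

    Sub-pixel precision, as in the paper, is typically obtained by fitting a surface to the correlation peak and its neighbours; the sketch stops at integer positions.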

  11. Evaluation of computational endomicroscopy architectures for minimally-invasive optical biopsy

    NASA Astrophysics Data System (ADS)

    Dumas, John P.; Lodhi, Muhammad A.; Bajwa, Waheed U.; Pierce, Mark C.

    2017-02-01

    We are investigating compressive sensing architectures for applications in endomicroscopy, where the narrow diameter probes required for tissue access can limit the achievable spatial resolution. We hypothesize that the compressive sensing framework can be used to overcome the fundamental pixel number limitation in fiber-bundle based endomicroscopy by reconstructing images with more resolvable points than fibers in the bundle. An experimental test platform was assembled to evaluate and compare two candidate architectures, based on introducing a coded amplitude mask at either a conjugate image or Fourier plane within the optical system. The benchtop platform consists of a common illumination and object path followed by separate imaging arms for each compressive architecture. The imaging arms contain a digital micromirror device (DMD) as a reprogrammable mask, with a CCD camera for image acquisition. One arm has the DMD positioned at a conjugate image plane ("IP arm"), while the other arm has the DMD positioned at a Fourier plane ("FP arm"). Lenses were selected and positioned within each arm to achieve an element-to-pixel ratio of 16 (230,400 mask elements mapped onto 14,400 camera pixels). We discuss our mathematical model for each system arm and outline the importance of accounting for system non-idealities. Reconstruction of a 1951 USAF resolution target using optimization-based compressive sensing algorithms produced images with higher spatial resolution than bicubic interpolation for both system arms when system non-idealities are included in the model. Furthermore, images generated with image plane coding appear to exhibit higher spatial resolution, but more noise, than images acquired through Fourier plane coding.

  12. Signal Characteristics of Super-Resolution Near-Field Structure Disks with 100 GB Capacity

    NASA Astrophysics Data System (ADS)

    Kim, Jooho; Hwang, Inoh; Kim, Hyunki; Park, Insik; Tominaga, Junji

    2005-05-01

    We report the basic characteristics of super-resolution near-field structure (Super-RENS) media in a blue-laser optical system (laser wavelength 405 nm, numerical aperture 0.85). Using a novel write once read many (WORM) structure for a blue laser system, we obtained a carrier-to-noise ratio (CNR) above 33 dB from the signal of the 37.5 nm mark length, which is equivalent to a 100 GB capacity with a 0.32 micrometer track pitch, and an eye pattern for 50 GB (2T: 75 nm) capacity using a patterned signal. Using a novel super-resolution material (tellurium, Te) with low super-resolution readout power, we also improved the read stability.

  13. Dynamics of Kilauea's Magmatic System Imaged Using a Joint Analysis of Geodetic and Seismic Data

    NASA Astrophysics Data System (ADS)

    Wauthier, C.; Roman, D. C.; Poland, M. P.; Fukushima, Y.; Hooper, A. J.

    2012-12-01

    Nowadays, Interferometric Synthetic Aperture Radar (InSAR) is commonly used to study a wide range of active volcanic areas. InSAR provides high-spatial-resolution measurements of surface deformation with centimeter-scale accuracy. At Kilauea Volcano, Hawai'i, InSAR shows complex processes that are not well constrained by GPS data (which have relatively poor spatial resolution). However, GPS data have higher temporal resolution than InSAR data. Both datasets are thus complementary. To overcome some of the limitations of conventional InSAR, which are mainly induced by temporal decorrelation, topographic, orbital and atmospheric delays, a Multi-Temporal InSAR (MT-InSAR) approach can be used. MT-InSAR techniques involve the processing of multiple SAR acquisitions over the same area. Two classes of MT-InSAR algorithms are defined: the persistent scatterers (PS) and small baseline (SBAS) methods. Each method is designed for a specific type of scattering mechanism. A PS pixel is a pixel in which a single scatterer dominates, while the contributions from other scatterers are negligible. A SBAS pixel is a pixel that includes distributed scatterers, which have a phase with little decorrelation over short time periods. Here, we apply the "StaMPS" ("Stanford Method for Permanent Scatterers") technique, which incorporates both a PS and SBAS approach, on ENVISAT and ALOS datasets acquired from 2003 to 2010 at Kilauea. In particular, we focus our InSAR analysis on the time period before the June 2007 "Father's Day" dike intrusion and eruption, and also incorporate seismic and GPS data in our models. Our goal is to identify any precursors to the Father's Day event within Kilauea's summit magma system, east rift zone, and/or southwest rift zone.

  14. Dances with Membranes: Breakthroughs from Super-resolution Imaging

    PubMed Central

    Curthoys, Nikki M.; Parent, Matthew; Mlodzianoski, Michael; Nelson, Andrew J.; Lilieholm, Jennifer; Butler, Michael B.; Valles, Matthew; Hess, Samuel T.

    2017-01-01

    Biological membrane organization mediates numerous cellular functions and has also been connected with an immense number of human diseases. However, until recently, experimental methodologies have been unable to directly visualize the nanoscale details of biological membranes, particularly in intact living cells. Numerous models explaining membrane organization have been proposed, but testing those models has required indirect methods; the desire to directly image proteins and lipids in living cell membranes is a strong motivation for the advancement of technology. The development of super-resolution microscopy has provided powerful tools for quantification of membrane organization at the level of individual proteins and lipids, and many of these tools are compatible with living cells. Previously inaccessible questions are now being addressed, and the field of membrane biology is developing rapidly. This chapter discusses how the development of super-resolution microscopy has led to fundamental advances in the field of biological membrane organization. We summarize the history and some models explaining how proteins are organized in cell membranes, and give an overview of various super-resolution techniques and methods of quantifying super-resolution data. We discuss the application of super-resolution techniques to membrane biology in general, and also with specific reference to the fields of actin and actin-binding proteins, virus infection, mitochondria, immune cell biology, and phosphoinositide signaling. Finally, we present our hopes and expectations for the future of super-resolution microscopy in the field of membrane biology. PMID:26015281

  15. Oblique reconstructions in tomosynthesis. II. Super-resolution

    PubMed Central

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2013-01-01

    Purpose: In tomosynthesis, super-resolution has been demonstrated using reconstruction planes parallel to the detector. Super-resolution allows for subpixel resolution relative to the detector. The purpose of this work is to develop an analytical model that generalizes super-resolution to oblique reconstruction planes. Methods: In a digital tomosynthesis system, a sinusoidal test object is modeled along oblique angles (i.e., “pitches”) relative to the plane of the detector in a 3D divergent-beam acquisition geometry. To investigate the potential for super-resolution, the input frequency is specified to be greater than the alias frequency of the detector. Reconstructions are evaluated in an oblique plane along the extent of the object using simple backprojection (SBP) and filtered backprojection (FBP). By comparing the amplitude of the reconstruction against the attenuation coefficient of the object at various frequencies, the modulation transfer function (MTF) is calculated to determine whether modulation is within detectable limits for super-resolution. For experimental validation of super-resolution, a goniometry stand was used to orient a bar pattern phantom along various pitches relative to the breast support in a commercial digital breast tomosynthesis system. Results: Using theoretical modeling, it is shown that a single projection image cannot resolve a sine input whose frequency exceeds the detector alias frequency. The high frequency input is correctly visualized in SBP or FBP reconstruction using a slice along the pitch of the object. The Fourier transform of this reconstructed slice is maximized at the input frequency as proof that the object is resolved. Consistent with the theoretical results, experimental images of a bar pattern phantom showed super-resolution in oblique reconstructions. At various pitches, the highest frequency with detectable modulation was determined by visual inspection of the bar patterns. 
The dependency of the highest detectable frequency on pitch followed the same trend as the analytical model. It was demonstrated that super-resolution is not achievable if the pitch of the object approaches 90°, corresponding to the case in which the test frequency is perpendicular to the breast support. Only low frequency objects are detectable at pitches close to 90°. Conclusions: This work provides a platform for investigating super-resolution in oblique reconstructions for tomosynthesis. In breast imaging, this study should have applications in visualizing microcalcifications and other subtle signs of cancer. PMID:24320445
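    The core super-resolution idea above, that a frequency exceeding the detector's alias (Nyquist) frequency becomes resolvable once samples from shifted projections are combined, can be sketched numerically. The snippet below is a 1D illustration under idealized assumptions (two noiseless samplings offset by half a pixel, test frequency chosen to fall exactly on an FFT bin), not the paper's divergent-beam acquisition model.

```python
import numpy as np

def dominant_freq(samples, rate):
    """Frequency (cycles per pixel) of the strongest non-DC FFT component,
    given the sampling rate in samples per pixel."""
    spec = np.abs(np.fft.rfft(samples))
    spec[0] = 0.0                      # ignore the DC term
    return spec.argmax() * rate / len(samples)
```

    A sine at 0.703 cycles/pixel, sampled once per pixel, masquerades as its alias at about 0.297 cycles/pixel; interleaving a second, half-pixel-shifted sampling doubles the effective rate and recovers the true frequency, which is the mechanism behind sub-pixel resolution in the reconstruction.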

  16. Real-time traffic sign detection and recognition

    NASA Astrophysics Data System (ADS)

    Herbschleb, Ernst; de With, Peter H. N.

    2009-01-01

    The continuous growth of imaging databases increasingly requires analysis tools for the extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images with a resolution up to 4,800×2,400 pixels. Because of the size of the database, a high reliability as well as a high throughput is required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is performance-critical for both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput for line-of-sight images of 800×600 pixels is 35 Hz, and for panorama images it is 4 Hz. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.

  17. Super-Resolution of Plant Disease Images for the Acceleration of Image-based Phenotyping and Vigor Diagnosis in Agriculture.

    PubMed

    Yamamoto, Kyosuke; Togami, Takashi; Yamaguchi, Norio

    2017-11-06

    Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture-in cooperation with image processing technologies-for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis.

  18. Super-Resolution of Plant Disease Images for the Acceleration of Image-based Phenotyping and Vigor Diagnosis in Agriculture

    PubMed Central

    Togami, Takashi; Yamaguchi, Norio

    2017-01-01

    Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture—in cooperation with image processing technologies—for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis. PMID:29113104

  19. Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays.

    PubMed

    Orżanowski, Tomasz

    2016-01-01

    This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm that is easy to implement in hardware. The proposed NUC algorithm is based on the linear correction scheme, with an efficient method for updating the pixel offset correction coefficients. The new approach compensates the temporal drift of the pixel offsets by using the change in pixel response, measured with a shutter, between the actual operating conditions and the reference conditions. Moreover, it also removes any optics shading effect from the output image. To show the efficiency of the proposed NUC algorithm, test results for a microbolometer IRFPA are presented.
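    The linear correction scheme with a shutter-based offset update can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation; all function and variable names are hypothetical.

```python
import numpy as np

def nuc_correct(raw, gain, offset):
    """Linear nonuniformity correction: per-pixel gain and offset."""
    return gain * (raw - offset)

def update_offsets(offset_ref, shutter_ref, shutter_now):
    """Shutter-based offset update: the change in each pixel's shutter
    response between reference and actual operating conditions is used
    to compensate the temporal drift of the offset coefficients."""
    return offset_ref + (shutter_now - shutter_ref)

# Example: a drifted offset is removed after a shutter update.
rng = np.random.default_rng(0)
gain = np.ones((4, 4))
offset_ref = rng.normal(0.0, 5.0, (4, 4))   # factory offset calibration
drift = rng.normal(0.0, 2.0, (4, 4))        # slow temporal offset drift
shutter_ref = offset_ref.copy()             # shutter frame at calibration
shutter_now = offset_ref + drift            # shutter frame now
offset = update_offsets(offset_ref, shutter_ref, shutter_now)
raw = 100.0 + offset_ref + drift            # uniform 100-count scene
corrected = nuc_correct(raw, gain, offset)  # flat again after correction
```

    Because the update only needs the current shutter frame and stored reference data, it maps naturally onto streaming hardware.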

  20. True Ortho Generation of Urban Area Using High Resolution Aerial Photos

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Stanley, David; Xin, Yubin

    2016-06-01

    The pros and cons of existing methods for true ortho generation are analyzed based on a critical literature review of its two major processing stages: visibility analysis and occlusion compensation. Frame and pushbroom images are processed using different algorithms for visibility analysis due to the need for perspective centers in the z-buffer (or similar) techniques. For occlusion compensation, the pixel-based approach is likely to produce excessive seamlines in the ortho-rectified images because it rates quality on a pixel-by-pixel basis. In this paper, we propose innovative solutions to tackle the aforementioned problems. For visibility analysis, an elevation buffer technique is introduced that employs plain elevations instead of the distances from perspective centers used by the z-buffer, and has the advantage of sensor independence. A segment-oriented strategy is developed to evaluate a plain cost measure per segment for occlusion compensation, instead of the tedious quality rating per pixel. The cost measure directly evaluates the imaging geometry characteristics in ground space and is also sensor independent. Experimental results are demonstrated using aerial photos acquired with an UltraCam camera.

  1. Pixel-based absolute surface metrology by three flat test with shifted and rotated maps

    NASA Astrophysics Data System (ADS)

    Zhai, Dede; Chen, Shanyong; Xue, Shuai; Yin, Ziqiang

    2018-03-01

    The traditional three-flat test only provides the absolute profile along one surface diameter. In this paper, an absolute testing algorithm based on shift-rotation of the three-flat test is proposed to reconstruct the two-dimensional surface exactly. Pitch and yaw errors during the shift procedure are analyzed and compensated in our method. Compared with the multi-rotation method proposed before, it only needs a 90° rotation and a shift, which is easy to carry out, especially for large surfaces. It achieves pixel-level spatial resolution without interpolation or assumptions about the test surface. In addition, numerical simulations and optical tests are implemented and show the high-accuracy recovery capability of the proposed method.

  2. Magnetic Resonance Super-resolution Imaging Measurement with Dictionary-optimized Sparse Learning

    NASA Astrophysics Data System (ADS)

    Li, Jun-Bao; Liu, Jing; Pan, Jeng-Shyang; Yao, Hongxun

    2017-06-01

    Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medical and material science, and especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing the radiation intensity, but high radiation intensity and long exposure to the magnetic field harm the human body. Thus, in practical applications, hardware imaging has reached its resolution limit. Software-based super-resolution technology is an effective way to improve image resolution. This work proposes a framework for dictionary-optimized, sparse-learning-based MR super-resolution. The framework solves the problem of sample selection for the dictionary learning of sparse reconstruction. A textural-complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that dictionary-optimized sparse learning improves the performance of sparse representation.

  3. Super-resolution mapping using multi-viewing CHRIS/PROBA data

    NASA Astrophysics Data System (ADS)

    Dwivedi, Manish; Kumar, Vinay

    2016-04-01

    High-spatial-resolution Remote Sensing (RS) data provide detailed information which ensures high-definition visual image analysis of earth surface features. These data sets also support improved information extraction capabilities at a fine scale. To improve the spatial resolution of coarser-resolution RS data, Super Resolution Reconstruction (SRR) techniques based on multi-angular image sequences have become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, in the hope of obtaining a better land cover classification. Various SR approaches, namely Projection onto Convex Sets (POCS), Robust, Iterative Back Projection (IBP), Non-Uniform Interpolation, and Structure-Adaptive Normalized Convolution (SANC), are chosen for this study. Subjective assessment through visual interpretation shows substantial improvement in land cover details. Quantitative measures, including peak signal-to-noise ratio and structural similarity, are used for the evaluation of image quality. It was observed that the SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used to classify the SRR data and data resampled to 6 m spatial resolution using bicubic interpolation. A comparative analysis between the classified bicubic-interpolated and SR-derived images of CHRIS/PROBA shows that the SR-derived classified data yield a significant improvement of 10-12% in overall accuracy. The results demonstrate that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.
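    The two quantitative measures used for evaluation can be computed as follows. This is a minimal sketch: the single-window SSIM shown here is a coarse simplification of the standard locally windowed SSIM, and the function names are illustrative.

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Single-window (global) SSIM; standard SSIM averages this statistic
    over local windows, so treat this as a coarse approximation."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

    Identical images give infinite PSNR and SSIM of 1; both measures decrease as reconstruction error grows.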

  4. Achieving subpixel resolution with time-correlated transient signals in pixelated CdZnTe gamma-ray sensors using a focused laser beam (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ocampo Giraldo, Luis A.; Bolotnikov, Aleksey E.; Camarda, Giuseppe S.; Cui, Yonggang; De Geronimo, Gianluigi; Gul, Rubi; Fried, Jack; Hossain, Anwar; Unlu, Kenan; Vernon, Emerson; Yang, Ge; James, Ralph B.

    2017-05-01

    High-resolution position-sensitive detectors have been proposed to correct response non-uniformities in Cadmium Zinc Telluride (CZT) crystals by virtually subdividing the detector's area into small voxels and equalizing the responses from each voxel. 3D pixelated detectors coupled with multichannel readout electronics are the most advanced type of CZT devices, offering many options for signal processing and for enhancing detector performance. One recent innovation proposed for pixelated detectors is to use the induced (transient) signals from neighboring pixels to achieve high sub-pixel position resolution while keeping large pixel sizes. The main hurdle in achieving this goal is the relatively low signal induced on the neighboring pixels because of the electrostatic shielding effect caused by the collecting pixel. In addition, to achieve high position sensitivity one should rely on time-correlated transient signals, which means that digitized output signals must be used. We present the results of our studies to measure the amplitudes of the pixel signals so that these can be used to measure the positions of the interaction points. This is done by processing digitized, time-correlated signals measured from several adjacent pixels, taking into account rise-time and charge-sharing effects. In these measurements we used a focused pulsed laser to generate a 10-micron beam at one milliwatt (650-nm wavelength) over the detector surface while the collecting pixel was moved in cardinal directions. The results include measurements that demonstrate the benefits of combining conventional pixel geometry with digital pulse processing as the best approach for achieving sub-pixel position resolution with pixel dimensions of approximately 2 mm. We also present sub-pixel resolution measurements at comparable energies from various gamma-emitting isotopes.

  5. The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites

    NASA Astrophysics Data System (ADS)

    Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.

    2016-07-01

    The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. 
As the areas with contiguous coverage by stereo data are increasingly abundant, we also present original data related to the analysis of image blocks and address methodology aspects of newly established procedures for the generation of multi-orbit DTMs and image mosaics. The current results suggest that multi-orbit DTMs with grid spacing of 50 m can be feasible for large parts of the surface, as well as brightness-adjusted image mosaics with co-registration accuracy of adjacent strips on the order of one pixel, and at the highest image resolution available. These characteristics are demonstrated by regional multi-orbit data products covering the MC-11 (East) quadrangle of Mars, representing the first prototype of a new HRSC data product level.

  6. Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities. Epipolar images are assumed as the input to this algorithm. The epipolar geometry of linear array scanners is not a straight line, as it is for frame cameras. Traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method which works without this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering stereo pairs. The original images are also divided into small tiles. By omitting the need for extra information, the speed of the matching algorithm is increased and the memory requirements are decreased. Our experiments on a GeoEye-1 stereo pair captured over Qom city in Iran demonstrate that the epipolar images are generated with sub-pixel accuracy.

  7. Detecting breast microcalcifications using super-resolution ultrasound imaging: a clinical study

    NASA Astrophysics Data System (ADS)

    Huang, Lianjie; Labyed, Yassin; Hanson, Kenneth; Sandoval, Daniel; Pohl, Jennifer; Williamson, Michael

    2013-03-01

    Imaging breast microcalcifications is crucial for early detection and diagnosis of breast cancer. It is challenging for current clinical ultrasound to image breast microcalcifications. However, new imaging techniques using data acquired with a synthetic-aperture ultrasound system have the potential to significantly improve ultrasound imaging. We recently developed a super-resolution ultrasound imaging method termed phase-coherent multiple-signal classification (PC-MUSIC). This signal subspace method accounts for the phase response of transducer elements to improve image resolution. In this paper, we investigate the clinical feasibility of our super-resolution ultrasound imaging method for detecting breast microcalcifications. We use our custom-built, real-time synthetic-aperture ultrasound system to acquire breast ultrasound data for 40 patients whose mammograms show the presence of breast microcalcifications. We apply our super-resolution ultrasound imaging method to the patient data, and produce clear images of breast calcifications. Our super-resolution ultrasound PC-MUSIC imaging with synthetic-aperture ultrasound data can provide a new imaging modality for detecting breast microcalcifications in the clinic without using ionizing radiation.

  8. Super-Resolution Microscopy: Shedding Light on the Cellular Plasma Membrane.

    PubMed

    Stone, Matthew B; Shelby, Sarah A; Veatch, Sarah L

    2017-06-14

    Lipids and the membranes they form are fundamental building blocks of cellular life, and their geometry and chemical properties distinguish membranes from other cellular environments. Collective processes occurring within membranes strongly impact cellular behavior and biochemistry, and understanding these processes presents unique challenges due to the often complex and myriad interactions between membrane components. Super-resolution microscopy offers a significant gain in resolution over traditional optical microscopy, enabling the localization of individual molecules even in densely labeled samples and in cellular and tissue environments. These microscopy techniques have been used to examine the organization and dynamics of plasma membrane components, providing insight into the fundamental interactions that determine membrane functions. Here, we broadly introduce the structure and organization of the mammalian plasma membrane and review recent applications of super-resolution microscopy to the study of membranes. We then highlight some inherent challenges faced when using super-resolution microscopy to study membranes, and we discuss recent technical advancements that promise further improvements to super-resolution microscopy and its application to the plasma membrane.

  9. Steganography on quantum pixel images using Shannon entropy

    NASA Astrophysics Data System (ADS)

    Laurel, Carlos Ortega; Dong, Shi-Hai; Cruz-Irisson, M.

    2016-07-01

    This paper presents a steganographic algorithm based on the least significant bit (LSB), guided by the most significant bit information (MSBI), and on the equivalence of a bit-pixel image to a quantum pixel image, which permits information to be communicated secretly in quantum pixel images for secure transmission through insecure channels. The algorithm offers higher security since it exploits the Shannon entropy of an image.
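    Classical LSB embedding, which the quantum-image algorithm builds on, can be illustrated as follows. This sketch covers plain LSB steganography only; the MSBI-guided selection and Shannon-entropy analysis of the paper are not reproduced here.

```python
def embed_lsb(pixels, bits):
    """Hide one message bit in the least significant bit of each pixel,
    changing each carrier pixel's value by at most 1."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n):
    """Read back the first n hidden bits from the stego pixels."""
    return [p & 1 for p in pixels[:n]]
```

    Because only the lowest bit changes, the stego image is visually indistinguishable from the cover image; entropy-based measures are what make the hidden payload harder to detect statistically.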

  10. Validation of the USGS Landsat Burned Area Essential Climate Variable (BAECV) across the conterminous United States

    USGS Publications Warehouse

    Vanderhoof, Melanie; Fairaux, Nicole; Beal, Yen-Ju G.; Hawbaker, Todd J.

    2017-01-01

    The Landsat Burned Area Essential Climate Variable (BAECV), developed by the U.S. Geological Survey (USGS), capitalizes on the long temporal availability of Landsat imagery to identify burned areas across the conterminous United States (CONUS) (1984–2015). Adequate validation of such products is critical for their proper usage and interpretation. Validation of coarse-resolution products often relies on independent data derived from moderate-resolution sensors (e.g., Landsat). Validation of Landsat products, in turn, is challenging because there is no corresponding source of high-resolution, multispectral imagery that has been systematically collected in space and time over the entire temporal extent of the Landsat archive. Because of this, comparison between high-resolution images and Landsat science products can help increase users' confidence in the Landsat science products, but may not, alone, be adequate. In this paper, we demonstrate an approach to systematically validate the Landsat-derived BAECV product. Burned area extent was mapped for Landsat image pairs using a manually trained, semi-automated algorithm whose output was manually edited, across 28 path/rows and five different years (1988, 1993, 1998, 2003, 2008). Three datasets were independently developed by three analysts and integrated on a pixel-by-pixel basis, requiring that at least one, at least two, or all three analysts agree that a pixel was burned. We found that errors within our Landsat reference dataset could be minimized by using the rendition of the dataset in which pixels were mapped as burned if at least two of the three analysts agreed. BAECV errors of omission and commission for the detection of burned pixels averaged 42% and 33%, respectively, for CONUS across all five validation years. 
Errors of omission and commission were lowest across the western CONUS, for example in the shrub and scrublands of the Arid West (31% and 24%, respectively), and highest in the grasslands and agricultural lands of the Great Plains in central CONUS (62% and 57%, respectively). The BAECV product detected most (> 65%) fire events > 10 ha across the western CONUS (Arid and Mountain West ecoregions). Our approach and results demonstrate that a thorough validation of Landsat science products can be completed with independent Landsat-derived reference data, but could be strengthened by the use of complementary sources of high-resolution data.
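    The analyst-agreement integration and the omission/commission error measures can be sketched as follows. This is an illustration under the definitions stated in the abstract; the function names are hypothetical.

```python
import numpy as np

def consensus_mask(masks, min_agree=2):
    """Mark a pixel burned when at least `min_agree` analyst masks agree."""
    return np.sum(np.asarray(masks), axis=0) >= min_agree

def omission_commission(reference, product):
    """Omission: fraction of reference-burned pixels the product missed.
    Commission: fraction of product-burned pixels absent from the reference."""
    ref = np.asarray(reference, bool)
    prod = np.asarray(product, bool)
    omission = np.sum(ref & ~prod) / max(int(np.sum(ref)), 1)
    commission = np.sum(prod & ~ref) / max(int(np.sum(prod)), 1)
    return omission, commission
```

    Raising `min_agree` trades commission error in the reference for omission error, which is why the two-of-three rendition minimized overall reference error.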

  11. Mapping shorelines to subpixel accuracy using Landsat imagery

    NASA Astrophysics Data System (ADS)

    Abileah, Ron; Vignudelli, Stefano; Scozzari, Andrea

    2013-04-01

    A promising method to accurately map the shoreline of oceans, lakes, reservoirs, and rivers is proposed and verified in this work. The method is applied to multispectral satellite imagery in two stages. The first stage is a classification of each image pixel into land/water categories using the conventional 'dark pixel' method. The approach presented here makes use of a single shortwave IR (SWIR) image band, if available. It is well known that SWIR has the least water-leaving radiance and relatively little sensitivity to water pollutants and suspended sediments. It is generally the darkest (over water) and most reliable single band for land-water discrimination. The boundary of the water cover map determined in stage 1 underestimates the water cover and often misses the true shoreline by up to one pixel. A more accurate shoreline is obtained by connecting the center points of pixels with an exactly 50-50 mix of water and land. Stage 2 finds these 50-50 mix points: the image data are interpolated and up-sampled to ten times the original resolution, the local gradient in radiance is used to find the direction to the shore, and the algorithm searches along that path for the interpolated pixel closest to a 50-50 mix. Landsat images with 30 m resolution, processed by this method, may thus provide a shoreline accurate to 3 m. Compared to similar approaches available in the literature, the proposed method discriminates sub-pixels crossed by the shoreline using a criterion based on the absolute value of radiance, rather than its gradient. Preliminary experimentation with the algorithm shows that 10 m accuracy is easily achieved and is often better than 5 m. The proposed method can be used to study long-term shoreline changes by exploiting the 30 years of archived, world-wide-coverage Landsat imagery, which is free and easily accessible for downloading. 
Some applications that exploit the Landsat dataset and the new method are discussed in the companion poster: "Case-studies of potential applications for highly resolved shorelines."
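    The stage-2 idea, up-sampling and searching for the 50-50 mix point, can be sketched in one dimension along a shore-normal transect. This is an illustrative simplification of the 2-D method, with a hypothetical function name.

```python
import numpy as np

def subpixel_shoreline_1d(swir_profile, upsample=10):
    """Locate the land-water boundary along a shore-normal SWIR transect:
    up-sample the profile, then return the position whose radiance is
    closest to the 50-50 mix of the dark (water) and bright (land) levels."""
    profile = np.asarray(swir_profile, float)
    x = np.arange(len(profile))
    xf = np.linspace(0, len(profile) - 1, (len(profile) - 1) * upsample + 1)
    fine = np.interp(xf, x, profile)         # up-sampled radiance
    half = 0.5 * (fine.min() + fine.max())   # 50-50 mix level
    return xf[np.argmin(np.abs(fine - half))]
```

    With ten-times up-sampling of a 30 m pixel grid, the returned boundary position is quantized at 3 m, matching the accuracy claim above.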

  12. Landslide movement mapping by sub-pixel amplitude offset tracking - case study from Corvara landslide

    NASA Astrophysics Data System (ADS)

    Darvishi, Mehdi; Schlögel, Romy; Cuozzo, Giovanni; Callegari, Mattia; Thiebes, Benni; Bruzzone, Lorenzo; Mulas, Marco; Corsini, Alessandro; Mair, Volkmar

    2016-04-01

    Despite the advantages of Differential Synthetic Aperture Radar Interferometry (DInSAR) methods for quantifying landslide deformation over large areas, some limitations remain. These include, for example, geometric distortions, atmospheric artefacts, geometric and temporal decorrelation, data and scale constraints, and the restriction that only 1-dimensional line-of-sight (LOS) deformations can be measured. At the local scale, the major limitations are dense vegetation, as well as large displacement rates which can lead to decorrelation between SAR acquisitions even for high-resolution images and short temporal baselines. Sub-pixel offset tracking was proposed to overcome some of these limitations. Two of its most important advantages are the mapping of 2-D displacements (azimuth and range directions), and the fact that no complex phase unwrapping algorithms are needed, which could give wrong results or fail in the case of decorrelation or fast ground deformations. As sub-pixel offset tracking is highly sensitive to the spatial resolution of the data, the latest generations of SAR sensors such as TerraSAR-X and COSMO-SkyMed, providing high-resolution data (up to 1 m), give the technique great potential to become an established method in the field of ground deformation monitoring. In this study, sub-pixel offset tracking was applied to COSMO-SkyMed X-band imagery in order to quantify ground displacements and to evaluate the feasibility of offset tracking for landslide movement mapping and monitoring. The study area is the active Corvara landslide located in the Italian Alps, described as a slow-moving and deep-seated landslide with annual displacement rates of up to 20 m. Corner reflectors specifically designed for X-band were installed on the landslide and used as reference points for sub-pixel offset tracking. Satellite images covering the period from 2013 to 2015 were analyzed with an amplitude tracking tool to calculate the offsets and extract 2-D displacements. 
Sub-pixel offset tracking outputs were integrated with DInSAR results and correlated to differential GPS measurements recorded at the same time as the SAR data acquisitions.
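    The core of amplitude offset tracking, cross-correlation with sub-pixel peak refinement, can be sketched in one dimension. This illustrates the general technique, not the specific tool used in the study; the function name is hypothetical.

```python
import numpy as np

def subpixel_offset(master, slave):
    """Estimate the shift of `slave` relative to `master`: take the
    integer lag of the cross-correlation peak, then refine it to
    sub-pixel accuracy with a parabola fitted through the peak and
    its two neighbours."""
    m = np.asarray(master, float) - np.mean(master)
    s = np.asarray(slave, float) - np.mean(slave)
    corr = np.correlate(s, m, mode='full')
    k = int(np.argmax(corr))
    shift = float(k - (len(m) - 1))          # integer lag of the peak
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            shift += 0.5 * (y0 - y2) / denom  # parabolic vertex offset
    return shift
```

    In 2-D SAR offset tracking the same idea is applied to image patches, yielding azimuth and range offsets simultaneously.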

  13. Thallium Bromide as an Alternative Material for Room-Temperature Gamma-Ray Spectroscopy and Imaging

    NASA Astrophysics Data System (ADS)

    Koehler, William

    Thallium bromide is an attractive material for room-temperature gamma-ray spectroscopy and imaging because of its high atomic number (Tl: 81, Br: 35), high density (7.56 g/cm3), and wide bandgap (2.68 eV). In this work, 5 mm thick TlBr detectors achieved 0.94% FWHM at 662 keV for all single-pixel events and 0.72% FWHM at 662 keV from the best pixel and depth using three-dimensional position sensing technology. However, these results were limited to stable operation at -20°C. After days to months of room-temperature operation, ionic conduction caused these devices to fail. Depth-dependent signal analysis was used to isolate room-temperature degradation effects to within 0.5 mm of the anode surface. This was verified by refabricating the detectors after complete failure at room temperature; after refabrication, similar performance and functionality were recovered. As part of this work, the improvement in electron drift velocity and energy resolution during conditioning at -20°C was quantified. A new method was developed to measure the impurity concentration without changing the gamma-ray measurement setup. The new method was used to show that detector conditioning was likely the result of charged impurities drifting out of the active volume. This space-charge reduction then produced a more stable and uniform electric field. Additionally, new algorithms were developed to remove hole contributions in high-hole-mobility detectors to improve depth reconstruction. These algorithms improved the depth reconstruction (accuracy) without degrading the depth uncertainty (precision). Finally, the spectroscopic and imaging performance of new 11 x 11 pixelated-anode TlBr detectors was characterized. The larger detectors were used to show that energy resolution can be improved by identifying photopeak events from their Tl characteristic x-rays.

  14. Endmember identification from EO-1 Hyperion L1_R hyperspectral data to build saltmarsh spectral library in Hunter Wetland, NSW, Australia

    NASA Astrophysics Data System (ADS)

    Rasel, Sikdar M. M.; Chang, Hsing-Chung; Ralph, Tim; Saintilan, Neil

    2015-10-01

    Saltmarsh is one of the important communities of wetlands; however, due to a range of pressures, it has been declared an Endangered Ecological Community (EEC) in Australia. In order to correctly identify different saltmarsh species, the development of spectral libraries of saltmarsh species is essential for monitoring this EEC, and hyperspectral remote sensing can expand the scope of wetland monitoring and mapping. The benefits of Hyperion data for wetland monitoring have been studied at Hunter Wetland Park, NSW, Australia. After exclusion of bad bands from the original data, an atmospheric correction model was applied to minimize atmospheric effects and to retrieve apparent surface reflectance for different land covers. The large data dimensionality was reduced with the Forward Minimum Noise Fraction (MNF) algorithm. It was found that the first 32 MNF bands contain more than 80% of the information in the image. The Pixel Purity Index (PPI) algorithm worked properly to extract pure pixels for water, built-up area, and three vegetation types: Casuarina sp., Phragmites sp., and green grass. The results showed it was challenging to extract pure pixels for Sporobolus and Sarcocornia from the data due to the coarse resolution (30 m) and the small patch size (<3 m) of those vegetation types on the ground. The Spectral Angle Mapper classified the image into five classes, Casuarina, saltmarsh (Phragmites), green grass, water, and built-up area, with 43.55% accuracy. This classification also failed to identify Sporobolus as a distinct group for the same reason. High spatial resolution airborne hyperspectral data and a new study site with bigger patches of Sporobolus and Sarcocornia are proposed to overcome this issue.

  15. Finding Blackbody Temperature and Emissivity on a Sub-Pixel Scale

    NASA Astrophysics Data System (ADS)

    Bernstein, D. J.; Bausell, J.; Grigsby, S.; Kudela, R. M.

    2015-12-01

    Surface temperature and emissivity provide important insight into the ecosystem being remotely sensed. Dozier (1981) proposed an algorithm to solve for the percent coverage and temperatures of two different surface types (e.g. sea surface, cloud cover, etc.) within a given pixel, with a constant value of emissivity assumed. Here we build on Dozier (1981) by proposing an algorithm that solves for both the temperature and the emissivity of a water body within a satellite pixel, assuming known percent coverage of surface types within the pixel. Our algorithm generates thermal infrared (TIR) and emissivity end-member spectra for the two surface types, then superposes these end-member spectra on emissivity and TIR spectra emitted from four pixels with varying percent coverage of different surface types. The algorithm was tested preliminarily (48 iterations) using simulated pixels containing more than one surface type, with temperature and emissivity percent errors ranging from 0 to 1.071% and 2.516 to 15.311%, respectively [1]. We then tested the algorithm using an image collected with MASTER as part of the NASA Student Airborne Research Program (NASA SARP). Here the temperature of water was calculated to be within 0.22 K of in situ data. The algorithm calculated the emissivity of water with an error of 0.13 to 1.53% for Salton Sea pixels collected with MASTER, also as part of NASA SARP. This method could improve retrievals for the HyspIRI sensor. [1] Percent error for emissivity was generated by averaging percent error across all selected band widths.
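    The building blocks of such a mixed-pixel inversion, Planck radiance, its analytic inverse, and a least-squares unmixing of end-member radiances given known coverage fractions, can be sketched as follows. This is an illustrative simplification, not the authors' algorithm; the function names are hypothetical.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def planck(T, lam):
    """Blackbody spectral radiance at temperature T (K), wavelength lam (m)."""
    return (2.0 * H * C ** 2 / lam ** 5) / np.expm1(H * C / (lam * K * T))

def inv_planck(L, lam):
    """Brightness temperature: the analytic inverse of planck()."""
    return H * C / (lam * K * np.log1p(2.0 * H * C ** 2 / (lam ** 5 * L)))

def unmix_endmembers(L_pixels, fractions):
    """Given pixel radiances L = f*L_a + (1-f)*L_b with known per-pixel
    coverage fractions f, least-squares solve for the two end-member
    radiances L_a and L_b."""
    f = np.asarray(fractions, float)
    A = np.column_stack([f, 1.0 - f])
    (La, Lb), *_ = np.linalg.lstsq(A, np.asarray(L_pixels, float), rcond=None)
    return La, Lb
```

    Once an end-member radiance is isolated, `inv_planck` converts it to a brightness temperature; emissivity then follows from the ratio of the observed to the blackbody radiance at that temperature.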

  16. 3D near-infrared imaging based on a single-photon avalanche diode array sensor

    NASA Astrophysics Data System (ADS)

    Mata Pavia, Juan; Charbon, Edoardo; Wolf, Martin

    2011-07-01

    An imager for optical tomography was designed based on a detector with 128×128 single-photon pixels that includes a bank of 32 time-to-digital converters. Owing to the high spatial resolution and the ability to perform time-resolved measurements, a new contactless setup was conceived in which scanning of the object is not necessary. This enables high-resolution optical tomography at a much higher acquisition rate, which is fundamental in clinical applications. The setup has a timing resolution of 97 ps and operates with a laser source with an average power of 3 mW. This new imaging system generated a large amount of data that could not be processed by established methods, so new concepts and algorithms were developed to take full advantage of it. Images were generated using a new reconstruction algorithm that combines general inverse-problem methods with Fourier transforms in order to reduce the complexity of the problem. Simulations show that the potential resolution of the new setup is on the order of millimeters, and experiments have been performed to confirm this potential: images derived from the measurements demonstrate that we have already reached a resolution of 5 mm.
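    The time-resolved measurement underlying such a SPAD/TDC imager can be pictured as a per-pixel histogram of single-photon arrival times at the reported 97 ps timing resolution; the reconstruction then works from these histograms. The function names and bin count below are illustrative assumptions, not the instrument's firmware:

    ```python
    import numpy as np

    TDC_BIN_PS = 97  # timing resolution reported for the setup (ps)

    def tof_histogram(timestamps_ps, n_bins=256):
        """Bin single-photon arrival times (ps, relative to the laser sync
        pulse) into a TCSPC-style per-pixel time-of-flight histogram."""
        edges = np.arange(n_bins + 1) * TDC_BIN_PS
        counts, _ = np.histogram(timestamps_ps, bins=edges)
        return counts

    def peak_arrival_ps(counts):
        """Center of the most populated bin: a crude estimate of the
        dominant photon path's time of flight."""
        return (np.argmax(counts) + 0.5) * TDC_BIN_PS
    ```

    In practice the full histogram shape, not just its peak, carries the depth information exploited by time-resolved tomography, but the binning step above is the raw measurement each of the 128×128 pixels delivers.
    
    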

  17. Arcsecond and Sub-arcsecond Imaging with an X-ray Multi-Image Interferometer and Imager for (Very) Small Satellites

    NASA Astrophysics Data System (ADS)

    Hayashida, K.; Kawabata, T.; Nakajima, H.; Inoue, S.; Tsunemi, H.

    2017-10-01

    The best angular resolution achieved with an X-ray mirror, 0.5 arcsec, is realized onboard the Chandra satellite; comparable or better resolution is expected to be difficult to achieve in the near future, and indeed the angular resolution goal of the ATHENA telescope is 5 arcsec. We propose a new type of X-ray interferometer consisting simply of an X-ray absorption grating and an X-ray spectral imaging detector, such as an X-ray CCD or a new-generation CMOS detector, which stacks the multiple images created by Talbot interference (Hayashida et al. 2016). This system, which we now call the Multi-Image X-ray Interferometer Module (MIXIM), enables arcsecond resolution with very small satellites of 50 cm size, and sub-arcsecond resolution with small satellites. We have performed ground experiments in which a micro-focus X-ray source, a grating with a pitch of 4.8 μm, and a detector with 30 μm pixels were placed about 1 m from the source. We obtained the self-image (interferometric fringe) of the grating over a wide band pass around 10 keV. This result corresponds to about 2 arcsec resolution for parallel-beam incidence. MIXIM is useful for high-angular-resolution imaging of relatively bright sources; searching for supermassive black holes and resolving AGN tori would be targets of this system.
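    As a rough consistency check on the reported geometry, shifting the Talbot self-image by one grating pitch corresponds to an incidence-angle change of pitch/distance, which for a 4.8 μm pitch at about 1 m is roughly 1 arcsec, the same order as the ~2 arcsec resolution reported. A sketch of that small-angle arithmetic (an order-of-magnitude check, not the authors' analysis):

    ```python
    import math

    ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

    def self_image_angular_resolution(pitch_m, distance_m):
        """Small-angle estimate: a one-pitch shift of the Talbot self-image
        corresponds to an incidence-angle change of pitch/distance."""
        return (pitch_m / distance_m) * ARCSEC_PER_RAD

    # Reported ground-experiment geometry: 4.8 um grating pitch, ~1 m distance
    theta_arcsec = self_image_angular_resolution(4.8e-6, 1.0)  # about 1 arcsec
    ```

    The achievable resolution also depends on how finely the fringe phase can be measured, so the reported ~2 arcsec is consistent with this pitch-over-distance scale.
    
    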

  18. Comparison between different thickness umbrella-shaped expandable radiofrequency electrodes (SuperSlim and CoAccess): Experimental and clinical study

    PubMed Central

    KODA, MASAHIKO; TOKUNAGA, SHIHO; MATONO, TOMOMITSU; SUGIHARA, TAKAAKI; NAGAHARA, TAKAKAZU; MURAWAKI, YOSHIKAZU

    2011-01-01

    The purpose of the present study was to compare the size and configuration of the ablation zones created by SuperSlim and CoAccess electrodes, using various ablation algorithms in ex vivo bovine liver and in clinical cases. In the experimental study, we ablated explanted bovine liver using 2 types of electrodes and 4 ablation algorithms (combinations of incremental power supply, stepwise expansion and additional low-power ablation) and evaluated the ablation area and time. In the clinical study, we compared the ablation volume and the shape of the ablation zone between both electrodes in 23 hepatocellular carcinoma (HCC) cases with the best algorithm (incremental power supply, stepwise expansion and additional low-power ablation) as derived from the experimental study. In the experimental study, the ablation area and time by the CoAccess electrode were significantly greater compared to those by the SuperSlim electrode for the single-step (algorithm 1, p=0.0209 and 0.0325, respectively) and stepwise expansion algorithms (algorithm 2, p=0.0002 and <0.0001, respectively; algorithm 3, p= 0.006 and 0.0407, respectively). However, differences were not significant for the additional low-power ablation algorithm. In the clinical study, the ablation volume and time in the CoAccess group were significantly larger and longer, respectively, compared to those in the SuperSlim group (p=0.0242 and 0.009, respectively). Round ablation zones were acquired in 91.7% of the CoAccess group, while irregular ablation zones were obtained in 45.5% of the SuperSlim group (p=0.0428). In conclusion, the CoAccess electrode achieves larger and more uniform ablation zones compared with the SuperSlim electrode, though it requires longer ablation times in experimental and clinical studies. PMID:22977647

  19. Synthetic aperture radar images with composite azimuth resolution

    DOEpatents

    Bielek, Timothy P; Bickel, Douglas L

    2015-03-31

    A synthetic aperture radar (SAR) image is produced by using all phase histories of a set of phase histories to produce a first pixel array having a first azimuth resolution, and using less than all phase histories of the set to produce a second pixel array having a second azimuth resolution that is coarser than the first azimuth resolution. The first and second pixel arrays are combined to produce a third pixel array defining a desired SAR image that shows distinct shadows of moving objects while preserving detail in stationary background clutter.
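    The two pixel arrays can be sketched as full-aperture and sub-aperture azimuth compressions of the same phase-history data, since using fewer pulses (phase histories) directly coarsens azimuth resolution. The per-pixel minimum used below to combine them is only one plausible rule, assumed for illustration; the patent's combining step is not specified here:

    ```python
    import numpy as np

    def azimuth_compress(phase_history, n_pulses=None):
        """Form an image by azimuth (slow-time) FFT of range-compressed data,
        optionally using only the first n_pulses pulses of the aperture.

        phase_history: (pulses, range_bins) complex array.
        The sub-aperture is zero-padded back to the full pulse count so both
        images share one pixel grid; fewer pulses -> coarser azimuth resolution.
        """
        ph = phase_history if n_pulses is None else phase_history[:n_pulses]
        full = phase_history.shape[0]
        padded = np.zeros((full, ph.shape[1]), dtype=complex)
        padded[: ph.shape[0]] = ph
        return np.fft.fftshift(np.fft.fft(padded, axis=0), axes=0)

    def composite(fine, coarse):
        """One possible (assumed) combination: keep the darker magnitude per
        pixel, so moving-target shadows that smear away at fine resolution
        survive from the coarse image while static detail stays sharp."""
        return np.minimum(np.abs(fine), np.abs(coarse))
    ```

    The key point the sketch preserves is that both arrays come from one set of phase histories: the second image costs no extra collection, only a second compression pass over a subset of the pulses.
    
    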

  20. Dynamic full-field infrared imaging with multiple synchrotron beams

    PubMed Central

    Stavitski, Eli; Smith, Randy J.; Bourassa, Megan W.; Acerbo, Alvin S.; Carr, G. L.; Miller, Lisa M.

    2013-01-01

    Microspectroscopic imaging in the infrared (IR) spectral region allows for the examination of spatially resolved chemical composition on the microscale. More than a decade ago, it was demonstrated that diffraction-limited spatial resolution can be achieved when an apertured, single-pixel IR microscope is coupled to the high brightness of a synchrotron light source. Nowadays, many IR microscopes are equipped with multi-pixel Focal Plane Array (FPA) detectors, which dramatically improve data acquisition times for imaging large areas. Recently, progress has been made toward efficiently coupling synchrotron IR beamlines to multi-pixel detectors, but existing approaches utilize expensive and highly customized optical schemes. Here we demonstrate the development and application of a simple optical configuration that can be implemented on most existing synchrotron IR beamlines in order to achieve full-field IR imaging with diffraction-limited spatial resolution. Specifically, the synchrotron radiation fan is extracted from the bending magnet and split into four beams that are combined on the sample, allowing them to fill a large section of the FPA. With this optical configuration, we are able to oversample an image by more than a factor of two, even at the shortest wavelengths, making image restoration through deconvolution algorithms possible. High chemical sensitivity, rapid acquisition times, and superior signal-to-noise characteristics of the instrument are demonstrated. The unique characteristics of this setup enabled the real-time study of heterogeneous chemical dynamics with diffraction-limited spatial resolution for the first time. PMID:23458231
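    Oversampling past the diffraction limit is what makes restoration by deconvolution well-posed. One standard choice for such restoration is Richardson-Lucy iteration, sketched below with NumPy FFTs as an example of the algorithm family the text alludes to, not necessarily the authors' exact method:

    ```python
    import numpy as np

    def fft_convolve(img, kernel):
        """Circular convolution via FFT; kernel is centered and same shape as img."""
        otf = np.fft.fft2(np.fft.ifftshift(kernel))
        return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

    def richardson_lucy(observed, psf, iterations=20):
        """Richardson-Lucy deconvolution with circular boundary conditions.
        Assumes psf is nonnegative, centered, and sums to 1."""
        otf = np.fft.fft2(np.fft.ifftshift(psf))
        estimate = np.full_like(observed, observed.mean())
        for _ in range(iterations):
            # Forward blur of the current estimate (small epsilon avoids 0/0)
            blurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * otf)) + 1e-12
            ratio = observed / blurred
            # Multiplicative update: correlate the ratio with the PSF (adjoint blur)
            estimate = estimate * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
        return estimate
    ```

    The multiplicative update keeps the estimate nonnegative and approximately flux-conserving, which suits photon-counting and intensity data; without the >2x oversampling described above, the PSF would be undersampled and the restoration ill-posed.
    
    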

Top