Sample records for depth image based

  1. Inverse scattering pre-stack depth imaging and its comparison to some depth migration methods for imaging rich fault complex structure

    NASA Astrophysics Data System (ADS)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Mubarok, Syahrul; Deny, Agus; Widowati, Sri; Kurniadi, Rizal

    2012-06-01

    Migration is an important issue in seismic imaging of complex structures. Over the past decade, depth imaging has become an important tool for producing accurate images in the depth domain instead of the time domain. The challenge for any depth migration method, however, lies in revealing the complex structure of the subsurface. There are many depth migration methods, each with its own advantages and weaknesses. In this paper, we present our proposed method of pre-stack depth migration based on a time-domain inverse scattering wave equation, which we hope can serve as a solution for imaging complex structures in Indonesia, especially in zones of rich thrust faulting. In this research, we develop a recent advance in wave equation migration based on time-domain inverse scattering, which models wave propagation more naturally using scattered waves. This pre-stack depth migration uses a time-domain inverse scattering wave equation based on the Helmholtz equation. To provide true-amplitude recovery, an inverse-of-divergence procedure and recovery of transmission loss are incorporated into the pre-stack migration. We also benchmark the proposed inverse scattering pre-stack depth migration against other migration methods: wave equation pre-stack depth migration, wave equation depth migration, and pre-stack time migration. The inverse scattering pre-stack depth migration successfully imaged a rich fault zone containing extremely steep dips and produced a seismic image of superior quality; its image quality is much better than that of the other migration methods.

  2. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated using a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve-fitting approach, and both show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized fully in focus, which makes finding stereo correspondences easier. In contrast to monocular visual odometry approaches, the calibration of the individual depth maps allows the scale of the scene to be observed. Furthermore, the light-field information promises better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
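The Kalman-like update of per-pixel depth hypotheses can be illustrated with simple inverse-variance weighting (a minimal sketch of the general technique, not the authors' exact implementation):

```python
def fuse_depth(d1, var1, d2, var2):
    """Fuse two depth hypotheses of one pixel, Kalman-style:
    weight each estimate by its inverse variance."""
    var = (var1 * var2) / (var1 + var2)   # fused variance shrinks
    d = (d1 / var1 + d2 / var2) * var     # inverse-variance-weighted mean
    return d, var

# Two virtual-depth estimates of the same pixel from different micro-images
d, var = fuse_depth(2.0, 0.04, 2.2, 0.16)
```

After each fusion the variance is smaller than either input variance, which is why repeated updates across micro-images progressively tighten the per-pixel estimate.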

  3. Color image guided depth image super resolution using fusion filter

    NASA Astrophysics Data System (ADS)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm that uses an HR color image as a guide and an LR depth image as input. We use a fusion of a guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better-quality HR depth images both numerically and visually.
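The guided-upsampling idea behind such methods can be sketched with a toy joint bilateral filter, where each high-resolution depth sample is a weighted average of low-resolution depth values, weighted by spatial distance and by color similarity in the guide image (a NumPy illustration, not the paper's specific fusion filter):

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Upsample an LR depth map using an HR (grayscale) guide image."""
    h, w = guide_hr.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            yl, xl = y / scale, x / scale  # position in LR grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ys, xs = int(round(yl)) + dy, int(round(xl)) + dx
                    if not (0 <= ys < depth_lr.shape[0] and 0 <= xs < depth_lr.shape[1]):
                        continue
                    # spatial weight, measured in LR coordinates
                    ws = np.exp(-((ys - yl) ** 2 + (xs - xl) ** 2) / (2 * sigma_s ** 2))
                    # range weight from the HR guide image
                    gy = min(int(ys * scale), h - 1)
                    gx = min(int(xs * scale), w - 1)
                    wr = np.exp(-(guide_hr[y, x] - guide_hr[gy, gx]) ** 2 / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lr[ys, xs]
                    den += ws * wr
            out[y, x] = num / den if den > 0 else depth_lr[int(yl), int(xl)]
    return out
```

The range weight keeps depth discontinuities aligned with color edges in the guide, which is the property the edge-based joint bilateral component exploits.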

  4. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    PubMed

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating the perception of depth have relied solely on a single type of depth cue presented in non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues, natural stereoscopic images were used in this study. Using slow cortical potentials and source localization, we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends previous functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility of separating the processing of different depth cues.

  5. Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.

    PubMed

    Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso

    2018-07-01

    There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not increased at the same pace. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity and then creates a representative of each color-depth image cluster to be used as a prior depth map. The appropriate prior depth map for a given color query image is selected by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster, and in turn the associated prior depth map. Finally, this prior estimate is enhanced through segmentation-guided filtering to obtain the final depth map. This approach has been tested on two publicly available databases and compared with several state-of-the-art algorithms to prove its efficiency.

  6. Three-photon tissue imaging using moxifloxacin.

    PubMed

    Lee, Seunghun; Lee, Jun Ho; Wang, Taejun; Jang, Won Hyuk; Yoon, Yeoreum; Kim, Bumju; Jun, Yong Woong; Kim, Myoung Joon; Kim, Ki Hean

    2018-06-20

    Moxifloxacin is an antibiotic used clinically and has recently been used as a clinically compatible cell-labeling agent for two-photon (2P) imaging. Although 2P imaging with moxifloxacin labeling visualized cells inside tissues using enhanced fluorescence, the imaging depth was quite limited because of the relatively short excitation wavelength (<800 nm) used. In this study, the feasibility of three-photon (3P) excitation of moxifloxacin using a longer excitation wavelength, and of moxifloxacin-based 3P imaging, was tested to increase the imaging depth. Moxifloxacin fluorescence via 3P excitation was detected at excitation wavelengths above 1000 nm. After obtaining the excitation and emission spectra of moxifloxacin, moxifloxacin-based 3P imaging was applied to ex vivo mouse bladder and ex vivo mouse small intestine tissues and compared with moxifloxacin-based 2P imaging by switching the excitation wavelength of a Ti:sapphire oscillator between approximately 1030 nm and 780 nm. Both moxifloxacin-based 2P and 3P imaging visualized cellular structures in the tissues via moxifloxacin labeling, but the image contrast was better with 3P imaging than with 2P imaging at the same imaging depths. The imaging speed and imaging depth of moxifloxacin-based 3P imaging with a Ti:sapphire oscillator were limited by insufficient excitation power. We therefore constructed a new system for moxifloxacin-based 3P imaging using a high-energy Yb fiber laser at 1030 nm and used it for in vivo deep-tissue imaging of a mouse small intestine. Moxifloxacin-based 3P imaging could be useful for clinical applications with enhanced imaging depth.

  7. Distance-based over-segmentation for single-frame RGB-D images

    NASA Astrophysics Data System (ADS)

    Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao

    2017-11-01

    Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm segments an image into regions of perceptually similar pixels, but performs poorly in indoor environments when it relies on the color image alone. Fortunately, RGB-D images can improve performance on indoor scenes. To segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which achieves full coverage of the image by super-pixels. DBOS fills the holes in depth images to fully utilize the depth information and applies a SLIC-like framework for fast execution. Additionally, depth features such as the plane projection distance are extracted to compute the distance measure at the core of SLIC-like frameworks. Experiments on RGB-D images from the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining comparable speed.

  8. Spectrally-Based Bathymetric Mapping of a Dynamic, Sand-Bedded Channel: Niobrara River, Nebraska, USA

    NASA Astrophysics Data System (ADS)

    Dilbone, Elizabeth K.

    Methods for spectrally based bathymetric mapping of rivers have mainly been developed and tested on clear-flowing, gravel-bedded channels, with limited application to turbid, sand-bedded rivers. Using hyperspectral images of the Niobrara River, Nebraska, and field-surveyed depth data, this study evaluated three methods of retrieving depth from remotely sensed data in a dynamic, sand-bedded channel. The first, regression-based approach paired in situ depth measurements and image pixel values to predict depth via Optimal Band Ratio Analysis (OBRA). The second approach used ground-based reflectance measurements to calibrate an OBRA relationship; for this approach, CASI images were atmospherically corrected to units of apparent surface reflectance using an empirical line calibration. For the final technique, we used Image-to-Depth Quantile Transformation (IDQT) to predict depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image-derived variable. OBRA yielded the lowest overall depth retrieval error (0.0047 m) and the highest observed-versus-predicted R2 (0.81). Although misalignment between field and image data was not problematic for OBRA's performance in this study, such issues present potential limitations to standard regression-based approaches like OBRA in dynamic, sand-bedded rivers. Field spectroscopy-based maps exhibited a slight shallow bias (0.0652 m) but provided reliable depth estimates for most of the study reach. IDQT had a strong deep bias but still provided informative relative depth maps that portrayed general patterns of shallow and deep areas of the channel. The over-prediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the CDF of depth. While each of the techniques tested in this study demonstrated the potential to provide accurate depth estimates in sand-bedded rivers, each method was also subject to certain constraints and limitations.
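The regression core of OBRA can be sketched as an exhaustive search over band pairs for the log-ratio that best predicts field-measured depth (a simplified illustration; the published OBRA workflow involves additional preprocessing):

```python
import numpy as np

def obra(reflectance, depth):
    """Optimal Band Ratio Analysis (simplified): find the band pair (i, j)
    whose log-ratio X = ln(R_i / R_j) best predicts depth by linear regression.

    reflectance: (n_points, n_bands) spectra at surveyed points
    depth:       (n_points,) field-measured depths
    Returns (best_i, best_j, slope, intercept, r2).
    """
    n_bands = reflectance.shape[1]
    best = (None, None, 0.0, 0.0, -np.inf)
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(reflectance[:, i] / reflectance[:, j])
            slope, intercept = np.polyfit(x, depth, 1)
            pred = slope * x + intercept
            r2 = 1 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
            if r2 > best[4]:
                best = (i, j, slope, intercept, r2)
    return best
```

The winning regression is then applied pixel-by-pixel to the image to map depth wherever the calibration holds.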

  9. Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens.

    PubMed

    Shen, Xin; Javidi, Bahram

    2018-03-01

    We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-based optical see-through augmented reality display with an enhanced depth range for the 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images at various positions to the micro lens array. Based on resolution-priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system significantly enhances the depth range of a 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and mitigates the vergence-accommodation conflict of traditional augmented reality displays.

  10. A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion

    NASA Astrophysics Data System (ADS)

    Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen

    2017-09-01

    In recent years, 3D movies have attracted more and more attention because of their immersive stereoscopic experience. However, 3D content is still insufficient, so estimating depth information for 2D-to-3D conversion of video is increasingly important. In this paper, we present a novel algorithm to estimate depth information from a video via a scene classification algorithm. To obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape, close-up, and linear perspective. For the landscape type, a specific algorithm divides the image into many blocks and assigns depth values using the relative-height cue of the image. For the close-up type, a saliency-based method is adopted to enhance the foreground, and the result is combined with a global depth gradient to generate the final depth map. For the linear perspective type, vanishing line detection yields a vanishing point, which is regarded as the farthest point from the viewer and assigned the deepest depth value; the rest of the image is then assigned depth values according to the distance between each point and the vanishing point. Finally, after bilateral filtering, depth image-based rendering is employed to generate stereoscopic virtual views. Experiments show that the proposed algorithm achieves realistic 3D effects and yields satisfactory results, with perception scores of the anaglyph images between 6.8 and 7.8.
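For the close-up case, combining a saliency map with a global depth gradient can be sketched as a simple weighted blend (the weighting below is hypothetical; the paper does not publish its exact formula):

```python
import numpy as np

def closeup_depth(saliency, alpha=0.6):
    """Blend a saliency map with a global top-to-bottom depth gradient
    for a close-up shot (illustrative weighting).

    saliency: (h, w) map in [0, 1], high = foreground (near).
    Returns a depth map in [0, 1] where 0 = far, 1 = near.
    """
    h, w = saliency.shape
    # global depth gradient: the bottom of the frame is assumed nearer
    gradient = np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))
    return alpha * saliency + (1 - alpha) * gradient
```

The gradient term supplies a plausible depth ordering where saliency is flat, while salient foreground regions are pulled toward the viewer.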

  11. Evaluating the potential for remote bathymetric mapping of a turbid, sand-bed river: 2. Application to hyperspectral image data from the Platte River

    USGS Publications Warehouse

    Legleiter, C.J.; Kinzel, P.J.; Overstreet, B.T.

    2011-01-01

    This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes. © 2011 American Geophysical Union.

  13. Depth-aware image seam carving.

    PubMed

    Shen, Jianbing; Wang, Dapeng; Li, Xuelong

    2013-10-01

    An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, while removing only secondary content in the scene. However, it is still difficult to identify the important and salient objects in a way that avoids distorting them after resizing the input image. In this paper, we develop a novel depth-aware single-image seam carving approach that takes advantage of modern depth cameras such as the Kinect sensor, which captures an RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph-cut-based energy optimization. Our method achieves better seam carving performance by cutting fewer seams through near objects and removing more seams from distant objects. To the best of our knowledge, our algorithm is the first to use the true depth map captured by the Kinect depth camera for single-image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods.
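The depth-weighted energy idea can be sketched by scaling gradient energy by nearness, so seams prefer to pass through distant regions (an illustrative weighting; the paper's JND-based significance term and graph-cut optimization are more involved):

```python
import numpy as np

def depth_aware_energy(gray, depth, beta=1.0):
    """Energy map for seam carving that protects near objects.

    Gradient magnitude is scaled up where depth is small (near),
    so low-energy seams fall in distant regions.
    """
    gy, gx = np.gradient(gray.astype(float))
    grad = np.abs(gx) + np.abs(gy)
    near = 1.0 - depth / depth.max()  # 1 = nearest, 0 = farthest
    return grad * (1.0 + beta * near)
```

A standard dynamic-programming seam search over this energy map would then remove seams mostly from the background.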

  14. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of the optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The proposed optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the largest depth image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth imaging and its capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.

  15. An efficient hole-filling method based on depth map in 3D view generation

    NASA Astrophysics Data System (ADS)

    Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

    In 3D view generation, a new virtual view is synthesized through depth image based rendering (DIBR) using a single color image and its associated depth map. Holes are unavoidably generated in this 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address the problem. First, we improve the DIBR process by proposing a one-to-four (OTF) algorithm, using the z-buffer algorithm to solve the overlap problem. Then, based on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses the information of the depth map to process the image after DIBR. To improve the accuracy of the virtual image, inpainting starts from the background side. In the priority calculation, we add a depth term to the confidence and data terms, and in the search for the most similar patch in the source region, we define a depth similarity to improve the accuracy of the search. Experimental results show that the proposed method effectively improves the quality of the 3D virtual view both subjectively and objectively.

  16. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    USGS Publications Warehouse

    Legleiter, Carl

    2016-01-01

    Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R2 = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.
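The quantile-matching core of IDQT can be sketched as mapping each pixel's rank within the pixel-value distribution to the same quantile of the depth distribution (a minimal sketch under simplifying assumptions, not the published workflow):

```python
import numpy as np

def idqt(pixel_values, depth_samples, image):
    """Image-to-Depth Quantile Transformation (simplified): map each
    image-derived value to the depth at the same quantile.

    pixel_values:  sample of the image-derived variable (defines its CDF)
    depth_samples: sample of depths (defines the depth CDF)
    image:         array of image-derived values to convert to depth
    """
    # quantile of each image value within the pixel-value distribution
    sorted_px = np.sort(pixel_values)
    q = np.searchsorted(sorted_px, image.ravel(), side="right") / len(sorted_px)
    # read the depth CDF at those quantiles
    depths = np.quantile(depth_samples, np.clip(q, 0.0, 1.0))
    return depths.reshape(image.shape)
```

Because only the two frequency distributions enter the calibration, no spatial correspondence between field points and image pixels is required, which is the property the abstract highlights for dynamic channels.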

  17. Cloud Optical Depth Measured with Ground-Based, Uncooled Infrared Imagers

    NASA Technical Reports Server (NTRS)

    Shaw, Joseph A.; Nugent, Paul W.; Pust, Nathan J.; Redman, Brian J.; Piazzolla, Sabino

    2012-01-01

    Recent advances in uncooled, low-cost, long-wave infrared imagers provide excellent opportunities for remotely deployed ground-based remote sensing systems. However, the use of these imagers in demanding atmospheric sensing applications requires that careful attention be paid to characterizing and calibrating the system. We have developed and are using several versions of the ground-based "Infrared Cloud Imager (ICI)" instrument to measure spatial and temporal statistics of clouds and cloud optical depth or attenuation for both climate research and Earth-space optical communications path characterization. In this paper we summarize the ICI instruments and calibration methodology, then show ICI-derived cloud optical depths that are validated using a dual-polarization cloud lidar system for thin clouds (optical depth of approximately 4 or less).

  18. A depth enhancement strategy for kinect depth image

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Li, Hua; Han, Cheng; Xue, Yaohong; Zhang, Chao; Hu, Hanping; Jiang, Zhengang

    2018-03-01

    Kinect is a motion-sensing input device widely used in computer vision and related fields. However, there are many inaccurate depth data in Kinect depth images, even with Kinect v2. In this paper, an algorithm is proposed to enhance Kinect v2 depth images. Following the principle of its depth measurement, the foreground and the background are treated separately. For the background, holes are filled according to the depth data in the neighborhood. For the foreground, a filling algorithm based on the color image, taking both spatial and color information into account, is proposed. An adaptive joint bilateral filtering method is used to reduce noise. Experimental results show that the processed depth images have a clean background and clear edges, and the results are better than those of traditional strategies. The method can be applied in 3D reconstruction to preprocess depth images in real time and obtain accurate results.

  19. A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences

    PubMed Central

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body pose can be estimated through model fitting using dense correspondences between depth data and an articulated human model (the local optimization method). Although this usually achieves high accuracy thanks to the dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this key-point based method is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by the key-point based and local optimization methods. Experimental results and a performance comparison are presented to demonstrate the effectiveness of the proposed approach. PMID:22399933

  20. No scanning depth imaging system based on TOF

    NASA Astrophysics Data System (ADS)

    Sun, Rongchun; Piao, Yan; Wang, Yu; Liu, Shuo

    2016-03-01

    To quickly obtain a 3D model of real-world objects, multi-point ranging is very important. However, traditional measuring methods usually adopt the principle of point-by-point or line-by-line measurement, which is slow and inefficient. In this paper, a non-scanning depth imaging system based on TOF (time of flight) is proposed. The system is composed of a light source circuit, a special infrared image sensor module, an image data processor and controller, a data cache circuit, a communication circuit, and so on. Following the working principle of TOF measurement, an image sequence is collected by the high-speed CMOS sensor, the distance information is obtained by identifying the phase difference, and the amplitude image is also calculated. Experiments were conducted, and the results show that the depth imaging system achieves scanner-free depth imaging with good performance.
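The phase-to-distance step of continuous-wave TOF follows the generic relation d = c·Δφ/(4π·f_mod) (the standard formula, not this paper's specific sensor pipeline):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_rad, f_mod_hz):
    """Continuous-wave TOF: d = c * dphi / (4 * pi * f_mod).
    The unambiguous range is c / (2 * f_mod)."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

# A pi/2 phase shift at 20 MHz modulation corresponds to ~1.87 m
d = tof_distance(math.pi / 2, 20e6)
```

At 20 MHz the unambiguous range is about 7.5 m; beyond that, the measured phase wraps and distances alias.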

  1. Depth profile measurement with lenslet images of the plenoptic camera

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei

    2018-03-01

    An approach for depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. First, these images are processed directly with a refocusing technique to obtain the depth map, without the need to align and decode the plenoptic image. Then, a linear depth calibration based on the optical structure of the plenoptic camera is applied for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map: unlike traditional methods, the resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.

  2. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  3. Depth image enhancement using perceptual texture priors

    NASA Astrophysics Data System (ADS)

    Bang, Duhyeon; Shim, Hyunjung

    2015-03-01

    A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, depth cameras suffer from severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress the depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perception-based depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect our work to be effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.

  4. Depth map occlusion filling and scene reconstruction using modified exemplar-based inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Marchuk, V. I.; Fisunov, A. V.; Tokareva, S. V.; Egiazarian, K. O.

    2015-03-01

    RGB-D sensors are relatively inexpensive and commercially available off-the-shelf. However, owing to their low complexity, their depth maps exhibit several artifacts, such as holes, misalignment between the depth and color images, and a lack of sharp object boundaries. Depth maps generated by Kinect cameras also contain a significant number of missing pixels and strong noise, limiting their usability in many computer vision applications. In this paper, we present an efficient hole-filling and damaged-region restoration method that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on modified exemplar-based inpainting and LPA-ICI filtering, exploiting the correlation between color and depth values in local image neighborhoods. As a result, object edges are sharpened and aligned with the objects in the color image. Several examples considered in this paper show the effectiveness of the proposed approach for removing large holes as well as recovering small regions on several test depth maps. We perform a comparative study and show that, statistically, the proposed algorithm delivers superior-quality results compared to existing algorithms.

  5. Depth resolved hyperspectral imaging spectrometer based on structured light illumination and Fourier transform interferometry

    PubMed Central

    Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C.

    2014-01-01

    A depth-resolved hyperspectral imaging spectrometer can provide depth-resolved imaging in both the spatial and spectral domains. Images acquired through a standard imaging Fourier transform spectrometer lack depth resolution. By post-processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured light illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens, including a fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated by quantifying spectra from 3D-resolved features in biological specimens. The system demonstrated a depth resolution of 1.8 μm and a spectral resolution of 7 nm. PMID:25360367
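
    The HiLo combination mentioned above can be sketched roughly as follows (a schematic illustration only; the Gaussian cutoff `sigma`, the weight `eta`, and demodulation by absolute difference are simplifying assumptions, not the authors' exact algorithm):

    ```python
    import numpy as np

    def gaussian_lowpass(img, sigma):
        """Frequency-domain Gaussian low-pass filter."""
        ny, nx = img.shape
        fy = np.fft.fftfreq(ny)[:, None]
        fx = np.fft.fftfreq(nx)[None, :]
        H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
        return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

    def hilo(uniform, structured, sigma=4.0, eta=1.0):
        """Fuse a uniform- and a structured-illumination image into an
        optically sectioned image, HiLo-style."""
        # Lo: in-focus low frequencies, demodulated from the structured image
        lo = gaussian_lowpass(np.abs(uniform - structured), sigma)
        # Hi: high frequencies of the uniform image (inherently sectioned)
        hi = uniform - gaussian_lowpass(uniform, sigma)
        return eta * lo + hi

    rng = np.random.default_rng(0)
    uni = rng.random((64, 64))
    y = np.arange(64)[:, None]
    struct = uni * (0.5 + 0.5 * np.cos(2 * np.pi * y / 8))  # striped illumination
    sectioned = hilo(uni, struct)
    ```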

  6. Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor

    NASA Astrophysics Data System (ADS)

    Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.

    2018-04-01

    An RGB-D camera allows the capture of depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera; then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, once the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, using the registration parameters obtained from ICP, the 3D scene from the RGB images is registered to the 3D scene from the depth images, and the fused point cloud is obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrate the feasibility of the proposed method.

  7. Time-of-flight depth image enhancement using variable integration time

    NASA Astrophysics Data System (ADS)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

    Time-of-Flight (ToF) cameras are used in a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should compute distance from a large amount of infrared light, which must be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times, exploiting an image fusion scheme originally proposed for color imaging. To calibrate depth differences due to the change in integration time, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images with different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and few motion artifacts. To evaluate the proposed method, we captured the moving bar of a metronome with different integration times. The experiment shows that the proposed method can effectively remove motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
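
    The joint-histogram estimation of a depth transfer function can be sketched as follows (a toy illustration on invented synthetic data; taking the conditional mean per bin is one plausible realization, not necessarily the authors'):

    ```python
    import numpy as np

    def depth_transfer(d_short, d_long, n_bins=64):
        """Estimate a transfer function mapping short-integration depths to
        long-integration depths as the conditional mean of their joint histogram."""
        bins = np.linspace(d_short.min(), d_short.max(), n_bins + 1)
        idx = np.clip(np.digitize(d_short.ravel(), bins) - 1, 0, n_bins - 1)
        sums = np.bincount(idx, weights=d_long.ravel(), minlength=n_bins)
        counts = np.bincount(idx, minlength=n_bins)
        centers = 0.5 * (bins[:-1] + bins[1:])
        mean = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
        return centers, mean

    # Synthetic offset between integration times: d_long = d_short + 0.05 m + noise
    rng = np.random.default_rng(1)
    d_s = rng.uniform(0.5, 3.0, size=10_000)
    d_l = d_s + 0.05 + rng.normal(0, 0.002, size=d_s.size)
    centers, transfer = depth_transfer(d_s, d_l)
    ```

    In this toy case the recovered transfer function is simply a constant 0.05 m shift above the identity, which is what the bin-wise conditional means reveal.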

  8. Spectrally based bathymetric mapping of a dynamic, sand‐bedded channel: Niobrara River, Nebraska, USA

    USGS Publications Warehouse

    Dilbone, Elizabeth; Legleiter, Carl; Alexander, Jason S.; McElroy, Brandon

    2018-01-01

    Methods for spectrally based mapping of river bathymetry have been developed and tested in clear‐flowing, gravel‐bed channels, with limited application to turbid, sand‐bed rivers. This study used hyperspectral images and field surveys from the dynamic, sandy Niobrara River to evaluate three depth retrieval methods. The first regression‐based approach, optimal band ratio analysis (OBRA), paired in situ depth measurements with image pixel values to estimate depth. The second approach used ground‐based field spectra to calibrate an OBRA relationship. The third technique, image‐to‐depth quantile transformation (IDQT), estimated depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image‐derived variable. OBRA yielded the lowest depth retrieval mean error (0.005 m) and highest observed versus predicted R2 (0.817). Although misalignment between field and image data did not compromise the performance of OBRA in this study, poor georeferencing could limit regression‐based approaches such as OBRA in dynamic, sand‐bedded rivers. Field spectroscopy‐based depth maps exhibited a mean error with a slight shallow bias (0.068 m) but provided reliable estimates for most of the study reach. IDQT had a strong deep bias but provided informative relative depth maps. Overprediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the depth CDF. Although each of the techniques we tested demonstrated potential to provide accurate depth estimates in sand‐bed rivers, each method also was subject to certain constraints and limitations.
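
    The OBRA step can be sketched as an exhaustive search over band pairs (a minimal illustration on invented synthetic reflectances; the log band ratio and least-squares fit follow the standard OBRA formulation):

    ```python
    import numpy as np

    def obra(depths, reflectance):
        """Optimal band ratio analysis: for every band pair (i, j), regress
        depth against X = ln(R_i / R_j) and keep the pair with the highest R^2.
        reflectance: (n_samples, n_bands), strictly positive."""
        n, b = reflectance.shape
        best = (-np.inf, None, None)          # (R^2, (i, j), coefficients)
        for i in range(b):
            for j in range(b):
                if i == j:
                    continue
                x = np.log(reflectance[:, i] / reflectance[:, j])
                A = np.vstack([x, np.ones(n)]).T
                coef, *_ = np.linalg.lstsq(A, depths, rcond=None)
                pred = A @ coef
                r2 = 1 - np.sum((depths - pred) ** 2) / np.sum(
                    (depths - depths.mean()) ** 2)
                if r2 > best[0]:
                    best = (r2, (i, j), coef)
        return best

    # Synthetic check: band 0 carries the depth signal, band 2 is a stable reference
    rng = np.random.default_rng(2)
    d = rng.uniform(0.1, 2.0, 500)
    R = rng.uniform(0.4, 0.6, (500, 4))
    R[:, 0] = R[:, 2] * np.exp(-0.8 * d)      # exponential attenuation with depth
    r2, pair, coef = obra(d, R)
    ```

    Because ln(R0/R2) = −0.8·d by construction, the search should recover the (0, 2) ratio (or its inverse) with a slope of magnitude 1/0.8 = 1.25.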

  9. Remote measurement of river discharge using thermal particle image velocimetry (PIV) and various sources of bathymetric information

    USGS Publications Warehouse

    Legleiter, Carl; Kinzel, Paul J.; Nelson, Jonathan M.

    2017-01-01

    Although river discharge is a fundamental hydrologic quantity, conventional methods of streamgaging are impractical, expensive, and potentially dangerous in remote locations. This study evaluated the potential for measuring discharge via various forms of remote sensing, primarily thermal imaging of flow velocities but also spectrally-based depth retrieval from passive optical image data. We acquired thermal image time series from bridges spanning five streams in Alaska and observed strong agreement between velocities measured in situ and those inferred by Particle Image Velocimetry (PIV), which quantified advection of thermal features by the flow. The resulting surface velocities were converted to depth-averaged velocities by applying site-specific, calibrated velocity indices. Field spectra from three clear-flowing streams provided strong relationships between depth and reflectance, suggesting that, under favorable conditions, spectrally-based bathymetric mapping could complement thermal PIV in a hybrid approach to remote sensing of river discharge; this strategy would not be applicable to larger, more turbid rivers, however. A more flexible and efficient alternative might involve inferring depth from thermal data based on relationships between depth and integral length scales of turbulent fluctuations in temperature, captured as variations in image brightness. We observed moderately strong correlations for a site-aggregated data set that reduced station-to-station variability but encompassed a broad range of depths. Discharges calculated using thermal PIV-derived velocities were within 15% of in situ measurements when combined with depths measured directly in the field or estimated from field spectra and within 40% when the depth information also was derived from thermal images. The results of this initial, proof-of-concept investigation suggest that remote sensing techniques could facilitate measurement of river discharge.
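
    The final discharge computation, combining PIV-derived surface velocities with depths through a calibrated velocity index, can be sketched as a midsection-style summation (the station values and the 0.85 index here are illustrative assumptions):

    ```python
    import numpy as np

    def discharge(surface_v, depth, width, velocity_index=0.85):
        """Estimate discharge Q = sum(alpha * v_surf * d * w) across stations,
        converting surface velocities to depth-averaged velocities with a
        calibrated velocity index alpha."""
        v_mean = velocity_index * np.asarray(surface_v)
        return float(np.sum(v_mean * np.asarray(depth) * np.asarray(width)))

    # Five stations across a small channel
    v_surf = [0.4, 0.7, 0.9, 0.6, 0.3]   # m/s, from thermal PIV
    d      = [0.2, 0.5, 0.8, 0.5, 0.2]   # m, from spectrally based bathymetry
    w      = [1.0, 1.0, 1.0, 1.0, 1.0]   # m, station widths
    Q = discharge(v_surf, d, w)          # m^3/s
    ```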

  10. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse, depth images. With the advent of deep learning there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure, while variations in image patches can be used to reconstruct local depth at high resolution. The main contribution of this paper is a supervised depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information, and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset, we are able to learn a correspondence between local RGB information and local depth while preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and on our own recordings using a low-cost camera and LiDAR setup.

  11. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    NASA Astrophysics Data System (ADS)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

    Quantitatively estimating the depth of a subsurface anomaly with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the resulting thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution and therefore yield only finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution for axially exploring the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide the closest possible estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.
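
    The spectral zooming idea can be illustrated by evaluating the z-transform directly on a finely sampled frequency band (a minimal stand-in for a full chirp z-transform; the signal frequencies and record length are invented):

    ```python
    import numpy as np

    def zoom_dft(x, fs, f_start, f_stop, m):
        """Evaluate the z-transform of x at m points on the unit circle
        between f_start and f_stop -- the spectral zoom a chirp z-transform
        provides, computed here by direct summation for clarity."""
        n = np.arange(len(x))
        freqs = np.linspace(f_start, f_stop, m)
        z = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
        return freqs, z @ x

    # Two closely spaced thermal-response tones, 0.100 and 0.104 Hz, in a
    # short record where the plain FFT grid (fs/N ~ 0.0039 Hz) is too coarse
    fs, t = 1.0, np.arange(256)
    x = np.sin(2 * np.pi * 0.100 * t) + np.sin(2 * np.pi * 0.104 * t)
    freqs, X = zoom_dft(x, fs, 0.09, 0.12, 600)
    peak = freqs[np.argmax(np.abs(X))]

    # Sanity check: on the FFT grid the zoom DFT reproduces np.fft.fft
    _, X_full = zoom_dft(x, fs, 0.0, (len(x) - 1) * fs / len(x), len(x))
    ```

    Note that zooming refines the frequency sampling of the same record; like the chirp z-transform, it does not add information beyond the record length, but it lets closely spaced spectral features be located far more precisely than the coarse FFT grid allows.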

  12. Approach for scene reconstruction from the analysis of a triplet of still images

    NASA Astrophysics Data System (ADS)

    Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle

    1997-03-01

    Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a major challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual visits, 3D teleconferencing, and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referred to a given view is built using a fusion criterion that takes into account depth coherency, visibility constraints, and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, edge detection segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the different depth class numbers using a coherence test on depth values, according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.

  13. Monte Carlo simulation of the spatial resolution and depth sensitivity of two-dimensional optical imaging of the brain

    PubMed Central

    Tian, Peifang; Devor, Anna; Sakadžić, Sava; Dale, Anders M.; Boas, David A.

    2011-01-01

    Absorption or fluorescence-based two-dimensional (2-D) optical imaging is widely employed in functional brain imaging. The image is a weighted sum of the real signal from the tissue at different depths. This weighting function is defined as "depth sensitivity." Characterizing depth sensitivity and spatial resolution is important for better interpreting functional imaging data. However, due to light scattering and absorption in biological tissues, our knowledge of these is incomplete. We use Monte Carlo simulations to carry out a systematic study of spatial resolution and depth sensitivity for 2-D optical imaging methods with configurations typically encountered in functional brain imaging. We found the following: (i) the spatial resolution is <200 μm for numerical aperture (NA) ≤0.2 or focal plane depth ≤300 μm. (ii) More than 97% of the signal comes from the top 500 μm of the tissue. (iii) For activated columns with lateral size larger than the spatial resolution, changing NA and focal plane depth does not affect depth sensitivity. (iv) For either smaller columns or large columns covered by surface vessels, increasing NA and/or focal plane depth may improve depth sensitivity at deeper layers. Our results provide valuable guidance for the optimization of optical imaging systems and data interpretation. PMID:21280912
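
    The notion of depth sensitivity can be illustrated with a drastically simplified toy model (a 1D random walk, not the authors' full 3D Monte Carlo; every optical parameter below is invented):

    ```python
    import numpy as np

    def detected_max_depths(n_photons=20_000, mfp=0.05, p_absorb=0.05, seed=0):
        """1D photon random walk: exponential free paths (mean free path `mfp`
        in mm), random up/down direction after each scattering event, absorption
        with probability `p_absorb` per event. Returns the maximum depth reached
        by each photon that returns to the surface (z < 0, i.e. is detected)."""
        rng = np.random.default_rng(seed)
        max_depths = []
        for _ in range(n_photons):
            z = rng.exponential(mfp)              # first step: straight down
            deepest = z
            while True:
                if rng.random() < p_absorb:       # absorbed, never detected
                    break
                z += rng.exponential(mfp) * rng.choice([-1.0, 1.0])
                if z < 0:                         # escaped: photon is detected
                    max_depths.append(deepest)
                    break
                deepest = max(deepest, z)
        return np.array(max_depths)

    depths = detected_max_depths()
    frac_shallow = np.mean(depths < 0.5)          # signal from the top 0.5 mm
    ```

    Even this crude walk reproduces the qualitative finding: most detected photons never venture deep, so the bulk of the signal samples only the superficial tissue.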

  14. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    PubMed

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

    Three-dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system for imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999), and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low-cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
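
    The repeatability statistics quoted above can be computed as sketched below (the formulas follow common test-retest definitions; the paired volumes are invented, and Pearson's r stands in here for the intraclass correlation coefficient):

    ```python
    import numpy as np

    def repeatability_stats(trial1, trial2):
        """Test-retest repeatability: typical error (SD of differences / sqrt(2)),
        technical error of measurement as a percentage of the grand mean, and
        Pearson's r as a simple stand-in for the reliability coefficient."""
        t1, t2 = np.asarray(trial1, float), np.asarray(trial2, float)
        diff = t2 - t1
        typical_error = diff.std(ddof=1) / np.sqrt(2)
        tem = np.sqrt(np.sum(diff ** 2) / (2 * len(diff)))
        tem_pct = 100 * tem / np.mean(np.concatenate([t1, t2]))
        r = np.corrcoef(t1, t2)[0, 1]
        return typical_error, tem_pct, r

    # Paired mid-trunk volume estimates (litres) from two repeated scans
    v1 = np.array([28.1, 35.4, 41.2, 30.8, 37.9, 45.3])
    v2 = np.array([28.3, 35.2, 41.5, 30.9, 37.7, 45.6])
    te, tem_pct, r = repeatability_stats(v1, v2)
    ```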

  15. High resolution axicon-based endoscopic FD OCT imaging with a large depth range

    NASA Astrophysics Data System (ADS)

    Lee, Kye-Sung; Hurley, William; Deegan, John; Dean, Scott; Rolland, Jannick P.

    2010-02-01

    Endoscopic imaging of tubular structures, such as the tracheobronchial tree, could benefit from imaging optics with an extended depth of focus (DOF). Such optics could accommodate varying sizes of tubular structures across patients and along the tree within a single patient. In this paper, we demonstrate an extended DOF without sacrificing resolution, showing rotational images of biological tubular samples with 2.5 μm axial resolution, 10 μm lateral resolution, and a >4 mm depth range using a custom-designed probe.

  16. Action recognition using multi-scale histograms of oriented gradients based depth motion trail Images

    NASA Astrophysics Data System (ADS)

    Wang, Guanxi; Tie, Yun; Qi, Lin

    2017-07-01

    In this paper, we propose a novel action recognition approach that computes multi-scale histograms of oriented gradients (MSHOG) from sequences of depth maps. Each depth frame in a depth video sequence is projected onto three orthogonal Cartesian planes. Under each projection view, the absolute difference between two consecutive projected maps is accumulated through the depth video sequence to form a depth motion trail image (DMTI). The MSHOG is then computed from these images for the representation of an action. In addition, we apply L2-regularized collaborative representation (L2-CRC) to classify actions. We evaluate the proposed approach on the MSR Action3D and MSRGesture3D datasets. Promising experimental results demonstrate the effectiveness of the proposed method.
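
    The DMTI accumulation step (not the MSHOG descriptor) can be sketched as follows, for a single projection view with an invented moving-square sequence:

    ```python
    import numpy as np

    def dmti(depth_seq):
        """Depth motion trail image: accumulate absolute differences between
        consecutive projected depth maps over the sequence (one view)."""
        seq = np.asarray(depth_seq, float)
        return np.sum(np.abs(np.diff(seq, axis=0)), axis=0)

    # Toy sequence: a 3x3 bright square sweeping left to right across 5 frames
    frames = np.zeros((5, 16, 16))
    for t in range(5):
        frames[t, 6:9, 2 + 3 * t : 5 + 3 * t] = 1.0
    trail = dmti(frames)
    ```

    The trail is nonzero exactly where the square moved (rows 6–8), which is the motion cue the descriptor is then computed from.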

  17. Three-dimensional fluorescence-enhanced optical tomography using a hand-held probe based imaging system

    PubMed Central

    Ge, Jiajia; Zhu, Banghe; Regalado, Steven; Godavarty, Anuradha

    2008-01-01

    Hand-held based optical imaging systems are a recent development towards diagnostic imaging of breast cancer. To date, all hand-held based optical imagers perform only surface mapping and target localization and are not capable of tomographic imaging. Herein, a novel hand-held probe based optical imager is developed towards three-dimensional (3-D) optical tomography studies. The unique features of this optical imager, which primarily consists of a hand-held probe and an intensified charge-coupled device detector, are its ability to: (i) image large tissue areas (5×10 sq. cm) in a single scan; (ii) perform simultaneous multiple-point illumination and collection, thus reducing the overall imaging time; and (iii) adapt to varying tissue curvatures, owing to a flexible probe head design. Experimental studies are performed in the frequency domain on large slab phantoms (∼650 ml) using fluorescence target(s) under perfect uptake (1:0) contrast ratios and varying target depths (1–2 cm) and X-Y locations. The advantage of simultaneous over sequential multiple-point illumination for 3-D tomography is experimentally demonstrated. The feasibility of 3-D optical tomography has been demonstrated for the first time using a hand-held based optical imager. Preliminary fluorescence-enhanced optical tomography studies are able to reconstruct 0.45 ml target(s) located at different target depths (1–2 cm). However, depth recovery was limited as the actual target depth increased, since only reflectance measurements were acquired. Extensive tomography studies are currently being carried out to determine the resolution and performance limits of the imager on flat and curved phantoms. PMID:18697559

  18. Three-dimensional fluorescence-enhanced optical tomography using a hand-held probe based imaging system.

    PubMed

    Ge, Jiajia; Zhu, Banghe; Regalado, Steven; Godavarty, Anuradha

    2008-07-01

    Hand-held based optical imaging systems are a recent development towards diagnostic imaging of breast cancer. To date, all hand-held based optical imagers perform only surface mapping and target localization and are not capable of tomographic imaging. Herein, a novel hand-held probe based optical imager is developed towards three-dimensional (3-D) optical tomography studies. The unique features of this optical imager, which primarily consists of a hand-held probe and an intensified charge-coupled device detector, are its ability to: (i) image large tissue areas (5 x 10 sq. cm) in a single scan; (ii) perform simultaneous multiple-point illumination and collection, thus reducing the overall imaging time; and (iii) adapt to varying tissue curvatures, owing to a flexible probe head design. Experimental studies are performed in the frequency domain on large slab phantoms (approximately 650 ml) using fluorescence target(s) under perfect uptake (1:0) contrast ratios and varying target depths (1-2 cm) and X-Y locations. The advantage of simultaneous over sequential multiple-point illumination for 3-D tomography is experimentally demonstrated. The feasibility of 3-D optical tomography has been demonstrated for the first time using a hand-held based optical imager. Preliminary fluorescence-enhanced optical tomography studies are able to reconstruct 0.45 ml target(s) located at different target depths (1-2 cm). However, depth recovery was limited as the actual target depth increased, since only reflectance measurements were acquired. Extensive tomography studies are currently being carried out to determine the resolution and performance limits of the imager on flat and curved phantoms.

  19. Off-axis holographic laser speckle contrast imaging of blood vessels in tissues

    NASA Astrophysics Data System (ADS)

    Abdurashitov, Arkady; Bragina, Olga; Sindeeva, Olga; Sergey, Sindeev; Semyachkina-Glushkovskaya, Oxana V.; Tuchin, Valery V.

    2017-09-01

    Laser speckle contrast imaging (LSCI) has become one of the most common tools for functional imaging in tissues. Its incomplete theoretical description and the sophisticated interpretation its measurements require are outweighed by low-cost, simple hardware, speed, consistent results, and repeatability. Beyond the relatively small measurement volume, with a probing depth of around 700 μm for visible-range illumination, conventional LSCI offers no depth selectivity; furthermore, with a high-NA objective, the actual penetration depth of light in tissue is greater than the depth of field (DOF) of the imaging system. Thus, information about these out-of-focus regions persists in the recorded frames but cannot be retrieved by intensity-based registration. We propose a simple modification of the LSCI system based on off-axis holography that introduces post-acquisition refocusing, overcoming both the depth-selectivity and DOF problems and offering the potential to produce a cross-sectional view of the specimen.

  20. Depth-encoded all-fiber swept source polarization sensitive OCT

    PubMed Central

    Wang, Zhao; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Lee, ByungKun; Choi, WooJhon; Potsaid, Benjamin; Liu, Jonathan; Jayaraman, Vijaysekhar; Cable, Alex; Kraus, Martin F.; Liang, Kaicheng; Hornegger, Joachim; Fujimoto, James G.

    2014-01-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of conventional OCT and can assess depth-resolved tissue birefringence in addition to intensity. Most existing PS-OCT systems are relatively complex and their clinical translation remains difficult. We present a simple and robust all-fiber PS-OCT system based on swept source technology and polarization depth-encoding. Polarization multiplexing was achieved using a polarization maintaining fiber. Polarization sensitive signals were detected using fiber based polarization beam splitters, and polarization controllers were used to remove the polarization ambiguity. A simplified post-processing algorithm was proposed for speckle noise reduction, relaxing the demand for phase stability. We demonstrated system designs for both ophthalmic and catheter-based PS-OCT. For ophthalmic imaging, we used an optical clock frequency doubling method to extend the imaging range of a commercially available short cavity light source to improve polarization depth-encoding. For catheter based imaging, we demonstrated 200 kHz PS-OCT imaging using a MEMS-tunable vertical cavity surface emitting laser (VCSEL) and a high speed micromotor imaging catheter. The system was demonstrated in human retina, finger and lip imaging, as well as ex vivo swine esophagus and cardiovascular imaging. The all-fiber PS-OCT is easier to implement and maintain compared to previous PS-OCT systems and can be more easily translated to clinical applications due to its robust design. PMID:25401008

  1. Volumetric 3D display with multi-layered active screens for enhanced the depth perception (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook

    2016-09-01

    Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various psychological factors, such as accommodation, binocular parallax, convergence, and motion parallax, are used to recognize a 3D image. Glasses-type 3D displays use only binocular disparity among the 3D depth cues; however, this causes visual fatigue and headaches due to accommodation conflict and distorted depth perception. Thus, holographic and volumetric displays are expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all the factors of depth perception, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays can represent images using voxels, which are physical volume elements; however, large amounts of data are required to represent depth information on voxels. To encode 3D information simply, a compact depth-fused 3D (DFD) display is introduced, which can create a polarization-distributed depth map (PDDM) image containing both a 2D color image and a depth image. In this paper, a new volumetric 3D display system is shown using PDDM images controlled by a polarization controller. To produce the PDDM image, the polarization states of light passing through a spatial light modulator (SLM) were analyzed with Stokes parameters as a function of gray level. Based on this analysis, a polarization controller was designed to convert the PDDM image into sectioned depth images. After synchronizing the PDDM images with the active screens, the reconstructed 3D image is realized. Acknowledgment This work was supported by `The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea
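
    The Stokes-parameter characterization of a polarization state mentioned above can be sketched from six intensity measurements (standard definitions; the measurement values below are invented):

    ```python
    import numpy as np

    def stokes(i0, i90, i45, i135, ircp, ilcp):
        """Stokes vector [S0, S1, S2, S3] from intensities measured behind
        linear polarizers at 0/90/45/135 degrees and right/left circular
        analyzers."""
        return np.array([i0 + i90, i0 - i90, i45 - i135, ircp - ilcp])

    # Fully linearly polarized light at 45 degrees: expect S = [1, 0, 1, 0]
    S = stokes(0.5, 0.5, 1.0, 0.0, 0.5, 0.5)
    dop = np.sqrt(S[1] ** 2 + S[2] ** 2 + S[3] ** 2) / S[0]  # degree of polarization
    ```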

  2. Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0.

    PubMed

    Jin, Xin; Liu, Li; Chen, Yanqin; Dai, Qionghai

    2017-05-01

    This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. The derivation of the PSF is based on the Fresnel diffraction equation and an image formation analysis of a self-built imaging system, which is divided into two sub-systems to reflect the relay imaging properties of plenoptic camera 2.0. The variations in the PSF caused by changes in object depth and sensor position are analyzed. A mathematical model of the FSPSF is further derived and verified to be depth-invariant. Experiments on real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.

  3. Depth extraction method with high accuracy in integral imaging based on moving array lenslet technique

    NASA Astrophysics Data System (ADS)

    Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing

    2018-03-01

    In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within one pitch to capture N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain slice images of the 3D scene, and the sum modulus (SMD) blur metric is applied to these slice images to extract the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
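The SMD-style blur metric described above amounts to summing absolute differences between neighboring pixels of each slice and keeping the depth whose slice scores highest. A minimal NumPy sketch, with the understanding that the function names and the exact metric variant are assumptions rather than the paper's code:

```python
import numpy as np

def smd_focus_metric(img):
    """Sum-modulus-difference style sharpness score: sums absolute
    differences between horizontally and vertically adjacent pixels.
    Sharper (in-focus) slices yield higher scores."""
    img = np.asarray(img, dtype=np.float64)
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return dx + dy

def best_focused_depth(slices, depths):
    """Pick the depth whose reconstructed slice scores highest."""
    scores = [smd_focus_metric(s) for s in slices]
    return depths[int(np.argmax(scores))]
```

A checkerboard (high local contrast) scores higher than a flat image, so among a stack of reconstructed slices the in-focus one wins.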

  4. Telecentric 3D profilometry based on phase-shifting fringe projection.

    PubMed

    Li, Dong; Liu, Chunyang; Tian, Jindong

    2014-12-29

    Three-dimensional shape measurement in the microscopic range is becoming increasingly important with the development of micro-manufacturing technology. Microscopic fringe projection techniques offer fast, robust, full-field measurement for field sizes from approximately 1 mm2 to several cm2. However, the depth of field of a non-telecentric microscope is very small, which is often not sufficient to measure the complete depth of a 3D object, and the phase-to-depth calibration is complicated, requiring a precision translation stage and a reference plane. In this paper, we propose a novel telecentric phase-shifting projected fringe profilometry for small, thick objects. Telecentric imaging extends the depth of field to approximately the millimeter order, much larger than that of conventional microscopy. To avoid the complicated phase-to-depth conversion of microscopic fringe projection, we develop a new calibration method for the camera and projector based on a telecentric imaging model. On this basis, a 3D reconstruction under telecentric imaging is presented using stereovision aided by fringe phase maps. Experiments demonstrate the feasibility and high measurement accuracy of the proposed system for thick objects.

  5. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integrated imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, using the sum of absolute differences (SAD) as the similarity measure. The reference image is then separated into layers and the parallax of each layer is calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and shifted. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between each image layer and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its demand for high-precision depth maps and its complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very impressive. On average, the method produces satisfactory image quality: relative to real viewpoint images, the SSIM of the results reaches 0.9525, the PSNR reaches 38.353, and the image histogram similarity reaches 93.77%.
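The SAD similarity measure used in the depth-estimation step can be illustrated as an exhaustive block-matching disparity search between two views. This is a generic sketch, not the authors' implementation; the window size and search range are arbitrary choices:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(np.float64) - block_b.astype(np.float64)).sum()

def disparity_at(left, right, y, x, block=3, max_disp=8):
    """Find the horizontal shift of a left-image block that best matches
    the right image by exhaustive SAD search (smaller SAD = better match).
    Disparity d means the feature at column x in `left` sits at x-d in
    `right`; larger d corresponds to a closer scene point."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        if x - h - d < 0:          # candidate window would leave the image
            break
        cand = right[y - h:y + h + 1, x - h - d:x + h + 1 - d]
        cost = sad(ref, cand)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Repeating this search per pixel yields the dense depth map that the layering step then quantizes into layers.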

  6. Action recognition in depth video from RGB perspective: A knowledge transfer manner

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen

    2018-03-01

    Exploiting different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for transferring human action recognition from RGB video to depth video using domain adaptation, where features learned from RGB videos are used to recognize actions in depth videos. We take three steps to solve this problem. First, because video is more complex than a still image, carrying both spatial and temporal information, the dynamic image method is used to encode each RGB or depth video into a single image; on this basis, most image feature extraction methods become applicable to video. Second, once a video is represented as an image, a standard CNN model can be used for training and testing, and can also serve as a feature extractor owing to its powerful representational ability. Third, since RGB and depth videos belong to two different domains, domain adaptation is introduced to make the two feature domains more similar, so that the features learned from the RGB video model can be used directly for depth video classification. We evaluate the proposed method on a large RGB-D action dataset (NTU RGB-D), and obtain more than 2% accuracy improvement by using domain adaptation from RGB to depth action recognition.
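The dynamic image representation mentioned in the first step is commonly computed with approximate rank pooling, where frame t of T (1-indexed) receives weight 2t - T - 1, so later frames count positively and earlier ones negatively, encoding temporal order in a single image. A sketch under that assumption (the paper may use a different variant):

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a video (T, H, W[, C]) into one image via approximate
    rank pooling: weighted sum of frames with weights 2t - T - 1.
    For T=4 the weights are [-3, -1, 1, 3], which sum to zero, so a
    static video collapses to a zero image."""
    frames = np.asarray(frames, dtype=np.float64)
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    weights = 2.0 * t - T - 1.0
    return np.tensordot(weights, frames, axes=(0, 0))
```

Because the weights sum to zero, only temporal *change* survives, which is exactly what makes the collapsed image useful as CNN input for action recognition.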

  7. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus area and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free space that do not belong to any object surface in 3D space. Generally, if not removed, an off-focus area adversely affects the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be assessed by analyzing the statistical variance of its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results demonstrate that this method is capable of removing off-focus points in the reconstructed depth image. The results also show that using a GPU to remove the off-focus points greatly improves the overall computational speed compared with using a CPU.
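The focus/off-focus classification described above can be sketched as thresholding the per-point variance across elemental-image samples: when the samples agree (low variance), the point lies on a real surface at the reconstructed depth. The threshold value and array layout here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def classify_focus_points(samples, threshold):
    """samples: array of shape (num_elemental_images, num_points) holding
    the intensities that each elemental image contributes to each
    reconstructed 3D point.  Points whose samples agree (variance below
    `threshold`) are kept as focus points; the rest are off-focus points
    in free space and can be discarded."""
    var = np.var(np.asarray(samples, dtype=np.float64), axis=0)
    return var <= threshold   # boolean mask: True = focus, False = off-focus
```

Since the variance of every point is independent of every other point, this map parallelizes trivially, which is what makes the GPU implementation effective.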

  8. An approach of point cloud denoising based on improved bilateral filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is employed to process the depth image. The mobile platform moves flexibly and its control interface is convenient. Because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed and applied to the depth images obtained by the Kinect sensor. The results show that noise removal is improved compared with standard bilateral filtering. Offline, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method improves both the processing speed for depth images and the quality of the resulting point clouds.
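The abstract does not spell out the LBF variant, but the edge-preserving smoothing it builds on is the standard bilateral filter: each output depth is a weighted mean of its neighbors, weighted by spatial closeness and depth similarity. A plain windowed sketch (parameter values are arbitrary, and this is the baseline filter rather than the authors' LBF):

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Windowed bilateral filter on a depth map.  sigma_s controls the
    spatial Gaussian, sigma_r the depth-similarity Gaussian, so noise on
    smooth surfaces is averaged away while depth discontinuities (large
    depth differences) receive near-zero weight and stay sharp."""
    depth = np.asarray(depth, dtype=np.float64)
    h, w = depth.shape
    out = np.empty_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            win = depth[i0:i1, j0:j1]
            sw = spatial[i0 - i + radius:i1 - i + radius,
                         j0 - j + radius:j1 - j + radius]
            rw = np.exp(-(win - depth[i, j])**2 / (2.0 * sigma_r**2))
            wgt = sw * rw
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out
```

The double loop is exactly the cost the paper attacks: restricting or reorganizing this per-pixel window work is where a "local" variant gains its speed.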

  9. A method to generate soft shadows using a layered depth image and warping.

    PubMed

    Im, Yeon-Ho; Han, Chang-Young; Kim, Lee-Sup

    2005-01-01

    We present an image-based method for propagating area light illumination through a Layered Depth Image (LDI) to generate soft shadows from opaque and nonrefractive transparent objects. In our approach, using the depth peeling technique, we render an LDI from a reference light sample on a planar light source. The illumination of every pixel in the LDI is then determined for all other sample points via warping, an image-based rendering technique that approximates ray tracing in our method. We use an image-warping equation and McMillan's warp-ordering algorithm to find the intersections between rays and polygons, and the order of those intersections. Experiments on opaque and nonrefractive transparent objects are presented. The results indicate that our approach generates soft shadows quickly and effectively. Advantages and disadvantages of the proposed method are also discussed.

  10. Study on super-resolution three-dimensional range-gated imaging technology

    NASA Astrophysics Data System (ADS)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

    Range-gated three-dimensional imaging technology has been a hotspot in recent years because of its high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on a study of the principle of the intensity-related method, this paper carries out theoretical analysis and experimental research. The experimental system uses a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of imaging depth and distance to achieve different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. The calculation of 3D point clouds based on the triangle method is analyzed: a 15 m depth slice of the target 3D point cloud is obtained from two frames of images, with a distance precision better than 0.5 m. The influence of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy is analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.

  11. RGB-D depth-map restoration using smooth depth neighborhood supports

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Xue, Haoyang; Yu, Zhongjie; Wu, Qiang; Yang, Jie

    2015-05-01

    A method to restore the depth map of an RGB-D image using smooth depth neighborhood (SDN) supports is presented. The SDN supports are computed from the color image corresponding to the depth map. Compared with the most widely used square supports, the proposed SDN supports better capture the local structure of the object: only pixels with similar depth values are allowed into the support. We combine our SDN supports with the joint bilateral filter (JBF) to form the SDN-JBF and use it to restore depth maps. Experimental results show that our SDN-JBF can not only rectify misaligned depth pixels but also preserve sharp depth discontinuities.

  12. Distance Metric Learning Using Privileged Information for Face Verification and Person Re-Identification.

    PubMed

    Xu, Xinxing; Li, Wen; Xu, Dong

    2015-12-01

    In this paper, we propose a new approach to improve face verification and person re-identification in RGB images by leveraging a set of RGB-D data, in which we have additional depth images in the training data captured using depth cameras such as Kinect. In particular, we extract visual features and depth features from the RGB images and depth images, respectively. As the depth features are available only in the training data, we treat them as privileged information and formulate this task as a distance metric learning with privileged information problem. Unlike traditional face verification and person re-identification, which use only visual features, we further employ the extra depth features in the training data to improve the learning of the distance metric in the training process. Based on the information-theoretic metric learning (ITML) method, we propose a new formulation called ITML with privileged information (ITML+) for this task. We also present an efficient algorithm based on the cyclic projection method for solving the proposed ITML+ formulation. Extensive experiments on the challenging face data sets EUROCOM and CurtinFaces for face verification, as well as the BIWI RGBD-ID data set for person re-identification, demonstrate the effectiveness of our proposed approach.

  13. Comparing Yb-fiber and Ti:Sapphire lasers for depth resolved imaging of human skin (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Balu, Mihaela; Saytashev, Ilyas; Hou, Jue; Dantus, Marcos; Tromberg, Bruce J.

    2016-02-01

    We report on a direct comparison between Ti:Sapphire and Yb fiber lasers for depth-resolved label-free multimodal imaging of human skin. We found that the penetration depth achieved with the Yb laser was 80% greater than for the Ti:Sapphire. Third harmonic generation (THG) imaging with Yb laser excitation provides additional information about skin structure. Our results indicate the potential of fiber-based laser systems for moving into clinical use.

  14. Deep learning-based depth estimation from a synthetic endoscopy image training set

    NASA Astrophysics Data System (ADS)

    Mahmood, Faisal; Durr, Nicholas J.

    2018-03-01

    Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through an endoscopic colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, in part due to the lack of depth information and contrast of the surface of the colon. Estimating depth using conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can easily be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since there is limited availability of colonoscopy images with ground-truth depth maps and colon texture is highly patient-specific, we generated training images using a synthetic, texture-free colon phantom to train our models. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated Computer Aided Detection (CAD) to assist in identifying lesions.

  15. Super-resolution for asymmetric resolution of FIB-SEM 3D imaging using AI with deep learning.

    PubMed

    Hagita, Katsumi; Higuchi, Takeshi; Jinnai, Hiroshi

    2018-04-12

    Scanning electron microscopy equipped with a focused ion beam (FIB-SEM) is a promising three-dimensional (3D) imaging technique for nano- and meso-scale morphologies. In FIB-SEM, the specimen surface is stripped by an ion beam and imaged by an SEM installed orthogonally to the FIB. The lateral resolution is governed by the SEM, while the depth resolution, i.e., along the FIB milling direction, is determined by the thickness of the stripped thin layer. In most cases, the lateral resolution is superior to the depth resolution; hence, asymmetric resolution is generated in the 3D image. Here, we propose a new deep-learning-based image-processing approach for super-resolution of 3D images with such asymmetric resolution, restoring the depth resolution to achieve symmetric resolution. The deep-learning-based method learns from high-resolution sub-images obtained via SEM and recovers low-resolution sub-images parallel to the FIB milling direction. The 3D morphologies of polymeric nano-composites are used as test images, which are subjected to the deep-learning-based method as well as conventional methods. We find that the former yields superior restoration, particularly as the asymmetry of the resolution increases. Our super-resolution approach for images with asymmetric resolution enables a reduction in observation time.

  16. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. Because such systems rely on visual image characteristics, they must overcome image degradations such as random image-capture noise, motion, and quantization effects. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing these conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of applying this model show significant improvements in the restored field. Previous attempts at restoring depth or displacement fields assumed homogeneous characteristics, which smoothed discontinuities and lost edges. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field; it has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields.
The inclusion of past depth and displacement fields provides a means of incorporating temporal information into the restoration process. A summary of the conditions that indicate which type of filtering should be applied to a field is provided.

  17. Development of Extended-Depth Swept Source Optical Coherence Tomography for Applications in Ophthalmic Imaging of the Anterior and Posterior Eye

    NASA Astrophysics Data System (ADS)

    Dhalla, Al-Hafeez Zahir

    Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides micron-scale resolution of tissue micro-structure over depth ranges of several millimeters. This imaging technique has had a profound effect on the field of ophthalmology, wherein it has become the standard of care for the diagnosis of many retinal pathologies. Applications of OCT in the anterior eye, as well as for imaging of coronary arteries and the gastro-intestinal tract, have also shown promise, but have not yet achieved widespread clinical use. The usable imaging depth of OCT systems is most often limited by one of three factors: optical attenuation, inherent imaging range, or depth-of-focus. The first of these, optical attenuation, stems from the limitation that OCT only detects singly-scattered light. Thus, beyond a certain penetration depth into turbid media, essentially all of the incident light will have been multiply scattered, and can no longer be used for OCT imaging. For many applications (especially retinal imaging), optical attenuation is the most restrictive of the three imaging depth limitations. However, for some applications, especially anterior segment, cardiovascular (catheter-based) and GI (endoscopic) imaging, the usable imaging depth is often not limited by optical attenuation, but rather by the inherent imaging depth of the OCT systems. This inherent imaging depth, which is specific to Fourier Domain OCT, arises due to two factors: sensitivity fall-off and the complex conjugate ambiguity. Finally, due to the trade-off between lateral resolution and axial depth-of-focus inherent in diffractive optical systems, additional depth limitations sometimes arise in either high-lateral-resolution or extended-depth OCT imaging systems. The depth-of-focus limitation is most apparent in applications such as adaptive optics (AO-) OCT imaging of the retina, and extended depth imaging of the ocular anterior segment.
In this dissertation, techniques for extending the imaging range of OCT systems are developed. These techniques include the use of a high spectral purity swept source laser in a full-field OCT system, as well as the use of a peculiar phenomenon known as coherence revival to resolve the complex conjugate ambiguity in swept source OCT. In addition, a technique for extending the depth of focus of OCT systems by using a polarization-encoded, dual-focus sample arm is demonstrated. Along the way, other related advances are also presented, including the development of techniques to reduce crosstalk and speckle artifacts in full-field OCT, and the use of fast optical switches to increase the imaging speed of certain low-duty cycle swept source OCT systems. Finally, the clinical utility of these techniques is demonstrated by combining them to demonstrate high-speed, high resolution, extended-depth imaging of both the anterior and posterior eye simultaneously and in vivo.

  18. Improving depth estimation from a plenoptic camera by patterned illumination

    NASA Astrophysics Data System (ADS)

    Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.

    2015-05-01

    Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device, and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, manipulate the aperture synthetically, and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image have sufficient features for the registration; in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.

  19. An efficient method for the fusion of light field refocused images

    NASA Astrophysics Data System (ADS)

    Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei

    2018-04-01

    Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture; as a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Most multi-focus image fusion algorithms are not designed for large numbers of source images, and the traditional DWT-based fusion approach has serious problems when dealing with many multi-focus images, causing color distortion and ringing artifacts. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can handle a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach in various settings, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
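The SWT pipeline itself needs a wavelet library, but the core idea behind multi-focus fusion, keeping each pixel from whichever source image is locally sharpest, can be sketched with a simple local-variance focus measure. This is an illustrative stand-in, not the paper's SWT method:

```python
import numpy as np

def local_sharpness(img, radius=1):
    """Per-pixel focus measure: variance of intensities in a small
    neighborhood (sharper regions have higher local contrast)."""
    img = np.asarray(img, dtype=np.float64)
    pad = np.pad(img, radius, mode="edge")
    h, w = img.shape
    stack = np.stack([pad[dy:dy + h, dx:dx + w]
                      for dy in range(2 * radius + 1)
                      for dx in range(2 * radius + 1)])
    return stack.var(axis=0)

def fuse_multifocus(images, radius=1):
    """All-in-focus fusion: at each pixel, keep the value from whichever
    source image is locally sharpest there."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    sharp = np.stack([local_sharpness(im, radius) for im in images])
    pick = sharp.argmax(axis=0)                 # winning image per pixel
    stack = np.stack(images)
    return np.take_along_axis(stack, pick[None], axis=0)[0]
```

A wavelet-domain method applies the same winner-take-all (or max-abs) selection to detail coefficients instead of raw pixels, which is what suppresses the ringing that a decimated DWT introduces.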

  20. Subpixel based defocused points removal in photon-limited volumetric dataset

    NASA Astrophysics Data System (ADS)

    Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Maraka, Harsha Vardhan R.; Ryle, James P.; Sheridan, John T.

    2017-03-01

    The asymptotic property of the maximum likelihood estimator (MLE) has been utilized to reconstruct three-dimensional (3D) sectional images in the photon counting imaging (PCI) regime. First, multiple 2D intensity images, known as elemental images (EIs), are captured. The geometric ray-tracing method is then employed to reconstruct the 3D sectional images at various depths. We note that a 3D sectional image consists of both focused and defocused regions, depending on the reconstructed depth position. The defocused portion is redundant and should be removed in order to facilitate image analysis, e.g., 3D object tracking, recognition, classification, and navigation. In this paper, we present a three-step subpixel-level technique (adaptive thresholding, boundary detection, and entropy-based segmentation) to discard the defocused sparse samples from the reconstructed photon-limited 3D sectional images. Simulation results are presented demonstrating the feasibility and efficiency of the proposed method.
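The three steps are not detailed in the abstract; as a sketch of the adaptive-thresholding step, Otsu's classic between-class-variance criterion can separate sparse background photons from in-focus object samples. This is a generic stand-in for the authors' exact method:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's adaptive threshold: histogram the values, then choose the
    cut that maximizes the between-class variance w0*w1*(m0-m1)^2 of the
    two resulting groups.  For photon-limited sectional images this
    separates low-count defocused background from focused samples."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_v = edges[1], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0   # class means
        m1 = (p[k:] * centers[k:]).sum() / w1
        v = w0 * w1 * (m0 - m1) ** 2
        if v > best_v:
            best_t, best_v = v and edges[k], v
    return best_t
```

Boundary detection and entropy-based segmentation would then refine this coarse mask at the subpixel level.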

  1. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images show consistency, and demonstrate that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
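The orthonormal-basis idea can be sketched with an SVD of the stacked depth-variant PSFs: a few leading components then represent the PSF at any depth by a short coefficient vector. This is a generic PCA sketch, not the authors' implementation (in particular, no mean removal is done here, so the basis stays strictly orthonormal for linear recombination):

```python
import numpy as np

def psf_pca_basis(psfs, n_components):
    """Build an orthonormal basis for a family of depth-variant PSFs.
    psfs: array (n_depths, h, w).  Returns (basis, coeffs) where basis
    has shape (n_components, h, w) with orthonormal flattened rows, and
    coeffs[d] are the coefficients representing the PSF at depth d."""
    n, h, w = psfs.shape
    X = psfs.reshape(n, h * w)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]            # principal components as rows
    coeffs = X @ basis.T                 # projection onto the basis
    return basis.reshape(n_components, h, w), coeffs

def reconstruct_psf(basis, coeffs_row):
    """Approximate one depth's PSF from its few basis coefficients."""
    return np.tensordot(coeffs_row, basis, axes=(0, 0))
```

Because convolution is linear, restoring with k basis PSFs instead of n_depths full PSFs is what turns the DV restoration into a tractable sum of k space-invariant problems.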

  2. Image recombination transform algorithm for superresolution structured illumination microscopy

    PubMed Central

    Zhou, Xing; Lei, Ming; Dan, Dan; Yao, Baoli; Yang, Yanlong; Qian, Jia; Chen, Guangde; Bianco, Piero R.

    2016-01-01

    Abstract. Structured illumination microscopy (SIM) is an attractive choice for fast superresolution imaging. Generating structured illumination patterns by interference of laser beams is broadly employed to obtain a high modulation depth, but the polarizations of the laser beams must be elaborately controlled to guarantee high-contrast interference, which leads to a more complex configuration for polarization control. The emerging pattern-projection strategy is much more compact, but the modulation depth of the patterns is deteriorated by the optical transfer function of the optical system, especially at high spatial frequencies near the diffraction limit. The traditional superresolution reconstruction algorithm for interference-based SIM will therefore suffer from many artifacts in the case of projection-based SIM, which possesses a low modulation depth. Here, we propose an alternative reconstruction algorithm based on an image recombination transform, which provides a solution to this problem even at weak modulation depth. We demonstrated the effectiveness of this algorithm in multicolor superresolution imaging of bovine pulmonary arterial endothelial cells in our projection-based SIM system, which applies a computer-controlled digital micromirror device for fast fringe generation and multicolor light-emitting diodes for illumination. The merit of the system, incorporated with the proposed algorithm, allows for fluorescence imaging at excitation intensities even below 1 W/cm2, which is beneficial for long-term, in vivo superresolved imaging of live cells and tissues. PMID:27653935

  3. Photoacoustic imaging with planoconcave optical microresonator sensors: feasibility studies based on phantom imaging

    NASA Astrophysics Data System (ADS)

    Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.

    2017-03-01

    The planar Fabry-Pérot (FP) sensor provides high-quality photoacoustic (PA) images, but beam walk-off limits sensitivity and thus penetration depth to ≈1 cm. Planoconcave microresonator sensors eliminate beam walk-off, enabling sensitivity to be increased by an order of magnitude whilst retaining the highly favourable frequency response and directional characteristics of the FP sensor. The first tomographic PA images obtained in a tissue-realistic phantom using the new sensors are described. These show that the microresonator sensors provide nearly identical image quality to the planar FP sensor but with significantly greater penetration depth (e.g., 2-3 cm) due to their higher sensitivity. This offers the prospect of whole-body small animal imaging and clinical imaging at depths previously unattainable using the planar FP sensor.

  4. In-vivo, real-time cross-sectional images of retina using a GPU enhanced master slave optical coherence tomography system

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2016-03-01

    In our previous reports we demonstrated a novel Fourier-domain optical coherence tomography method, Master Slave optical coherence tomography (MS-OCT), which does not require resampling of data and can deliver en-face images from several depths simultaneously. While ideally suited to delivering information from a selected depth, MS-OCT has so far been inferior to conventional FFT-based OCT in the time needed to produce cross-sectional images. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time by assembling several T-scans from different depths. We analyze the conditions that ensure real-time B-scan imaging, and demonstrate in-vivo real-time images of the human fovea and optic nerve, with resolution and sensitivity comparable to those produced using the traditional Fourier-domain method.

  5. PlenoPatch: Patch-Based Plenoptic Image Manipulation.

    PubMed

    Zhang, Fang-Lue; Wang, Jue; Shechtman, Eli; Zhou, Zi-Ye; Shi, Jia-Xin; Hu, Shi-Min

    2017-05-01

    Patch-based image synthesis methods have been successfully applied for various editing tasks on still images, videos and stereo pairs. In this work we extend patch-based synthesis to plenoptic images captured by consumer-level lenselet-based devices for interactive, efficient light field editing. In our method the light field is represented as a set of images captured from different viewpoints. We decompose the central view into different depth layers, and present it to the user for specifying the editing goals. Given an editing task, our method performs patch-based image synthesis on all affected layers of the central view, and then propagates the edits to all other views. Interaction is done through a conventional 2D image editing user interface that is familiar to novice users. Our method correctly handles object boundary occlusion with semi-transparency, thus can generate more realistic results than previous methods. We demonstrate compelling results on a wide range of applications such as hole-filling, object reshuffling and resizing, changing object depth, light field upscaling and parallax magnification.

  6. Performance evaluation of extended depth of field microscopy in the presence of spherical aberration and noise

    NASA Astrophysics Data System (ADS)

    King, Sharon V.; Yuan, Shuai; Preza, Chrysanthe

    2018-03-01

    Effectiveness of extended depth of field microscopy (EDFM) implementation with wavefront encoding methods is reduced by depth-induced spherical aberration (SA) due to reliance of this approach on a defined point spread function (PSF). Evaluation of the engineered PSF's robustness to SA, when a specific phase mask design is used, is presented in terms of the final restored image quality. Synthetic intermediate images were generated using selected generalized cubic and cubic phase mask designs. Experimental intermediate images were acquired using the same phase mask designs projected from a liquid crystal spatial light modulator. Intermediate images were restored using the penalized space-invariant expectation maximization and the regularized linear least squares algorithms. In the presence of depth-induced SA, systems characterized by radially symmetric PSFs, coupled with model-based computational methods, achieve microscope imaging performance with fewer deviations in structural fidelity (e.g., artifacts) in simulation and experiment and 50% more accurate positioning of 1-μm beads at 10-μm depth in simulation than those with radially asymmetric PSFs. Despite a drop in the signal-to-noise ratio after processing, EDFM is shown to achieve the conventional resolution limit when a model-based reconstruction algorithm with appropriate regularization is used. These trends are also found in images of fixed fluorescently labeled brine shrimp, not adjacent to the coverslip, and fluorescently labeled mitochondria in live cells.

  7. Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. To this end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total, the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.

  8. Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.

    PubMed

    Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun

    2017-07-01

    In recent years, taking photos and capturing videos with mobile devices have become increasingly popular. Emerging applications based on the depth reconstruction technique have been developed, such as Google lens blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult due to the unstable image quality and uncontrolled scene condition in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. Particularly, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues, and to constrain the stereo matching on a finer scale. The proposed framework has been evaluated with challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction.
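
    The coarse-to-fine idea described above can be illustrated with a minimal SAD block matcher in which the coarse-level disparity constrains the search window at the fine level. This is a generic sketch of pyramid-constrained matching, not the authors' implementation; all names and parameters are illustrative.

```python
import numpy as np

def sad_disparity(left, right, max_d, prior=None, radius=2):
    """Brute-force SAD block matching on a rectified pair; an optional
    per-pixel prior narrows the disparity search to [prior-1, prior+1]."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            lo, hi = 0, max_d
            if prior is not None:
                lo = max(0, int(prior[y, x]) - 1)
                hi = min(max_d, int(prior[y, x]) + 1)
            patch = left[y - radius:y + radius + 1, x - radius:x + radius + 1]
            best, best_d = np.inf, 0
            for d in range(lo, hi + 1):
                if x - d < radius:
                    continue  # candidate window would fall off the image
                cand = right[y - radius:y + radius + 1,
                             x - d - radius:x - d + radius + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def coarse_to_fine_disparity(left, right, max_d):
    """Estimate disparity at half resolution, then refine at full
    resolution using the upsampled coarse estimate as a search prior."""
    coarse = sad_disparity(left[::2, ::2], right[::2, ::2], max_d // 2)
    prior = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    prior = prior[:left.shape[0], :left.shape[1]] * 2
    return sad_disparity(left, right, max_d, prior=prior)
```

    Restricting the fine-level search to a narrow band around the coarse estimate is what makes the coarse-to-fine scheme both faster and more robust than exhaustive matching at full resolution.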

  9. Fiber-optic annular detector array for large depth of field photoacoustic macroscopy.

    PubMed

    Bauer-Marschallinger, Johannes; Höllinger, Astrid; Jakoby, Bernhard; Burgholzer, Peter; Berer, Thomas

    2017-03-01

    We report on a novel imaging system for large depth of field photoacoustic scanning macroscopy. Instead of commonly used piezoelectric transducers, fiber-optic based ultrasound detection is applied. The optical fibers are shaped into rings and mainly receive ultrasonic signals stemming from the ring symmetry axes. Four concentric fiber-optic rings with varying diameters are used in order to increase the image quality. Imaging artifacts, originating from the off-axis sensitivity of the rings, are reduced by coherence weighting. We discuss the working principle of the system and present experimental results on tissue-mimicking phantoms. The lateral resolution is estimated to be below 200 μm at a depth of 1.5 cm and below 230 μm at a depth of 4.5 cm. The minimum detectable pressure is on the order of 3 Pa. The introduced method has the potential to provide larger imaging depths than acoustic resolution photoacoustic microscopy and an imaging resolution similar to that of photoacoustic computed tomography.

  10. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control functions to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard cinematic storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance; the perceived depth on the display is then identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth; a new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images so that only objects inside the DOF are viewed in full sharpness, while objects far from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes differ across individual stereoscopic displays, and that viewers can cope with a much larger perceived depth range when viewing stereoscopic cinematography than when viewing static stereoscopic images. Our new dynamic depth mapping method has an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography. We anticipate the results will be of particular interest to 3D filmmaking and real-time computer games.
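
    The fixed depth-mapping approach (2) can be illustrated by remapping scene depth to a target screen-disparity budget. The function below is a generic sketch that maps linearly in inverse depth (a common convention), not the authors' exact formulation; the sign convention and range bounds are assumptions.

```python
def map_depth_to_disparity(z, z_near, z_far, d_min, d_max):
    """Map scene depth z (z_near <= z <= z_far) to a screen disparity
    inside [d_min, d_max]. The mapping is linear in inverse depth, so
    the nearest scene point receives d_min (strongest crossed
    disparity) and the farthest receives d_max."""
    t = (1.0 / z - 1.0 / z_far) / (1.0 / z_near - 1.0 / z_far)
    return d_max + t * (d_min - d_max)
```

    A dynamic variant in the spirit of the study's new method would re-estimate z_near and z_far per shot instead of fixing them for the whole sequence.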

  11. High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.

    PubMed

    Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S

    2018-03-05

    A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
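
    The standard cross-correlation approach mentioned above can be sketched in a few lines: correlate the per-pixel photon-count histogram with the instrument response function and convert the peak lag to range. This is a generic illustration of the technique; function and variable names are assumptions.

```python
import numpy as np

def depth_from_counts(counts, irf, bin_width_s, c=3.0e8):
    """Estimate target range from a gated photon-count histogram by
    cross-correlating it with the instrument response function (IRF).
    The lag of the correlation peak gives the round-trip time of
    flight; depth is half the round-trip distance."""
    xc = np.correlate(counts, irf, mode="full")
    lag = int(np.argmax(xc)) - (len(irf) - 1)
    tof = lag * bin_width_s
    return c * tof / 2.0
```

    In the sparse photon regime this simple peak-picking becomes unreliable, which is what motivates the authors' clustering-based restoration strategy.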

  12. The selection of the optimal baseline in the front-view monocular vision system

    NASA Astrophysics Data System (ADS)

    Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

    In the front-view monocular vision system, the accuracy of solving the depth field depends on the length of the inter-frame baseline and the accuracy of the image matching result. In general, a longer baseline leads to higher precision in solving the depth field. At the same time, however, the difference between the inter-frame images increases, which increases the difficulty of image matching, decreases matching accuracy, and may ultimately cause the solution of the depth field to fail. One common practice is to use a tracking-and-matching method to improve the matching accuracy between images, but this approach is prone to matching drift between images with a large interval, resulting in cumulative error in image matching, so the accuracy of the solved depth field remains very low. In this paper, we propose a depth field fusion algorithm based on the optimal length of the baseline. Firstly, we analyze the quantitative relationship between the accuracy of the depth field calculation and the length of the baseline between frames, and find the optimal baseline length through extensive experiments; secondly, we introduce the inverse depth filtering technique from sparse SLAM and solve the depth field under the constraint of the optimal baseline length. Extensive experimental results show that our algorithm can effectively eliminate the mismatches caused by image changes, and can still solve the depth field correctly in the large-baseline case. Our algorithm is superior to the traditional SFM algorithm in time and space complexity. The optimal baseline obtained from a large number of experiments plays a guiding role in the calculation of the depth field in front-view monocular vision.
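
    The underlying trade-off follows from stereo triangulation: to first order, a matching error of sigma_d pixels at depth z, with focal length f (in pixels) and baseline b, yields a depth error of about z^2 * sigma_d / (f * b). A small numeric sketch of that relationship (values are illustrative, not from the paper):

```python
def depth_error(z, baseline, focal_px, match_err_px):
    """First-order depth uncertainty of triangulation: it grows with
    the square of the range and shrinks as the baseline lengthens."""
    return z * z * match_err_px / (focal_px * baseline)

def min_baseline(z, focal_px, match_err_px, target_err):
    """Shortest baseline keeping the depth error under target_err at
    range z (longer baselines make the matching itself harder, which
    is the tension the paper's optimal-baseline search resolves)."""
    return z * z * match_err_px / (focal_px * target_err)
```
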

  13. Depth-Based Detection of Standing-Pigs in Moving Noise Environments.

    PubMed

    Kim, Jinseong; Chung, Yeonwoo; Choi, Younchang; Sa, Jaewon; Kim, Heegon; Chung, Yongwha; Park, Daihee; Kim, Hakjae

    2017-11-29

    In a surveillance camera environment, the detection of standing-pigs in real-time is an important issue towards the final goal of 24-h tracking of individual pigs. In this study, we focus on depth-based detection of standing-pigs with "moving noises", which appear every night in a commercial pig farm, but have not been reported yet. We first apply a spatiotemporal interpolation technique to remove the moving noises occurring in the depth images. Then, we detect the standing-pigs by utilizing the undefined depth values around them. Our experimental results show that this method is effective for detecting standing-pigs at night, in terms of both cost-effectiveness (using a low-cost Kinect depth sensor) and accuracy (i.e., 94.47%), even with severe moving noises occluding up to half of an input depth image. Furthermore, without any time-consuming technique, the proposed method can be executed in real-time.
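
    A plausible minimal reading of the spatiotemporal interpolation step (a sketch, not the authors' exact method): treat zero depth as undefined, fill temporally with the median over neighbouring frames, then fill any remaining holes from valid spatial neighbours.

```python
import numpy as np

def fill_undefined_depth(frames):
    """frames: (T, H, W) depth maps with 0 marking undefined pixels.
    Returns the middle frame with undefined values interpolated."""
    stack = frames.astype(float)
    stack[stack == 0] = np.nan
    mid = stack[len(stack) // 2].copy()
    # temporal step: per-pixel median over frames where a value exists
    temporal = np.nanmedian(stack, axis=0)
    holes = np.isnan(mid)
    mid[holes] = temporal[holes]
    # spatial step: remaining holes take the mean of valid 4-neighbours
    for y, x in np.argwhere(np.isnan(mid)):
        nb = [mid[v, u]
              for v, u in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
              if 0 <= v < mid.shape[0] and 0 <= u < mid.shape[1]]
        nb = [v for v in nb if not np.isnan(v)]
        if nb:
            mid[y, x] = sum(nb) / len(nb)
    return mid
```
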

  14. Deep Tissue Photoacoustic Imaging Using a Miniaturized 2-D Capacitive Micromachined Ultrasonic Transducer Array

    PubMed Central

    Kothapalli, Sri-Rajasekhar; Ma, Te-Jen; Vaithilingam, Srikant; Oralkan, Ömer

    2014-01-01

    In this paper, we demonstrate 3-D photoacoustic imaging (PAI) of light absorbing objects embedded as deep as 5 cm inside strong optically scattering phantoms using a miniaturized (4 mm × 4 mm × 500 µm), 2-D capacitive micromachined ultrasonic transducer (CMUT) array of 16 × 16 elements with a center frequency of 5.5 MHz. Two-dimensional tomographic images and 3-D volumetric images of the objects placed at different depths are presented. In addition, we studied the sensitivity of CMUT-based PAI to the concentration of indocyanine green dye at 5 cm depth inside the phantom. Under optimized experimental conditions, the objects at 5 cm depth can be imaged with SNR of about 35 dB and a spatial resolution of approximately 500 µm. Results demonstrate that CMUTs with integrated front-end amplifier circuits are an attractive choice for achieving relatively high depth sensitivity for PAI. PMID:22249594

  15. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography).

    PubMed

    Siegel, Nisan; Storrie, Brian; Bruce, Marc; Brooker, Gary

    2015-02-07

    FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called "CINCH". An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution.

  16. Joint optic disc and cup boundary extraction from monocular fundus images.

    PubMed

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth, which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five-fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing, it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Development and evaluation of a hand tracker using depth images captured from an overhead perspective.

    PubMed

    Czarnuch, Stephen; Mihailidis, Alex

    2015-03-27

    We present the development and evaluation of a robust hand tracker based on single overhead depth images for use in the COACH, an assistive technology for people with dementia. The new hand tracker was designed to overcome limitations experienced by the COACH in previous clinical trials. We train a random decision forest classifier using ∼5000 manually labeled, unbalanced, training images. Hand positions from the classifier are translated into task actions based on proximity to environmental objects. Tracker performance is evaluated using a large set of ∼24 000 manually labeled images captured from 41 participants in a fully-functional washroom, and compared to the system's previous colour-based hand tracker. Precision and recall were 0.994 and 0.938 for the depth tracker compared to 0.981 and 0.822 for the colour tracker with the current data, and 0.989 and 0.466 in the previous study. The improved tracking performance supports integration of the depth-based tracker into the COACH toward unsupervised, real-world trials. Implications for Rehabilitation The COACH is an intelligent assistive technology that can enable people with cognitive disabilities to stay at home longer, supporting the concept of aging-in-place. Automated prompting systems, a type of intelligent assistive technology, can help to support the independent completion of activities of daily living, increasing the independence of people with cognitive disabilities while reducing the burden of care experienced by caregivers. Robust motion tracking using depth imaging supports the development of intelligent assistive technologies like the COACH. Robust motion tracking also has application to other forms of assistive technologies including gaming, human-computer interaction and automated assessments.

  18. Axial resolution improvement in spectral domain optical coherence tomography using a depth-adaptive maximum-a-posterior framework

    NASA Astrophysics Data System (ADS)

    Boroomand, Ameneh; Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka

    2015-03-01

    The axial resolution of Spectral Domain Optical Coherence Tomography (SD-OCT) images degrades with scanning depth due to the limited number of pixels and the pixel size of the camera, any aberrations in the spectrometer optics, and wavelength-dependent scattering and absorption in the imaged object [1]. Here we propose a novel algorithm which compensates for the blurring effect of the depth-dependent axial Point Spread Function (PSF) caused by these factors in SD-OCT images. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model. The aim is to compensate for the depth-dependent axial blur in SD-OCT images and simultaneously suppress the speckle noise which is inherent to all OCT images. Applying the proposed depth-dependent axial resolution enhancement technique to an OCT image of a cucumber considerably improved the axial resolution of the image, especially at greater imaging depths, and allowed for better visualization of cellular membranes and nuclei. Comparing the result of our proposed method with the conventional Lucy-Richardson deconvolution algorithm clearly demonstrates the efficiency of our proposed technique in better visualization and preservation of fine details and structures in the imaged sample, as well as better speckle noise suppression. This illustrates the potential usefulness of our proposed technique as a suitable replacement for hardware approaches, which are often very costly and complicated.
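
    For reference, the Lucy-Richardson baseline the authors compare against iterates multiplicative updates between the observed data and the current estimate blurred by the PSF. A standard 1-D numpy sketch (space-invariant PSF, unlike the depth-dependent PSF the proposed method targets):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Classic Richardson-Lucy deconvolution for a space-invariant,
    non-negative PSF. Returns the deblurred estimate."""
    psf = psf / psf.sum()
    est = np.full_like(observed, observed.mean())
    mirror = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid /0
        est = est * np.convolve(ratio, mirror, mode="same")
    return est
```
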

  19. The implementation of depth measurement and related algorithms based on binocular vision in embedded AM5728

    NASA Astrophysics Data System (ADS)

    Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan

    2018-01-01

    Depth measurement is the most basic measurement in various machine vision applications, such as automatic driving, unmanned aerial vehicles (UAVs), robots and so on, and it has a wide range of uses. With the development of image processing technology and the improvement of hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms for dual camera calibration, image matching and depth calculation have been studied and implemented on this platform, and the hardware design and the rationality of the related algorithms of the system have been tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can reach 25 fps. The experimental results show that the optimal measurement range of the system is from 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11 and DMCU hardware platforms, the embedded AM5728 hardware is well suited to meeting real-time depth measurement requirements while preserving image resolution.
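
    The depth-calculation step on a rectified binocular pair reduces to triangulation: depth = focal_length × baseline / disparity. The helpers below are hypothetical, paired with a relative-error check of the kind reported above.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Rectified-stereo triangulation; disparity must be positive."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def relative_error(measured_m, true_m):
    """Relative distance error, e.g. to check the <5% figure."""
    return abs(measured_m - true_m) / true_m
```
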

  20. Extended depth of field integral imaging using multi-focus fusion

    NASA Astrophysics Data System (ADS)

    Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua

    2018-03-01

    In this paper, we propose a new method for depth-of-field extension in integral imaging by applying an image fusion method to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to take multi-focus elemental images by sweeping the focus plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images with a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion on multi-focus elemental images.
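
    Block-based fusion of the generalized (registered) elemental images can be sketched as follows; the gradient-energy sharpness measure is an assumed choice for illustration, not necessarily the authors'.

```python
import numpy as np

def fuse_multifocus(img_a, img_b, block=8):
    """For each block, keep the version with the higher gradient
    energy (i.e., the sharper focus). Assumes equal-sized,
    pre-registered grayscale images."""
    def sharpness(img):
        gy, gx = np.gradient(img)
        return gx * gx + gy * gy
    sa, sb = sharpness(img_a), sharpness(img_b)
    out = img_a.copy()
    h, w = img_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            sl = (slice(y, y + block), slice(x, x + block))
            if sb[sl].sum() > sa[sl].sum():
                out[sl] = img_b[sl]
    return out
```
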

  1. Self-interference fluorescence microscopy with three-phase detection for depth-resolved confocal epi-fluorescence imaging.

    PubMed

    Braaf, Boy; de Boer, Johannes F

    2017-03-20

    Three-dimensional confocal fluorescence imaging of in vivo tissues is challenging due to sample motion and limited imaging speeds. In this paper a novel method is therefore presented for scanning confocal epi-fluorescence microscopy with instantaneous depth-sensing based on self-interference fluorescence microscopy (SIFM). A tabletop epi-fluorescence SIFM setup was constructed with an annular phase plate in the emission path to create a spectral self-interference signal that is phase-dependent on the axial position of a fluorescent sample. A Mach-Zehnder interferometer based on a 3 × 3 fiber-coupler was developed for a sensitive phase analysis of the SIFM signal with three photon-counter detectors instead of a spectrometer. The Mach-Zehnder interferometer created three intensity signals that alternately oscillated as a function of the SIFM spectral phase and therefore encoded directly for the axial sample position. Controlled axial translation of fluorescent microsphere layers showed a linear dependence of the SIFM spectral phase with sample depth over axial image ranges of 500 µm and 80 µm (3.9 × Rayleigh range) for 4 × and 10 × microscope objectives respectively. In addition, SIFM was in good agreement with optical coherence tomography depth measurements on a sample with indocyanine green dye filled capillaries placed at multiple depths. High-resolution SIFM imaging applications are demonstrated for fluorescence angiography on a dye-filled capillary blood vessel phantom and for autofluorescence imaging on an ex vivo fly eye.
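
    The three-detector phase analysis works like three-step phase shifting: with ideally split signals I_k = A + B*cos(phi + 2*pi*k/3), the spectral phase follows from an arctangent combination of the three intensities. A sketch under that idealized 120° splitting assumption:

```python
import math

def three_phase_demodulate(i0, i1, i2):
    """Recover phase phi from three intensities
    I_k = A + B*cos(phi + 2*pi*k/3), k = 0, 1, 2.
    The offset A and amplitude B cancel out of the ratio."""
    return math.atan2(math.sqrt(3.0) * (i2 - i1), 2.0 * i0 - i1 - i2)
```

    In SIFM this phase encodes the axial sample position, so each detector triple yields a depth reading without a spectrometer.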

  2. Full range line-field parallel swept source imaging utilizing digital refocusing

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-12-01

    We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.
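
    In the paraxial approximation, the defocus-correcting phase term is a quadratic phase applied in the spatial-frequency domain of the complex interferometric data. A 1-D numpy sketch of such Fourier-domain refocusing (parameters are illustrative, not the LPSI system's values):

```python
import numpy as np

def digital_refocus(field, dz, wavelength, dx):
    """Propagate a complex field by dz via a paraxial (Fresnel)
    transfer function; a negative dz undoes a known defocus."""
    fx = np.fft.fftfreq(field.shape[-1], d=dx)
    transfer = np.exp(-1j * np.pi * wavelength * dz * fx ** 2)
    return np.fft.ifft(np.fft.fft(field) * transfer)
```

    Because the transfer functions for +dz and -dz are complex conjugates, propagating forward and then backward returns the original field, which is what makes numerical refocusing of the captured complex data possible.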

  3. Contour sensitive saliency and depth application in image retargeting

    NASA Astrophysics Data System (ADS)

    Lu, Hongju; Yue, Pengfei; Zhao, Yanhui; Liu, Rui; Fu, Yuanbin; Zheng, Yuanjie; Cui, Jia

    2018-04-01

    Image retargeting requires preserving important information and limiting edge distortion while increasing or decreasing image size. Most existing content-aware methods perform well, but two problems remain: slight distortion at object edges and structural distortion in non-salient areas. According to psychological theories, people evaluate image quality through multi-level judgments and comparisons between different areas, considering both image content and image structure. This paper proposes a new criterion: structure preservation in non-salient areas. Observation and image analysis show that slight blur is generally present at object edges. This blur feature is used to estimate the depth cue, named the blur depth descriptor, which can be incorporated into the saliency computation for a balanced image retargeting result. To keep the structural information in non-salient areas, a salient edge map is used in the Seam Carving process instead of a field-based saliency computation. The derivative saliency in the x- and y-directions avoids redundant energy seams around salient objects that cause structural distortion. Comparison experiments between classical approaches and ours demonstrate the feasibility of our algorithm.

  4. Wavelet-based statistical classification of skin images acquired with reflectance confocal microscopy

    PubMed Central

    Halimi, Abdelghafour; Batatia, Hadj; Le Digabel, Jimmy; Josse, Gwendal; Tourneret, Jean Yves

    2017-01-01

    Detecting skin lentigo in reflectance confocal microscopy images is an important and challenging problem. This imaging modality has not yet been widely investigated for this problem, and only a few automatic processing techniques exist. They are mostly based on machine learning approaches and rely on numerous classical image features that lead to high computational costs given the very large resolution of these images. This paper presents a detection method with very low computational complexity that is able to identify the skin depth at which the lentigo can be detected. The proposed method performs a multiresolution decomposition of the image obtained at each skin depth. The distribution of image pixels at a given depth can be approximated accurately by a generalized Gaussian distribution whose parameters depend on the decomposition scale, resulting in a very-low-dimension parameter space. SVM classifiers are then investigated to classify the scale parameter of this distribution, allowing real-time detection of lentigo. The method is applied to 45 healthy and lentigo patients from a clinical study, where a sensitivity of 81.4% and a specificity of 83.3% are achieved. Our results show that lentigo is identifiable at depths between 50 μm and 60 μm, corresponding to the average location of the dermoepidermal junction. This result is in agreement with clinical practice, which characterizes lentigo by assessing the disorganization of the dermoepidermal junction. PMID:29296480
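
    The very-low-dimension parameter space comes from fitting a generalized Gaussian p(x) proportional to exp(-(|x|/alpha)^beta) to the coefficients at each decomposition scale. A standard moment-matching fit can be sketched as follows (a generic estimator, not necessarily the authors' exact one):

```python
import numpy as np
from math import gamma

def fit_ggd(x):
    """Moment-matching fit of a generalized Gaussian distribution
    p(x) ~ exp(-(|x|/alpha)**beta); returns (alpha, beta).
    Uses the ratio m1^2/m2, which depends only on beta, then inverts
    it on a grid."""
    m1 = np.mean(np.abs(x))
    m2 = np.mean(x ** 2)
    rho = m1 * m1 / m2
    betas = np.linspace(0.2, 4.0, 4000)
    r = np.array([gamma(2 / b) ** 2 / (gamma(1 / b) * gamma(3 / b))
                  for b in betas])
    beta = betas[np.argmin(np.abs(r - rho))]
    alpha = m1 * gamma(1 / beta) / gamma(2 / beta)
    return alpha, beta
```

    The fitted (alpha, beta) per scale is the kind of compact feature an SVM can then classify in real time.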

  5. Method and apparatus to measure the depth of skin burns

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.

    2002-01-01

    A new device for measuring the depth of surface tissue burns based on the rate at which the skin temperature responds to a sudden differential temperature stimulus. This technique can be performed without physical contact with the burned tissue. In one implementation, time-dependent surface temperature data is taken from subsequent frames of a video signal from an infrared-sensitive video camera. When a thermal transient is created, e.g., by turning off a heat lamp directed at the skin surface, the following time-dependent surface temperature data can be used to determine the skin burn depth. Imaging and non-imaging versions of this device can be implemented, thereby enabling laboratory-quality skin burn depth imagers for hospitals as well as hand-held skin burn depth sensors the size of a small pocket flashlight for field use and triage.

  6. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography)

    PubMed Central

    Siegel, Nisan; Storrie, Brian; Bruce, Marc

    2016-01-01

    FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called “CINCH”. An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution. PMID:26839443

  7. Three-dimensional image acquisition and reconstruction system on a mobile device based on computer-generated integral imaging.

    PubMed

    Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam

    2017-10-01

    A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.

  8. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are fundamental to many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal instruments for measuring IOPs, but they are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method that uses only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task: unlike traditional applications such as image classification or localization, it must infer multiple optical properties simultaneously from the transparency of the water between the camera and the target objects. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation from a single RGB image, even a noisy one. The imaging depth information is used as an auxiliary input to help our model make better decisions.

  9. Pixel-based parametric source depth map for Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Altabella, L.; Boschi, F.; Spinelli, A. E.

    2016-01-01

    Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem caused by photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (i.e., 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multiview methods. Difficulties in the convergence of 3D algorithms can discourage use of this technique to obtain source depth and intensity information. For these reasons, we developed a faster 2D corrected approach based on multispectral acquisitions, which obtains the source depth and intensity using a pixel-based fit of the source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method for obtaining the parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for MC simulations and 5-6% for experimental data. Using this method we are able to obtain reliable information about the depth of the Cerenkov luminescence source with a simple and flexible procedure.
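    As a hedged illustration of how multispectral intensities can yield a per-pixel depth map, the sketch below inverts a simple Beer-Lambert attenuation model at two wavelengths. The single-exponential model, coefficient values, and function names are illustrative assumptions, not the paper's diffusion-based fit:

```python
import numpy as np

def depth_map(i1, i2, mu1, mu2, source_ratio=1.0):
    """Per-pixel source depth from two spectral images, assuming
    I(lam, d) = I0(lam) * exp(-mu(lam) * d) with known effective
    attenuation coefficients mu1, mu2 (1/mm) and a known emission
    ratio I0(lam1)/I0(lam2) = source_ratio."""
    ratio = np.asarray(i1, float) / np.asarray(i2, float)
    # I1/I2 = source_ratio * exp(-(mu1 - mu2) * d)  =>  solve for d
    return np.log(source_ratio / ratio) / (mu1 - mu2)
```

    Fitting one depth per pixel this way, instead of reconstructing a full 3D volume, is what makes such a 2D parametric approach fast compared with iterative tomographic solvers.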

  10. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation that combines block- and region-based stereo matching. It generates dense depth maps from disparity measurements of only 18% of image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected-components analysis, and then determining the boundaries' disparities using the sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
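    The SAD-based boundary-disparity step can be sketched as a winner-take-all scanline matcher; window size and disparity range here are illustrative, and this omits the paper's segmentation stage and GPU parallelization:

```python
import numpy as np

def sad_disparity(left, right, window=3, max_disp=8):
    """Winner-take-all disparity along a rectified scanline pair using the
    sum of absolute differences (SAD) cost; left pixel x matches right
    pixel x - d."""
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    half = window // 2
    n = left.size
    disp = np.zeros(n, int)
    for x in range(half, n - half):
        best_cost, best_d = np.inf, 0
        for d in range(0, min(max_disp, x - half) + 1):
            cost = np.abs(left[x - half:x + half + 1]
                          - right[x - d - half:x - d + half + 1]).sum()
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

    The per-pixel inner loop is independent across pixels, which is why the full method parallelizes so well on a CPU-GPU framework.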

  11. Long-range and depth-selective imaging of macroscopic targets using low-coherence and wide-field interferometry (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Choi, Wonshik

    2016-03-01

    With the advancement of 3D display technology, 3D imaging of macroscopic objects has drawn much attention because it provides the content to display. The most widely used imaging methods include the depth camera, which measures time of flight for depth discrimination, and various structured illumination techniques. However, these existing methods have poor depth resolution, which makes imaging complicated structures difficult. To resolve this issue, we propose an imaging system based on low-coherence interferometry and off-axis digital holographic imaging. By using a light source with a coherence length of 200 μm, we achieved a depth resolution of 100 μm. To map macroscopic objects with this high axial resolution, we installed a pair of prisms in the reference beam path for long-range scanning of the optical path length. Specifically, one prism was fixed in position while the other was mounted on a translation stage and translated parallel to the first. Due to the multiple internal reflections between the two prisms, the overall path length was elongated by a factor of 50; in this way, we could cover a depth range of more than 1 meter. In addition, we employed multiple speckle illuminations and incoherent averaging of the acquired holographic images to reduce specular reflections from the target surface. Using this newly developed system, we imaged targets with multiple layers and demonstrated imaging of targets hidden behind scattering layers. The method was also applied to imaging targets located around a corner.

  12. Image Restoration for Fluorescence Planar Imaging with Diffusion Model

    PubMed Central

    Gong, Yuzhu; Li, Yang

    2017-01-01

    Fluorescence planar imaging (FPI) fails to capture high-resolution images of deep fluorochromes due to photon diffusion. This paper presents an image restoration method to deal with this kind of blurring. The scheme is conceived based on a reconstruction method in fluorescence molecular tomography (FMT) with a diffusion model. A new unknown parameter is defined by introducing the first mean value theorem for definite integrals. A system matrix converting this unknown parameter to the blurry image is constructed from the elements of depth conversion matrices related to a chosen plane named the focal plane. Results of phantom and mouse experiments show that the proposed method is capable of reducing the blurring of FPI images caused by photon diffusion when the depth of the focal plane is chosen within a proper interval around the true depth of the fluorochrome. This method will be helpful for estimating the size of deep fluorochromes. PMID:29279843

  13. Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging.

    PubMed

    Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho

    2004-12-01

    A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.

  14. Depth-resolved birefringence and differential optical axis orientation measurements with fiber-based polarization-sensitive optical coherence tomography.

    PubMed

    Guo, Shuguang; Zhang, Jun; Wang, Lei; Nelson, J Stuart; Chen, Zhongping

    2004-09-01

    Conventional polarization-sensitive optical coherence tomography (PS-OCT) can provide depth-resolved Stokes parameter measurements of light reflected from turbid media. A new algorithm that takes into account changes in the optical axis is introduced to provide depth-resolved birefringence and differential optical axis orientation images by use of fiber-based PS-OCT. Quaternion, a convenient mathematical tool, is used to represent an optical element and simplify the algorithm. Experimental results with beef tendon and rabbit tendon and muscle show that this technique has promising potential for imaging the birefringent structure of multiple-layer samples with varying optical axes.

  15. Depth perception based 3D holograms enabled with polarization-independent metasurfaces.

    PubMed

    Deng, Juan; Li, Zile; Zheng, Guoxing; Tao, Jin; Dai, Qi; Deng, Liangui; He, Ping'an; Deng, Qiling; Mao, Qingzhou

    2018-04-30

    Metasurfaces consisting of dielectric nanobrick arrays with different dimensions along the long and short axes can be used to generate different phase delays, providing a new way to manipulate an incident beam in the two orthogonal directions separately. Here we demonstrate the concept of depth-perception-based three-dimensional (3D) holograms with polarization-independent metasurfaces. Four-step dielectric-metasurface-based fan-out optical elements and holograms operating at 658 nm were designed and simulated. Two different holographic images with high fidelity were generated at the same plane in the far field for different polarization states, so one can observe the 3D effect of the target objects with polarized glasses. With the advantages of ultracompactness, flexibility, and replicability, polarization-independent metasurfaces open up depth-perception-based stereoscopic imaging in a holographic way.

  16. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

    This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. It is divided into two parts: the first discusses a new active-illumination depth sensing modality, while the second discusses a passive-illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage of this method is that the illumination and imaging axes can be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

  17. Differential standard deviation of log-scale intensity based optical coherence tomography angiography.

    PubMed

    Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D

    2017-12-01

    In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) method is presented for calculating microvascular images of human skin. The DSDLI algorithm contrasts blood flow by calculating the variance of the difference images of two consecutive log-scale intensity based structural images from the same position along the depth direction. En face microvascular images are then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in improved spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was verified by both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove bulk motion noise, where a 2D Fourier transform was utilized to generate new images with a spatial interval equal to half the distance between two pixels in both the fast-scanning and depth directions. The SNRs of the signals of flowing particles were improved by 7.3 dB and 6.8 dB on average in the phantom and in vivo experiments, respectively, while the average spatial resolution of the in vivo blood vessel images increased by 21%. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
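    A minimal numpy sketch of the DSDLI contrast described above, assuming a stack of repeated B-scans in linear intensity; the array layout, log base, and depth range are illustrative assumptions:

```python
import numpy as np

def dsdli_en_face(stack):
    """Sketch of the DSDLI contrast: difference consecutive log-scale
    structural frames from the same position, then take the standard
    deviation of those differentials over repeats and a depth range.
    `stack` has shape (repeats, depth, lateral), linear intensity."""
    log_i = np.log10(np.maximum(stack, 1e-12))  # log-scale intensity
    diff = np.diff(log_i, axis=0)               # consecutive-frame differences
    return diff.std(axis=(0, 1))                # en face line over the depth range
```

    Static tissue produces near-identical consecutive frames (differential ≈ 0), while flowing blood decorrelates the speckle between frames, so the standard deviation of the differentials highlights vessels.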

  18. Time multiplexing based extended depth of focus imaging.

    PubMed

    Ilovitsh, Asaf; Zalevsky, Zeev

    2016-01-01

    We propose to utilize the time multiplexing super resolution method to extend the depth of focus of an imaging system. In standard time multiplexing, super resolution is achieved by generating duplications of the optical transfer function in the spectral domain through the use of moving gratings. While this improves the spatial resolution, it does not increase the depth of focus. By changing the grating frequency, and thereby the positions of the duplications, it is possible to obtain an extended depth of focus. The proposed method is presented analytically, demonstrated via numerical simulations, and validated by a laboratory experiment.

  19. Micromachined array tip for multifocus fiber-based optical coherence tomography.

    PubMed

    Yang, Victor X D; Munce, Nigel; Pekar, Julius; Gordon, Maggie L; Lo, Stewart; Marcon, Norman E; Wilson, Brian C; Vitkin, I Alex

    2004-08-01

    High-resolution optical coherence tomography demands a large detector bandwidth and a high numerical aperture for real-time imaging, which is difficult to achieve over a large imaging depth. To resolve these conflicting requirements we propose a novel multifocus fiber-based optical coherence tomography system with a micromachined array tip. We demonstrate the fabrication of a prototype four-channel tip that maintains a 9-14 μm spot diameter over more than 500 μm of imaging depth. Images of a resolution target and a human tooth were obtained with this tip by use of a four-channel cascaded Michelson fiber-optic interferometer, scanned simultaneously at 8 kHz with geometric power distribution across the four channels.

  20. An alternative approach to depth of field which avoids the blur circle and uses the pixel pitch

    NASA Astrophysics Data System (ADS)

    Schuster, Norbert

    2015-09-01

    Modern thermal imaging systems increasingly use uncooled detectors. High-volume applications work with detectors that have a reduced pixel count (typically between 200x150 and 640x480). This limits the applicability of modern image-treatment procedures such as wavefront coding. On the other hand, uncooled detectors demand lenses with fast F-numbers near 1.0. What limits resolution when the target to be analyzed changes its distance to the camera system? The aim of implementing lens arrangements without any focusing mechanism demands a deeper quantification of the Depth of Field problem. The proposed Depth of Field approach avoids the classic "accepted image blur circle". It is based on a camera-specific depth of focus that is transformed into object space by paraxial relations. The traditional Rayleigh criterion is based on the unaberrated Point Spread Function and delivers a first-order relation for the depth of focus; hence neither the actual lens resolution nor the detector's impact is considered. The camera-specific depth of focus, by contrast, accounts for several camera properties: lens aberrations at the actual F-number, detector size, and pixel pitch. Its basis is the through-focus MTF, which has a nearly symmetric course around the plane of sharp imaging and is evaluated at the detector's Nyquist frequency. The camera-specific depth of focus is thus the axial distance in front of and behind the sharp image plane within which the through-focus MTF does not drop below 0.25. This camera-specific depth of focus is transferred into object space by paraxial relations. A generally applicable Depth of Field diagram follows, which can be applied to lenses realizing a lateral magnification range of -0.05 to 0. Easy-to-handle formulas relate the hyperfocal distance to the borders of the Depth of Field as functions of the sharp distance. These relations are in line with classical Depth of Field theory.
Thermal pictures, taken by different IR-camera cores, illustrate the new approach. The frequently requested graph "MTF versus distance" chooses half the Nyquist frequency as reference. The paraxial transfer of the through-focus MTF into object space distorts the MTF curve: a hard drop at distances closer than the sharp distance, and a smooth drop at farther distances. The formula for a general Diffraction-Limited Through-Focus MTF (DLTF) is derived, so that arbitrary detector-lens combinations can be discussed. Free variables in this analysis are the waveband, the aperture-based F-number (lens), and the pixel pitch (detector). The DLTF discussion provides physical limits and technical requirements. Detector development with pixel pitches smaller than the captured wavelength in the LWIR region poses a special challenge for optical design.
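    The stated agreement with classical Depth of Field theory can be illustrated with the standard paraxial relations between the hyperfocal distance H and the near/far borders for a sharply focused distance s; this is a textbook sketch, not the author's camera-specific formulas:

```python
def dof_borders(hyperfocal, focus_dist):
    """Classical paraxial Depth of Field borders: for focus distance s and
    hyperfocal distance H (same units, both measured from the lens),
    near = H*s / (H + s) and far = H*s / (H - s); focusing at or beyond
    H pushes the far border to infinity."""
    near = hyperfocal * focus_dist / (hyperfocal + focus_dist)
    far = (hyperfocal * focus_dist / (hyperfocal - focus_dist)
           if focus_dist < hyperfocal else float('inf'))
    return near, far
```

    Whatever camera-specific criterion defines H (blur circle or through-focus MTF), these two relations then map a single sharp distance to the usable object-space range.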

  1. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

    The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold-learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset to reconstruct the 3D face depth information greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2 dimensions. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the corresponding feature-point information (before dimension reduction) of each cluster center is calculated, and the category of the image is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimate with greatly reduced computational complexity. Compared with the traditional traversal-search estimation method, the proposed method reduces the error rate by 0.49, and the number of searches decreases with the change of category. To validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
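    The subset-selection step (minimum Euclidean distance between the query features and each cluster center) can be sketched as follows; the array names and toy data are illustrative, not the paper's 83-point features:

```python
import numpy as np

def nearest_subset(feature, centers, labels, database):
    """Pick the training subset whose K-means cluster center is closest
    (Euclidean distance) to the query feature vector, so the subsequent
    depth estimation only searches within that subset."""
    d = np.linalg.norm(np.asarray(centers, float) - np.asarray(feature, float),
                       axis=1)
    k = int(np.argmin(d))
    return k, [item for item, lab in zip(database, labels) if lab == k]
```

    Restricting the search to one cluster is what drives the reported 83.19% drop in the average number of searches relative to traversing the whole database.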

  2. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and of how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the camera's projection concept, are developed. These new methods are compared to a common curve-fitting approach based on a Taylor series approximation. Both model-based methods show significant advantages over the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function that remains valid beyond the calibrated range. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  3. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information

    PubMed Central

    Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun

    2016-01-01

    In the study of SLAM problem using an RGB-D camera, depth information and visual information as two types of primary measurement data are rarely tightly coupled during refinement of camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256

  4. Application of simple all-sky imagers for the estimation of aerosol optical depth

    NASA Astrophysics Data System (ADS)

    Kazantzidis, Andreas; Tzoumanikas, Panagiotis; Nikitidou, Efterpi; Salamalikis, Vasileios; Wilbert, Stefan; Prahl, Christoph

    2017-06-01

    Aerosol optical depth is a key atmospheric constituent for direct normal irradiance calculations at concentrating solar power plants. However, aerosol optical depth is typically not measured at the solar plants for financial reasons. With the recent introduction of all-sky imagers for nowcasting direct normal irradiance at the plants, a new instrument is available that can also be used to determine the aerosol optical depth at different wavelengths. In this study, we rely on Red, Green and Blue intensities/radiances and on calculations of the saturated area around the Sun, both derived from all-sky images taken with a low-cost surveillance camera at the Plataforma Solar de Almeria, Spain. The aerosol optical depth at 440, 500 and 675 nm is calculated. The results are compared with collocated aerosol optical depth measurements; the mean/median difference and the standard deviation are less than 0.01 and 0.03, respectively, at all wavelengths.

  5. Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.

    1998-07-01

    This paper provides a comparison of some imaging parameters of four portal imaging systems at 6 MV: a flat-panel detector, two CCD cameras, and an electron-beam-tube-based video camera. Measurements were made of signal and noise, and consequently of signal-to-noise per pixel, as a function of exposure. All systems have a linear response with respect to exposure and, with the exception of the electron-beam-tube-based video camera, the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a higher signal-to-noise ratio than both CCD cameras and the electron-beam-tube-based video camera. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The measurements of signal and noise were complemented by images of a Las Vegas-type aluminum contrast-detail phantom located at the isocenter, generated at an exposure of 1 MU. The flat-panel detector permits detection of aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD cameras rank second and third in signal-to-noise ratio, permitting detection of aluminum holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2), respectively, while the electron-beam-tube-based video camera permits detection of only a hole of 1.2 mm diameter and 4.6 mm depth. Rank-order filtering was applied to the raw images from the CCD-based systems to remove direct hits: camera responses to scattered x-ray photons that interact directly with the CCD and generate salt-and-pepper-type noise, which interferes severely with attempts to obtain accurate estimates of the image noise. The paper also presents data on the metal phosphor's photon gain (the number of light photons per interacting x-ray photon).

  6. Optical-domain subsampling for data efficient depth ranging in Fourier-domain optical coherence tomography

    PubMed Central

    Siddiqui, Meena; Vakoc, Benjamin J.

    2012-01-01

    Recent advances in optical coherence tomography (OCT) have led to higher-speed sources that support imaging over longer depth ranges. Limitations in the bandwidth of state-of-the-art acquisition electronics, however, prevent adoption of these advances in clinical applications. Here, we introduce optical-domain subsampling as a method for imaging at high speeds and over extended depth ranges, but with a lower acquisition bandwidth than that required by conventional approaches. Optically subsampled laser sources use a discrete set of wavelengths to alias fringe signals along an extended depth range into a bandwidth-limited frequency window. By detecting the complex fringe signals, and under the assumption of a depth-constrained signal, optical-domain subsampling enables recovery of the depth-resolved scattering signal without overlapping artifacts from this bandwidth-limited window. We highlight the key principles behind optical-domain subsampled imaging and demonstrate them experimentally using a polygon-filter-based swept-source laser that includes an intra-cavity Fabry-Perot (FP) etalon. PMID:23038343

  7. Fabric pilling measurement using three-dimensional image

    NASA Astrophysics Data System (ADS)

    Ouyang, Wenbin; Wang, Rongwu; Xu, Bugao

    2013-10-01

    We introduce a stereovision system and the three-dimensional (3-D) image analysis algorithms for fabric pilling measurement. Based on the depth information available in the 3-D image, the pilling detection process starts from the seed searching at local depth maxima to the region growing around the selected seeds using both depth and distance criteria. After the pilling detection, the density, height, and area of individual pills in the image can be extracted to describe the pilling appearance. According to the multivariate regression analysis on the 3-D images of 30 cotton fabrics treated by the random-tumble and home-laundering machines, the pilling grade is highly correlated with the pilling density (R=0.923) but does not consistently change with the pilling height and area. The pilling densities measured from the 3-D images also correlate well with those counted manually from the samples (R=0.985).

  8. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces; the depth map contains numerous holes and large ambiguities in textureless or low-texture regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.

  9. Fast processing of microscopic images using object-based extended depth of field.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades

    2016-12-22

    Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie on different depths of field necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depths of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time. 
This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. This selective object processing scheme used in OEDoF can significantly reduce the overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red cell images revealed that our proposed method efficiently and effectively produced in-focus composite images. With the speed improvement of OEDoF, this proposed algorithm is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.
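    The object-based idea above can be sketched in a few lines: compute a per-pixel focus measure across the focal stack, but copy pixels from the sharpest slice only inside a foreground mask. This is an illustrative sketch, not the authors' OEDoF implementation; the Laplacian focus measure and the function name are assumptions.

```python
import numpy as np

def oedof_merge(stack, fg_mask):
    """Merge a focal stack, processing only foreground pixels.

    stack: (n, H, W) grayscale focal stack; fg_mask: (H, W) bool.
    Uses a discrete-Laplacian magnitude as a crude focus measure
    (one of many possible choices).
    """
    lap = np.abs(
        4 * stack
        - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)
        - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)        # sharpest slice index per pixel
    composite = stack[0].copy()          # background: any single slice
    ys, xs = np.nonzero(fg_mask)         # only foreground pixels are merged
    composite[ys, xs] = stack[best[ys, xs], ys, xs]
    return composite
```

Restricting the merge to `fg_mask` is what yields the speedup claimed above: the focus comparison and copy touch only foreground coordinates.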

  10. All-near-infrared multiphoton microscopy interrogates intact tissues at deeper imaging depths than conventional single- and two-photon near-infrared excitation microscopes

    PubMed Central

    Sarder, Pinaki; Yazdanfar, Siavash; Akers, Walter J.; Tang, Rui; Sudlow, Gail P.; Egbulefu, Christopher

    2013-01-01

    The era of molecular medicine has ushered in the development of microscopic methods that can report molecular processes in thick tissues with high spatial resolution. A commonality in deep-tissue microscopy is the use of near-infrared (NIR) lasers with single- or multiphoton excitation. However, the relationship between different NIR excitation microscopic techniques and the achievable imaging depths in tissue has not been established. We compared such depth limits for three NIR excitation techniques: NIR single-photon confocal microscopy (NIR SPCM), NIR multiphoton excitation with visible detection (NIR/VIS MPM), and all-NIR multiphoton excitation with NIR detection (NIR/NIR MPM). Homologous cyanine dyes provided the fluorescence. Intact kidneys were harvested after administration of kidney-clearing cyanine dyes in mice. NIR SPCM and NIR/VIS MPM achieved similar maximum imaging depths of ∼100 μm. NIR/NIR MPM enabled greater than fivefold deeper imaging (>500 μm) in the harvested kidneys. Although NIR/NIR MPM used 1550-nm excitation, where water absorption is relatively high, cell viability and histology studies demonstrate that the laser did not induce photothermal damage at the low laser powers used for the kidney imaging. This study provides guidance on the imaging depth capabilities of NIR excitation-based microscopic techniques and reveals the potential to multiplex information using these platforms. PMID:24150231

  11. An endoscopic diffuse optical tomographic method with high resolution based on the improved FOCUSS method

    NASA Astrophysics Data System (ADS)

    Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei

    2017-02-01

    Endoscopic diffuse optical tomography (DOT) has the potential to be applied to cancer-related imaging in tubular organs. Although DOT offers relatively large tissue penetration depth, endoscopic DOT is constrained by the narrow space inside tubular tissue and therefore achieves a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under this limited measurement condition. To improve the resolution, a new FOCUSS algorithm is developed along with an image reconstruction algorithm based on the effective detection range (EDR). The algorithm restricts computation to the region of interest (ROI) to reduce the dimensions of the matrix, and this shrinking method cuts down the computational burden. To reduce the complexity further, a double conjugate gradient method is used for the matrix inversion. For a typical inner size and optical properties of cervix-like tubular tissue, reconstructed images from simulation data demonstrate that the proposed method achieves image quality equivalent to the EDR-based method when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary. The quantitative ratios of the reconstructed absorption and reduced scattering coefficients reach 70% and 80%, respectively, at depths of up to 5 mm. Furthermore, two closely spaced targets at different depths can be separated from each other. The proposed method should be useful for the development of endoscopic DOT technologies for tubular organs.
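    The FOCUSS re-weighting that such reconstruction methods build on can be sketched as follows. This is the generic textbook iteration, not the paper's improved algorithm; the matrix `A`, the iteration count, and the uniform starting point are illustrative assumptions.

```python
import numpy as np

def focuss(A, b, iters=20, eps=1e-12):
    """Basic FOCUSS iterations for a sparse solution of A x = b.

    x_{k+1} = W_k (A W_k)^+ b  with  W_k = diag(|x_k|^{1/2});
    the re-weighting progressively concentrates energy on few entries.
    """
    m, n = A.shape
    x = np.ones(n)                     # neutral (non-informative) start
    for _ in range(iters):
        W = np.sqrt(np.abs(x)) + eps   # diagonal weights as a vector
        AW = A * W                     # equals A @ diag(W)
        x = W * (np.linalg.pinv(AW) @ b)
    return x
```

On an underdetermined system whose sparsest solution has a single nonzero entry, the iteration drives the other components toward zero.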

  12. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.
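    On the client side, tile-based streaming of the kind described reduces to computing which fixed-size tiles intersect the requested pixel region at a given resolution level. A minimal sketch of that bookkeeping; the tile size and function name are assumptions, not IIPImage's API.

```python
def tiles_for_region(x, y, w, h, tile=256):
    """Return (col, row) indices of the fixed-size tiles that cover the
    pixel region (x, y, w, h) at one resolution level."""
    c0, r0 = x // tile, y // tile
    c1, r1 = (x + w - 1) // tile, (y + h - 1) // tile
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

A viewer then requests only these tiles as the user pans and zooms, which is what keeps gigapixel navigation interactive.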

  13. Direct local building inundation depth determination in 3-D point clouds generated from user-generated flood images

    NASA Astrophysics Data System (ADS)

    Griesbaum, Luisa; Marx, Sabrina; Höfle, Bernhard

    2017-07-01

    In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Particularly pluvial urban floods, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3-D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest as well as an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth is achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracies.
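    Once the flood line has been mapped into the georeferenced point cloud, the building inundation depth is essentially a difference of elevations. A minimal sketch of that last step, assuming the flood-line and ground-level points have already been extracted from the cloud; medians give some robustness to photogrammetric outliers.

```python
import numpy as np

def inundation_depth(flood_line_pts, ground_pts):
    """Building inundation depth from a georeferenced 3-D point cloud.

    flood_line_pts: (n, 3) points on the facade at the mapped water line;
    ground_pts:     (m, 3) points at the building's ground level.
    Returns flood elevation minus ground elevation (same vertical datum).
    """
    flood_elev = np.median(flood_line_pts[:, 2])
    ground_elev = np.median(ground_pts[:, 2])
    return flood_elev - ground_elev
```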

  14. Focal switching of photochromic fluorescent proteins enables multiphoton microscopy with superior image contrast.

    PubMed

    Kao, Ya-Ting; Zhu, Xinxin; Xu, Fang; Min, Wei

    2012-08-01

    Probing biological structures and functions deep inside live organisms with light is highly desirable. Among the current optical imaging modalities, multiphoton fluorescence microscopy exhibits the best contrast for imaging scattering samples by employing a spatially confined nonlinear excitation. However, as the incident laser power drops exponentially with imaging depth into the sample due to the scattering loss, the out-of-focus background eventually overwhelms the in-focus signal, which defines a fundamental imaging-depth limit. Herein we significantly improve the image contrast for deep scattering samples by harnessing reversibly switchable fluorescent proteins (RSFPs) which can be cycled between bright and dark states upon light illumination. Two distinct techniques, multiphoton deactivation and imaging (MPDI) and multiphoton activation and imaging (MPAI), are demonstrated on tissue phantoms labeled with Dronpa protein. Such a focal switch approach can generate pseudo background-free images. Conceptually different from wave-based approaches that try to reduce light scattering in turbid samples, our work represents a molecule-based strategy that focuses on imaging probes.

  15. Focal switching of photochromic fluorescent proteins enables multiphoton microscopy with superior image contrast

    PubMed Central

    Kao, Ya-Ting; Zhu, Xinxin; Xu, Fang; Min, Wei

    2012-01-01

    Probing biological structures and functions deep inside live organisms with light is highly desirable. Among the current optical imaging modalities, multiphoton fluorescence microscopy exhibits the best contrast for imaging scattering samples by employing a spatially confined nonlinear excitation. However, as the incident laser power drops exponentially with imaging depth into the sample due to the scattering loss, the out-of-focus background eventually overwhelms the in-focus signal, which defines a fundamental imaging-depth limit. Herein we significantly improve the image contrast for deep scattering samples by harnessing reversibly switchable fluorescent proteins (RSFPs) which can be cycled between bright and dark states upon light illumination. Two distinct techniques, multiphoton deactivation and imaging (MPDI) and multiphoton activation and imaging (MPAI), are demonstrated on tissue phantoms labeled with Dronpa protein. Such a focal switch approach can generate pseudo background-free images. Conceptually different from wave-based approaches that try to reduce light scattering in turbid samples, our work represents a molecule-based strategy that focuses on imaging probes. PMID:22876358

  16. Visual saliency detection based on in-depth analysis of sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Shen, Siqiu; Ning, Chen

    2018-03-01

    Visual saliency detection has been receiving great attention in recent years since it can facilitate a wide range of applications in computer vision. A variety of saliency models have been proposed based on different assumptions, within which saliency detection via sparse representation is one of the newly arisen approaches. However, most existing sparse representation-based saliency detection methods utilize only partial characteristics of sparse representation and lack in-depth analysis; thus, they may have limited detection performance. Motivated by this, this paper proposes an algorithm for detecting visual saliency based on an in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned with randomly sampled image patches by means of inner product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on in-depth analysis of the sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating the sparse reconstruction errors, the most salient atoms, which tend to come from the most salient region, are screened out and removed from the discriminative dictionaries. Finally, an effective method is exploited for saliency map generation with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with some state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.
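    The sparse-coding building blocks used above (coding a patch over a dictionary and measuring its reconstruction error) can be sketched as follows. This is a generic orthogonal-matching-pursuit illustration, not the authors' discriminative-dictionary pipeline; the sparsity level and function names are assumptions.

```python
import numpy as np

def omp(D, p, k=5):
    """Greedy orthogonal matching pursuit: a k-sparse code of p over
    dictionary D (columns are unit-norm atoms). Returns the selected
    atom indices and the final residual."""
    resid, idx = p.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))   # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, idx], p, rcond=None)
        resid = p - D[:, idx] @ coef                      # re-fit, update residual
    return idx, resid

def saliency_score(D, patch):
    """Sparse reconstruction error as a (simplified) saliency cue:
    patches poorly represented by the dictionary score high."""
    _, resid = omp(D, patch)
    return float(np.linalg.norm(resid))
```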

  17. Deep-tissue two-photon imaging in brain and peripheral nerve with a compact high-pulse energy ytterbium fiber laser

    NASA Astrophysics Data System (ADS)

    Fontaine, Arjun K.; Kirchner, Matthew S.; Caldwell, John H.; Weir, Richard F.; Gibson, Emily A.

    2018-02-01

    Two-photon microscopy is a powerful tool of current scientific research, allowing optical visualization of structures below the surface of tissues. This is of particular value in neuroscience, where optically accessing regions within the brain is critical for continued advances in understanding neural circuits. However, two-photon imaging at significant depths has typically required Ti:sapphire-based amplifiers that are prohibitively expensive and bulky. In this study, we demonstrate deep-tissue two-photon imaging using a compact, inexpensive, turnkey-operated ytterbium fiber laser (Y-Fi, KM Labs). The laser is based on all-normal dispersion (ANDi), which provides short pulse durations and high pulse energies. Depth measurements obtained in ex vivo mouse cortex exceed those obtainable with standard two-photon microscopes using Ti:sapphire lasers. In addition to demonstrating the capability of deep-tissue imaging in the brain, we investigated imaging depth in highly scattering white matter, with measurements in the sciatic nerve showing limited optical penetration of heavily myelinated nerve tissue relative to grey matter.

  18. Hybrid model based unified scheme for endoscopic Cerenkov and radio-luminescence tomography: Simulation demonstration

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Cao, Xin; Ren, Qingyun; Chen, Xueli; He, Xiaowei

    2018-05-01

    Cerenkov luminescence imaging (CLI) is an imaging method that uses an optical imaging scheme to probe a radioactive tracer. Application of CLI with clinically approved radioactive tracers has opened an opportunity for translating optical imaging from preclinical to clinical applications. Such translation was further improved by the development of an endoscopic CLI system. However, two-dimensional endoscopic imaging cannot identify accurate depth or obtain quantitative information. Here, we present an imaging scheme to retrieve depth and quantitative information from endoscopic Cerenkov luminescence tomography, which can also be applied to endoscopic radio-luminescence tomography. In the scheme, we first constructed a physical model for image collection, and then a mathematical model for characterizing luminescent light propagation from the tracer to the endoscopic detector. The mathematical model is a hybrid light transport model combining the third-order simplified spherical harmonics approximation, the diffusion equation, and the radiosity equation to ensure both accuracy and speed. It integrates finite element discretization, regularization, and primal-dual interior-point optimization to retrieve the depth and quantitative information of the tracer. A numerical simulation based on a heterogeneous geometry was used to explore the feasibility of the unified scheme, demonstrating that it can provide a satisfactory balance between imaging accuracy and computational burden.
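    The regularized inversion at the heart of such tomographic schemes can be illustrated, in much simplified form, with Tikhonov-regularized least squares; the paper itself uses primal-dual interior-point optimization, so this stand-in and the names `J`, `y`, and `lam` are illustrative assumptions.

```python
import numpy as np

def reconstruct(J, y, lam=1e-2):
    """Tikhonov-regularized least squares:
    x = argmin ||J x - y||^2 + lam ||x||^2,
    solved via the normal equations (J^T J + lam I) x = J^T y."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ y)
```

The regularization term `lam` trades data fidelity against stability, which is the same role regularization plays in the paper's reconstruction.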

  19. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the expense of resolution; the angular resolution and the positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from the low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, in which we increased the resolution of a sub-aperture image threefold horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.
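    The alternation between registration (equivalent to depth estimation) and fusion (super-resolution) can be illustrated with a toy integer-shift version. This stand-in is far simpler than the paper's method (true super-resolution fuses onto a finer grid, and disparity is sub-pixel); all names and the brute-force search are assumptions.

```python
import numpy as np

def best_shift(ref, img, max_s=3):
    """Integer-shift registration: the shift applied to img that best
    matches ref (a stand-in for disparity/depth estimation)."""
    errs = {(dy, dx): np.sum((np.roll(img, (dy, dx), (0, 1)) - ref) ** 2)
            for dy in range(-max_s, max_s + 1)
            for dx in range(-max_s, max_s + 1)}
    return min(errs, key=errs.get)

def shift_and_add(views, ref_idx=0):
    """One fusion pass: register each sub-aperture view to the reference
    and average them (noise reduction; real SR would use a finer grid)."""
    ref = views[ref_idx]
    aligned = [np.roll(v, best_shift(ref, v), (0, 1)) for v in views]
    return np.mean(aligned, axis=0)
```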

  20. Rapid 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Harman, Philip V.; Flack, Julien; Fox, Simon; Dowley, Mark

    2002-05-01

    The conversion of existing 2D images to 3D is proving commercially viable and fulfills the growing need for high-quality stereoscopic images. This approach is particularly effective when creating content for the new generation of autostereoscopic displays that require multiple stereo images. The dominant technique for such content conversion is to develop a depth map for each frame of 2D material. The use of a depth map as part of the 2D-to-3D conversion process has a number of desirable characteristics: 1. The resolution of the depth map may be lower than that of the associated 2D image. 2. It can be highly compressed. 3. 2D compatibility is maintained. 4. Real-time generation of stereo, or multiple stereo pairs, is possible. The main disadvantage has been the laborious nature of the manual conversion techniques used to create depth maps from existing 2D images, which results in a slow and costly process. An alternative, highly productive technique has been developed based upon the use of machine learning algorithms (MLAs). This paper describes the application of MLAs to the generation of depth maps and presents the results of the commercial application of this approach.
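    The rendering step that makes a depth map useful can be sketched as simple depth-image-based rendering: shift each pixel horizontally in proportion to its depth to synthesize one eye's view. An illustrative sketch, not the commercial system's renderer; the disparity scaling and hole filling are assumptions.

```python
import numpy as np

def render_stereo_view(image, depth, max_disp=8):
    """Depth-image-based rendering of one eye's view.

    image: (H, W) array; depth: (H, W) in [0, 1] (1 = nearest).
    Disocclusion holes are filled from the left neighbor; a hole in the
    leftmost column may remain.
    """
    h, w = image.shape
    out = np.full((h, w), np.nan)
    disp = np.round(depth * max_disp).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disp[y, x]          # shift by per-pixel disparity
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
        for x in range(1, w):            # crude hole filling
            if np.isnan(out[y, x]):
                out[y, x] = out[y, x - 1]
    return out
```

Because the view is synthesized per frame from the 2D image plus its depth map, multiple stereo pairs for an autostereoscopic display come from the same depth map at different `max_disp` values.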

  1. Expansion-based passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1993-01-01

    A new technique of passive ranging which is based on utilizing the image-plane expansion experienced by every object as its distance from the sensor decreases is described. This technique belongs in the feature/object-based family. The motion and shape of a small window, assumed to be fully contained inside the boundaries of some object, is approximated by an affine transformation. The parameters of the transformation matrix are derived by initially comparing successive images, and progressively increasing the image time separation so as to achieve much larger triangulation baseline than currently possible. Depth is directly derived from the expansion part of the transformation. To a first approximation, image-plane expansion is independent of image-plane location with respect to the focus of expansion (FOE) and of platform maneuvers. Thus, an expansion-based method has the potential of providing a reliable range in the difficult image area around the FOE. In areas far from the FOE the shift parameters of the affine transformation can provide more accurate depth information than the expansion alone, and can thus be used similarly to the way they were used in conjunction with the Inertial Navigation Unit (INU) and Kalman filtering. However, the performance of a shift-based algorithm, when the shifts are derived from the affine transformation, would be much improved compared to current algorithms because the shifts - as well as the other parameters - can be obtained between widely separated images. Thus, the main advantage of this new approach is that, allowing the tracked window to expand and rotate, in addition to moving laterally, enables one to correlate images over a very long time span which, in turn, translates into a large spatial baseline - resulting in a proportionately higher depth accuracy.
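    The depth-from-expansion relation can be made concrete with a small sketch: estimate the scale factor s between two tracked point sets and, for pure approach at known own-speed v, apply Z1 = v*dt*s/(s-1). The RMS-spread scale estimate is a simplifying assumption; the technique above fits a full affine transformation.

```python
import numpy as np

def expansion_depth(pts1, pts2, v, dt):
    """Range from image-plane expansion, given own-speed v and frame
    separation dt.

    pts1, pts2: (n, 2) tracked feature positions in the two frames.
    The scale s is estimated as the ratio of RMS spreads about the
    centroid; since image size scales as 1/Z, s = Z1/Z2 and
    Z1 = v*dt*s/(s-1).
    """
    s1 = np.sqrt(np.mean(np.sum((pts1 - pts1.mean(0)) ** 2, axis=1)))
    s2 = np.sqrt(np.mean(np.sum((pts2 - pts2.mean(0)) ** 2, axis=1)))
    s = s2 / s1                    # expansion between the frames
    return v * dt * s / (s - 1.0)
```

Note how accuracy improves with wider frame separation: a larger dt yields a larger s - 1, exactly the long-baseline advantage argued above.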

  2. Expansion-based passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1993-01-01

    This paper describes a new technique of passive ranging which is based on utilizing the image-plane expansion experienced by every object as its distance from the sensor decreases. This technique belongs in the feature/object-based family. The motion and shape of a small window, assumed to be fully contained inside the boundaries of some object, is approximated by an affine transformation. The parameters of the transformation matrix are derived by initially comparing successive images, and progressively increasing the image time separation so as to achieve much larger triangulation baseline than currently possible. Depth is directly derived from the expansion part of the transformation. To a first approximation, image-plane expansion is independent of image-plane location with respect to the focus of expansion (FOE) and of platform maneuvers. Thus, an expansion-based method has the potential of providing a reliable range in the difficult image area around the FOE. In areas far from the FOE the shift parameters of the affine transformation can provide more accurate depth information than the expansion alone, and can thus be used similarly to the way they have been used in conjunction with the Inertial Navigation Unit (INU) and Kalman filtering. However, the performance of a shift-based algorithm, when the shifts are derived from the affine transformation, would be much improved compared to current algorithms because the shifts--as well as the other parameters--can be obtained between widely separated images. Thus, the main advantage of this new approach is that, allowing the tracked window to expand and rotate, in addition to moving laterally, enables one to correlate images over a very long time span which, in turn, translates into a large spatial baseline resulting in a proportionately higher depth accuracy.

  3. Underwater image enhancement through depth estimation based on random forest

    NASA Astrophysics Data System (ADS)

    Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han

    2017-11-01

    Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.
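    Two of the ingredients mentioned, the dark-channel feature and inversion of the image-formation model I = J*t + A*(1 - t), can be sketched as follows. The random-forest transmission estimator itself is omitted; the function names and patch size are assumptions, not the authors' implementation.

```python
import numpy as np

def dark_channel(rgb, patch=3):
    """Dark-channel feature (one of the features fed to the estimator):
    minimum over color channels and a local window."""
    mins = rgb.min(axis=2)
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def restore(rgb, transmission, ambient, t_min=0.1):
    """Invert I = J*t + A*(1-t) to recover the scene radiance J,
    clamping t to avoid amplifying noise where transmission is low."""
    t = np.maximum(transmission, t_min)[..., None]
    return (rgb - ambient) / t + ambient
```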

  4. Augmented reality 3D display based on integral imaging

    NASA Astrophysics Data System (ADS)

    Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua

    2017-02-01

    Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped by using a half-mirror to realize AR 3D display. The second AR system uses a micro-lens array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light and as transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens array HOE, providing a double-sided 3D display.

  5. Burn depth determination using high-speed polarization-sensitive Mueller optical coherence tomography with continuous polarization modulation

    NASA Astrophysics Data System (ADS)

    Todorović, Miloš; Ai, Jun; Pereda Cubian, David; Stoica, George; Wang, Lihong

    2006-02-01

    The National Health Interview Survey (NHIS) estimates more than 1.1 million burn injuries per year in the United States, with nearly 15,000 fatalities from wounds and related complications. An imaging modality capable of evaluating burn depths non-invasively is polarization-sensitive optical coherence tomography. We report on the use of a high-speed, fiber-based Mueller-matrix OCT system with continuous source-polarization modulation for burn depth evaluation. The new system is capable of imaging at near video-quality frame rates (8 frames per second) with a resolution of 10 μm in biological tissue (index of refraction: 1.4) and a sensitivity of 78 dB. The sample arm optics is integrated in a hand-held probe, simplifying the in vivo experiments. The applicability of the system to burn depth determination is demonstrated using biological samples of porcine tendon and porcine skin. The results show an improved imaging depth (1 mm in tendon) and clear localization of the thermally damaged region. The burnt area determined from OCT images compares well with histology, demonstrating the system's potential for burn depth determination.

  6. The Role of Binocular Disparity in Stereoscopic Images of Objects in the Macaque Anterior Intraparietal Area

    PubMed Central

    Romero, Maria C.; Van Dromme, Ilse C. L.; Janssen, Peter

    2013-01-01

    Neurons in the macaque Anterior Intraparietal area (AIP) encode depth structure in random-dot stimuli defined by gradients of binocular disparity, but the importance of binocular disparity in real-world objects for AIP neurons is unknown. We investigated the effect of binocular disparity on the responses of AIP neurons to images of real-world objects during passive fixation. We presented stereoscopic images of natural and man-made objects in which the disparity information was congruent or incongruent with disparity gradients present in the real-world objects, and images of the same objects where such gradients were absent. Although more than half of the AIP neurons were significantly affected by binocular disparity, the great majority of AIP neurons remained image selective even in the absence of binocular disparity. AIP neurons tended to prefer stimuli in which the depth information derived from binocular disparity was congruent with the depth information signaled by monocular depth cues, indicating that these monocular depth cues have an influence upon AIP neurons. Finally, in contrast to neurons in the inferior temporal cortex, AIP neurons do not represent images of objects in terms of categories such as animate-inanimate, but utilize representations based upon simple shape features including aspect ratio. PMID:23408970

  7. Retinal fundus imaging with a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos

    2018-02-01

    Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, in which an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses which projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image in which everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
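    The digital refocusing described reduces to shift-and-sum over sub-aperture views: each view is translated in proportion to its aperture offset and a chosen focal parameter, then averaged. A minimal sketch under that standard light-field model (not the authors' device code; the dict layout and integer shifts are assumptions).

```python
import numpy as np

def refocus(subviews, alpha):
    """Digital refocusing by shift-and-sum.

    subviews: dict mapping (u, v) aperture offsets to (H, W) images;
    alpha: focal parameter selecting the plane brought into focus.
    """
    acc = None
    for (u, v), img in subviews.items():
        shift = (int(round(alpha * v)), int(round(alpha * u)))
        shifted = np.roll(img, shift, (0, 1))   # shift ∝ aperture offset
        acc = shifted if acc is None else acc + shifted
    return acc / len(subviews)
```

Sweeping `alpha` produces the focal stack mentioned above from one snapshot, with no mechanical refocusing.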

  8. In vivo imaging of human photoreceptor mosaic with wavefront sensorless adaptive optics optical coherence tomography.

    PubMed

    Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V

    2015-02-01

    Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann-Shack wavefront sensor used to measure aberrations with a depth-resolved, image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper were essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation.
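    The image-driven, sensorless optimization can be sketched as a modal coordinate search: perturb one corrector mode at a time and keep the amplitude that maximizes an image-quality metric computed from the acquired data. The mode count, trial amplitudes, and metric here are illustrative assumptions, not the paper's fast modal algorithm.

```python
import numpy as np

def wsao_optimize(metric, n_modes=5, amps=(-1.0, 0.0, 1.0), rounds=2):
    """Modal wavefront-sensorless search.

    metric: callable mapping a coefficient vector (one entry per mirror
    mode) to an image-quality score; higher is better. For each mode we
    try a few amplitudes and keep the best, cycling over all modes."""
    coeffs = np.zeros(n_modes)
    for _ in range(rounds):
        for m in range(n_modes):
            trials = []
            for a in amps:
                c = coeffs.copy()
                c[m] = a                       # trial amplitude for mode m
                trials.append((metric(c), a))
            coeffs[m] = max(trials)[1]         # keep the best-scoring amplitude
    return coeffs
```

In a real system the metric would be, e.g., sharpness of the live OCT volume, so each trial costs one acquisition; that is why fast modal schemes matter.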

  9. Restoration of distorted depth maps calculated from stereo sequences

    NASA Technical Reports Server (NTRS)

    Damour, Kevin; Kaufman, Howard

    1991-01-01

    A model-based Kalman estimator is developed for spatial-temporal filtering of noise and other degradations in velocity and depth maps derived from image sequences or cinema. As an illustration of the proposed procedures, edge information from image sequences of rigid objects is used in the processing of the velocity maps by selecting from a series of models for directional adaptive filtering. Adaptive filtering then allows for noise reduction while preserving sharpness in the velocity maps. Results from several synthetic and real image sequences are given.
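    The temporal part of such model-based filtering can be illustrated with a per-pixel scalar Kalman filter under a random-walk model. The estimator above is richer (edge-adaptive, directional), so this is only a minimal sketch; the noise parameters are assumptions.

```python
import numpy as np

def kalman_smooth(depth_frames, q=1e-3, r=1e-1):
    """Per-pixel scalar Kalman filtering of a depth-map sequence.

    Random-walk state model: process noise q, measurement noise r.
    All pixels are filtered in parallel as numpy arrays.
    """
    est = depth_frames[0].astype(float)
    p = np.full_like(est, 1.0)           # estimate variance per pixel
    out = [est.copy()]
    for z in depth_frames[1:]:
        p = p + q                        # predict step
        k = p / (p + r)                  # Kalman gain
        est = est + k * (z - est)        # update with the new frame
        p = (1.0 - k) * p
        out.append(est.copy())
    return out
```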

  10. The impact of absorption coefficient on polarimetric determination of Berry phase based depth resolved characterization of biomedical scattering samples: a polarized Monte Carlo investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, Justin S; Koju, Vijay; John, Dwayne O

    2016-01-01

    The modulation of the state of polarization of photons due to scatter generates associated geometric phase that is being investigated as a means for decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a Berry-phase-tracking implementation of a polarized Monte Carlo code, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.

  11. Depth-estimation-enabled compound eyes

    NASA Astrophysics Data System (ADS)

    Lee, Woong-Bi; Lee, Heung-No

    2018-04-01

    Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
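    The abstract derives distances from disparities between overlapping ommatidial receptive fields; the underlying relation in a classic two-view setup is plain triangulation. A sketch under simple pinhole-stereo assumptions (COMPU-EYE's actual ommatidial geometry differs in detail):

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Triangulated depth: z = baseline * focal / disparity.

    baseline_m: separation between the two viewpoints (meters),
    focal_px: focal length in pixels, disparity_px: image-space shift.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px
```

    With a 0.1 m baseline and a 500 px focal length, a 10 px disparity corresponds to a 5 m object distance; larger disparities mean closer objects.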

  12. Thermographic imaging for high-temperature composite materials: A defect detection study

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Bodis, James R.; Bishop, Chip

    1995-01-01

    The ability of a thermographic imaging technique for detecting flat-bottom hole defects of various diameters and depths was evaluated in four composite systems (two types of ceramic matrix composites, one metal matrix composite, and one polymer matrix composite) of interest as high-temperature structural materials. The holes ranged from 1 to 13 mm in diameter and 0.1 to 2.5 mm in depth in samples approximately 2-3 mm thick. The thermographic imaging system utilized a scanning mirror optical system and infrared (IR) focusing lens in conjunction with a mercury cadmium telluride infrared detector element to obtain high resolution infrared images. High intensity flash lamps located on the same side as the infrared camera were used to heat the samples. After heating, up to 30 images were sequentially acquired at 70-150 msec intervals. Limits of detectability based on depth and diameter of the flat-bottom holes were defined for each composite material. Ultrasonic and radiographic images of the samples were obtained and compared with the thermographic images.

  13. Oriented modulation for watermarking in direct binary search halftone images.

    PubMed

    Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der

    2012-09-01

    In this paper, a halftoning-based watermarking method is presented. This method enables high-pixel-depth watermark embedding while maintaining high image quality. The technique is capable of embedding watermarks with pixel depths of up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel-oriented high-efficiency direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and a naïve Bayes classifier is used to analyze the extracted features and ultimately to decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and achieves the processing efficiency and robustness needed for printing applications.

  14. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, aiming to produce S3D content based on color-plus-depth signals, we propose a general framework for depth mapping to optimize visual comfort in S3D display. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
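    The global stage described above amounts to remapping depths into a range that is comfortable to view; a minimal linear sketch (the ranges are placeholders, and the paper's actual mapping also accounts for the adjusted zero-disparity plane and a local optimization stage):

```python
def remap_depth(depth_values, src_range, dst_range):
    """Linearly remap each depth from src_range into dst_range
    (the target 'comfort zone')."""
    s0, s1 = src_range
    d0, d1 = dst_range
    scale = (d1 - d0) / (s1 - s0)
    return [d0 + (z - s0) * scale for z in depth_values]
```

    For example, compressing a 0–10 depth range into a narrow 2–4 comfort zone maps the extremes to the new bounds and scales everything in between proportionally.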

  15. Single-channel stereoscopic ophthalmology microscope based on TRD

    NASA Astrophysics Data System (ADS)

    Radfar, Edalat; Park, Jihoon; Lee, Sangyeob; Ha, Myungjin; Yu, Sungkon; Jang, Seulki; Jung, Byungjo

    2016-03-01

    A stereoscopic imaging modality was developed for ophthalmology surgical microscopes. A previous study introduced a single-channel stereoscopic video imaging modality based on a transparent rotating deflector (SSVIM-TRD), in which two different view angles, and hence image disparity, are generated by imaging through a transparent rotating deflector (TRD) mounted on a stepping motor and placed in a lens system. In this configuration, the image disparity is a function of the refractive index and the rotation angle of the TRD. The real-time single-channel stereoscopic ophthalmology microscope (SSOM) based on the TRD improves on the earlier modality in real-time control and programming, imaging speed, and illumination method. Image quality assessments were performed to investigate image quality and stability during TRD operation; the results showed little difference in image quality in terms of the stability of the structural similarity (SSIM) index. A subjective analysis with 15 blinded observers showed a significant improvement in depth perception capability. Together with these evaluation results, preliminary imaging of a rabbit eye indicated that the SSOM could be used as an ophthalmic operating microscope to overcome some of the limitations of conventional ones.

  16. The morphological changes of optically cleared cochlea using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lee, Jaeyul; Song, Jaewon; Jeon, Mansik; Kim, Jeehyun

    2017-02-01

    In this study, we monitored the optical clearing effect by immersing ex vivo guinea pig cochlea samples in ethylenediaminetetraacetic acid (EDTA) to study the internal microstructures in the morphology of the guinea pig cochlea. The imaging limitations imposed by the cochlear structures were overcome by the optical clearing technique. The study then determined the approximate immersion duration in EDTA required for optimal depth visibility in the cochlea samples. We thus applied decalcification-based optical clearing to the samples to enhance depth visualization of internal microstructures using swept-source optical coherence tomography (OCT). The obtained nondestructive two-dimensional OCT images demonstrated the feasibility of the proposed method by providing clearly visible microstructures in the depth direction as a result of decalcification. The best clearing outcomes for the guinea pig cochlea were obtained after 14 consecutive days. The quantitative assessment verified the increase in intensity as well as in the thickness measurements of the internal microstructures. With this method, difficulties in imaging the internal cochlear microstructures of guinea pigs can be avoided. The results verified that the depth visibility of the decalcified ex vivo samples was enhanced; the proposed EDTA-based optical clearing method can therefore be considered a potential tool for depth-enhanced OCT visualization in the guinea pig.

  17. Fast range estimation based on active range-gated imaging for coastal surveillance

    NASA Astrophysics Data System (ADS)

    Kong, Qingshan; Cao, Yinan; Wang, Xinwei; Tong, Youwan; Zhou, Yan; Liu, Yuliang

    2012-11-01

    Coastal surveillance is important for search and rescue, detection of illegal immigration, harbor security, and similar tasks, and range estimation is critical for precisely detecting a target. A range-gated laser imaging sensor is well suited to high-accuracy ranging, especially at night without moonlight. Generally, before the target is detected, the delay time must be varied until the target is captured. The sensor has two operating modes: a passive imaging mode and a gate-viewing mode. First, in passive mode, the sensor only captures scenes with the ICCD; once an object appears in the monitored area, its coarse range is obtained from the imaging geometry/projective transform. The sensor then switches to gate-viewing mode; applying microsecond laser pulses and a matched sensor gate width, the range of targets is obtained from at least two consecutive images with a trapezoid-shaped range-intensity profile. Based on the first step, we can compute the rough range and quickly set the delay time at which the target is detected. This technique overcomes the depth-resolution limitation of 3D active imaging and enables super-resolution depth mapping with reduced image-data processing. With these two steps, the distance between object and sensor is obtained quickly.
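    The delay-to-distance bookkeeping behind range gating is the standard time-of-flight relation R = c·t/2 (round trip). A small sketch of that arithmetic, with the gate width defining the imaged depth slice (function names are illustrative, not from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_range(delay_s):
    """Range to the near edge of the gate: R = c * t / 2 (round trip)."""
    return C * delay_s / 2.0

def gate_depth_slice(delay_s, gate_width_s):
    """(near, far) bounds of the depth slice captured by one gate,
    assuming a laser pulse much shorter than the gate width."""
    return gate_range(delay_s), gate_range(delay_s + gate_width_s)
```

    A 2 µs delay therefore corresponds to a target roughly 300 m away, and a 1 µs gate width spans about 150 m of depth.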

  18. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be reliably estimated only on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements of solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose a fast, efficient, low-latency, line-scanning-based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition, we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends toward faces, our solution corrects the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.

  19. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. At the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to previous methods.

  20. Coherent diffraction surface imaging in reflection geometry.

    PubMed

    Marathe, Shashidhara; Kim, S S; Kim, S N; Kim, Chan; Kang, H C; Nickles, P V; Noh, D Y

    2010-03-29

    We present a reflection-based coherent diffraction imaging method which can be used to reconstruct a nonperiodic surface image from a diffraction amplitude measured in reflection geometry. Using a He-Ne laser, we demonstrated that a surface image can be reconstructed solely from the intensity reflected from a surface, without relying on any prior knowledge of the sample object or the object support. The reconstructed phase image of the exit wave is particularly interesting since it can be used to obtain quantitative information about the surface depth profile or the phase change during the reflection process. We believe that this work will broaden the application areas of coherent diffraction imaging techniques using light sources with limited penetration depth.

  1. Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2018-05-01

    To present and evaluate a straightforward implementation of a marker-less, respiratory motion-tracking process utilizing the Kinect v2 camera as a gating tool during 4DCT or during radiotherapy treatments. Utilizing the depth sensor on the Kinect as well as author-written C# code, the respiratory motion of a subject was tracked by recording depth values obtained at user-selected points on the subject, with each point representing one pixel on the depth image. As a patient breathes, specific anatomical points on the chest/abdomen move slightly within the depth image across pixels. By tracking how depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained based on the changing depth values of the selected pixel. Tracking these values was implemented via a marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare the respiratory traces obtained by each, using two different subjects. Analysis of the depth information from the Kinect for purposes of phase- and amplitude-based binning correlated well with the RPM and Anzai systems. Interquartile range (IQR) values were obtained comparing times correlated with specific amplitude and phase percentages against each product. The IQR time spans indicated the Kinect would measure specific percentage values within 0.077 s for Subject 1 and 0.164 s for Subject 2 when compared to values obtained with RPM or Anzai. For 4DCT scans, these times correspond to less than 1 mm of couch movement and would create an offset of half an acquired slice. By tracking the depth values of user-selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized utilizing the Kinect, with results comparable to those of the Varian RPM and Anzai belt. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
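    Read as data flow, the method samples one fixed (row, col) pixel in every depth frame, and its change from a baseline becomes the respiratory trace. A minimal sketch in Python (the original is author-written C# against the Kinect SDK; the list-of-lists frame layout here is an assumption):

```python
def respiratory_trace(depth_frames, row, col):
    """Respiratory trace from one fixed pixel: the depth of (row, col)
    in each frame, relative to the first frame's value (e.g. in mm)."""
    baseline = depth_frames[0][row][col]
    return [frame[row][col] - baseline for frame in depth_frames]
```

    Because the pixel, not the anatomy, is tracked, no marker is needed: as the chest rises, the surface seen at that pixel moves closer to the camera and its depth value drops.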

  2. Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Wimmer, Michael

    2016-02-01

    With the enormous advances of acquisition technology over the last years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud, and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method has been proposed that textures a set of depth maps in a preprocessing step and stitches them at runtime to represent large scenes. However, the rendering performance of this method depends strongly on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method to break these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from image cameras and then perform a graph-cut-based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify for each view ray which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.

  3. Overcoming sampling depth variations in the analysis of broadband hyperspectral images of breast tissue (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kho, Esther; de Boer, Lisanne L.; Van de Vijver, Koen K.; Sterenborg, Henricus J. C. M.; Ruers, Theo J. M.

    2017-02-01

    Worldwide, up to 40% of breast-conserving surgeries require additional operations due to positive resection margins. We propose to reduce this percentage by using hyperspectral imaging for resection margin assessment during surgery. Spectral hypercubes were collected from 26 freshly excised breast specimens with a pushbroom camera (900-1700 nm). Computer simulations of the penetration depth in breast tissue suggest a strong variation in sampling depth (~0.5-10 mm) over this wavelength range. This was confirmed with a breast-tissue-mimicking phantom study. Smaller penetration depths are observed in wavelength regions with high water and/or fat absorption. Consequently, tissue classification based on spectral analysis over the whole wavelength range becomes complicated. This is especially a problem in highly inhomogeneous human tissue. We developed a method, called derivative imaging, which allows accurate tissue analysis without the impediment of dissimilar sampling volumes. A few assumptions were made based on previous research. First, the spectra acquired with our camera from breast tissue are mainly shaped by fat and water absorption. Second, tumor tissue contains less fat and more water than healthy tissue. Third, scattering slopes of different tissue types are assumed to be alike. In derivative imaging, the derivatives are calculated at wavelengths a few nanometers apart, ensuring similar penetration depths. The wavelength choice determines the accuracy of the method and the resolution. Preliminary results on 3 breast specimens indicate a classification accuracy of 93% when using wavelength regions characterized by water and fat absorption. The sampling depths at these regions are 1 mm and 5 mm.
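    The derivative-imaging idea reduces each pixel's spectrum to a finite difference between two nearby bands, so that both bands see a similar sampling depth. A sketch on a hypothetical hypercube laid out as rows × pixels × bands (the data layout and band choice are illustrative assumptions, not the authors' implementation):

```python
def derivative_image(hypercube, wavelengths, wl_a, wl_b):
    """Per-pixel spectral derivative (R(wl_b) - R(wl_a)) / (wl_b - wl_a)
    between two nearby wavelength bands."""
    ia = wavelengths.index(wl_a)
    ib = wavelengths.index(wl_b)
    dw = float(wl_b - wl_a)
    return [[(pixel[ib] - pixel[ia]) / dw for pixel in row]
            for row in hypercube]
```

    Classification then runs on these derivative values in the fat- and water-dominated regions rather than on raw reflectance, sidestepping the sampling-depth mismatch across the full 900-1700 nm range.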

  4. Modeling the depth-sectioning effect in reflection-mode dynamic speckle-field interferometric microscopy

    PubMed Central

    Zhou, Renjie; Jin, Di; Hosseini, Poorya; Singh, Vijay Raj; Kim, Yang-hyo; Kuang, Cuifang; Dasari, Ramachandra R.; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    Unlike most optical coherence microscopy (OCM) systems, dynamic speckle-field interferometric microscopy (DSIM) achieves depth sectioning through the spatial-coherence gating effect. Under high numerical aperture (NA) speckle-field illumination, our previous experiments have demonstrated less than 1 μm depth resolution in reflection-mode DSIM, while doubling the diffraction-limited resolution as under structured illumination. However, there has not been a physical model that rigorously describes the speckle imaging process, in particular one explaining the sectioning effect under high illumination and imaging NA settings in DSIM. In this paper, we develop such a model based on diffraction tomography theory and speckle statistics. Using this model, we calculate the system response function, which is used to further obtain the depth resolution limit in reflection-mode DSIM. The theoretically calculated depth resolution limit is in excellent agreement with experimental results. We envision that our physical model will not only help in understanding the imaging process in DSIM, but also enable better design of such systems for depth-resolved measurements in biological cells and tissues. PMID:28085800

  5. In vivo quantitative imaging of point-like bioluminescent and fluorescent sources: Validation studies in phantoms and small animals post mortem

    NASA Astrophysics Data System (ADS)

    Comsa, Daria Craita

    2008-10-01

    There is a real need for improved small animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique to quantify bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4 % in the effective attenuation coefficient led to 4 % error in the retrieved depth for source depths of up to 12mm, while the error in the retrieved source strength increased from 5.5 % at 2mm depth, to 18 % at 12mm depth. 
Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10mm could be estimated within 8 %, and the relative source strength within 20 %. For sources 14mm deep, the inaccuracy in determining the relative source strength increased to 30 %. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15 % in the optical properties would give rise to errors of +/-0.7 mm in the retrieved depth and the source strength would be over- or under-estimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 from the value estimated in a controlled medium. 
Nonetheless, similarly to the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.
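    The core inverse problem in the thesis — matching a diffusion-theory image model to a measured profile to recover source depth and strength — is solved there with a Levenberg-Marquardt iteration. As a loose sketch of the same idea under stated assumptions (a simplified exp(-μ_eff·r)/r point-source profile, an illustrative `mu_eff`, and a brute-force depth grid instead of LM), the fit can be written as:

```python
import math

def forward(depth, offsets, mu_eff):
    """Unit-strength surface profile of a point source at `depth`:
    exp(-mu_eff * r) / r at lateral offsets, r = sqrt(depth^2 + x^2).
    A simplified stand-in for the full diffusion-theory model."""
    out = []
    for x in offsets:
        r = math.sqrt(depth * depth + x * x)
        out.append(math.exp(-mu_eff * r) / r)
    return out

def fit_depth_strength(measured, offsets, mu_eff, depth_grid):
    """Grid-search the depth; for each candidate, the best strength is
    the closed-form linear least-squares scale. Returns (depth, strength)."""
    best = None
    for d in depth_grid:
        model = forward(d, offsets, mu_eff)
        s = (sum(m * y for m, y in zip(model, measured))
             / sum(m * m for m in model))
        sse = sum((s * m - y) ** 2 for m, y in zip(model, measured))
        if best is None or sse < best[0]:
            best = (sse, d, s)
    return best[1], best[2]
```

    On noiseless synthetic data this recovers a planted depth and strength exactly; sensitivity behavior like the thesis's (a 4 % error in μ_eff giving ~4 % depth error) could be explored by perturbing `mu_eff` in the fit.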

  6. High resolution crustal image of South California Continental Borderland: Reverse time imaging including multiples

    NASA Astrophysics Data System (ADS)

    Bian, A.; Gantela, C.

    2014-12-01

    Strong multiples were observed in marine seismic data of the Los Angeles Regional Seismic Experiment (LARSE). It is crucial to eliminate these multiples in conventional ray-based or one-way wave-equation-based depth imaging methods. Because multiples carry information about the target zone along their travel paths, it is possible to use them as signal to improve the illumination coverage and thus enhance the imaging of structural boundaries. Reverse time migration including multiples is a two-way wave-equation-based prestack depth imaging method that uses both primaries and multiples to map structural boundaries. Several factors, including the source wavelet, velocity model, background noise, data acquisition geometry, and preprocessing workflow, may influence image quality. The source wavelet is estimated from the direct arrival of the marine seismic data. The migration velocity model is derived from an integrated model-building workflow, and the sharp velocity interfaces near the sea bottom must be preserved in order to generate multiples in the forward and backward propagation steps. The strong-amplitude, low-frequency marine background noise must be removed before the final imaging process. High-resolution reverse time image sections of LARSE Lines 1 and 2 show five interfaces: the sea bottom, the base of the sedimentary basins, the top of the Catalina Schist, a deep layer, and a possible pluton boundary. The Catalina Schist shows highs at the San Clemente Ridge, Emery Knoll, and Catalina Ridge, under the Catalina Basin on both lines, and a minor high under Avalon Knoll. The high of the anticlinal fold in Line 1 is under the north edge of Emery Knoll and under the San Clemente fault zone. An area devoid of reflection features is interpreted as the sides of an igneous plume.

  7. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    NASA Astrophysics Data System (ADS)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect in reaching good performance is the way the scene space is sampled to create plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
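    The "inverse sampling" the experiments favor places plane hypotheses uniformly in inverse depth (equivalently, disparity): dense near the camera, sparse far away. A minimal sketch of that sampling rule (the paper's cross-ratio construction derives these locations directly in image space; here they are generated in scene space for clarity):

```python
def inverse_depth_planes(z_near, z_far, n):
    """n plane depths spaced uniformly in inverse depth between
    z_near and z_far: dense close to the camera, sparse far away."""
    if n < 2:
        raise ValueError("need at least two planes")
    inv_near, inv_far = 1.0 / z_near, 1.0 / z_far
    step = (inv_far - inv_near) / (n - 1)
    return [1.0 / (inv_near + i * step) for i in range(n)]
```

    For z_near = 1 m, z_far = 10 m, and three planes, the middle plane lands near 1.8 m rather than at the linear midpoint of 5.5 m, concentrating hypotheses where perspective projection makes depth differences visible.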

  8. In vivo microwave-based thermoacoustic tomography of rats (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lin, Li; Zhou, Yong; Wang, Lihong V.

    2016-03-01

    Microwave-based thermoacoustic tomography (TAT), based on the measurement of ultrasonic waves induced by microwave pulses, can reveal tissue dielectric properties that may be closely related to the physiological and pathological status of the tissues. Using microwaves as the excitation source improved imaging depth because of their deep penetration into biological tissues. We demonstrate, for the first time, in vivo microwave-based thermoacoustic imaging in rats. The transducer is rotated around the rat in a full circle, providing a full two-dimensional view. Instead of a flat ultrasonic transducer, we used a virtual line detector based on a cylindrically focused transducer. A 3 GHz microwave source with 0.6 µs pulse width and an electromagnetically shielded transducer with 2.25 MHz central frequency provided clear cross-sectional images of the rat's body. The high imaging contrast, based on the tissue's rate of absorption, and the ultrasonically defined spatial resolution combine to reveal the spine, kidney, muscle, and other deeply seated anatomical features in the rat's abdominal cavity. This non-invasive and non-ionizing imaging modality achieved an imaging depth beyond 6 cm in the rat's tissue. Cancer diagnosis based on information about tissue properties from microwave band TAT can potentially be more accurate than has previously been achievable.

  9. Vibration-based photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Li, Rui; Rajian, Justin R.; Wang, Pu; Slipchenko, Mikhail N.; Cheng, Ji-Xin

    2013-03-01

    Photoacoustic imaging employing molecular overtone vibration as the contrast mechanism opens a new avenue for deep-tissue imaging with chemical bond selectivity. Here, we demonstrate vibration-based photoacoustic tomography with an imaging depth on the centimeter scale. To provide sufficient pulse energy at the overtone transition wavelengths, we constructed a compact, barium nitrate crystal-based Raman laser for excitation of the second overtone of the C-H bond. Using a 5-ns Nd:YAG laser as the pump source, up to 105 mJ pulse energy at 1197 nm was generated. Vibrational photoacoustic spectroscopy and tomography of a phantom (polyethylene tube) immersed in whole milk were performed. With a pulse energy of 47 mJ on the milk surface, a penetration depth of up to 2.5 cm was reached with a signal-to-noise ratio of 12.

  10. Effective Vehicle-Based Kangaroo Detection for Collision Warning Systems Using Region-Based Convolutional Networks.

    PubMed

    Saleh, Khaled; Hossny, Mohammed; Nahavandi, Saeid

    2018-06-12

    Traffic collisions between kangaroos and motorists are on the rise on Australian roads. A recent report estimated that more than 20,000 kangaroo-vehicle collisions occurred in Australia in 2015 alone. In this work, we propose a vehicle-based framework for kangaroo detection in urban and highway traffic environments that could be used for collision warning systems. Our proposed framework is based on region-based convolutional neural networks (RCNN). Given the scarcity of labeled data of kangaroos in traffic environments, we utilized our state-of-the-art data-generation pipeline to generate 17,000 synthetic depth images of traffic scenes with annotated kangaroo instances. We trained our RCNN-based framework on a subset of the generated synthetic depth images. The proposed framework achieved an average precision (AP) score of 92% over all the synthetic depth image test sets. We compared our framework against other baseline approaches and outperformed them by more than 37% in AP score over all the test sets. Additionally, we evaluated the generalization performance of the framework on real live data and achieved resilient detection accuracy without any further fine-tuning.
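
    The average precision (AP) metric quoted above is computed from confidence-ranked detections. A minimal sketch (assuming detections have already been matched to ground truth, which the paper's evaluation protocol would define):

    ```python
    import numpy as np

    def average_precision(scores, labels):
        """AP as the mean of precision values at each true-positive hit,
        with detections ranked by descending confidence.

        scores : confidence of each detection
        labels : 1 if the detection matches a ground-truth object, else 0
        """
        order = np.argsort(scores)[::-1]
        tp = np.asarray(labels, dtype=float)[order]
        cum_tp = np.cumsum(tp)
        precision = cum_tp / (np.arange(len(tp)) + 1.0)
        return np.sum(precision * tp) / tp.sum()
    ```

    For example, three detections scored 0.9, 0.8, 0.7 with the middle one a false positive yield precisions 1 and 2/3 at the two hits, so AP = 5/6.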

  11. Expanding the Detection of Traversable Area with RealSense for the Visually Impaired

    PubMed Central

    Yang, Kailun; Wang, Kaiwei; Hu, Weijian; Bai, Jian

    2016-01-01

    The introduction of RGB-Depth (RGB-D) sensors into the area of assisting visually impaired people (VIP) has stirred great interest among researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field angle and a sparse depth map at distance, which hampers broader and longer traversability awareness. This paper proposes an effective approach to expand the detection of traversable area based on an RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of the RealSense is enhanced with large-scale IR image matching and RGB image-guided filtering. A preliminary traversable area is obtained with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded region growing algorithm, combining the depth image and RGB image, then greatly enlarges the preliminary traversable area. This is critical not only for avoiding close obstacles, but also for superior path planning during navigation. The proposed approach has been tested on a score of indoor and outdoor scenarios and integrated into an assistance system consisting of a wearable prototype and an audio interface. Furthermore, the approach proved useful and reliable in a field test with eight visually impaired volunteers. PMID:27879634
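
    The preliminary segmentation step above rests on RANSAC plane fitting of the point cloud. A compact sketch of that generic step (not the authors' implementation; parameters are illustrative):

    ```python
    import numpy as np

    def ransac_plane(points, n_iter=200, tol=0.05, rng=None):
        """Fit a dominant plane (e.g. the ground) to a 3-D point cloud.

        Returns the unit normal n and offset d of the plane n.p = d with
        the largest number of inliers within distance `tol`.
        """
        rng = np.random.default_rng(rng)
        best_inliers, best_model = 0, None
        for _ in range(n_iter):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-12:          # degenerate (collinear) sample
                continue
            n = n / norm
            d = n @ p0
            inliers = np.sum(np.abs(points @ n - d) < tol)
            if inliers > best_inliers:
                best_inliers, best_model = inliers, (n, d)
        return best_model
    ```

    Pixels whose surface normals then agree with the fitted ground normal seed the traversable region.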

  12. Selectivity analysis of an incoherent grating imaged in a photorefractive crystal

    NASA Astrophysics Data System (ADS)

    Tebaldi, Myrian; Forte, Gustavo; Bolognini, Nestor; Lasprilla A., Maria del Carmen

    2018-04-01

    In this work, the diffraction efficiency of a volume phase grating incoherently stored in a photorefractive BSO crystal is analyzed theoretically and experimentally. The results confirm the theoretical proposal based on coupled wave theory, adopting a new grating depth parameter associated with the write-in incoherent optical system. The selectivity behavior is governed by the exit pupil diameter of the imaging recording system, which controls the depth of the three-dimensional image distribution along the propagation direction. Two incoherent gratings are multiplexed in a single crystal and reconstructed without cross-talk.

  13. Automatic Focusing for a 675 GHz Imaging Radar with Target Standoff Distances from 14 to 34 Meters

    NASA Technical Reports Server (NTRS)

    Tang, Adrian; Cooper, Ken B.; Dengler, Robert J.; Llombart, Nuria; Siegel, Peter H.

    2013-01-01

    This paper discusses the issue of limited focal depth for high-resolution imaging radar operating over a wide range of standoff distances. We describe a technique for automatically focusing a THz imaging radar system using translational optics combined with range estimation based on a reduced chirp bandwidth setting. The demonstrated focusing algorithm estimates the correct focal depth for desired targets in the field of view at unknown standoffs and in the presence of clutter, providing good imagery at 14 to 30 meters of standoff.

  14. 3D imaging of translucent media with a plenoptic sensor based on phase space optics

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Shu, Bohong; Du, Shaojun

    2015-05-01

    Traditional stereo imaging does not work for dynamic translucent media, because such media show no obvious characteristic patterns and multiple cameras are not permitted in most cases. Phase space optics can solve the problem by extracting depth information directly from the "space-spatial frequency" distribution of the target, obtained with a single-lens plenoptic sensor. This paper discusses the representation of depth information in phase space data and the corresponding calculation algorithms for different transparencies. A 3D imaging example of a waterfall is given at the end.

  15. Temporal Surface Reconstruction

    DTIC Science & Technology

    1991-05-03

    and the convergence cannot be guaranteed. Maybank [68] investigated alternative incremental schemes for the estimation of feature locations from a...depth from image sequences. International Journal of Computer Vision, 3, 1989. [68] S. J. Maybank. Filter based estimates of depth. In Proceedings of the

  16. Multimode nonlinear optical imaging of the dermis in ex vivo human skin based on the combination of multichannel mode and Lambda mode.

    PubMed

    Zhuo, Shuangmu; Chen, Jianxin; Luo, Tianshu; Zou, Dingsong

    2006-08-21

    A multimode nonlinear optical imaging technique based on the combination of multichannel mode and Lambda mode is developed to investigate human dermis. Our findings show that this technique not only improves the image contrast of the structural proteins of the extracellular matrix (ECM) but also provides an image-guided spectral analysis method to identify both cellular and ECM intrinsic components, including collagen, elastin, NAD(P)H, and flavin. By the combined use of multichannel mode and Lambda mode in tandem, the obtained in-depth two-photon-excited fluorescence (TPEF) and second-harmonic generation (SHG) imaging, together with the depth-dependent decay of the TPEF/SHG signals, can offer a sensitive tool for obtaining quantitative tissue structural and biochemical information. These results suggest that the technique has the potential to provide more accurate information for determining tissue physiological and pathological states.

  17. Full ocular biometry through dual-depth whole-eye optical coherence tomography

    PubMed Central

    Kim, Hyung-Jin; Kim, Minji; Hyeon, Min Gyu; Choi, Youngwoon; Kim, Beop-Min

    2018-01-01

    We propose a new method of determining the optical axis (OA), pupillary axis (PA), and visual axis (VA) of the human eye by using dual-depth whole-eye optical coherence tomography (OCT). These axes, as well as the angles "α" between the OA and VA and "κ" between the PA and VA, are important in many ophthalmologic applications, especially refractive surgery. Whole-eye images are reconstructed from simultaneously acquired images of the anterior segment and retina. The light from a light source is split into two orthogonal polarization components for imaging the anterior segment and the retina, respectively. The OA and PA are identified from their geometric definitions using the anterior segment image only, while the VA is detected through accurate correlation between the two images. The feasibility of our approach was tested using a model eye and human subjects. PMID:29552378

  18. Use of laser range finders and range image analysis in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A study of the effect of filtering processes on range images and an evaluation of the performance of two different laser range mappers are proposed. Median filtering was utilized to remove noise from the range images. First- and second-order derivatives are then utilized to locate the similarities and dissimilarities between the processed and the original images. Range depth information is converted into spatial coordinates, and a set of coefficients describing 3-D objects is generated using the algorithm developed in the second phase of this research. Range images of spheres and cylinders are used for experimental purposes. An algorithm was developed to compare the performance of the two laser range mappers based upon the range depth information of the surfaces generated by each mapper. Furthermore, an approach based on 2-D analytic geometry is proposed to serve as a basis for the recognition of regular 3-D geometric objects.

  19. Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system

    NASA Astrophysics Data System (ADS)

    Zheng, Yipeng; Tan, Wenjiang; Si, Jinhai; Ren, YuHu; Xu, Shichao; Tong, Junyi; Hou, Xun

    2016-09-01

    We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.

  20. Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yipeng; Tan, Wenjiang, E-mail: tanwenjiang@mail.xjtu.edu.cn; Si, Jinhai

    2016-09-07

    We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.

  1. Digital tomosynthesis (DTS) with a Circular X-ray tube: Its image reconstruction based on total-variation minimization and the image characteristics

    NASA Astrophysics Data System (ADS)

    Park, Y. O.; Hong, D. K.; Cho, H. S.; Je, U. K.; Oh, J. E.; Lee, M. S.; Kim, H. J.; Lee, S. H.; Jang, W. S.; Cho, H. M.; Choi, S. I.; Koo, Y. S.

    2013-09-01

    In this paper, we introduce an effective imaging system for digital tomosynthesis (DTS) with a circular X-ray tube, the so-called circular-DTS (CDTS) system, and its image reconstruction algorithm based on the total-variation (TV) minimization method for low-dose, high-accuracy X-ray imaging. Here, the X-ray tube is equipped with a series of cathodes distributed around a rotating anode, and the detector remains stationary throughout the image acquisition. We considered a TV-based reconstruction algorithm that exploited the sparsity of the image with substantially high image accuracy. We implemented the algorithm for the CDTS geometry and successfully reconstructed images of high accuracy. The image characteristics were investigated quantitatively by using some figures of merit, including the universal-quality index (UQI) and the depth resolution. For selected tomographic angles of 20, 40, and 60°, the corresponding UQI values in the tomographic view were estimated to be about 0.94, 0.97, and 0.98, and the depth resolutions were about 4.6, 3.1, and 1.2 voxels in full width at half maximum (FWHM), respectively. We expect the proposed method to be applicable to developing a next-generation dental or breast X-ray imaging system.
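
    The universal-quality index (UQI) used as a figure of merit above is the Wang-Bovik index, Q = 4·cov(x,y)·x̄·ȳ / ((σx²+σy²)(x̄²+ȳ²)), which equals 1 only for identical images. A direct implementation of its standard global form (the paper may compute it over local windows):

    ```python
    import numpy as np

    def uqi(x, y):
        """Universal quality index (Wang & Bovik) between two images."""
        x = np.asarray(x, dtype=float).ravel()
        y = np.asarray(y, dtype=float).ravel()
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
    ```

    Any distortion (noise, blur, contrast change) pulls Q below 1, which is why it serves as a compact reconstruction-accuracy score.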

  2. Diffuse optical microscopy for quantification of depth-dependent epithelial backscattering in the cervix

    NASA Astrophysics Data System (ADS)

    Bodenschatz, Nico; Lam, Sylvia; Carraro, Anita; Korbelik, Jagoda; Miller, Dianne M.; McAlpine, Jessica N.; Lee, Marette; Kienle, Alwin; MacAulay, Calum

    2016-06-01

    A fiber optic imaging approach using structured illumination is presented for quantification of almost pure epithelial backscattering. We employ multiple spatially modulated projection patterns and camera-based reflectance capture to image depth-dependent epithelial scattering. The potential diagnostic value of our approach is investigated on cervical ex vivo tissue specimens. Our study indicates a strong backscattering increase in the upper part of the cervical epithelium caused by dysplastic microstructural changes. Quantification of relative depth-dependent backscattering is confirmed as a potentially useful diagnostic feature for detection of precancerous lesions in cervical squamous epithelium.
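
    Spatially modulated illumination of this kind is commonly demodulated from three sinusoidal patterns shifted by 120°. A sketch of that standard three-phase demodulation (the authors' exact processing may differ):

    ```python
    import numpy as np

    def demodulate(i1, i2, i3):
        """Recover the spatially modulated (AC) and planar (DC) reflectance
        from three projection patterns phase-shifted by 120 degrees."""
        ac = (np.sqrt(2.0) / 3.0) * np.sqrt((i1 - i2) ** 2 +
                                            (i2 - i3) ** 2 +
                                            (i3 - i1) ** 2)
        dc = (i1 + i2 + i3) / 3.0
        return ac, dc
    ```

    The AC component is preferentially sensitive to superficial (epithelial) scattering because high spatial frequencies do not survive deep diffuse propagation.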

  3. Thermal-depth matching in dynamic scene based on affine projection and feature registration

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Jia, Tong; Wu, Chengdong; Li, Yongqiang

    2018-03-01

    This paper studies the construction of a 3D temperature distribution reconstruction system based on depth and thermal infrared information. A traditional calibration method cannot be used directly, because the depth and thermal infrared cameras are not sensitive to a color calibration board; we therefore design a dedicated calibration board to calibrate the depth and thermal infrared cameras. A local feature descriptor for thermal and depth images is also proposed, and a belief propagation matching algorithm is investigated based on spatial affine transformation matching and local feature matching. The 3D temperature distribution model is built by matching the 3D point cloud with the 2D thermal infrared information. Experimental results show that the method accurately constructs the 3D temperature distribution model and has strong robustness.

  4. Total variation based image deconvolution for extended depth-of-field microscopy images

    NASA Astrophysics Data System (ADS)

    Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.

    2015-03-01

    One approach to a detailed understanding of dynamic cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with a lateral resolution in the range of a few hundred nanometers. In a previous study, extended depth-of-field microscopy (EDF microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was demonstrated in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of light. Hence the focal ellipsoid is smeared out and images initially appear blurred. Image restoration by deconvolution, using the known point-spread function (PSF) of the optical system, is necessary to achieve sharp microscopic images with an extended depth of field. This work focuses on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. The inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and because the PSF used for deconvolution exactly fits only one plane within the object. We use nonlinear total-variation-based image restoration techniques, in which the different types of noise can be treated properly. Various algorithms are evaluated on artificially generated 3D images as well as on fluorescence measurements of BPAE cells.
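
    To give a flavor of total-variation regularization, here is a minimal gradient-descent denoiser for the smoothed TV objective. Full deconvolution would also involve the PSF; this sketch handles only the TV term, and all parameters are illustrative, not the authors' settings:

    ```python
    import numpy as np

    def tv_denoise(img, lam=0.1, n_iter=200, step=0.2, eps=1e-3):
        """Gradient descent on the smoothed total-variation objective
        0.5*||u - img||^2 + lam * sum sqrt(|grad u|^2 + eps)."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
            gy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
            nx, ny = gx / mag, gy / mag                 # normalized gradient field
            # backward-difference divergence of the normal field
            div = (nx - np.roll(nx, 1, axis=1)) + (ny - np.roll(ny, 1, axis=0))
            u -= step * ((u - img) - lam * div)
        return u
    ```

    Because the penalty grows only linearly with gradient magnitude, flat-region noise is smoothed while sharp cell boundaries are largely preserved.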

  5. Optical cryptography with biometrics for multi-depth objects.

    PubMed

    Yan, Aimin; Wei, Yang; Hu, Zhijuan; Zhang, Jingtao; Tsang, Peter Wai Ming; Poon, Ting-Chung

    2017-10-11

    We propose an optical cryptosystem for encrypting images of multi-depth objects based on the combination of an optical heterodyne technique and fingerprint keys. Optical heterodyning requires two optical beams to be mixed. For encryption, each optical beam is modulated by an optical mask containing the fingerprint of either the person sending or the person receiving the image. The pair of optical masks is taken as the encryption keys. Subsequently, the two beams are used to scan over a multi-depth 3-D object to obtain an encrypted hologram. During decryption, each sectional image of the 3-D object is recovered by convolving its encrypted hologram (through numerical computation) with the encrypted hologram of a pinhole positioned at the same depth as the sectional image. Our proposed method has three major advantages. First, the lost-key situation can be avoided by using fingerprints as the encryption keys. Second, the method can be applied to encrypt 3-D images for subsequent decryption of sectional images. Third, since optical heterodyne scanning is employed to encrypt the 3-D object, the optical system is incoherent, resulting in a negligible amount of speckle noise upon decryption. To the best of our knowledge, this is the first time optical cryptography of 3-D object images has been demonstrated in an incoherent optical system with biometric keys.

  6. Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information

    DOEpatents

    Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert

    2015-12-08

    Methods of generating a three dimensional representation of an object in a reference plane from a depth map including distances from a reference point to pixels in an image of the object taken from a reference point. Weights are assigned to respective voxels in a three dimensional grid along rays extending from the reference point through the pixels in the image based on the distances in the depth map from the reference point to the respective pixels, and a height map including an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
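
    The core idea of the patent, collapsing a depth-derived 3-D representation into a 2-D height field over the reference plane, can be sketched very simply. This minimal version takes the per-column maximum height rather than the patent's weighted-voxel voting, and all names are illustrative:

    ```python
    import numpy as np

    def height_map_from_depth(points, cell, grid_shape):
        """Collapse a 3-D point cloud (e.g. back-projected from a depth
        map) into a 2-D height map over the reference plane.

        points     : (n, 3) x, y, z coordinates, z above the plane (z >= 0)
        cell       : edge length of a grid cell in the reference plane
        grid_shape : (nx, ny) cells of the output map
        """
        hmap = np.zeros(grid_shape)          # cells start at plane height 0
        ix = np.clip((points[:, 0] / cell).astype(int), 0, grid_shape[0] - 1)
        iy = np.clip((points[:, 1] / cell).astype(int), 0, grid_shape[1] - 1)
        np.maximum.at(hmap, (ix, iy), points[:, 2])
        return hmap
    ```

    Storing one height per cell instead of a full voxel grid is what makes the reconstruction memory-efficient.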

  7. Upper mantle structure across the Trans-European Suture Zone imaged by S-receiver functions

    NASA Astrophysics Data System (ADS)

    Knapmeyer-Endrun, Brigitte; Krüger, Frank; Geissler, Wolfram H.; Passeq Working Group

    2017-01-01

    We present a high-resolution study of the upper mantle structure of Central Europe, including the western part of the East European Platform, based on S-receiver functions of 345 stations. A distinct contrast is found between Phanerozoic Europe and the East European Craton across the Trans-European Suture Zone. To the west, a pronounced velocity reduction with depth interpreted as lithosphere-asthenosphere boundary (LAB) is found at an average depth of 90 km. Beneath the craton, no strong and continuous LAB conversion is observed. Instead we find a distinct velocity reduction within the lithosphere, at 80-120 km depth. This mid-lithospheric discontinuity (MLD) is attributed to a compositional boundary between depleted and more fertile lithosphere created by late Proterozoic metasomatism. A potential LAB phase beneath the craton is very weak and varies in depth between 180 and 250 km, consistent with a reduced velocity contrast between the lower lithosphere and the asthenosphere. Within the Trans-European Suture Zone, lithospheric structure is characterized by strong heterogeneity. A dipping or step-wise increase to LAB depth of 150 km is imaged from Phanerozoic Europe to 20-22° E, whereas no direct connection to the cratonic LAB or MLD to the east is apparent. At larger depths, a positive conversion associated with the lower boundary of the asthenosphere is imaged at 210-250 km depth beneath Phanerozoic Europe, continuing down to 300 km depth beneath the craton. Conversions from both 410 km and 660 km discontinuities are found at their nominal depth beneath Phanerozoic Europe, and the discontinuity at 410 km depth can also be traced into the craton. A potential negative conversion on top of the 410 km discontinuity found in migrated images is analyzed by modeling and attributed to interference with other converted phases.

  8. Optimising probe holder design for sentinel lymph node imaging using clinical photoacoustic system with Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sivasubramanian, Kathyayini; Periyasamy, Vijitha; Wen, Kew Kok; Pramanik, Manojit

    2017-03-01

    Photoacoustic tomography is a hybrid imaging modality that combines optical and ultrasound imaging, and it is rapidly gaining attention in the field of medical imaging. The challenge is to translate it into a clinical setup. In this work, we report the development of a handheld clinical photoacoustic imaging system: a clinical ultrasound imaging system is modified to integrate photoacoustic imaging, with light delivery integrated into the ultrasound probe. The angle of light delivery is optimized with respect to imaging depth, based on Monte Carlo simulation of light transport in tissue. Based on the simulation results, probe holders were fabricated using 3D printing, and similar results were obtained experimentally using phantoms developed to mimic the sentinel lymph node imaging scenario. In vivo sentinel lymph node imaging was also performed with the same system, using the contrast agent methylene blue, up to a depth of 1.5 cm. The results validate that Monte Carlo simulation can be used as a tool to optimize probe holder design for particular imaging needs, eliminating the trial-and-error approach generally used to design a probe holder.
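
    Monte Carlo light transport of the kind used above tracks photons taking exponentially distributed steps and losing weight to absorption. A deliberately minimal sketch (isotropic scattering, no anisotropy factor or refractive-index boundaries, unlike full MCML-style codes; all names and parameters are illustrative):

    ```python
    import numpy as np

    def photon_depths(n_photons, mu_a, mu_s, rng=None):
        """Minimal photon random walk in a semi-infinite turbid medium.

        Photons launched downward at the origin take exponential steps
        (mu_t = mu_a + mu_s), scatter isotropically, and lose weight by
        the albedo mu_s/mu_t per interaction; returns each photon's
        maximum reached depth as a crude proxy for sampling depth.
        """
        rng = np.random.default_rng(rng)
        mu_t = mu_a + mu_s
        depths = np.empty(n_photons)
        for i in range(n_photons):
            pos = np.zeros(3)
            direction = np.array([0.0, 0.0, 1.0])   # +z points into tissue
            weight, zmax = 1.0, 0.0
            while weight > 1e-3 and pos[2] >= 0.0:
                pos = pos + direction * rng.exponential(1.0 / mu_t)
                zmax = max(zmax, pos[2])
                weight *= mu_s / mu_t               # fraction mu_a/mu_t absorbed
                # isotropic scattering: uniform direction on the sphere
                v = rng.standard_normal(3)
                direction = v / np.linalg.norm(v)
            depths[i] = zmax
        return depths
    ```

    Running such simulations for different illumination angles is what lets the fluence at a target depth be compared before any holder is printed.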

  9. Electrical resistivity imaging in transmission between surface and underground tunnel for fault characterization

    NASA Astrophysics Data System (ADS)

    Lesparre, N.; Boyle, A.; Grychtol, B.; Cabrera, J.; Marteau, J.; Adler, A.

    2016-05-01

    Electrical resistivity images supply information on subsurface structures and are classically used to characterize fault geometry. Here we use the presence of a tunnel intersecting a regional fault to inject electrical currents between the surface and the tunnel, improving image resolution at depth. We apply an original methodology for defining the inversion parametrization based on pilot points to better deal with the heterogeneous sounding of the medium. An enlarged region of high spatial resolution is shown by analysis of point spread functions as well as by inversion of synthetic data. These evaluations highlight the advantage of transmission measurements: transferring a few electrodes from the main profile increases the sounding depth. Based on the resulting image, we propose a revised structure for the medium surrounding the Cernon fault, supported by geological observations and muon flux measurements.

  10. Demonstration of a plenoptic microscope based on laser optical feedback imaging.

    PubMed

    Glastre, Wilfried; Hugon, Olivier; Jacquin, Olivier; Guillet de Chatellus, Hugues; Lacot, Eric

    2013-03-25

    A new kind of plenoptic imaging system based on Laser Optical Feedback Imaging (LOFI) is presented and compared to a previously existing device based on a microlens array. Improved photometric performance, resolution, and depth of field are obtained at the price of slow point-by-point scanning. The main properties of plenoptic microscopes, such as numerical refocusing on any curved surface and aberration compensation, are demonstrated both theoretically and experimentally with a LOFI-based device.

  11. Observation of wave celerity evolution in the nearshore using digital video imagery

    NASA Astrophysics Data System (ADS)

    Yoo, J.; Fritz, H. M.; Haas, K. A.; Work, P. A.; Barnes, C. F.; Cho, Y.

    2008-12-01

    The celerity of incident waves in the nearshore is observed from oblique video imagery collected at Myrtle Beach, S.C. The video camera covers a field of view with length scales of O(100) m. The celerity of waves propagating in shallow water, including the surf zone, is estimated by applying advanced image processing and analysis methods to individual video frames sampled at 3 Hz. Original image sequences are processed through frame differencing and directional low-pass filtering to reduce the noise arising from foam in the surf zone. The breaking-wave celerity is computed along a cross-shore transect from wave crest tracks extracted by a Radon-transform-based line detection method. The celerity observed from the nearshore video imagery is larger than the linear wave celerity computed from the measured water depths over the entire surf zone. Compared to the celerity computed from the nonlinear shallow water wave equations (NSWE) using the measured depths and wave heights, the video-based celerity generally shows good agreement over the surf zone, except in the regions around the incipient wave-breaking locations. In those regions the observed wave celerity exceeds even the NSWE-based celerity, due to the transition of wave crest shapes. The celerity observed from video imagery can be used to monitor nearshore geometry through depth inversion based on nonlinear wave celerity theories; for this purpose, the excess celerity around the breaker points needs to be corrected relative to the nonlinear wave celerity theory applied.
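
    The depth inversion mentioned above follows from shallow-water wave theory: linear theory gives c = sqrt(g*h), so h = c^2/g, and a simple finite-amplitude correction of the form c = sqrt(g*(h + H)) subtracts the wave height. A minimal sketch (this particular correction is an assumption for illustration; the study's chosen nonlinear celerity theory may differ):

    ```python
    GRAV = 9.81  # gravitational acceleration, m/s^2

    def depth_from_celerity(c, wave_height=0.0):
        """Invert an observed phase speed c (m/s) for water depth (m).

        Linear shallow-water theory: c = sqrt(g*h) -> h = c**2 / g.
        With the amplitude correction c = sqrt(g*(h + H)) the wave
        height H is subtracted from the linear estimate.
        """
        return c ** 2 / GRAV - wave_height
    ```

    This is why uncorrected breaker-zone celerities, which exceed the linear value, would bias inverted depths deep.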

  12. Non-Parametric Blur Map Regression for Depth of Field Extension.

    PubMed

    D'Andres, Laurent; Salvador, Jordi; Kochale, Axel; Susstrunk, Sabine

    2016-04-01

    Real camera systems have a limited depth of field (DOF) which may cause an image to be degraded due to visible misfocus or too shallow DOF. In this paper, we present a blind deblurring pipeline able to restore such images by slightly extending their DOF and recovering sharpness in regions slightly out of focus. To address this severely ill-posed problem, our algorithm relies first on the estimation of the spatially varying defocus blur. Drawing on local frequency image features, a machine learning approach based on the recently introduced regression tree fields is used to train a model able to regress a coherent defocus blur map of the image, labeling each pixel by the scale of a defocus point spread function. A non-blind spatially varying deblurring algorithm is then used to properly extend the DOF of the image. The good performance of our algorithm is assessed both quantitatively, using realistic ground truth data obtained with a novel approach based on a plenoptic camera, and qualitatively with real images.

  13. Stereo matching algorithm based on double components model

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Ou, Kejun; Zhao, Jianxin; Mou, Xingang

    2018-03-01

    Thin wires are a great threat to the safety of UAV flight. They occupy only a few pixels, isolated far from the background, while most existing stereo matching methods require a support region of a certain area to improve robustness, or assume depth dependence between neighboring pixels to meet the requirements of global or semi-global optimization. As a result there can be false alarms or even failures when images contain thin wires. A new stereo matching algorithm based on a double-component model is proposed in this paper. According to texture type, the input image is decomposed into two independent component images: one contains only the sparse wire texture and the other contains all remaining parts. Different matching schemes are adopted for each pair of component images. Experiments show that the algorithm can effectively calculate the depth image of complex scenes for a patrol UAV, detecting thin wires as well as large objects. Compared with current mainstream methods it has obvious advantages.
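
    Whatever per-component scheme the authors adopt, the baseline of local stereo matching is winner-take-all block matching over a disparity range. A compact SAD sketch for rectified image pairs (illustrative only, not the paper's method):

    ```python
    import numpy as np

    def block_match(left, right, max_disp, half=2):
        """Winner-take-all stereo matching between rectified images.

        For each pixel the disparity minimising the sum of absolute
        differences (SAD) over a (2*half+1)^2 window is selected.
        """
        h, w = left.shape
        disp = np.zeros((h, w), dtype=int)
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1]
                best, best_d = np.inf, 0
                for d in range(min(max_disp, x - half) + 1):
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1]
                    cost = np.abs(patch - cand).sum()
                    if cost < best:
                        best, best_d = cost, d
                disp[y, x] = best_d
        return disp
    ```

    The window dependence in this baseline is exactly what fails on few-pixel wires, motivating the paper's decomposition.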

  14. High-speed image processing system and its micro-optics application

    NASA Astrophysics Data System (ADS)

    Ohba, Kohtaro; Ortega, Jesus C. P.; Tanikawa, Tamio; Tanie, Kazuo; Tajima, Kenji; Nagai, Hiroshi; Tsuji, Masataka; Yamada, Shigeru

    2003-07-01

    In this paper, a new application of high-speed photography, an observational system for tele-micro-operation, is proposed, combining a dynamic focusing system and a high-speed image processing system using the "depth from focus (DFF)" criterion. In micro-operation, such as microsurgery or DNA manipulation, the small depth of focus of the microscope hinders observation. For example, if the focus is on the object, the actuator cannot be seen through the microscope; if the focus is on the actuator, the object cannot be observed. In this sense, the "all-in-focus image," which keeps in-focus texture over the whole image, is useful for observing microenvironments under the microscope. It is also important to obtain the "depth map," which shows the 3D micro virtual environment in real time so that micro objects can be actuated intuitively. To realize real-time micro-operation with the DFF criterion, which has to integrate several images to obtain the all-in-focus image and the depth map, an image capture and processing system running at no less than 240 frames per second is required. This paper first briefly reviews the depth-from-focus criterion for achieving the all-in-focus image and the 3D microenvironment reconstruction simultaneously. After discussing the problems of our previous system, a new frame-rate system is constructed with a high-speed video camera and FPGA hardware running at 240 frames per second. To apply this system to a real microscope, a new "ghost filtering" technique for reconstructing the all-in-focus image is proposed. Finally, micro observations show the validity of the system.
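
    The depth-from-focus criterion can be sketched with a generic sharpness measure: score every pixel in every focal slice, take the sharpest slice index as depth, and compose the all-in-focus image from the winners. This minimal version uses a Laplacian magnitude with wraparound boundaries (a common choice, not necessarily the paper's measure):

    ```python
    import numpy as np

    def all_in_focus(stack):
        """Depth from focus over a focal stack of shape (n_slices, h, w).

        Returns a per-pixel depth index (sharpest slice) and the
        all-in-focus image composed from those slices.
        """
        n, h, w = stack.shape
        sharp = np.zeros_like(stack)
        for k in range(n):
            s = stack[k]
            # 4-neighbour Laplacian as a local sharpness measure
            lap = (-4 * s + np.roll(s, 1, 0) + np.roll(s, -1, 0)
                   + np.roll(s, 1, 1) + np.roll(s, -1, 1))
            sharp[k] = np.abs(lap)
        depth = np.argmax(sharp, axis=0)
        aif = np.take_along_axis(stack, depth[None], axis=0)[0]
        return depth, aif
    ```

    Integrating several slices per output frame is what drives the 240 frames-per-second capture requirement discussed above.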

  15. Adding polarimetric imaging to depth map using improved light field camera 2.0 structure

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Yang, Yi; Du, Shaojun; Cao, Yu

    2017-06-01

    Polarization imaging plays an important role in various fields, especially skylight navigation and target identification, whose imaging systems are usually required to have high resolution, broad band, and a single-lens structure. This paper describes such an imaging system based on the light field camera 2.0 structure, which can compute the polarization state and the depth from a reference plane for every object point within a single shot. The structure, comprising a modified main lens, a multi-quadrant polarizer, a honeycomb-like micro-lens array, and a high-resolution CCD, is equivalent to an "eye array" with three or more polarization "glasses" in front of each "eye". Depth can therefore be calculated by matching the relative offsets of corresponding patches on neighboring "eyes", and the polarization state from their relative intensity differences, with the two resolutions approximately equal to each other. An application to navigation under clear sky shows that the method has high accuracy and strong robustness.

  16. Salient object detection based on multi-scale contrast.

    PubMed

    Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long

    2018-05-01

    Due to the development of deep learning networks, salient object detection based on deep networks, which are used to extract features, has made a great breakthrough compared to traditional methods. At present, salient object detection mainly relies on very deep convolutional networks for feature extraction, yet a dramatic increase in network depth may instead cause more training errors. In this paper, we use a residual network to increase network depth while simultaneously mitigating the errors caused by that increase. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by region assimilation on the basis of super-pixels, in order to reduce image complexity and improve the accuracy of salient target detection. We refine features at the pixel level with a multi-scale feature correction method to avoid the feature errors introduced when the image is simplified at the region level. The final fully connected layer not only integrates multi-scale and multi-level features but also acts as the classifier of salient targets. Experimental results show that the proposed model achieves better results than other salient object detection models based on plain deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. To do so, multiple depth cameras are utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed from the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point clouds captured by the individual depth cameras are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized model according to the angular step of the anamorphic optic system. The theory has been verified experimentally, showing that the proposed 360-degree integral-floating display is an excellent way to present a real object in the 360-degree viewing zone.
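    Combining clouds from several depth cameras, as described above, starts with transforming each cloud into a common world frame. A minimal sketch assuming the camera-to-world extrinsics (rotation R, translation t) are already known; the abstract's "special point cloud registration method" for refining the alignment is not reproduced here:

```python
def transform(points, R, t):
    """Apply the rigid transform p' = R p + t to a list of 3D points."""
    return [tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))
            for p in points]

def merge_clouds(clouds, extrinsics):
    """clouds: list of point lists, one per depth camera.
    extrinsics: matching list of (R, t) camera-to-world transforms.
    Returns one combined point cloud in the world frame."""
    merged = []
    for pts, (R, t) in zip(clouds, extrinsics):
        merged.extend(transform(pts, R, t))
    return merged
```

    In practice the extrinsics would come from calibration or from the registration step, after which the merged cloud feeds the elemental-image generation.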

  18. Mapping the opacity of paint layers in paintings with coloured grounds using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liu, Ping; Hall-Aquitania, Moorea; Hermens, Erma; Groves, Roger M.

    2017-07-01

    Optical diagnostic techniques are becoming important for technical art history (TAH) as well as for heritage conservation. In recent years, optical coherence tomography (OCT) has increasingly been used as a novel technique for the inspection of artwork, revealing the stratigraphy of paintings, and it has also proven to be an effective tool for varnish layer inspection. OCT is a contactless and non-destructive technique for microstructural imaging of turbid media, originally developed for medical applications. However, current OCT instruments have difficulty with paint layer inspection due to the opacity of most pigments. This paper explores the potential of OCT for the investigation of paintings with coloured grounds. Depth scans were processed to determine the light penetration depth at the operating wavelength based on a 1/e light attenuation calculation. The variation in paint opacity was mapped from the microstructural images, and 3D penetration depth profiles were calculated and related back to the construction of the artwork. By determining the light penetration depth over a range of wavelengths, the 3D depth perception of a painting with coloured grounds can be characterized optically.
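    The 1/e attenuation criterion mentioned above takes, for each depth scan (A-scan), the first depth at which the detected intensity has fallen to 1/e of its value at the paint surface. A minimal sketch (uniform depth sampling and a simple threshold crossing are assumptions for illustration):

```python
import math

def penetration_depth(a_scan, dz):
    """a_scan: intensity values at equal depth steps dz, starting at the surface.
    Returns the first depth where intensity drops to 1/e of the surface value,
    or None if it never does within the scan."""
    threshold = a_scan[0] / math.e
    for i, intensity in enumerate(a_scan):
        if intensity <= threshold:
            return i * dz
    return None
```

    Mapping this value across all A-scans of a B-scan or volume gives the 2D/3D penetration depth profile described in the abstract.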

  19. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation

    PubMed Central

    Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei

    2017-01-01

    Depth image-based rendering (DIBR), which renders virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D-to-3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR inevitably leads to holes in the resulting 3D image where newly exposed areas appear. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform with its low complexity and high efficiency. First, the framework integrates hybrid constraints, including scene structure, edge consistency and visual saliency information, in the transformed domain to implicitly improve the performance of depth map preprocessing. Then, adaptive smoothing localization is incorporated into the framework to further reduce over-smoothing and improve optimization in non-hole regions. Unlike other similar methods, the proposed method simultaneously achieves hole filling, edge correction and local smoothing for typical depth maps within a unified framework. Thanks to these advantages, it yields visually satisfactory results with less computational complexity for high-quality 2D-to-3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method. PMID:28407027
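    The domain transform underlying such frameworks can be illustrated in one dimension: neighbor distances are inflated wherever the guide signal has an edge, and a recursive filter with feedback weight a^d then smooths freely within regions but barely across edges. A hedged sketch of the basic recursive form only (the paper's structure, saliency and adaptive-localization constraints are not modeled; sigma_s and sigma_r are illustrative parameters):

```python
import math

def domain_transform_smooth(signal, guide, sigma_s=10.0, sigma_r=0.5):
    """1D edge-preserving smoothing via the domain transform (recursive form).
    signal: values to smooth (e.g. a depth scanline); guide: edges to preserve
    (e.g. the color scanline). Large guide jumps shrink the feedback weight,
    so smoothing does not cross edges."""
    a = math.exp(-math.sqrt(2.0) / sigma_s)
    # Domain-transform distance between neighbors: large where guide has an edge.
    d = [1.0 + (sigma_s / sigma_r) * abs(guide[i] - guide[i - 1])
         for i in range(1, len(signal))]
    out = list(signal)
    for i in range(1, len(out)):              # forward pass
        w = a ** d[i - 1]
        out[i] = (1 - w) * out[i] + w * out[i - 1]
    for i in range(len(out) - 2, -1, -1):     # backward pass
        w = a ** d[i]
        out[i] = (1 - w) * out[i] + w * out[i + 1]
    return out
```

    Applied row-wise and column-wise to a depth map with the color image as guide, this gives the low-complexity smoothing the framework builds on.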

  20. Functional imaging and assessment of the glucose diffusion rate in epithelial tissues in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Larin, K. V.; Tuchin, V. V.

    2008-06-01

    Functional imaging, monitoring and quantitative description of glucose diffusion in epithelial and underlying stromal tissues in vivo, and control of the optical properties of tissues, are extremely important for many biomedical applications, including the development of noninvasive or minimally invasive glucose sensors as well as the therapy and diagnostics of various diseases, such as cancer, diabetic retinopathy, and glaucoma. Recent progress in the development of a noninvasive molecular diffusion biosensor based on optical coherence tomography (OCT) is described. The diffusion of glucose was studied in several epithelial tissues both in vitro and in vivo. Because OCT provides depth-resolved imaging of tissues with high axial resolution, the glucose diffusion is described not only as a function of time but also as a function of depth.

  1. Sampling strategies to improve passive optical remote sensing of river bathymetry

    USGS Publications Warehouse

    Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.

    2018-01-01

    Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth (d_max) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that d_max was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of d_max consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than d_max, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
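    The OBRA calibration described above can be sketched as an ordinary least-squares fit of depth against X = ln(b1/b2), refit repeatedly while excluding observations beyond a series of cutoff depths; the cutoff beyond which the fit stops improving hints at the maximum detectable depth. A minimal sketch (the real framework searches over all band pairs; a single, precomputed band ratio is assumed here):

```python
import math

def obra_fit(ratios, depths):
    """Least-squares fit depth = a + b * X with X = ln(band1/band2).
    Returns (a, b, r2)."""
    xs = [math.log(r) for r in ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(depths) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, depths))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, depths))
    ss_tot = sum((y - my) ** 2 for y in depths)
    return a, b, 1.0 - ss_res / ss_tot

def truncated_fits(ratios, depths, cutoffs):
    """Refit using only observations at or above each cutoff depth and report
    (cutoff, r2); degradation past a cutoff suggests d_max has been exceeded."""
    out = []
    for c in cutoffs:
        keep = [(r, d) for r, d in zip(ratios, depths) if d <= c]
        rs, ds = zip(*keep)
        out.append((c, obra_fit(list(rs), list(ds))[2]))
    return out
```

    With synthetic data whose band ratio saturates below 3 m, the fit restricted to depths up to 3 m is near-perfect while the full fit degrades, mimicking the paper's progressive-truncation signal.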

  2. Spatio-thermal depth correction of RGB-D sensors based on Gaussian processes in real-time

    NASA Astrophysics Data System (ADS)

    Heindl, Christoph; Pönitz, Thomas; Stübl, Gernot; Pichler, Andreas; Scharinger, Josef

    2018-04-01

    Commodity RGB-D sensors capture color images along with dense pixel-wise depth information in real time. Typical RGB-D sensors come with a factory calibration and exhibit erratic depth readings due to coarse calibration values, ageing and thermal effects. This limits their applicability in computer vision and robotics. We propose a novel method to calibrate depth accurately, considering spatial and thermal influences jointly. Our work is based on Gaussian process regression over a four-dimensional Cartesian and thermal domain. We propose to leverage modern GPUs for dense depth map correction in real time. For reproducibility, we make our dataset and source code publicly available.

  3. Synthesized view comparison method for no-reference 3D image quality assessment

    NASA Astrophysics Data System (ADS)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric uses intermediate virtual views warped from the left and right views by a depth-image-based rendering (DIBR) algorithm, and compares the differences between the virtual views rendered from the different cameras using Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.

  4. Lip boundary detection techniques using color and depth information

    NASA Astrophysics Data System (ADS)

    Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek

    2002-01-01

    This paper presents our approach to using a stereo camera to obtain 3-D image data that improves existing lip boundary detection techniques. We show that the depth information provided by our approach can significantly improve boundary detection systems. Our system detects the face and mouth area in the image using color, geometric location, and additional depth information for the face. Initially, color and depth information are used to localize the face. We then determine the lip region from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Using color information alone is not robust, because the quality of the results varies with lighting conditions, background, and skin tone. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information, together with color information, provides more accurate lip boundary detection than color-only techniques.

  5. Time-of-flight camera via a single-pixel correlation image sensor

    NASA Astrophysics Data System (ADS)

    Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua

    2018-04-01

    A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the ‘four bucket principle’ are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement in the reconstructions.
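    The ‘four bucket principle’ referred to above samples the correlation between emitted and received modulation at four phase offsets (0°, 90°, 180°, 270°); constant ambient light cancels in the pairwise differences, and the phase, hence depth, follows from an arctangent. A minimal sketch (the bucket ordering convention is an assumption; the paper's compressed-sensing reconstruction is not shown):

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def four_bucket_depth(c0, c1, c2, c3, f_mod):
    """Recover depth from four correlation samples taken at phase offsets
    0, 90, 180 and 270 degrees. Constant ambient offsets cancel in the
    differences (c3 - c1) and (c0 - c2)."""
    phase = math.atan2(c3 - c1, c0 - c2)      # in (-pi, pi]
    if phase < 0:
        phase += 2 * math.pi                  # unwrap to [0, 2*pi)
    return C_LIGHT * phase / (4 * math.pi * f_mod)
```

    Note the usual ambiguity: depths beyond c / (2 f_mod) wrap around, which is why ToF cameras choose the modulation frequency to match the intended range.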

  6. Enhancing depth of focus in tilted microfluidics channels by digital holography.

    PubMed

    Matrecano, Marcella; Paturzo, Melania; Finizio, Andrea; Ferraro, Pietro

    2013-03-15

    In this Letter we propose a method to enhance the limited depth of field (DOF) in optical imaging systems, through digital holography. The proposed approach is based on the introduction of a cubic phase plate into the diffraction integral, analogous to what occurs in white-light imaging systems. By this approach we show that it is possible to improve the DOF and to recover the extended focus image of a tilted object in a single reconstruction step. Moreover, we demonstrate the possibility of obtaining well-focused biological cells flowing into a tilted microfluidic channel.

  7. Three-dimensional optoacoustic mesoscopy of the tumor heterogeneity in vivo using high depth-to-resolution multispectral optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Li, Jiao; Zhang, Songhe; Chekkoury, Andrei; Glasl, Sarah; Vetschera, Paul; Koberstein-Schwarz, Benno; Omar, Murad; Ntziachristos, Vasilis

    2017-03-01

    Multispectral optoacoustic mesoscopy (MSOM) has recently been introduced for cancer imaging; it has the potential for high-resolution imaging of cancer development in vivo at depths beyond the diffusion limit. Based on spectral features, optoacoustic imaging can visualize angiogenesis and image the heterogeneity of malignant tumors through endogenous hemoglobin. However, high-resolution structural and functional imaging of a whole tumor mass is limited by modest penetration and image quality, due to the insufficient capability of ultrasound detectors and the two-dimensional scan geometry. In this study, we introduce a novel MSOM for imaging subcutaneous or orthotopic tumors implanted in lab mice, using a high-frequency ultrasound linear array and a conical scanning geometry. Detailed volumetric images of vasculature and tissue oxygen saturation in entire tumors are obtained in vivo at depths up to 10 mm, with spatial resolutions approaching 70 μm. This performance enables the visualization of vasculature morphology and hypoxia, which has been verified with ex vivo studies. These findings demonstrate the potential of MSOM for preclinical oncological studies of deep solid tumors, facilitating the characterization of tumor angiogenesis and the evaluation of treatment strategies.

  8. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

    This paper presents an efficient framework for static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures by their feature descriptions, generated frame by frame for each gesture of the alphabet. The recognition algorithm takes a video sequence (a sequence of frames) as input for labeling, matches each frame with a gesture from the database, or decides that no suitable gesture exists in the database. First, each frame of the video sequence is classified separately, without inter-frame information. Then, a run of consecutive frames labeled with the same gesture is grouped into a single static gesture. We propose a combined segmentation of the frame using the depth map and the RGB image. The primary segmentation is based on the depth map; it provides position information and a rough border of the hands. The border is then refined using the color image, and the shape of the hand is analyzed. A continuous skeleton method is used to generate features. We propose a method based on the terminal branches of the skeleton, which makes it possible to determine the positions of the fingers and the wrist. The classification features of a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the example of the American Sign Language. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture database consisting of 2700 frames.

  9. Use of kurtosis for locating deep blood vessels in raw speckle imaging using a homogeneity representation.

    PubMed

    Peregrina-Barreto, Hayde; Perez-Corona, Elizabeth; Rangel-Magdaleno, Jose; Ramos-Garcia, Ruben; Chiu, Roger; Ramirez-San-Juan, Julio C

    2017-06-01

    Visualization of deep blood vessels in speckle images is an important task, as it is used to analyze the dynamics of blood flow and the health status of biological tissue. Laser speckle imaging is a wide-field optical technique that measures relative blood flow speed based on local speckle contrast analysis. However, it has been reported that this technique is limited to blood vessels at moderate depth (about 300 μm) because of the high scattering of the sample; beyond this depth, the quality of the vessel image decreases. The use of a representation based on homogeneity values, computed from the co-occurrence matrix, is proposed, as it provides improved vessel definition and the corresponding diameter. Moreover, a methodology for automatic blood vessel location based on kurtosis analysis is proposed. Results obtained from different skin phantoms show that it is possible to identify the vessel region for different morphologies, even up to 900 μm in depth.
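    For reference, the local speckle contrast on which laser speckle imaging is based is simply K = σ/⟨I⟩ over a small sliding window; lower K indicates more blurring of the speckle pattern and hence faster flow. A minimal sketch (the window size and nested-list image layout are illustrative):

```python
import math

def local_contrast(img, win=3):
    """Speckle contrast K = std / mean over a win x win sliding window.
    Borders are left at 0; lower K indicates more speckle blurring (flow)."""
    h, w, r = len(img), len(img[0]), win // 2
    K = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            vals = [img[y + dy][x + dx]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            K[y][x] = math.sqrt(var) / mean if mean else 0.0
    return K
```

    The homogeneity/kurtosis representation proposed in the paper operates on such raw speckle data rather than on the contrast map itself.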

  10. Generating porosity spectrum of carbonate reservoirs using ultrasonic imaging log

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Nie, Xin; Xiao, Suyun; Zhang, Chong; Zhang, Chaomo; Zhang, Zhansong

    2018-03-01

    Imaging logging tools provide an image of the borehole wall. Micro-resistivity imaging logging has been used to obtain a borehole porosity spectrum; however, resistivity imaging cannot cover the whole borehole wall. In this paper, we propose a method to calculate the porosity spectrum from ultrasonic imaging logging data. Based on the amplitude attenuation equation, we analyze the factors affecting wave propagation in the drilling fluid and the formation, and, based on the bulk-volume rock model, the Wyllie equation and the Raymer equation, we establish several conversion models between the reflection coefficient β and the porosity ϕ. We then use ultrasonic imaging logging and conventional wireline logging data to calculate the near-borehole formation porosity distribution spectrum. The porosity spectrum obtained from ultrasonic imaging data is compared with that from micro-resistivity imaging data; the two are similar, with discrepancies caused by differences in borehole coverage and data input. We separate porosity types by threshold segmentation and generate porosity-depth distribution curves by counting at equal depth spacing on the porosity image. Field results are good and demonstrate the efficiency of the method.

  11. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.

  12. Evaluation of multi-resolution satellite sensors for assessing water quality and bottom depth of Lake Garda.

    PubMed

    Giardino, Claudia; Bresciani, Mariano; Cazzaniga, Ilaria; Schenk, Karin; Rieger, Patrizia; Braga, Federica; Matta, Erica; Brando, Vittorio E

    2014-12-15

    In this study we evaluate the capabilities of three satellite sensors for assessing water composition and bottom depth in Lake Garda, Italy. A consistent physics-based processing chain was applied to Moderate Resolution Imaging Spectroradiometer (MODIS), Landsat-8 Operational Land Imager (OLI) and RapidEye. Images gathered on 10 June 2014 were corrected for the atmospheric effects with the 6SV code. The computed remote sensing reflectance (Rrs) from MODIS and OLI were converted into water quality parameters by adopting a spectral inversion procedure based on a bio-optical model calibrated with optical properties of the lake. The same spectral inversion procedure was applied to RapidEye and to OLI data to map bottom depth. In situ measurements of Rrs and of concentrations of water quality parameters collected in five locations were used to evaluate the models. The bottom depth maps from OLI and RapidEye showed similar gradients up to 7 m (r = 0.72). The results indicate that: (1) the spatial and radiometric resolutions of OLI enabled mapping water constituents and bottom properties; (2) MODIS was appropriate for assessing water quality in the pelagic areas at a coarser spatial resolution; and (3) RapidEye had the capability to retrieve bottom depth at high spatial resolution. Future work should evaluate the performance of the three sensors in different bio-optical conditions.

  13. Imaging rifting at the lithospheric scale in the northern East African Rift using S-to-P receiver functions

    NASA Astrophysics Data System (ADS)

    Lavayssiere, A.; Rychert, C.; Harmon, N.; Keir, D.; Hammond, J. O. S.; Kendall, J. M.; Leroy, S. D.; Doubre, C.

    2017-12-01

    The lithosphere is modified during rifting by a combination of mechanical stretching, heating and potentially partial melt. We image the crust and upper mantle discontinuity structure beneath the northern East African Rift System (EARS), a unique tectonically active continental rift exposing along strike the transition from continental rifting in the Main Ethiopian rift (MER) to incipient seafloor spreading in Afar and the Red Sea. S-to-P receiver functions from 182 stations across the northern EARS were generated from 3688 high quality waveforms using a multitaper technique and then migrated to depth using a regional velocity model. Waveform modelling of data stacked in large conversion point bins confirms the depth and strength of imaged discontinuities. We image the Moho at 29.6±4.7 km depth beneath the Ethiopian plateaux with a variability in depth that is possibly due to lower crustal intrusions. The crust is 27.3±3.9 km thick in the MER and thinner in northern Afar, 17.5±0.7 km. The model requires a 3±1.2% reduction in shear velocity with increasing depth at 68.5±1.5 km beneath the Ethiopian plateaux, consistent with the lithosphere-asthenosphere boundary (LAB). We do not resolve a LAB beneath Afar and the MER. This is likely associated with partial melt near the base of the lithosphere, reducing the velocity contrast between the melt-intruded lithosphere and the partially molten asthenosphere. We identify a 4.5±0.7% increase in velocity with depth at 91±3 km beneath the MER. This change in velocity is consistent with the onset of melting found by previous receiver functions and petrology studies. Our results provide independent constraints on the depth of melt production in the asthenosphere and suggest melt percolation through the base of the lithosphere beneath the northernmost East African rift.

  14. Regional fringe analysis for improving depth measurement in phase-shifting fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Chien, Kuang-Che Chang; Tu, Han-Yen; Hsieh, Ching-Huang; Cheng, Chau-Jern; Chang, Chun-Yen

    2018-01-01

    This study proposes a regional fringe analysis (RFA) method to detect the regions of a target object in captured shifted images to improve depth measurement in phase-shifting fringe projection profilometry (PS-FPP). In the RFA method, region-based segmentation is exploited to segment the de-fringed image of a target object, and a multi-level fuzzy-based classification with five presented features is used to analyze and discriminate the regions of an object from the segmented regions, which were associated with explicit fringe information. Then, in the experiment, the performance of the proposed method is tested and evaluated on 26 test cases made of five types of materials. The qualitative and quantitative results demonstrate that the proposed RFA method can effectively detect the desired regions of an object to improve depth measurement in the PS-FPP system.

  15. Real-time handling of existing content sources on a multi-layer display

    NASA Astrophysics Data System (ADS)

    Singh, Darryl S. K.; Shin, Jung

    2013-03-01

    A Multi-Layer Display (MLD) consists of two or more imaging planes separated by physical depth, where that depth is a key component in creating a glasses-free 3D effect. Its core benefits include being viewable from multiple angles and offering full panel resolution for 3D effects without the side effects of nausea or eye strain. Typically, however, content must be designed for its optical configuration as foreground and background image pairs. A process was designed to produce a consistent 3D effect on a 2-layer MLD from existing stereo video content in real time. Optimizations to stereo matching algorithms that generate depth maps in real time were tailored specifically to the optical characteristics and image processing algorithms of an MLD. The end-to-end process included improvements to the Hierarchical Belief Propagation (HBP) stereo matching algorithm, to optical flow, and to temporal consistency. Imaging algorithms designed for the optical characteristics of an MLD provided some visual compensation for depth map inaccuracies. The result can be demonstrated in a PC environment on a 22" MLD, as used in the casino slot market, with 8 mm of panel separation. Prior to this development, stereo content had not been used to achieve a depth-based 3D effect on an MLD in real time.

  16. Detailed Image of the Subducting Plate and Upper mantle Seismic Discontinuities in the Mariana Subduction Zone

    NASA Astrophysics Data System (ADS)

    Tibi, R.; Wiens, D. A.; Shiobara, H.; Sugioka, H.; Yuan, X.

    2006-12-01

    We use P-to-S converted teleseismic phases recorded at island and ocean bottom stations in the Marianas to image the subducting plate and the upper mantle seismic discontinuities in the Mariana subduction zone. The land and seafloor stations, which operated from June 2003 to May 2004, were deployed within the framework of the MARGINS Subduction Factory experiment of the Mariana system. The crust of the subducting plate is observed at about 80--90 km depth beneath the islands of Saipan, Tinian and Rota. For most of the island stations, a low velocity layer is imaged in the forearc at depths between about 20 and 60 km, with decreasing depths toward the arc. The nature of this feature is not yet clear. We found evidence for double seismic discontinuities at the base of the transition zone near the Mariana slab. A shallower discontinuity is imaged at depths of ~650--715 km, and a deeper interface lies at ~740--770 km depth. The amplitudes of the seismic signals suggest that the shear velocity contrasts across the two features are comparable. These characteristics support the interpretation that the discontinuities result from phase transformations in olivine (ringwoodite to post-spinel) and garnet (ilmenite to perovskite), respectively, for a pyrolite model of mantle composition.

  17. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites or infrastructure inspection by unmanned airborne vehicles, will require 16-bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages over using higher bit depth image compression directly. We propose to compress 16-bit depth images via 8-bit depth codecs in the following way. First, an input 16-bit depth image is mapped into two 8-bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for the MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method using JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with the JPEG2000, JPEG-XT and H.265/HEVC codecs, which support direct compression of infrared images in 16-bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
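    The MSB/LSB mapping described above is lossless before compression: each 16-bit pixel splits into its high and low bytes, and bitwise reassembly recovers the original exactly. A minimal sketch of the split itself (codec calls omitted; in the lossy case the reassembled values would carry the codecs' quantization errors):

```python
def split_16bit(img):
    """Split a 16-bit image (list of rows of ints) into MSB and LSB 8-bit images."""
    msb = [[p >> 8 for p in row] for row in img]
    lsb = [[p & 0xFF for p in row] for row in img]
    return msb, lsb

def join_8bit(msb, lsb):
    """Reassemble the 16-bit image from the two 8-bit planes."""
    return [[(m << 8) | l for m, l in zip(mr, lr)] for mr, lr in zip(msb, lsb)]
```

    A practical subtlety the paper analyzes: an error of one code in the MSB plane costs 256 codes in the reconstruction, so the MSB image must be compressed at much higher quality than the LSB image.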

  18. Robust curb detection with fusion of 3D-Lidar and camera data.

    PubMed

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-05-21

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, using multi-scale normal patterns based on the curb's geometric properties, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov chain to model the consistency of curb points, exploiting the continuity of the curb, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and assign confidence scores to the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
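    The dynamic-programming step over per-row curb candidates can be illustrated as follows. This is a minimal sketch under assumed interfaces (a per-candidate `score` and a pairwise `transition_cost` encoding the curb's continuity), not the authors' implementation:

```python
def best_curb_path(candidates, score, transition_cost):
    """
    candidates: list over image rows; each entry is a non-empty list of
    column positions where a curb-like normal pattern was detected.
    score(col): detection score of a candidate (higher is better).
    transition_cost(c1, c2): penalty for jumping between columns of
    consecutive rows (encodes the curb's continuity).
    Returns the column chosen in each row along the best path.
    """
    # dp[i][j] = best cumulative score ending at candidate j of row i
    dp = [[score(c) for c in candidates[0]]]
    back = [[-1] * len(candidates[0])]
    for i in range(1, len(candidates)):
        row_dp, row_back = [], []
        for c in candidates[i]:
            # choose the predecessor maximizing score minus jump penalty
            j_best = max(range(len(candidates[i - 1])),
                         key=lambda j: dp[-1][j] - transition_cost(candidates[i - 1][j], c))
            row_dp.append(dp[-1][j_best] - transition_cost(candidates[i - 1][j_best], c) + score(c))
            row_back.append(j_best)
        dp.append(row_dp)
        back.append(row_back)
    # backtrack from the best terminal candidate
    j = max(range(len(dp[-1])), key=lambda j: dp[-1][j])
    path = []
    for i in range(len(candidates) - 1, -1, -1):
        path.append(candidates[i][j])
        j = back[i][j]
    return path[::-1]
```

    With a uniform detection score, the recovered path is simply the most spatially consistent chain of candidates, which is the role the Markov chain plays in the paper.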

  19. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science.

  2. Classification of river water pollution using Hyperion data

    NASA Astrophysics Data System (ADS)

    Kar, Soumyashree; Rathore, V. S.; Champati ray, P. K.; Sharma, Richa; Swain, S. K.

    2016-06-01

    A novel attempt is made to use hyperspectral remote sensing to identify the spatial variability of metal pollutants present in river water. It was also attempted to classify the hyperspectral image - Earth Observation-1 (EO-1) Hyperion data of an 8 km stretch of the river Yamuna, near Allahabad city in India depending on its chemical composition. For validating image analysis results, a total of 10 water samples were collected and chemically analyzed using Inductively Coupled Plasma-Optical Emission Spectroscopy (ICP-OES). Two different spectral libraries from field and image data were generated for the 10 sample locations. Advanced per-pixel supervised classifications such as Spectral Angle Mapper (SAM), SAM target finder using BandMax and Support Vector Machine (SVM) were carried out along with the unsupervised clustering procedure - Iterative Self-Organizing Data Analysis Technique (ISODATA). The results were compared and assessed with respect to ground data. Analytical Spectral Devices (ASD), Inc. spectroradiometer, FieldSpec 4 was used to generate the spectra of the water samples which were compiled into a spectral library and used for Spectral Absorption Depth (SAD) analysis. The spectral depth pattern of image and field spectral libraries was found to be highly correlated (correlation coefficient, R2 = 0.99) which validated the image analysis results with respect to the ground data. Further, we carried out a multivariate regression analysis to assess the varying concentrations of metal ions present in water based on the spectral depth of the corresponding absorption feature. Spectral Absorption Depth (SAD) analysis along with metal analysis of field data revealed the order in which the metals affected the river pollution, which was in conformity with the findings of Central Pollution Control Board (CPCB). 
    Therefore, it is concluded that hyperspectral imaging offers an opportunity for satellite-based remote monitoring of water quality.

  3. Characterization of a time-resolved non-contact scanning diffuse optical imaging system exploiting fast-gated single-photon avalanche diode detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Sieno, Laura, E-mail: laura.disieno@polimi.it; Dalla Mora, Alberto; Contini, Davide

    2016-03-15

    We present a system for non-contact time-resolved diffuse reflectance imaging, based on small source-detector distance and high dynamic range measurements utilizing a fast-gated single-photon avalanche diode. The system is suitable for imaging of diffusive media without any contact with the sample and with a spatial resolution of about 1 cm at 1 cm depth. In order to objectively assess its performances, we adopted two standardized protocols developed for time-domain brain imagers. The related tests included the recording of the instrument response function of the setup and the responsivity of its detection system. Moreover, by using liquid turbid phantoms with absorbing inclusions, depth-dependent contrast and contrast-to-noise ratio as well as lateral spatial resolution were measured. To illustrate the potentialities of the novel approach, the characteristics of the non-contact system are discussed and compared to those of a fiber-based brain imager.

  4. Dense real-time stereo matching using memory efficient semi-global-matching variant based on FPGAs

    NASA Astrophysics Data System (ADS)

    Buder, Maximilian

    2012-06-01

    This paper presents a stereo image matching system that takes advantage of a global image matching method. The system is designed to provide depth information for mobile robotic applications. Typical tasks of the proposed system are to assist in obstacle avoidance, SLAM and path planning. Mobile robots pose strong requirements on the size, energy consumption, reliability and output quality of the image matching subsystem. Currently available systems either rely on active sensors or on local stereo image matching algorithms. The former are only suitable in controlled environments, while the latter suffer from low-quality depth maps. Top-ranking quality results are only achieved by iterative approaches using global image matching and color segmentation techniques, which are computationally demanding and therefore difficult to execute in real time. Attempts have been made to reach real-time performance with global methods by simplifying the routines, but the resulting depth maps are then almost comparable to those of local methods. The Semi-Global-Matching algorithm proposed earlier shows both very good image matching results and relatively simple operations. A memory-efficient variant of the Semi-Global-Matching algorithm is reviewed and adapted for an implementation based on reconfigurable hardware. The implementation is suitable for real-time execution in the field of robotics. It will be shown that the modified version of the efficient Semi-Global-Matching method delivers results equivalent to the original algorithm on the Middlebury dataset. The system has proven capable of processing VGA-sized images with a disparity resolution of 64 pixels at 33 frames per second on low-cost to mid-range hardware. If the focus is shifted to a higher image resolution, 1024×1024-sized stereo frames may be processed with the same hardware at 10 fps, with the disparity resolution settings unchanged.
A mobile system that covers preprocessing, matching and interfacing operations is also presented.
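    The core of Semi-Global Matching is its per-path cost aggregation recurrence. The sketch below implements one horizontal aggregation path in NumPy as an illustration only; a hardware implementation and the memory-efficient variant differ, and the penalties `P1`/`P2` are placeholder values:

```python
import numpy as np

def aggregate_left_to_right(cost, P1=1.0, P2=8.0):
    """
    cost: (rows, cols, D) matching-cost volume.
    Returns the SGM-aggregated cost along the horizontal path,
    L(p, d) = C(p, d) + min(L(p-1, d),
                            L(p-1, d±1) + P1,
                            min_k L(p-1, k) + P2) - min_k L(p-1, k)
    """
    rows, cols, D = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[:, 0] = cost[:, 0]
    for x in range(1, cols):
        prev = L[:, x - 1]                       # (rows, D)
        prev_min = prev.min(axis=1, keepdims=True)
        same = prev                              # stay at disparity d
        up = np.roll(prev, 1, axis=1) + P1       # come from d-1
        up[:, 0] = np.inf
        down = np.roll(prev, -1, axis=1) + P1    # come from d+1
        down[:, -1] = np.inf
        jump = prev_min + P2                     # large disparity jump
        best = np.minimum(np.minimum(same, up), np.minimum(down, jump))
        L[:, x] = cost[:, x] + best - prev_min   # subtract to bound growth
    return L
```

    In full SGM the same recurrence is evaluated along several path directions and the aggregated volumes are summed before the winner-takes-all disparity selection.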

  5. Wide field video-rate two-photon imaging by using spinning disk beam scanner

    NASA Astrophysics Data System (ADS)

    Maeda, Yasuhiro; Kurokawa, Kazuo; Ito, Yoko; Wada, Satoshi; Nakano, Akihiko

    2018-02-01

    Microscope technology with a wider field of view, deeper penetration depth, higher spatial resolution and higher imaging speed is required to investigate the intercellular dynamics and interactions of molecules and organelles in cells or tissue in more detail. Two-photon microscopy with a near-infrared (NIR) femtosecond laser is one technique that improves penetration depth and spatial resolution. However, video-rate or high-speed imaging with a wide field of view is difficult with a conventional two-photon microscope, because it relies on point-by-point scanning. In this study, we developed a two-photon microscope with a spinning-disk beam scanner and a femtosecond NIR fiber laser with around 10 W average power to meet the above requirements. The laser consists of an oscillator based on a mode-locked Yb fiber laser, a two-stage pre-amplifier, a main amplifier based on a Yb-doped photonic crystal fiber (PCF), and a pulse compressor with a pair of gratings. It generates a beam with up to 10 W average power, 300 fs pulse width and 72 MHz repetition rate. The beam is directed into a spinning-disk beam scanner (Yokogawa Electric) optimized for two-photon imaging. Using this system, we obtained 3D images with over 1 mm penetration depth and video-rate images with a 350 x 350 um field of view from the root of Arabidopsis thaliana.

  6. Metadata-assisted nonuniform atmospheric scattering model of image haze removal for medium-altitude unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun

    2017-09-01

    Haze removal is a nontrivial task for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance, large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with the aforementioned problems for medium-altitude UAVs. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model according to the aerosol distribution, which directly benefits image distortion correction. Second, considering the characteristics of long-distance imaging, we calculate the depth map, an essential clue for the model, on the basis of UAV metadata information. An accurate depth map reduces color distortion compared with the depth of field obtained by other existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of blurred details caused by the global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
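    The standard atmospheric scattering model that such methods invert is I(x) = J(x)t(x) + A(1 - t(x)), with transmission t(x) = exp(-beta d(x)) computed from the depth map. The following is a minimal sketch assuming a uniform scattering coefficient; the paper's nonuniform model instead lets beta vary with the aerosol distribution:

```python
import numpy as np

def dehaze(I, depth, A, beta=0.8, t_min=0.1):
    """
    Invert the atmospheric scattering model I = J*t + A*(1 - t),
    with transmission t(x) = exp(-beta * d(x)) derived from a
    metadata-based depth map d. I is an (H, W, 3) image in [0, 1],
    depth is (H, W), A is the global airlight value.
    """
    t = np.exp(-beta * depth)
    t = np.maximum(t, t_min)            # avoid amplifying noise where t ~ 0
    J = (I - A) / t[..., None] + A      # recover scene radiance
    return np.clip(J, 0.0, 1.0)
```

    The lower bound `t_min` is the usual safeguard against noise blow-up in heavily hazed regions; the exact value is an assumption here.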

  7. Moho Depth Variations in the Northeastern North China Craton Revealed by Receiver Function Imaging

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, L.; Yao, H.; Fang, L.

    2016-12-01

    The North China Craton (NCC), one of the oldest cratons in the world, has attracted wide attention in Earth Science for decades because of the unusual Mesozoic destruction of its cratonic lithosphere. Understanding the deep processes and mechanism of this craton destruction demands detailed knowledge about the deep structure of the region. In this study, we used two-year teleseismic receiver function data from the North China Seismic Array, consisting of 200 broadband stations deployed in the northeastern NCC, to image the Moho undulation of the region. A 2-D wave equation-based poststack depth migration method was employed to construct the structural images along 19 profiles, and a pseudo-3D crustal velocity model of the region based on previous ambient noise tomography and receiver function studies was adopted in the migration. We considered both the Ps and PpPs phases, but in some cases we also conducted PpSs+PsPs migration using different back azimuth ranges of the data, and calculated the travel times of all the considered phases to constrain the Moho depths. By combining the structure images along the 19 profiles, we obtained a high-resolution Moho depth map beneath the northeastern NCC. Our results are broadly consistent with the results of previous active source studies [http://www.craton.cn/data], and show a good correlation of the Moho depths with geological and tectonic features. Generally, the Moho depths are distinctly different on the opposite sides of the North-South Gravity Lineament. The Moho in the west is deeper than 40 km and shows a rapid uplift from 40 km to 30 km beneath the Taihang Mountain Range in the middle. To the east in the Bohai Bay Basin, the Moho further shallows to 30-26 km depth and undulates by 3 km, coinciding well with the depressions and uplifts inside the basin. The Moho depth beneath the Yin-Yan Mountains in the north gradually decreases from 42 km in the west to 25 km in the east, varying much more smoothly than that to the south.

  8. Layered compression for high-precision depth data.

    PubMed

    Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen

    2015-12-01

    With the development of depth data acquisition technologies, access to high-precision depth with more than 8-b precision has become much easier, and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-b image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of the high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the data format of the LSBs layer remains 8 b after accounting for the quantization error from the MSBs layer. For the LSBs layer, a standard 8-b image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated from this scheme can achieve better performance in view synthesis and gesture recognition applications compared with conventional coding schemes because of the error control algorithm.
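    The layer partition can be sketched as follows, with a hypothetical helper (not the paper's code) that splits a 16-b depth map into MSB and LSB layers. When the MSB layer is coded with a bounded error, the LSB layer stores the residual against the *reconstructed* MSBs, which is the sense in which the quantization error is "taken into account" so the LSB layer still fits in 8 b:

```python
import numpy as np

def layer_depth(depth16, msb_reconstructed=None):
    """
    Partition a 16-bit depth map into an MSB layer and an LSB layer.
    If the MSB layer is coded lossily (error-controllable scheme),
    pass the decoder-side MSBs so the LSB residual stays within 8 bits.
    """
    depth16 = depth16.astype(np.int32)
    msb = depth16 >> 8
    if msb_reconstructed is None:
        msb_reconstructed = msb          # lossless MSB coding
    residual = depth16 - (msb_reconstructed.astype(np.int32) << 8)
    assert residual.min() >= 0 and residual.max() <= 255, \
        "MSB coding error exceeds the 8-bit budget of the LSB layer"
    return msb_reconstructed.astype(np.uint8), residual.astype(np.uint8)
```

    Reconstruction is then simply `(msb << 8) | lsb`; the error-control condition in the paper is what guarantees the assertion above never fires.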

  9. Bathymetric mapping of submarine sand waves using multiangle sun glitter imagery: a case of the Taiwan Banks with ASTER stereo imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Hua-guo; Yang, Kang; Lou, Xiu-lin; Li, Dong-ling; Shi, Ai-qin; Fu, Bin

    2015-01-01

    Submarine sand waves are visible in optical sun glitter remote sensing images and multiangle observations can provide valuable information. We present a method for bathymetric mapping of submarine sand waves using multiangle sun glitter information from Advanced Spaceborne Thermal Emission and Reflection Radiometer stereo imagery. Based on a multiangle image geometry model and a sun glitter radiance transfer model, sea surface roughness is derived using multiangle sun glitter images. These results are then used for water depth inversions based on the Alpers-Hennings model, supported by a few true depth data points (sounding data). Case study results show that the inversion and true depths match well, with high-correlation coefficients and root-mean-square errors from 1.45 to 2.46 m, and relative errors from 5.48% to 8.12%. The proposed method has some advantages over previous methods in that it requires fewer true depth data points, it does not require environmental parameters or knowledge of sand-wave morphology, and it is relatively simple to operate. On this basis, we conclude that this method is effective in mapping submarine sand waves and we anticipate that it will also be applicable to other similar topography types.

  10. In vivo optical coherence tomography imaging of dissolution of hyaluronic acid microneedles in human skin (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Song, Seungri; Kim, Jung Dong; Bae, Jung-hyun; Chang, Sooho; Kim, Soocheol; Lee, Hyungsuk; Jeong, Dohyeon; Kim, Hong Kee; Joo, Chulmin

    2017-02-01

    Transdermal drug delivery (TDD) has recently been highlighted as an alternative to oral delivery and hypodermic injections. Among many methods, drug delivery using a microneedle (MN) is one of the most promising administration strategies due to its high skin permeability, minimal invasiveness, and ease of injection. In addition, microneedle-based TDD is being explored for cosmetic and therapeutic purposes, rapidly expanding the microneedle market for the general population. To date, visualization of microneedles inserted into biological tissue has primarily been performed ex vivo. MRI, CT and ultrasound imaging do not provide sufficient spatial resolution, and optical microscopy is not suitable because of its limited imaging depth; the structure of microneedles located 0.2--1 mm into the skin cannot be visualized. Optical coherence tomography (OCT) is a non-invasive, cross-sectional optical imaging modality for biological tissue with high spatial resolution and acquisition speed. Compared with ultrasound imaging, it exhibits superior spatial resolution (1--10 um) and high sensitivity, while providing an imaging depth in biological tissue down to 1--2 mm. Here, we present in situ imaging and analysis of the penetration and dissolution characteristics of hyaluronic acid based MNs (HA-MNs) with various needle heights in human skin in vivo. In contrast to other studies, we measured the actual penetration depths of the HA-MNs by considering the experimentally measured refractive index of HA in the solid state. Regarding the dissolution dynamics of the HA-MNs, the time-lapse structural alteration of the MNs could be clearly visualized, and the volumetric changes of the MNs were measured with an image analysis algorithm.

  11. Faulting apparently related to the 1994 Northridge, California, earthquake and possible co-seismic origin of surface cracks in Potrero Canyon, Los Angeles County, California

    USGS Publications Warehouse

    Catchings, R.D.; Goldman, M.R.; Lee, W.H.K.; Rymer, M.J.; Ponti, D.J.

    1998-01-01

    Apparent southward-dipping, reverse-fault zones are imaged to depths of about 1.5 km beneath Potrero Canyon, Los Angeles County, California. Based on their orientation and projection to the surface, we suggest that the imaged fault zones are extensions of the Oak Ridge fault. Geologic mapping by others and correlations with seismicity studies suggest that the Oak Ridge fault is the causative fault of the 17 January 1994 Northridge earthquake (Northridge fault). Our seismically imaged faults may be among several faults that collectively comprise the Northridge thrust fault system. Unusually strong shaking in Potrero Canyon during the Northridge earthquake may have resulted from focusing of seismic energy or co-seismic movement along existing, related shallow-depth faults. The strong shaking produced ground-surface cracks and sand blows distributed along the length of the canyon. Seismic reflection and refraction images show that shallow-depth faults may underlie some of the observed surface cracks. The relationship between observed surface cracks and imaged faults indicates that some of the surface cracks may have developed from nontectonic alluvial movement, but others may be fault related. Immediately beneath the surface cracks, P-wave velocities are unusually low (<400 m/sec), and there are velocity anomalies consistent with a seismic reflection image of shallow faulting to depths of at least 100 m. On the basis of velocity data, we suggest that unconsolidated soils (<800 m/sec) extend to depths of about 15 to 20 m beneath our datum (<25 m below ground surface). The underlying rocks range in velocity from about 1000 to 5000 m/sec in the upper 100 m. This study illustrates the utility of high-resolution seismic imaging in assessing local and regional seismic hazards.

  12. A Flexible Annular-Array Imaging Platform for Micro-Ultrasound

    PubMed Central

    Qiu, Weibao; Yu, Yanyan; Chabok, Hamid Reza; Liu, Cheng; Tsang, Fu Keung; Zhou, Qifa; Shung, K. Kirk; Zheng, Hairong; Sun, Lei

    2013-01-01

    Micro-ultrasound is an invaluable imaging tool for many clinical and preclinical applications requiring high resolution (approximately several tens of micrometers). Imaging systems for micro-ultrasound, including single-element imaging systems and linear-array imaging systems, have been developed extensively in recent years. Single-element systems are cheaper, but linear-array systems give much better image quality at a higher expense. Annular-array-based systems provide a third alternative, striking a balance between image quality and expense. This paper presents the development of a novel programmable and real-time annular-array imaging platform for micro-ultrasound. It supports multi-channel dynamic beamforming techniques for large-depth-of-field imaging. The major image processing algorithms were implemented with a novel field-programmable gate array technology for high speed and flexibility. Real-time imaging was achieved by fast processing algorithms and a high-speed data transfer interface. The platform utilizes a printed circuit board scheme incorporating state-of-the-art electronics for compactness and cost effectiveness. Extensive tests including hardware, algorithms, wire phantom, and tissue mimicking phantom measurements were conducted to demonstrate good performance of the platform. The calculated contrast-to-noise ratio (CNR) of the tissue phantom measurements was higher than 1.2 in the range of 3.8 to 8.7 mm imaging depth. The platform supported more than 25 images per second for real-time image acquisition. The depth-of-field had about 2.5-fold improvement compared to single-element transducer imaging. PMID:23287923
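    As an illustration of the reported metric, one common CNR definition for ultrasound phantom measurements is the mean difference between target and background regions normalized by the background standard deviation. The paper does not state its exact formula, so this is an assumed variant:

```python
import numpy as np

def contrast_to_noise_ratio(target, background):
    """
    CNR = |mean(target) - mean(background)| / std(background),
    computed over pixel samples drawn from the two regions of interest.
    (Illustrative definition; other CNR variants exist.)
    """
    target = np.asarray(target, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(target.mean() - background.mean()) / background.std()
```

    The regions would be selected at each imaging depth of the phantom to produce the depth-resolved CNR values quoted above.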

  13. Two sided residual refocusing for acoustic lens based photoacoustic imaging system.

    PubMed

    Kalloor Joseph, Francis; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund

    2018-05-30

    In photoacoustic (PA) imaging, an acoustic lens-based system can form a focused image of an object plane. A real-time C-scan PA image can be formed by simply time gating the transducer response. While most of the focusing action is done by the lens, residual refocusing is needed to image multiple depths with high resolution simultaneously. However, a refocusing algorithm for the PA camera has not been studied so far in the literature. In this work, we reformulate this residual refocusing problem for a PA camera as a two-sided wave propagation from a planar sensor array. One part of the problem deals with forward wave propagation while the other deals with time reversal. We have chosen a Fast Fourier Transform (FFT) based wave propagation model for the refocusing to maintain the real-time nature of the system. We have conducted Point Spread Function (PSF) measurement experiments at multiple depths and refocused the signal using the proposed method. The Full Width at Half Maximum (FWHM), peak value and Signal to Noise Ratio (SNR) of the refocused PSF are analyzed to quantify the effect of refocusing. We believe that using a two-dimensional transducer array combined with the proposed refocusing can lead to real-time volumetric imaging using a lens-based PA imaging system. © 2018 Institute of Physics and Engineering in Medicine.
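    An FFT-based wave propagation model of the kind referred to above is the angular spectrum method: the field is Fourier transformed, each plane-wave component is multiplied by a propagation phase, and the result is transformed back; time reversal corresponds to propagating with a negated distance. A sketch for a scalar 2-D field, with illustrative parameters rather than the paper's acoustic ones:

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, dz):
    """
    Propagate a monochromatic 2-D field sampled on a dx grid by a
    distance dz using the FFT-based angular spectrum method.
    """
    ny, nx = field.shape
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(kz2, 0.0))        # axial wavenumber
    H = np.exp(1j * kz * dz) * (kz2 > 0)      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

    Propagating forward by dz and then by -dz recovers the original field (for the propagating components), which is the two-sided forward/time-reversal pairing described above.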

  14. Birefringence and vascular imaging of in vivo human skin by Jones-matrix optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Li, En; Makita, Shuichi; Hong, Young-Joo; Kasaragod, Deepa; Yasuno, Yoshiaki

    2017-02-01

    A customized 1310-nm Jones-matrix optical coherence tomography (JM-OCT) system for dermatological investigation was constructed and used for in vivo normal human skin tissue imaging. This system can simultaneously measure the three-dimensional depth-resolved local birefringence, complex-correlation based OCT angiography (OCT-A), degree-of-polarization-uniformity (DOPU) and scattering OCT intensity. By obtaining these optical properties of tissue, the morphology, vasculature, and collagen content of skin can be deduced and visualized. Structures in the deep layers of the epithelium were observed with depth-resolved local birefringence and polarization uniformity images. These results suggest high diagnostic and investigative potential of JM-OCT for dermatology.

  15. Sub-40 fs, 1060-nm Yb-fiber laser enhances penetration depth in nonlinear optical microscopy of human skin

    NASA Astrophysics Data System (ADS)

    Balu, Mihaela; Saytashev, Ilyas; Hou, Jue; Dantus, Marcos; Tromberg, Bruce J.

    2015-12-01

    Advancing the practical utility of nonlinear optical microscopy requires continued improvement in imaging depth and contrast. We evaluated second-harmonic generation (SHG) and third-harmonic generation images from ex vivo human skin and showed that a sub-40 fs, 1060-nm Yb-fiber laser can enhance SHG penetration depth by up to 80% compared to a >100 fs, 800 nm Ti:sapphire source. These results demonstrate the potential of fiber-based laser systems to address a key performance limitation related to nonlinear optical microscopy (NLOM) technology while providing a low-barrier-to-access alternative to Ti:sapphire sources that could help accelerate the movement of NLOM into clinical practice.

  16. The influence of structure depth on image blurring of micrometres-thick specimens in MeV transmission electron imaging.

    PubMed

    Wang, Fang; Sun, Ying; Cao, Meng; Nishi, Ryuji

    2016-04-01

    This study investigates the influence of structure depth on image blurring of micrometres-thick films by experiment and simulation with a conventional transmission electron microscope (TEM). First, ultra-high-voltage electron microscope (ultra-HVEM) images of nanometer gold particles embedded in thick epoxy-resin films were acquired in the experiment and compared with simulated images. Then, variations of image blurring of gold particles at different depths were evaluated by calculating the particle diameter. The results showed that with a decrease in depth, image blurring increased. This depth-related property was more apparent for thicker specimens. Fortunately, larger particle depth involves less image blurring, even for a 10-μm-thick epoxy-resin film. The quality dependence on depth of a 3D reconstruction of particle structures in thick specimens was revealed by electron tomography. The evolution of image blurring with structure depth is determined mainly by multiple elastic scattering effects. Thick specimens of heavier materials produced more blurring due to a larger lateral spread of electrons after scattering from the structure. Nevertheless, increasing electron energy to 2 MeV can reduce blurring and produce an acceptable image quality for thick specimens in the TEM. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Identification of the critical depth-of-cut through a 2D image of the cutting region resulting from taper cutting of brittle materials

    NASA Astrophysics Data System (ADS)

    Gu, Wen; Zhu, Zhiwei; Zhu, Wu-Le; Lu, Leyao; To, Suet; Xiao, Gaobo

    2018-05-01

    An automatic identification method for obtaining the critical depth-of-cut (DoC) of brittle materials with nanometric accuracy and sub-nanometric uncertainty is proposed in this paper. With this method, a two-dimensional (2D) microscopic image of the taper cutting region is captured and further processed by image analysis to extract the margin of generated micro-cracks in the imaging plane. Meanwhile, an analytical model is formulated to describe the theoretical curve of the projected cutting points on the imaging plane with respect to a specified DoC during the whole cutting process. By adopting differential evolution algorithm-based minimization, the critical DoC can be identified by minimizing the deviation between the extracted margin and the theoretical curve. The proposed method is demonstrated through both numerical simulation and experimental analysis. Compared with conventional 2D- and 3D-microscopic-image-based methods, determination of the critical DoC in this study uses the envelope profile rather than the onset point of the generated cracks, providing a more objective approach with smaller uncertainty.
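    Differential evolution itself is straightforward to sketch. The following is a generic DE/rand/1/bin minimizer, not the authors' implementation or settings; in the method above, the objective `f` would be the deviation between the extracted crack margin and the theoretical curve as a function of the candidate critical DoC parameters:

```python
import random

def de_minimize(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=150, seed=0):
    """Minimal DE/rand/1/bin minimizer over box constraints `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct members other than the target vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < CR:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])  # mutation
                else:
                    v = pop[i][d]                                # crossover keep
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))                # clamp to bounds
            fc = f(trial)
            if fc <= cost[i]:                                    # greedy selection
                pop[i], cost[i] = trial, fc
    j = min(range(pop_size), key=cost.__getitem__)
    return pop[j], cost[j]
```

    The population size, F, CR and iteration budget here are placeholder defaults; the paper's tuning may differ.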

  18. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on the high speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationships or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  19. A small-molecule dye for NIR-II imaging

    NASA Astrophysics Data System (ADS)

    Antaris, Alexander L.; Chen, Hao; Cheng, Kai; Sun, Yao; Hong, Guosong; Qu, Chunrong; Diao, Shuo; Deng, Zixin; Hu, Xianming; Zhang, Bo; Zhang, Xiaodong; Yaghi, Omar K.; Alamparambil, Zita R.; Hong, Xuechuan; Cheng, Zhen; Dai, Hongjie

    2016-02-01

    Fluorescent imaging of biological systems in the second near-infrared window (NIR-II) can probe tissue at centimetre depths and achieve micrometre-scale resolution at depths of millimetres. Unfortunately, all current NIR-II fluorophores are excreted slowly and are largely retained within the reticuloendothelial system, making clinical translation nearly impossible. Here, we report a rapidly excreted NIR-II fluorophore (~90% excreted through the kidneys within 24 h) based on a synthetic 970-Da organic molecule (CH1055). The fluorophore outperformed indocyanine green (ICG), a clinically approved NIR-I dye, in resolving mouse lymphatic vasculature and in sentinel lymphatic mapping near a tumour. High levels of uptake of PEGylated CH1055 dye were observed in brain tumours in mice, where the dye was detected at a depth of ~4 mm. The CH1055 dye also allowed targeted molecular imaging of tumours in vivo when conjugated with an anti-EGFR Affibody. Moreover, a superior tumour-to-background signal ratio allowed precise image-guided tumour-removal surgery.

  20. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we extended the principles of "variable homography", defined by Zhang and Greenspan, to measuring the height of emergent fibers on glass and non-woven fabrics. That method was defined for fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one by developing an acquisition device with four lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device thus becomes a plenoptic, or light-field, camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device, and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.

  1. Evaluating motion parallax and stereopsis as depth cues for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Braun, Marius; Leiner, Ulrich; Ruschin, Detlef

    2011-03-01

    The perception of space in the real world is based on multifaceted depth cues, most of them monocular, some binocular. Developing 3D displays raises the question of which of these depth cues are predominant and should be simulated by computational means in such a panel. Beyond the cues based on image content, such as shadows or patterns, stereopsis and depth from motion parallax are the most significant mechanisms supplying observers with depth information. We set up a carefully designed test situation, widely excluding other, undesired distance hints. We then conducted a user test to find out which of these two depth cues is more relevant and whether a combination of both would increase accuracy in a depth estimation task. The trials were conducted using our autostereoscopic "Free2C" displays, which are capable of detecting the user's eye position and steering the image lobes dynamically in that direction. At the same time, the eye position was used to update the virtual camera's location, thereby offering motion parallax to the observer. As far as we know, this was the first time that such a test has been conducted using an autostereoscopic display without any assistive technologies. Our results showed, in accordance with prior experiments, that both cues are effective, but that stereopsis is more relevant by an order of magnitude. Combining both cues improved the precision of distance estimation by another 30-40%.

  2. Photoacoustics and speed-of-sound dual mode imaging with a long depth-of-field by using annular ultrasound array.

    PubMed

    Ding, Qiuning; Tao, Chao; Liu, Xiaojun

    2017-03-20

    Speed of sound and optical absorption reflect the structure and function of tissues from different aspects. A dual-mode microscopy system based on a concentric annular ultrasound array is proposed to simultaneously acquire long depth-of-field images of the speed of sound and optical absorption of inhomogeneous samples. First, the speed of sound is decoded from the signal delay between the elements of the annular array. The measured speed of sound can not only be used as an image contrast, but also improves the resolution and the accuracy of spatial localization of the photoacoustic image in inhomogeneous acoustic media. Second, benefiting from the dynamic focusing of the annular array and the measured speed of sound, an advanced acoustic-resolution photoacoustic microscopy with precise localization and a long depth-of-field is achieved. The performance of the dual-mode imaging system has been experimentally examined using a custom-made annular array. The proposed dual-mode microscopy may be significant for monitoring biological physiological and pathological processes.
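The first step, decoding the speed of sound from inter-element delays, can be sketched as follows. The geometry, sampling rate, and pulse shape are illustrative assumptions, not the authors' configuration: a pulse from an on-axis source arrives at two annular elements with a path-length difference, and the speed of sound follows from the cross-correlation delay.

```python
import numpy as np

fs = 200e6                      # sampling rate (Hz), assumed
c_true = 1540.0                 # speed of sound in the sample (m/s)
z = 10e-3                       # photoacoustic source depth on the array axis (m)
r1, r2 = 2e-3, 6e-3             # radii of two annular elements (m), hypothetical

d1, d2 = np.hypot(z, r1), np.hypot(z, r2)   # source-to-element path lengths

t = np.arange(0.0, 20e-6, 1.0 / fs)
def pulse(t0):
    # Gaussian-windowed 10 MHz tone burst arriving at time t0
    return np.exp(-((t - t0) * 5e6) ** 2) * np.cos(2e7 * np.pi * (t - t0))

s1, s2 = pulse(d1 / c_true), pulse(d2 / c_true)

# Inter-element delay from the cross-correlation peak, then invert
# the known geometry to recover the speed of sound.
lag = np.argmax(np.correlate(s2, s1, mode="full")) - (t.size - 1)
c_est = (d2 - d1) / (lag / fs)
```

The estimate is quantized by the sampling period; in practice sub-sample interpolation of the correlation peak would tighten it further.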

  3. Cross-matching: a modified cross-correlation underlying threshold energy model and match-based depth perception

    PubMed Central

    Doi, Takahiro; Fujita, Ichiro

    2014-01-01

    Three-dimensional visual perception requires correct matching of images projected to the left and right eyes. The matching process is faced with an ambiguity: part of one eye's image can be matched to multiple parts of the other eye's image. This stereo correspondence problem is complicated for random-dot stereograms (RDSs), because dots with an identical appearance produce numerous potential matches. Despite such complexity, human subjects can perceive a coherent depth structure. A coherent solution to the correspondence problem does not exist for anticorrelated RDSs (aRDSs), in which luminance contrast is reversed in one eye. Neurons in the visual cortex reduce disparity selectivity for aRDSs progressively along the visual processing hierarchy. A disparity-energy model followed by threshold nonlinearity (threshold energy model) can account for this reduction, providing a possible mechanism for the neural matching process. However, the essential computation underlying the threshold energy model is not clear. Here, we propose that a nonlinear modification of cross-correlation, which we term "cross-matching," represents the essence of the threshold energy model. We placed half-wave rectification within the cross-correlation of the left-eye and right-eye images. The disparity tuning derived from cross-matching was attenuated for aRDSs. We simulated a psychometric curve as a function of graded anticorrelation (graded mixture of aRDS and normal RDS); this simulated curve reproduced the match-based psychometric function observed in human near/far discrimination. The dot density was 25% for both simulation and observation. We predicted that as the dot density increased, the performance for aRDSs should decrease below chance (i.e., reversed depth), and the level of anticorrelation that nullifies depth perception should also decrease. We suggest that cross-matching serves as a simple computation underlying the match-based disparity signals in stereoscopic depth perception. PMID:25360107
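One plausible reading of "half-wave rectification within the cross-correlation" can be sketched on a 1-D random-dot pattern: each eye's image is split into positive- and negative-contrast parts, and only like-signed pairs contribute to the match score. The stimulus parameters below are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_matching(left, right, disparities):
    # Half-wave rectify before multiplying: only like-signed
    # (matched-contrast) dot pairs contribute to the similarity score.
    lp, ln = np.maximum(left, 0), np.maximum(-left, 0)
    return np.array([np.mean(lp * np.roll(np.maximum(right, 0), d)
                             + ln * np.roll(np.maximum(-right, 0), d))
                     for d in disparities])

# 1-D random-dot "stereogram" at 25% dot density, true disparity of 5 samples
left = rng.choice([-1.0, 1.0], size=4000) * (rng.random(4000) < 0.25)
right = np.roll(left, -5)             # correlated RDS
anti = -right                         # anticorrelated RDS (contrast reversed)

disp = np.arange(-20, 21)
t_c = cross_matching(left, right, disp)   # tuning peaks at the true disparity
t_a = cross_matching(left, anti, disp)    # inverted, strongly attenuated tuning
print(disp[np.argmax(t_c)])               # prints 5
```

For the correlated pattern the score peaks sharply at the true disparity; for the anticorrelated pattern every matched-contrast product vanishes at alignment, leaving only a shallow dip, consistent with the attenuation the abstract describes.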

  4. Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates

    NASA Astrophysics Data System (ADS)

    Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.

    2010-04-01

    Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwell times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time towards an object extends beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-to-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser. A cyclic, fast-Fourier-transform-supported analysis algorithm is used to search for the pattern within the return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented. Numerical simulations are performed to investigate the relative merits of the technique.
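The pattern-search idea can be illustrated with a toy simulation: a pseudo-random trigger bitstream, sparse photon returns delayed by the round trip, and a cyclic cross-correlation computed via FFT over all candidate delays. Pattern length, detection probability, and background rate here are illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(2)

clock = 2e9                     # base clock rate (Hz)
n = 4096                        # pattern length in clock periods (illustrative)
pattern = (rng.random(n) < 0.1).astype(float)   # pseudo-random trigger bitstream

# Sparse photon returns: the pattern delayed by the round-trip time,
# thinned by a detection probability, plus uniform background counts.
true_delay = 2901               # round-trip delay in clock bins
returns = np.roll(pattern, true_delay) * (rng.random(n) < 0.2)
returns += rng.random(n) < 0.001

# Cyclic cross-correlation via FFT: an O(n log n) search over every delay
corr = np.fft.irfft(np.fft.rfft(returns) * np.conj(np.fft.rfft(pattern)), n)
delay_est = int(np.argmax(corr))                # recovers true_delay

# The unambiguous range grows with the pattern length, not the pulse spacing
unambiguous_range = 3e8 * (n / clock) / 2       # metres
```

With a longer pattern (the abstract mentions base clock rates up to 2 GHz with varying pattern lengths), the same correlation extends the unambiguous distance to kilometres at no cost in timing resolution.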

  5. Modulated Excitation Imaging System for Intravascular Ultrasound.

    PubMed

    Qiu, Weibao; Wang, Xingying; Chen, Yan; Fu, Qiang; Su, Min; Zhang, Lining; Xia, Jingjing; Dai, Jiyan; Zhang, Yaonan; Zheng, Hairong

    2017-08-01

    Advances in methodologies and tools often lead to new insights into cardiovascular diseases. Intravascular ultrasound (IVUS) is a well-established diagnostic method that provides high-resolution images of the vessel wall and atherosclerotic plaques. High-frequency (>50 MHz) ultrasound enables the spatial resolution of IVUS to approach that of optical imaging methods. However, the penetration depth decreases when using higher imaging frequencies due to the greater acoustic attenuation. An imaging method that improves the penetration depth of high-resolution IVUS would, therefore, be of major clinical importance. Modulated excitation imaging is known to allow ultrasound waves to penetrate further. This paper presents an ultrasound system specifically for modulated-excitation-based IVUS imaging. The system incorporates a high-voltage waveform generator and an image processing board that are optimized for IVUS applications. In addition, a miniaturized ultrasound transducer has been constructed using a Pb(Mg1/3Nb2/3)O3-PbTiO3 single crystal to improve the ultrasound characteristics. The results show that the proposed system was able to provide increases of 86.7% in penetration depth and 9.6 dB in the signal-to-noise ratio for 60 MHz IVUS. In vitro tissue samples were also investigated to demonstrate the performance of the system.

  6. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg² IN FIVE FILTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2″ diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1″ in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
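The benefit of weighted co-addition can be shown with a minimal sketch. The released co-adds weight frames by seeing, sky transparency, and background noise; this toy keeps only the inverse-variance noise term, which is enough to see why the weighted stack beats a straight mean.

```python
import numpy as np

rng = np.random.default_rng(3)

# One point source observed over many epochs with varying sky noise
truth = np.zeros((32, 32)); truth[16, 16] = 100.0
sigmas = rng.uniform(2.0, 12.0, 60)               # per-epoch background noise
frames = np.array([truth + rng.normal(0.0, s, truth.shape) for s in sigmas])

w = 1.0 / sigmas**2                               # inverse-variance weights
coadd = np.tensordot(w / w.sum(), frames, axes=1) # co-added science image
weight_map = np.full(truth.shape, w.sum())        # associated weight image

# Effective noise of the weighted co-add vs. an unweighted average
noise_ivw = (coadd - truth).std()
noise_avg = (frames.mean(axis=0) - truth).std()
```

Down-weighting the noisiest epochs lowers the residual noise of the stack, which is what "optimized for depth" amounts to: fainter 5σ detection limits from the same input frames.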

  7. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-Optimized Co-adds Over 300 deg² in Five Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2″ diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1″ in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  8. Applications of two-photon fluorescence microscopy in deep-tissue imaging

    NASA Astrophysics Data System (ADS)

    Dong, Chen-Yuan; Yu, Betty; Hsu, Lily L.; Kaplan, Peter D.; Blankschstein, D.; Langer, Robert; So, Peter T. C.

    2000-07-01

    Based on the nonlinear excitation of fluorescent molecules, two-photon fluorescence microscopy has become a significant new tool for biological imaging. The point-like excitation characteristic of this technique enhances image quality by virtually eliminating off-focal fluorescence. Furthermore, sample photodamage is greatly reduced because fluorescence excitation is limited to the focal region. For deep-tissue imaging, two-photon microscopy has the additional benefit of greatly improved depth penetration: since the near-infrared laser sources used in two-photon microscopy scatter less than their UV/blue-green counterparts, in-depth imaging of highly scattering specimens can be greatly improved. In this work, we present data characterizing both the imaging properties (point-spread functions) of this novel technology and images of tissue samples (skin). In particular, we demonstrate how blind deconvolution can be used to further improve two-photon image quality and how this technique can be used to study mechanisms of chemically enhanced transdermal drug delivery.

  9. Optical clearing of melanoma in vivo: characterization by diffuse reflectance spectroscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Pires, Layla; Demidov, Valentin; Vitkin, I. Alex; Bagnato, Vanderlei; Kurachi, Cristina; Wilson, Brian C.

    2016-08-01

    Melanoma is the most aggressive type of skin cancer, with significant risk of fatality. Due to its pigmentation, light-based imaging and treatment techniques are limited to near the tumor surface, which is inadequate, for example, to evaluate the microvascular density that is associated with prognosis. White-light diffuse reflectance spectroscopy (DRS) and near-infrared optical coherence tomography (OCT) were used to evaluate the effect of a topically applied optical clearing agent (OCA) in melanoma in vivo and to image the microvascular network. DRS was performed using a contact fiber optic probe in the range from 450 to 650 nm. OCT imaging was performed using a swept-source system at 1310 nm. The OCT image data were processed using speckle variance and depth-encoded algorithms. Diffuse reflectance signals decreased with clearing, dropping by ˜90% after 45 min. OCT was able to image the microvasculature in the pigmented melanoma tissue with good spatial resolution up to a depth of ˜300 μm without the use of OCA; improved contrast resolution was achieved with optical clearing to a depth of ˜750 μm in tumor. These findings are relevant to potential clinical applications in melanoma, such as assessing prognosis and treatment responses. Optical clearing may also facilitate the use of light-based treatments such as photodynamic therapy.

  10. Laser biostimulation therapy planning supported by imaging

    NASA Astrophysics Data System (ADS)

    Mester, Adam R.

    2018-04-01

    Ultrasonography and MR imaging can help to identify the area and depth of different lesions, such as injury, overuse, inflammation, and degenerative diseases. The appropriate power density, sufficient dose, and direction of the laser treatment can then be optimally estimated. If the required minimum 5 mW photon density and the required optimal energy dose of 2-4 Joule/cm2 would not reach the depth of the target volume, additional techniques can help: slight compression of the soft tissues can decrease the tissue thickness, or multiple laser diodes can be used. In the case of multiple diode clusters, light scattering results in deeper penetration. Another method to increase the penetration depth is a secondary pulsation (in the kHz range) of the laser light (the so-called continuous-wave laser itself has an inherent THz pulsation due to temporal coherence). A third way to achieve higher light intensity in the target volume is the multi-gate technique: based on imaging findings, the same joint can be reached from different angles. Recent developments in ultrasonography, namely elastosonography and tissue harmonic imaging with contrast material, offer optimal therapy planning. While MRI is too expensive a modality for laser planning alone, its images can be optimally used if a diagnostic MRI has already been performed. The usual DICOM images allow "postprocessing" measurements in the millimeter range.

  11. Computational adaptive optics for broadband optical interferometric tomography of biological tissue.

    PubMed

    Adie, Steven G; Graf, Benedikt W; Ahmad, Adeel; Carney, P Scott; Boppart, Stephen A

    2012-05-08

    Aberrations in optical microscopy reduce image resolution and contrast, and can limit imaging depth when focusing into biological samples. Static correction of aberrations may be achieved through appropriate lens design, but this approach does not offer the flexibility of simultaneously correcting aberrations for all imaging depths, nor the adaptability to correct for sample-specific aberrations for high-quality tomographic optical imaging. Incorporation of adaptive optics (AO) methods has demonstrated considerable improvement in optical image contrast and resolution in noninterferometric microscopy techniques, as well as in optical coherence tomography. Here we present a method to correct aberrations in a tomogram rather than in the beam of a broadband optical interferometry system. Based on Fourier optics principles, we correct aberrations of a virtual pupil using Zernike polynomials. When used in conjunction with the computed imaging method interferometric synthetic aperture microscopy, this computational AO enables object reconstruction (within the single scattering limit) with ideal focal-plane resolution at all depths. Tomographic reconstructions of tissue phantoms containing subresolution titanium-dioxide particles and of ex vivo rat lung tissue demonstrate aberration correction in datasets acquired with a highly astigmatic illumination beam. These results also demonstrate that imaging with an aberrated astigmatic beam provides the advantage of a more uniform depth-dependent signal compared to imaging with a standard Gaussian beam. With further work, computational AO could enable the replacement of complicated and expensive optical hardware components with algorithms implemented on a standard desktop computer, making high-resolution 3D interferometric tomography accessible to a wider group of users and nonspecialists.

  12. Functional imaging and assessment of the glucose diffusion rate in epithelial tissues in optical coherence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larin, K V; Tuchin, V V

    2008-06-30

    Functional imaging, monitoring and quantitative description of glucose diffusion in epithelial and underlying stromal tissues in vivo, and control of the optical properties of tissues, are extremely important for many biomedical applications, including the development of noninvasive or minimally invasive glucose sensors as well as the therapy and diagnostics of various diseases, such as cancer, diabetic retinopathy, and glaucoma. Recent progress in the development of a noninvasive molecular diffusion biosensor based on optical coherence tomography (OCT) is described. The diffusion of glucose was studied in several epithelial tissues both in vitro and in vivo. Because OCT provides depth-resolved imaging of tissues with high in-depth resolution, the glucose diffusion is described not only as a function of time but also as a function of depth. (special issue devoted to application of laser technologies in biophotonics and biomedical studies)

  13. Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models

    PubMed Central

    Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir

    2016-01-01

    Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. PMID:27775570

  14. Oscillating fluid lens in coherent retinal projection displays for extending depth of focus

    NASA Astrophysics Data System (ADS)

    von Waldkirch, M.; Lukowicz, P.; Troster, G.

    2005-09-01

    See-through head-mounted displays, which allow virtual information to be overlaid on the user's real view, normally suffer from a limited depth of focus (DOF). To overcome this problem, we discuss in this paper the use of a fast oscillating, variable-focus lens in a retinal projection display. The evaluation is based on a schematic eye model and on the partial-coherence simulation tool SPLAT, which allows us to calculate the projected retinal images of a text target. Objective image-quality criteria demonstrate that the use of an oscillating lens is promising, provided that partially coherent illumination is used. In this case, psychometric measurements reveal that the depth of focus for reading text can be extended by a factor of up to 2.2. For fully coherent and incoherent illumination, however, the retinal images suffer from structural and contrast degradation effects, respectively.

  15. The multifocus plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Lumsdaine, Andrew

    2012-01-01

    The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (one of the important applications), the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a really wide range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.

  16. Bayesian depth estimation from monocular natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.

  17. Mobile robots exploration through cnn-based reinforcement learning.

    PubMed

    Tai, Lei; Liu, Ming

    2016-01-01

    Exploration of an unknown environment is an elemental application for mobile robots. In this paper, we outline a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model takes the depth image from an RGB-D sensor as its only input. The feature representation of the depth image is extracted through a pre-trained convolutional neural network model. Building on the recent success of the deep Q-network in artificial intelligence, the robot controller achieved exploration and obstacle avoidance abilities in several different simulated environments. It is the first time that reinforcement learning is used to build an exploration strategy for mobile robots from raw sensor information.

  18. Super long viewing distance light homogeneous emitting three-dimensional display

    NASA Astrophysics Data System (ADS)

    Liao, Hongen

    2015-04-01

    Three-dimensional (3D) display technology has continuously been attracting public attention with the progress in today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depth, and viewing angle, are often limited due to the use of optical lenses or optical gratings. We present a 3D display using a MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image has the advantages of being aberration-free and having high-definition spatial resolution, making this the first display to exhibit animated 3D images with an image depth of six meters. Our LHE 3D display approach can be used to build a natural flat-panel 3D display with a super long viewing distance and real-time image updates.

  19. Auger Spectroscopy Analysis of Spalled LEU-10Mo Foils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence, Samantha Kay; Schulze, Roland K.

    2017-08-03

    The presentation includes slides on: surface science used to probe LEU-10Mo spall; Auger highlights of graphitic-like inclusions and Mo-deficient oxide on the base metal; higher C concentration detected within the spall area; depth profiling revealing a thick oxide; Mo concentration nearing nominal only at depths of ~400 nm; and key findings.

  20. Salient region detection by fusing bottom-up and top-down features extracted from a single image.

    PubMed

    Tian, Huawei; Fang, Yuming; Zhao, Yao; Lin, Weisi; Ni, Rongrong; Zhu, Zhenfeng

    2014-10-01

    Recently, some global contrast-based salient region detection models have been proposed based on only the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low contrast in color and high contrast in orientation. In addition, the existing fusion methods for different feature maps, like the simple averaging method and the selective method, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on the bottom-up and top-down mechanisms: the color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of the final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree of scattering and the eccentricities of feature maps, the proposed fusion method can assign adaptive weights to different feature maps to reflect the confidence level of each feature map. The depth-from-focus of the image, as a significant top-down feature for visual attention in the image, is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions for the image. Experimental results show that the proposed model outperforms the state-of-the-art models on three publicly available data sets.

  1. Prototype pre-clinical PET scanner with depth-of-interaction measurements using single-layer crystal array and single-ended readout

    NASA Astrophysics Data System (ADS)

    Lee, Min Sun; Kim, Kyeong Yun; Ko, Guen Bae; Lee, Jae Sung

    2017-05-01

    In this study, we developed a proof-of-concept prototype PET system using a pair of depth-of-interaction (DOI) PET detectors based on the proposed DOI-encoding method and digital silicon photomultipliers (dSiPM). Our cost-effective DOI measurement method is based on a triangular-shaped reflector that requires only a single-layer pixelated crystal and single-ended signal readout. The DOI detector consisted of an 18  ×  18 array of unpolished LYSO crystals (1.47  ×  1.47  ×  15 mm³) wrapped with triangular-shaped reflectors. The DOI information was encoded by the depth-dependent light distribution tailored by the reflector geometry, and DOI correction was performed using four-step depth calibration data and maximum-likelihood (ML) estimation. The detector pair and the object were placed on two motorized rotation stages to demonstrate a 12-block ring PET geometry with an 11.15 cm diameter. Spatial resolution was measured, and phantom and animal imaging studies were performed to investigate imaging performance. All images were reconstructed with and without the DOI correction to examine the impact of our DOI measurement. The two dSiPM-based DOI PET detectors showed good physical performance: peak-to-valley ratios of 2.82 and 3.09, energy resolutions of 14.30% and 18.95%, and DOI resolutions of 4.28 and 4.24 mm averaged over all crystals and all depths. Sub-millimeter spatial resolution was achieved at the center of the field of view (FOV). After applying the ML-based DOI correction, a maximum improvement of 36.92% was achieved in the radial spatial resolution, and uniform resolution was observed within 5 cm of the transverse PET FOV. We successfully acquired phantom and animal images with improved spatial resolution and contrast by using the DOI measurement. The proposed DOI-encoding method was successfully demonstrated at the system level and exhibited good performance, showing its feasibility for animal PET applications with high spatial resolution and sensitivity.
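
    The four-step calibration plus ML lookup described above can be sketched as a template match: pick the calibrated depth whose recorded light distribution best explains the observation. The following is an illustrative stand-in (least squares in place of the likelihood, with made-up calibration values), not the authors' implementation:

```python
def ml_depth(observed, templates):
    """Pick the calibrated depth whose recorded light distribution best
    explains the observed one (a least-squares stand-in for ML estimation)."""
    return min(templates, key=lambda depth: sum(
        (o - t) ** 2 for o, t in zip(observed, templates[depth])))

# Hypothetical four-step calibration: depth (mm) -> normalized light pattern.
calibration = {0: [0.70, 0.30], 5: [0.55, 0.45],
               10: [0.45, 0.55], 15: [0.30, 0.70]}
print(ml_depth([0.52, 0.48], calibration))  # closest to the 5 mm template
```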

  2. Multi-image mosaic with SIFT and vision measurement for microscale structures processed by femtosecond laser

    NASA Astrophysics Data System (ADS)

    Wang, Fu-Bin; Tu, Paul; Wu, Chen; Chen, Lei; Feng, Ding

    2018-01-01

    In femtosecond laser processing, the field of view of each image frame of the microscale structure is extremely small. In order to obtain the morphology of the whole microstructure, a multi-image mosaic with partially overlapped regions is required. In the present work, the SIFT algorithm for mosaicking images was analyzed theoretically, and a stitched image of a whole groove structure was realized experimentally using multiple images of a microgroove structure processed by femtosecond laser. The object of our research was a silicon wafer with a microgroove structure ablated by femtosecond laser. First, we obtained microgrooves with a width of 380 μm at different depths. Second, based on the gray image of the microgroove, a multi-image mosaic covering slot width and slot depth was realized. To improve the image contrast between the target and the background, and taking the slot depth image as an example, a multi-image mosaic was then realized using pseudo-color enhancement. Third, in order to measure the structural size of the microgroove from the image, a streak of known width ablated by femtosecond laser at 20 mW was used as a calibration sample. Through edge detection, corner extraction, and image correction on the streak images, we calculated the pixel width of the streak image, found the measurement ratio constant Kw in the width direction, and thus obtained the proportional relationship between pixels and micrometers. Finally, circular spot marks ablated by femtosecond laser at 2 mW and 15 mW were used as test images; after verifying that the value of Kw was correct, the measurement ratio constant Kh in the height direction was obtained, and image measurement of a 380 × 117 μm microgroove was realized based on the measurement ratio constants Kw and Kh. The research and experimental results show that image mosaicking, image calibration, and geometric parameter measurements for microstructural images ablated by femtosecond laser were realized effectively.
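
    The calibration step described above reduces to a measurement ratio constant (micrometers per pixel); a minimal sketch with hypothetical pixel counts:

```python
def measurement_ratio(known_width_um, pixel_width):
    """Ratio constant K (micrometers per pixel) from a calibration streak
    of known physical width spanning a measured number of pixels."""
    return known_width_um / pixel_width

def measure_um(pixel_extent, k):
    """Convert a pixel extent to micrometers with the ratio constant."""
    return pixel_extent * k

# Hypothetical calibration: a 380 um streak spans 950 pixels in the image.
Kw = measurement_ratio(380.0, 950.0)   # um per pixel in the width direction
width_um = measure_um(950.0, Kw)       # recovers the physical width
```

    The same procedure applied to a height-calibrated sample yields Kh; together they turn pixel measurements into physical microgroove dimensions.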

  3. Bond-selective photoacoustic imaging by converting molecular vibration into acoustic waves

    PubMed Central

    Hui, Jie; Li, Rui; Phillips, Evan H.; Goergen, Craig J.; Sturek, Michael; Cheng, Ji-Xin

    2016-01-01

    The quantized vibration of chemical bonds provides a way of detecting specific molecules in a complex tissue environment. Unlike pure optical methods, for which imaging depth is limited to a few hundred micrometers by significant optical scattering, photoacoustic detection of vibrational absorption breaks through the optical diffusion limit by taking advantage of diffused photons and weak acoustic scattering. Key features of this method include both high scalability of imaging depth, from a few millimeters to a few centimeters, and chemical bond selectivity as a novel contrast mechanism for photoacoustic imaging. Its biomedical applications span detection of white matter loss and regeneration, assessment of breast tumor margins, and diagnosis of vulnerable atherosclerotic plaques. This review provides an overview of the recent advances made in vibration-based photoacoustic imaging and various biomedical applications enabled by this new technology. PMID:27069873

  4. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest depth cue of the brain, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras. Objects at different depths are projected with different horizontal displacements in the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
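
    A depth transfer curve of the kind proposed can be anchored to the standard pinhole stereo relation Z = fB/d (textbook background, not taken from this paper); a minimal sketch with hypothetical camera parameters:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def disparity_from_depth(focal_px, baseline_m, depth_m):
    """Inverse relation: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

# Sample a depth-to-disparity curve for a hypothetical rig
# (focal length 1000 px, baseline 6 cm); disparity falls off as 1/Z.
curve = [(z, disparity_from_depth(1000.0, 0.06, z)) for z in (1.0, 2.0, 4.0)]
```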

  5. Robust Curb Detection with Fusion of 3D-Lidar and Camera Data

    PubMed Central

    Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen

    2014-01-01

    Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
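
    The row-by-row linking of curb points via a Markov chain and dynamic programming can be sketched as a Viterbi-style search; the unary scores and jump penalty below are illustrative assumptions, not the paper's exact model:

```python
def optimal_curb_path(scores, jump_cost=1.0):
    """scores[r][c]: curb-point likelihood at image row r, column c.
    Returns one column per row maximizing total score minus a smoothness
    penalty on column jumps between consecutive rows (Viterbi-style DP)."""
    n_cols = len(scores[0])
    best = list(scores[0])
    back = []
    for row in scores[1:]:
        ptr, cur = [0] * n_cols, [0.0] * n_cols
        for c in range(n_cols):
            val, p = max((best[p] - jump_cost * abs(c - p), p)
                         for p in range(n_cols))
            cur[c], ptr[c] = row[c] + val, p
        back.append(ptr)
        best = cur
    c = max(range(n_cols), key=lambda i: best[i])
    path = [c]
    for ptr in reversed(back):
        c = ptr[c]
        path.append(c)
    return path[::-1]

# A strong response in column 1 of every row yields the straight path:
print(optimal_curb_path([[0, 10, 0], [0, 10, 0], [0, 10, 0]]))  # [1, 1, 1]
```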

  6. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.

  7. Laser-induced photo-thermal strain imaging

    NASA Astrophysics Data System (ADS)

    Choi, Changhoon; Ahn, Joongho; Jeon, Seungwan; Kim, Chulhong

    2018-02-01

    Vulnerable plaque is one of the leading causes of cardiovascular disease. However, conventional intravascular imaging techniques have difficulty finding vulnerable plaque due to limitations such as a lack of physiological information, limited imaging depth, and limited depth sensitivity. Therefore, new techniques are needed to help determine the vulnerability of plaque. Thermal strain imaging (TSI) is an imaging technique based on the dependence of ultrasound (US) wave propagation speed on the temperature of the medium. As temperature increases, strain occurs in the medium, and its variation depends on the type of tissue, which makes it usable for tissue differentiation. Here, we demonstrate laser-induced photo-thermal strain imaging (pTSI) to differentiate tissue using an intravascular ultrasound (IVUS) catheter and a 1210-nm continuous-wave laser for intensively heating lipids. During heating, consecutive US images were obtained from a custom-made phantom of porcine fat and gelatin. A cross correlation-based speckle-tracking algorithm was then applied to calculate the strain in the US images. In the strain images, the positive strain produced in lipids (porcine fat) was clearly differentiated from water-bearing tissue (gelatin). This result shows that laser-induced pTSI could be a new method for distinguishing lipids in plaque and can help assess plaque vulnerability.
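
    The cross correlation-based speckle tracking at the core of pTSI can be sketched in one dimension: find the lag that maximizes normalized cross-correlation between pre- and post-heating windows; the axial gradient of these shifts then gives strain. The window values below are synthetic:

```python
def cross_correlate(ref, cur, max_lag):
    """Return the integer lag (in samples) that best aligns cur with ref,
    by maximizing zero-normalized cross-correlation over candidate lags."""
    def ncc(lag):
        pairs = [(ref[i], cur[i + lag]) for i in range(len(ref))
                 if 0 <= i + lag < len(cur)]
        xs, ys = zip(*pairs)
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        num = sum((x - mx) * (y - my) for x, y in pairs)
        den = (sum((x - mx) ** 2 for x in xs)
               * sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den if den else 0.0
    return max(range(-max_lag, max_lag + 1), key=ncc)

# Synthetic speckle windows: the echo pattern has shifted by two samples,
# as a temperature-induced sound-speed change would make it appear to do.
ref = [0, 1, 4, 1, 0, 0, 0]
cur = [0, 0, 0, 1, 4, 1, 0]
print(cross_correlate(ref, cur, 3))  # 2
```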

  8. Plenoptic layer-based modeling for image based rendering.

    PubMed

    Pearson, James; Brookes, Mike; Dragotti, Pier Luigi

    2013-09-01

    Image based rendering is an attractive alternative to model based rendering for generating novel views because of its lower complexity and potential for photo-realistic results. To reduce the number of images necessary for alias-free rendering, some geometric information for the 3D scene is normally necessary. In this paper, we present a fast automatic layer-based method for synthesizing an arbitrary new view of a scene from a set of existing views. Our algorithm takes advantage of the knowledge of the typical structure of multiview data to perform occlusion-aware layer extraction. In addition, the number of depth layers used to approximate the geometry of the scene is chosen based on plenoptic sampling theory with the layers placed non-uniformly to account for the scene distribution. The rendering is achieved using a probabilistic interpolation approach and by extracting the depth layer information on a small number of key images. Numerical results demonstrate that the algorithm is fast and yet is only 0.25 dB away from the ideal performance achieved with the ground-truth knowledge of the 3D geometry of the scene of interest. This indicates that there are measurable benefits from following the predictions of plenoptic theory and that they remain true when translated into a practical system for real world data.

  9. A Featured-Based Strategy for Stereovision Matching in Sensors with Fish-Eye Lenses for Forest Environments

    PubMed Central

    Herrera, Pedro Javier; Pajares, Gonzalo; Guijarro, Maria; Ruz, José J.; Cruz, Jesús M.; Montes, Fernando

    2009-01-01

    This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two steps. In a first stage, a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes useful for matching. In the second step, the features are matched by applying four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor makes up the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion. PMID:22303134

  10. Depth-section imaging of swine kidney by spectrally encoded microscopy

    NASA Astrophysics Data System (ADS)

    Liao, Jiuling; Gao, Wanrong

    2016-10-01

    The kidneys are essential regulatory organs whose main function is to regulate the balance of electrolytes in the blood, along with maintaining pH homeostasis. The study of the microscopic structure of the kidney will help identify kidney diseases associated with specific renal histology changes. Spectrally encoded microscopy (SEM) is a new reflectance microscopic imaging technique in which a grating is used to illuminate different positions along a line on the sample with different wavelengths, reducing system size and imaging time. In this paper, an SEM device is described which is based on a superluminescent diode source and a home-built spectrometer. The lateral resolution was measured by imaging a USAF resolution target. The axial response curve was obtained by scanning a mirror through the focal plane axially. To test the feasibility of using SEM for depth-section imaging of excised swine kidney tissue, images of the samples were acquired by scanning the sample in 10 μm steps along the depth direction. Architectural features of the kidney tissue, including glomeruli and blood vessels, could be clearly visualized in the SEM images. Results from this study suggest that SEM may be useful for locating regions suspected of kidney disease or cancer.

  11. Development of collision avoidance system for useful UAV applications using image sensors with laser transmitter

    NASA Astrophysics Data System (ADS)

    Cheong, M. K.; Bahiki, M. R.; Azrad, S.

    2016-10-01

    The main goal of this study is to demonstrate an approach to collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and the control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithms. In the results, the UAV was able to hover with fairly good accuracy during both static and dynamic short-range collision avoidance. Collision avoidance performance was better with obstacles having dull surfaces than with shiny surfaces. The minimum achievable collision avoidance distance was 0.4 m. The approach is suitable for short-range collision avoidance.

  12. View generation for 3D-TV using image reconstruction from irregularly spaced samples

    NASA Astrophysics Data System (ADS)

    Vázquez, Carlos

    2007-02-01

    Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for the generation of new views for stereoscopic and multi-view displays from a small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use a forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. The fact that no approximation is made on the position of the samples implies that geometrical distortions in the final images due to approximations in sample positions are minimized. We also paid attention to the occlusion problem. Our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generation of views needed for viewing on SynthaGram™ auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
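
    The forward-mapping step above shifts each pixel by a real-valued disparity, producing irregularly spaced samples that must be re-gridded. The sketch below uses linear interpolation as a simple stand-in for the paper's bi-cubic spline space, and marks unfillable positions as newly exposed areas left for inpainting:

```python
def forward_warp(row, disparities):
    """Forward-map one image row: each pixel lands at x + d(x) with real
    precision, yielding irregularly spaced (position, value) samples."""
    return sorted((x + d, v) for x, (v, d) in enumerate(zip(row, disparities)))

def resample(samples, width):
    """Re-grid irregular samples onto integer positions by linear
    interpolation (a stand-in for the paper's smoother bi-cubic splines)."""
    out = []
    for x in range(width):
        left = max((s for s in samples if s[0] <= x), default=None)
        right = min((s for s in samples if s[0] >= x), default=None)
        if left is None or right is None:
            out.append(None)            # newly exposed area -> inpainting
        elif right[0] == left[0]:
            out.append(left[1])
        else:
            t = (x - left[0]) / (right[0] - left[0])
            out.append(left[1] * (1 - t) + right[1] * t)
    return out

# A uniform half-pixel disparity leaves the leftmost grid position uncovered.
print(resample(forward_warp([10, 20, 30, 40], [0.5] * 4), 4))
```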

  13. Tunable semiconductor laser at 1025-1095 nm range for OCT applications with an extended imaging depth

    NASA Astrophysics Data System (ADS)

    Shramenko, Mikhail V.; Chamorovskiy, Alexander; Lyu, Hong-Chou; Lobintsov, Andrei A.; Karnowski, Karol; Yakubovich, Sergei D.; Wojtkowski, Maciej

    2015-03-01

    A tunable semiconductor laser for the 1025-1095 nm spectral range was developed based on an InGaAs semiconductor optical amplifier and a narrow band-pass acousto-optic tunable filter in a fiber ring cavity. Mode-hop-free sweeping with tuning speeds of up to 10⁴ nm/s was demonstrated. The instantaneous linewidth is in the range of 0.06-0.15 nm, side-mode suppression is up to 50 dB, and the polarization extinction ratio exceeds 18 dB. The optical power in the output single-mode fiber reaches 20 mW. The laser was used in an OCT system for imaging a contact lens immersed in a 0.5% intralipid solution. The cross-section image provided an imaging depth of more than 5 mm.

  14. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images.

    PubMed

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer's own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.

  15. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images

    PubMed Central

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J.

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed. PMID:26447793

  16. Robust stereo matching with trinary cross color census and triple image-based refinements

    NASA Astrophysics Data System (ADS)

    Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr

    2017-12-01

    For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching which can precisely estimate the depth map from two distanced cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which helps to achieve an accurate disparity raw matching cost with low computational cost. A two-pass cost aggregation (TPCA) is formed to compute the aggregation cost; then the disparity map can be obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further enhance accuracy, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computational cost.
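
    The paper's TCC census operates on cross-shaped color neighbourhoods; the sketch below is a simplified single-channel trinary census over a square window, with an assumed similarity threshold, to illustrate the trinary encoding and the Hamming-distance raw matching cost:

```python
def trinary_census(img, r, c, window=1, eps=2):
    """Simplified single-channel trinary census: each neighbour is encoded
    as 0 (darker than center by more than eps), 1 (similar), 2 (brighter)."""
    center = img[r][c]
    code = []
    for dr in range(-window, window + 1):
        for dc in range(-window, window + 1):
            if dr == 0 and dc == 0:
                continue
            p = img[r + dr][c + dc]
            if p < center - eps:
                code.append(0)
            elif p > center + eps:
                code.append(2)
            else:
                code.append(1)
    return code

def census_cost(code_a, code_b):
    """Raw matching cost: Hamming distance between two census codes."""
    return sum(a != b for a, b in zip(code_a, code_b))

# One bright outlier among similar neighbours encodes as a single 2:
img = [[10, 10, 10], [10, 12, 50], [10, 10, 10]]
print(trinary_census(img, 1, 1))  # [1, 1, 1, 1, 2, 1, 1, 1]
```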

  17. Surface inspection system for industrial components based on shape from shading minimization approach

    NASA Astrophysics Data System (ADS)

    Kotan, Muhammed; Öz, Cemil

    2017-12-01

    An inspection system is proposed that uses estimated three-dimensional (3-D) surface information to detect and classify faults, increasing quality control of frequently used industrial components. Shape from shading (SFS) is one of the basic and classic 3-D shape recovery problems in computer vision. In our application, we developed a system using the Frankot-Chellappa SFS method, which is based on the minimization of a selected basis function. First, a specialized image acquisition system captured images of the component. To eliminate noise, a wavelet transform was applied to the captured images. Then, the estimated gradients were used to obtain depth and surface profiles. The depth information was used to determine and classify surface defects. A comparison with some linearization-based SFS algorithms is also discussed. The developed system was applied to real products, and the results indicated that using SFS approaches is useful: various types of defects can easily be detected in a short period of time.
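
    The Frankot-Chellappa minimization integrates an estimated (possibly non-integrable) gradient field into a surface by projecting it onto the integrable subspace in the Fourier domain. A minimal sketch, assuming periodic boundaries and gradients already estimated from shading:

```python
import numpy as np

def frankot_chellappa(p, q):
    """Integrate gradient estimates p = dz/dx, q = dz/dy into a surface z
    by projection onto the integrable subspace in the Fourier domain."""
    rows, cols = p.shape
    u, v = np.meshgrid(2 * np.pi * np.fft.fftfreq(cols),
                       2 * np.pi * np.fft.fftfreq(rows))
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                 # avoid division by zero at DC
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                     # the mean height is unrecoverable
    return np.real(np.fft.ifft2(Z))
```

    On a pure harmonic surface with analytically known gradients, the projection recovers the surface exactly up to its (unrecoverable) mean height.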

  18. Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers.

    PubMed

    Kim, Yunhee; Choi, Heejin; Kim, Joohwan; Cho, Seong-Woo; Kim, Youngmin; Park, Gilbae; Lee, Byoungho

    2007-06-20

    A depth-enhanced three-dimensional integral imaging system with electrically variable image planes is proposed. For implementing the variable image planes, polymer-dispersed liquid-crystal (PDLC) films and a projector are adopted as a new display system in the integral imaging. Since the transparencies of PDLC films are electrically controllable, we can make each film diffuse the projected light successively with a different depth from the lens array. As a result, the proposed method enables control of the location of image planes electrically and enhances the depth. The principle of the proposed method is described, and experimental results are also presented.

  19. Analysis of flood inundation in ungauged basins based on multi-source remote sensing data.

    PubMed

    Gao, Wei; Shen, Qiu; Zhou, Yuehua; Li, Xin

    2018-02-09

    Floods are among the most expensive natural hazards experienced in many places of the world and can result in heavy losses of life and economic damage. The objective of this study is to analyze flood inundation in ungauged basins by performing near-real-time detection of flood extent and depth based on multi-source remote sensing data. Via spatial distribution analysis of flood extent and depth in a time series, the inundation conditions and the characteristics of the flood disaster can be revealed. The results show that multi-source remote sensing data can make up for the lack of hydrological data in ungauged basins, which is helpful for reconstructing hydrological sequences; the combination of MODIS (moderate-resolution imaging spectroradiometer) surface reflectance products and the DFO (Dartmouth Flood Observatory) flood database can achieve macro-dynamic monitoring of flood inundation in ungauged basins, and the differencing of high-resolution optical and microwave images before and after floods can then be used to calculate the flood extent and reflect spatial changes in inundation; the monitoring algorithm for flood depth, combining RS and GIS, is simple and efficient and can quickly calculate depth from a known flood extent obtained from remote sensing images in ungauged basins. These results can provide effective help for the disaster relief work performed by government departments.
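
    A common way to realize an RS-GIS depth calculation of this kind is to subtract ground elevation (from a DEM) from an estimated water surface level inside the remotely sensed flood extent. A minimal sketch, where the single uniform water level is a simplifying assumption:

```python
def flood_depth(dem, water_level, extent_mask):
    """Depth grid: water surface elevation minus ground elevation, clipped
    at zero, computed only inside the remotely sensed flood extent."""
    return [[max(water_level - z, 0.0) if wet else 0.0
             for z, wet in zip(dem_row, mask_row)]
            for dem_row, mask_row in zip(dem, extent_mask)]

# Hypothetical 2x2 DEM (m) and flood-extent mask from image differencing.
dem = [[10.0, 12.0], [9.5, 13.0]]
mask = [[True, True], [True, False]]
print(flood_depth(dem, 11.0, mask))  # [[1.0, 0.0], [1.5, 0.0]]
```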

  20. Estimating needle tip deflection in biological tissue from a single transverse ultrasound image: application to brachytherapy.

    PubMed

    Rossa, Carlos; Sloboda, Ron; Usmani, Nawaid; Tavakoli, Mahdi

    2016-07-01

    This paper proposes a method to predict the deflection of a flexible needle inserted into soft tissue based on the observation of deflection at a single point along the needle shaft. We model the needle-tissue system as a discretized structure composed of several virtual, weightless, rigid links connected by virtual helical springs, whose stiffness coefficient is found using a pattern search algorithm that only requires the force applied at the needle tip during insertion and the needle deflection measured at an arbitrary insertion depth. Needle tip deflections can then be predicted for different insertion depths. Verification of the proposed method in synthetic and biological tissue shows a deflection estimation error of about 2 mm for images acquired at 35% or more of the maximum insertion depth, decreasing to 1 mm for images acquired closer to the final insertion depth. We also demonstrate the utility of the model for prostate brachytherapy, where in vivo needle deflection measurements obtained during early stages of insertion are used to predict the needle deflection further along in the insertion process. The method can predict needle deflection based on the observation of deflection at a single point. The ultrasound probe can be maintained at the same position during insertion of the needle, which avoids complications from tissue deformation caused by motion of the ultrasound probe.
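
    The rigid-link/helical-spring discretization can be sketched for a transverse tip force; uniform spring stiffness and the small-angle approximation are simplifying assumptions here (the paper instead fits the stiffness via pattern search from a measured deflection):

```python
def tip_deflection(force, n_links, link_len, k_spring):
    """Needle as n rigid links joined by identical virtual torsional springs.
    A transverse tip force loads joint i with moment force * (arm to tip);
    each joint rotates by moment / k, and rotations accumulate tip-ward."""
    theta = 0.0   # cumulative slope (small-angle approximation)
    y = 0.0       # transverse deflection at the tip
    for i in range(n_links):
        arm = (n_links - i) * link_len       # lever arm from joint i to tip
        theta += force * arm / k_spring      # rotation of joint i
        y += link_len * theta                # link advances at current slope
    return y
```

    With k_spring set to EI / link_len, this discretization approaches the classical cantilever tip deflection F·L³/(3EI) as the number of links grows.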

  1. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.

  2. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    PubMed Central

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703

  3. Depth-Resolved Multispectral Sub-Surface Imaging Using Multifunctional Upconversion Phosphors with Paramagnetic Properties

    PubMed Central

    Ovanesyan, Zaven; Mimun, L. Christopher; Kumar, Gangadharan Ajith; Yust, Brian G.; Dannangoda, Chamath; Martirosyan, Karen S.; Sardar, Dhiraj K.

    2015-01-01

    Molecular imaging is a very promising technique for surgical guidance, but it requires advances in the properties of imaging agents and in the methods used to retrieve data from measured multispectral images. In this article, an upconversion material is introduced for subsurface near-infrared imaging and for recovering the depth of the material embedded below biological tissue. The results confirm a significant correlation between the analytical depth estimate of the material under the tissue and the measured ratio of light emitted by the material at two different wavelengths. Experiments with biological tissue samples demonstrate depth-resolved imaging using the rare-earth-doped multifunctional phosphors. In vitro tests reveal no significant toxicity, whereas magnetic measurements of the phosphors show that the particles are suitable as magnetic resonance imaging agents. Confocal imaging of fibroblast cells with these phosphors reveals their potential for in vivo imaging. The depth-resolved imaging technique with such phosphors has broad implications for real-time intraoperative surgical guidance. PMID:26322519
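    The depth-from-emission-ratio relation mentioned above can be sketched under the simplest possible model: single-exponential Beer-Lambert attenuation with known wavelength-dependent coefficients. This is a hedged illustration, not the paper's calibration; `mu1`, `mu2`, and `ratio_at_surface` are assumed inputs.

```python
import math

def depth_from_ratio(ratio, ratio_at_surface, mu1, mu2):
    """Depth below the tissue surface from a two-wavelength emission ratio,
    assuming single-exponential Beer-Lambert attenuation:
        I(lambda, d) = I0(lambda) * exp(-mu(lambda) * d)
    so that ratio(d) = ratio_at_surface * exp(-(mu1 - mu2) * d),
    and therefore d = ln(ratio_at_surface / ratio) / (mu1 - mu2)."""
    return math.log(ratio_at_surface / ratio) / (mu1 - mu2)
```

    The key design point is that the ratio cancels the unknown absolute emission strength, leaving only the differential attenuation between the two wavelengths.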

  4. Extended depth of field imaging for high speed object analysis

    NASA Technical Reports Server (NTRS)

    Frost, Keith (Inventor); Ortyn, William (Inventor); Basiji, David (Inventor); Bauer, Richard (Inventor); Liang, Luchuan (Inventor); Hall, Brian (Inventor); Perry, David (Inventor)

    2011-01-01

    A high speed, high-resolution flow imaging system is modified to achieve extended depth of field imaging. An optical distortion element is introduced into the flow imaging system. Light from an object, such as a cell, is distorted by the distortion element, such that a point spread function (PSF) of the imaging system is invariant across an extended depth of field. The distorted light is spectrally dispersed, and the dispersed light is used to simultaneously generate a plurality of images. The images are detected, and image processing is used to enhance the detected images by compensating for the distortion, to achieve extended depth of field images of the object. The post image processing preferably involves de-convolution, and requires knowledge of the PSF of the imaging system, as modified by the optical distortion element.
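    The post-processing step described above, deconvolution with a known PSF, can be sketched with a standard frequency-domain Wiener filter. This is an illustrative stand-in, not the patented system's algorithm; the constant noise-to-signal ratio `nsr` and the requirement that the PSF be image-sized and centred are assumptions of this sketch.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-3):
    """Restore an image blurred by a known, shift-invariant PSF using
    frequency-domain Wiener deconvolution:
        F_hat = conj(H) / (|H|^2 + nsr) * G.
    `psf` must have the same shape as `image` and be centred; `nsr` is an
    assumed constant noise-to-signal ratio that regularises small |H|."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # shift PSF centre to the origin
    G = np.fft.fft2(image)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))
```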

  5. A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light

    PubMed Central

    Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning

    2017-01-01

    Since the release of the Microsoft Kinect, depth information has been used in many fields because of its low cost and easy availability. However, the Kinect and Kinect-like RGB-D sensors show limited performance in applications that place high demands on the accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect, and two infrared cameras located on either side of the laser projector, to obtain higher-spatial-resolution depth information. We apply a block-matching algorithm to estimate the disparity. To improve the spatial resolution we reduce the size of the matching blocks, but smaller matching blocks yield lower matching precision. To address this problem, we combine two matching modes (binocular and monocular) in the disparity estimation process. Experimental results show that our method obtains higher-spatial-resolution depth than the Kinect without loss of range-image quality. Furthermore, our algorithm is implemented on a low-cost hardware platform, and the system supports a resolution of 1280 × 960 at up to 60 frames per second for depth image sequences. PMID:28397759
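    The block-matching step described above can be sketched with a plain sum-of-absolute-differences (SAD) search along epipolar rows. This is a minimal single-mode illustration, not the paper's combined binocular/monocular scheme; the block size, search range, and function name are assumptions.

```python
import numpy as np

def sad_disparity(left, right, block=3, max_disp=8):
    """Integer disparity by sum-of-absolute-differences block matching.
    For each block in the left image, search the same row of the right image
    over shifts d in [0, max_disp] and keep the lowest-cost match, under the
    rectified convention left[x] <-> right[x - d]."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            best, best_d = None, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1]
                cost = np.abs(patch - cand).sum()
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

    The resolution/precision trade-off the abstract mentions is visible here: shrinking `block` localises the estimate but makes the SAD cost noisier and more ambiguous.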

  6. In vivo photoacoustic imaging of uterine cervical lesion and its image processing based on light propagation in biological medium

    NASA Astrophysics Data System (ADS)

    Okawa, Shinpei; Sei, Kiguna; Hirasawa, Takeshi; Irisawa, Kaku; Hirota, Kazuhiro; Wada, Takatsugu; Kushibiki, Toshihiro; Furuya, Kenichi; Ishihara, Miya

    2017-03-01

    For diagnosis of cervical cancer, screening by colposcope and subsequent biopsy are usually carried out. The colposcope, a mesoscope, is used to examine the surface of the cervix and to find precancerous lesions grossly. However, the accuracy of colposcopy depends on the skill of the examiner and is consequently inconsistent; colposcopy also lacks depth information. It is known that microvessel density and blood flow in cervical lesions increase in association with angiogenesis. Therefore, photoacoustic imaging (PAI) to detect angiogenesis in cervical lesions has been studied; PAI can detect cervical lesions sensitively and provide depth information. The authors have been investigating the efficacy of PAI in the diagnosis of cervical lesions and cancer using a combined PAI and ultrasonography system with a transvaginal probe developed by Fujifilm Corporation. For quantitative diagnosis with PAI, light propagation in the biological medium must be taken into account. In this study, reconstruction of the absorption coefficient from the PA image of the cervix was attempted using a finite-element simulation of light propagation. Numerical simulations, phantom experiments and in vivo imaging were carried out.

  7. In vivo volumetric depth-resolved vasculature imaging of human limbus and sclera with 1 μm swept source phase-variance optical coherence angiography

    NASA Astrophysics Data System (ADS)

    Poddar, Raju; Zawadzki, Robert J.; Cortés, Dennis E.; Mannis, Mark J.; Werner, John S.

    2015-06-01

    We present in vivo volumetric depth-resolved vasculature images of the anterior segment of the human eye acquired with phase-variance based motion contrast using a high-speed (100 kHz, 10^5 A-scans/s) swept source optical coherence tomography system (SSOCT). High phase stability SSOCT imaging was achieved by using a computationally efficient phase stabilization approach. The human corneo-scleral junction and sclera were imaged with swept source phase-variance optical coherence angiography and compared with slit lamp images from the same eyes of normal subjects. Different features of the rich vascular system in the conjunctiva and episclera were visualized and described. This system can be used as a potential tool for ophthalmological research to determine changes in the outflow system, which may be helpful for identification of abnormalities that lead to glaucoma.

  8. The Morphology of Craters on Mercury: Results from MESSENGER Flybys

    NASA Technical Reports Server (NTRS)

    Barnouin, Oliver S.; Zuber, Maria T.; Smith, David E.; Neumann, Gregory A.; Herrick, Robert R.; Chappelow, John E.; Murchie, Scott L.; Prockter, Louise M.

    2012-01-01

    Topographic data from the Mercury Laser Altimeter (MLA) and the Mercury Dual Imaging System (MDIS) aboard the MESSENGER spacecraft were used to investigate the relationship between depth and diameter for impact craters on Mercury. Results using data from the MESSENGER flybys of the innermost planet indicate that most of the craters measured with MLA are shallower than those previously measured using Mariner 10 images. MDIS images of these same MLA-measured craters show that they have been modified. Shadow measurement techniques, which were found to be accurate relative to the MLA results, indicate that both small bowl-shaped and large complex craters that are fresh possess depth-to-diameter ratios in good agreement with those measured from Mariner 10 images. The preliminary data also show that modified craters are shallower than fresh ones, and might provide quantitative estimates of crater in-filling by subsequent volcanic or impact processes. The diameter that defines the transition from simple to complex craters on Mercury based on MESSENGER data is consistent with that reported from Mariner 10 data.
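    In its simplest form, the shadow measurement technique referred to above reduces to trigonometry: with a distant sun and a flat crater floor, the rim shadow length and the solar elevation give the rim-to-floor depth. A minimal sketch under those assumptions (the actual MESSENGER procedure is more involved):

```python
import math

def crater_depth_from_shadow(shadow_length, sun_elevation_deg):
    """Depth of a crater floor below the rim from the length of the shadow
    cast by the rim, assuming a flat floor and a distant sun:
        depth = shadow_length * tan(sun_elevation)."""
    return shadow_length * math.tan(math.radians(sun_elevation_deg))

def depth_to_diameter(depth, diameter):
    """The d/D ratio used to compare fresh and modified craters."""
    return depth / diameter
```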

  9. Fusion of light-field and photogrammetric surface form data

    NASA Astrophysics Data System (ADS)

    Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard K.

    2017-08-01

    Photogrammetry based systems are able to produce 3D reconstructions of an object given a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information, as well as the photogrammetric point cloud. Compared to a traditional camera that only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map can be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to improve the measurement uncertainty of a millimetre scale 3D object, compared to that from the individual systems. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and triangulation of corresponding features between images. Using both measurements, data fusion methods were implemented in order to provide a single point cloud with reduced measurement uncertainty.

  10. Automatic detection of artifacts in converted S3D video

    NASA Astrophysics Data System (ADS)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
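    Edge-sharpness correspondence between views can be illustrated with a toy one-dimensional measure: take the maximum gradient across an edge profile in each view and flag a mismatch when the ratio is large. This is a simplified stand-in for the paper's detector; the profile-based formulation and the threshold value are assumptions.

```python
import numpy as np

def edge_sharpness(profile):
    """Sharpness of a 1-D intensity profile crossing an edge, taken as the
    maximum absolute finite-difference gradient (sharp step -> large value)."""
    return np.max(np.abs(np.diff(profile)))

def sharpness_mismatch(profile_left, profile_right, ratio_threshold=1.5):
    """Flag binocular edge-sharpness mismatch when the same edge is much
    sharper in one view than in the other (a source of binocular rivalry)."""
    s_l = edge_sharpness(profile_left)
    s_r = edge_sharpness(profile_right)
    ratio = max(s_l, s_r) / max(min(s_l, s_r), 1e-12)
    return ratio > ratio_threshold
```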

  11. Adaptive Neuro-Fuzzy Inference System (ANFIS)-Based Models for Predicting the Weld Bead Width and Depth of Penetration from the Infrared Thermal Image of the Weld Pool

    NASA Astrophysics Data System (ADS)

    Subashini, L.; Vasudevan, M.

    2012-02-01

    Type 316 LN stainless steel is the major structural material used in the construction of nuclear reactors. Activated flux tungsten inert gas (A-TIG) welding has been developed to increase the depth of penetration because the depth of penetration achievable in single-pass TIG welding is limited. Real-time monitoring and control of weld processes is gaining importance because of the requirement of remoter welding process technologies. Hence, it is essential to develop computational methodologies based on an adaptive neuro fuzzy inference system (ANFIS) or artificial neural network (ANN) for predicting and controlling the depth of penetration and weld bead width during A-TIG welding of type 316 LN stainless steel. In the current work, A-TIG welding experiments have been carried out on 6-mm-thick plates of 316 LN stainless steel by varying the welding current. During welding, infrared (IR) thermal images of the weld pool have been acquired in real time, and the features have been extracted from the IR thermal images of the weld pool. The welding current values, along with the extracted features such as length, width of the hot spot, thermal area determined from the Gaussian fit, and thermal bead width computed from the first derivative curve were used as inputs, whereas the measured depth of penetration and weld bead width were used as output of the respective models. Accurate ANFIS models have been developed for predicting the depth of penetration and the weld bead width during TIG welding of 6-mm-thick 316 LN stainless steel plates. A good correlation between the measured and predicted values of weld bead width and depth of penetration were observed in the developed models. The performance of the ANFIS models are compared with that of the ANN models.

  12. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model.

    PubMed

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local detail, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.
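    The continuous pairwise CRF mentioned above has a convenient special case worth sketching: with quadratic unary and pairwise terms, minimising the energy reduces to solving a sparse linear system. The 1-D chain below illustrates that structure only; it is not the paper's model, which couples the CRF to CNN features and semantic weights.

```python
import numpy as np

def refine_depth(z, weights, lam=1.0):
    """Minimise the quadratic CRF energy over a 1-D chain of pixels:
        E(d) = sum_i (d_i - z_i)^2 + lam * sum_i w_i (d_i - d_{i+1})^2
    where z is the unary (e.g. CNN-predicted) depth and w_i weights the edge
    between neighbours i and i+1. Setting dE/dd = 0 gives the linear system
    (I + lam * Lw) d = z with Lw the weighted graph Laplacian."""
    n = len(z)
    Lw = np.zeros((n, n))
    for i, w in enumerate(weights):       # edge between pixels i and i+1
        Lw[i, i] += w
        Lw[i + 1, i + 1] += w
        Lw[i, i + 1] -= w
        Lw[i + 1, i] -= w
    return np.linalg.solve(np.eye(n) + lam * Lw, np.asarray(z, dtype=float))
```

    As `lam` grows, the solution tends toward a constant (the mean of `z`), which is why such pairwise terms act as an edge-aware smoother when the weights are lowered across depth discontinuities.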

  13. Advanced Topics in Space Situational Awareness

    DTIC Science & Technology

    2007-11-07

    super-resolution." Such optical superresolution is characteristic of many model-based image processing algorithms, and reflects the incorporation of ... Sampling Theorem," J. Opt. Soc. Am. A, vol. 24, 311-325 (2007). [39] S. Prasad, "Digital and Optical Superresolution of Low-Resolution Image Sequences," Un... wavefront coding for the specific application of extending image depth well beyond what is possible in a standard imaging system. The problem of optical ...

  14. Theoretical performance model for single image depth from defocus.

    PubMed

    Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme

    2014-12-01

    In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.
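    The abstract does not reproduce the bound itself; for orientation, the generic single-parameter Cramér-Rao bound under additive white Gaussian noise takes the form below, where m(d) is the image the camera model predicts for depth d. This is the standard textbook form, not the paper's exact SIDFD expression, whose parameterization in terms of focal length, aperture, and in-focus plane position will differ.

```latex
% Generic single-parameter Cram\'er--Rao bound, additive white Gaussian
% noise of variance \sigma^2, deterministic mean model m(d):
\operatorname{var}\!\left(\hat{d}\right) \;\ge\; \frac{1}{I(d)},
\qquad
I(d) \;=\; \frac{1}{\sigma^{2}}
\left\lVert \frac{\partial m(d)}{\partial d} \right\rVert^{2}.
```

    Intuitively, depth is easiest to estimate where the blurred image changes fastest with depth, which is why the bound degrades near the in-focus plane, where the defocus PSF is insensitive to small depth changes.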

  15. Dynamic Transmit-Receive Beamforming by Spatial Matched Filtering for Ultrasound Imaging with Plane Wave Transmission.

    PubMed

    Chen, Yuling; Lou, Yang; Yen, Jesse

    2017-07-01

    During conventional ultrasound imaging, the need for multiple transmissions per image and the time of flight for a desired imaging depth limit the frame rate of the system. Using a single plane wave pulse during each transmission followed by parallel receive processing allows for high frame rate imaging. However, image quality is degraded because of the lack of transmit focusing. Beamforming by spatial matched filtering (SMF) is a promising method which focuses ultrasonic energy using spatial filters constructed from the transmit-receive impulse response of the system. Studies by other researchers have shown that SMF beamforming can provide dynamic transmit-receive focusing throughout the field of view. In this paper, we apply SMF beamforming to plane wave transmissions (PWTs) to achieve both dynamic transmit-receive focusing at all imaging depths and a high imaging frame rate (>5000 frames per second). We demonstrate mathematically, through analysis based on the narrowband Rayleigh-Sommerfeld diffraction theory, that the combined method (PWT + SMF) achieves two-way focusing. Moreover, the broadband performance of PWT + SMF was quantified in terms of lateral resolution and contrast from both computer simulations and experimental data. Results were compared between SMF beamforming and conventional delay-and-sum (DAS) beamforming in both simulations and experiments. At an imaging depth of 40 mm, simulation results showed a 29% lateral resolution improvement and a 160% contrast improvement with PWT + SMF. These improvements were 17% and 48% for experimental data with noise.
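    The matched-filtering principle behind SMF can be illustrated in one dimension: correlating a received trace with the expected transmit-receive response concentrates the echo energy at its arrival sample. The paper's filters are two-dimensional and depth-dependent; the 1-D time-domain version below is only an assumed analogue of that principle.

```python
import numpy as np

def matched_filter_focus(rx, expected):
    """Focus one receive channel by cross-correlating the received trace
    with the expected transmit-receive response. For an echo that is a
    delayed copy of `expected`, the correlation peaks at the echo's
    arrival, concentrating its energy at one sample."""
    return np.correlate(rx, expected, mode="full")
```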

  16. 3D wide field-of-view Gabor-domain optical coherence microscopy advancing real-time in-vivo imaging and metrology

    NASA Astrophysics Data System (ADS)

    Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P.

    2017-02-01

    Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. By integrating a custom liquid lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus, enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that obtained a 12-fold reduction in volume and 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging - noninvasive real-time imaging with histologic resolution - GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 x 1 x 0.6 mm^3, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.

  17. Deep Tissue Fluorescent Imaging in Scattering Specimens Using Confocal Microscopy

    PubMed Central

    Clendenon, Sherry G.; Young, Pamela A.; Ferkowicz, Michael; Phillips, Carrie; Dunn, Kenneth W.

    2015-01-01

    In scattering specimens, multiphoton excitation and nondescanned detection improve imaging depth by a factor of 2 or more over confocal microscopy; however, imaging depth is still limited by scattering. We applied the concept of clearing to deep tissue imaging of highly scattering specimens. Clearing is a remarkably effective approach to improving image quality at depth using either confocal or multiphoton microscopy. Tissue clearing appears to eliminate the need for multiphoton excitation for deep tissue imaging. PMID:21729357

  18. Super-nonlinear fluorescence microscopy for high-contrast deep tissue imaging

    NASA Astrophysics Data System (ADS)

    Wei, Lu; Zhu, Xinxin; Chen, Zhixing; Min, Wei

    2014-02-01

    Two-photon excited fluorescence microscopy (TPFM) offers the highest penetration depth with subcellular resolution in light microscopy, due to its unique advantage of nonlinear excitation. However, a fundamental imaging-depth limit, accompanied by a vanishing signal-to-background contrast, still exists for TPFM when imaging deep into scattering samples. Formally, the focusing depth, at which the in-focus signal and the out-of-focus background are equal to each other, is defined as the fundamental imaging-depth limit. To go beyond this imaging-depth limit of TPFM, we report a new class of super-nonlinear fluorescence microscopy for high-contrast deep tissue imaging, including multiphoton activation and imaging (MPAI) harnessing novel photo-activatable fluorophores, stimulated emission reduced fluorescence (SERF) microscopy by adding a weak laser beam for stimulated emission, and two-photon induced focal saturation imaging with preferential depletion of ground-state fluorophores at focus. The resulting image contrasts all exhibit a higher-order (third- or fourth- order) nonlinear signal dependence on laser intensity than that in the standard TPFM. Both the physical principles and the imaging demonstrations will be provided for each super-nonlinear microscopy. In all these techniques, the created super-nonlinearity significantly enhances the imaging contrast and concurrently extends the imaging depth-limit of TPFM. Conceptually different from conventional multiphoton processes mediated by virtual states, our strategy constitutes a new class of fluorescence microscopy where high-order nonlinearity is mediated by real population transfer.

  19. Korean coastal water depth/sediment and land cover mapping (1:25,000) by computer analysis of LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Park, K. Y.; Miller, L. D.

    1978-01-01

    Computer analysis was applied to single-date LANDSAT MSS imagery of a sample coastal area near Seoul, Korea, equivalent to a 1:50,000 topographic map. Supervised image processing yielded a test classification map from this sample image containing 12 classes: 5 water depth/sediment classes, 2 shoreline/tidal classes, and 5 coastal land cover classes, at a scale of 1:25,000 and with a training set accuracy of 76%. Unsupervised image classification was applied to a subportion of the site and produced classification maps with spatially comparable results. The results of this test indicated that it is feasible to produce such quantitative maps for detailed study of dynamic coastal processes, given a LANDSAT image data base at sufficiently frequent time intervals.

  20. Near-Infrared II Fluorescence for Imaging Hindlimb Vessel Regeneration with Dynamic Tissue Perfusion Measurement

    PubMed Central

    Hong, Guosong; Lee, Jerry C.; Jha, Arshi; Diao, Shuo; Nakayama, Karina H.; Hou, Luqia; Doyle, Timothy C.; Robinson, Joshua T.; Antaris, Alexander L.; Dai, Hongjie; Cooke, John P.; Huang, Ngan F.

    2014-01-01

    Background Real-time vascular imaging that provides both anatomic and hemodynamic information could greatly facilitate the diagnosis of vascular diseases and provide accurate assessment of therapeutic effects. Here we have developed a novel fluorescence-based all-optical method, named near-infrared II (NIR-II) fluorescence imaging, to image murine hindlimb vasculature and blood flow in an experimental model of peripheral arterial disease, by exploiting fluorescence in the NIR-II region (1000–1400 nm) of photon wavelengths. Methods and Results Owing to the reduced photon scattering of NIR-II fluorescence compared to traditional NIR fluorescence imaging and thus much deeper penetration depth into the body, we demonstrated that the mouse hindlimb vasculature could be imaged with higher spatial resolution than in vivo microCT. Furthermore, imaging over 26 days revealed a significant increase in hindlimb microvascular density in response to experimentally induced ischemia within the first 8 days of the surgery (P < 0.005), which was confirmed by histological analysis of microvascular density. Moreover, the tissue perfusion in the ischemic hindlimb could be quantitatively measured by the dynamic NIR-II method, revealing the temporal kinetics of blood flow recovery that resembled microbead-based blood flowmetry and laser Doppler blood spectroscopy. Conclusions The penetration depth of millimeters, high spatial resolution and fast acquisition rate of NIR-II imaging makes it a useful imaging tool for murine models of vascular disease. PMID:24657826

  1. Near-infrared II fluorescence for imaging hindlimb vessel regeneration with dynamic tissue perfusion measurement.

    PubMed

    Hong, Guosong; Lee, Jerry C; Jha, Arshi; Diao, Shuo; Nakayama, Karina H; Hou, Luqia; Doyle, Timothy C; Robinson, Joshua T; Antaris, Alexander L; Dai, Hongjie; Cooke, John P; Huang, Ngan F

    2014-05-01

    Real-time vascular imaging that provides both anatomic and hemodynamic information could greatly facilitate the diagnosis of vascular diseases and provide accurate assessment of therapeutic effects. Here, we have developed a novel fluorescence-based all-optical method, named near-infrared II (NIR-II) fluorescence imaging, to image murine hindlimb vasculature and blood flow in an experimental model of peripheral arterial disease, by exploiting fluorescence in the NIR-II region (1000-1400 nm) of photon wavelengths. Because of the reduced photon scattering of NIR-II fluorescence compared with traditional NIR fluorescence imaging and thus much deeper penetration depth into the body, we demonstrated that the mouse hindlimb vasculature could be imaged with higher spatial resolution than in vivo microscopic computed tomography. Furthermore, imaging during 26 days revealed a significant increase in hindlimb microvascular density in response to experimentally induced ischemia within the first 8 days of the surgery (P<0.005), which was confirmed by histological analysis of microvascular density. Moreover, the tissue perfusion in the ischemic hindlimb could be quantitatively measured by the dynamic NIR-II method, revealing the temporal kinetics of blood flow recovery that resembled microbead-based blood flowmetry and laser Doppler blood spectroscopy. The penetration depth of millimeters, high spatial resolution, and fast acquisition rate of NIR-II imaging make it a useful imaging tool for murine models of vascular disease. © 2014 American Heart Association, Inc.

  2. Portal imaging with flat-panel detector and CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Wai; Dallas, William J.

    1997-07-01

    This paper provides a comparison of imaging parameters of two portal imaging systems at 6 MV: a flat panel detector and a CCD-camera based portal imaging system. Measurements were made of the signal and noise, and consequently of signal-to-noise per pixel, as a function of the exposure. Both systems have a linear response with respect to exposure, and the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio which is higher than that observed with the CCD-camera based portal imaging system. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The paper also presents data on the screen's photon gain (the number of light photons per interacting x-ray photon), as well as on the magnitude of the Swank noise (which describes fluctuations in the screen's photon gain). Images of a Las Vegas-type aluminum contrast detail phantom, located at the isocenter, were generated at an exposure of 1 MU. The CCD-camera based system permits detection of aluminum holes of 0.01194 cm diameter and 0.228 mm depth, while the flat-panel detector permits detection of aluminum holes of 0.01194 cm diameter and 0.1626 mm depth, indicating a better signal-to-noise ratio. Rank order filtering was applied to the raw images from the CCD-based system in order to remove the direct hits: camera responses to scattered x-ray photons which interact directly with the CCD of the CCD-camera and generate 'salt and pepper' type noise, which interferes severely with attempts to determine accurate estimates of the image noise.
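    Rank order filtering of the kind applied to the CCD images above is commonly realised as a median filter, which removes isolated outliers such as direct hits while preserving edges. The paper does not state which rank or window was used, so the 3x3 median below is an assumed minimal sketch.

```python
import numpy as np

def median_filter3(img):
    """3x3 rank-order (median) filter: replace each interior pixel with the
    median of its 3x3 neighbourhood. An isolated 'direct hit' outlier is
    outvoted by its eight neighbours and removed; borders are left as-is."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out
```

    Unlike a linear smoothing filter, the median does not blur the outlier into its neighbours, which is why rank-order filtering is the standard choice for salt-and-pepper noise.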

  3. Optofluidic bioimaging platform for quantitative phase imaging of lab on a chip devices using digital holographic microscopy.

    PubMed

    Pandiyan, Vimal Prabhu; John, Renu

    2016-01-20

    We propose a versatile 3D phase-imaging microscope platform for real-time imaging of optomicrofluidic devices based on the principle of digital holographic microscopy (DHM). Lab-on-chip microfluidic devices fabricated on transparent polydimethylsiloxane (PDMS) and glass substrates have attained wide popularity in biological sensing applications. However, monitoring, visualization, and characterization of microfluidic devices, microfluidic flows, and the biochemical kinetics happening in these devices are difficult due to the lack of proper techniques for real-time imaging and analysis. Traditional bright-field microscopic techniques fail in these imaging applications, as the microfluidic channels and the fluids carrying biological samples are transparent and not visible in bright light. Phase-based microscopy techniques, which can image the phase of the microfluidic channel and changes in refractive index due to the fluids and biological samples present in the channel, are ideal for imaging fluid flow dynamics in a microfluidic channel at high resolution. This paper demonstrates three-dimensional imaging of a microfluidic device with nanometric depth precision and high SNR. We demonstrate imaging of microelectrodes of nanometric thickness patterned on a glass substrate and of the microfluidic channel. Three-dimensional imaging of a transparent PDMS optomicrofluidic channel, fluid flow, and live yeast cell flow in this channel has been demonstrated using DHM. We also quantify the average velocity of fluid flow through the channel. In comparison with a conventional bright-field microscope, the images presented in this work carry much more information about the biological system under observation, owing to their 3D depth content. The results demonstrated in this paper prove the high potential of DHM for imaging optofluidic devices; for detecting pathogens, cells, and bioanalytes on lab-on-chip devices; and for studying microfluidic dynamics in real time based on phase changes.

  4. High-spatial-resolution sub-surface imaging using a laser-based acoustic microscopy technique.

    PubMed

    Balogun, Oluwaseyi; Cole, Garrett D; Huber, Robert; Chinn, Diane; Murray, Todd W; Spicer, James B

    2011-01-01

    Scanning acoustic microscopy techniques operating at frequencies in the gigahertz range are suitable for the elastic characterization and interior imaging of solid media with micrometer-scale spatial resolution. Acoustic wave propagation at these frequencies is strongly limited by energy losses, particularly from attenuation in the coupling media used to transmit ultrasound to a specimen, leading to a decrease in the depth in a specimen that can be interrogated. In this work, a laser-based acoustic microscopy technique is presented that uses a pulsed laser source for the generation of broadband acoustic waves and an optical interferometer for detection. The use of a 900-ps microchip pulsed laser facilitates the generation of acoustic waves with frequencies extending up to 1 GHz which allows for the resolution of micrometer-scale features in a specimen. Furthermore, the combination of optical generation and detection approaches eliminates the use of an ultrasonic coupling medium, and allows for elastic characterization and interior imaging at penetration depths on the order of several hundred micrometers. Experimental results illustrating the use of the laser-based acoustic microscopy technique for imaging micrometer-scale subsurface geometrical features in a 70-μm-thick single-crystal silicon wafer with a (100) orientation are presented.

  5. Computational-optical microscopy for 3D biological imaging beyond the diffraction limit

    NASA Astrophysics Data System (ADS)

    Grover, Ginni

In recent years, super-resolution imaging has become an important fluorescent microscopy tool. It has enabled imaging of structures smaller than the optical diffraction limit, with resolution below 50 nm. Extension to high-resolution volume imaging has been achieved by integration with various optical techniques. This thesis discusses the development of a fluorescent microscope that enables high-resolution, extended-depth, three-dimensional (3D) imaging, achieved by integrating computational methods with optical systems. In the first part of the thesis, point spread function (PSF) engineering for volume imaging is discussed. A class of PSFs, referred to as double-helix (DH) PSFs, is generated. These PSFs exhibit two focused spots in the image plane that rotate about the optical axis, encoding depth in the rotation of the image, and extend the depth-of-field by a factor of up to ~5. The precision performance of the DH-PSFs, based on an information-theoretical analysis, is compared with other 3D methods, with the conclusion that the DH-PSFs provide the best precision and the longest depth-of-field. Out of the various possible DH-PSFs, a PSF suitable for super-resolution microscopy is obtained. The DH-PSFs are implemented in imaging systems, such as a microscope, with a special phase modulation at the pupil plane. Surface-relief elements that are polarization-insensitive and ~90% light efficient are developed for phase modulation. The photon-efficient DH-PSF microscopes thus developed are used, along with optimal position-estimation algorithms, for tracking and super-resolution imaging in 3D. Imaging at depths-of-field of up to 2.5 μm is achieved without focus scanning. Microtubules were imaged with 3D resolution of (6, 9, 39) nm, in close agreement with the theoretical limit. A quantitative study of the co-localization of two proteins in volume was conducted in live bacteria. In the last part of the thesis, practical aspects of the DH-PSF microscope are discussed. A method to stabilize the microscope for extended periods of time with 3-4 nm precision in 3D is developed, and 3D super-resolution is demonstrated without drift. A PSF-correction algorithm is demonstrated to improve the characteristics of the DH-PSF in an experiment where it is implemented with a polarization-insensitive liquid-crystal spatial light modulator.
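The depth readout of a DH-PSF system reduces to measuring the angle of the line joining the two lobe centroids and mapping it to axial position. A minimal sketch of that step follows; the linear rotation-to-depth calibration and all names are illustrative assumptions (in practice the rotation-vs-depth curve is calibrated experimentally):

```python
import numpy as np

def depth_from_rotation(lobe_a, lobe_b, deg_per_micron, angle_at_focus=0.0):
    """Map the rotation angle of the two DH-PSF lobes to axial position.

    lobe_a, lobe_b : (x, y) centroids of the two focused spots
    deg_per_micron : assumed slope of a linear rotation-vs-depth calibration
    angle_at_focus : lobe angle at the nominal focal plane (degrees)
    """
    dx = lobe_b[0] - lobe_a[0]
    dy = lobe_b[1] - lobe_a[1]
    # angle of the inter-lobe axis; rotates monotonically with depth
    angle = np.degrees(np.arctan2(dy, dx))
    return (angle - angle_at_focus) / deg_per_micron
```

In a real pipeline the centroids would come from a fit (e.g. double-Gaussian) to the camera image, and the calibration would be a measured lookup curve rather than a single slope.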

  6. Acceleration of color computer-generated hologram from three-dimensional scenes with texture and depth information

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi

    2014-06-01

We propose acceleration of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes that are expressed as texture (RGB) and depth (D) images. These images are obtained by 3D graphics libraries and RGB-D cameras: for example, OpenGL and Kinect, respectively. We can regard them as two-dimensional (2D) cross-sectional images along the depth direction. The generation of CGHs from the 2D cross-sectional images requires multiple diffraction calculations. If we use convolution-based diffraction such as the angular spectrum method, the diffraction calculation takes a long time and requires large memory usage, because convolution-based diffraction requires the expansion of the 2D cross-sectional images to avoid wraparound noise. In this paper, we first describe the acceleration of the diffraction calculation using "band-limited double-step Fresnel diffraction," which does not require the expansion. Next, we describe color CGH acceleration using color space conversion. In general, color CGHs are generated in RGB color space; however, the same calculation must be repeated for each color component, so the computational burden of color CGH generation increases three-fold compared with monochrome CGH generation. We can reduce the computational burden by using YCbCr color space, because the 2D cross-sectional images in YCbCr color space can be down-sampled without impairing the image quality.
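The color-space step above can be illustrated with a short sketch: convert RGB to YCbCr and downsample only the chroma planes, so that two of the three per-channel diffraction calculations operate on smaller images. The BT.601 full-range conversion matrix and the pooling factor are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# ITU-R BT.601 full-range RGB -> YCbCr matrix (an assumed variant;
# the paper does not specify which YCbCr definition it uses).
RGB2YCBCR = np.array([[ 0.299,     0.587,     0.114],
                      [-0.168736, -0.331264,  0.5],
                      [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) float RGB image in [0, 1] to YCbCr."""
    ycbcr = rgb @ RGB2YCBCR.T
    ycbcr[..., 1:] += 0.5  # center the chroma channels around 0.5
    return ycbcr

def downsample_chroma(ycbcr, factor=2):
    """Keep Y at full resolution; average-pool Cb and Cr by `factor`.

    Shrinking the chroma planes shrinks two of the three diffraction
    calculations, which is the source of the speed-up described above."""
    y = ycbcr[..., 0]
    h, w = y.shape
    cb = ycbcr[:h - h % factor, :w - w % factor, 1]
    cr = ycbcr[:h - h % factor, :w - w % factor, 2]
    pool = lambda c: c.reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))
    return y, pool(cb), pool(cr)
```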

  7. [Research and realization of signal processing algorithms based on FPGA in digital ophthalmic ultrasonography imaging].

    PubMed

    Fang, Simin; Zhou, Sheng; Wang, Xiaochun; Ye, Qingsheng; Tian, Ling; Ji, Jianjun; Wang, Yanqun

    2015-01-01

This study aimed to design and improve FPGA-based signal processing algorithms for ophthalmic ultrasonography. Three signal processing modules were implemented in the Verilog HDL hardware language in Quartus II: a fully parallel distributed dynamic filter, digital quadrature demodulation, and logarithmic compression. Compared to the original system, the hardware cost is reduced, the whole image is clearer, more information about the deep eyeball is contained in the image, and the depth of detection increases from 5 cm to 6 cm. The new algorithms meet the design requirements and optimize the system, effectively improving the image quality of existing equipment.

  8. Motorized photoacoustic tomography probe for label-free improvement in image quality

    NASA Astrophysics Data System (ADS)

    Sangha, Gurneet S.; Hale, Nick H.; Goergen, Craig J.

    2018-02-01

One of the challenges in high-resolution in vivo lipid-based photoacoustic tomography (PAT) is improving penetration depth and signal-to-noise ratio (SNR) past subcutaneous fat absorbers. A potential solution is to create optical manipulation techniques that maximize the photon density within a region of interest. Here, we present a motorized PAT probe that is capable of tuning the depth at which light is focused, as well as substantially reducing probe-skin artifacts that can obscure image interpretation. Our PAT system consists of a Nd:YAG laser (Surelite EX, Continuum) coupled with a 40 MHz central frequency ultrasound transducer (Vevo2100, FUJIFILM VisualSonics). This system allows us to deliver 10 Hz, 5 ns light pulses with a fluence of 40 mJ/cm² to the tissue of interest and reconstruct PAT and ultrasound images with axial resolutions of 125 µm and 40 µm, respectively. The motorized PAT holder was validated by imaging a polyvinyl alcohol phantom with embedded polyethylene-50 tubing and periaortic fat in apolipoprotein-E-deficient mice. We used 1210 nm light for this study, as this wavelength generates PAT signal for both lipids and polyethylene-50 tubes. Ex vivo results showed a 2 mm improvement in penetration depth, and in vivo experiments showed an increase in lipid SNR of at least 62%. Our PAT probe also utilizes a 7 μm aluminum filter to prevent in vivo probe-skin reflection artifacts that have previously been resolved using image post-processing techniques. Using this optimized PAT probe, we can direct light to various depths within tissue to improve image quality and prevent reflection artifacts.

  9. Retrieving the axial position of fluorescent light emitting spots by shearing interferometry

    NASA Astrophysics Data System (ADS)

    Schindler, Johannes; Schau, Philipp; Brodhag, Nicole; Frenner, Karsten; Osten, Wolfgang

    2016-12-01

A method for the depth-resolved detection of fluorescent radiation based on imaging of an interference pattern of two intersecting beams and shearing interferometry is presented. The illumination setup provides local addressing of the excitation of fluorescence and a coarse confinement of the excitation volume in the axial and lateral directions. The reconstruction of the depth relies on the measurement of the phase of the fluorescent wave fronts. Their curvature is directly related to the distance of a source from the focus of the imaging system. Access to the phase information is enabled by a lateral shearing interferometer based on a Michelson setup. This allows the evaluation of interference signals even for spatially and temporally incoherent light, such as that emitted by fluorophores. An analytical signal model is presented and the relations for obtaining the depth information are derived. Measurements of reference samples with different concentrations and spatial distributions of fluorophores and scatterers prove the experimental feasibility of the method. In a setup optimized for flexibility and operating in the visible range, sufficiently large interference signals are recorded for scatterers placed at depths on the order of a hundred micrometers below the surface in a material with scattering properties comparable to dental enamel.

  11. Fluorescence tomography characterization for sub-surface imaging with protoporphyrin IX

    PubMed Central

    Kepshire, Dax; Davis, Scott C.; Dehghani, Hamid; Paulsen, Keith D.; Pogue, Brian W.

    2009-01-01

    Optical imaging of fluorescent objects embedded in a tissue simulating medium was characterized using non-contact based approaches to fluorescence remittance imaging (FRI) and sub-surface fluorescence diffuse optical tomography (FDOT). Using Protoporphyrin IX as a fluorescent agent, experiments were performed on tissue phantoms comprised of typical in-vivo tumor to normal tissue contrast ratios, ranging from 3.5:1 up to 10:1. It was found that tomographic imaging was able to recover interior inclusions with high contrast relative to the background; however, simple planar fluorescence imaging provided a superior contrast to noise ratio. Overall, FRI performed optimally when the object was located on or close to the surface and, perhaps most importantly, FDOT was able to recover specific depth information about the location of embedded regions. The results indicate that an optimal system for localizing embedded fluorescent regions should combine fluorescence reflectance imaging for high sensitivity and sub-surface tomography for depth detection, thereby allowing more accurate localization in all three directions within the tissue. PMID:18545571

  12. Wave Period and Coastal Bathymetry Estimations from Satellite Images

    NASA Astrophysics Data System (ADS)

    Danilo, Celine; Melgani, Farid

    2016-08-01

We present an approach for wave period and coastal water depth estimation. The approach, based on wave observations, is entirely independent of ancillary data and can theoretically be applied to SAR or optical images. In order to demonstrate its feasibility, we apply our method to more than 50 Sentinel-1A images of the Hawaiian Islands, well known for their long waves. Six wave buoys are available to compare our results with in-situ measurements. The results on Sentinel-1A images show that half of the images were unsuitable for applying the method (no swell, or wavelength too small to be captured by the SAR). On the other half, 78% of the estimated wave periods are in accordance with buoy measurements. In addition, we present preliminary results for the estimation of coastal water depth from a Landsat-8 image (with characteristics close to Sentinel-2A). With a squared correlation coefficient of 0.7 against ground-truth measurements, this approach shows promising results for monitoring coastal bathymetry.
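Although the abstract does not spell out the estimator, wave-based bathymetry of this kind rests on the linear dispersion relation ω² = gk·tanh(kh), which can be inverted for depth h once the wavelength and period are measured from the imagery. A minimal sketch under that assumption (not the authors' exact inversion):

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def depth_from_wave(wavelength_m, period_s):
    """Invert the linear dispersion relation w^2 = g*k*tanh(k*h) for depth.

    Returns None in the deep-water regime, where tanh(k*h) saturates at 1
    and depth is no longer observable from the wave kinematics (waves must
    'feel' the bottom for this to work)."""
    k = 2 * math.pi / wavelength_m      # wavenumber from imaged wavelength
    omega = 2 * math.pi / period_s      # angular frequency from wave period
    ratio = omega ** 2 / (G * k)
    if ratio >= 1.0:                    # deep water: depth not recoverable
        return None
    return math.atanh(ratio) / k
```

In practice the wavelength comes from a spectral analysis of the image and the period either from buoys or, as here, from the imagery itself.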

  13. Concept of proton radiography using energy resolved dose measurement.

    PubMed

    Bentefour, El H; Schnuerer, Roland; Lu, Hsiao-Ming

    2016-08-21

Energy resolved dosimetry offers a potential path to single-detector-based proton imaging using scanned proton beams. This is because energy resolved dose functions encode the radiological depth at which the measurements are made. When a set of predetermined proton beams (the 'proton imaging field') is used to deliver a well-determined dose distribution in a specific volume, then at any given depth x of this volume, the behavior of the dose against the energies of the proton imaging field is unique and characterizes the depth x. This concept applies directly to proton therapy scanning delivery methods (pencil beam scanning and uniform scanning), and it can be extended to the passive delivery methods (single and double scattering) if the delivery of the irradiation is time-controlled with a known time-energy relationship. To derive the water equivalent path length (WEPL) from the energy resolved dose measurement, one may proceed in two different ways. The first method is to match the measured energy resolved dose function to a pre-established calibration database of the behavior of the energy resolved dose in water, measured over the entire range of radiological depths with at least 1 mm spatial resolution. This calibration database can also be made specific to the patient if computed using the patient's x-CT data. The second method to determine the WEPL is to use the empirical relationships between the WEPL and either the integral dose or the depth at 80% of the proximal fall-off of the energy resolved dose functions in water. In this note, we establish the fundamental relationship between the energy resolved dose and the WEPL at the depth of the measurement. We then illustrate this relationship with experimental data and discuss its imaging dynamic range for 230 MeV protons.
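The first (database-matching) method above can be sketched as a nearest-curve lookup: compare the measured dose-vs-energy function against every entry of the calibration database and return the WEPL of the closest one. The least-squares matching criterion and all names are illustrative assumptions:

```python
import numpy as np

def wepl_from_dose_curve(measured, calib_curves, calib_wepls):
    """Estimate WEPL by matching a measured dose-vs-energy curve against a
    pre-computed calibration database (first method described above).

    measured     : (E,) dose measured at each imaging-field energy
    calib_curves : (N, E) database of dose curves at known depths in water
    calib_wepls  : (N,) WEPL corresponding to each database entry
    """
    # least-squares distance of the measured curve to every calibration curve
    errs = np.sum((calib_curves - measured) ** 2, axis=1)
    return calib_wepls[np.argmin(errs)]
```

A finer-than-database WEPL estimate could interpolate between the two best-matching entries; the sketch keeps the nearest-neighbour form for clarity.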

  14. A new method for depth profiling reconstruction in confocal microscopy

    NASA Astrophysics Data System (ADS)

    Esposito, Rosario; Scherillo, Giuseppe; Mensitieri, Giuseppe

    2018-05-01

Confocal microscopy is commonly used to reconstruct depth profiles of chemical species in multicomponent systems and to image nuclear and cellular details in human tissues via image intensity measurements of optical sections. However, the performance of this technique is reduced by inherent effects related to wave diffraction phenomena, refractive index mismatch, and finite beam spot size. All these effects distort the optical wave and cause an image to be captured of a small volume around the desired illuminated focal point within the specimen, rather than an image of the focal point itself. The size of this small volume increases with depth, causing a further loss of resolution and distortion of the profile. Recently, we proposed a theoretical model that accounts for the above wave distortion and allows a correct reconstruction of the depth profiles for homogeneous samples. In this paper, this theoretical approach has been adapted to describe the profiles measured from non-homogeneous distributions of emitters inside the investigated samples. The intensity image is built by summing the intensities collected from each of the emitter planes belonging to the illuminated volume, weighted by the emitter concentration. The true distribution of the emitter concentration is recovered by a new approach that implements this theoretical model in a numerical algorithm based on the Maximum Entropy Method. Comparisons with experimental data and numerical simulations show that this new approach is able to recover the real unknown concentration distribution from experimental profiles with an accuracy better than 3%.

  15. Automatic Intra-Operative Stitching of Non-Overlapping Cone-Beam CT Acquisitions

    PubMed Central

    Fotouhi, Javad; Fuerst, Bernhard; Unberath, Mathias; Reichenstein, Stefan; Lee, Sing Chun; Johnson, Alex A.; Osgood, Greg M.; Armand, Mehran; Navab, Nassir

    2018-01-01

    Purpose Cone-Beam Computed Tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While CBCT provides crucial intraoperative information, it is bounded by a limited imaging volume, resulting in reduced effectiveness. This paper introduces an approach allowing real-time intraoperative stitching of overlapping and non-overlapping CBCT volumes to enable 3D measurements on large anatomical structures. Methods A CBCT-capable mobile C-arm is augmented with a Red-Green-Blue-Depth (RGBD) camera. An off-line co-calibration of the two imaging modalities results in co-registered video, infrared, and X-ray views of the surgical scene. Then, automatic stitching of multiple small, non-overlapping CBCT volumes is possible by recovering the relative motion of the C-arm with respect to the patient based on the camera observations. We propose three methods to recover the relative pose: RGB-based tracking of visual markers that are placed near the surgical site, RGBD-based simultaneous localization and mapping (SLAM) of the surgical scene which incorporates both color and depth information for pose estimation, and surface tracking of the patient using only depth data provided by the RGBD sensor. Results On an animal cadaver, we show stitching errors as low as 0.33 mm, 0.91 mm, and 1.72 mm when the visual marker, RGBD SLAM, and surface data are used for tracking, respectively. Conclusions The proposed method overcomes one of the major limitations of CBCT C-arm systems by integrating vision-based tracking and expanding the imaging volume without any intraoperative use of calibration grids or external tracking systems. We believe this solution to be most appropriate for 3D intraoperative verification of several orthopedic procedures. PMID:29569728

  16. Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system

    NASA Astrophysics Data System (ADS)

    Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu

    2018-09-01

A floating aerial three-dimensional (3D) display based on a freeform mirror and an improved integral imaging system is demonstrated. In traditional integral imaging (II), the distortion originating from lens aberration warps the elemental images and severely degrades the visual effect. To correct the distortion of the observed pixels and improve the image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with large off-screen depth, which limits the floating aerial visual experience. To display the 3D image in free space, an off-axis reflection system with a freeform mirror is designed. By combining the improved II system and the designed freeform optical element, a floating aerial 3D image is presented.

  17. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    NASA Astrophysics Data System (ADS)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

Estimating depth has long been a major issue in the fields of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. We used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed previously; this paper proposes improved feature matching between successive video frames that uses a neural-network methodology to reduce the computation time of feature matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distances based on the Kinect depth data, which the robot can use to determine its navigation path, as well as for obstacle-detection applications.
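The match-then-assign-depth step described above can be sketched as follows; a brute-force nearest-neighbour matcher stands in for the paper's neural-network matcher, and all names are illustrative:

```python
import numpy as np

def match_and_assign_depth(desc_a, desc_b, keypoints_b, depth_map):
    """For each feature descriptor in frame A, find its nearest neighbour
    in frame B and look up that keypoint's depth in the Kinect depth map.

    desc_a      : (M, D) descriptors from frame A
    desc_b      : (N, D) descriptors from frame B
    keypoints_b : (N, 2) pixel coordinates (x, y) of frame-B features
    depth_map   : (H, W) depth image aligned with frame B
    """
    # pairwise squared distances between the two descriptor sets
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    nn = d2.argmin(axis=1)                 # best match in frame B per feature
    rows = keypoints_b[nn, 1].astype(int)  # y -> row
    cols = keypoints_b[nn, 0].astype(int)  # x -> column
    return nn, depth_map[rows, cols]       # matches and their depths
```

A real pipeline would also reject ambiguous matches (e.g. with a ratio test) before trusting the assigned depths.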

  18. Dimensional metrology of smooth micro structures utilizing the spatial modulation of white-light interference fringes

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Tang, Yan; Deng, Qinyuan; Liu, Junbo; Wang, Jian; Zhao, Lixin

    2017-08-01

Dimensional metrology for micro structures plays an important role in addressing quality issues and observing the performance of micro-fabricated products. The proposed method measures three-dimensional topography in white-light interferometry through the modulation depth in the spatial-frequency domain. A normalized modulation depth is first obtained in the xy plane (image plane) for each CCD image individually. After that, the modulation depth of each pixel is analyzed along the scanning direction (z-axis) to reconstruct the topography of the micro samples. Owing to the characteristics of modulation depth in broadband-light interferometry, the method effectively suppresses the negative influences of light fluctuations and external irradiance disturbance. Both theory and experiments are elaborated in detail to verify that the modulation-depth-based method greatly improves the stability and sensitivity of the measurement system while maintaining satisfactory precision. This technique achieves improved robustness in complex measurement environments, with the potential to be applied to online topography measurement in domains such as chemistry and medicine.
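The per-pixel analysis along the scan axis can be sketched as finding, for each pixel, the scan position where the fringe modulation peaks; that position is the surface height. The windowed squared deviation below is a simplified stand-in for the paper's normalized spatial-frequency-domain modulation depth, and all names are assumptions:

```python
import numpy as np

def surface_from_modulation(stack, z_positions, win=5):
    """Per-pixel surface height as the scan position of maximum white-light
    fringe modulation (envelope peak).

    stack       : (Z, H, W) intensity images recorded while scanning in z
    z_positions : (Z,) scan positions corresponding to the stack slices
    win         : window length for the sliding average along z
    """
    # squared deviation from each pixel's mean carries the fringe energy
    dev = (stack - stack.mean(axis=0)) ** 2
    kernel = np.ones(win) / win
    # sliding average along the scan axis approximates local modulation depth
    mod = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode='same'), 0, dev)
    return z_positions[np.argmax(mod, axis=0)]
```

Sub-sample height precision would follow from fitting the envelope peak rather than taking a plain argmax.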

  19. Depth information in natural environments derived from optic flow by insect motion detection system: a model analysis

    PubMed Central

    Schwegmann, Alexander; Lindemann, Jens P.; Egelhaaf, Martin

    2014-01-01

    Knowing the depth structure of the environment is crucial for moving animals in many behavioral contexts, such as collision avoidance, targeting objects, or spatial navigation. An important source of depth information is motion parallax. This powerful cue is generated on the eyes during translatory self-motion with the retinal images of nearby objects moving faster than those of distant ones. To investigate how the visual motion pathway represents motion-based depth information we analyzed its responses to image sequences recorded in natural cluttered environments with a wide range of depth structures. The analysis was done on the basis of an experimentally validated model of the visual motion pathway of insects, with its core elements being correlation-type elementary motion detectors (EMDs). It is the key result of our analysis that the absolute EMD responses, i.e., the motion energy profile, represent the contrast-weighted nearness of environmental structures during translatory self-motion at a roughly constant velocity. In other words, the output of the EMD array highlights contours of nearby objects. This conclusion is largely independent of the scale over which EMDs are spatially pooled and was corroborated by scrutinizing the motion energy profile after eliminating the depth structure from the natural image sequences. Hence, the well-established dependence of correlation-type EMDs on both velocity and textural properties of motion stimuli appears to be advantageous for representing behaviorally relevant information about the environment in a computationally parsimonious way. PMID:25136314
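The correlation-type EMD at the core of the model above multiplies the delayed signal of one photoreceptor with the undelayed signal of its neighbour and subtracts the mirror-symmetric half-detector. A minimal sketch, in which a pure time shift stands in for the model's low-pass filter (a simplifying assumption):

```python
import numpy as np

def emd_response(left, right, delay=5):
    """Correlation-type elementary motion detector (Reichardt model).

    `left` and `right` are luminance time series from two neighbouring
    photoreceptors. The delayed signal of each arm is multiplied with the
    undelayed signal of the other, and the two half-detectors are
    subtracted, giving a direction-selective output.
    """
    d_left = np.roll(left, delay)    # delayed left input
    d_right = np.roll(right, delay)  # delayed right input
    out = d_left * right - left * d_right
    out[:delay] = 0.0                # discard wrap-around samples
    return out
```

For a pattern moving from the left receptor toward the right one, the time-averaged output is positive; reversing the motion flips the sign, which is the direction selectivity the abstract relies on.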

  20. Polarization-sensitive optical coherence tomography-based imaging, parameterization, and quantification of human cartilage degeneration

    NASA Astrophysics Data System (ADS)

    Brill, Nicolai; Wirtz, Mathias; Merhof, Dorit; Tingart, Markus; Jahr, Holger; Truhn, Daniel; Schmitt, Robert; Nebelung, Sven

    2016-07-01

    Polarization-sensitive optical coherence tomography (PS-OCT) is a light-based, high-resolution, real-time, noninvasive, and nondestructive imaging modality yielding quasimicroscopic cross-sectional images of cartilage. As yet, comprehensive parameterization and quantification of birefringence and tissue properties have not been performed on human cartilage. PS-OCT and algorithm-based image analysis were used to objectively grade human cartilage degeneration in terms of surface irregularity, tissue homogeneity, signal attenuation, as well as birefringence coefficient and band width, height, depth, and number. Degeneration-dependent changes were noted for the former three parameters exclusively, thereby questioning the diagnostic value of PS-OCT in the assessment of human cartilage degeneration.

  1. High dynamic range coding imaging system

    NASA Astrophysics Data System (ADS)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme helps us obtain HDR images with extended depth of field. We adopt a sparse coding algorithm to design the coded patterns, then use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images is reconstructed. We use existing algorithms to fuse those LDR images into an HDR image for display. We build an optical simulation model and obtain simulation images to verify the novel system.
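The fusion of differently exposed LDR images into one HDR image can be illustrated with a generic Debevec-style weighted merge; this is a stand-in for the "existing algorithms" the abstract mentions, and the hat weighting and names are assumptions:

```python
import numpy as np

def fuse_exposures(ldr_images, exposure_times):
    """Merge LDR captures into one HDR radiance map.

    Each pixel's radiance estimate is a weighted average of
    (pixel value / exposure time) across exposures, with a hat weighting
    that trusts mid-range pixels most and distrusts clipped ones.

    ldr_images     : sequence of (H, W) images with values in [0, 1]
    exposure_times : matching sequence of exposure times
    """
    ldr = np.asarray(ldr_images, dtype=float)            # (N, H, W)
    t = np.asarray(exposure_times, dtype=float)[:, None, None]
    w = 1.0 - np.abs(ldr - 0.5) * 2.0                    # hat weights: 0 at 0 and 1
    w = np.clip(w, 1e-6, None)                           # avoid divide-by-zero
    return (w * ldr / t).sum(axis=0) / w.sum(axis=0)
```

A full pipeline would first invert the camera response curve so that pixel values are proportional to exposure; the sketch assumes a linear sensor.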

  2. Intrauterine photoacoustic and ultrasound imaging probe

    NASA Astrophysics Data System (ADS)

    Miranda, Christopher; Barkley, Joel; Smith, Barbara S.

    2018-04-01

Intrauterine photoacoustic and ultrasound imaging are probe-based imaging modalities with translational potential for use in detecting endometrial diseases. This deep-tissue imaging probe design allows for the retrofitting of commercially available endometrial sampling curettes. The imaging probe presented here has a 2.92-mm diameter and approximate length of 26 cm, which allows for entry into the human endometrial cavity, making it possible to use photoacoustic imaging and high-resolution ultrasound to characterize the uterus. We demonstrate the imaging probe's ability to provide structural information of an excised pig uterus using ultrasound imaging and to detect photoacoustic signals at a radial depth of 1 cm.

  3. Rational-operator-based depth-from-defocus approach to scene reconstruction.

    PubMed

    Li, Ang; Staunton, Richard; Tjahjadi, Tardi

    2013-09-01

    This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach, one using the Gaussian rational operators (ROs) that are based on the Gaussian point spread function (PSF) and the second based on the generalized Gaussian PSF, are considered. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results are considered for real scenes and show that both approaches outperform existing RO-based methods.

  4. A Parallel Product-Convolution approach for representing the depth varying Point Spread Functions in 3D widefield microscopy based on principal component analysis.

    PubMed

    Arigovindan, Muthuvel; Shaevitz, Joshua; McGowan, John; Sedat, John W; Agard, David A

    2010-03-29

    We address the problem of computational representation of image formation in 3D widefield fluorescence microscopy with depth varying spherical aberrations. We first represent 3D depth-dependent point spread functions (PSFs) as a weighted sum of basis functions that are obtained by principal component analysis (PCA) of experimental data. This representation is then used to derive an approximating structure that compactly expresses the depth variant response as a sum of few depth invariant convolutions pre-multiplied by a set of 1D depth functions, where the convolving functions are the PCA-derived basis functions. The model offers an efficient and convenient trade-off between complexity and accuracy. For a given number of approximating PSFs, the proposed method results in a much better accuracy than the strata based approximation scheme that is currently used in the literature. In addition to yielding better accuracy, the proposed methods automatically eliminate the noise in the measured PSFs.
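The product-convolution structure described above reduces depth-variant blurring to a few depth-invariant convolutions, each premultiplied by a 1D depth function. A minimal sketch follows, with circular FFT convolution as a dependency-free stand-in for the padded convolutions an implementation would use, and all names as illustrative assumptions:

```python
import numpy as np

def depth_variant_image(volume, basis_psfs, depth_weights):
    """Approximate depth-varying blur as a sum of depth-invariant
    convolutions:  image = sum_k conv(psf_k, sum_z w_k(z) * volume[z]).

    volume        : (Z, H, W) object planes
    basis_psfs    : (K, H, W) PCA-derived basis PSFs (origin at [0, 0])
    depth_weights : (K, Z) 1D depth functions w_k(z)
    """
    Z, H, W = volume.shape
    img = np.zeros((H, W))
    for k, psf in enumerate(basis_psfs):
        # collapse depth first: one weighted sum, then ONE convolution per
        # basis PSF -- this ordering is the source of the efficiency gain
        weighted = np.tensordot(depth_weights[k], volume, axes=(0, 0))
        img += np.real(np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(weighted)))
    return img
```

The cost is K convolutions instead of Z (one per plane), with K chosen by how many principal components are needed for the desired accuracy.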

  5. Depth image super-resolution via semi self-taught learning framework

    NASA Astrophysics Data System (ADS)

    Zhao, Furong; Cao, Zhiguo; Xiao, Yang; Zhang, Xiaodi; Xian, Ke; Li, Ruibo

    2017-06-01

Depth images have recently attracted much attention in computer vision and in high-quality 3D content for 3DTV and 3D movies. In this paper, we present a new semi-self-taught learning framework for enhancing the resolution of depth maps without making use of ancillary color image data at the target resolution or of multiple aligned depth maps. Our framework consists of cascaded random forests proceeding from coarse to fine results. We learn the surface information and structure transformations both from a small set of high-quality depth exemplars and from the input depth map itself across different scales. Considering that edges play an important role in depth map quality, we optimize an effective regularized objective computed on the output image space and the input edge space in the random forests. Experiments show the effectiveness and superiority of our method against other techniques, with or without aligned RGB information.

  6. Influence of aerosol estimation on coastal water products retrieved from HICO images

    NASA Astrophysics Data System (ADS)

    Patterson, Karen W.; Lamela, Gia

    2011-06-01

The Hyperspectral Imager for the Coastal Ocean (HICO) is a hyperspectral sensor which was launched to the International Space Station in September 2009. The Naval Research Laboratory (NRL) has been developing the Coastal Water Signatures Toolkit (CWST) to estimate water depth, bottom type, and water column constituents such as chlorophyll, suspended sediments, and chromophoric dissolved organic matter from hyperspectral imagery. The CWST uses a look-up table approach, comparing remote sensing reflectance spectra observed in an image against a database of modeled spectra for pre-determined water column constituents, depth, and bottom type. In order to use this approach successfully, the remote sensing reflectances must be accurate, which implies accurately correcting for the atmospheric contribution to the HICO top-of-atmosphere radiances. One tool the NRL uses to atmospherically correct HICO imagery is Correction of Coastal Ocean Atmospheres (COCOA), which is based on Tafkaa 6S. One of the user input parameters to COCOA is aerosol optical depth or aerosol visibility, which can vary rapidly over short distances in coastal waters. Changes to the aerosol thickness result in changes to the magnitude of the remote sensing reflectances. As such, the CWST retrievals of water constituents, depth, and bottom type can be expected to vary in like fashion. This work illustrates the variability in CWST retrievals due to inaccurate aerosol thickness estimation during atmospheric correction of HICO images.

  7. Depth map generation using a single image sensor with phase masks.

    PubMed

    Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki

    2016-06-13

    Conventional stereo matching systems generate a depth map using two or more digital imaging sensors. Such systems are difficult to integrate into small cameras because of their high cost and bulky size. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for phase-difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes to generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
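    Step (iii) above, hierarchical block matching, can be illustrated at a single scale with a plain sum-of-absolute-differences search; the block size and disparity range below are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Estimate per-pixel disparity by matching a block around each left-image
    pixel against horizontally shifted blocks in the right image (SAD cost)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()  # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Demo: synthesize a right view shifted by 3 pixels relative to the left view.
rng = np.random.default_rng(0)
left = rng.random((20, 30))
right = np.zeros_like(left)
right[:, :-3] = left[:, 3:]
disp = block_match_disparity(left, right, block=5, max_disp=8)
```

    A hierarchical version would run this search on a coarse-to-fine image pyramid, using each coarse result to restrict the search range at the next level.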

  8. Performance characterization of structured light-based fingerprint scanner

    NASA Astrophysics Data System (ADS)

    Hassebrook, Laurence G.; Wang, Minghao; Daley, Raymond C.

    2013-05-01

    Our group believes that the evolution of fingerprint capture technology is in transition to include 3-D non-contact fingerprint capture. More specifically we believe that systems based on structured light illumination provide the highest level of depth measurement accuracy. However, for these new technologies to be fully accepted by the biometric community, they must be compliant with federal standards of performance. At present these standards do not exist for this new biometric technology. We propose and define a set of test procedures to be used to verify compliance with the Federal Bureau of Investigation's image quality specification for Personal Identity Verification single fingerprint capture devices. The proposed test procedures include: geometric accuracy, lateral resolution based on intensity or depth, gray level uniformity and flattened fingerprint image quality. Several 2-D contact analogies, performance tradeoffs and optimization dilemmas are evaluated and proposed solutions are presented.

  9. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.

    PubMed

    Sohn, Bong-Soo

    2017-03-11

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
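    The base/detail blending pipeline above can be sketched in a few lines of NumPy; the depth-range compression and blend weight are illustrative assumptions, and the depth-of-field blur and mesh-conversion steps are omitted:

```python
import numpy as np

def bas_relief_depth(depth, image, depth_range=0.1, detail_weight=0.3):
    """Compress the depth map's value range into a base map, derive a detail
    map from image intensities, and blend the two into a relief height map."""
    # Base map: normalize depth to [0, 1], then compress its value range.
    base = (depth - depth.min()) / (np.ptp(depth) + 1e-12)
    base = base * depth_range
    # Detail map: normalized image intensities carry fine scene structure.
    detail = (image - image.min()) / (np.ptp(image) + 1e-12)
    # Blend overall depth with scene detail into one height map.
    relief = (1.0 - detail_weight) * base + detail_weight * detail * depth_range
    return relief

# Toy 2x2 example: relief stays within the compressed range [0, depth_range].
depth = np.array([[0.0, 1.0], [2.0, 3.0]])
image = np.array([[0.0, 0.5], [0.5, 1.0]])
relief = bas_relief_depth(depth, image)
```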

  10. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones

    PubMed Central

    Sohn, Bong-Soo

    2017-01-01

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing. PMID:28287487

  11. A range/depth modulation transfer function (RMTF) framework for characterizing 3D imaging LADAR performance

    NASA Astrophysics Data System (ADS)

    Staple, Bevan; Earhart, R. P.; Slaymaker, Philip A.; Drouillard, Thomas F., II; Mahony, Thomas

    2005-05-01

    3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in 3-dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined, is improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system's ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework utilizes only those terminologies proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new, Range-Based, Slanted Knife-Edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enable improved system standardization.

  12. Cytology 3D structure formation based on optical microscopy images

    NASA Astrophysics Data System (ADS)

    Pronichev, A. N.; Polyakov, E. V.; Shabalova, I. P.; Djangirova, T. V.; Zaitsev, S. M.

    2017-01-01

    The article is devoted to optimizing the imaging parameters for biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for forming images of virtual preparations is proposed. The optimum number of layers for scanning the object in depth, while preserving a holistic perception of its structure, was determined from the results of the experiment.

  13. Imaging System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The 1100C Virtual Window is based on technology developed under NASA Small Business Innovation Research (SBIR) contracts to Ames Research Center. For example, under one contract Dimension Technologies, Inc. developed a large autostereoscopic display for scientific visualization applications. The Virtual Window employs an innovative illumination system to deliver the depth and color of true 3D imaging. Its applications include surgery and Magnetic Resonance Imaging scans, viewing for teleoperated robots, training, and in aviation cockpit displays.

  14. Application of hyperosmotic agent to determine gastric cancer with optical coherence tomography ex vivo in mice

    NASA Astrophysics Data System (ADS)

    Xiong, Honglian; Guo, Zhouyi; Zeng, Changchun; Wang, Like; He, Yonghong; Liu, Songhao

    2009-03-01

    Noninvasive tumor imaging could lead to the early detection and timely treatment of cancer. Optical coherence tomography (OCT) has been reported as an ideal diagnostic tool for distinguishing tumor tissues from normal tissues based on structural imaging. In this study, the capability of OCT for functional imaging of normal and tumor tissues based on time- and depth-resolved quantification of the permeability of biomolecules through these tissues is investigated. The orthotopic graft model of gastric cancer in nude mice is used, normal and tumor tissues from the gastric wall are imaged, and diffusion of a 20% aqueous solution of glucose in normal stomach tissues and gastric tumor tissues is monitored and quantified as a function of time and tissue depth by an OCT system. Our results show that the permeability coefficient is (0.94±0.04)×10^-5 cm/s in stomach tissues and (5.32±0.17)×10^-5 cm/s in tumor tissues, respectively, and that tumor tissues have a higher permeability coefficient compared to normal tissues in optical coherence tomographic images. From the results, it is found that the accurate and sensitive assessment of the permeability coefficients of normal and tumor tissues offers an effective OCT imaging method for the detection of tumor tissues and clinical diagnosis.
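    In OCT functional-imaging studies, a permeability coefficient like the one quantified above is typically estimated as the rate at which the agent's diffusion front advances in depth over time. A minimal sketch with made-up measurements (not data from this study), fitting the front position against time:

```python
import numpy as np

def permeability_coefficient(depths_cm, times_s):
    """Least-squares slope of diffusion-front depth vs. time, an estimate
    of the agent permeability coefficient in cm/s."""
    t = np.asarray(times_s, dtype=float)
    z = np.asarray(depths_cm, dtype=float)
    slope, _intercept = np.polyfit(t, z, 1)
    return slope

# Hypothetical front positions, chosen to match the reported magnitude.
times = [0, 200, 400, 600]           # seconds
depths = [0.0, 0.002, 0.004, 0.006]  # cm
p = permeability_coefficient(depths, times)  # ~1e-5 cm/s
```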

  15. Depth profiling and imaging capabilities of an ultrashort pulse laser ablation time of flight mass spectrometer

    PubMed Central

    Cui, Yang; Moore, Jerry F.; Milasinovic, Slobodan; Liu, Yaoming; Gordon, Robert J.; Hanley, Luke

    2012-01-01

    An ultrafast laser ablation time-of-flight mass spectrometer (AToF-MS) and associated data acquisition software that permits imaging at micron-scale resolution and sub-micron-scale depth profiling are described. The ion funnel-based source of this instrument can be operated at pressures ranging from 10^-8 to ~0.3 mbar. Mass spectra may be collected and stored at a rate of 1 kHz by the data acquisition system, allowing the instrument to be coupled with standard commercial Ti:sapphire lasers. The capabilities of the AToF-MS instrument are demonstrated on metal foils and semiconductor wafers using a Ti:sapphire laser emitting 800 nm, ~75 fs pulses at 1 kHz. Results show that elemental quantification and depth profiling are feasible with this instrument. PMID:23020378

  16. Scattered wave imaging of the oceanic plate in Cascadia

    PubMed Central

    Rychert, Catherine A.; Harmon, Nicholas; Tharimena, Saikiran

    2018-01-01

    Fifty years after plate tectonic theory was developed, the defining mechanism of the plate is still widely debated. The relatively short, simple history of young ocean lithosphere makes it an ideal place to determine the property that defines a plate, yet the remoteness and harshness of the seafloor have made precise imaging challenging. We use S-to-P receiver functions to image discontinuities beneath newly formed lithosphere at the Juan de Fuca and Gorda Ridges. We image a strong negative discontinuity at the base of the plate increasing from 20 to 45 km depth beneath the 0- to 10-million-year-old seafloor and a positive discontinuity at the onset of melting at 90 to 130 km depth. Comparison with geodynamic models and experimental constraints indicates that the observed discontinuities cannot easily be reconciled with subsolidus mechanisms. Instead, partial melt may be required, which would decrease mantle viscosity and define the young oceanic plate. PMID:29457132

  17. New Insights on Subsurface Imaging of Carbon Nanotubes in Polymer Composites via Scanning Electron Microscopy

    NASA Technical Reports Server (NTRS)

    Zhao, Minhua; Ming, Bin; Kim, Jae-Woo; Gibbons, Luke J.; Gu, Xiaohong; Nguyen, Tinh; Park, Cheol; Lillehei, Peter T.; Villarrubia, J. S.; Vladar, Andras E.

    2015-01-01

    Despite many studies of subsurface imaging of carbon nanotube (CNT)-polymer composites via scanning electron microscopy (SEM), significant controversy exists concerning the imaging depth and contrast mechanisms. We studied CNT-polyimide composites and, by three-dimensional reconstructions of captured stereo-pair images, determined that the maximum SEM imaging depth was typically hundreds of nanometers. The contrast mechanisms were investigated over a broad range of beam accelerating voltages from 0.3 to 30 kV, and ascribed to modulation by embedded CNTs of the effective secondary electron (SE) emission yield at the polymer surface. This modulation of the SE yield is due to non-uniform surface potential distribution resulting from current flows due to leakage and electron beam induced current. The importance of an external electric field on SEM subsurface imaging was also demonstrated. The insights gained from this study can be generally applied to SEM nondestructive subsurface imaging of conducting nanostructures embedded in dielectric matrices such as graphene-polymer composites, silicon-based single electron transistors, high resolution SEM overlay metrology or e-beam lithography, and have significant implications in nanotechnology.

  18. Robust gaze-steering of an active vision system against errors in the estimated parameters

    NASA Astrophysics Data System (ADS)

    Han, Youngmo

    2015-01-01

    Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.

  19. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging

    PubMed Central

    Manley, Suliana

    2015-01-01

    Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term "wobble", results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific, lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample. PMID:26600467

  20. A weighted optimization approach to time-of-flight sensor fusion.

    PubMed

    Schwarz, Sebastian; Sjostrom, Marten; Olsson, Roger

    2014-01-01

    Acquiring scenery depth is a fundamental task in computer vision, with many applications in manufacturing, surveillance, or robotics relying on accurate scenery information. Time-of-flight cameras can provide depth information in real-time and overcome shortcomings of traditional stereo analysis. However, they provide limited spatial resolution and sophisticated upscaling algorithms are sought after. In this paper, we present a sensor fusion approach to time-of-flight super resolution, based on the combination of depth and texture sources. Unlike other texture guided approaches, we interpret the depth upscaling process as a weighted energy optimization problem. Three different weights are introduced, employing different available sensor data. The individual weights address object boundaries in depth, depth sensor noise, and temporal consistency. Applied in consecutive order, they form three weighting strategies for time-of-flight super resolution. Objective evaluations show advantages in depth accuracy and for depth image based rendering compared with state-of-the-art depth upscaling. Subjective view synthesis evaluation shows a significant increase in viewer preference by a factor of four in stereoscopic viewing conditions. To the best of our knowledge, this is the first extensive subjective test performed on time-of-flight depth upscaling. Objective and subjective results prove the suitability of our approach to time-of-flight super resolution for depth scenery capture.
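    One plausible reading of a texture-guided upscaling energy of this kind (not the authors' three-weight formulation) is an iterative weighted average that anchors the result to the upsampled depth while smoothing it with weights that fall off across texture edges; the exponential weight and its strength lam are illustrative:

```python
import numpy as np

def weighted_depth_upscale(depth_lr, texture_hr, scale, iters=50, lam=0.5):
    """Texture-guided depth upscaling sketch: nearest-neighbor upsample the
    low-res depth, then iteratively average each pixel with its neighbors,
    down-weighting neighbors across texture edges (a Jacobi-style solve of
    a data term plus an edge-aware smoothness term)."""
    # Nearest-neighbor upsampling serves as initialization and data term.
    d0 = np.kron(depth_lr, np.ones((scale, scale)))
    d = d0.copy()
    for _ in range(iters):
        acc = d0.copy()            # data term anchors to the upsampled depth
        wsum = np.ones_like(d)
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            shifted = np.roll(d, (dy, dx), axis=(0, 1))
            tex_shift = np.roll(texture_hr, (dy, dx), axis=(0, 1))
            # Smaller weight where texture changes (likely a depth boundary).
            # Note: np.roll wraps at image borders; acceptable for a sketch.
            wgt = lam * np.exp(-np.abs(texture_hr - tex_shift))
            acc += wgt * shifted
            wsum += wgt
        d = acc / wsum
    return d

# Sanity check: a constant depth map must remain constant after upscaling.
depth_lr = np.full((4, 4), 2.0)
texture = np.zeros((8, 8))
d_hr = weighted_depth_upscale(depth_lr, texture, scale=2, iters=10)
```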

  1. Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy

    PubMed Central

    Cha, Jae Won; Ballesta, Jerome; So, Peter T.C.

    2010-01-01

    The imaging depth of two-photon excitation fluorescence microscopy is partly limited by the inhomogeneity of the refractive index in biological specimens. This inhomogeneity results in a distortion of the wavefront of the excitation light. This wavefront distortion results in image resolution degradation and lower signal level. Using an adaptive optics system consisting of a Shack-Hartmann wavefront sensor and a deformable mirror, wavefront distortion can be measured and corrected. With adaptive optics compensation, we demonstrate that the resolution and signal level can be better preserved at greater imaging depth in a variety of ex-vivo tissue specimens including mouse tongue muscle, heart muscle, and brain. However, for these highly scattering tissues, we find signal degradation due to scattering to be a more dominant factor than aberration. PMID:20799824

  2. Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy.

    PubMed

    Cha, Jae Won; Ballesta, Jerome; So, Peter T C

    2010-01-01

    The imaging depth of two-photon excitation fluorescence microscopy is partly limited by the inhomogeneity of the refractive index in biological specimens. This inhomogeneity results in a distortion of the wavefront of the excitation light. This wavefront distortion results in image resolution degradation and lower signal level. Using an adaptive optics system consisting of a Shack-Hartmann wavefront sensor and a deformable mirror, wavefront distortion can be measured and corrected. With adaptive optics compensation, we demonstrate that the resolution and signal level can be better preserved at greater imaging depth in a variety of ex-vivo tissue specimens including mouse tongue muscle, heart muscle, and brain. However, for these highly scattering tissues, we find signal degradation due to scattering to be a more dominant factor than aberration.

  3. Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarthy, Aongus; Collins, Robert J.; Krichel, Nils J.

    2009-11-10

    We describe a scanning time-of-flight system which uses the time-correlated single-photon counting technique to produce three-dimensional depth images of distant, noncooperative surfaces when these targets are illuminated by a kHz to MHz repetition rate pulsed laser source. The data for the scene are acquired using a scanning optical system and an individual single-photon detector. Depth images have been successfully acquired with centimeter xyz resolution, in daylight conditions, for low-signature targets in field trials at distances of up to 325 m using an output illumination with an average optical power of less than 50 μW.
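    The time-correlated single-photon counting principle above reduces, per scan position, to locating the return peak in a photon-arrival histogram and converting round-trip time to range via d = c·t/2; the bin width and target distance below are illustrative, not parameters of the described system:

```python
import numpy as np

C = 299792458.0  # speed of light in vacuum, m/s

def depth_from_histogram(counts, bin_width_s):
    """One-way range from the peak bin of a photon-arrival histogram:
    round-trip time t = argmax * bin width, depth d = c * t / 2."""
    t = np.argmax(counts) * bin_width_s
    return C * t / 2.0

# A target at ~325 m gives a round-trip delay of ~2.168 us; with 1 ns
# timing bins the photon return therefore peaks near bin 2168.
counts = np.zeros(4000)
counts[2168] = 100
d = depth_from_histogram(counts, 1e-9)  # -> ~325 m
```

    A real TCSPC system accumulates many sparse photon events per pixel before the histogram peak rises above the background, which is what makes daylight operation at microwatt power levels possible.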

  4. Extending the fundamental imaging-depth limit of multi-photon microscopy by imaging with photo-activatable fluorophores.

    PubMed

    Chen, Zhixing; Wei, Lu; Zhu, Xinxin; Min, Wei

    2012-08-13

    It is highly desirable to be able to optically probe biological activities deep inside live organisms. By employing a spatially confined excitation via a nonlinear transition, multiphoton fluorescence microscopy has become indispensable for imaging scattering samples. However, as the incident laser power drops exponentially with imaging depth due to scattering loss, the out-of-focus fluorescence eventually overwhelms the in-focal signal. The resulting loss of imaging contrast defines a fundamental imaging-depth limit, which cannot be overcome by increasing excitation intensity. Herein we propose to significantly extend this depth limit by multiphoton activation and imaging (MPAI) of photo-activatable fluorophores. The imaging contrast is drastically improved due to the created disparity of bright-dark quantum states in space. We demonstrate this new principle by both analytical theory and experiments on tissue phantoms labeled with synthetic caged fluorescein dye or genetically encodable photoactivatable GFP.

  5. Choroidal vasculature characteristics based choroid segmentation for enhanced depth imaging optical coherence tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qiang; Niu, Sijie; Yuan, Songtao

    Purpose: In clinical research, it is important to measure choroidal thickness when eyes are affected by various diseases. The main purpose is to automatically segment the choroid in enhanced depth imaging optical coherence tomography (EDI-OCT) images with five B-scans averaging. Methods: The authors present an automated choroid segmentation method based on choroidal vasculature characteristics for EDI-OCT images with five B-scans averaging. By considering that the large vessels of the Haller's layer neighbor the choroid-sclera junction (CSJ), the authors measured the intensity ascending distance and a maximum intensity image in the axial direction from a smoothed and normalized EDI-OCT image. Then, based on the generated choroidal vessel image, the authors constructed the CSJ cost and constrained the CSJ search neighborhood. Finally, graph search with smooth constraints was utilized to obtain the CSJ boundary. Results: Experimental results with 49 images from 10 eyes in 8 normal persons and 270 images from 57 eyes in 44 patients with several stages of diabetic retinopathy and age-related macular degeneration demonstrate that the proposed method can accurately segment the choroid of EDI-OCT images with five B-scans averaging. The mean choroid thickness difference and overlap ratio between the authors' proposed method and manual segmentation drawn by experts were −11.43 μm and 86.29%, respectively. Conclusions: Good performance was achieved for normal and pathologic eyes, which proves that the authors' method is effective for the automated choroid segmentation of EDI-OCT images with five B-scans averaging.

  6. Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method.

    PubMed

    Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping

    2017-04-03

    Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. Simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of the depth of field.

  7. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

    Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21mm in conventional B-mode images to 0.15mm in synthetic aperture images corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00mm to 1.78mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability resulting in improved calibration.

  8. Research on the underwater target imaging based on the streak tube laser lidar

    NASA Astrophysics Data System (ADS)

    Cui, Zihao; Tian, Zhaoshuo; Zhang, Yanchao; Bi, Zongjie; Yang, Gang; Gu, Erdan

    2018-03-01

    A high frame rate streak tube imaging lidar (STIL) for real-time 3D imaging of underwater targets is presented in this paper. The system uses a 532 nm pulsed laser as the light source, with a maximum repetition rate of 120 Hz and a pulse width of 8 ns. The system is built on the LabVIEW platform; system control, synchronous image acquisition, and 3D data processing and display are realized through a PC. A 3D imaging experiment on underwater targets was carried out in a flume with an attenuation coefficient of 0.2, and images of targets at different depths and of different materials were obtained; the imaging frame rate is 100 Hz, and the maximum detection depth is 31 m. For an underwater target at a distance of 22 m, high-resolution 3D images were acquired in real time with a range resolution of 1 cm and a spatial resolution of 0.3 cm, and the spatial relationship of the targets can be clearly identified from the image. The experimental results show that the STIL has good application prospects in underwater terrain detection, underwater search and rescue, and other fields.

  9. Analysis of the potential for non-invasive imaging of oxygenation at heart depth, using ultrasound optical tomography (UOT) or photo-acoustic tomography (PAT).

    PubMed

    Walther, Andreas; Rippe, Lars; Wang, Lihong V; Andersson-Engels, Stefan; Kröll, Stefan

    2017-10-01

    Despite the important medical implications, it is currently an open task to find optical non-invasive techniques that can image deep organs in humans. Addressing this, photo-acoustic tomography (PAT) has received a great deal of attention in the past decade, owing to favorable properties like high contrast and high spatial resolution. However, even with optimal components PAT cannot penetrate beyond a few centimeters, which still presents an important limitation of the technique. Here, we calculate the absorption contrast levels for PAT and for ultrasound optical tomography (UOT) and compare them to their relevant noise sources as a function of imaging depth. The results indicate that a new development in optical filters, based on rare-earth-ion crystals, can push the UOT technique significantly ahead of PAT. Such filters allow the contrast-to-noise ratio for UOT to be up to three orders of magnitude better than for PAT at depths of a few cm into the tissue. It also translates into a significant increase of the imaging depth of UOT compared to PAT, enabling deep organs to be imaged in humans in real time. Furthermore, such spectral hole-burning filters are not sensitive to speckle decorrelation from the tissue and can operate at nearly any angle of incident light, allowing good light collection. We theoretically demonstrate the improved performance in the medically important case of non-invasive optical imaging of the oxygenation level of the frontal part of the human myocardial tissue. Our results indicate that further studies on UOT are of interest and that the technique may have large impact on future directions of biomedical optics.

  10. A new data processing technique for Rayleigh-Taylor instability growth experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Yongteng; Tu, Shaoyong; Miao, Wenyong

    Typical face-on experiments for Rayleigh-Taylor instability study involve time-resolved radiography of an accelerated foil, with the line-of-sight of the radiography along the direction of motion. The usual method, which derives perturbation amplitudes from the face-on images, reverses the actual image transmission procedure, so the obtained results will have a large error in the case of large optical depth. In order to improve the accuracy of data processing, a new data processing technique has been developed to process the face-on images. This technique is based on the convolution theorem; refined solutions for the optical depth can be achieved by solving equations. Furthermore, we discuss both techniques for image processing, including the influence of the modulation transfer function of the imaging system and the backlighter spatial profile. We apply both methods to experimental results from the Shenguang-II laser facility, and the comparison shows that the new method effectively improves the accuracy of data processing.

  11. Image translation for single-shot focal tomography

    DOE PAGES

    Llull, Patrick; Yuan, Xin; Carin, Lawrence; ...

    2015-01-01

    Focus and depth of field are conventionally addressed by adjusting longitudinal lens position. More recently, combinations of deliberate blur and computational processing have been used to extend depth of field. Here we show that dynamic control of transverse and longitudinal lens position can be used to decode focus and extend depth of field without degrading static resolution. Our results suggest that optical image stabilization systems may be used for autofocus, extended depth of field, and 3D imaging.

  12. In vivo imaging of inducible tyrosinase gene expression with an ultrasound array-based photoacoustic system

    NASA Astrophysics Data System (ADS)

    Harrison, Tyler; Paproski, Robert J.; Zemp, Roger J.

    2012-02-01

    Tyrosinase, a key enzyme in the production of melanin, has shown promise as a reporter of genetic activity. While green fluorescent protein has been used extensively in this capacity, it is limited in its ability to provide information deep in tissue at a reasonable resolution. As melanin is a strong absorber of light, it is possible to image gene expression using tyrosinase with photoacoustic imaging technologies, resulting in excellent resolutions at multiple-centimeter depths. While our previous work has focused on creating and imaging MCF-7 cells with doxycycline-controlled tyrosinase expression, we have now established the viability of these cells in a murine model. Using an array-based photoacoustic imaging system with 5 MHz center frequency, we capture interleaved ultrasound and photoacoustic images of tyrosinase-expressing MCF-7 tumors both in a tissue-mimicking phantom and in vivo. Images of both the tyrosinase-expressing tumor and a control tumor are presented as both coregistered ultrasound-photoacoustic B-scan images and 3-dimensional photoacoustic volumes created by mechanically scanning the transducer. We find that the tyrosinase-expressing tumor is visible with a signal level 12 dB greater than that of the control tumor in vivo. Phantom studies with excised tumors show that the tyrosinase-expressing tumor is visible at depths in excess of 2 cm, and have suggested that our imaging system is sensitive to a transfection rate of less than 1%.

  13. Multispectral near-infrared reflectance and transillumination imaging of occlusal carious lesions: variations in lesion contrast with lesion depth

    NASA Astrophysics Data System (ADS)

    Simon, Jacob C.; Curtis, Donald A.; Darling, Cynthia L.; Fried, Daniel

    2018-02-01

    In vivo and in vitro studies have demonstrated that near-infrared (NIR) light at λ=1300-1700-nm can be used to acquire high contrast images of enamel demineralization without interference of stains. The objective of this study was to determine if a relationship exists between the NIR image contrast of occlusal lesions and the depth of the lesion. Extracted teeth with varying amounts of natural occlusal decay were measured using a multispectral-multimodal NIR imaging system which captures λ=1300-nm occlusal transillumination and λ=1500-1700-nm cross-polarized reflectance images. Image analysis software was used to calculate the lesion contrast detected in both images from matched positions of each imaging modality. Samples were serially sectioned across the lesion with a precision saw, and polarized light microscopy was used to measure the respective lesion depth relative to the dentinoenamel junction. Lesion contrast measured from NIR cross-polarized reflectance images positively correlated (p<0.05) with increasing lesion depth, and a statistically significant difference between inner enamel and dentin lesions was observed. The lateral width of pit and fissure lesions measured in both NIR cross-polarized reflectance and NIR transillumination positively correlated with lesion depth.
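
    The lesion contrast reported in studies like this one is typically computed from mean intensities in a lesion region and a neighboring sound-tissue region. A minimal sketch follows, assuming a (I_lesion - I_sound)/I_lesion definition for reflectance images in which lesions appear brighter than sound enamel; the exact definition and ROI selection used by the paper's software are assumptions here.

```python
import numpy as np

def lesion_contrast(lesion_roi, sound_roi):
    """Contrast between a bright lesion ROI and sound-tissue ROI.

    Computed as (I_L - I_S) / I_L from mean ROI intensities; ranges
    from 0 (lesion indistinguishable) toward 1 (maximum contrast).
    """
    i_l = float(np.mean(lesion_roi))
    i_s = float(np.mean(sound_roi))
    return (i_l - i_s) / i_l

# Example: lesion twice as bright as surrounding sound enamel
lesion = np.full((5, 5), 200.0)
sound = np.full((5, 5), 100.0)
print(lesion_contrast(lesion, sound))  # 0.5
```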

  14. Enhanced truncated-correlation photothermal coherence tomography with application to deep subsurface defect imaging and 3-dimensional reconstructions

    NASA Astrophysics Data System (ADS)

    Tavakolian, Pantea; Sivagurunathan, Koneswaran; Mandelis, Andreas

    2017-07-01

    Photothermal diffusion-wave imaging is a promising technique for non-destructive evaluation and medical applications. Several diffusion-wave techniques have been developed to produce depth-resolved planar images of solids and to overcome the imaging depth and image blurring limitations imposed by the physics of parabolic diffusion waves. Truncated-Correlation Photothermal Coherence Tomography (TC-PCT) is the most successful class of these methodologies to date, providing 3-D subsurface visualization with maximum depth penetration and high axial and lateral resolution. To extend the depth range and the axial and lateral resolution, we develop an in-depth analysis of TC-PCT, a novel imaging system with improved instrumentation, and a reconstruction algorithm optimized over the original TC-PCT technique. Thermal waves produced by a laser chirped pulsed heat source in a finite-thickness solid, together with the image reconstruction algorithm, are investigated from the theoretical point of view. 3-D visualization of subsurface defects utilizing the new TC-PCT system is reported. The results demonstrate that this method is able to detect subsurface defects at depths of ~4 mm in a steel sample, which exhibits a dynamic-range improvement by a factor of 2.6 compared to the original TC-PCT. This depth does not represent the upper limit of the enhanced TC-PCT. Lateral resolution in the steel sample was measured to be ~31 μm.

  15. A suite of phantom-based test methods for assessing image quality of photoacoustic tomography systems

    NASA Astrophysics Data System (ADS)

    Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Pfefer, T. Joshua

    2017-03-01

    As Photoacoustic Tomography (PAT) matures and undergoes clinical translation, objective performance test methods are needed to facilitate device development, regulatory clearance and clinical quality assurance. For mature medical imaging modalities such as CT, MRI, and ultrasound, tissue-mimicking phantoms are frequently incorporated into consensus standards for performance testing. A well-validated set of phantom-based test methods is needed for evaluating performance characteristics of PAT systems. To this end, we have constructed phantoms using a custom tissue-mimicking material based on PVC plastisol with tunable, biologically-relevant optical and acoustic properties. Each phantom is designed to enable quantitative assessment of one or more image quality characteristics including 3D spatial resolution, spatial measurement accuracy, ultrasound/PAT co-registration, uniformity, penetration depth, geometric distortion, sensitivity, and linearity. Phantoms contained targets including high-intensity point source targets and dye-filled tubes. This suite of phantoms was used to measure the dependence of performance of a custom PAT system (equipped with four interchangeable linear array transducers of varying design) on design parameters (e.g., center frequency, bandwidth, element geometry). Phantoms also allowed comparison of image artifacts, including surface-generated clutter and bandlimited sensing artifacts. Results showed that transducer design parameters create strong variations in performance including a trade-off between resolution and penetration depth, which could be quantified with our method. This study demonstrates the utility of phantom-based image quality testing in device performance assessment, which may guide development of consensus standards for PAT systems.

  16. Validation of multi-angle imaging spectroradiometer aerosol products in China

    Treesearch

    J. Liu; X. Xia; Z. Li; P. Wang; M. Min; WeiMin Hao; Y. Wang; J. Xin; X. Li; Y. Zheng; Z. Chen

    2010-01-01

    Based on AErosol RObotic NETwork and Chinese Sun Hazemeter Network data, the Multi-angle Imaging SpectroRadiometer (MISR) level 2 aerosol optical depth (AOD) products are evaluated in China. The MISR retrievals depict well the temporal aerosol trend in China with correlation coefficients exceeding 0.8 except for stations located in northeast China and at the...

  17. From Relativistic Electrons to X-ray Phase Contrast Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Garson, A. B.; Anastasio, M. A.

    2017-10-09

    We report the initial demonstrations of the use of single crystals in indirect x-ray imaging for x-ray phase contrast imaging at the Washington University in St. Louis Computational Bioimaging Laboratory (CBL). Based on single Gaussian peak fits to the x-ray images, we observed a four times smaller system point spread function (21 μm (FWHM)) with the 25-mm diameter single crystals than the reference polycrystalline phosphor’s 80-μm value. Potential fiber-optic plate depth-of-focus aspects and 33-μm diameter carbon fiber imaging are also addressed.

  18. Filtering high resolution hyperspectral imagery and analyzing it for quantification of water quality parameters and aquatic vegetation

    NASA Astrophysics Data System (ADS)

    Pande-Chhetri, Roshan

    High resolution hyperspectral imagery (airborne or ground-based) is gaining momentum as a useful analytical tool in various fields including agriculture and aquatic systems. These images are often contaminated with stripes and noise, resulting in a lower signal-to-noise ratio, especially in aquatic regions where the signal is naturally low. This research investigates effective methods for filtering high-spatial-resolution hyperspectral imagery and the use of the imagery in water quality parameter estimation and aquatic vegetation classification. The striping pattern of the hyperspectral imagery is non-parametric and difficult to filter. In this research, a de-striping algorithm based on wavelet analysis and adaptive Fourier-domain normalization was examined. This algorithm was found superior to other available algorithms and yielded the highest peak signal-to-noise ratio improvement. The algorithm was implemented on individual image bands and on selected bands of Maximum Noise Fraction (MNF)-transformed images. The results showed that image filtering in the MNF domain was efficient and produced the best results. The study investigated methods of analyzing hyperspectral imagery to estimate water quality parameters and to map aquatic vegetation in case-2 waters. Ground-based hyperspectral imagery was analyzed to determine chlorophyll-a (Chl-a) concentrations in aquaculture ponds. Two-band and three-band indices were implemented, and the effect of using submerged reflectance targets was evaluated. Laboratory-measured values were found to be in strong correlation with the two-band and three-band spectral indices computed from the hyperspectral image. Coefficient of determination (R2) values were 0.833 and 0.862 without submerged targets, and stronger values of 0.975 and 0.982 were obtained using submerged targets. Airborne hyperspectral images were used to detect and classify aquatic vegetation in a black river estuarine system. 
Image normalization for water-surface reflectance and water depth was conducted, and non-parametric classifiers such as ANN, SVM and SAM were tested and compared. Quality assessment indicated better classification and detection when non-parametric classifiers were applied to normalized or depth-invariant transformed images. The best classification accuracy of 73% was achieved when ANN was applied to the normalized image, and the best detection accuracy of around 92% was obtained when SVM or SAM was applied to depth-invariant images.
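
    Two-band and three-band Chl-a indices in this literature commonly take a reflectance-ratio form; a minimal sketch under that assumption (the specific band reflectance values and the final regression coefficients are hypothetical placeholders, not from this dissertation):

```python
def two_band_index(r_red, r_nir):
    # Simple NIR/red reflectance ratio; rises with Chl-a concentration
    return r_nir / r_red

def three_band_index(r_red, r_red_edge, r_nir):
    # Gitelson-style three-band form: (1/R_red - 1/R_rededge) * R_NIR
    return (1.0 / r_red - 1.0 / r_red_edge) * r_nir

# Chl-a is then estimated by regressing the index against lab-measured
# concentrations, e.g. chl = a * index + b, with a, b fit per data set.
print(two_band_index(0.02, 0.05))          # ≈ 2.5
print(three_band_index(0.02, 0.04, 0.05))  # ≈ (50 - 25) * 0.05 = 1.25
```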

  19. Enhancement of panoramic image resolution based on swift interpolation of Bezier surface

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Yang, Guo-guang; Bai, Jian

    2007-01-01

    A panoramic annular lens projects the entire 360-degree view around the optical axis onto an annular plane by means of flat-cylinder perspective. Owing to its infinite depth of field and the linear mapping relationship between object and image, the panoramic imaging system plays important roles in robot vision, surveillance and virtual reality. An annular image must be unwrapped into a conventional rectangular image without distortion, for which an interpolation algorithm is necessary. Although cubic-spline interpolation can enhance the resolution of the unwrapped image, it is too time-consuming to be applied in practice. This paper adopts an interpolation method based on Bezier surfaces and proposes a swift interpolation algorithm for panoramic images that takes their characteristics into account. The results indicate that the resolution of the image is well enhanced compared with images produced by cubic-spline and bilinear interpolation, while the time consumed is reduced by 78% relative to cubic interpolation.

  20. Modeling of Composite Scenes Using Wires, Plates and Dielectric Parallelized (WIPL-DP)

    DTIC Science & Technology

    2006-06-01

    …transmitter platform for use in image formation and solves the data communications problem. The ability to perform subsurface imaging to depths of 200' has already been demonstrated by Brown in [3] and presented in Figure 3 above. Furthermore, reference [3]…

  1. Derivation and Validation of Supraglacial Lake Volumes on the Greenland Ice Sheet from High-Resolution Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Moussavi, Mahsa S.; Abdalati, Waleed; Pope, Allen; Scambos, Ted; Tedesco, Marco; MacFerrin, Michael; Grigsby, Shane

    2016-01-01

    Supraglacial meltwater lakes on the western Greenland Ice Sheet (GrIS) are critical components of its surface hydrology and surface mass balance, and they also affect its ice dynamics. Estimates of lake volume, however, are limited by the availability of in situ measurements of water depth, which in turn also limits the assessment of remotely sensed lake depths. Given the logistical difficulty of collecting physical bathymetric measurements, methods relying upon in situ data are generally restricted to small areas and thus their application to large-scale studies is difficult to validate. Here, we produce and validate spaceborne estimates of supraglacial lake volumes across a relatively large area (1250 km^2) of west Greenland's ablation region using data acquired by the WorldView-2 (WV-2) sensor, making use of both its stereo-imaging capability and its meter-scale resolution. We employ spectrally-derived depth retrieval models, which are either based on absolute reflectance (single-channel model) or a ratio of spectral reflectances in two bands (dual-channel model). These models are calibrated by using WV-2 multispectral imagery acquired early in the melt season and depth measurements from a high-resolution WV-2 DEM over the same lake basins when devoid of water. The calibrated models are then validated with different lakes in the area, for which we determined depths. Lake depth estimates based on measurements recorded in WV-2's blue (450-510 nm), green (510-580 nm), and red (630-690 nm) bands and dual-channel modes (blue/green, blue/red, and green/red band combinations) had near-zero bias, an average root-mean-squared deviation of 0.4 m (relative to post-drainage DEMs), and an average volumetric error of <1%. The approach outlined in this study - image-based calibration of depth-retrieval models - significantly improves spaceborne supraglacial bathymetry retrievals, which are completely independent from in situ measurements.
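
    The two model families can be sketched as follows. This is a generic illustration of single-channel (Philpot-style absolute-reflectance) and dual-channel (log-ratio) depth retrieval, not the paper's exact calibration; all parameter values below are assumptions.

```python
import math

def depth_single_channel(r_obs, a_d, r_inf, g):
    """Single-channel model: z = [ln(A_d - R_inf) - ln(R_obs - R_inf)] / g.

    a_d: bottom albedo, r_inf: optically deep water reflectance,
    g: effective two-way attenuation coefficient (1/m). In practice these
    are calibrated against DEM-derived depths of drained lake basins.
    """
    return (math.log(a_d - r_inf) - math.log(r_obs - r_inf)) / g

def depth_dual_channel(r_band1, r_band2, m1, m0):
    """Dual-channel model: Stumpf-style log-ratio of two band reflectances
    with empirically calibrated slope m1 and offset m0 (simplified sketch)."""
    return m1 * (math.log(r_band1) / math.log(r_band2)) + m0

# Hypothetical pixel values with attenuation g = 0.8 / m
print(round(depth_single_channel(0.10, 0.30, 0.05, 0.8), 2))  # ≈ 2.01 m
print(round(depth_dual_channel(0.04, 0.20, 1.2, -1.0), 2))    # ≈ 1.4 m
```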

  2. Tools to Improve the Accuracy of Kidney Stone Sizing with Ultrasound

    PubMed Central

    Dunmire, Barbrina; Hsi, Ryan S.; Cunitz, Bryan W.; Paun, Marla; Bailey, Michael R.; Sorensen, Mathew D.; Harper, Jonathan D.

    2015-01-01

    Abstract Purpose: Ultrasound (US) overestimates stone size when compared with CT. The purpose of this work was to evaluate the overestimation of stone size with US in an in vitro water bath model and investigate methods to reduce overestimation. Materials and Methods: Ten human stones (3–12 mm) were measured using B-mode (brightness mode) US by a sonographer blinded to the true stone size. Images were captured and compared using both a commercial US machine and a software-based research US device. Image gain was adjusted between moderate and high stone intensities, and the transducer-to-stone depth was varied from 6 to 10 cm. A computerized stone-sizing program was developed to outline the stone width based on a grayscale intensity threshold. Results: Overestimation with the commercial device increased with both gain and depth. Average overestimation at moderate and high gain was 1.9±0.8 and 2.1±0.9 mm, respectively (p=0.6). Overestimation increased by an average of 22% for every 2-cm increase in depth (p=0.02). Overestimation using the research device was 1.5±0.9 mm and did not vary with depth (p=0.28). Overestimation could be reduced to 0.02±1.1 mm (p<0.001) with the computerized stone-sizing program. However, a standardized threshold consistent across depth, system, or system settings could not be resolved. Conclusion: Stone size is consistently overestimated with US. Overestimation increased with increasing depth and gain using the commercial machine. Overestimation was reduced, and did not vary with depth, using the software-based US device. The computerized stone-sizing program shows the potential to reduce overestimation by implementing a grayscale intensity threshold for defining the stone size. More work is needed to standardize the approach, but if successful, such an approach could significantly improve stone-sizing accuracy and lead to automation of stone sizing. PMID:25105243
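
    The threshold-based sizing idea can be sketched as below: measure the stone width as the lateral extent of pixels above a fraction of the peak grayscale intensity. The 50% default threshold is an assumption for illustration; as the abstract notes, a single standardized threshold valid across depths, systems and settings was not resolved.

```python
import numpy as np

def stone_width(profile, pixel_mm, frac=0.5):
    """Estimate stone width from a lateral B-mode intensity profile.

    Thresholds at `frac` of the peak intensity and measures the span of
    the super-threshold region, converted to mm via the pixel pitch.
    """
    profile = np.asarray(profile, dtype=float)
    thr = frac * profile.max()
    above = np.flatnonzero(profile >= thr)   # indices of bright pixels
    return (above[-1] - above[0] + 1) * pixel_mm

# Synthetic profile: 6 bright pixels at 0.5 mm/pixel -> 3.0 mm width
p = [0, 10, 90, 100, 95, 92, 88, 85, 20, 0]
print(stone_width(p, 0.5))  # 3.0
```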

  3. Implementation of Multi-Agent Object Attention System Based on Biologically Inspired Attractor Selection

    NASA Astrophysics Data System (ADS)

    Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao

    A multi-agent object attention system is proposed, based on a biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor, TOMBO. The robustness of the multi-agent system to environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed, reducing the enormous computational costs and memory accesses required for depth-map processing and the multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which occupies 7,063 slices, 640×512-pixel input images can be processed in real time with three agents at a rate of 9 fps at 48 MHz operation.

  4. Comparison of cortical folding measures for evaluation of developing human brain.

    PubMed

    Shimony, Joshua S; Smyser, Christopher D; Wideman, Graham; Alexopoulos, Dimitrios; Hill, Jason; Harwell, John; Dierker, Donna; Van Essen, David C; Inder, Terrie E; Neil, Jeffrey J

    2016-01-15

    We evaluated 22 measures of cortical folding, 20 derived from local curvature (curvature-based measures) and two based on other features (sulcal depth and gyrification index), for their capacity to distinguish between normal and aberrant cortical development. Cortical surfaces were reconstructed from 12 term-born control and 63 prematurely-born infants. Preterm infants underwent 2-4 MR imaging sessions between 27 and 42 weeks postmenstrual age (PMA). Term infants underwent a single MR imaging session during the first postnatal week. Preterm infants were divided into two groups. One group (38 infants) had no/minimal abnormalities on qualitative assessment of conventional MR images. The second group (25 infants) consisted of infants with injury on conventional MRI at term equivalent PMA. For both preterm infant groups, all folding measures increased or decreased monotonically with increasing PMA, but only sulcal depth and gyrification index differentiated preterm infants with brain injury from those without. We also compared scans obtained at term equivalent PMA (36-42 weeks) for all three groups. No curvature-based measure distinguished between the groups, whereas sulcal depth distinguished term control from injured preterm infants and gyrification index distinguished all three groups. When incorporating total cerebral volume into the statistical model, sulcal depth no longer distinguished between the groups, though gyrification index distinguished between all three groups and positive shape index distinguished between the term control and uninjured preterm groups. We also analyzed folding measures averaged over brain lobes separately. These results demonstrated similar patterns to those obtained from the whole brain analyses. Overall, though the curvature-based measures changed during this period of rapid cerebral development, they were not sensitive for detecting the differences in folding associated with brain injury and/or preterm birth. 
In contrast, gyrification index was effective in differentiating these groups. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Combined photoacoustic and magneto-acoustic imaging.

    PubMed

    Qu, Min; Mallidi, Srivalleesha; Mehrmohammadi, Mohammad; Ma, Li Leo; Johnston, Keith P; Sokolov, Konstantin; Emelianov, Stanislav

    2009-01-01

    Ultrasound is a widely used modality with excellent spatial resolution, low cost, portability, reliability and safety. In clinical practice and in the biomedical field, molecular ultrasound-based imaging techniques are desired to visualize tissue pathologies, such as cancer. In this paper, we present an advanced imaging technique - combined photoacoustic and magneto-acoustic imaging - capable of visualizing the anatomical, functional and biomechanical properties of tissues or organs. The experiments to test the combined imaging technique were performed using dual, nanoparticle-based contrast agents that exhibit the desired optical and magnetic properties. The results of our study demonstrate the feasibility of combined photoacoustic and magneto-acoustic imaging, which takes advantage of each imaging technique and provides high sensitivity, reliable contrast and good penetration depth. Therefore, the developed imaging technique can be used in a wide range of biomedical and clinical applications.

  6. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  7. Comparison of the depth of an optic nerve head obtained using stereo retinal images and HRT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Aoyama, Akira; Hara, Takeshi; Kakogawa, Masakatsu; Fujita, Hiroshi; Yamamoto, Tetsuya

    2007-03-01

    The analysis of the optic nerve head (ONH) in the retinal fundus is important for the early detection of glaucoma. In this study, we investigate an automatic reconstruction method for producing the 3-D structure of the ONH from a stereo retinal image pair; the depth value of the ONH measured by using this method was compared with the measurement results determined from the Heidelberg Retina Tomograph (HRT). We propose a technique to obtain the depth value from the stereo image pair, which mainly consists of four steps: (1) cutout of the ONH region from the retinal images, (2) registration of the stereo pair, (3) disparity detection, and (4) depth calculation. In order to evaluate the accuracy of this technique, the shape of the depression of an eyeball phantom with a circular dent, used to model the ONH, was generated from the stereo image pair and compared with physically measured values. The measurement results obtained with the eyeball phantom were approximately consistent with the physical measurements. The depth of the ONH obtained using the stereo retinal images was in accordance with the results obtained using the HRT. These results indicate that stereo retinal images could be useful for assessing the depth of the ONH for the diagnosis of glaucoma.
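
    The final depth-calculation step in pipelines like this reduces, in the idealized pinhole case, to standard stereo triangulation. A minimal sketch; the actual camera model for fundus stereo pairs is more involved, and the focal length, baseline and disparity values below are hypothetical.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Textbook stereo triangulation: z = f * B / d.

    focal_px: focal length in pixels, baseline_mm: camera separation,
    disparity_px: horizontal shift of a feature between the two images.
    Larger disparity means the point is closer to the cameras.
    """
    return focal_px * baseline_mm / disparity_px

# f = 1000 px, baseline 5 mm, disparity 10 px -> 500 mm
print(depth_from_disparity(10.0, 1000.0, 5.0))  # 500.0
```

Relative depth across the ONH (the cup depression) then comes from differences of such per-pixel depths rather than absolute range.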

  8. Depth and thermal sensor fusion to enhance 3D thermographic reconstruction.

    PubMed

    Cao, Yanpeng; Xu, Baobei; Ye, Zhangyu; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Li, Xin

    2018-04-02

    Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper we present a robust method for mobile and real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and a RGB-D sensor is calibrated geometrically and used for data capturing. Based on the underlying principle that temperature information remains robust against illumination and viewpoint changes, we present a Thermal-guided Iterative Closest Point (T-ICP) methodology to facilitate reliable 3D thermal scanning applications. The pose of sensing device is initially estimated using correspondences found through maximizing the thermal consistency between consecutive infrared images. The coarse pose estimate is further refined by finding the motion parameters that minimize a combined geometric and thermographic loss function. Experimental results demonstrate that complimentary information captured by multimodal sensors can be utilized to improve performance of 3D thermographic reconstruction. Through effective fusion of thermal and depth data, the proposed approach generates more accurate 3D thermal models using significantly less scanning data.
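
    The refinement step described above minimizes a combined geometric and thermographic loss. The sketch below illustrates that idea only: a point-to-point geometric residual plus a weighted thermal-consistency term. The weight `lam` and the point-to-point (rather than point-to-plane) form are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def combined_loss(src_pts, dst_pts, src_temp, dst_temp, lam=0.1):
    """Combined T-ICP-style objective over matched correspondences.

    src_pts/dst_pts: (N, 3) matched 3-D points from consecutive frames.
    src_temp/dst_temp: (N,) temperatures sampled at those points.
    Returns the mean of squared geometric distance plus lam times the
    squared temperature mismatch; pose refinement would minimize this.
    """
    geom = np.sum((src_pts - dst_pts) ** 2, axis=1)  # squared distances
    thermal = (src_temp - dst_temp) ** 2             # temperature mismatch
    return float(np.mean(geom + lam * thermal))

src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dst = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.0]])
t_src = np.array([30.0, 31.0])
t_dst = np.array([30.0, 30.0])
print(combined_loss(src, dst, t_src, t_dst))  # ≈ 0.055
```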

  9. Scanning transmission electron microscopy through-focal tilt-series on biological specimens.

    PubMed

    Trepout, Sylvain; Messaoudi, Cédric; Perrot, Sylvie; Bastin, Philippe; Marco, Sergio

    2015-10-01

    Since scanning transmission electron microscopy can produce high signal-to-noise ratio bright-field images of thick (≥500 nm) specimens, this tool is emerging as the method of choice to study thick biological samples via tomographic approaches. However, in a convergent-beam configuration, the depth of field is limited because only a thin portion of the specimen (from a few nanometres to tens of nanometres depending on the convergence angle) can be imaged in focus. A method known as through-focal imaging enables recovery of the full depth of information by combining images acquired at different levels of focus. In this work, we compare tomographic reconstruction with the through-focal tilt-series approach (a multifocal series of images per tilt angle) with reconstruction with the classic tilt-series acquisition scheme (one single-focus image per tilt angle). We visualised the base of the flagellum in the protist Trypanosoma brucei via an acquisition and image-processing method tailored to obtain quantitative and qualitative descriptors of reconstruction volumes. Reconstructions using through-focal imaging contained more contrast and more details for thick (≥500 nm) biological samples. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Correlation mapping: rapid method for retrieving microcirculation morphology from optical coherence tomography intensity images

    NASA Astrophysics Data System (ADS)

    Jonathan, E.; Enfield, J.; Leahy, M. J.

    2011-03-01

    The microcirculation plays a critical role in maintaining organ health and function by serving as the vascular bed where trophic and metabolic exchange between blood and tissue takes place. To facilitate regular assessment in vivo, noninvasive microcirculation imagers are required in clinics. Among this group of clinical devices are those that render microcirculation morphology, such as the nailfold capillaroscope, a common device for the early diagnosis and monitoring of microangiopathies. However, depth ambiguity disqualifies this and other similar techniques from medical tomography where, due to the 3-D nature of biological organs, imagers that support depth-resolved 2-D imaging and 3-D image reconstruction are required. Here, we introduce correlation-map OCT (cmOCT), a promising technique for microcirculation morphology imaging that combines standard optical coherence tomography with agile image-analysis software based on correlation statistics. Promising results are presented for microcirculation morphology imaging of the brain region of a small animal model, as well as measurements of vessel geometry at bifurcations, such as vessel diameters and branch angles. These data will be useful for obtaining cardiovascular characteristics such as volumetric flow, velocity profile and vessel-wall shear stress for the circulatory and respiratory systems.
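
    The cmOCT idea is to compute a windowed correlation between two consecutive OCT intensity B-scans: static tissue stays correlated between frames, while flowing blood decorrelates, so low correlation marks vessels. A minimal sketch under that interpretation; kernel size and normalization details here are assumptions, not the published implementation.

```python
import numpy as np

def correlation_map(frame_a, frame_b, k=3):
    """Windowed zero-mean normalized cross-correlation of two B-scans.

    Returns a map in [-1, 1]; values near 1 indicate static tissue,
    low values indicate motion (decorrelation), e.g. blood flow.
    """
    h, w = frame_a.shape
    r = k // 2
    out = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            a = frame_a[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            b = frame_b[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            out[y, x] = (a * b).sum() / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(0)
static = rng.random((8, 8))
cm = correlation_map(static, static)  # identical frames: no motion
print(round(float(cm[4, 4]), 3))      # 1.0 for static tissue
```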

  11. Saliency detection algorithm based on LSC-RC

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Tian, Weiye; Wang, Ding; Luo, Xin; Wu, Yingfei; Zhang, Yu

    2018-02-01

    Image saliency marks the most important region of an image, the one that attracts human visual attention and response. Preferentially allocating computational resources to this salient region is of great value for image analysis and synthesis, and improves salient-region detection. As a preprocessing step for other tasks in the image processing field, saliency detection has wide applications in image retrieval and image segmentation. Among these applications, the super-pixel saliency detection algorithm based on linear spectral clustering (LSC) has achieved good results. The saliency detection algorithm proposed in this paper improves on the regional contrast (RC) method by replacing its region-formation step with LSC super-pixel segmentation. After combining it with recent deep learning methods, the accuracy of salient-region detection is greatly improved. Finally, the superiority and feasibility of the super-pixel segmentation detection algorithm based on linear spectral clustering are demonstrated by comparative tests.

  12. High-resolution harmonic motion imaging (HR-HMI) for tissue biomechanical property characterization

    PubMed Central

    Ma, Teng; Qian, Xuejun; Chiu, Chi Tat; Yu, Mingyue; Jung, Hayong; Tung, Yao-Sheng; Shung, K. Kirk

    2015-01-01

    Background Elastography, capable of mapping the biomechanical properties of biological tissues, serves as a useful technique for clinicians to perform disease diagnosis and determine the stages of many diseases. Many acoustic radiation force (ARF)-based elastography methods, including acoustic radiation force impulse (ARFI) imaging and harmonic motion imaging (HMI), have been developed to remotely assess the elastic properties of tissues. However, due to the lower operating frequencies of these approaches, their spatial resolutions are insufficient for revealing stiffness distribution in small-scale applications, such as cancerous tumor margin detection, atherosclerotic plaque composition analysis and ophthalmologic tissue characterization. Though recently developed ARF-based optical coherence elastography (OCE) methods open a new window for high-resolution elastography, shallow imaging depths significantly limit their usefulness in clinics. Methods The aim of this study is to develop a high-resolution HMI method to assess tissue biomechanical properties with an acceptable field of view (FOV), using a 4 MHz ring transducer for efficient excitation and a 40 MHz needle transducer for accurate detection. Under precise alignment of the two confocal transducers, the high-resolution HMI system has a lateral resolution of 314 µm and an axial resolution of 
147 µm with an effective FOV of 2 mm in depth. Results The performance of this high resolution imaging system was validated on the agar-based tissue mimicking phantoms with different stiffness distributions. These data demonstrated the imaging system’s improved resolution and sensitivity on differentiating materials with varying stiffness. In addition, ex vivo imaging of a human atherosclerosis coronary artery demonstrated the capability of high resolution HMI in identifying layer-specific structures and characterizing atherosclerotic plaques based on their stiffness differences. Conclusions All together high resolution HMI appears to be a promising ultrasound-only technology for characterizing tissue biomechanical properties at the microstructural level to improve the image-based diseases diagnosis in multiple clinical applications. PMID:25694960

  13. High-resolution harmonic motion imaging (HR-HMI) for tissue biomechanical property characterization.

    PubMed

    Ma, Teng; Qian, Xuejun; Chiu, Chi Tat; Yu, Mingyue; Jung, Hayong; Tung, Yao-Sheng; Shung, K Kirk; Zhou, Qifa

    2015-02-01

    Elastography, capable of mapping the biomechanical properties of biological tissues, serves as a useful technique for clinicians to diagnose and stage many diseases. Many acoustic radiation force (ARF)-based elastography methods, including acoustic radiation force impulse (ARFI) imaging and harmonic motion imaging (HMI), have been developed to remotely assess the elastic properties of tissues. However, due to the low operating frequencies of these approaches, their spatial resolutions are insufficient for revealing stiffness distributions in small-scale applications such as cancerous tumor margin detection, atherosclerotic plaque composition analysis, and ophthalmologic tissue characterization. Although recently developed ARF-based optical coherence elastography (OCE) methods open a new window for high-resolution elastography, shallow imaging depths significantly limit their clinical usefulness. The aim of this study is to develop a high-resolution HMI method to assess tissue biomechanical properties with an acceptable field of view (FOV), using a 4 MHz ring transducer for efficient excitation and a 40 MHz needle transducer for accurate detection. Under precise alignment of the two confocal transducers, the high-resolution HMI system has a lateral resolution of 314 µm and an axial resolution of 147 µm, with an effective FOV of 2 mm in depth. The performance of this high-resolution imaging system was validated on agar-based tissue-mimicking phantoms with different stiffness distributions. These data demonstrated the imaging system's improved resolution and sensitivity in differentiating materials of varying stiffness. In addition, ex vivo imaging of a human atherosclerotic coronary artery demonstrated the capability of high-resolution HMI to identify layer-specific structures and characterize atherosclerotic plaques based on their stiffness differences. Altogether, high-resolution HMI appears to be a promising ultrasound-only technology for characterizing tissue biomechanical properties at the microstructural level to improve image-based disease diagnosis in multiple clinical applications.

  14. High-frequency Pulse-compression Ultrasound Imaging with an Annular Array

    NASA Astrophysics Data System (ADS)

    Mamou, J.; Ketterling, J. A.; Silverman, R. H.

    High-frequency ultrasound (HFU) allows fine-resolution imaging at the expense of limited depth-of-field (DOF) and shallow acoustic penetration depth. Coded-excitation imaging permits a significant increase in the signal-to-noise ratio (SNR) and therefore, the acoustic penetration depth. A 17-MHz, five-element annular array with a focal length of 31 mm and a total aperture of 10 mm was fabricated using a 25-μm thick piezopolymer membrane. An optimized 8-μs linear chirp spanning 6.5-32 MHz was used to excite the transducer. After data acquisition, the received signals were linearly filtered by a compression filter and synthetically focused. To compare the chirp-array imaging method with conventional impulse imaging in terms of resolution, a 25-μm wire was scanned and the -6-dB axial and lateral resolutions were computed at depths ranging from 20.5 to 40.5 mm. A tissue-mimicking phantom containing 10-μm glass beads was scanned, and backscattered signals were analyzed to evaluate SNR and penetration depth. Finally, ex-vivo ophthalmic images were formed and chirp-coded images showed features that were not visible in conventional impulse images.
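The coded-excitation scheme can be sketched in a few lines: transmit a linear chirp, then compress the received echo with a matched (correlation) filter so that the long excitation collapses into a short, high-SNR pulse at the reflector's delay. The chirp span and duration follow the abstract; the sampling rate and the single-reflector echo model are assumptions:

```python
import numpy as np

fs = 200e6                            # sampling rate, an assumed value
t = np.arange(0, 8e-6, 1 / fs)        # 8-us excitation window
f0, f1 = 6.5e6, 32e6                  # chirp span from the abstract
k = (f1 - f0) / t[-1]                 # linear chirp rate, Hz/s
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Toy echo: one point reflector delayed by 2 us
delay = int(2e-6 * fs)
echo = np.zeros(len(t) + 2 * delay)
echo[delay:delay + len(t)] = chirp

# Pulse compression by matched filtering (cross-correlation with the chirp)
compressed = np.correlate(echo, chirp, mode='valid')
peak = int(np.argmax(np.abs(compressed)))
print(peak / fs * 1e6)   # peak lag in microseconds ~ reflector delay
```

In the real system the compression filter is optimized rather than a plain matched filter, but the principle, correlating the received signal against the transmitted code, is the same.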

  15. Snow Depth Depicted on Mt. Lyell by NASA Airborne Snow Observatory

    NASA Image and Video Library

    2013-05-02

    A natural-color image of Mt. Lyell, the highest point in the Tuolumne River Basin (top image), is compared with a three-dimensional color composite image of Mt. Lyell from the NASA Airborne Snow Observatory depicting snow depth (bottom image).

  16. Depth-tunable three-dimensional display with interactive light field control

    NASA Astrophysics Data System (ADS)

    Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chenyu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    A software-defined depth-tunable three-dimensional (3D) display with interactive 3D depth control is presented. With the proposed post-processing system, the disparity of multi-view media can be freely adjusted. Benefiting from the wealth of information inherently contained in dense multi-view images captured with a parallel-arrangement camera array, the 3D light field is built, and the light field structure is controlled to adjust the disparity without additionally acquired depth information, since the light field structure itself contains depth information. A statistical analysis based on least squares is carried out to extract the depth information inherent in the light field structure, and the accurate depth information can be used to re-parameterize light fields for the autostereoscopic display, so that a smooth motion parallax can be guaranteed. Experimental results show that the system is convenient and effective for adjusting the 3D scene performance on the 3D display.

  17. Wide-bandwidth, wide-beamwidth, high-resolution, millimeter-wave imaging for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Fernandes, Justin L.; Tedeschi, Jonathan R.; McMakin, Douglas L.; Jones, A. Mark; Lechelt, Wayne M.; Severtsen, Ronald H.

    2013-05-01

    Active millimeter-wave imaging is currently being used for personnel screening at airports and other high-security facilities. The cylindrical imaging techniques used in the deployed systems are based on licensed technology developed at the Pacific Northwest National Laboratory. The cylindrical and a related planar imaging technique form three-dimensional images by scanning a diverging-beam swept-frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images of the person being screened. The resolution, clothing penetration, and image illumination quality obtained with these techniques can be significantly enhanced through the selection of the aperture size, antenna beamwidth, center frequency, and bandwidth. The lateral resolution can be improved by increasing the center frequency, or it can be increased with a larger antenna beamwidth. The wide-beamwidth approach can significantly improve illumination quality relative to a higher-frequency system. Additionally, a wide antenna beamwidth allows for operation at a lower center frequency, resulting in less scattering and attenuation from the clothing. The depth resolution of the system can be improved by increasing the bandwidth. Utilization of extremely wide bandwidths of up to 30 GHz can result in depth resolution as fine as 5 mm. This wider-bandwidth operation may allow for improved detection techniques based on high range resolution. In this paper, the results of an extensive imaging study that explored the advantages of using extremely wide beamwidth and bandwidth are presented, primarily for the 10-40 GHz frequency band.
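The bandwidth-to-depth-resolution relationship used above follows the standard swept-frequency range-resolution formula, delta_r = c / (2B); the abstract's 30 GHz bandwidth gives roughly 5 mm. A minimal check:

```python
# Depth (range) resolution of a swept-frequency imager is set by bandwidth:
# delta_r = c / (2 * B).
c = 299_792_458.0   # speed of light, m/s
for bw_ghz in (10, 20, 30):
    delta_r_mm = c / (2 * bw_ghz * 1e9) * 1e3
    print(f"B = {bw_ghz} GHz -> depth resolution ~ {delta_r_mm:.1f} mm")
```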

  18. Natural Crack Sizing Based on Eddy Current Image and Electromagnetic Field Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endo, H.; Uchimoto, T.; Takagi, T.

    2006-03-06

    An eddy current testing (ECT) system with multi-coil probes is applied to size cracks fabricated in austenitic stainless steel plates. We have developed a multi-channel ECT system that produces data as digital images. The probes consist of transmit-receive sensor elements that classify crack directions, working in two scan-direction modes simultaneously. Template matching applied to the ECT images determines regions of interest for crack sizing. A finite-element-based inversion then estimates the crack depth from the measured ECT signal. The present paper demonstrates this approach for fatigue cracks and stress corrosion cracking.
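The template-matching step, locating a region of interest in a scan image, is commonly done by normalized cross-correlation; a brute-force sketch (the actual matching criterion used in the paper is not specified, so this is illustrative):

```python
import numpy as np

def match_template_ncc(image, template):
    """Locate a template in an image by normalized cross-correlation.
    Returns the top-left position of the best match and its NCC score."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t**2).sum())
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            w = w - w.mean()
            denom = np.sqrt((w**2).sum()) * tnorm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

An exact copy of the template in the image scores 1.0; flat (zero-variance) windows score 0.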

  19. True 3D digital holographic tomography for virtual reality applications

    NASA Astrophysics Data System (ADS)

    Downham, A.; Abeywickrema, U.; Banerjee, P. P.

    2017-09-01

    Previously, a single CCD camera has been used to record holograms of an object while the object is rotated about a single axis to reconstruct a pseudo-3D image, which does not show detailed depth information from all perspectives. To generate a true 3D image, the object has to be rotated through multiple angles and along multiple axes. In this work, to reconstruct a true 3D image including depth information, a die is rotated along two orthogonal axes, and holograms are recorded using a Mach-Zehnder setup, which are subsequently numerically reconstructed. This allows for the generation of multiple images containing phase (i.e., depth) information. These images, when combined, create a true 3D image with depth information which can be exported to a Microsoft® HoloLens for true 3D virtual reality.

  20. Fiber bundle endomicroscopy with multi-illumination for three-dimensional reflectance image reconstruction

    NASA Astrophysics Data System (ADS)

    Ando, Yoriko; Sawahata, Hirohito; Kawano, Takeshi; Koida, Kowa; Numano, Rika

    2018-02-01

    Bundled fiber optics allow in vivo imaging at deep sites in the body. The intrinsic optical contrast reveals detailed structures in blood vessels and organs. We developed a bundled-fiber-coupled endomicroscope enabling stereoscopic three-dimensional (3-D) reflectance imaging with a multipositional illumination scheme. Two illumination sites were attached to obtain reflectance images under left and right illumination. Depth was estimated from the horizontal disparity between the two images under alternating illumination and was calibrated with targets of known depth. This depth reconstruction was applied to an animal model to obtain the 3-D structure of blood vessels of the cerebral cortex and the preputial gland. The 3-D endomicroscope could be instrumental for micro-level reflectance imaging, improving the precision of subjective depth perception, spatial orientation, and identification of anatomical structures.
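The disparity-to-depth calibration can be sketched as follows; the calibration pairs and the linear form of the mapping are hypothetical assumptions (the actual calibration curve would be measured with the known-depth targets):

```python
import numpy as np

# Hypothetical calibration: horizontal disparity (px) measured for targets
# of known depth (um) under the two alternating illuminations.
calib_disparity = np.array([2.0, 4.0, 6.0, 8.0])    # px
calib_depth = np.array([100.0, 200.0, 300.0, 400.0])  # um

# Least-squares linear fit of depth against disparity
a, b = np.polyfit(calib_disparity, calib_depth, 1)

def depth_from_disparity(d_px):
    """Map a measured disparity to an estimated depth via the fit."""
    return a * d_px + b

print(depth_from_disparity(5.0))   # interpolated depth estimate, um
```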

  1. Accuracy of frame-based stereotactic depth electrode implantation during craniotomy for subdural grid placement.

    PubMed

    Munyon, Charles N; Koubeissi, Mohamad Z; Syed, Tanvir U; Lüders, Hans O; Miller, Jonathan P

    2013-01-01

    Frame-based stereotaxy and open craniotomy may seem mutually exclusive, but invasive electrophysiological monitoring can require broad sampling of the cortex and precise targeting of deeper structures. The purpose of this study is to describe simultaneous frame-based insertion of depth electrodes and craniotomy for placement of subdural grids through a single surgical field and to determine the accuracy of depth electrodes placed using this technique. A total of 6 patients with intractable epilepsy underwent placement of a stereotactic frame with the center of the planned cranial flap equidistant from the fixation posts. After volumetric imaging, craniotomy for placement of subdural grids was performed. Depth electrodes were placed using frame-based stereotaxy. Postoperative CT determined the accuracy of electrode placement. A total of 31 depth electrodes were placed. Mean distance of distal electrode contact from the target was 1.0 ± 0.15 mm. Error was correlated to distance to target, with an additional 0.35 mm error for each centimeter (r = 0.635, p < 0.001); when corrected, there was no difference in accuracy based on target structure or method of placement (prior to craniotomy vs. through grid, p = 0.23). The described technique for craniotomy through a stereotactic frame allows placement of subdural grids and depth electrodes without sacrificing the accuracy of a frame or requiring staged procedures.
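The accuracy analysis above (tip-to-target error per electrode, then a linear fit of error against distance to target) can be sketched as follows; the coordinates and distances below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical electrode tip positions and planned targets, mm
tips = np.array([[10.2, 20.1, 30.0],
                 [15.0, 25.3, 35.1],
                 [8.0, 12.0, 50.5],
                 [30.4, 10.0, 22.2]])
targets = np.array([[10.0, 20.0, 30.0],
                    [15.0, 25.0, 35.0],
                    [8.0, 12.0, 50.0],
                    [30.0, 10.0, 22.0]])

# Euclidean tip-to-target error per electrode
errors = np.linalg.norm(tips - targets, axis=1)     # mm

# Regress error against distance to target (the study reports ~0.35 mm
# of additional error per cm, r = 0.635, on its real data)
distances = np.array([30.0, 45.0, 60.0, 55.0])      # mm
slope, intercept = np.polyfit(distances, errors, 1)  # mm error per mm
r = np.corrcoef(distances, errors)[0, 1]
print(errors, slope * 10, r)   # slope * 10 -> mm of error per cm
```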

  2. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    PubMed

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn their environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating life logs. The life-logging system is divided into two processes. First, the training stage includes data collection using a depth camera, feature extraction, and training for each activity via Hidden Markov Models. Second, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared to the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
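The two-stage scheme, training one HMM per activity and then labeling a new sequence by maximum likelihood, can be sketched with a minimal discrete-observation forward algorithm. The models, symbols, and parameters below are toy assumptions, not the paper's trained models:

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Forward algorithm: likelihood of a discrete observation sequence
    under an HMM (pi: initial, A: transition, B: emission)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return np.log(alpha.sum())

# Two toy activity models over 3 quantized skeleton-feature symbols
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
B_walk = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.2, 0.7]])
B_sit = np.array([[0.1, 0.8, 0.1],
                  [0.1, 0.8, 0.1]])

# Classify a sequence by whichever activity model scores it higher
seq = [0, 0, 2, 2, 0]
scores = {name: log_likelihood(seq, pi, A, B)
          for name, B in [("walk", B_walk), ("sit", B_sit)]}
print(max(scores, key=scores.get))
```

In the real system each activity's pi, A, and B would be learned from depth-silhouette features (e.g. by Baum-Welch), which is omitted here.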

  3. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    PubMed Central

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-01-01

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn their environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating life logs. The life-logging system is divided into two processes. First, the training stage includes data collection using a depth camera, feature extraction, and training for each activity via Hidden Markov Models. Second, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared to the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital. PMID:24991942

  4. Imaging latex–carbon nanotube composites by subsurface electrostatic force microscopy

    DOE PAGES

    Patel, Sajan; Petty, Clayton W.; Krafcik, Karen Lee; ...

    2016-09-08

    Electrostatic modes of atomic force microscopy have been shown to be non-destructive and relatively simple methods for imaging conductors embedded in insulating polymers. Here we use electrostatic force microscopy to image the dispersion of carbon nanotubes in a latex-based conductive composite, which brings forth features not observed in previously studied systems employing linear polymer films. A fixed-potential model of the probe-nanotube electrostatics is presented which in principle gives access to the conductive nanoparticle's depth and radius, and the polymer film dielectric constant. Comparing this model to the data results in nanotube depths that appear to be slightly above the film–air interface. Furthermore, this result suggests that water-mediated charge build-up at the film–air interface may be the source of electrostatic phase contrast in ambient conditions.

  5. Improved high-resolution ultrasonic imaging of the eye.

    PubMed

    Silverman, Ronald H; Ketterling, Jeffrey A; Mamou, Jonathan; Coleman, D Jackson

    2008-01-01

    Currently, virtually all clinical diagnostic ultrasound systems used in ophthalmology are based on fixed-focus, single-element transducers. High-frequency (≥20 MHz) transducers introduced to ophthalmology during the last decade have led to improved resolution and diagnostic capabilities for assessment of the anterior segment and the retina. However, single-element transducers are restricted to a small depth of field, limiting their capacity to image the eye as a whole. We fabricated a 20-MHz annular array probe prototype consisting of 5 concentric transducer elements and scanned an ex vivo human eye. Synthetically focused images of the eye-bank eye showed improved depth of field and sensitivity, allowing simultaneous display of the anterior and posterior segments and the full lens contour. This capability may be useful in assessment of vitreoretinal pathologies and investigation of the accommodative mechanism.

  6. Diffuse Optical Imaging and Spectroscopy of the Human Breast for Quantitative Oximetry with Depth Resolution

    NASA Astrophysics Data System (ADS)

    Yu, Yang

    Near-infrared spectral imaging for breast cancer diagnostics and monitoring has been a hot research topic for the past decade. Here we present instrumentation for diffuse optical imaging of breast tissue with a tandem scan of a single source-detector pair with broadband light in transmission geometry for tissue oximetry. The efforts to develop the continuous-wave (CW) domain instrument are described, and a frequency-domain (FD) system is also used to measure the bulk tissue optical properties and the breast thickness distribution. We also describe the efforts to improve the data-processing codes in the 2D spatial domain for better noise suppression, contrast enhancement, and spectral analysis. We developed a paired-wavelength approach, based on finding pairs of wavelengths that feature the same optical contrast, to quantify the tissue oxygenation for the absorption structures detected in the 2D structural image. A total of eighteen subjects, two of whom were bearing breast cancer on their right breasts, were measured with this hybrid CW/FD instrument and processed with the improved algorithms. We obtained an average tissue oxygenation value of 87% +/- 6% from the healthy breasts, significantly higher than that measured in the diseased breasts (69% +/- 14%) (p < 0.01). For the two diseased breasts, the tumor areas bear hypoxia signatures versus the remainder of the breast, with oxygenation values of 49% +/- 11% (diseased region) vs. 61% +/- 16% (healthy regions) for the breast with invasive ductal carcinoma, and 58% +/- 8% (diseased region) vs. 77% +/- 11% (healthy regions) for ductal carcinoma in situ. Our subjects came from various ethnic/racial backgrounds, and two-thirds of our subjects were under thirty years old, indicating a potential to apply optical mammography to a broad population. The second part of this thesis covers the topic of depth discrimination, which is lacking with our single source-detector scan system. 
Based on an off-axis detection method, we incorporated an additional detector to acquire a second set of images independently. We then proposed an inner-product approach to associate absorption structures detected in the on-axis image with those detected in the off-axis image. The spatial coordinate difference for the same structure between the two images is directly related to the depth of the corresponding structure, and the monotonic dependence can be quantified by perturbation theory of the diffusion equation. A preliminary phantom study shows good agreement between the measured and the actual depth of embedded structures, and human measurements show the capability to assign a depth coordinate to the more complex absorption structures inside the breast.
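The core of the depth-discrimination idea, the same structure appearing laterally shifted between the on-axis and off-axis images, with the shift growing monotonically with depth, can be illustrated on 1-D profiles. Here the shift is recovered by cross-correlation; the calibrated shift-to-depth map from diffusion theory is omitted, and all numbers are hypothetical:

```python
import numpy as np

# Synthetic on-axis profile: one Gaussian absorption structure
x = np.arange(200)
structure = np.exp(-0.5 * ((x - 100) / 5.0) ** 2)

# Off-axis profile: the same structure shifted laterally by 7 pixels,
# standing in for the depth-dependent displacement
shifted = np.roll(structure, 7)

# Recover the lateral shift by cross-correlation
corr = np.correlate(shifted, structure, mode='full')
shift_px = int(np.argmax(corr)) - (len(structure) - 1)
print(shift_px)   # recovered lateral shift, px
```

A calibration (phantom targets at known depths, or the diffusion-equation perturbation model) would then turn `shift_px` into a depth coordinate.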

  7. Clinical optical coherence tomography combined with multiphoton tomography for evaluation of several skin disorders

    NASA Astrophysics Data System (ADS)

    König, Karsten; Speicher, Marco; Bückle, Rainer; Reckfort, Julia; McKenzie, Gordon; Welzel, Julia; Koehler, Martin J.; Elsner, Peter; Kaatz, Martin

    2010-02-01

    The first clinical trial of optical coherence tomography (OCT) combined with multiphoton tomography (MPT) and dermoscopy is reported. State-of-the-art (i) OCT systems for dermatology (e.g. multibeam swept-source OCT), (ii) the femtosecond laser multiphoton tomograph DermaInspectTM, and (iii) digital dermoscopes were applied to 47 patients with a diversity of skin diseases and disorders such as skin cancer, psoriasis, hemangioma, connective tissue diseases, pigmented lesions, and autoimmune bullous skin diseases. Dermoscopy, also called 'epiluminescent microscopy', provides two-dimensional color images of the skin surface. OCT imaging is based on the detection of optical reflections within the tissue measured interferometrically, whereas nonlinear excitation of endogenous fluorophores and second harmonic generation are the bases of MPT images. An OCT cross-sectional "wide-field" image provides a typical field of view of 5 x 2 mm2 and offers fast information on the depth and volume of the investigated lesion. In comparison, multiphoton tomography presents 0.36 x 0.36 mm2 horizontal or diagonal sections of the region of interest within seconds, with submicron resolution and down to a tissue depth of 200 μm. The combination of OCT and MPT provides a synergistic optical imaging modality for early detection of skin cancer and other skin diseases.

  8. Needle-based polarization-sensitive OCT of breast tumor (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Villiger, Martin; Lorenser, Dirk; McLaughlin, Robert A.; Quirk, Bryden C.; Kirk, Rodney W.; Bouma, Brett E.; Sampson, David D.

    2016-03-01

    OCT imaging through miniature needle probes has extended the range of OCT and enabled structural imaging deep inside breast tissue, with the potential to assist in the intraoperative assessment of tumor margins. However, in many situations, scattering contrast alone is insufficient to clearly identify and delineate malignant areas. Here, we present a portable, depth-encoded polarization-sensitive OCT system, connected to a miniature needle probe. From the measured polarization states we constructed the tissue Mueller matrix at each sample location and improved the accuracy of the measured polarization states through incoherent averaging before retrieving the depth-resolved tissue birefringence. With the Mueller matrix at hand, additional polarization properties such as depolarization are readily available. We then imaged freshly excised breast tissue from a patient undergoing lumpectomy. The reconstructed local retardation highlighted regions of connective tissue, which exhibited birefringence due to the abundance of collagen fibers, and offered excellent contrast to areas of malignant tissue, which exhibited less birefringence due to their different tissue composition. Results were validated against co-located histology sections. The combination of needle-based imaging with the complementary contrast provided by polarization-sensitive analysis offers a powerful instrument for advanced tissue imaging and has potential to aid in the assessment of tumor margins during the resection of breast cancer.

  9. Optical coherence tomography - principles and applications

    NASA Astrophysics Data System (ADS)

    Fercher, A. F.; Drexler, W.; Hitzenberger, C. K.; Lasser, T.

    2003-02-01

    There have been three basic approaches to optical tomography since the early 1980s: diffraction tomography, diffuse optical tomography and optical coherence tomography (OCT). Optical techniques are of particular importance in the medical field, because these techniques promise to be safe and cheap and, in addition, offer a therapeutic potential. Advances in OCT technology have made it possible to apply OCT in a wide variety of applications, but medical applications are still dominating. Specific advantages of OCT are its high depth and transverse resolution, the fact that its depth resolution is decoupled from its transverse resolution, high probing depth in scattering media, contact-free and non-invasive operation, and the possibility of creating various function-dependent image-contrasting methods. This report presents the principles of OCT and the state of important OCT applications. OCT synthesises cross-sectional images from a series of laterally adjacent depth-scans. At present OCT is used in three different fields of optical imaging: in macroscopic imaging of structures which can be seen by the naked eye or using weak magnifications, in microscopic imaging using magnifications up to the classical limit of microscopic resolution, and in endoscopic imaging, using low and medium magnification. At first, OCT techniques such as the reflectometry technique and the dual-beam technique were based on time-domain low-coherence interferometry depth-scans. Later, Fourier-domain techniques were developed and led to new imaging schemes. Recently developed parallel OCT schemes eliminate the need for lateral scanning and, therefore, dramatically increase the imaging rate. These schemes use CCD cameras and CMOS detector arrays as photodetectors. Video-rate three-dimensional OCT pictures have been obtained. Modifying interference microscopy techniques has led to high-resolution optical coherence microscopy that has achieved sub-micrometre resolution. 
The report concludes with a short presentation of important OCT applications. Ophthalmology is, due to the transparent ocular structures, still the main field of OCT application, and the first commercial instrument, too, was introduced for ophthalmic diagnostics (Carl Zeiss Meditec AG). Advances in using near-infrared light, however, opened the path for OCT imaging in strongly scattering tissues. Today, optical in vivo biopsy is one of the most challenging fields of OCT application. High resolution, high penetration depth, and its potential for functional imaging lend OCT an optical-biopsy quality, which can be used to assess tissue and cell function and morphology in situ. OCT can already clarify the relevant architectural tissue morphology. For many diseases, however, including cancer in its early stages, higher resolution is necessary. New broad-bandwidth light sources, like photonic crystal fibres and superfluorescent fibre sources, and new contrasting techniques give access to new sample properties and unmatched sensitivity and resolution.

  10. Displays. [three dimensional analog visual system for aiding pilot space perception

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An experimental investigation made to determine the depth cue of a head movement perspective and image intensity as a function of depth is summarized. The experiment was based on the use of a hybrid computer generated contact analog visual display in which various perceptual depth cues are included on a two dimensional CRT screen. The system's purpose was to impart information, in an integrated and visually compelling fashion, about the vehicle's position and orientation in space. Results show head movement gives a 40% improvement in depth discrimination when the display is between 40 and 100 cm from the subject; intensity variation resulted in as much improvement as head movement.

  11. Sea-Floor Images and Data from Multibeam Surveys in San Francisco Bay, Southern California, Hawaii, the Gulf of Mexico, and Lake Tahoe, California-Nevada

    USGS Publications Warehouse

    Dartnell, Peter; Gardiner, James V.

    1999-01-01

    Accurate base maps are a prerequisite for any geologic study, regardless of the objectives. Land-based studies commonly utilize aerial photographs, USGS 7.5-minute quadrangle maps, and satellite images as base maps. Until now, studies that involve the ocean floor have been at a disadvantage due to an almost complete lack of accurate marine base maps. Many base maps of the sea floor have been constructed over the past century, but with a wide range of navigational and depth accuracies. Only in the past few years has marine surveying technology advanced far enough to produce navigational accuracy of 1 meter and depth resolutions of 50 centimeters. The Pacific Seafloor Mapping Project of the U.S. Geological Survey's Western Coastal and Marine Geology Program, Menlo Park, California, U.S.A., in cooperation with the Ocean Mapping Group, University of New Brunswick, Fredericton, Canada, is using this new technology to systematically map the ocean floor and lakes. This type of marine surveying, called multibeam surveying, collects high-resolution bathymetric and backscatter data that can be used for various base maps, GIS coverages, and scientific visualization methods. This is an interactive CD-ROM that contains images, movies, and data from all the surveys the Pacific Seafloor Mapping Project has completed up to January 1999. The images and movies on this CD-ROM, such as shaded relief of the bathymetry, backscatter, oblique views, 3-D views, and QuickTime movies, help the viewer to visualize the multibeam data. This CD-ROM also contains ARC/INFO export (.e00) files and full-resolution TIFF images of all the survey sites that can be downloaded and used in many GIS packages.

  12. The design of wavefront coded imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Shun; Cen, Zhaofeng; Li, Xiaotong

    2016-10-01

    Wavefront coding is a new method to extend the depth of field, combining optical design and signal processing. Using the optical design software ZEMAX, we designed a practical wavefront-coded imaging system based on a conventional Cooke triplet. Unlike a conventional optical system, the wavefront of the new system is modulated by a specially designed phase mask, which makes the point spread function (PSF) of the optical system insensitive to defocus. Therefore, a series of nearly identical blurred images is obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront-coded imaging system is independent of focus: it is nearly constant with misfocus and has no regions of zeros. All object information can therefore be recovered through digital filtering at different defocus positions. The focus invariance of the MTF is selected as the merit function in this design, and the coefficients of the phase mask are set as optimization variables. Compared to a conventional optical system, the wavefront-coded imaging system obtains better-quality images over a range of object distances. Some deficiencies appear in the restored images due to the digital filtering algorithm; these are also analyzed in this paper. The depth of field of the designed wavefront-coded imaging system is about 28 times larger than that of the initial optical system, while maintaining high optical power and resolution at the image plane.
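The defocus insensitivity produced by a phase mask can be illustrated with a 1-D pupil simulation: add a defocus phase to the pupil with and without a cubic mask, compute the PSF by Fourier transform, and compare how much the PSF changes. The cubic mask form and its strength are assumed for illustration; the paper's optimized mask coefficients are not reproduced here:

```python
import numpy as np

N = 512
x = np.linspace(-1, 1, N)               # normalized pupil coordinate
pupil = (np.abs(x) <= 1).astype(float)  # clear 1-D aperture
alpha = 30.0                            # cubic mask strength, waves (assumed)

def psf(defocus_waves, cubic=True):
    """PSF of the 1-D pupil with a quadratic defocus phase and an
    optional cubic mask phase, both in waves at the pupil edge."""
    phase = defocus_waves * x**2 + (alpha * x**3 if cubic else 0.0)
    field = pupil * np.exp(1j * 2 * np.pi * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft(field)))**2
    return p / p.sum()

def psf_change(cubic):
    """L1 change of the PSF between focus and 3 waves of defocus."""
    return float(np.abs(psf(0.0, cubic) - psf(3.0, cubic)).sum())

# The coded pupil's PSF varies far less across the same defocus range,
# which is what makes a single deconvolution filter work at all depths.
print(psf_change(cubic=True), psf_change(cubic=False))
```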

  13. Single shot, three-dimensional fluorescence microscopy with a spatially rotating point spread function

    PubMed Central

    Wang, Zhaojun; Cai, Yanan; Liang, Yansheng; Zhou, Xing; Yan, Shaohui; Dan, Dan; Bianco, Piero R.; Lei, Ming; Yao, Baoli

    2017-01-01

    A wide-field fluorescence microscope with a double-helix point spread function (PSF) is constructed to obtain the specimen's three-dimensional distribution with a single snapshot. Spiral-phase-based computer-generated holograms (CGHs) are adopted to make the depth of field of the microscope adjustable. The impact of system aberrations on the double-helix PSF at high numerical aperture is analyzed to reveal the necessity of aberration correction. A modified cepstrum-based reconstruction scheme is proposed in accordance with the properties of the new double-helix PSF. The extended depth-of-field images and the corresponding depth maps for both a simulated sample and a tilted section slice of bovine pulmonary artery endothelial (BPAE) cells are recovered, verifying that the depth of field is properly extended and that the depth of the specimen can be estimated with a precision of 23.4 nm. This three-dimensional fluorescence microscope, with a time resolution on the order of the camera frame rate, is suitable for studying fast developmental processes in thin, sparsely distributed micron-scale cells over an extended depth of field. PMID:29296483

  14. Quantifying how the combination of blur and disparity affects the perceived depth

    NASA Astrophysics Data System (ADS)

    Wang, Junle; Barkowsky, Marcus; Ricordel, Vincent; Le Callet, Patrick

    2011-03-01

    This paper studies the influence of a monocular depth cue, blur, on the apparent depth of stereoscopic scenes. When 3D images are shown on a planar stereoscopic display, binocular disparity becomes a pre-eminent depth cue, but it simultaneously induces a conflict between accommodation and vergence, which is often considered a main cause of visual discomfort. If we limit this visual discomfort by decreasing the disparity, the apparent depth also decreases. We propose to decrease the (binocular) disparity of 3D presentations and to reinforce (monocular) cues to compensate for the loss of perceived depth, keeping the apparent depth unaltered. We conducted a subjective experiment using a two-alternative forced choice task. Observers were required to identify the larger perceived depth in a pair of 3D images with/without blur. By fitting the results to a psychometric function, we obtained points of subjective equality in terms of disparity. We found that when blur is added to the background of the image, the viewer perceives greater depth compared to images without any background blur. The increase in perceived depth can be considered a function of the relative distance between the foreground and background, while it is insensitive to the distance between the viewer and the depth plane at which the blur is added.
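    The point of subjective equality (PSE) described above is recovered by fitting the proportion of "deeper" responses to a sigmoid and reading off its 50% point. A minimal sketch with hypothetical 2AFC data (the disparity levels and response proportions below are invented for illustration, not the study's measurements):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, pse, slope):
        """Psychometric function: probability of judging the comparison deeper."""
        return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

    # Hypothetical 2AFC data: disparity of the comparison stimulus (arcmin)
    # and proportion of trials on which it was judged deeper.
    disparity = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
    p_deeper = np.array([0.10, 0.22, 0.45, 0.62, 0.85, 0.95])

    # Fit; the PSE is the disparity at which responses cross 50%.
    (pse, slope), _ = curve_fit(logistic, disparity, p_deeper, p0=[6.0, 1.0])
    print(f"point of subjective equality ≈ {pse:.2f} arcmin")
    ```

    Comparing PSEs for image pairs with and without background blur quantifies how much disparity the blur cue can substitute for.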

  15. ALA-PpIX variability quantitatively imaged in A431 epidermoid tumors using in vivo ultrasound fluorescence tomography and ex vivo assay

    NASA Astrophysics Data System (ADS)

    DSouza, Alisha V.; Flynn, Brendan P.; Gunn, Jason R.; Samkoe, Kimberley S.; Anand, Sanjay; Maytin, Edward V.; Hasan, Tayyaba; Pogue, Brian W.

    2014-03-01

    Treatment monitoring of aminolevulinic acid (ALA) photodynamic therapy (PDT) of basal-cell carcinoma (BCC) calls for superficial and subsurface imaging techniques. While superficial imagers exist for this purpose, their ability to assess PpIX levels in thick lesions is poor; additionally, few treatment centers have the capability to measure ALA-induced PpIX production. An area of active research is improving treatment of deeper and nodular BCCs, in which treatment is least effective. The goal of this work was to understand the logistics and technical capabilities needed to quantify PpIX at depths over 1 mm, using a novel hybrid ultrasound-guided, fiber-based fluorescence molecular spectroscopic tomography system. This system uses a 633 nm excitation laser and detection with filtered spectrometers. Source and detection fibers are collinear so that their imaging plane matches that of the ultrasound transducer. Validation with phantoms and tumor-simulating fluorescent inclusions in mice showed sensitivity to fluorophore concentrations as low as 0.025 μg/ml at 4 mm depth from the surface, as presented in previous years. Image-guided quantification of ALA-induced PpIX production was completed in the subcutaneous xenograft epidermoid cancer tumor model A431 in nude mice. A total of 32 animals were imaged in vivo at several time points, including pre-ALA, 4 hours post-ALA, and 24 hours post-ALA administration. On average, PpIX production in tumors increased by over 10-fold at 4 hours post-ALA. Statistical analysis of PpIX fluorescence showed significant differences among all groups (p<0.05). Results were validated by ex vivo imaging of resected tumors. Details of imaging, analysis, and results will be presented to illustrate variability and the potential for imaging these values at depth.

  16. Miniature all-optical probe for photoacoustic and ultrasound dual-modality imaging

    NASA Astrophysics Data System (ADS)

    Li, Guangyao; Guo, Zhendong; Chen, Sung-Liang

    2018-02-01

    Photoacoustic (PA) imaging forms an image based on optical absorption contrasts with ultrasound (US) resolution. In contrast, US imaging is based on acoustic backscattering to provide structural information. In this study, we develop a miniature all-optical probe for high-resolution PA-US dual-modality imaging over a large imaging depth range. The probe employs three individual optical fibers (F1-F3) to achieve optical generation and detection of acoustic waves for both PA and US modalities. To offer wide-angle laser illumination, fiber F1 with a large numerical aperture (NA) is used for PA excitation. On the other hand, wide-angle US waves are generated by laser illumination on an optically absorbing composite film which is coated on the end face of fiber F2. Both the excited PA and backscattered US waves are detected by a Fabry-Pérot cavity on the tip of fiber F3 for wide-angle acoustic detection. The wide angular features of the three optical fibers make large-NA synthetic aperture focusing technique possible and thus high-resolution PA and US imaging. The probe diameter is less than 2 mm. Over a depth range of 4 mm, lateral resolutions of PA and US imaging are 104-154 μm and 64-112 μm, respectively, and axial resolutions of PA and US imaging are 72-117 μm and 31-67 μm, respectively. To show the imaging capability of the probe, phantom imaging with both PA and US contrasts is demonstrated. The results show that the probe has potential for endoscopic and intravascular imaging applications that require PA and US contrast with high resolution.
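    The large-NA synthetic aperture focusing technique (SAFT) the probe enables is, at its core, delay-and-sum beamforming: each image pixel sums the recorded signals at the delays implied by the round-trip path. A simplified monostatic sketch (array geometry, sound speed, and sampling rate below are illustrative assumptions, not the probe's actual parameters):

    ```python
    import numpy as np

    def saft(ascans, x_elems, x_pix, z_pix, c, fs):
        """Monostatic delay-and-sum synthetic aperture focusing.
        ascans:  (n_elems, n_samples) received A-scans
        x_elems: lateral positions of the scan positions [m]
        x_pix, z_pix: lateral / depth pixel grids [m]
        c: sound speed [m/s]; fs: sampling rate [Hz]."""
        img = np.zeros((len(z_pix), len(x_pix)))
        for iz, z in enumerate(z_pix):
            for ix, x in enumerate(x_pix):
                for ie, xe in enumerate(x_elems):
                    # two-way travel time: element -> pixel -> element
                    s = int(np.round(2.0 * np.hypot(x - xe, z) / c * fs))
                    if 0 <= s < ascans.shape[1]:
                        img[iz, ix] += ascans[ie, s]
        return img
    ```

    The wider the angular aperture (hence the probe's wide-angle fibers), the larger the effective NA of this synthetic focus and the finer the lateral resolution.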

  17. Seismic constraints on the lithosphere-asthenosphere boundary

    NASA Astrophysics Data System (ADS)

    Rychert, Catherine A.

    2014-05-01

    The basic tenet of plate tectonics is that a rigid plate, or lithosphere, moves over a weaker asthenospheric layer. However, the exact location and defining mechanism of the boundary at the base of the plate, the lithosphere-asthenosphere boundary (LAB), is debated. The oceans should represent a simple scenario: the lithosphere is predicted to thicken with seafloor age if it is thermally defined, whereas a constant plate thickness might indicate a compositional definition. However, the oceans are remote and difficult to constrain, and studies with different sensitivities and resolutions have come to different conclusions. Hotspot regions lend additional insight, since they are relatively well instrumented with seismic stations, and since the effect of a thermal plume on the LAB should depend on the defining mechanism of the plate. Here I present new results using S-to-P receiver functions to image upper mantle discontinuity structure beneath volcanically active regions including Hawaii, Iceland, Galapagos, and Afar. In particular I focus on the lithosphere-asthenosphere boundary and discontinuities related to the base of melting, which can be used to highlight plume locations. I image a lithosphere-asthenosphere boundary in the 50-95 km depth range beneath Hawaii, Galapagos, and Iceland. Although LAB depth variations exist within these regions, significant thinning is not observed in the locations of hypothesized plume impingement from receiver functions (see below). Since a purely thermally defined lithosphere is expected to thin significantly in the presence of a thermal plume anomaly, a compositional component in the definition of the LAB is implied. Beneath Afar, an LAB is imaged at 75 km depth on the flank of the rift, but no LAB is imaged beneath the rift itself. The transition from the flank to the rift is relatively abrupt, again suggesting something other than a purely thermally defined lithosphere.
Melt may also exist in the asthenosphere in these regions of hotspot volcanism. Indeed, S-to-P imaging also reveals strong velocity increases that are likely related to the base of a melt-rich layer beneath the oceanic LAB. This discontinuity may highlight plume locations, since melt is predicted to occur deeper at thermal anomalies. For instance, beneath Hawaii the base of melting increases from 110 km to 155 km depth 100 km west of Hawaii, i.e., the likely location of plume impingement on the lithosphere. Beneath Galapagos the discontinuity is deeper in three sectors, all off the island axis, suggesting multiple plume diversions and complex plume-ridge interactions. Beneath Iceland, deepening is imaged to the northeast of the island. Beneath the Afar rift a shallow melt discontinuity is imaged at ~75 km, suggesting that the plume is located outside the study region. Overall, the deepest realizations of the discontinuities agree with the slowest velocities from surface waves, but they are not located directly beneath surface volcanoes. This suggests that either plumes approach the surface at an angle or that restite roots beneath hotspots divert plumes at shallow depths. In either case, mantle melts are likely guided from the location of impingement on the lithosphere to present-day surface volcanoes by pre-existing structures of the lithosphere.

  18. Determination of cup-to-disc ratio of optical nerve head for diagnosis of glaucoma on stereo retinal fundus image pairs

    NASA Astrophysics Data System (ADS)

    Muramatsu, Chisako; Nakagawa, Toshiaki; Sawada, Akira; Hatanaka, Yuji; Hara, Takeshi; Yamamoto, Tetsuya; Fujita, Hiroshi

    2009-02-01

    A large cup-to-disc (C/D) ratio, which is the ratio of the diameter of the depression (cup) to that of the optical nerve head (ONH, disc), can be one of the important signs for diagnosis of glaucoma. Eighty eyes, including 25 eyes with the signs of glaucoma, were imaged by a stereo retinal fundus camera. An ophthalmologist provided the outlines of cup and disc on a regular monitor and on the stereo display. The depth image of the ONH was created by determining the corresponding pixels in a pair of images based on the correlation coefficient in localized regions. The areas of the disc and cup were determined by use of the red component in one of the color images and by use of the depth image, respectively. The C/D ratio was determined based on the largest vertical lengths in the cup and disc areas, which was then compared with that by the ophthalmologist. The disc areas determined by the computerized method agreed relatively well with those determined by the ophthalmologist, whereas the agreement for the cup areas was somewhat lower. When C/D ratios were employed for distinction between the glaucomatous and non-glaucomatous eyes, the area under the receiver operating characteristic curve (AUC) was 0.83. The computerized analysis of ONH can be useful for diagnosis of glaucoma.
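    Once the cup and disc regions are segmented (here from the depth image and the red channel, respectively), the vertical C/D ratio reduces to comparing the largest vertical extents of the two regions. A minimal sketch over binary masks, with the segmentation itself assumed given:

    ```python
    import numpy as np

    def vertical_cd_ratio(cup_mask, disc_mask):
        """Vertical cup-to-disc ratio from binary segmentation masks
        (rows are the vertical axis): ratio of the largest vertical
        extents of the cup and disc regions."""
        def vextent(mask):
            rows = np.flatnonzero(mask.any(axis=1))
            return rows[-1] - rows[0] + 1 if rows.size else 0
        disc = vextent(disc_mask)
        return vextent(cup_mask) / disc if disc else 0.0
    ```

    A ratio near 1 means the cup fills almost the whole disc vertically, the pattern the paper associates with glaucomatous eyes.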

  19. A New Finite Difference Q-compensated RTM Algorithm in Tilted Transverse Isotropic (TTI) Media

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Hu, W.; Ning, J.

    2017-12-01

    Attenuating, anisotropic geological bodies are difficult to image with conventional migration methods. In such scenarios, recorded seismic data suffer greatly from both amplitude decay and phase distortion, resulting in degraded resolution, poor illumination, and incorrect migration depth in imaging results. To efficiently obtain high-quality images, we propose a novel TTI QRTM algorithm based on the Generalized Standard Linear Solid model combined with a unique multi-stage optimization technique to simultaneously correct the decayed amplitude and the distorted phase velocity. Numerical tests (shown in the figure) demonstrate that our TTI QRTM algorithm effectively corrects migration depth, significantly improves illumination, and enhances resolution within and below low-Q regions. The result of our new method is very close to the reference RTM image, whereas QRTM without TTI cannot produce a correct image. Compared to the conventional QRTM method based on a pseudo-spectral operator for fractional Laplacian evaluation, our method is more computationally efficient for large-scale applications and more suitable for GPU acceleration. With the current multi-stage dispersion optimization scheme, this TTI QRTM method performs best in the frequency range 10-70 Hz, and it could be used in a wider frequency range. Furthermore, as this method can also handle frequency-dependent Q, it has potential for imaging deep structures where low Q exists, such as subduction zones, volcanic zones, or fault zones with passive source observations.

  20. Computerized planning of prostate cryosurgery using variable cryoprobe insertion depth.

    PubMed

    Rossi, Michael R; Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed

    2010-02-01

    The current study presents a computerized planning scheme for prostate cryosurgery using a variable insertion depth strategy. This study is a part of an ongoing effort to develop computerized tools for cryosurgery. Based on typical clinical practices, previous automated planning schemes have required that all cryoprobes be aligned at a single insertion depth. The current study investigates the benefit of removing this constraint, in comparison with results based on uniform insertion depth planning as well as the so-called "pullback procedure". Planning is based on the so-called "bubble-packing method", and its quality is evaluated with bioheat transfer simulations. This study is based on five 3D prostate models, reconstructed from ultrasound imaging, and cryoprobe active length in the range of 15-35 mm. The variable insertion depth technique is found to consistently provide superior results when compared to the other placement methods. Furthermore, it is shown that both the optimal active length and the optimal number of cryoprobes vary among prostate models, based on the size and shape of the target region. Due to its low computational cost, the new scheme can be used to determine the optimal cryoprobe layout for a given prostate model in real time. Copyright 2008 Elsevier Inc. All rights reserved.

  1. Transparent volume imaging

    NASA Astrophysics Data System (ADS)

    Wixson, Steve E.

    1990-07-01

    Transparent volume imaging began with the stereo X-ray in 1895 and ended for most investigators when radiation safety concerns eliminated the second view. Today, similar images can be generated by computer without safety hazards, providing improved perception and new means of image quantification. A volumetric workstation is under development based on an operational prototype. The workstation consists of multiple symbolic and numeric processors; a binocular stereo color display generator with large image memory and liquid crystal shutter; voice input and output; a 3D pointer that uses projection lenses so that structures in 3-space can be touched directly; 3D hard copy using vectograph and lenticular printing; and presentation facilities using stereo 35mm slide and stereo video tape projection. Volumetric software includes a volume window manager, Mayo Clinic's Analyze program, and our Digital Stereo Microscope (DSM) algorithms. The DSM uses stereo X-ray-like projections, rapidly oscillating motion, and focal depth cues such that detail can be studied in the spatial context of the entire data set. Focal depth cues are generated with a lens-and-aperture algorithm that produces a plane of sharp focus; multiple stereo pairs, each with a different plane of sharp focus, are generated and stored in the large memory for interactive selection using a physical or symbolic depth selector. More recent work is studying non-linear focusing. Psychophysical studies are underway to understand how people perceive images on a volumetric display and how accurately 3-dimensional structures can be quantified from these displays.

  2. Ultrasound-Mediated Biophotonic Imaging: A Review of Acousto-Optical Tomography and Photo-Acoustic Tomography

    PubMed Central

    Wang, Lihong V.

    2004-01-01

    This article reviews two types of ultrasound-mediated biophotonic imaging–acousto-optical tomography (AOT, also called ultrasound-modulated optical tomography) and photo-acoustic tomography (PAT, also called opto-acoustic or thermo-acoustic tomography)–both of which are based on non-ionizing optical and ultrasonic waves. The goal of these technologies is to combine the contrast advantage of the optical properties and the resolution advantage of ultrasound. In these two technologies, the imaging contrast is based primarily on the optical properties of biological tissues, and the imaging resolution is based primarily on the ultrasonic waves that either are provided externally or produced internally, within the biological tissues. In fact, ultrasonic mediation overcomes both the resolution disadvantage of pure optical imaging in thick tissues and the contrast and speckle disadvantages of pure ultrasonic imaging. In our discussion of AOT, the relationship between modulation depth and acoustic amplitude is clarified. Potential clinical applications of ultrasound-mediated biophotonic imaging include early cancer detection, functional imaging, and molecular imaging. PMID:15096709

  3. A Low-Cost PC-Based Image Workstation for Dynamic Interactive Display of Three-Dimensional Anatomy

    NASA Astrophysics Data System (ADS)

    Barrett, William A.; Raya, Sai P.; Udupa, Jayaram K.

    1989-05-01

    A system for interactive definition, automated extraction, and dynamic interactive display of three-dimensional anatomy has been developed and implemented on a low-cost PC-based image workstation. An iconic display is used for staging predefined image sequences through specified increments of tilt and rotation over a solid viewing angle. Use of a fast processor facilitates rapid extraction and rendering of the anatomy into predefined image views. These views are formatted into a display matrix in a large image memory for rapid interactive selection and display of arbitrary spatially adjacent images within the viewing angle, thereby providing motion parallax depth cueing for efficient and accurate perception of true three-dimensional shape, size, structure, and spatial interrelationships of the imaged anatomy. The visual effect is that of holding and rotating the anatomy in the hand.

  4. Visual attention in egocentric field-of-view using RGB-D data

    NASA Astrophysics Data System (ADS)

    Olesova, Veronika; Benesova, Wanda; Polatsek, Patrik

    2017-03-01

    Most of the existing solutions predicting visual attention focus solely on referenced 2D images and disregard any depth information. This aspect has always represented a weak point since the depth is an inseparable part of the biological vision. This paper presents a novel method of saliency map generation based on results of our experiments with egocentric visual attention and investigation of its correlation with perceived depth. We propose a model to predict the attention using superpixel representation with an assumption that contrast objects are usually salient and have a sparser spatial distribution of superpixels than their background. To incorporate depth information into this model, we propose three different depth techniques. The evaluation is done on our new RGB-D dataset created by SMI eye-tracker glasses and KinectV2 device.

  5. Extending 'Deep Blue' aerosol retrieval coverage to cases of absorbing aerosols above clouds: sensitivity analysis and first case studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayer, Andrew M.; Hsu, C.; Bettenhausen, Corey

    Cases of absorbing aerosols above clouds (AAC), such as smoke or mineral dust, are omitted from most routinely-processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar

  6. Effect of Binary Source Companions on the Microlensing Optical Depth Determination toward the Galactic Bulge Field

    NASA Astrophysics Data System (ADS)

    Han, Cheongho

    2005-11-01

    Currently, gravitational microlensing survey experiments toward the Galactic bulge field use two different methods of minimizing the blending effect for the accurate determination of the optical depth τ. One is measuring τ based on clump giant (CG) source stars, and the other is using "difference image analysis" (DIA) photometry to measure the unblended source flux variation. Despite the expectation that the two estimates should be the same if blending is properly accounted for, the estimates based on CG stars systematically fall below the DIA results based on all events with source stars down to the detection limit. Prompted by this gap, we investigate the previously unconsidered effect of companion-associated events on the τ determination. Although the image of a companion is blended with that of its primary star and thus not resolved, an event associated with the companion can be detected if the companion flux is highly magnified. Therefore, companions effectively work as source stars for microlensing, and neglecting them in the source star count could result in an incorrect τ estimate. By carrying out simulations based on the assumption that companions follow the same luminosity function as primary stars, we estimate that the contribution of companion-associated events to the total event rate is ~5fbi% for current surveys and can reach up to ~6fbi% for future surveys monitoring fainter stars, where fbi is the binary frequency. Therefore, we conclude that companion-associated events comprise a non-negligible fraction of all events. However, their contribution to the optical depth is not large enough to explain the systematic difference between the optical depth estimates based on the two different methods.

  7. Scene Semantic Segmentation from Indoor Rgb-D Images Using Encode-Decoder Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Li, T.; Pan, L.; Kang, Z.

    2017-09-01

    With increasing attention to the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open area, which restricts indoor applications. Depth information can help to distinguish regions that are difficult to segment out of RGB images because of similar color or texture in indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an encoder-decoder fully convolutional network for RGB-D image classification. We use Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and distinctive features of RGB and depth images in the network, automatically enhancing classification performance. To explore better ways of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the second calculates MMD over whole-batch features. Based on the classification result, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method achieves good performance on indoor RGB-D image semantic segmentation.
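    For a single kernel, the MK-MMD distance used above reduces to the standard empirical MMD between two feature samples. A single-RBF-kernel sketch with a simple multi-kernel average (the bandwidth family is an illustrative assumption; the paper's kernel choices are not specified here):

    ```python
    import numpy as np

    def rbf_mmd2(X, Y, sigma=1.0):
        """Squared empirical MMD between sample sets X and Y (rows are
        samples, e.g. flattened RGB vs. depth feature maps) under a
        single RBF kernel of bandwidth sigma."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))
        return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

    def mk_mmd2(X, Y, sigmas=(0.5, 1.0, 2.0)):
        """Multi-kernel variant: average single-kernel MMDs over a small
        bandwidth family (bandwidths here are illustrative)."""
        return sum(rbf_mmd2(X, Y, s) for s in sigmas) / len(sigmas)
    ```

    Identical distributions give an MMD near zero, so minimizing it pulls the RGB and depth feature streams toward a shared representation while the task loss preserves their distinctive parts.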

  8. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    PubMed

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a targeted height for them. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tools for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.

  9. Depth-resolved incoherent and coherent wide-field high-content imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    So, Peter T.

    2016-03-01

    Recent advances in depth-resolved wide-field imaging techniques have enabled many high-throughput applications in biology and medicine. Depth-resolved imaging of incoherent signals can be readily accomplished with structured light illumination or nonlinear temporal focusing. The integration of these high-throughput systems with novel spectroscopic resolving elements further enables high-content information extraction. We will introduce a novel near-common-path interferometer and demonstrate its uses in toxicology and cancer biology applications. The extension of incoherent depth-resolved wide-field imaging to the coherent modality is non-trivial. Here, we will cover recent advances in wide-field, 3D-resolved mapping of refractive index, absorbance, and vibronic components in biological specimens.

  10. Hard X-ray full field microscopy and magnifying microtomography using compound refractive lenses

    NASA Astrophysics Data System (ADS)

    Schroer, Christian G.; Günzler, Til Florian; Benner, Boris; Kuhlmann, Marion; Tümmler, Johannes; Lengeler, Bruno; Rau, Christoph; Weitkamp, Timm; Snigirev, Anatoly; Snigireva, Irina

    2001-07-01

    For hard X-rays, parabolic compound refractive lenses (PCRLs) are genuine imaging devices like glass lenses for visible light. Based on these new lenses, a hard X-ray full field microscope has been constructed that is ideally suited to image the interior of opaque samples with a minimum of sample preparation. As a result of a large depth of field, CRL micrographs are sharp projection images of most samples. To obtain 3D information about a sample, tomographic techniques are combined with magnified imaging.

  11. An image of the Columbia Plateau from inversion of high-resolution seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutter, W.J.; Catchings, R.D.; Jarchow, C.M.

    1994-08-01

    The authors use a method of traveltime inversion of high-resolution seismic data to provide the first reliable images of internal details of the Columbia River Basalt Group (CRBG), the subsurface basalt/sediment interface, and the deeper sediment/basement interface. Velocity structure within the basalts, delineated on the order of 1 km horizontally and 0.2 km vertically, is constrained to within ±0.1 km/s for most of the seismic profile. Over 5,000 observed traveltimes fit their model with an rms error of 0.018 s. The maximum depth of penetration of the basalt diving waves (truncated by underlying low-velocity sediments) provides a reliable estimate of the depth to the base of the basalt, which agrees with well-log measurements to within 0.05 km (165 ft). The authors use image blurring, calculated from the resolution matrix, to estimate the ratio of imaged velocity-anomaly widths to true widths for velocity features within the basalt. From their calculations of image blurring, they interpret low-velocity zones (LVZs) within the basalts at Boylston Mountain and the Whiskey Dick anticline to have widths of 4.5 and 3 km, respectively, within the upper 1.5 km of the model. At greater depth, the widths of these imaged LVZs thin to approximately 2 km or less. They interpret these linear, subparallel, low-velocity zones imaged adjacent to anticlines of the Yakima Fold Belt as brecciated fault zones. These fault zones dip to the south at angles between 15 and 45 degrees.

  12. Penetration depth of photons in biological tissues from hyperspectral imaging in shortwave infrared in transmission and reflection geometries.

    PubMed

    Zhang, Hairong; Salo, Daniel; Kim, David M; Komarov, Sergey; Tai, Yuan-Chuan; Berezin, Mikhail Y

    2016-12-01

    Measurement of photon penetration in biological tissues is a central theme in optical imaging. A great number of endogenous tissue factors such as absorption, scattering, and anisotropy affect the path of photons in tissue, making it difficult to predict the penetration depth at different wavelengths. Traditional studies evaluating photon penetration at different wavelengths are focused on tissue spectroscopy that does not take into account the heterogeneity within the sample. This is especially critical in shortwave infrared where the individual vibration-based absorption properties of the tissue molecules are affected by nearby tissue components. We have explored the depth penetration in biological tissues from 900 to 1650 nm using Monte Carlo simulation and a hyperspectral imaging system with Michelson spatial contrast as a metric of light penetration. Chromatic aberration-free hyperspectral images in transmission and reflection geometries were collected with a spectral resolution of 5.27 nm and a total acquisition time of 3 min. Relatively short recording time minimized artifacts from sample drying. Results from both transmission and reflection geometries consistently revealed that the highest spatial contrast in the wavelength range for deep tissue lies within 1300 to 1375 nm; however, in heavily pigmented tissue such as the liver, the range 1550 to 1600 nm is also prominent.
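    The Michelson spatial contrast used as the penetration metric is straightforward to compute per wavelength band. A minimal sketch (the per-band loop assumes a hyperspectral cube with the spectral axis last, an assumption for illustration):

    ```python
    import numpy as np

    def michelson_contrast(image):
        """Michelson spatial contrast (Imax - Imin) / (Imax + Imin)
        of a single-band image; higher contrast implies the features
        behind the tissue are better resolved at that wavelength."""
        imax, imin = float(image.max()), float(image.min())
        return (imax - imin) / (imax + imin) if (imax + imin) > 0 else 0.0

    def contrast_spectrum(cube):
        """Contrast per band for a (rows, cols, bands) hyperspectral cube."""
        return np.array([michelson_contrast(cube[..., b])
                         for b in range(cube.shape[-1])])
    ```

    Plotting `contrast_spectrum` against wavelength is how one would locate windows such as the 1300-1375 nm band the study reports.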

  13. Laryngeal electromyography: electrode guidance based on 3-dimensional magnetic resonance tomography images of the larynx.

    PubMed

    Storck, Claudio; Gehrer, Raphael; Hofer, Michael; Neumayer, Bernhard; Stollberger, Rudolf; Schumacher, Ralf; Gugatschka, Markus; Friedrich, Gerhard; Wolfensberger, Markus

    2012-01-01

    Laryngeal electromyography (LEMG) is an important tool for the assessment of laryngeal nerve and muscle functioning. The purpose of the study was to determine electrode insertion angle and insertion depth for the various laryngeal muscles. Twenty-three cadaver larynges were examined with magnetic resonance tomography (MRT) and Materialize Interactive Medical Image Control System (Leuven, Belgium) 3-dimensional (3D) imaging software. Geometrical analysis was used to calculate the electrode insertion angles. All laryngeal muscles could be identified and 3D visualized on MRT scans. Although the insertion angles were the same in male and female larynges, the insertion depth was significantly larger in male than in female larynges (P<0.05). Of particular clinical importance is the fact that the electrode has to be directed lateral and upward for the thyroarytenoid muscle but lateral and downward for the lateral cricoarytenoid muscle (insertion point=midline lower border of the thyroid). This is the first study that analyzes electrode insertion angles and insertion depths for each laryngeal muscle using 3D imaging. We hope that the information gained from this study will help clinicians performing LEMG to localize the individual laryngeal muscles. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  14. Profiling defect depth in composite materials using thermal imaging NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2018-04-01

    Sonic Infrared (SIR) NDE is a relatively new NDE technology that has been demonstrated to be a reliable and sensitive method for detecting defects. SIR uses ultrasonic excitation combined with IR imaging to detect defects and flaws in the structures being inspected. An IR camera captures infrared radiation from the target for a period of time covering the ultrasound pulse. This period may be much longer than the pulse itself, depending on the defect depth and the thermal properties of the materials. With the increasing deployment of composites in modern aerospace and automobile structures, fast, wide-area, and reliable NDE methods are necessary. Impact damage is one of the major concerns in modern composites: damage can occur at a certain depth without any visual indication on the surface, and defect depth information can influence maintenance decisions. Depth profiling relies on the time delays in the captured image sequence. We present our work on defect depth profiling using the temporal information of IR images. An analytical model is introduced to describe heat diffusion from subsurface defects in composite materials, and depth profiling using peak time is introduced as well.
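    For one-dimensional heat diffusion, the time of the per-pixel temperature peak grows roughly with the square of the defect depth, so depth can be estimated from peak time once a calibration constant is known. The sketch below is a hedged illustration of that idea, not the authors' analytical model; the names `alpha` (thermal diffusivity) and `k` (calibration constant fitted to reference defects of known depth) are assumptions:

```python
import numpy as np

def peak_times(frames, frame_period):
    """Per-pixel time of maximum IR signal in a (T, H, W) sequence."""
    return np.argmax(frames, axis=0) * frame_period

def depth_from_peak_time(t_peak, alpha, k=1.0):
    """1D-diffusion scaling z ~ k * sqrt(alpha * t_peak).

    alpha is the material's thermal diffusivity; k is a calibration
    constant that would be fit to defects of known depth.
    """
    return k * np.sqrt(alpha * t_peak)
```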

  15. Non-destructive optical clearing technique enhances optical coherence tomography (OCT) for real-time, 3D histomorphometry of brain tissue (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Paul, Akshay; Chang, Theodore H.; Chou, Li-Dek; Ramalingam, Tirunelveli S.

    2016-03-01

    Evaluation of neurodegenerative disease often requires examination of brain morphology. Volumetric analysis of brain regions and structures can be used to track developmental changes, progression of disease, and the presence of transgenic phenotypes. Current standards for microscopic investigation of brain morphology are limited to detection of superficial structures at a maximum depth of 300μm. While histological techniques can provide detailed cross-sections of brain structures, they require complicated tissue preparation and the ultimate destruction of the sample. A non-invasive, label-free imaging modality known as Optical Coherence Tomography (OCT) can produce 3-dimensional reconstructions through high-speed, cross-sectional scans of biological tissue. Although OCT allows for the preservation of intact samples, the highly scattering and absorbing properties of biological tissue limit imaging depth to 1-2mm. Optical clearing agents have been utilized to increase imaging depth by index matching and lipid digestion, however, these contemporary techniques are expensive and harsh on tissues, often irreversibly denaturing proteins. Here we present an ideal optical clearing agent that offers ease-of-use and reversibility. Similar to how SeeDB has been effective for microscopy, our fructose-based, reversible optical clearing technique provides improved OCT imaging and functional immunohistochemical mapping of disease. Fructose is a natural, non-toxic sugar with excellent water solubility, capable of increasing tissue transparency and reducing light scattering. We will demonstrate the improved depth-resolving performance of OCT for enhanced whole-brain imaging of normal and diseased murine brains following a fructose clearing treatment. This technique potentially enables rapid, 3-dimensional evaluation of biological tissues at axial and lateral resolutions comparable to histopathology.

  16. Probing neural tissue with airy light-sheet microscopy: investigation of imaging performance at depth within turbid media

    NASA Astrophysics Data System (ADS)

    Nylk, Jonathan; McCluskey, Kaley; Aggarwal, Sanya; Tello, Javier A.; Dholakia, Kishan

    2017-02-01

    Light-sheet microscopy (LSM) has received great interest for fluorescent imaging applications in biomedicine as it facilitates three-dimensional visualisation of large sample volumes with high spatiotemporal resolution whilst minimising irradiation of, and photo-damage to, the specimen. Despite these advantages, LSM can only visualise superficial layers of turbid tissues, such as mammalian neural tissue. Propagation-invariant light modes have played a key role in the development of high-resolution LSM techniques as they overcome the natural divergence of a Gaussian beam, enabling uniform and thin light-sheets over large distances. Most notably, Bessel and Airy beam-based light-sheet imaging modalities have been demonstrated. In the single-photon excitation regime and in lightly scattering specimens, Airy-LSM has given competitive performance with advanced Bessel-LSM techniques. Airy and Bessel beams share the property of self-healing, the ability of the beam to regenerate its transverse beam profile after propagation around an obstacle. Bessel-LSM techniques have been shown to increase the penetration-depth of the illumination into turbid specimens, but this effect has been understudied in biologically relevant tissues, particularly for Airy beams. It is expected that Airy-LSM will give a similar enhancement over Gaussian-LSM. In this paper, we report on the comparison of Airy-LSM and Gaussian-LSM imaging modalities within cleared and non-cleared mouse brain tissue. In particular, we examine image quality versus tissue depth by quantitative spatial Fourier analysis of neural structures in virally transduced fluorescent tissue sections, showing a three-fold enhancement at 50 μm depth into non-cleared tissue with Airy-LSM. Complementary analysis is performed by resolution measurements in bead-injected tissue sections.

  17. Multi-contrast light profile microscopy for the depth-resolved imaging of the properties of multi-ply thin films.

    PubMed

    Power, J F

    2009-06-01

    Light profile microscopy (LPM) is a direct method for the spectral depth imaging of thin film cross-sections on the micrometer scale. LPM uses a perpendicular viewing configuration that directly images a source beam propagated through a thin film. Images are formed in dark field contrast, which is highly sensitive to subtle interfacial structures that are invisible to reference methods. The independent focusing of illumination and imaging systems allows multiple registered optical sources to be hosted on a single platform. These features make LPM a powerful multi-contrast (MC) imaging technique, demonstrated in this work with six modes of imaging in a single instrument, based on (1) broad-band elastic scatter; (2) laser excited wideband luminescence; (3) coherent elastic scatter; (4) Raman scatter (three channels with RGB illumination); (5) wavelength resolved luminescence; and (6) spectral broadband scatter, resolved in immediate succession. MC-LPM integrates Raman images with a wider optical and morphological picture of the sample than prior art microprobes. Currently, MC-LPM resolves images at an effective spectral resolution better than 9 cm⁻¹, at a spatial resolution approaching 1 μm, with optics that operate in air at half the maximum numerical aperture of the prior art microprobes.

  18. Spectrally based mapping of riverbed composition

    USGS Publications Warehouse

    Legleiter, Carl; Stegman, Tobin K.; Overstreet, Brandon T.

    2016-01-01

    Remote sensing methods provide an efficient means of characterizing fluvial systems. This study evaluated the potential to map riverbed composition based on in situ and/or remote measurements of reflectance. Field spectra and substrate photos from the Snake River, Wyoming, USA, were used to identify different sediment facies and degrees of algal development and to quantify their optical characteristics. We hypothesized that accounting for the effects of depth and water column attenuation to isolate the reflectance of the streambed would enhance distinctions among bottom types and facilitate substrate classification. A bottom reflectance retrieval algorithm adapted from coastal research yielded realistic spectra for the 450 to 700 nm range; but bottom reflectance-based substrate classifications, generated using a random forest technique, were no more accurate than classifications derived from above-water field spectra. Additional hypothesis testing indicated that a combination of reflectance magnitude (brightness) and indices of spectral shape provided the most accurate riverbed classifications. Convolving field spectra to the response functions of a multispectral satellite and a hyperspectral imaging system did not reduce classification accuracies, implying that high spectral resolution was not essential. Supervised classifications of algal density produced from hyperspectral data and an inferred bottom reflectance image were not highly accurate, but unsupervised classification of the bottom reflectance image revealed distinct spectrally based clusters, suggesting that such an image could provide additional river information. We attribute the failure of bottom reflectance retrieval to yield more reliable substrate maps to a latent correlation between depth and bottom type. Accounting for the effects of depth might have eliminated a key distinction among substrates and thus reduced discriminatory power. 
Although further, more systematic study across a broader range of fluvial environments is needed to substantiate our initial results, this case study suggests that bed composition in shallow, clear-flowing rivers potentially could be mapped remotely.

  19. Coordinated Airborne, Spaceborne, and Ground-Based Measurements of Massive, Thick Aerosol Layers During the Dry Season in Southern Africa

    NASA Technical Reports Server (NTRS)

    Schmid, B.; Redemann, J.; Russell, P. B.; Hobbs, P. V.; Hlavka, D. L.; McGill, M. J.; Holben, B. N.; Welton, E. J.; Campbell, J.; Torres, O.; hide

    2002-01-01

    During the dry-season airborne campaign of the Southern African Regional Science Initiative (SAFARI 2000), unique coordinated observations were made of massive, thick aerosol layers. These layers were often dominated by aerosols from biomass burning. We report on airborne Sunphotometer measurements of aerosol optical depth (λ = 354-1558 nm), columnar water vapor, and vertical profiles of aerosol extinction and water vapor density that were obtained aboard the University of Washington's Convair-580 research aircraft. We compare these with ground-based AERONET Sun/sky radiometer results, with ground-based lidar data (MPL-Net), and with measurements from a downward-pointing lidar aboard the high-flying NASA ER-2 aircraft. Finally, we show comparisons between aerosol optical depths from the Sunphotometer and those retrieved over land and over water using four spaceborne sensors (TOMS (Total Ozone Mapping Spectrometer), MODIS (Moderate Resolution Imaging Spectroradiometer), MISR (Multiangle Imaging Spectroradiometer) and ATSR-2 (Along Track Scanning Radiometer)).

  20. Planarity constrained multi-view depth map reconstruction for urban scenes

    NASA Astrophysics Data System (ADS)

    Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie

    2018-05-01

    Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes, where man-made regular shapes may be present. To address these challenges, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes is first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms the popular multi-view depth map reconstruction with an accuracy two times better for the aerial datasets, and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity of piecewise flat structures in urban scenes and restore the edges in depth-discontinuous areas.

  1. An adaptive block-based fusion method with LUE-SSIM for multi-focus images

    NASA Astrophysics Data System (ADS)

    Zheng, Jianing; Guo, Yongcai; Huang, Yukun

    2016-09-01

    Because of lenses' limited depth of field, digital cameras cannot acquire an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion can effectively solve this problem, but block-based multi-focus fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed, which utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm, with LUE-SSIM as the objective function, is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM for quality assessment of Gaussian defocus blur images. In addition, a multi-focus image fusion experiment is carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and it effectively preserves the undistorted-edge details in the focus regions of the source images.
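    The general shape of a block-based fusion scheme can be sketched as follows, using local variance as a stand-in focus measure; the paper's actual method scores blocks with LUE-SSIM and selects the block size via PSO, neither of which is reproduced here:

```python
import numpy as np

def fuse_blocks(img_a, img_b, block=16):
    """Per block, keep the source image with the higher local variance
    (a simple focus measure; sharper, in-focus regions vary more)."""
    h, w = img_a.shape
    out = img_a.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y + block, x:x + block]
            b = img_b[y:y + block, x:x + block]
            if b.var() > a.var():
                out[y:y + block, x:x + block] = b
    return out
```

    Fixed blocks like these are exactly what produces blocking artifacts at focus boundaries, which is the failure mode the adaptive block-size optimization above is designed to mitigate.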

  2. A Kinect based sign language recognition system using spatio-temporal features

    NASA Astrophysics Data System (ADS)

    Memiş, Abbas; Albayrak, Songül

    2013-12-01

    This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language (TSL). The proposed system uses a motion difference and accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images, transforming the temporal-domain features into the spatial domain. These processes are performed on RGB images and depth maps separately. DCT coefficients that represent sign gestures are picked up via zigzag scanning, and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is employed. Performance of the proposed system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of TSL in three different categories. The proposed sign language recognition system achieves promising success rates.
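    A minimal sketch of the feature-extraction pipeline described above (frame differencing and accumulation, 2D DCT, zigzag scan) might look like this; the function names and the number of retained coefficients are illustrative assumptions:

```python
import numpy as np
from scipy.fftpack import dct

def accumulated_motion(frames):
    """Sum of absolute differences of successive frames (T, H, W)."""
    return np.abs(np.diff(frames.astype(float), axis=0)).sum(axis=0)

def dct2(img):
    """Separable 2D DCT-II with orthonormal scaling."""
    return dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')

def zigzag(mat, n):
    """First n coefficients in JPEG-style zigzag (low-frequency-first) order."""
    h, w = mat.shape
    idx = sorted(((i, j) for i in range(h) for j in range(w)),
                 key=lambda p: (p[0] + p[1],
                                p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return np.array([mat[i, j] for i, j in idx[:n]])
```

    A feature vector for one gesture would then be `zigzag(dct2(accumulated_motion(frames)), n)`, computed separately for the RGB and depth streams.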

  3. Three-Dimensional Optical Coherence Tomography

    NASA Technical Reports Server (NTRS)

    Gutin, Mikhail; Wang, Xu-Ming; Gutin, Olga

    2009-01-01

    Three-dimensional (3D) optical coherence tomography (OCT) is an advanced method of noninvasive infrared imaging of tissues in depth. Heretofore, commercial OCT systems for 3D imaging have been designed principally for external ophthalmological examination. As explained below, such systems have been based on a one-dimensional OCT principle, and in the operation of such a system, 3D imaging is accomplished partly by means of a combination of electronic scanning along the optical (Z) axis and mechanical scanning along the two axes (X and Y) orthogonal to the optical axis. In 3D OCT, 3D imaging involves a form of electronic scanning (without mechanical scanning) along all three axes. Consequently, the need for mechanical adjustment is minimal and the mechanism used to position the OCT probe can be correspondingly more compact. A 3D OCT system also includes a probe of improved design and utilizes advanced signal- processing techniques. Improvements in performance over prior OCT systems include finer resolution, greater speed, and greater depth of field.

  4. Quantified Differentiation of Surface Topography for Nano-materials As-Obtained from Atomic Force Microscopy Images

    NASA Astrophysics Data System (ADS)

    Gupta, Mousumi; Chatterjee, Somenath

    2018-04-01

    Surface texture is an important issue in understanding the nature (crests and troughs) of surfaces. Atomic force microscopy (AFM) imaging is a key tool for surface topography analysis. At the nano-scale, however, both the nature (i.e., deflection or crack) and the quantification (i.e., height or depth) of deposited layers are essential information for materials scientists. In this paper, a gradient-based K-means algorithm is used to differentiate layered surfaces based on the color contrast of as-obtained AFM images. A transformation using wavelet decomposition is applied to extract information about deflections or cracks on the material surfaces from the same images. Z-axis depth analysis from the wavelet coefficients provides information about cracks present in the material. Using the above methods, the corresponding surface information for the material is obtained. In addition, a Gaussian filter is applied to remove unwanted lines that occur during AFM scanning. A few known samples are taken as input, and the validity of the above approaches is demonstrated.

  5. On the bandwidth of the plenoptic function.

    PubMed

    Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin

    2012-02-01

    The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE

  6. Angle-domain common imaging gather extraction via Kirchhoff prestack depth migration based on a traveltime table in transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Shaoyong; Gu, Hanming; Tang, Yongjie; Bingkai, Han; Wang, Huazhong; Liu, Dingjin

    2018-04-01

    Angle-domain common image-point gathers (ADCIGs) can alleviate the limitations of common image-point gathers in the offset domain, and have been widely used for velocity inversion and amplitude variation with angle (AVA) analysis. We propose an effective algorithm for generating ADCIGs in transversely isotropic (TI) media based on the gradient of traveltime by Kirchhoff pre-stack depth migration (KPSDM); the dynamic programming method for computing the traveltime in TI media does not suffer from the limitations of shadow zones and traveltime interpolation. We also present a specific implementation strategy for ADCIG extraction via KPSDM. Three major steps are included in the presented strategy: (1) traveltime computation using a dynamic programming approach in TI media; (2) slowness vector calculation from the gradient of the previously computed traveltime table; (3) construction of illumination vectors and subsurface angles in the migration process. Numerical examples are included to demonstrate the effectiveness of our approach and its potential for subsequent tomographic velocity inversion and AVA analysis.
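    Step (2) of the strategy, obtaining the slowness vector as the gradient of a gridded traveltime table, can be sketched numerically as below; the 2D grid layout and the function names are assumptions for illustration:

```python
import numpy as np

def slowness_vectors(traveltime, dx, dz):
    """Slowness vector p = grad(T) from a gridded traveltime table T(z, x)."""
    pz, px = np.gradient(traveltime, dz, dx)  # derivatives along z, then x
    return px, pz

def propagation_angle(px, pz):
    """Angle of the ray direction from the vertical, in degrees."""
    return np.degrees(np.arctan2(px, pz))
```

    For a plane wave, the recovered slowness components are constant and the angle matches the wavefront dip, which is the property the ADCIG binning relies on.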

  7. SU-D-207-07: Implementation of Full/half Bowtie Filter Model in a Commercial Treatment Planning System for Kilovoltage X-Ray Imaging Dose Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S; Alaei, P

    2015-06-15

    Purpose: To implement full/half bowtie filter models in a commercial treatment planning system (TPS) to calculate kilovoltage (kV) x-ray imaging dose of the Varian On-Board Imager (OBI) cone beam CT (CBCT) system. Methods: Full/half bowtie filters of the Varian OBI were created as compensator models in the Pinnacle TPS (version 9.6) using Matlab software (version 2011a). The profiles of both bowtie filters were acquired from the manufacturer, imported into the Matlab system, and hard coded in binary file format. A Pinnacle script was written to import each bowtie filter's data into a Pinnacle treatment plan as a compensator. A kV x-ray beam model without the compensator was commissioned for each bowtie filter setting based on percent depth dose and lateral profile data acquired from Monte Carlo simulations. To validate the bowtie filter models, a rectangular water phantom was generated in the planning system and an anterior/posterior beam with each bowtie filter was created. Using the Pinnacle script, each bowtie filter compensator was added to the treatment plan. The lateral profile at a depth of 3 cm and the percent depth dose were measured using an ion chamber and compared with the data extracted from the treatment plans. Results: The kV x-ray beams for both full and half bowtie filters have been modeled in a commercial TPS. The differences in lateral and depth dose profiles between dose calculations and ion chamber measurements were within 6%. Conclusion: Both full/half bowtie filter models provide reasonable results in kV x-ray dose calculations in the water phantom. This study demonstrates the possibility of using a model-based treatment planning system to calculate the kV imaging dose for both full and half bowtie filter modes. Further study is to be performed to evaluate the models in clinical situations.

  8. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms

    PubMed Central

    Perez-Sanz, Fernando; Navarro, Pedro J

    2017-01-01

    Abstract The study of phenomes, or phenomics, has been a central part of biology. The field of automatic image-based phenotype acquisition technologies has seen important advances in recent years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues and the algorithms that are being used, or emerging as useful, for obtaining data out of images in an automatic fashion. PMID:29048559

  9. 110 °C range athermalization of wavefront coding infrared imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong

    2017-09-01

    110 °C range athermalization is significant but difficult for designing infrared imaging systems. Our wavefront coding athermalized infrared imaging system adopts an optical phase mask with less manufacturing errors and a decoding method based on shrinkage function. The qualitative experiments prove that our wavefront coding athermalized infrared imaging system has three prominent merits: (1) working well over a temperature range of 110 °C; (2) extending the focal depth up to 15.2 times; (3) achieving a decoded image being approximate to its corresponding in-focus infrared image, with a mean structural similarity index (MSSIM) value greater than 0.85.

  10. Diaphragm depth in normal subjects.

    PubMed

    Shahgholi, Leili; Baria, Michael R; Sorenson, Eric J; Harper, Caitlin J; Watson, James C; Strommen, Jeffrey A; Boon, Andrea J

    2014-05-01

    Needle electromyography (EMG) of the diaphragm carries the potential risk of pneumothorax. Knowing the approximate depth of the diaphragm should increase the test's safety and accuracy. Distances from the skin to the diaphragm and from the outer surface of the rib to the diaphragm were measured using B mode ultrasound in 150 normal subjects. When measured at the lower intercostal spaces, diaphragm depth varied between 0.78 and 4.91 cm beneath the skin surface and between 0.25 and 1.48 cm below the outer surface of the rib. Using linear regression modeling, body mass index (BMI) could be used to predict diaphragm depth from the skin to within an average of 1.15 mm. Diaphragm depth from the skin can vary by more than 4 cm. When image guidance is not available to enhance accuracy and safety of diaphragm EMG, it is possible to reliably predict the depth of the diaphragm based on BMI. Copyright © 2013 Wiley Periodicals, Inc.

  11. Noise removal in extended depth of field microscope images through nonlinear signal processing.

    PubMed

    Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J

    2013-04-01

    Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.
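    The conventional linear Wiener filter that the nonlinear scheme improves upon can be sketched as frequency-domain deconvolution with a constant noise-to-signal ratio; the paper's trained nonlinear filter is not reproduced here:

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """Classical Wiener deconvolution: apply H* / (|H|^2 + NSR) in the
    frequency domain, where H is the optical transfer function and
    nsr is an (assumed constant) noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

    The high-frequency noise amplification mentioned above comes from the small-|H| regions of this transfer function, which is what motivates replacing the fixed NSR term with a trained nonlinear response.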

  12. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
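    A minimal SAD block-matching disparity sketch, assuming rectified grayscale images and a brute-force search (real-time systems vectorize this heavily; the function name and parameters are illustrative):

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    """Per-pixel disparity by minimising the Sum of Absolute Differences
    over a square window, scanning the right image leftward."""
    h, w = left.shape
    r = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1,
                             x - d - r:x - d + r + 1].astype(float)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

    Depth then follows from the standard relation depth = focal_length × baseline / disparity for the calibrated stereo rig, which is how the disparity image is converted into the 3D point map described above.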

  13. Retrieving Atmospheric Dust Loading on Mars Using Engineering Cameras and MSL's Mars Hand Lens Imager (MAHLI)

    NASA Astrophysics Data System (ADS)

    Wolfe, C. A.; Lemmon, M. T.

    2015-12-01

    Dust in the Martian atmosphere influences energy deposition, dynamics, and the viability of solar powered exploration vehicles. The Viking, Pathfinder, Spirit, Opportunity, Phoenix, and Curiosity landers and rovers each included the ability to image the Sun with a science camera equipped with a neutral density filter. Direct images of the Sun not only provide the ability to measure extinction by dust and ice in the atmosphere, but also provide a variety of constraints on the Martian dust and water cycles. These observations have been used to characterize dust storms, to provide ground truth sites for orbiter-based global measurements of dust loading, and to help monitor solar panel performance. In the cost-constrained environment of Mars exploration, future missions may omit such cameras, as the solar-powered InSight mission has. We seek to provide a robust capability of determining atmospheric opacity from sky images taken with cameras that have not been designed for solar imaging, such as the engineering cameras onboard Opportunity and the Mars Hand Lens Imager (MAHLI) on Curiosity. Our investigation focuses primarily on the accuracy of a method that determines optical depth values using scattering models that implement the ratio of sky radiance measurements at different elevation angles, but at the same scattering angle. Operational use requires the ability to retrieve optical depth on a timescale useful to mission planning, and with an accuracy and precision sufficient to support both mission planning and validating orbital measurements. We will present a simulation-based assessment of imaging strategies and their error budgets, as well as a validation based on the comparison of direct extinction measurements from archival Navcam, Hazcam, and MAHLI camera data.
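    The direct-Sun extinction measurement that these sky-radiance retrievals are validated against follows the Beer-Lambert law, I = I0·exp(−τm), where m is the airmass along the line of sight; a one-line sketch (function and argument names are illustrative):

```python
import math

def optical_depth(measured, toa_signal, airmass):
    """Beer-Lambert: I = I0 * exp(-tau * m)  =>  tau = ln(I0 / I) / m."""
    return math.log(toa_signal / measured) / airmass
```

    The sky-radiance-ratio method discussed above exists precisely to avoid needing the top-of-atmosphere calibration I0, which engineering cameras lack.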

  14. Sentinel lymph nodes and lymphatic vessels: noninvasive dual-modality in vivo mapping by using indocyanine green in rats--volumetric spectroscopic photoacoustic imaging and planar fluorescence imaging.

    PubMed

    Kim, Chulhong; Song, Kwang Hyun; Gao, Feng; Wang, Lihong V

    2010-05-01

    To noninvasively map sentinel lymph nodes (SLNs) and lymphatic vessels in rats in vivo by using dual-modality nonionizing imaging-volumetric spectroscopic photoacoustic imaging, which measures optical absorption, and planar fluorescence imaging, which measures fluorescent emission-of indocyanine green (ICG). Institutional animal care and use committee approval was obtained. Healthy Sprague-Dawley rats weighing 250-420 g (age range, 60-120 days) were imaged by using volumetric photoacoustic imaging (n = 5) and planar fluorescence imaging (n = 3) before and after injection of 1 mmol/L ICG. Student paired t tests based on a logarithmic scale were performed to evaluate the change in photoacoustic signal enhancement of SLNs and lymphatic vessels before and after ICG injection. The spatial resolutions of both imaging systems were compared at various imaging depths (2-8 mm) by layering additional biologic tissues on top of the rats in vivo. Spectroscopic photoacoustic imaging was applied to identify ICG-dyed SLNs. In all five rats examined with photoacoustic imaging, SLNs were clearly visible, with a mean signal enhancement of 5.9 arbitrary units (AU) ± 1.8 (standard error of the mean) (P < .002) at 0.2 hour after injection, while lymphatic vessels were seen in four of the five rats, with a signal enhancement of 4.3 AU ± 0.6 (P = .001). In all three rats examined with fluorescence imaging, SLNs and lymphatic vessels were seen. The average full width at half maximum (FWHM) of the SLNs in the photoacoustic images at three imaging depths (2, 6, and 8 mm) was 2.0 mm ± 0.2 (standard deviation), comparable to the size of a dissected lymph node as measured with a caliper. However, the FWHM of the SLNs in fluorescence images widened from 8 to 22 mm as the imaging depth increased, owing to strong light scattering. SLNs were identified spectroscopically in photoacoustic images. 
These two modalities, when used together with ICG, have the potential to help map SLNs in axillary staging and to help evaluate tumor metastasis in patients with breast cancer.
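    The resolution comparison above rests on measuring the full width at half maximum of an image profile. A minimal sketch of that measurement (hypothetical profile data; linear interpolation at the two half-maximum crossings):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile y(x),
    using linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # left crossing: y rises through the half-maximum between i0-1 and i0
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    # right crossing: y falls through the half-maximum between i1 and i1+1
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

# Gaussian test profile: FWHM = 2*sqrt(2*ln 2)*sigma
x = np.linspace(-10.0, 10.0, 2001)
y = np.exp(-x**2 / (2 * 1.0**2))
print(fwhm(x, y))  # ≈ 2.355 for sigma = 1
```

The same routine applied to a lymph-node profile in each modality would reproduce the kind of depth-dependent FWHM comparison the abstract reports.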

  15. Three dimensional single molecule localization using a phase retrieved pupil function

    PubMed Central

    Liu, Sheng; Kromann, Emil B.; Krueger, Wesley D.; Bewersdorf, Joerg; Lidke, Keith A.

    2013-01-01

    Localization-based superresolution imaging is dependent on finding the positions of individual fluorophores in a sample by fitting the observed single-molecule intensity pattern to the microscope point spread function (PSF). For three-dimensional imaging, system-specific aberrations of the optical system can lead to inaccurate localizations when the PSF model does not account for these aberrations. Here we describe the use of phase-retrieved pupil functions to generate a more accurate PSF and therefore more accurate 3D localizations. The complex-valued pupil function contains information about the system-specific aberrations and can thus be used to generate the PSF for arbitrary defocus. Further, it can be modified to include depth-dependent aberrations. We describe the phase retrieval process, the method for including depth-dependent aberrations, and a fast fitting algorithm using graphics processing units. The superior localization accuracy of the pupil-function-generated PSF is demonstrated with dual focal plane 3D superresolution imaging of biological structures. PMID:24514501
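    The central computation described, generating a PSF from a pupil function at arbitrary defocus, can be sketched in a scalar Fourier-optics model. All parameters below (wavelength, NA, sampling, immersion index) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Assumed optical parameters (illustrative only)
n_pix, wavelength, NA = 128, 0.67, 1.4     # pixels, um, numerical aperture
k_max = NA / wavelength                     # pupil radius in spatial frequency
k = np.fft.fftfreq(n_pix, d=0.1)            # 0.1 um sampling in the image plane
kx, ky = np.meshgrid(k, k)
k_r2 = kx**2 + ky**2
pupil = (k_r2 <= k_max**2).astype(complex)  # ideal (aberration-free) pupil

def psf_at_defocus(pupil, z_um, n_imm=1.52):
    """Intensity PSF at axial defocus z: multiply the pupil by the
    defocus phase exp(2*pi*i*z*kz) and take |IFFT|^2, normalized."""
    kz = np.sqrt(np.maximum((n_imm / wavelength)**2 - k_r2, 0.0))
    field = np.fft.ifft2(pupil * np.exp(2j * np.pi * z_um * kz))
    psf = np.abs(field)**2
    return psf / psf.sum()

in_focus = psf_at_defocus(pupil, 0.0)
defocused = psf_at_defocus(pupil, 1.0)
# Defocus spreads the energy, so the normalized peak drops
print(in_focus.max() > defocused.max())  # True
```

A phase-retrieved pupil would replace the ideal binary `pupil` here with a measured complex array; the defocus propagation step is unchanged.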

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Okumura, Satoshi; Komori, Masataka

    We developed a prototype positron emission tomography (PET) system based on a new concept called Open-close PET, which has two modes: open and close. In the open-mode, the detector ring is separated into two halved rings, the subject is imaged through the open space, and a projection image is formed. In the close-mode, the detector ring is closed into a regular circular ring, the subject can be imaged without an open space, and reconstructed images can thus be made without artifacts. The block detector of the Open-close PET system consists of two scintillator blocks that use two types of gadolinium orthosilicate (GSO) scintillators with different decay times, angled optical fiber-based image guides, and a flat panel photomultiplier tube. The GSO pixel size was 1.6 × 2.4 × 7 mm for the fast (35 ns) and 1.6 × 2.4 × 8 mm for the slow (60 ns) GSOs. These GSOs were arranged into an 11 × 15 matrix and optically coupled in the depth direction to form a depth-of-interaction detector. The angled optical fiber-based image guides were used to arrange the two scintillator blocks at 22.5° so that they can be arranged in a hexadecagonal shape with eight block detectors to simplify the reconstruction algorithm. The detector ring was divided into two halves to realize the open-mode and set on a mechanical stand with which the distance between the two parts can be manually changed. The spatial resolution in the close-mode was 2.4-mm FWHM, and the sensitivity was 1.7% at the center of the field-of-view. In both the close- and open-modes, we made sagittal (y-z plane) projection images between the two halved detector rings. We obtained reconstructed and projection images of ¹⁸F-NaF rat studies and proton-irradiated phantom images. These results indicate that our developed Open-close PET is useful for applications such as proton therapy as well as molecular imaging.

  17. Method to optimize patch size based on spatial frequency response in image rendering of the light field

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Wang, Yanan; Zhu, Zhenhao; Su, Jinhui

    2018-05-01

    A focused plenoptic camera can effectively transform angular and spatial information to yield a refocused rendered image with high resolution. However, choosing a proper patch size poses a significant problem for the image-rendering algorithm. By using a spatial frequency response measurement, a method to obtain a suitable patch size is presented. By evaluating the spatial frequency response curves, the optimized patch size can be obtained quickly and easily. Moreover, the range of depth over which images can be rendered without artifacts can be estimated. Experiments show that images rendered with the patch size determined from the frequency response measurement agree with the theoretical calculation, which indicates that this is an effective way to determine the patch size. This study may provide support to light-field image rendering.

  18. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of the proposed algorithm. PMID:27253083
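    The threshold strategy mentioned above can be illustrated with a simple intensity rule; the thresholds below are hypothetical, not those used by the authors:

```python
import numpy as np

def classify_specular(image, spec_thresh=0.7, sat_thresh=0.98):
    """Label each pixel of a grayscale image in [0, 1]:
    0 = diffuse, 1 = unsaturated specular, 2 = saturated specular."""
    labels = np.zeros(image.shape, dtype=int)
    labels[image >= spec_thresh] = 1   # specular but color information survives
    labels[image >= sat_thresh] = 2    # clipped: needs local color refinement
    return labels

img = np.array([[0.20, 0.75],
                [0.99, 0.50]])
print(classify_specular(img))
# [[0 1]
#  [2 0]]
```

In the paper's pipeline, category 1 pixels would then go to the multi-view color variance analysis and category 2 pixels to the local color refinement.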

  19. Statistical model of laminar structure for atlas-based segmentation of the fetal brain from in utero MR images

    NASA Astrophysics Data System (ADS)

    Habas, Piotr A.; Kim, Kio; Chandramohan, Dharshan; Rousseau, Francois; Glenn, Orit A.; Studholme, Colin

    2009-02-01

    Recent advances in MR and image analysis allow for reconstruction of high-resolution 3D images from clinical in utero scans of the human fetal brain. Automated segmentation of tissue types from MR images (MRI) is a key step in the quantitative analysis of brain development. Conventional atlas-based methods for adult brain segmentation are limited in their ability to accurately delineate complex structures of developing tissues from fetal MRI. In this paper, we formulate a novel geometric representation of the fetal brain aimed at capturing the laminar structure of developing anatomy. The proposed model uses a depth-based encoding of tissue occurrence within the fetal brain and provides an additional anatomical constraint in the form of a laminar prior that can be incorporated into conventional atlas-based EM segmentation. Validation experiments are performed using clinical in utero scans of 5 fetal subjects at gestational ages ranging from 20.5 to 22.5 weeks. Experimental results are evaluated against reference manual segmentations and quantified in terms of Dice similarity coefficient (DSC). The study demonstrates that the use of laminar depth-encoded tissue priors improves both the overall accuracy and precision of fetal brain segmentation. Particular refinement is observed in regions of the parietal and occipital lobes where the DSC index is improved from 0.81 to 0.82 for cortical grey matter, from 0.71 to 0.73 for the germinal matrix, and from 0.81 to 0.87 for white matter.
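    The Dice similarity coefficient used for evaluation is DSC = 2|A∩B| / (|A| + |B|); a minimal sketch for binary label masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

seg = np.array([[1, 1, 0],
                [0, 1, 0]])
ref = np.array([[1, 0, 0],
                [0, 1, 1]])
print(dice(seg, ref))  # 2*2 / (3+3) ≈ 0.667
```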

  20. Controllable 3D Display System Based on Frontal Projection Lenticular Screen

    NASA Astrophysics Data System (ADS)

    Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.

    2014-08-01

    A novel auto-stereoscopic three-dimensional (3D) projection display system based on a frontal projection lenticular screen is demonstrated. It can provide a highly realistic 3D experience together with freedom of interaction. In the demonstrated system, the content can be changed and the density of viewing points can be freely adjusted according to the viewers' demand. Densely spaced viewing points provide smooth motion parallax and larger image depth without blurring. The basic principle of stereoscopic display is described first. Then, the design architecture, including hardware and software, is demonstrated. The system consists of a frontal projection lenticular screen, an optimally designed projector-array and a set of multi-channel image processors. The parameters of the frontal projection lenticular screen are based on viewing requirements such as the viewing distance and the width of the view zones. Each projector is arranged on an adjustable platform. The set of multi-channel image processors is made up of six PCs. One of them is used as the main controller; the other five client PCs process 30 channel signals and transmit them to the projector-array. A natural 3D scene is then perceived on the frontal projection lenticular screen with more than 1.5 m of image depth in real time. The control section is presented in detail, including parallax adjustment, system synchronization, distortion correction, etc. Experimental results demonstrate the effectiveness of this novel controllable 3D display system.

  1. Gabor fusion master slave optical coherence tomography

    PubMed Central

    Cernat, Ramona; Bradu, Adrian; Israelsen, Niels Møller; Bang, Ole; Rivet, Sylvain; Keane, Pearse A.; Garway-Heath, David; Rajendram, Ranjan; Podoleanu, Adrian

    2017-01-01

    This paper describes the application of the Gabor filtering protocol to a Master/Slave (MS) swept-source optical coherence tomography (SS-OCT) system at 1300 nm. The MS-OCT system delivers information from selected depths, a property that allows operation similar to that of a time domain OCT system, where dynamic focusing is possible. The Gabor filtering processing following collection of multiple data from different focus positions is different from that utilized by a conventional swept source OCT system using a Fast Fourier transform (FFT) to produce an A-scan. Instead of selecting the bright parts of A-scans for each focus position, to be placed in a final B-scan image (or in a final volume), and discarding the rest, the MS principle can be employed to advantageously deliver signal only from the depths within each focus range. The MS procedure is illustrated by creating volumes of data of constant transversal resolution from a cucumber and from an insect by repeating data acquisition for 4 different focus positions. In addition, advantage is taken of the tolerance to dispersion of the MS principle, which allows automatic compensation for dispersion created by layers above the object of interest. By combining the two techniques, Gabor filtering and Master/Slave, a powerful imaging instrument is demonstrated. The Master/Slave technique allows simultaneous display of three categories of images in one frame: multiple depth en-face OCT images, two cross-sectional OCT images and a confocal-like image obtained by averaging the en-face ones. We also demonstrate the superiority of MS-OCT over its FFT-based counterpart when used with a Gabor filtering OCT instrument in terms of the speed of assembling the fused volume. For our case, we show that when more than 4 focus positions are required to produce the final volume, MS is faster than the conventional FFT-based procedure. PMID:28270987
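    The fusion idea, keeping only the depths within each focus range from every acquisition, can be sketched as a per-depth selection across focus positions (synthetic A-scans and illustrative focus depths; the MS implementation would compute only the selected depths in the first place):

```python
import numpy as np

def gabor_fuse(ascans, focus_depths, z):
    """Fuse A-scans acquired at different focus positions by taking,
    at each depth z[i], the sample from the acquisition whose focus
    depth is nearest."""
    ascans = np.asarray(ascans)                      # shape (n_focus, n_depth)
    dist = np.abs(np.asarray(focus_depths)[:, None] - z[None, :])
    nearest = np.argmin(dist, axis=0)                # best acquisition per depth
    return ascans[nearest, np.arange(len(z))]

z = np.array([0.2, 0.6, 1.1, 1.4, 1.9, 2.6, 2.9])   # depth axis (mm, illustrative)
focus_depths = [0.5, 1.5, 2.5]                       # three focus positions
ascans = [np.full_like(z, k) for k in range(3)]      # tag each acquisition by index
print(gabor_fuse(ascans, focus_depths, z))
# [0. 0. 1. 1. 1. 2. 2.]
```

Each output sample carries the index of the acquisition it came from, making the focus-window selection visible.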

  2. Co-registered Topographical, Band Excitation Nanomechanical, and Mass Spectral Imaging Using a Combined Atomic Force Microscopy/Mass Spectrometry Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikova, Olga S.; Tai, Tamin; Bocharova, Vera

    The advancement of a hybrid atomic force microscopy/mass spectrometry imaging platform demonstrating for the first time co-registered topographical, band excitation nanomechanical, and mass spectral imaging of a surface using a single instrument is reported. The mass spectrometry-based chemical imaging component of the system utilized nanothermal analysis probes for pyrolytic surface sampling followed by atmospheric pressure chemical ionization of the gas phase species produced with subsequent mass analysis. We discuss the basic instrumental setup and operation and the multimodal imaging capability and utility are demonstrated using a phase separated polystyrene/poly(2-vinylpyridine) polymer blend thin film. The topography and band excitation images showed that the valley and plateau regions of the thin film surface were comprised primarily of one of the two polymers in the blend with the mass spectral chemical image used to definitively identify the polymers at the different locations. Data point pixel size for the topography (390 nm × 390 nm), band excitation (781 nm × 781 nm), and mass spectrometry (690 nm × 500 nm) images was comparable and submicrometer in all three cases, but the data voxel size for each of the three images was dramatically different. The topography image was uniquely a surface measurement, whereas the band excitation image included information from an estimated 10 nm deep into the sample and the mass spectral image from 110-140 nm in depth. Moreover, because of this dramatic sampling depth variance, some differences in the band excitation and mass spectrometry chemical images were observed and were interpreted to indicate the presence of a buried interface in the sample. The spatial resolution of the mass spectral image was estimated to be between 1.5 μm and 2.6 μm, based on the ability to distinguish surface features in that image that were also observed in the other images.

  3. Super-resolved all-refocused image with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Li, Lin; Hou, Guangqi

    2015-12-01

    This paper proposes an approach to produce the super-resolution all-refocused images with the plenoptic camera. The plenoptic camera can be produced by putting a micro-lens array between the lens and the sensor in a conventional camera. This kind of camera captures both the angular and spatial information of the scene in one single shot. A sequence of digital refocused images, which are refocused at different depth, can be produced after processing the 4D light field captured by the plenoptic camera. The number of the pixels in the refocused image is the same as that of the micro-lens in the micro-lens array. Limited number of the micro-lens will result in poor low resolution refocused images. Therefore, not enough details will exist in these images. Such lost details, which are often high frequency information, are important for the in-focus part in the refocused image. We decide to super-resolve these in-focus parts. The result of image segmentation method based on random walks, which works on the depth map produced from the 4D light field data, is used to separate the foreground and background in the refocused image. And focusing evaluation function is employed to determine which refocused image owns the clearest foreground part and which one owns the clearest background part. Subsequently, we employ single image super-resolution method based on sparse signal representation to process the focusing parts in these selected refocused images. Eventually, we can obtain the super-resolved all-focus image through merging the focusing background part and the focusing foreground part in the way of digital signal processing. And more spatial details will be kept in these output images. Our method will enhance the resolution of the refocused image, and just the refocused images owning the clearest foreground and background need to be super-resolved.

  4. Co-registered Topographical, Band Excitation Nanomechanical, and Mass Spectral Imaging Using a Combined Atomic Force Microscopy/Mass Spectrometry Platform

    DOE PAGES

    Ovchinnikova, Olga S.; Tai, Tamin; Bocharova, Vera; ...

    2015-03-18

    The advancement of a hybrid atomic force microscopy/mass spectrometry imaging platform demonstrating for the first time co-registered topographical, band excitation nanomechanical, and mass spectral imaging of a surface using a single instrument is reported. The mass spectrometry-based chemical imaging component of the system utilized nanothermal analysis probes for pyrolytic surface sampling followed by atmospheric pressure chemical ionization of the gas phase species produced with subsequent mass analysis. We discuss the basic instrumental setup and operation and the multimodal imaging capability and utility are demonstrated using a phase separated polystyrene/poly(2-vinylpyridine) polymer blend thin film. The topography and band excitation images showed that the valley and plateau regions of the thin film surface were comprised primarily of one of the two polymers in the blend with the mass spectral chemical image used to definitively identify the polymers at the different locations. Data point pixel size for the topography (390 nm × 390 nm), band excitation (781 nm × 781 nm), and mass spectrometry (690 nm × 500 nm) images was comparable and submicrometer in all three cases, but the data voxel size for each of the three images was dramatically different. The topography image was uniquely a surface measurement, whereas the band excitation image included information from an estimated 10 nm deep into the sample and the mass spectral image from 110-140 nm in depth. Moreover, because of this dramatic sampling depth variance, some differences in the band excitation and mass spectrometry chemical images were observed and were interpreted to indicate the presence of a buried interface in the sample. The spatial resolution of the mass spectral image was estimated to be between 1.5 μm and 2.6 μm, based on the ability to distinguish surface features in that image that were also observed in the other images.

  5. Imaging of dental material by polarization-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Dichtl, Sabine; Baumgartner, Angela; Hitzenberger, Christoph K.; Moritz, Andreas; Wernisch, Johann; Robl, Barbara; Sattmann, Harald; Leitgeb, Rainer; Sperr, Wolfgang; Fercher, Adolf F.

    1999-05-01

    Partial coherence interferometry (PCI) and optical coherence tomography (OCT) are noninvasive and noncontact techniques for high precision biometry and for obtaining cross-sectional images of biologic structures. OCT was initially introduced to depict the transparent tissue of the eye. It is based on interferometry employing the partial coherence properties of a light source with high spatial coherence but short coherence length to image structures with a resolution of the order of a few microns. Recently this technique has been modified for cross-sectional imaging of dental and periodontal tissues. In vitro and in vivo OCT images have been recorded, which distinguish enamel, cementum and dentin structures and provide detailed structural information on clinical abnormalities. In contrast to conventional OCT, where the magnitude of backscattered light as a function of depth is imaged, polarization-sensitive OCT uses backscattered light to image the magnitude of the birefringence in the sample as a function of depth. First polarization-sensitive OCT recordings show that changes in the mineralization status of enamel or dentin caused by caries or non-caries lesions can result in changes of the polarization state of the light backscattered by dental material. Therefore polarization-sensitive OCT might provide a new diagnostic imaging modality in clinical and research dentistry.

  6. Combined in-depth, 3D, en face imaging of the optic disc, optic disc pits and optic disc pit maculopathy using swept-source megahertz OCT at 1050 nm.

    PubMed

    Maertz, Josef; Kolb, Jan Philip; Klein, Thomas; Mohler, Kathrin J; Eibl, Matthias; Wieser, Wolfgang; Huber, Robert; Priglinger, Siegfried; Wolf, Armin

    2018-02-01

    To demonstrate papillary imaging of eyes with optic disc pits (ODP) or optic disc pit associated maculopathy (ODP-M) with ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s. To generate 3D-renderings of the papillary area with 3D volume-reconstructions of the ODP and highly resolved en face images from a single densely-sampled megahertz-OCT (MHz-OCT) dataset for investigation of ODP-characteristics. A 1.68 MHz-prototype SS-MHz-OCT system at 1050 nm based on a Fourier-domain mode-locked laser was employed to acquire high-definition, 3D datasets with a dense sampling of 1600 × 1600 A-scans over a 45° field of view. Six eyes with ODPs, and two further eyes with glaucomatous alteration or without ocular pathology are presented. 3D-rendering of the deep papillary structures, virtual 3D-reconstructions of the ODPs and depth resolved isotropic en face images were generated using semiautomatic segmentation. 3D-rendering and en face imaging of the optic disc, ODPs and ODP associated pathologies showed a broad spectrum regarding ODP characteristics. Between individuals the shape of the ODP and the appending pathologies varied considerably. MHz-OCT en face imaging generates distinct top-view images of ODPs and ODP-M. MHz-OCT generates high resolution images of retinal pathologies associated with ODP-M and allows visualizing ODPs with depths of up to 2.7 mm. Different patterns of ODPs can be visualized in patients for the first time using 3D-reconstructions and co-registered high-definition en face images extracted from a single densely sampled 1050 nm megahertz-OCT (MHz-OCT) dataset. As the immediate vicinity to the SAS and the site of intrapapillary proliferation is located at the bottom of the ODP it is crucial to image the complete structure and the whole depth of ODPs. 
Especially in very deep pits, where non-swept-source OCT fails to reach the bottom, conventional swept-source devices and the MHz-OCT alike are feasible and beneficial methods to examine deep details of optic disc pathologies, while the MHz-OCT bears the advantage of an essentially swifter imaging process.

  7. A depth-of-interaction PET detector using mutual gain-equalized silicon photomultiplier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, W.; Weisenberger, A. G.; Dong, H.; Kross, Brian; Lee, S.; McKisson, J.; Zorn, Carl

    We developed a prototype high resolution, high efficiency depth-encoding detector for PET applications based on dual-ended readout of a LYSO array with two silicon photomultipliers (SiPMs). Flood images, energy resolution, and depth-of-interaction (DOI) resolution were measured for a LYSO array, 0.7 mm in crystal pitch and 10 mm in thickness, with four unpolished parallel sides. Flood images were obtained such that each individual crystal element in the array is resolved. The energy resolution of the entire array was measured to be 33%, while that of individual crystal pixel elements utilizing the signal from both sides ranged from 23.3% to 27%. By applying a mutual-gain equalization method, a DOI resolution of 2 mm for the crystal array was obtained in the experiments, while simulations indicate that a DOI resolution of approximately 1 mm could be achieved. The experimental DOI resolution can be further improved with revised detector supporting electronics offering better energy resolution. This study provides a detailed detector calibration and DOI response characterization of dual-ended readout SiPM-based PET detectors, which will be important in the design and calibration of a future PET scanner.
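    Dual-ended DOI readout generally estimates the interaction depth from the ratio of the two photosensor signals; a minimal sketch assuming a linear, gain-equalized ratio-to-depth calibration (illustrative, not the authors' procedure):

```python
def doi_from_dual_readout(s_top, s_bottom, crystal_len_mm=10.0):
    """Estimate depth of interaction from dual-ended signals after gain
    equalization: the ratio R = s_top / (s_top + s_bottom) is roughly
    linear in depth, so DOI ≈ R * L under the simplest calibration."""
    r = s_top / (s_top + s_bottom)
    return r * crystal_len_mm

# Event near the top end of a 10 mm crystal: most light reaches the top SiPM
print(doi_from_dual_readout(80.0, 20.0))  # 8.0 (mm)
```

Real detectors need a measured, often nonlinear, ratio-to-depth curve; gain equalization (as in the abstract) is what makes the ratio comparable across crystals.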

  8. A calibration method immune to the projector errors in fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihua; Guo, Hongwei

    2017-08-01

    In the fringe projection technique, system calibration is a tedious task required to establish the mapping relationship between object depths and fringe phases. In particular, it is not easy to accurately determine the parameters of the projector in this system, which may induce errors in the measurement results. To solve this problem, this paper proposes a new calibration method that uses the cross-ratio invariance in the system geometry to determine the phase-to-depth relations. In it, we analyze the epipolar geometry of the fringe projection system. On each epipolar plane, a depth variation along an incident ray induces a pixel movement along the epipolar line on the image plane of the camera. These depth variations and pixel movements are connected by projective transformations, under which the cross-ratio of each remains invariant. Based on this fact, we suggest measuring the depth map by use of this cross-ratio invariance. Firstly, we shift the reference board in its perpendicular direction to three positions with known depths and measure their phase maps as the reference phase maps; secondly, when measuring an object, we calculate the object depth at each pixel by equating the cross-ratio of the depths to that of the corresponding pixels having the same phase on the image plane of the camera. This method is immune to errors sourced from the projector, including distortions both in the geometric shapes and in the intensity profiles of the projected fringe patterns. The experimental results demonstrate that the proposed method is feasible and valid.
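    The cross-ratio step can be made concrete: for four collinear values a, b, c, d, the cross-ratio CR = ((a - c)(b - d)) / ((b - c)(a - d)) is invariant under projective transformations, so equating the cross-ratio of three known reference depths and the unknown depth with that of the four corresponding pixel positions yields the depth in closed form. A sketch under these assumptions (the projective map used in the test is synthetic, purely for checking the invariance):

```python
def cross_ratio(a, b, c, d):
    return ((a - c) * (b - d)) / ((b - c) * (a - d))

def depth_from_cross_ratio(z1, z2, z3, p1, p2, p3, p):
    """Solve CR(z1, z2, z3, z) = CR(p1, p2, p3, p) for the unknown depth z,
    given three reference depths and four same-phase pixel coordinates."""
    cr = cross_ratio(p1, p2, p3, p)
    # CR = ((z1 - z3)(z2 - z)) / ((z2 - z3)(z1 - z))  ->  solve for z
    k = cr * (z2 - z3) / (z1 - z3)
    return (z2 - k * z1) / (1 - k)

# Synthetic check: pixels related to depths by an arbitrary homography
proj = lambda z: (2 * z + 1) / (z + 3)
z1, z2, z3, z_true = 0.0, 10.0, 20.0, 7.5
p1, p2, p3, p = proj(z1), proj(z2), proj(z3), proj(z_true)
print(round(depth_from_cross_ratio(z1, z2, z3, p1, p2, p3, p), 6))  # 7.5
```

This mirrors the paper's scheme: three reference-board positions supply z1-z3 and their phase maps supply p1-p3, so the projector never needs to be modeled.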

  9. Infrared cloud imaging in support of Earth-space optical communication.

    PubMed

    Nugent, Paul W; Shaw, Joseph A; Piazzolla, Sabino

    2009-05-11

    The increasing need for high data return from near-Earth and deep-space missions is driving a demand for the establishment of Earth-space optical communication links. These links will require a nearly obstruction-free path to the communication platform, so there is a need to measure spatial and temporal statistics of clouds at potential ground-station sites. A technique is described that uses a ground-based thermal infrared imager to provide continuous day-night cloud detection and classification according to the cloud optical depth and potential communication channel attenuation. The benefit of retrieving cloud optical depth and corresponding attenuation is illustrated through measurements that identify cloudy times when optical communication may still be possible through thin clouds.
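    The mapping from retrieved cloud optical depth to direct-beam channel attenuation follows Beer's law, T = exp(-tau), so the attenuation in decibels is 10 log10(e) * tau ≈ 4.343 tau; a one-line sketch (scattering into the receiver field of view is neglected):

```python
import math

def attenuation_db(optical_depth):
    """Direct-beam attenuation (dB) for cloud optical depth tau,
    from Beer's law T = exp(-tau)."""
    return 10.0 * math.log10(math.e) * optical_depth

print(round(attenuation_db(1.0), 2))  # 4.34
```

A thin cirrus with tau ≈ 0.5 thus costs only about 2 dB, which is why the abstract notes that optical communication may remain possible through thin clouds.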

  10. Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.

    PubMed

    Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas

    2016-03-01

    Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.

  11. Lateral femoral notch depth is not associated with increased rotatory instability in ACL-injured knees: a quantitative pivot shift analysis.

    PubMed

    Kanakamedala, Ajay C; Burnham, Jeremy M; Pfeiffer, Thomas R; Herbst, Elmar; Kowalczuk, Marcin; Popchak, Adam; Irrgang, James; Fu, Freddie H; Musahl, Volker

    2018-05-01

    A deep lateral femoral notch (LFN) on lateral radiographs is indicative of ACL injury. Prior studies have suggested that a deep LFN may also be a sign of persistent rotatory instability and a concomitant lateral meniscus tear. Therefore, the purpose of this study was to evaluate the relationship between LFN depth and both quantitative measures of rotatory knee instability and the incidence of lateral meniscus tears. It was hypothesized that greater LFN depth would be correlated with increased rotatory instability, quantified by lateral compartment translation and tibial acceleration during a quantitative pivot shift test, and incidence of lateral meniscus tears. ACL-injured patients enrolled in a prospective ACL registry from 2014 to 2016 were analyzed. To limit confounders, patients were only included if they had primary ACL tears, no concurrent ligamentous or bony injuries requiring operative treatment, and no previous knee injuries or surgeries to either knee. Eighty-four patients were included in the final analysis. A standardized quantitative pivot shift test was performed pre-operatively under anesthesia in both knees, and rotatory instability, specifically lateral compartment translation and tibial acceleration, was quantified using tablet image analysis software and accelerometer sensors. Standard lateral radiographs and sagittal magnetic resonance images (MRI) of the injured knee were evaluated for LFN depth. There were no significant correlations between LFN depth on either imaging modality and ipsilateral lateral compartment translation or tibial acceleration during a quantitative pivot shift test or side-to-side differences in these measurements. Patients with lateral meniscus tears were found to have significantly greater LFN depths than those without on conventional radiograph and MRI (1.0 vs. 0.6 mm, p < 0.05; 1.2 vs. 0.8 mm, p < 0.05, respectively). 
There was no correlation between lateral femoral notch depth on conventional radiographs or MRI and quantitative measures of rotatory instability. Concomitant lateral meniscus injury was associated with significantly greater LFN depth. Based on these findings, LFN depth should not be used as an indicator of excessive rotatory instability, but may be an indicator of lateral meniscus injury in ACL-injured patients. Prognostic level IV.

  12. Learning spatially coherent properties of the visual world in connectionist networks

    NASA Astrophysics Data System (ADS)

    Becker, Suzanna; Hinton, Geoffrey E.

    1991-10-01

    In the unsupervised learning paradigm, a network of neuron-like units is presented with an ensemble of input patterns from a structured environment, such as the visual world, and learns to represent the regularities in that input. The major goal in developing unsupervised learning algorithms is to find objective functions that characterize the quality of the network's representation without explicitly specifying the desired outputs of any of the units. The sort of objective functions considered cause a unit to become tuned to spatially coherent features of visual images (such as texture, depth, shading, and surface orientation), by learning to predict the outputs of other units which have spatially adjacent receptive fields. Simulations show that using an information-theoretic algorithm called IMAX, a network can be trained to represent depth by observing random dot stereograms of surfaces with continuously varying disparities. Once a layer of depth-tuned units has developed, subsequent layers are trained to perform surface interpolation of curved surfaces, by learning to predict the depth of one image region based on depth measurements in surrounding regions. An extension of the basic model allows a population of competing neurons to learn a distributed code for disparity, which naturally gives rise to a representation of discontinuities.

  13. Acoustic Reverse Time Migration of the Cascadia Subduction Zone Dataset

    NASA Astrophysics Data System (ADS)

    Jia, L.; Mallick, S.

    2017-12-01

    Reverse time migration (RTM) is a wave-equation based migration method, which provides more accurate images than ray-based migration methods, especially for structures in deep areas, making it an effective tool for imaging the subduction plate boundary. In this work, we extend the work of Fortin (2015) and apply acoustic finite-element RTM to the Cascadia Subduction Zone (CSZ) dataset. The dataset was acquired by the Cascadia Open-Access Seismic Transects (COAST) program, targeting the megathrust in the central Cascadia subduction zone (Figure 1). The data on a 2D seismic reflection line that crosses the Juan de Fuca/North American subduction boundary off Washington (Line 5) were pre-processed and run through Kirchhoff prestack depth migration (PSDM). Figure 2 compares the depth image of Line 5 of the CSZ data using Kirchhoff PSDM (top) and RTM (bottom). In both images, the subducting plate is indicated with yellow arrows. Notice that the RTM image is superior to the PSDM image in several respects. First, the plate boundary appears much more continuous in the RTM image than in the PSDM image. Second, the RTM image indicates the subducting plate is relatively smooth on the seaward (west) side between 0-50 km. Within the deformation front of the accretionary prism (50-80 km), the RTM image shows substantial roughness in the subducting plate. These features are not clear in the PSDM image. Third, the RTM image shows many fine structures below the subducting plate which are almost absent in the PSDM image. Finally, the RTM image indicates that the plate is gently dipping within the undeformed sediment (0-50 km) and becomes steeply dipping beyond 50 km as it enters the deformation front of the accretionary prism. Although the same conclusion could be drawn from the discontinuous plate boundary imaged by PSDM, the RTM results are far more convincing.

  14. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Characterisation of optically cleared paper by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Fabritius, T.; Alarousu, E.; Prykäri, T.; Hast, J.; Myllylä, Risto

    2006-02-01

    Due to the highly light-scattering nature of paper, the imaging depth of optical methods such as optical coherence tomography (OCT) is limited. In this work, we study the effect of refractive index matching on improving the imaging depth of OCT in paper. To this end, four different refractive index matching liquids (ethanol, 1-pentanol, glycerol and benzyl alcohol), with refractive indices between 1.359 and 1.538, were used in experiments. Low-coherence light transmission was studied in commercial copy paper sheets, and the results indicate that benzyl alcohol offers the best improvement in imaging depth, while also being sufficiently stable for the intended purpose. Constructed cross-sectional images demonstrate visually that the imaging depth of OCT is considerably improved by optical clearing. Both surfaces of paper sheets can be detected, along with information about the sheet's inner structure.

  15. Anatomy-based algorithm for automatic segmentation of human diaphragm in noncontrast computed tomography images

    PubMed Central

    Karami, Elham; Wang, Yong; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2016-01-01

    Abstract. In-depth understanding of the diaphragm’s anatomy and physiology has been of great interest to the medical community, as it is the most important muscle of the respiratory system. While noncontrast four-dimensional (4-D) computed tomography (CT) imaging provides an interesting opportunity for effective acquisition of anatomical and/or functional information from a single modality, segmenting the diaphragm in such images is very challenging not only because of the diaphragm’s lack of image contrast with its surrounding organs but also because of respiration-induced motion artifacts in 4-D CT images. To account for such limitations, we present an automatic segmentation algorithm, which is based on a priori knowledge of diaphragm anatomy. The novelty of the algorithm lies in using the diaphragm’s easy-to-segment contacting organs—including the lungs, heart, aorta, and ribcage—to guide the diaphragm’s segmentation. Obtained results indicate that average mean distance to the closest point between diaphragms segmented using the proposed technique and corresponding manual segmentation is 2.55±0.39  mm, which is favorable. An important feature of the proposed technique is that it is the first algorithm to delineate the entire diaphragm. Such delineation facilitates applications, where the diaphragm boundary conditions are required such as biomechanical modeling for in-depth understanding of the diaphragm physiology. PMID:27921072
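
    The "mean distance to the closest point" reported above is a standard surface-distance metric and is easy to state concretely. The sketch below is our own minimal illustration on toy 3-D point sets, not the study's evaluation code; the point coordinates are invented.

```python
import math

def mean_closest_distance(a, b):
    """Mean, over the points of a, of the distance to the closest point
    of b (one half of the symmetric surface-distance metric)."""
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

# Toy stand-ins for automatically and manually delineated surface points.
auto = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
manual = [(0.0, 0.3, 0.0), (1.0, 0.0, 0.4), (2.0, 0.0, 0.0)]
mcd = mean_closest_distance(auto, manual)
```

    Averaging the two directed versions (a to b and b to a) gives the symmetric form usually quoted in segmentation papers.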

  16. Image-based path planning for automated virtual colonoscopy navigation

    NASA Astrophysics Data System (ADS)

    Hong, Wei

    2008-03-01

    Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening that reconstructs three-dimensional models of the colon using computerized tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. To extract the colon centerline, some time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation that requires no pre-processing. Our algorithm needs only a seed point, provided by the physician on a 2D axial CT image, as the starting camera position. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. The camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase user comfort during fly-through navigation. Moreover, because of the efficiency of our path planning and rendering algorithms, our VC fly-through navigation system can still guarantee 30 FPS.
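
    The core idea, steering the camera toward the deepest region of the rendered depth image (the far end of the lumen), can be sketched very simply. This is a hypothetical illustration with our own toy depth image and a naive pinhole angle mapping; the paper uses a fisheye model and richer safe/target-region logic.

```python
def deepest_region(depth):
    """Return (row, col) of the maximum value in a 2-D depth image."""
    best = (0, 0)
    for r, row in enumerate(depth):
        for c, d in enumerate(row):
            if d > depth[best[0]][best[1]]:
                best = (r, c)
    return best

def next_view_direction(depth, fov=90.0):
    """Map the deepest pixel to (pitch, yaw) angles in degrees, assuming
    a simple linear mapping over the field of view (an assumption; the
    paper uses a wide-angle fisheye projection)."""
    rows, cols = len(depth), len(depth[0])
    r, c = deepest_region(depth)
    pitch = (r - (rows - 1) / 2) / (rows - 1) * fov
    yaw = (c - (cols - 1) / 2) / (cols - 1) * fov
    return pitch, yaw

depth = [
    [1.0, 1.2, 1.1],
    [1.3, 2.0, 6.5],   # deepest pixel: the lumen continues to the right
    [1.0, 1.4, 1.2],
]
pitch, yaw = next_view_direction(depth)
```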

  17. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images

    PubMed Central

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-01-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand-gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess its effectiveness. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives, and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate, and access MRI images, and therefore this modality could replace keyboard- and mouse-based interfaces. PMID:23250787

  18. Corneal topography with high-speed swept source OCT in clinical examination

    PubMed Central

    Karnowski, Karol; Kaluzny, Bartlomiej J.; Szkulmowski, Maciej; Gora, Michalina; Wojtkowski, Maciej

    2011-01-01

    We present the applicability of high-speed swept source (SS) optical coherence tomography (OCT) for quantitative evaluation of corneal topography. A high-speed OCT device acquiring 108,000 lines/s permits dense 3D imaging of the anterior segment in less than a quarter of a second, minimizing the influence of motion artifacts on the final images and topographic analysis. The swept laser performance was specially adapted to meet imaging depth requirements. For the first time to our knowledge, the results of a quantitative corneal analysis based on SS OCT are presented for clinical pathologies such as keratoconus, a cornea with a superficial postinfectious scar, and a cornea 5 months after penetrating keratoplasty. Additionally, a comparison with widely used commercial systems, a Placido-based topographer and a Scheimpflug imaging-based topographer, is demonstrated. PMID:21991558

  19. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images.

    PubMed

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-06-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand-gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess its effectiveness. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives, and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate, and access MRI images, and therefore this modality could replace keyboard- and mouse-based interfaces.

  20. Detection of cortical optical changes during seizure activity using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ornelas, Danielle; Hasan, Md.; Gonzalez, Oscar; Krishnan, Giri; Szu, Jenny I.; Myers, Timothy; Hirota, Koji; Bazhenov, Maxim; Binder, Devin K.; Park, Boris H.

    2017-02-01

    Electrophysiology has remained the gold standard for detecting neural activity, but its resolution and high susceptibility to noise and motion artifacts limit its efficiency. Imaging techniques, including fMRI, intrinsic optical imaging, and diffuse optical imaging, have been used to detect neural activity, but they rely on indirect measurements such as changes in blood flow. Fluorescence-based techniques, including genetically encoded indicators, are powerful but require the introduction of an exogenous fluorophore. A more direct optical imaging technique is optical coherence tomography (OCT), a label-free, high-resolution, and minimally invasive imaging technique that can produce depth-resolved cross-sectional and 3D images. In this study, we sought to examine non-vascular, depth-dependent optical changes directly related to neural activity. We used an OCT system centered at 1310 nm to search for changes in an ex vivo brain slice preparation and an in vivo model during 4-AP-induced seizure onset and propagation, with simultaneous electrical recording. By utilizing Doppler OCT and the depth dependency of the attenuation coefficient, we demonstrate the ability to locate and remove the optical effects of vasculature within the upper regions of the cortex from in vivo attenuation calculations. The results of this study show a non-vascular decrease in intensity and attenuation in ex vivo and in vivo seizure models, respectively. Regions exhibiting decreased optical changes show significant temporal correlation with regions of increased electrical activity during seizure. This study allows for a thorough and biologically relevant analysis of the optical signature of seizure activity, both ex vivo and in vivo, using OCT.
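
    The attenuation coefficient mentioned above is commonly estimated from an OCT A-line by fitting the single-scattering decay model I(z) ~ I0·exp(-2·mu·z). The sketch below is a minimal noiseless illustration of that fit under the single-scattering assumption, not the study's processing pipeline; the sampling interval and mu value are invented.

```python
import math

def attenuation_coefficient(intensity, dz):
    """Estimate mu (1/mm) by a least-squares line fit to log(I(z)),
    using the single-scattering model I(z) ~ I0 * exp(-2*mu*z)."""
    n = len(intensity)
    z = [i * dz for i in range(n)]
    y = [math.log(v) for v in intensity]
    zm, ym = sum(z) / n, sum(y) / n
    slope = (sum((zi - zm) * (yi - ym) for zi, yi in zip(z, y))
             / sum((zi - zm) ** 2 for zi in z))
    return -slope / 2.0  # factor 2: round-trip attenuation

# Synthetic A-line with mu = 1.5 mm^-1 at 0.01 mm axial sampling.
mu_true, dz = 1.5, 0.01
aline = [math.exp(-2.0 * mu_true * i * dz) for i in range(200)]
mu_est = attenuation_coefficient(aline, dz)
```

    Depth-resolved variants fit the model over a sliding axial window instead of the whole A-line.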

  1. Solving the inverse scattering problem in reflection-mode dynamic speckle-field phase microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhou, Renjie; So, Peter T. C.; Yaqoob, Zahid; Jin, Di; Hosseini, Poorya; Kuang, Cuifang; Singh, Vijay Raj; Kim, Yang-Hyo; Dasari, Ramachandra R.

    2017-02-01

    Most quantitative phase microscopy systems are unable to provide depth-resolved information for measuring complex biological structures. Optical diffraction tomography provides a non-trivial solution by reconstructing the object in 3D from multiple measurements obtained in different ways. Previously, our lab developed a reflection-mode dynamic speckle-field phase microscopy (DSPM) technique, which can perform depth-resolved measurements in a single shot. The system is thus suitable for measuring dynamics in a layer of interest in the sample. DSPM can also be used for tomographic imaging, which promises to solve the long-standing "missing cone" problem in 3D imaging. However, the 3D imaging theory for this type of system has not been developed in the literature. Recently, we developed an inverse scattering model to rigorously describe the imaging physics in DSPM. Our model is based on diffraction tomography theory and speckle statistics. Using our model, we first precisely calculated the defocus response and the depth resolution of our system. We then calculated the 3D coherence transfer function to link the 3D object structure with the axially scanned imaging data. From this transfer function, we found that in reflection mode an excellent sectioning effect exists in the low lateral spatial frequency region, allowing us to solve the "missing cone" problem. Currently, we are working on using this coherence transfer function to reconstruct layered structures and complex cells.

  2. Large-scale image-based profiling of single-cell phenotypes in arrayed CRISPR-Cas9 gene perturbation screens.

    PubMed

    de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas

    2018-01-23

    High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.

  3. Validation of luminescent source reconstruction using spectrally resolved bioluminescence images

    NASA Astrophysics Data System (ADS)

    Virostko, John M.; Powers, Alvin C.; Jansen, E. D.

    2008-02-01

    This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in reconstructing light source depth and intensity. Constant-intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity from these spectrally filtered images. For sources shallower than the mean free path of light, there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average errors in depth and intensity reconstruction were less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant-intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. Reconstruction accuracy suffered in inhomogeneous tissue, with accuracy influenced by the choice of optical properties used in reconstruction.
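
    The reason spectrally filtered images encode source depth is that tissue attenuation is wavelength dependent, so the ratio of surface signals at two wavelengths constrains how far the light travelled. The Living Image package solves a full diffusion model; the sketch below shows only this underlying intuition under a naive Beer-Lambert assumption, with made-up attenuation coefficients.

```python
import math

def depth_from_two_bands(i1, i2, mu1, mu2):
    """Solve I_k = S * exp(-mu_k * d) for the depth d given two bands."""
    return math.log(i2 / i1) / (mu1 - mu2)

mu560, mu620 = 2.0, 0.8      # effective attenuation, 1/mm (illustrative)
S, d_true = 100.0, 3.0       # source strength and depth (mm)
i560 = S * math.exp(-mu560 * d_true)
i620 = S * math.exp(-mu620 * d_true)
d = depth_from_two_bands(i560, i620, mu560, mu620)
```

    With the depth known, the source intensity follows from either band, e.g. S = i620 * exp(mu620 * d).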

  4. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    PubMed

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and to improve processing speed, we propose a rapid 3D reconstruction method based on an image queue that exploits the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points using principal component analysis. To select key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud is obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in calculation speed becomes more noticeable.
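
    The first step, compressing an image's feature points into a few "principal component points", can be sketched for 2-D points as the centroid plus one point per principal axis, scaled by the standard deviation along that axis. This is our own reading of the construction (the paper's exact definition may differ), using a closed-form eigendecomposition of the 2x2 covariance matrix.

```python
import math

def principal_component_points(points):
    """Compress a 2-D point set into three points: the centroid plus one
    point along each principal axis, offset by the axis std deviation."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # 2x2 covariance matrix entries
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # closed-form eigenvalues of a symmetric 2x2 matrix
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    if abs(cxy) > 1e-12:
        v1 = (l1 - cyy, cxy)
    else:
        v1 = (1.0, 0.0) if cxx >= cyy else (0.0, 1.0)
    norm = math.hypot(*v1)
    v1 = (v1[0] / norm, v1[1] / norm)
    v2 = (-v1[1], v1[0])           # orthogonal second axis
    s1, s2 = math.sqrt(l1), math.sqrt(max(l2, 0.0))
    return [(mx, my),
            (mx + s1 * v1[0], my + s1 * v1[1]),
            (mx + s2 * v2[0], my + s2 * v2[1])]

pts = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0), (6.0, 0.0)]  # collinear points
centroid, p1, p2 = principal_component_points(pts)
```

    Comparing these three points between frames gives a cheap proxy for how much the feature distribution has moved, which is what the key-image selection needs.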

  5. Imaging patients with glaucoma using spectral-domain optical coherence tomography and optical microangiography

    NASA Astrophysics Data System (ADS)

    Auyeung, Kris; Auyeung, Kelsey; Kono, Rei; Chen, Chieh-Li; Zhang, Qinqin; Wang, Ruikang K.

    2015-03-01

    In ophthalmology, a reliable means of diagnosing glaucoma in its early stages is still an open issue. Past efforts to develop a potential biomarker for the disease, including forays into fluorescent angiography (FA) and early optical coherence tomography (OCT) systems, have been explored. However, this development has been hindered by the inability of current techniques to provide useful depth and microvasculature information about the optic nerve head (ONH), which have been debated as possible hallmarks of glaucoma progression. We reasoned that a system incorporating spectral-domain OCT (SD-OCT) based Optical Microangiography (OMAG) could provide an effective, non-invasive methodology to evaluate the effects of glaucoma on microvasculature. SD-OCT follows the principle of light reflection and interference to produce detailed cross-sectional and 3D images of the eye. OMAG produces imaging contrast via endogenous light scattering from moving particles, allowing 3D imaging of dynamic blood perfusion at capillary-level resolution. The purpose of this study was to investigate optic cup perfusion (flow) differences between glaucomatous and normal eyes. Images from three normal and five glaucomatous subjects were analyzed with our OCT-based OMAG system for blood perfusion and structural images, allowing for comparisons. Preliminary results from blood flow analysis revealed reduced blood perfusion within the whole-depth region encompassing the lamina cribrosa in glaucomatous cases as compared to normal ones. We conclude that our OCT-OMAG system may hold promise and viability for glaucoma screening.

  6. Phase pupil functions for focal-depth enhancement derived from a Wigner distribution function.

    PubMed

    Zalvidea, D; Sicre, E E

    1998-06-10

    A method for obtaining phase-retardation functions, which give rise to an increase of the image focal depth, is proposed. To this end, the Wigner distribution function corresponding to a specific aperture that has an associated small depth of focus in image space is conveniently sheared in the phase-space domain to generate a new Wigner distribution function. From this new function a more uniform on-axis image irradiance can be accomplished. This approach is illustrated by comparison of the imaging performance of both the derived phase function and a previously reported logarithmic phase distribution.

  7. Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images

    NASA Astrophysics Data System (ADS)

    Nagao, Jiro; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka

    2006-03-01

    We have developed a novel system that provides total support for the assessment of alveolar bone resorption, caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty in perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which provides merely one-dimensional information (depth) about resorption shape. However, there has been little work on assisting assessment of the disease with 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system, which measures the three-dimensional shape and spread of resorption. It has the following functions: (1) it measures the depth of resorption by virtually simulating probing in the 3-D CT images, taking advantage of image processing, which suffers no obstruction by teeth on the inter-proximal sides and allows much smaller measurement intervals than the conventional examination; (2) it visualizes the distribution of depth with movies and graphs; (3) it produces a quantitative index and an intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) it calculates the volume of resorption as another severity index, both in the inter-radicular region and in the region outside it. Experimental results on two cases of 3-D dental CT images, together with a comparison against the clinical examination results and experts' measurements for the corresponding patients, confirmed that the proposed system gives satisfying results, including 0.1 to 0.6 mm of resorption measurement (probing) error and fairly intuitive presentation of the measurement and calculation results.

  8. Stereo-Based Region-Growing using String Matching

    NASA Technical Reports Server (NTRS)

    Mandelbaum, Robert; Mintz, Max

    1995-01-01

    We present a novel stereo algorithm based on a coarse texture segmentation preprocessing phase. Matching is performed using string comparison: matching sub-strings correspond to matching sequences of textures. Inter-scanline clustering of matching sub-strings yields regions of matching texture. The shape of these regions yields information concerning an object's height, width, and azimuthal position relative to the camera pair. Hence, rather than the standard dense depth map, the output of this algorithm is a segmentation of the objects in the scene. Such a format is useful for the integration of stereo with other sensor modalities on a mobile robotic platform. It is also useful for localization; the height and width of a detected object may be used for landmark recognition, while depth and relative azimuthal location determine pose. The algorithm does not rely on the monotonicity of order of image primitives. Occlusions, exposures, and foreshortening effects are not problematic. The algorithm can deal with certain types of transparencies. It is computationally efficient and very amenable to parallel implementation. Further, the epipolar constraints may be relaxed to some small but significant degree. A version of the algorithm has been implemented and tested on various types of images. It performs best on random-dot stereograms, on images with easily filtered backgrounds (as in synthetic images), and on real scenes with uncontrived backgrounds.
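
    The string-matching step can be illustrated with Python's standard library: reduce each scanline to a string of coarse texture labels, find matching sub-strings between the left and right scanlines, and read a region disparity off the offset of each matched block. The toy labels below are our own; the paper's texture segmentation and clustering across scanlines are not reproduced here.

```python
from difflib import SequenceMatcher

left = "aaabbbbccaaa"    # one scanline of texture labels, left image
right = "aabbbbccaaaa"   # same scanline in the right image, object shifted

m = SequenceMatcher(None, left, right, autojunk=False)
# keep only substantial matches; tiny blocks are likely coincidental
blocks = [b for b in m.get_matching_blocks() if b.size >= 4]
# disparity of each matched texture region = left index - right index
disparities = [b.a - b.b for b in blocks]
```

    Because whole label runs are matched instead of individual pixels, the output is naturally region-based, which is what gives the algorithm its segmentation-style output.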

  9. Clinical optical coherence tomography combined with multiphoton tomography of patients with skin diseases.

    PubMed

    König, Karsten; Speicher, Marco; Bückle, Rainer; Reckfort, Julia; McKenzie, Gordon; Welzel, Julia; Koehler, Martin J; Elsner, Peter; Kaatz, Martin

    2009-07-01

    We report on the first clinical study based on optical coherence tomography (OCT) in combination with multiphoton tomography (MPT) and dermoscopy. 47 patients with a variety of skin diseases and disorders such as skin cancer, psoriasis, hemangioma, connective tissue diseases, pigmented lesions, and autoimmune bullous skin diseases have been investigated with (i) state-of-the-art OCT systems for dermatology including multibeam swept source OCT, (ii) the femtosecond laser multiphoton tomograph, and (iii) dermoscopes. Dermoscopy provides two-dimensional color images of the skin surface. OCT images reflect modifications of the intratissue refractive index, whereas MPT is based on nonlinear excitation of endogenous fluorophores and second harmonic generation. A stack of cross-sectional OCT "wide field" images with a typical field of view of 5 × 2 mm² gave fast information on the depth and the volume of the lesion. Multiphoton tomography provided 0.36 × 0.36 mm² horizontal/diagonal optical sections of a particular region of interest within seconds, with superior submicron resolution down to a tissue depth of 200 μm. The combination of OCT and MPT provides a unique and powerful optical imaging modality for the early detection of skin cancer and other skin diseases, as well as for evaluating the efficiency of treatments.

  10. A method of extending the depth of focus of the high-resolution X-ray imaging system employing optical lens and scintillator: a phantom study.

    PubMed

    Li, Guang; Luo, Shouhua; Yan, Yuling; Gu, Ning

    2015-01-01

    The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens, and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. Based on this advantage, it can effectively image tissues, cells, and many other small samples, especially calcifications in the vasculature or in the glomerulus. In general, the scintillator should be only a few micrometers thick, or even thinner, because its thickness strongly affects the resolution. However, it is difficult to make the scintillator so thin, and a thin scintillator may greatly reduce the efficiency of collecting photons. In this paper, we propose an approach that extends the depth of focus (DOF) to solve these problems. We first derive equation sets relating the high-resolution image generated by the scintillator to the blurred image degraded by defect of focus, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. Using a 20 μm thick unmatched scintillator in place of the 1 μm thick matched one, we simulated a high-resolution X-ray imaging system and obtained a degraded, blurred image. Based on the proposed algorithm, we recovered the blurred image, and the experimental results showed that the algorithm performs well in recovering image blur caused by an unmatched scintillator thickness. The proposed method is thus shown to efficiently recover images degraded by defect of focus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded blurred image, so there is room for improvement, and the corresponding denoising algorithm is worthy of further study and discussion.
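
    The POCS idea, alternately projecting the estimate onto constraint sets until the sets' intersection is approached, can be shown on a 1-D toy deblurring problem. This is a simplified stand-in for the paper's POCS plus total-variation scheme: the second constraint here is plain non-negativity rather than a TV bound, and the blur kernel and signal are invented.

```python
def blur(x, h):
    """'Same'-size 1-D convolution with a short kernel (zero padding)."""
    k, n = len(h) // 2, len(x)
    return [sum(h[j] * x[i + j - k] for j in range(len(h))
                if 0 <= i + j - k < n) for i in range(n)]

def pocs_deblur(y, h, iters=200):
    """Alternate a data-consistency (Landweber) step with projection onto
    the non-negative set; a toy POCS-style restoration."""
    x = y[:]
    for _ in range(iters):
        r = [yi - bi for yi, bi in zip(y, blur(x, h))]
        corr = blur(r, h)          # h is symmetric, so H^T r equals H r
        x = [max(xi + ci, 0.0) for xi, ci in zip(x, corr)]
    return x

h = [0.25, 0.5, 0.25]              # defocus-like blur kernel
x_true = [0.0] * 32
x_true[16] = 1.0                   # a point feature
y = blur(x_true, h)                # observed blurred signal

x_hat = pocs_deblur(y, h)
res0 = sum((yi - bi) ** 2 for yi, bi in zip(y, blur(y, h))) ** 0.5
res = sum((yi - bi) ** 2 for yi, bi in zip(y, blur(x_hat, h))) ** 0.5
```

    Each projection is non-expansive, so the data-consistency residual shrinks monotonically while the point feature sharpens back toward its true amplitude.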

  11. A method of extending the depth of focus of the high-resolution X-ray imaging system employing optical lens and scintillator: a phantom study

    PubMed Central

    2015-01-01

    Background The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens, and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. Based on this advantage, it can effectively image tissues, cells, and many other small samples, especially calcifications in the vasculature or in the glomerulus. In general, the scintillator should be only a few micrometers thick, or even thinner, because its thickness strongly affects the resolution. However, it is difficult to make the scintillator so thin, and a thin scintillator may greatly reduce the efficiency of collecting photons. Methods In this paper, we propose an approach that extends the depth of focus (DOF) to solve these problems. We first derive equation sets relating the high-resolution image generated by the scintillator to the blurred image degraded by defect of focus, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. Results Using a 20 μm thick unmatched scintillator in place of the 1 μm thick matched one, we simulated a high-resolution X-ray imaging system and obtained a degraded, blurred image. Based on the proposed algorithm, we recovered the blurred image, and the experimental results showed that the algorithm performs well in recovering image blur caused by an unmatched scintillator thickness. Conclusions The proposed method is thus shown to efficiently recover images degraded by defect of focus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded blurred image, so there is room for improvement, and the corresponding denoising algorithm is worthy of further study and discussion. PMID:25602532

  12. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem, and it is even more difficult for images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer from attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination source close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then used as the input images for stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
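
    The inversion step of such a model can be sketched compactly: if each observed pixel is modelled as attenuated object radiance plus a non-uniform backscatter term, I = J·t + B, then the object radiance follows as J = (I - B)/t once t and B are known. In the sketch below t and B are given synthetically; estimating them from the physical model is the actual contribution of the paper, and the pixel values are invented.

```python
def descatter(I, t, B, eps=1e-6):
    """Invert I = J*t + B per pixel; eps guards against division by ~0
    transmission in fully occluded regions."""
    return [[(I[r][c] - B[r][c]) / max(t[r][c], eps)
             for c in range(len(I[0]))] for r in range(len(I))]

J_true = [[0.2, 0.8], [0.5, 0.1]]     # true object radiance (toy 2x2 image)
t = [[0.6, 0.7], [0.5, 0.8]]          # per-pixel medium transmission
B = [[0.3, 0.25], [0.35, 0.2]]        # non-uniform backscatter of the lamp
I = [[J_true[r][c] * t[r][c] + B[r][c] for c in range(2)] for r in range(2)]
J = descatter(I, t, B)
```

    Feeding the recovered J images (instead of the raw I images) into stereo matching is what restores usable disparity estimates in the scattering medium.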

  13. 2D/3D facial feature extraction

    NASA Astrophysics Data System (ADS)

    Çinar Akakin, Hatice; Ali Salah, Albert; Akarun, Lale; Sankur, Bülent

    2006-02-01

    We propose and compare three different automatic landmarking methods for near-frontal faces. The face information is provided as 480x640 gray-level images in addition to the corresponding 3D scene depth information. All three methods follow a coarse-to-fine strategy and use the 3D information in an assisting role. The first method employs a combination of principal component analysis (PCA) and independent component analysis (ICA) features to analyze a Gabor feature set. The second method uses a subset of DCT coefficients for template-based matching. These two methods employ SVM classifiers with polynomial kernel functions. The third method uses a mixture of factor analyzers to learn Gabor filter outputs. We contrast the localization performance obtained separately with 2D texture and 3D depth information. Although the 3D depth information per se does not perform as well as texture images in landmark localization, it still plays a beneficial role in eliminating the background and reducing false alarms.

  14. The application research of microwave nondestructive testing and imaging based on ω-k algorithm

    NASA Astrophysics Data System (ADS)

    Qi, Shengxiang; Ren, Jian; Gu, Lihua; Xu, Hui; Wang, Yuanbo

    2017-07-01

    Several bridges have collapsed in recent years because of quality problems, so nondestructive testing of concrete is particularly important. At present, most applications use Ground Penetrating Radar (GPR) technology to inspect reinforced concrete structures. GPR uses the pulse method, which, despite its definite advantages, offers very low resolution when testing the internal structure of thin concrete. In this paper, we apply ultra-wideband (UWB) stepped-frequency radar to these problems for the first time. We use a microwave imaging system consisting of a vector network analyzer and a double-ridged horn antenna to test a reinforced concrete block. The internal structure of the concrete is reconstructed with a synthetic-aperture ω-k algorithm. With this method, a steel bar with a diameter of 1 cm is located accurately inside a 450 mm × 400 mm × 500 mm block, and the depth error does not exceed 1 cm.
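
    The focusing principle of synthetic-aperture imaging can be illustrated without the full frequency-domain machinery. The sketch below uses time-domain delay-and-sum backprojection, a simpler relative of the ω-k (Stolt) algorithm that images the same point target; ω-k achieves the equivalent focusing with FFTs and a Stolt frequency remapping. All quantities (wave speed in concrete, geometry, pulse shape) are invented for illustration.

```python
import math

v = 0.12                      # assumed wave speed in concrete, m/ns
target = (0.25, 0.4)          # (x, z) of a buried steel bar, metres
scan = [i * 0.05 for i in range(11)]   # antenna positions along x, metres

def pulse(t, width=1.0):
    """Simple monostatic echo envelope (time in ns)."""
    return math.exp(-(t / width) ** 2)

def trace(xa, t):
    """Synthetic received signal at antenna xa, evaluated at time t."""
    r = math.hypot(xa - target[0], target[1])
    return pulse(t - 2.0 * r / v)   # two-way delay to the scatterer

# Backprojection: for every image pixel, sum each trace at the pixel's
# own two-way delay; energy adds coherently only at the true scatterer.
xs = [i * 0.05 for i in range(11)]
zs = [0.1 + i * 0.05 for i in range(11)]
image = [[sum(trace(xa, 2.0 * math.hypot(xa - x, z) / v) for xa in scan)
          for x in xs] for z in zs]

best = max(((iz, ix) for iz in range(len(zs)) for ix in range(len(xs))),
           key=lambda p: image[p[0]][p[1]])
focus = (xs[best[1]], zs[best[0]])
```

    The stepped-frequency data from the network analyzer would first be inverse-Fourier-transformed into such time-domain traces before focusing.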

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honda, M.; Kudo, T.; Terada, H.

    We made near-infrared multicolor imaging observations of a disk around Herbig Be star HD 100546 using Gemini/NICI. K (2.2 μm), H{sub 2}O ice (3.06 μm), and L′ (3.8 μm) disk images were obtained and we found a 3.1 μm absorption feature in the scattered light spectrum, likely due to water ice grains at the disk surface. We compared the observed depth of the ice absorption feature with the disk model based on Oka et al., including the water ice photodesorption effect by stellar UV photons. The observed absorption depth can be explained by the disk models both with and without the photodesorption effect within the measurement accuracy, but the model with photodesorption effects is slightly more favored, implying that the UV photons play an important role in the survival/destruction of ice grains at the Herbig Ae/Be disk surface. Further improvement to the accuracy of the observations of the water ice absorption depth is needed to constrain the disk models.

  16. 4D light-field sensing system for people counting

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Zhang, Chi; Wang, Yunlong; Sun, Zhenan

    2016-03-01

    Counting the number of people is still an important task in public-security applications, and a few methods based on video surveillance have been proposed in recent years. In this paper, we design a novel optical sensing system that directly acquires the depth map of the scene from one light-field camera. The light-field sensing system can count the number of people crossing a passageway, recording the direction and intensity of rays in a single snapshot without any auxiliary lighting. Depth maps are extracted from the raw light-ray sensing data. Our smart sensing system is equipped with a passive imaging sensor, which naturally discerns the depth difference between the head and shoulders of each person. A human model is then built. By detecting this human model in light-field images, the number of people passing the scene can be counted rapidly. We verify the feasibility and accuracy of the sensing system by capturing real-world scenes with single and multiple people passing under natural illumination.
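    The head-and-shoulders counting idea can be illustrated with a toy depth-map segmentation: threshold pixels closer to the overhead camera than shoulder height and count connected blobs as heads. This is a hypothetical sketch of the principle, not the authors' pipeline; the threshold and 4-connectivity flood fill are assumptions.

```python
import numpy as np
from collections import deque

def count_people(depth_map, head_thresh):
    """Count connected regions of the depth map closer to the camera
    than head_thresh; with an overhead camera, heads appear as
    compact regions of small depth."""
    mask = depth_map < head_thresh
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        current += 1                      # new blob found
        q = deque([(sy, sx)])
        labels[sy, sx] = current
        while q:                          # 4-connected flood fill
            y, x = q.popleft()
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return current
```

    A real system would additionally reject blobs whose size or shape does not match a head model.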

  17. Riverine Bathymetry Imaging with Indirect Observations

    NASA Astrophysics Data System (ADS)

    Farthing, M.; Lee, J. H.; Ghorbanidehno, H.; Hesser, T.; Darve, E. F.; Kitanidis, P. K.

    2017-12-01

    Bathymetry, i.e., depth, imaging in a river is of crucial importance for shipping operations and flood management. With advancements in sensor technology and computational resources, various types of indirect measurements can be used to estimate high-resolution riverbed topography. In particular, the use of surface velocity measurements has recently been actively investigated, since they are easy to acquire at low cost in all river conditions and surface velocities are sensitive to the river depth. In this work, we image riverbed topography using depth-averaged quasi-steady velocity observations related to the topography through the 2D shallow water equations (SWE). The principal component geostatistical approach (PCGA), a fast and scalable variational inverse modeling method powered by a low-rank representation of the covariance matrix structure, is presented and applied to two "twin" riverine bathymetry identification problems. To compare the efficiency and effectiveness of the proposed method, an ensemble-based approach is also applied to the test problems. Results demonstrate that PCGA is superior to the ensemble-based approach in terms of computational effort and accuracy. In particular, the results obtained from PCGA capture small-scale bathymetry features irrespective of the initial guess through the successive linearization of the forward model. Analysis of the direct survey data of the riverine bathymetry used in one of the test problems shows an efficient, parsimonious choice of the solution basis in PCGA, so that the number of numerical model runs needed to achieve the inversion results is close to the minimum number that reconstructs the underlying bathymetry.
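    The low-rank covariance representation at the heart of PCGA can be sketched as a truncated eigendecomposition of the prior covariance, Q ≈ Z Zᵀ with Z built from the k leading eigenpairs. This is a generic illustration of the idea, assuming a dense symmetric Q; PCGA itself works matrix-free on much larger problems.

```python
import numpy as np

def low_rank_covariance(Q, k):
    # Approximate a symmetric covariance Q by its k leading eigenpairs:
    # Q ~ Z @ Z.T with Z = V_k * sqrt(L_k). This truncated basis is the
    # "principal component" ingredient that makes the inversion scalable.
    vals, vecs = np.linalg.eigh(Q)            # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]          # pick the k largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

    In the inverse problem, all covariance products are then replaced by products with the thin factor Z, reducing both storage and the number of forward-model runs.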

  18. Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.

    PubMed

    Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos

    2017-11-01

    The study of phenomes, or phenomics, has been a central part of biology. The field of automatic image-based phenotype acquisition technologies has advanced considerably in recent years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing, its major issues, and the algorithms that are being used or are emerging as useful for extracting data from images automatically. © The Author 2017. Published by Oxford University Press.

  19. Pose-Invariant Face Recognition via RGB-D Images.

    PubMed

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measurement via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on the Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information improves the performance of face recognition with large pose variations and under even more challenging conditions.

  20. Potential of coded excitation in medical ultrasound imaging.

    PubMed

    Misaridis, T X; Gammelmark, K; Jørgensen, C H; Lindberg, N; Thomsen, A H; Pedersen, M H; Jensen, J A

    2000-03-01

    Improvement in signal-to-noise ratio (SNR) and/or penetration depth can be achieved in medical ultrasound by using long coded waveforms, in a similar manner as in radars or sonars. However, the time-bandwidth product (TB) improvement, and thereby SNR improvement is considerably lower in medical ultrasound, due to the lower available bandwidth. There is still space for about 20 dB improvement in the SNR, which will yield a penetration depth up to 20 cm at 5 MHz [M. O'Donnell, IEEE Trans. Ultrason. Ferroelectr. Freq. Contr., 39(3) (1992) 341]. The limited TB additionally yields unacceptably high range sidelobes. However, the frequency weighting from the ultrasonic transducer's bandwidth, although suboptimal, can be beneficial in sidelobe reduction. The purpose of this study is an experimental evaluation of the above considerations in a coded excitation ultrasound system. A coded excitation system based on a modified commercial scanner is presented. A predistorted FM signal is proposed in order to keep the resulting range sidelobes at acceptably low levels. The effect of the transducer is taken into account in the design of the compression filter. Intensity levels have been considered and simulations on the expected improvement in SNR are also presented. Images of a wire phantom and clinical images have been taken with the coded system. The images show a significant improvement in penetration depth and they preserve both axial resolution and contrast.
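    The coded-excitation idea above, a long FM chirp compressed by a matched filter, can be sketched with numpy. The sampling rate, chirp band around a 5 MHz transducer, and duration below are illustrative assumptions, not the scanner settings from the study; the compression gain scales with the time-bandwidth product TB.

```python
import numpy as np

fs = 100e6           # sampling rate (assumed)
T = 20e-6            # chirp duration
f0, f1 = 3e6, 7e6    # sweep band around a 5 MHz centre frequency
t = np.arange(int(T * fs)) / fs
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t ** 2))

# Simulated echo: the chirp buried in a longer trace.
echo = np.concatenate([np.zeros(500), chirp, np.zeros(500)])

# Pulse compression = matched filtering (correlation with the code);
# the long waveform collapses to a short, high-amplitude peak.
compressed = np.correlate(echo, chirp, mode='same')

# Potential SNR gain ~ 10*log10(TB); here TB = 20 µs * 4 MHz = 80 (~19 dB).
TB = T * (f1 - f0)
```

    In practice the transducer's band limits TB, and (as the abstract notes) a predistorted FM design plus transducer-aware compression filtering is needed to keep range sidelobes low.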

  1. Penetration depth of photons in biological tissues from hyperspectral imaging in shortwave infrared in transmission and reflection geometries

    PubMed Central

    Zhang, Hairong; Salo, Daniel; Kim, David M.; Komarov, Sergey; Tai, Yuan-Chuan; Berezin, Mikhail Y.

    2016-01-01

    Abstract. Measurement of photon penetration in biological tissues is a central theme in optical imaging. A great number of endogenous tissue factors such as absorption, scattering, and anisotropy affect the path of photons in tissue, making it difficult to predict the penetration depth at different wavelengths. Traditional studies evaluating photon penetration at different wavelengths are focused on tissue spectroscopy that does not take into account the heterogeneity within the sample. This is especially critical in shortwave infrared where the individual vibration-based absorption properties of the tissue molecules are affected by nearby tissue components. We have explored the depth penetration in biological tissues from 900 to 1650 nm using Monte–Carlo simulation and a hyperspectral imaging system with Michelson spatial contrast as a metric of light penetration. Chromatic aberration-free hyperspectral images in transmission and reflection geometries were collected with a spectral resolution of 5.27 nm and a total acquisition time of 3 min. Relatively short recording time minimized artifacts from sample drying. Results from both transmission and reflection geometries consistently revealed that the highest spatial contrast in the wavelength range for deep tissue lies within 1300 to 1375 nm; however, in heavily pigmented tissue such as the liver, the range 1550 to 1600 nm is also prominent. PMID:27930773

  2. A deep learning approach for pose estimation from volumetric OCT data.

    PubMed

    Gessert, Nils; Schlüter, Matthias; Schlaefer, Alexander

    2018-05-01

    Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images' 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label's resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Superficial ultrasound shear wave speed measurements in soft and hard elasticity phantoms: repeatability and reproducibility using two ultrasound systems.

    PubMed

    Dillman, Jonathan R; Chen, Shigao; Davenport, Matthew S; Zhao, Heng; Urban, Matthew W; Song, Pengfei; Watcharotone, Kuanwong; Carson, Paul L

    2015-03-01

    There is a paucity of data available regarding the repeatability and reproducibility of superficial shear wave speed (SWS) measurements at imaging depths relevant to the pediatric population. To assess the repeatability and reproducibility of superficial shear wave speed measurements acquired from elasticity phantoms at varying imaging depths using three imaging methods, two US systems and multiple operators. Soft and hard elasticity phantoms manufactured by Computerized Imaging Reference Systems Inc. (Norfolk, VA) were utilized for our investigation. Institution No. 1 used an Acuson S3000 US system (Siemens Medical Solutions USA, Malvern, PA) and three shear wave imaging method/transducer combinations, while institution No. 2 used an Aixplorer US system (SuperSonic Imagine, Bothell, WA) and two different transducers. Ten stiffness measurements were acquired from each phantom at three depths (1.0 cm, 2.5 cm and 4.0 cm) by four operators at each institution. Student's t-test was used to compare SWS measurements between imaging techniques, while SWS measurement agreement was assessed with two-way random effects single-measure intra-class correlation coefficients (ICCs) and coefficients of variation. Mixed model regression analysis determined the effect of predictor variables on SWS measurements. For the soft phantom, the average of mean SWS measurements across the various imaging methods and depths was 0.84 ± 0.04 m/s (mean ± standard deviation) for the Acuson S3000 system and 0.90 ± 0.02 m/s for the Aixplorer system (P = 0.003). For the hard phantom, the average of mean SWS measurements across the various imaging methods and depths was 2.14 ± 0.08 m/s for the Acuson S3000 system and 2.07 ± 0.03 m/s Aixplorer system (P > 0.05). The coefficients of variation were low (0.5-6.8%), and interoperator agreement was near-perfect (ICCs ≥ 0.99). Shear wave imaging method and imaging depth significantly affected measured SWS (P < 0.0001). 
Superficial shear wave speed measurements in elasticity phantoms demonstrate minimal variability across imaging method/transducer combinations, imaging depths and operators. The exact clinical significance of this variation is uncertain and may change according to organ and specific disease state.
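    The repeatability statistics reported above (coefficients of variation of 0.5-6.8%) are straightforward to compute; a minimal sketch with hypothetical repeated SWS readings (the values below are illustrative, not the study's data):

```python
import numpy as np

def coefficient_of_variation(measurements):
    # CV (%) = 100 * sample standard deviation / mean,
    # the spread metric used to summarize repeatability.
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Hypothetical repeated SWS readings (m/s) at one phantom/depth:
sws = [0.84, 0.86, 0.83, 0.85, 0.84]
cv = coefficient_of_variation(sws)
```

    Inter-operator agreement in the study was additionally quantified with intra-class correlation coefficients, which require the full operator-by-measurement table rather than a single series.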

  4. Superficial Ultrasound Shear Wave Speed Measurements in Soft and Hard Elasticity Phantoms: Repeatability and Reproducibility Using Two Different Ultrasound Systems

    PubMed Central

    Dillman, Jonathan R.; Chen, Shigao; Davenport, Matthew S.; Zhao, Heng; Urban, Matthew W.; Song, Pengfei; Watcharotone, Kuanwong; Carson, Paul L.

    2014-01-01

    Background There is a paucity of data available regarding the repeatability and reproducibility of superficial shear wave speed (SWS) measurements at imaging depths relevant to the pediatric population. Purpose To assess the repeatability and reproducibility of superficial shear wave speed (SWS) measurements acquired from elasticity phantoms at varying imaging depths using three different imaging methods, two different ultrasound systems, and multiple operators. Methods and Materials Soft and hard elasticity phantoms manufactured by Computerized Imaging Reference Systems, Inc. (Norfolk, VA) were utilized for our investigation. Institution #1 used an Acuson S3000 ultrasound system (Siemens Medical Solutions USA, Inc.) and three different shear wave imaging method/transducer combinations, while institution #2 used an Aixplorer ultrasound system (Supersonic Imagine) and two different transducers. Ten stiffness measurements were acquired from each phantom at three depths (1.0, 2.5, and 4.0 cm) by four operators at each institution. Student’s t-test was used to compare SWS measurements between imaging techniques, while SWS measurement agreement was assessed with two-way random effects single measure intra-class correlation coefficients and coefficients of variation. Mixed model regression analysis determined the effect of predictor variables on SWS measurements. Results For the soft phantom, the average of mean SWS measurements across the various imaging methods and depths was 0.84 ± 0.04 m/s (mean ± standard deviation) for the Acuson S3000 system and 0.90 ± 0.02 m/s for the Aixplorer system (p=0.003). For the hard phantom, the average of mean SWS measurements across the various imaging methods and depths was 2.14 ± 0.08 m/s for the Acuson S3000 system and 2.07 ± 0.03 m/s Aixplorer system (p>0.05). The coefficients of variation were low (0.5–6.8%), and inter-operator agreement was near-perfect (ICCs ≥0.99). 
Shear wave imaging method and imaging depth significantly affected measured SWS (p<0.0001). Conclusions Superficial SWS measurements in elasticity phantoms demonstrate minimal variability across imaging method/transducer combinations, imaging depths, and between operators. The exact clinical significance of this variability is uncertain and may vary by organ and specific disease state. PMID:25249389

  5. Penetration depth measurement of near-infrared hyperspectral imaging light for milk powder

    USDA-ARS?s Scientific Manuscript database

    The increasingly common application of near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying penetration depth of NIR hyperspectral imaging ligh...

  6. Digital focusing of OCT images based on scalar diffraction theory and information entropy.

    PubMed

    Liu, Guozhong; Zhi, Zhongwei; Wang, Ruikang K

    2012-11-01

    This paper describes a digital method that is capable of automatically focusing optical coherence tomography (OCT) en face images without prior knowledge of the point spread function of the imaging system. The method utilizes a scalar diffraction model to simulate wave propagation from out-of-focus scatter to the focal plane, from which the propagation distance between the out-of-focus plane and the focal plane is determined automatically via an image-definition-evaluation criterion based on information entropy theory. By use of the proposed approach, we demonstrate that the lateral resolution close to that at the focal plane can be recovered from the imaging planes outside the depth of field region with minimal loss of resolution. Fresh onion tissues and mouse fat tissues are used in the experiments to show the performance of the proposed method.
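    The two ingredients of the method, scalar-diffraction propagation of the out-of-focus field and an entropy criterion for image definition, can be sketched with numpy. This is a generic angular-spectrum implementation under assumed sampling parameters, not the authors' code; in use, one scans the propagation distance z and keeps the plane minimizing the entropy.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z using the scalar-diffraction
    angular-spectrum method (square grid with pixel spacing dx)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def image_entropy(intensity):
    # Shannon entropy of the normalized intensity; a sharper (in-focus)
    # image concentrates energy and therefore scores lower entropy.
    p = intensity.ravel() / intensity.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

    Propagation with the full transfer function is reversible (propagating by z and then -z returns the original field), which is what allows refocusing without knowing the system's point spread function.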

  7. Lensless transport-of-intensity phase microscopy and tomography with a color LED matrix

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Sun, Jiasong; Zhang, Jialin; Hu, Yan; Chen, Qian

    2015-07-01

    We demonstrate lens-less quantitative phase microscopy and diffraction tomography based on a compact on-chip platform, using only a CMOS image sensor and a programmable color LED array. Based on multi-wavelength transport-of-intensity phase retrieval and multi-angle illumination diffraction tomography, this platform offers high-quality, depth-resolved images with a lateral resolution of ˜3.7 μm and an axial resolution of ˜5 μm over a large imaging FOV of 24 mm². The resolution and FOV can be further improved straightforwardly by using a larger image sensor with smaller pixels. This compact, low-cost, robust, portable platform with decent imaging performance may offer a cost-effective tool for telemedicine, or for reducing health care costs for point-of-care diagnostics in resource-limited environments.

  8. Terahertz Imaging of Three-Dimensional Dehydrated Breast Cancer Tumors

    NASA Astrophysics Data System (ADS)

    Bowman, Tyler; Wu, Yuhao; Gauch, John; Campbell, Lucas K.; El-Shenawee, Magda

    2017-06-01

    This work presents the application of terahertz imaging to three-dimensional formalin-fixed, paraffin-embedded human breast cancer tumors. The results demonstrate the capability of terahertz for in-depth scanning to produce cross-section images without the need to slice the tumor. Samples of tumors excised from women diagnosed with infiltrating ductal carcinoma and lobular carcinoma are investigated using a pulsed terahertz time-domain imaging system. A time-of-flight estimation is used to obtain vertical and horizontal cross-section images of tumor tissues embedded in a paraffin block. The terahertz images obtained by electronically scanning the tumor in depth show strong agreement with histopathology images. The detection of cancer tissue inside the block is found to be accurate to depths over 1 mm. Image processing techniques are applied to improve contrast and automate the obtained terahertz images. In particular, unsharp masking and edge detection methods are found to be most effective for three-dimensional block imaging.
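    The time-of-flight depth estimate underlying the cross-section images is a one-line relation: an echo delayed by Δt relative to the surface reflection lies at depth c·Δt/(2n). A minimal sketch (the paraffin refractive index below is an assumed illustrative value):

```python
# Time-of-flight depth estimation for pulsed THz imaging:
# a reflection delayed by dt relative to the surface echo sits at
# depth = c * dt / (2 * n), n being the medium's refractive index
# (factor 2 because the pulse travels down and back).
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(dt_seconds, n_medium):
    return C * dt_seconds / (2.0 * n_medium)

# e.g. a 10 ps delay in a medium with n ~ 1.5 (assumed) -> ~1 mm depth
depth_mm = tof_depth(10e-12, 1.5) * 1e3
```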

  9. Automatic recognition of lactating sow behaviors through depth image processing

    USDA-ARS?s Scientific Manuscript database

    Manual observation and classification of animal behaviors is laborious, time-consuming, and of limited ability to process large amount of data. A computer vision-based system was developed that automatically recognizes sow behaviors (lying, sitting, standing, kneeling, feeding, drinking, and shiftin...

  10. An acceptance test for chip seal projects based on image analysis.

    DOT National Transportation Integrated Search

    2016-05-01

    Chip seal is one of the most popular preventive maintenance techniques performed by many DOTs, county road departments and cities. One of the most important parameters affecting performance of a chip seal is the percent aggregate embedment depth into...

  11. Quantifying the benefits of improved rolling of chip seals : final report, June 2008.

    DOT National Transportation Integrated Search

    2008-06-01

    This report presents an improvement in the rolling protocol for chip seals based on an evaluation of aggregate : retention performance and aggregate embedment depth. The flip-over test (FOT), Vialit test, modified sand circle : test, digital image pr...

  12. Investigating a continuous shear strain function for depth-dependent properties of native and tissue engineering cartilage using pixel-size data.

    PubMed

    Motavalli, Mostafa; Whitney, G Adam; Dennis, James E; Mansour, Joseph M

    2013-12-01

    A previously developed novel imaging technique for determining the depth-dependent properties of cartilage in simple shear is implemented. Shear displacement is determined from images of deformed lines photobleached on a sample, and shear strain is obtained from the derivative of the displacement. We investigated the feasibility of an alternative, systematic approach to numerical differentiation for computing the shear strain, based on fitting a continuous function to the shear displacement. Three models for a continuous shear displacement function are evaluated: polynomials, cubic splines, and non-parametric locally weighted scatterplot smoothing curves. Four independent approaches are then applied to identify the best-fit model and the accuracy of the first derivative. The first is based on the Akaike Information Criterion and the Bayesian Information Criterion. The second is based on a method developed to smooth and differentiate digitized data from human motion. The third is based on photobleaching a predefined circular area with a specific radius. Finally, we integrate the shear strain and compare it with the total shear deflection of the sample measured experimentally. Results show that 6th- and 7th-order polynomials are the best models for the shear displacement and its first derivative. In addition, failure of tissue-engineered cartilage, consistent with previous results, demonstrates the qualitative value of this imaging approach. © 2013 Elsevier Ltd. All rights reserved.
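    The polynomial-fit route and the integral consistency check can be sketched with numpy; the displacement profile below is synthetic, and the exact fitting choices are the paper's, not reproduced here.

```python
import numpy as np

# Depth positions z through the sample thickness and an illustrative
# (synthetic) shear-displacement profile u(z) -- not the paper's data.
z = np.linspace(0.0, 1.0, 50)
u = 0.3 * z ** 2 + 0.1 * z

coeffs = np.polyfit(z, u, 6)                 # continuous displacement model
strain = np.polyval(np.polyder(coeffs), z)   # shear strain = du/dz

# Consistency check from the paper: integrating the strain over depth
# should recover the total shear deflection of the sample.
total_deflection = np.sum((strain[1:] + strain[:-1]) / 2 * np.diff(z))
```

    Differentiating a fitted smooth function avoids the noise amplification of finite-difference derivatives on pixel-level displacement data.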

  13. Clear-cornea cataract surgery: pupil size and shape changes, along with anterior chamber volume and depth changes. A Scheimpflug imaging study.

    PubMed

    Kanellopoulos, Anastasios John; Asimellis, George

    2014-01-01

    To investigate, by high-precision digital analysis of data provided by Scheimpflug imaging, changes in pupil size and shape and anterior chamber (AC) parameters following cataract surgery. The study group (86 eyes, patient age 70.58±10.33 years) was subjected to cataract removal surgery with in-the-bag intraocular lens implantation (pseudophakic). A control group of 75 healthy eyes (patient age 51.14±16.27 years) was employed for comparison. Scheimpflug imaging (preoperatively and 3 months postoperatively) was employed to investigate central corneal thickness, AC depth, and AC volume. In addition, by digitally analyzing the black-and-white dotted line pupil edge marking in the Scheimpflug "large maps," the horizontal and vertical pupil diameters were individually measured and the pupil eccentricity was calculated. The correlations between AC depth and pupil shape parameters versus patient age, as well as the postoperative AC and pupil size and shape changes, were investigated. Compared to preoperative measurements, AC depth and AC volume of the pseudophakic eyes increased by 0.99±0.46 mm (39%; P<0.001) and 43.57±24.59 mm(3) (36%; P<0.001), respectively. Pupil size analysis showed that the horizontal pupil diameter was reduced by -0.27±0.22 mm (-9.7%; P=0.001) and the vertical pupil diameter was reduced by -0.32±0.24 mm (-11%; P<0.001). Pupil eccentricity was reduced by -39.56%; P<0.001. Cataract extraction surgery appears to affect pupil size and shape, possibly in correlation to AC depth increase. This novel investigation based on digital analysis of Scheimpflug imaging data suggests that the cataract postoperative photopic pupil is reduced and more circular. These changes appear to be more significant with increasing patient age.
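    Given the horizontal and vertical pupil diameters measured from the Scheimpflug maps, the eccentricity reported above follows from the standard ellipse relation; this is a sketch under that assumption (the formula and the example diameters are illustrative, not taken from the paper):

```python
import math

def pupil_eccentricity(d_h, d_v):
    """Eccentricity of an elliptical pupil from its horizontal and
    vertical diameters: 0 for a perfect circle, approaching 1 as
    the pupil elongates (e = sqrt(1 - (minor/major)^2))."""
    a, b = max(d_h, d_v), min(d_h, d_v)
    return math.sqrt(1.0 - (b / a) ** 2)

pre = pupil_eccentricity(2.78, 2.90)   # illustrative pre-op diameters (mm)
post = pupil_eccentricity(2.51, 2.58)  # post-op: smaller and rounder
```

    The study's finding that the postoperative pupil is both smaller and more circular corresponds to both diameters decreasing and the eccentricity dropping.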

  14. Monocular Depth Perception and Robotic Grasping of Novel Objects

    DTIC Science & Technology

    2009-06-01

    resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we...supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show

  15. Graphene-based ultrasonic detector for photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Song, Wei; Zhang, Chonglei; Fang, Hui; Min, Changjun; Yuan, Xiaocong

    2018-03-01

    Taking advantage of optical-absorption imaging contrast, photoacoustic imaging technology is able to map the volumetric distribution of the optical absorption properties within biological tissues. Unfortunately, the traditional piezoceramic transducers used in most photoacoustic imaging setups have inadequate frequency response, resulting in both poor depth resolution and inaccurate quantification of the optical absorption information. Instead of a piezoelectric ultrasonic transducer, we develop a graphene-based optical sensor for detecting photoacoustic pressure. The refractive index in the coupling medium is modulated by the photoacoustic pressure perturbation, which varies the polarization-sensitive optical absorption of the graphene. As a result, photoacoustic detection is realized by recording the reflected intensity difference of polarized light. The graphene-based detector possesses an estimated noise-equivalent pressure (NEP) sensitivity of 550 Pa over a 20-MHz bandwidth, with a nearly linear pressure response from 11.0 kPa to 53.0 kPa. Further, a graphene-based photoacoustic microscope is built that non-invasively and label-free reveals the microvascular anatomy of mouse ears.

  16. Electromagnetic behavior of spatial terahertz wave modulators based on reconfigurable micromirror gratings in Littrow configuration.

    PubMed

    Kappa, Jan; Schmitt, Klemens M; Rahm, Marco

    2017-08-21

    Efficient, high speed spatial modulators with predictable performance are a key element in any coded aperture terahertz imaging system. For spectroscopy, the modulators must also provide a broad modulation frequency range. In this study, we numerically analyze the electromagnetic behavior of a dynamically reconfigurable spatial terahertz wave modulator based on a micromirror grating in Littrow configuration. We show that such a modulator can modulate terahertz radiation over a wide frequency range from 1.7 THz to beyond 3 THz at a modulation depth of more than 0.6. As a specific example, we numerically simulated coded aperture imaging of an object with binary transmissive properties and successfully reconstructed the image.

  17. Improvement of depth resolution on photoacoustic imaging using multiphoton absorption

    NASA Astrophysics Data System (ADS)

    Yamaoka, Yoshihisa; Fujiwara, Katsuji; Takamatsu, Tetsuro

    2007-07-01

    Commercial imaging systems, such as computed tomography and magnetic resonance imaging, are frequently used, powerful tools for observing structures deep within the human body. However, they cannot precisely visualize structures tens of micrometers in size because of limited spatial resolution. In this presentation, we propose photoacoustic imaging using a multiphoton absorption technique to generate ultrasonic waves as a means of improving depth resolution. Since multiphoton absorption occurs only at the focal point and the employed infrared pulses penetrate deep into living tissues, it enables us to extract characteristic features of structures embedded in the tissue. When nanosecond pulses from a 1064-nm Nd:YAG laser were focused on a Rhodamine B/chloroform solution (absorption peak: 540 nm), the peak intensity of the generated photoacoustic signal was proportional to the square of the input pulse energy. This result shows that photoacoustic signals can be induced by two-photon absorption of infrared nanosecond laser pulses and detected by a commercial low-frequency MHz transducer. Furthermore, to evaluate the depth resolution of multiphoton photoacoustic imaging, we investigated the dependence of the photoacoustic signal on depth position using a 1-mm-thick phantom in a water bath. We found that the depth resolution of two-photon photoacoustic imaging (1064 nm) is greater than that of one-photon photoacoustic imaging (532 nm). We conclude that evolving multiphoton photoacoustic imaging technology makes it feasible to investigate biomedical phenomena in deep layers of living tissue.
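    The experimental signature cited above, signal proportional to the square of pulse energy, is usually verified with a log-log fit whose slope gives the excitation order. A minimal sketch with noise-free illustrative data (the energies and proportionality constant are assumptions):

```python
import numpy as np

# Two-photon signature: photoacoustic peak intensity S ~ E^2, so the
# slope of log(S) vs log(E) recovers the excitation order.
energy = np.array([1.0, 2.0, 4.0, 8.0])   # pulse energies, arbitrary units
signal = 0.05 * energy ** 2               # illustrative quadratic response

exponent = np.polyfit(np.log(energy), np.log(signal), 1)[0]
# exponent ~ 2 for a two-photon process, ~ 1 for one-photon
```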

  18. Single exposure three-dimensional imaging of dusty plasma clusters.

    PubMed

    Hartmann, Peter; Donkó, István; Donkó, Zoltán

    2013-02-01

    We have worked out the details of a single camera, single exposure method to perform three-dimensional imaging of a finite particle cluster. The procedure is based on the plenoptic imaging principle and utilizes a commercial Lytro light field still camera. We demonstrate the capabilities of our technique on a single layer particle cluster in a dusty plasma, where the camera is aligned and inclined at a small angle to the particle layer. The reconstruction of the third coordinate (depth) is found to be accurate and even shadowing particles can be identified.

  19. Quantitative imaging of tumor vasculature using multispectral optoacoustic tomography (MSOT)

    NASA Astrophysics Data System (ADS)

    Tomaszewski, Michal R.; Quiros-Gonzalez, Isabel; Joseph, James; Bohndiek, Sarah E.

    2017-03-01

    The ability to evaluate tumor oxygenation in the clinic could indicate prognosis and enable treatment monitoring, since oxygen deficient cancer cells are often more resistant to chemotherapy and radiotherapy. MultiSpectral Optoacoustic Tomography (MSOT) is a hybrid technique combining the high contrast of optical imaging with spatial resolution and penetration depth similar to ultrasound. We hypothesized that MSOT could reveal both tumor vascular density and function based on modulation of blood oxygenation. We performed MSOT on nude mice (n=8) bearing subcutaneous xenograft PC3 tumors using an inVision 256 (iThera Medical). The mice were maintained under inhalation anesthesia during imaging and respired oxygen content was modified from 21% to 100% and back. After imaging, Hoechst 33348 was injected to indicate vascular perfusion and permeability. Tumors were then extracted for histopathological analysis and fluorescence microscopy. The acquired data was analyzed to extract a bulk measurement of blood oxygenation (SO2MSOT) from the whole tumor using different approaches. The tumors were also automatically segmented into 5 regions to investigate the effect of depth on SO2MSOT. Baseline SO2MSOT values at 21% and 100% oxygen breathing showed no relationship with ex vivo measures of vascular density or function, while the change in SO2MSOT showed a strong negative correlation to Hoechst intensity (r=- 0.92, p=0.0016). Tumor voxels responding to oxygen challenge were spatially heterogeneous. We observed a significant drop in SO2 MSOT value with tumor depth following a switch of respiratory gas from air to oxygen (0.323+/-0.017 vs. 0.11+/-0.05, p=0.009 between 0 and 1.5mm depth), but no such effect for air breathing (0.265+/-0.013 vs. 0.19+/-0.04, p=0.14 between 0 and 1.5mm depth). 
Our results indicate that in subcutaneous prostate tumors, baseline SO2MSOT levels do not correlate to tumor vascular density or function while the magnitude of the response to oxygen challenge provides insight into these parameters. Future work will include validation using in vivo imaging and protocol optimization for clinical application.

  20. Focus measure method based on the modulus of the gradient of the color planes for digital microscopy

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel

    2018-02-01

The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information into a grayscale image. This digital technique is used in two applications: (a) focus measurement during the autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge-detection technique and is implemented in over 15 focus metrics that are typically used in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. A further advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a no-reference image quality metric. The proposed fusion method yields a high-quality image even under faulty illumination during image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
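The MGC transform described above is straightforward to reproduce: sum the squared partial derivatives of every color plane and take the square root, yielding a single grayscale edge map whose energy can serve as a focus score. A minimal sketch (the function names are ours, not the paper's):

```python
import numpy as np

def mgc(image):
    """Modulus of the gradient of the color planes (MGC).

    Combines the per-channel gradient magnitudes of a multichannel
    image into one grayscale edge map.
    """
    image = np.asarray(image, dtype=float)
    # Accumulate squared partial derivatives over all color planes.
    sq_sum = np.zeros(image.shape[:2])
    for c in range(image.shape[2]):
        gy, gx = np.gradient(image[:, :, c])
        sq_sum += gx ** 2 + gy ** 2
    return np.sqrt(sq_sum)

def focus_measure(image):
    """Scalar focus score: mean MGC energy (higher = sharper)."""
    return float(np.mean(mgc(image) ** 2))
```

During autofocusing, the frame that maximizes `focus_measure` across the axial scan would be selected as the in-focus image.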

  1. Monoplane 3D-2D registration of cerebral angiograms based on multi-objective stratified optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, T.; Špiclin, Ž.; Pernuš, F.; Unal, G.

    2017-12-01

Registration of 3D pre-interventional to 2D intra-interventional medical images has an increasingly important role in surgical planning, navigation and treatment, because it enables the physician to co-locate depth information given by pre-interventional 3D images with the live information in intra-interventional 2D images such as x-ray. Most tasks during image-guided interventions are carried out under monoplane x-ray, which poses a highly ill-posed problem for state-of-the-art 3D to 2D registration methods. To address the problem of rigid 3D-2D monoplane registration we propose a novel multi-objective stratified parameter optimization, wherein a small set of high-magnitude intensity gradients are matched between the 3D and 2D images. The stratified parameter optimization matches rotation templates to depth templates, the former sampled from the projected 3D gradients and the latter from the 2D image gradients, so as to recover the 3D rigid-body rotations and out-of-plane translation. The objective for matching was the gradient magnitude correlation coefficient, which is invariant to in-plane translation. The in-plane translations are then found by locating the maximum of the gradient phase correlation between the best-matching pair of rotation and depth templates. On twenty pairs of 3D and 2D images of ten patients undergoing cerebral endovascular image-guided intervention, 3D to monoplane 2D registration experiments were set up with a rather high range of initial mean target registration errors from 0 to 100 mm. The proposed method effectively reduced the registration error to below 2 mm, which was further refined by a fast iterative method and resulted in a high final registration accuracy (0.40 mm) and a high success rate (>96%). Taking into account a fast execution time below 10 s, the observed performance of the proposed method shows high potential for application in clinical image-guidance systems.
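The in-plane translation step relies on locating the peak of a phase correlation surface. A generic sketch of phase correlation between two images follows; the paper applies it to gradient images of the best-matching template pair, which is not reproduced here:

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Integer (row, col) shift d such that moving ~= np.roll(ref, d).

    Generic phase correlation: normalize the cross-power spectrum to
    keep only phase, then locate the peak of its inverse FFT.
    """
    cross = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12          # discard magnitude, keep phase
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the half-range correspond to negative shifts.
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
```

Because the peak location depends only on the relative phase ramp, the estimate is robust to global intensity differences between the projected 3D template and the 2D image.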

  2. Multidepth imaging by chromatic dispersion confocal microscopy

    NASA Astrophysics Data System (ADS)

    Olsovsky, Cory A.; Shelton, Ryan L.; Saldua, Meagan A.; Carrasco-Zevallos, Oscar; Applegate, Brian E.; Maitland, Kristen C.

    2012-03-01

Confocal microscopy has shown potential as an imaging technique to detect precancer. Imaging cellular features throughout the depth of epithelial tissue may provide useful information for diagnosis. However, the current in vivo axial scanning techniques for confocal microscopy are cumbersome, time-consuming, and restrictive when attempting to reconstruct volumetric images acquired in breathing patients. Chromatic dispersion confocal microscopy (CDCM) exploits severe longitudinal chromatic aberration in the system to axially disperse light from a broadband source and, ultimately, spectrally encode high-resolution images along the depth of the object. Hyperchromat lenses are designed to have severe and linear longitudinal chromatic aberration, but have not yet been used in confocal microscopy. We use a hyperchromat lens in a stage-scanning confocal microscope to demonstrate the capability to simultaneously capture information at multiple depths without mechanical scanning. A photonic crystal fiber pumped with an 830 nm wavelength Ti:Sapphire laser was used as a supercontinuum source, and a spectrometer was used as the detector. The chromatic aberration and magnification in the system give a focal shift of 140 μm after the objective lens and an axial resolution of 5.2-7.6 μm over the wavelength range from 585 nm to 830 nm. A 400 × 400 × 140 μm³ volume of pig cheek epithelium was imaged in a single X-Y scan. Nuclei can be seen at several depths within the epithelium. The capability of this technique to achieve simultaneous high-resolution confocal imaging at multiple depths may reduce imaging time and motion artifacts and enable volumetric reconstruction of in vivo confocal images of the epithelium.
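Because the hyperchromat's longitudinal chromatic aberration is designed to be linear, each detected wavelength maps to an imaging depth by simple interpolation. A sketch using the numbers quoted in the abstract (585-830 nm spanning a 140 μm focal shift); exact linearity is an assumption that the lens design only approximates:

```python
def wavelength_to_depth_um(wavelength_nm, lam_min=585.0, lam_max=830.0,
                           focal_shift_um=140.0):
    """Map a detected wavelength (nm) to a relative focal depth (um),
    assuming a linear chromatic focal shift across the band."""
    frac = (wavelength_nm - lam_min) / (lam_max - lam_min)
    return frac * focal_shift_um
```

With this mapping, each spectrometer pixel becomes a depth channel, which is how a single X-Y scan yields the full volume.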

  3. Experimental study on the sensitive depth of backwards detected light in turbid media.

    PubMed

    Zhang, Yunyao; Huang, Liqing; Zhang, Ning; Tian, Heng; Zhu, Jingping

    2018-05-28

In the recent past, optical spectroscopy and imaging methods for biomedical diagnosis and target enhancement have been widely researched. The challenge in improving the performance of these methods is to know the sensitive depth of the backwards-detected light well. Previous research mainly employed Monte Carlo simulations to statistically describe the light-sensitive depth. An experimental method for investigating the sensitive depth was developed and is presented here. An absorption plate was employed to remove all the light that may have travelled deeper than the plate, leaving only the light which cannot reach the plate. By measuring the received backwards light intensity and the depth between the probe and the plate, the light intensity distribution along the depth dimension can be obtained. The depth with the maximum light intensity was recorded as the sensitive depth. The experimental results showed that the maximum light intensity was nearly the same over a short depth range. It could be deduced that the sensitive depth is a range, rather than a single depth. This sensitive depth range, as well as its central depth, increased consistently with increasing source-detection distance. Relationships between sensitive depth and optical properties were also investigated. The reduced scattering coefficient was found to affect the central sensitive depth and the extent of the sensitive depth range more than the absorption coefficient, so the two cannot simply be summed into a single attenuation coefficient to describe the sensitive depth. This study provides an efficient method for the investigation of sensitive depth. It may facilitate the development of spectroscopy and imaging techniques for biomedical diagnosis and underwater imaging.
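The plate measurement is a cumulative one: the signal recorded with the plate at depth z contains only light that never travelled deeper than z, so differencing successive measurements isolates each depth layer's contribution. A sketch of that reduction (names and interface are ours):

```python
import numpy as np

def sensitive_depth(plate_depths, intensities):
    """Estimate per-depth contributions of backscattered light from
    measurements taken with an absorbing plate at increasing depths.

    intensities[i] is the signal detected with the plate at
    plate_depths[i]; it accumulates all light that never travels
    deeper than the plate, so a numerical derivative recovers the
    contribution of each depth layer.  Returns (layer_depths,
    contributions, depth_of_maximum).
    """
    plate_depths = np.asarray(plate_depths, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    contribution = np.diff(intensities) / np.diff(plate_depths)
    mid_depths = 0.5 * (plate_depths[1:] + plate_depths[:-1])
    return mid_depths, contribution, mid_depths[np.argmax(contribution)]
```

On synthetic data with a known per-depth profile, the peak of the differenced signal recovers the depth of maximum contribution, mirroring how the paper extracts the sensitive depth from plate measurements.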

  4. UAV, DGPS, and Laser Transit Mapping of Microbial Mat Ecosystems on Little Ambergris Cay, B.W.I.

    NASA Astrophysics Data System (ADS)

    Stein, N.; Quinn, D. P.; Grotzinger, J. P.; Fischer, W. W.; Knoll, A. H.; Cantine, M.; Gomes, M. L.; Grotzinger, H. M.; Lingappa, U.; Metcalfe, K.; O'Reilly, S. S.; Orzechowski, E. A.; Riedman, L. A.; Strauss, J. V.; Trower, L.

    2016-12-01

    Little Ambergris Cay is a 6 km long, 1.6 km wide uninhabited island on the Caicos platform in the Turks and Caicos. Little Ambergris provides an analog for the study of microbial mat development in the sedimentary record. Recent field mapping during July of 2016 used UAV- and satellite-based images, differential GPS (DGPS), and total station theodolite (TST) measurements to characterize sedimentology and biofacies across the entirety of Little Ambergris Cay. Nine facies were identified in-situ during DGPS island transects including oolitic grainstone bedrock, sand flats, cutbank and mat-filled channels, hardground-lined bays with EPS-rich mat particles, mangroves, EPS mats, polygonal mats, and mats with blistered surface texture. These facies were mapped onto a 15 cm/pixel visible light orthomosaic of the island generated from more than 1500 nadir images taken by a UAV at 350 m standoff distance. A corresponding stereogrammetric digital elevation map was generated from drone images and 910 DGPS measurements acquired during several island transects. More than 1000 TST measurements provide additional facies elevation constraints, control points for satellite-based water depth calculations, and means to cross-calibrate and reconstruct the topographic profile of bedrock exposed at the beach. Additionally, the thickness of the underlying Holocene sediment fill was estimated over several island transects using a depth probe. Sub-cm resolution drone-based orthophotos of microbial mats were used to quantify polygonal mat size and textures. The mapping results highlight that sedimentary and bio-facies (including mat morphology and fabrics) correlate strongly with elevation. Notably, mat morphology was observed to be highly sensitive to cm-scale variations in topography and water depth. The productivity metric NDVI was computed for mat and vegetation facies using nadir images from a UAV-mounted two-band red-NIR camera. 
In combination with in situ facies mapping, these measurements provided ground truth for reduction of multispectral Landsat and Worldview-2 satellite images to evaluate mat distribution and diversity across a range of spatial and spectral facies variations.

  5. 3D imaging of cleared human skin biopsies using light-sheet microscopy: A new way to visualize in-depth skin structure.

    PubMed

    Abadie, S; Jardet, C; Colombelli, J; Chaput, B; David, A; Grolleau, J-L; Bedos, P; Lobjois, V; Descargues, P; Rouquette, J

    2018-05-01

Human skin is composed of the superimposition of tissue layers of various thicknesses and components. Histological staining of skin sections is the benchmark approach to analyse the organization and integrity of human skin biopsies; however, this approach does not allow 3D tissue visualization. Alternatively, confocal or two-photon microscopy is an effective approach to perform fluorescence-based 3D imaging. However, owing to light scattering, these methods display limited light penetration in depth. The objectives of this study were therefore to combine optical clearing and light-sheet fluorescence microscopy (LSFM) to perform in-depth optical sectioning of 5 mm-thick human skin biopsies and generate 3D images of entire human skin biopsies. A benzyl alcohol and benzyl benzoate solution was used to successfully optically clear entire formalin-fixed human skin biopsies, making them transparent. In-depth optical sectioning was performed with LSFM on the basis of tissue-autofluorescence observations. 3D image analysis of the optical sections generated with LSFM was performed using the Amira® software. This new approach allowed us to observe in situ the different layers and compartments of human skin, such as the stratum corneum, the dermis and the epidermal appendages. With this approach, we easily performed 3D reconstruction to visualise an entire human skin biopsy. Finally, we demonstrated that this method is useful to visualise and quantify histological anomalies, such as epidermal hyperplasia. The combination of optical clearing and LSFM has new applications in dermatology and dermatological research by allowing 3D visualization and analysis of whole human skin biopsies. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. Combined Study of Snow Depth Determination and Winter Leaf Area Index Retrieval by Unmanned Aerial Vehicle Photogrammetry

    NASA Astrophysics Data System (ADS)

    Lendzioch, Theodora; Langhammer, Jakub; Jenicek, Michal

    2017-04-01

A rapid and robust approach using Unmanned Aerial Vehicle (UAV) digital photogrammetry was applied to evaluate snow accumulation over different small localities (e.g. disturbed forest and open area) and to make indirect field measurements of the Leaf Area Index (LAI) of coniferous forest within the Šumava National Park, Czech Republic. The approach was used to reveal impacts related to changes in forest and snowpack and to determine winter effective LAI for monitoring the impact of forest canopy metrics on snow accumulation. Snow depth and volumetric changes of snow depth over the selected study areas were estimated at high spatial resolution (1 cm) by subtracting a snow-free digital elevation model (DEM) from a snow-covered DEM. Downward-looking UAV images, upward-looking digital hemispherical photography (DHP), and the widely used LAI-2200 plant canopy analyser were applied to determine the winter LAI, which controls interception and transmitted radiation. For the downward-looking UAV images, the snow background was used instead of the sky fraction. The reliability of UAV-based LAI retrieval was tested against an independent data set taken during the snow cover mapping campaigns. The results showed the potential of digital photogrammetry for snow depth mapping and LAI determination by UAV techniques. The average difference obtained between ground-based and UAV-based measurements of snow depth was 7.1 cm, with higher values obtained by UAV. The SD of 22 cm for the open area seemed competitive with the typical precision of point measurements. In contrast, the average difference in the disturbed forest area was 25 cm, with lower values obtained by UAV and an SD of 36 cm, which is in agreement with other studies. The UAV-based LAI measurements yielded the lowest effective LAI values and the LAI-2200 plant canopy analyser the highest.
The largest bias in effective LAI was observed between the LAI-2200 and UAV-based analyses. Since the LAI parameter is important for snowpack modelling, this method shows the potential to simplify LAI retrieval and the mapping of snow dynamics while reducing running costs and time.
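The core snow-depth step, subtracting the snow-free DEM from the snow-covered DEM, can be sketched in a few lines. A minimal illustration only; the real workflow also handles co-registration error and vegetation masking, which are ignored here:

```python
import numpy as np

def snow_depth_map(dem_snow_covered, dem_snow_free, min_depth=0.0):
    """Per-pixel snow depth (same units as the DEMs) from two
    co-registered elevation models.  Negative differences, which can
    arise from co-registration noise, are clipped to min_depth."""
    depth = (np.asarray(dem_snow_covered, dtype=float)
             - np.asarray(dem_snow_free, dtype=float))
    return np.clip(depth, min_depth, None)
```

Summing the clipped depth raster times the pixel area then gives the volumetric change reported above.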

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Sajan; Petty, Clayton W.; Krafcik, Karen Lee

Electrostatic modes of atomic force microscopy have been shown to be non-destructive and relatively simple methods for imaging conductors embedded in insulating polymers. Here we use electrostatic force microscopy to image the dispersion of carbon nanotubes in a latex-based conductive composite, which brings forth features not observed in previously studied systems employing linear polymer films. A fixed-potential model of the probe-nanotube electrostatics is presented which in principle gives access to the conductive nanoparticle's depth and radius, and the polymer film dielectric constant. Comparing this model to the data results in nanotube depths that appear to be slightly above the film–air interface. Furthermore, this result suggests that water-mediated charge build-up at the film–air interface may be the source of electrostatic phase contrast in ambient conditions.

  8. Recent advances in synchrotron-based hard x-ray phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Nelson, J.; Holzner, C.; Andrews, J. C.; Pianetta, P.

    2013-12-01

    Ever since the first demonstration of phase contrast imaging (PCI) in the 1930s by Frits Zernike, people have realized the significant advantage of phase contrast over conventional absorption-based imaging in terms of sensitivity to ‘transparent’ features within specimens. Thus, x-ray phase contrast imaging (XPCI) holds great potential in studies of soft biological tissues, typically containing low Z elements such as C, H, O and N. Particularly when synchrotron hard x-rays are employed, the favourable brightness, energy tunability, monochromatic characteristics and penetration depth have dramatically enhanced the quality and variety of XPCI methods, which permit detection of the phase shift associated with 3D geometry of relatively large samples in a non-destructive manner. In this paper, we review recent advances in several synchrotron-based hard x-ray XPCI methods. Challenges and key factors in methodological development are discussed, and biological and medical applications are presented.

  9. Validation of Cloud Parameters Derived from Geostationary Satellites, AVHRR, MODIS, and VIIRS Using SatCORPS Algorithms

    NASA Technical Reports Server (NTRS)

Minnis, P.; Sun-Mack, S.; Bedka, K. M.; Yost, C. R.; Trepte, Q. Z.; Smith, W. L., Jr.; Painemal, D.; Chen, Y.; Palikonda, R.; Dong, X.

    2016-01-01

Validation is a key component of remote sensing that can take many different forms. The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) is applied to many different imager datasets, including those from the geostationary satellites Meteosat, Himawari-8, INSAT-3D, GOES, and MTSAT, as well as from the low-Earth-orbiting satellite imagers MODIS, AVHRR, and VIIRS. While each of these imagers has a similar set of channels with wavelengths near 0.65, 3.7, 11, and 12 micrometers, many differences among them can lead to discrepancies in the retrievals. These differences include spatial resolution, spectral response functions, viewing conditions, and calibrations, among others. Even when analyzed with nearly identical algorithms, it is necessary, because of those discrepancies, to validate the results from each imager separately in order to assess the uncertainties in the individual parameters. This paper presents comparisons of various SatCORPS-retrieved cloud parameters with independent measurements and retrievals from a variety of instruments. These include surface- and space-based lidar and radar data from CALIPSO and CloudSat, respectively, to assess cloud fraction, height, base, optical depth, and ice water path; satellite and surface microwave radiometers to evaluate cloud liquid water path; surface-based radiometers to evaluate optical depth and effective particle size; and airborne in-situ data to evaluate ice water content, effective particle size, and other parameters. The results of the comparisons are contrasted, and the factors influencing the differences are discussed.

  10. Bio-Optics Based Sensation Imaging for Breast Tumor Detection Using Tissue Characterization

    PubMed Central

    Lee, Jong-Ha; Kim, Yoon Nyun; Park, Hee-Jun

    2015-01-01

A tissue inclusion parameter estimation method is proposed to measure stiffness as well as geometric parameters. The estimation is performed based on tactile data obtained at the surface of the tissue using an optical tactile sensation imaging system (TSIS). A forward algorithm is designed to comprehensively predict the tactile data from the mechanical properties of a tissue inclusion using finite element modeling (FEM). This forward information is used to develop an inversion algorithm that extracts the size, depth, and Young's modulus of a tissue inclusion from the tactile data. We utilize an artificial neural network (ANN) for the inversion algorithm. The proposed estimation method was validated on a realistic tissue phantom with stiff inclusions. The experimental results showed that the proposed method can measure the size, depth, and Young's modulus of a tissue inclusion with 0.58%, 3.82%, and 2.51% relative errors, respectively. The obtained results indicate that the proposed method has the potential to become a useful screening and diagnostic method for breast cancer. PMID:25785306

  11. Stereo Correspondence Using Moment Invariants

    NASA Astrophysics Data System (ADS)

    Premaratne, Prashan; Safaei, Farzad

Autonomous navigation is seen as a vital tool in harnessing the enormous potential of Unmanned Aerial Vehicles (UAVs) and small robotic vehicles for both military and civilian use. Even though laser-based scanning solutions for Simultaneous Localization And Mapping (SLAM) are considered the most reliable for depth estimation, they are not feasible for use in UAVs and land-based small vehicles due to their physical size and weight. Stereovision is considered the best approach for any autonomous navigation solution, as stereo rigs are lightweight and inexpensive. However, stereoscopy, which estimates depth information through pairs of stereo images, can still be computationally expensive and unreliable. This is mainly because some of the algorithms used in successful stereovision solutions have high computational requirements that cannot be met by small robotic vehicles. In our research, we implement a feature-based stereovision solution using moment invariants as a metric to find corresponding regions in image pairs, which reduces the computational complexity and improves the accuracy of the disparity measures; this is significant for use in UAVs and in small robotic vehicles.
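Moment invariants give a compact, cheap region descriptor: regions in the left and right images whose invariant vectors are closest are declared corresponding. A sketch of the first two Hu invariants (the full set has seven; this is an illustration of the idea, not the paper's exact metric):

```python
import numpy as np

def hu_invariants(patch):
    """First two Hu moment invariants of a grayscale patch.

    Central moments remove translation dependence; normalizing by
    powers of m00 removes scale dependence.
    """
    patch = np.asarray(patch, dtype=float)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    m00 = patch.sum()
    xbar = (xs * patch).sum() / m00
    ybar = (ys * patch).sum() / m00

    def mu(p, q):   # central moment of order (p, q)
        return (((xs - xbar) ** p) * ((ys - ybar) ** q) * patch).sum()

    def eta(p, q):  # normalized central moment
        return mu(p, q) / m00 ** (1.0 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return np.array([phi1, phi2])
```

Because the descriptor is unchanged when the same physical region appears at a different image position, matching reduces to a nearest-neighbour search over invariant vectors along the epipolar line.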

  12. Measuring the depth of the caudal epidural space to prevent dural sac puncture during caudal block in children.

    PubMed

    Lee, Hyun Jeong; Min, Ji Young; Kim, Hyun Il; Byon, Hyo-Jin

    2017-05-01

Caudal blocks are performed through the sacral hiatus in order to provide pain control in children undergoing lower abdominal surgery. During the block, it is important to avoid advancing the needle too far beyond the sacrococcygeal ligament, to prevent unintended dural puncture. This study used demographic data to establish simple guidelines for predicting a safe needle depth in the caudal epidural space in children. A total of 141 children under 12 years old who had undergone lumbar-sacral magnetic resonance imaging were included. The T2 sagittal image that provided the best view of the sacrococcygeal membrane and the dural sac was chosen. We used a Picture Archiving and Communication System (Centricity® PACS, GE Healthcare Co.) to measure the distance between the sacrococcygeal ligament and the dural sac, the length of the sacrococcygeal ligament, and the maximum depth of the caudal space. There were strong correlations between age, weight, height, and BSA and the distance between the sacrococcygeal ligament and the dural sac, as well as the length of the sacrococcygeal ligament. Based on these findings, a simple formula to calculate the distance between the sacrococcygeal ligament and the dural sac was developed: 25 × BSA (mm). This simple formula can accurately calculate the safe depth of the caudal epidural space to prevent unintended dural puncture during caudal block in children. However, further clinical studies based on this formula are needed to substantiate its utility. © 2017 John Wiley & Sons Ltd.
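The reported rule of thumb, distance in millimetres ≈ 25 × BSA, is trivial to apply once a body surface area estimate is available. The abstract does not say which BSA equation the authors used; the Mosteller formula below is a common choice and is an assumption of this sketch, which is illustrative only and not clinical guidance:

```python
import math

def sacrococcygeal_to_dural_sac_mm(height_cm, weight_kg):
    """Predicted distance (mm) from the sacrococcygeal ligament to the
    dural sac using the abstract's formula 25 x BSA.

    BSA (m^2) is computed with the Mosteller formula -- an assumption,
    since the abstract does not specify the BSA equation used.
    """
    bsa_m2 = math.sqrt(height_cm * weight_kg / 3600.0)
    return 25.0 * bsa_m2
```

For example, a child of 100 cm and 16 kg has a Mosteller BSA of 2/3 m², giving a predicted distance of about 16.7 mm.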

  13. A wavelet-based Bayesian framework for 3D object segmentation in microscopy

    NASA Astrophysics Data System (ADS)

    Pan, Kangyu; Corrigan, David; Hillebrand, Jens; Ramaswami, Mani; Kokaram, Anil

    2012-03-01

In confocal microscopy, target objects are labeled with fluorescent markers in the living specimen, and usually appear with irregular brightness in the observed images. Also, due to the presence of out-of-focus objects in the image, the segmentation of 3-D objects in the stack of image slices captured at different depth levels of the specimen still relies heavily on manual analysis. In this paper, a novel Bayesian model is proposed for segmenting 3-D synaptic objects from a given image stack. In order to solve the irregular-brightness and out-of-focus problems, the segmentation model employs a likelihood using the luminance-invariant 'wavelet features' of image objects in the dual-tree complex wavelet domain, as well as a likelihood based on the vertical intensity profile of the image stack in 3-D. Furthermore, a smoothness 'frame' prior based on a priori knowledge of the connections of the synapses is introduced to the model to enhance the connectivity of the synapses. As a result, our model can successfully segment the in-focus target synaptic objects from a 3-D image stack with irregular brightness.

  14. Forty-five degree backscattering-mode nonlinear absorption imaging in turbid media.

    PubMed

    Cui, Liping; Knox, Wayne H

    2010-01-01

Two-color nonlinear absorption imaging has been previously demonstrated with endogenous contrast of hemoglobin and melanin in turbid media using transmission-mode detection and a dual-laser technology approach. For clinical applications, it would be generally preferable to use backscattering-mode detection and a simpler single-laser technology. We demonstrate that imaging in backscattering mode in turbid media using nonlinear absorption can be obtained with as little as 1 mW average power per beam with a single laser source. Images have been achieved with a detector receiving backscattered light at a 45-deg angle relative to the incoming beams' direction. We obtain images of capillary tube phantoms with resolution as high as 20 μm and penetration depth up to 0.9 mm for a 300-μm tube at SNR ≈ 1 in calibrated scattering solutions. Simulation results of the backscattering and detection process using nonimaging optics are demonstrated. A Monte Carlo-based method shows that the nonlinear signal drops exponentially as the depth increases, which agrees well with our experimental results. Simulation also shows that with our current detection method, only 2% of the signal is typically collected with a 5-mm-radius detector.

  15. National Snow Analyses - NOHRSC - The ultimate source for snow information

    Science.gov Websites

National Snow Analyses from NOHRSC provide modeled snow water equivalent, snow depth, and average snowpack temperature maps, each viewable as animations over a season, two weeks, or one day.

  16. Ultrahigh sensitive optical microangiography reveals depth-resolved microcirculation and its longitudinal response to prolonged ischemic event within skeletal muscles in mice

    NASA Astrophysics Data System (ADS)

    Jia, Yali; Qin, Jia; Zhi, Zhongwei; Wang, Ruikang K.

    2011-08-01

    The primary pathophysiology of peripheral arterial disease is associated with impaired perfusion to the muscle tissue in the lower extremities. The lack of effective pharmacologic treatments that stimulate vessel collateralization emphasizes the need for an imaging method that can be used to dynamically visualize depth-resolved microcirculation within muscle tissues. Optical microangiography (OMAG) is a recently developed label-free imaging method capable of producing three-dimensional images of dynamic blood perfusion within microcirculatory tissue beds at an imaging depth of up to ~2 mm, with an unprecedented imaging sensitivity of blood flow at ~4 μm/s. In this paper, we demonstrate the utility of OMAG in imaging the detailed blood flow distributions, at a capillary-level resolution, within skeletal muscles of mice. By use of the mouse model of hind-limb ischemia, we show that OMAG can assess the time-dependent changes in muscle perfusion and perfusion restoration along tissue depth. These findings indicate that OMAG can represent a sensitive, consistent technique to effectively study pharmacologic therapies aimed at promoting the growth and development of collateral vessels.

  17. Estimation of object motion parameters from noisy images.

    PubMed

    Broida, T J; Chellappa, R

    1986-01-01

An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consist of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one-dimensional images of a two-dimensional object. Thus, the object moves in one transverse dimension and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.
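The recursive structure of the estimator can be illustrated with a drastically simplified version: a linear Kalman filter tracking transverse position and velocity from noisy 1-D image coordinates. The paper's actual filter is an iterated *extended* Kalman filter, needed because central projection makes the measurement model nonlinear; that part is omitted in this sketch:

```python
import numpy as np

def track_1d(measurements, dt=1.0, meas_var=0.01):
    """Linear Kalman filter for a constant-velocity 1-D target.

    Recursively fuses a sequence of noisy position measurements into
    a smoothed [position, velocity] estimate, illustrating how long
    image sequences average out measurement noise.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = 1e-6 * np.eye(2)                    # small process noise
    R = np.array([[meas_var]])
    x = np.array([measurements[0], 0.0])    # state: [position, velocity]
    P = np.eye(2)                           # state covariance
    for z in measurements[1:]:
        # Predict one frame ahead.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new noisy image coordinate.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x
```

As the abstract notes, using an arbitrarily long sequence drives the parameter error down; the iterated EKF adds relinearization of the projection model inside each update.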

  18. A perspective on high-frequency ultrasound for medical applications

    NASA Astrophysics Data System (ADS)

Mamou, Jonathan; Aristizábal, Orlando; Silverman, Ronald H.; Ketterling, Jeffrey A.

    2010-01-01

High-frequency ultrasound (HFU, >15 MHz) is a rapidly developing field. HFU is currently used and investigated for ophthalmologic, dermatologic, intravascular, and small-animal imaging. HFU offers a non-invasive means to investigate tissue at the microscopic level with resolutions often better than 100 μm. However, fine resolution is only obtained over the limited depth-of-field (~1 mm) of the single-element spherically-focused transducers typically used for HFU applications. Another limitation is penetration depth, because most biological tissues have large attenuation at high frequencies. In this study, two 5-element annular arrays with center frequencies of 17 and 34 MHz were fabricated, and methods were developed to obtain images with increased penetration depth and depth-of-field. These methods were used in ophthalmologic and small-animal imaging studies. Improved blood sensitivity was obtained when a phantom mimicking a vitreous hemorrhage was imaged. Central nervous systems of 12.5-day-old mouse embryos were imaged in utero and in three dimensions for the first time.

  19. An image-space parallel convolution filtering algorithm based on shadow map

    NASA Astrophysics Data System (ADS)

    Li, Hua; Yang, Huamin; Zhao, Jianping

    2017-07-01

Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. The method first generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as the shadow boundaries. These areas are then encoded as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm smooths out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more detail at shadow boundaries than previous works.
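The smoothing stage is an ordinary box filter applied to the binary light-visibility map. A CPU sketch of the separable form (two 1-D passes, the same decomposition a GPU kernel would parallelize; the wrap-around edge handling here is a simplification of what a renderer would do):

```python
import numpy as np

def box_filter(img, radius):
    """Separable box filter of width 2*radius + 1.

    Applied to a binary light-visibility map, this turns hard 0/1
    shadow boundaries into smooth penumbra values in [0, 1].
    Edges wrap around (np.roll), which is a simplification.
    """
    k = 2 * radius + 1
    out = np.asarray(img, dtype=float)
    for axis in (0, 1):          # one 1-D pass per image axis
        acc = np.zeros_like(out)
        for s in range(-radius, radius + 1):
            acc += np.roll(out, s, axis=axis)
        out = acc / k
    return out
```

Separability reduces the cost per pixel from O(k²) to O(k), which is why the two-pass form is the natural image-space parallel implementation.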

  20. Depth of focus extended microscope configuration for imaging of incorporated groups of molecules, DNA constructs and clusters inside bacterial cells

    NASA Astrophysics Data System (ADS)

    Fessl, Tomas; Ben-Yaish, Shai; Vacha, Frantisek; Adamec, Frantisek; Zalevsky, Zeev

    2009-07-01

Imaging of small objects such as single molecules, DNA clusters and single bacterial cells is problematic not only because of the lateral resolution obtainable in current microscopy but also, and just as fundamentally, because of the lack of sufficient axial depth of focus to keep the full object in focus simultaneously. Extension of the depth of focus is also helpful for single-molecule steady-state FRET measurements. In this technique it is crucial to obtain data from many well-focused molecules, which are often located at different axial depths. In this paper we present the implementation of an all-optical, real-time technique for extending the depth of focus that may be incorporated in any high-NA microscope system and used for the above-mentioned applications. We demonstrate experimentally how, after the integration of a special optical element in the high-NA 100× objective lens of a single-molecule imaging microscope system, the depth of focus is significantly improved while maintaining the same lateral resolution in imaging applications of incorporated groups of molecules, DNA constructs and clusters inside bacterial cells.

Top