Sample records for achievable imaging depth

  1. Depth-aware image seam carving.

    PubMed

    Shen, Jianbing; Wang, Dapeng; Li, Xuelong

    2013-10-01

Image seam carving algorithms should preserve important and salient objects as much as possible when changing the image size, while avoiding unnecessary removal of secondary objects in the scene. However, it remains difficult to identify the important and salient objects and to avoid distorting them after resizing the input image. In this paper, we develop a novel depth-aware single-image seam carving approach that takes advantage of modern depth cameras such as the Kinect sensor, which captures an RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph-cut-based energy optimization. Our method achieves better seam carving performance by cutting fewer seams through near objects and removing more seams from distant objects. To the best of our knowledge, our algorithm is the first to use the true depth map captured by the Kinect depth camera for single-image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods.
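A minimal sketch of the core idea — a seam energy map that adds a depth term to the usual gradient term so seams preferentially pass through distant regions — assuming NumPy; the function and parameter names (`depth_aware_energy`, `alpha`) are hypothetical, and the paper's actual JND model and multiscale graph-cut optimization are not reproduced here:

```python
import numpy as np

def depth_aware_energy(gray, depth, alpha=2.0):
    """Simplified depth-weighted seam-carving energy map.

    gray  : 2D float array, grayscale image
    depth : 2D float array, larger values = closer to the camera
    alpha : weight of the depth term (hypothetical parameter)
    """
    # Gradient-magnitude term (standard content-aware energy).
    gy, gx = np.gradient(gray)
    grad_energy = np.abs(gx) + np.abs(gy)
    # Near objects get extra energy, so minimum-energy seams avoid them.
    near = (depth - depth.min()) / (depth.max() - depth.min() + 1e-9)
    return grad_energy + alpha * near

def min_vertical_seam(energy):
    """Dynamic-programming search for the minimum-energy vertical seam."""
    h, w = energy.shape
    cost = energy.copy()
    for r in range(1, h):
        left = np.roll(cost[r - 1], 1);   left[0] = np.inf
        right = np.roll(cost[r - 1], -1); right[-1] = np.inf
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    # Backtrack from the cheapest bottom-row pixel.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(0, c - 1), min(w, c + 2)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam
```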

  2. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites or infrastructure inspection by unmanned airborne vehicles, will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images: e.g., the first image contains only the most significant bytes (MSB image) and the second contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. Preliminary results show that two 8 bit H.264/AVC codecs can achieve results similar to a 16 bit HEVC codec.
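The MSB/LSB mapping the abstract describes is straightforward to sketch; this is a minimal NumPy illustration (function names are hypothetical), with the codec stage itself omitted:

```python
import numpy as np

def split_msb_lsb(img16):
    """Map a 16-bit image to two 8-bit planes, as in the abstract,
    so each plane can be fed to an 8-bit codec."""
    img16 = img16.astype(np.uint16)
    msb = (img16 >> 8).astype(np.uint8)    # most significant bytes
    lsb = (img16 & 0xFF).astype(np.uint8)  # least significant bytes
    return msb, lsb

def merge_msb_lsb(msb, lsb):
    """Inverse mapping: reassemble the 16-bit image after decoding."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
```

With lossless codecs the round trip is exact; with lossy codecs, errors in the MSB plane cost 256 times more than errors in the LSB plane, which is why the compression parameters of the two planes must be balanced as the paper analyzes.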

  3. Extended depth of field imaging for high speed object analysis

    NASA Technical Reports Server (NTRS)

    Frost, Keith (Inventor); Ortyn, William (Inventor); Basiji, David (Inventor); Bauer, Richard (Inventor); Liang, Luchuan (Inventor); Hall, Brian (Inventor); Perry, David (Inventor)

    2011-01-01

    A high speed, high-resolution flow imaging system is modified to achieve extended depth of field imaging. An optical distortion element is introduced into the flow imaging system. Light from an object, such as a cell, is distorted by the distortion element, such that a point spread function (PSF) of the imaging system is invariant across an extended depth of field. The distorted light is spectrally dispersed, and the dispersed light is used to simultaneously generate a plurality of images. The images are detected, and image processing is used to enhance the detected images by compensating for the distortion, to achieve extended depth of field images of the object. The post image processing preferably involves de-convolution, and requires knowledge of the PSF of the imaging system, as modified by the optical distortion element.

  4. Depth image enhancement using perceptual texture priors

    NASA Astrophysics Data System (ADS)

    Bang, Duhyeon; Shim, Hyunjung

    2015-03-01

A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, depth cameras exhibit severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress the depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perception-based depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database composed of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect our work to be effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.

  5. No scanning depth imaging system based on TOF

    NASA Astrophysics Data System (ADS)

    Sun, Rongchun; Piao, Yan; Wang, Yu; Liu, Shuo

    2016-03-01

To quickly obtain a 3D model of real-world objects, multi-point ranging is very important. However, the traditional measuring method usually adopts the principle of point-by-point or line-by-line measurement, which is slow and inefficient. In this paper, a non-scanning depth imaging system based on TOF (time of flight) is proposed. The system is composed of a light source circuit, a specialized infrared image sensor module, an image-data processor and controller, a data cache circuit, a communication circuit, and so on. According to the working principle of TOF measurement, an image sequence is collected by the high-speed CMOS sensor, the distance information is obtained by identifying the phase difference, and the amplitude image is also calculated. Experiments were conducted, and the results show that the depth imaging system achieves scanning-free depth imaging with good performance.
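The phase-difference ranging the abstract mentions can be illustrated with the common four-bucket demodulation scheme. The abstract does not give the sensor's actual readout, so the sampling convention below (four samples at 0°, 90°, 180°, 270° phase steps of the modulation, and a 20 MHz modulation frequency) is an assumption for illustration:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a1, a2, a3, f_mod=20e6):
    """Per-pixel depth and amplitude from four phase-stepped samples
    a_k = B + A*cos(phi + k*pi/2) of the modulated return signal.
    A generic four-bucket sketch, not the paper's specific circuit."""
    i = a0 - a2                                   # in-phase component
    q = a3 - a1                                   # quadrature component
    phase = np.mod(np.arctan2(q, i), 2 * np.pi)   # phase delay in [0, 2*pi)
    depth = C * phase / (4 * np.pi * f_mod)       # round trip -> one-way range
    amplitude = 0.5 * np.hypot(i, q)              # modulation amplitude image
    return depth, amplitude
```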

  6. A depth enhancement strategy for kinect depth image

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Li, Hua; Han, Cheng; Xue, Yaohong; Zhang, Chao; Hu, Hanping; Jiang, Zhengang

    2018-03-01

Kinect is a motion sensing input device which is widely used in computer vision and other related fields. However, there is much inaccurate depth data in Kinect depth images, even with Kinect v2. In this paper, an algorithm is proposed to enhance Kinect v2 depth images. According to the principle of its depth measurement, the foreground and the background are considered separately. For the background, holes are filled according to the depth data in the neighborhood. For the foreground, a filling algorithm based on the color image, taking both spatial and color information into account, is proposed. An adaptive joint bilateral filtering method is used to reduce noise. Experimental results show that the processed depth images have clean backgrounds and clear edges, and are better than those of traditional strategies. The algorithm can be applied in 3D reconstruction to pre-process depth images in real time and obtain accurate results.
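The background step — filling holes from valid depth values in the neighborhood — can be sketched as follows, assuming NumPy, zero-valued pixels marking holes, and a median rule (the paper's exact filling rule and its color-guided foreground step are not specified in the abstract):

```python
import numpy as np

def fill_background_holes(depth, ksize=2):
    """Fill zero-valued holes with the median of valid neighbors in a
    (2*ksize+1)^2 window; a minimal stand-in for neighborhood-based
    background hole filling."""
    out = depth.astype(float).copy()
    h, w = out.shape
    ys, xs = np.nonzero(out == 0)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - ksize), min(h, y + ksize + 1)
        x0, x1 = max(0, x - ksize), min(w, x + ksize + 1)
        patch = out[y0:y1, x0:x1]
        valid = patch[patch > 0]          # ignore other holes
        if valid.size:
            out[y, x] = np.median(valid)
    return out
```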

  7. Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system

    NASA Astrophysics Data System (ADS)

    Zheng, Yipeng; Tan, Wenjiang; Si, Jinhai; Ren, YuHu; Xu, Shichao; Tong, Junyi; Hou, Xun

    2016-09-01

    We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.

  8. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    USGS Publications Warehouse

    Legleiter, Carl

    2016-01-01

Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R² = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.
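The quantile-matching idea at the heart of IDQT — assign each pixel the depth that occupies the same quantile in a measured depth distribution as the pixel value occupies in the image's distribution — can be sketched in a few lines of NumPy (the function name and rank convention are assumptions, and the deep-water correction/MNF preprocessing is omitted):

```python
import numpy as np

def idqt(pixel_values, depth_sample):
    """Map each pixel value to the depth at the same nonexceedance
    probability in a field-measured depth distribution."""
    flat = np.asarray(pixel_values, dtype=float).ravel()
    # Empirical quantile of each pixel within the image's own CDF.
    ranks = flat.argsort().argsort()              # ranks 0..n-1
    quantiles = (ranks + 0.5) / flat.size
    # Look up the same quantile in the depth distribution.
    depths = np.quantile(np.asarray(depth_sample, float), quantiles)
    return depths.reshape(np.shape(pixel_values))
```

Because the calibration is between two frequency distributions rather than between co-located point pairs, the mapping is aspatial, which is what makes the method tolerant of misalignment between field and image data.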

  9. Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yipeng; Tan, Wenjiang, E-mail: tanwenjiang@mail.xjtu.edu.cn; Si, Jinhai

    2016-09-07

We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.

  10. Color image guided depth image super resolution using fusion filter

    NASA Astrophysics Data System (ADS)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, while color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm, which uses an HR color image as a guide and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better-quality HR depth images both numerically and visually.
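One half of such a fusion, the joint bilateral filter, can be sketched directly: spatial weights come from pixel distance and range weights from the *guide* image's intensities, so filtered depth edges follow color edges. This is a single-filter illustration under assumed parameter names (`sigma_s`, `sigma_r`), not the paper's full guided/edge-based fusion:

```python
import numpy as np

def joint_bilateral_depth(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Joint bilateral filtering of a depth map guided by a grayscale image."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))  # fixed spatial kernel
    dpad = np.pad(depth.astype(float), radius, mode='edge')
    gpad = np.pad(guide.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            dwin = dpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = gpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weights from the guide, not from the noisy depth.
            rng = np.exp(-(gwin - guide[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * dwin).sum() / wgt.sum()
    return out
```

In super-resolution use, the LR depth is first upsampled to the guide's grid and this filter then sharpens depth discontinuities along the color edges.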

  11. Time multiplexing based extended depth of focus imaging.

    PubMed

    Ilovitsh, Asaf; Zalevsky, Zeev

    2016-01-01

We propose to utilize the time multiplexing super resolution method to extend the depth of focus of an imaging system. In standard time multiplexing, super resolution is achieved by generating duplications of the optical transfer function in the spectral domain through the use of moving gratings. While this improves the spatial resolution, it does not increase the depth of focus. By changing the grating frequency, and thereby the duplication positions, it is possible to obtain an extended depth of focus. The proposed method is presented analytically, demonstrated via numerical simulations and validated by a laboratory experiment.

  12. Noise removal in extended depth of field microscope images through nonlinear signal processing.

    PubMed

    Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J

    2013-04-01

    Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.

  13. High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.

    PubMed

    Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S

    2018-03-05

    A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
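The "standard cross-correlation approach" for high signal-to-noise data can be illustrated compactly: correlate the count-versus-gate-delay trace with the known temporal response, take the best-matching delay, and convert it to range via d = c·t/2. The function and variable names are assumptions, and the bespoke clustering-based restoration for sparse data is not reproduced:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_gate_scan(counts, delays, response):
    """Cross-correlation depth estimate for range-gated photon counts.

    counts   : photon counts per temporal delay step
    delays   : the delay (seconds) of each step
    response : known temporal response of the gate/laser system
    """
    counts = np.asarray(counts, float)
    response = np.asarray(response, float)
    # Score every alignment of the response against the trace.
    scores = np.correlate(counts, response, mode='same')
    t = delays[int(np.argmax(scores))]   # delay of best alignment
    return 0.5 * C * t                   # round trip -> one-way range
```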

  14. Enhanced optical clearing of skin in vivo and optical coherence tomography in-depth imaging

    NASA Astrophysics Data System (ADS)

    Wen, Xiang; Jacques, Steven L.; Tuchin, Valery V.; Zhu, Dan

    2012-06-01

The strong optical scattering of skin tissue makes it very difficult for optical coherence tomography (OCT) to achieve deep imaging in skin. Significant optical clearing of in vivo rat skin sites was achieved within 15 min by topical application of an optical clearing agent, PEG-400, a chemical enhancer (thiazone or propanediol), and physical massage. Only when all three components were applied together could a 15 min treatment achieve a threefold increase in the OCT reflectance from a 300 μm depth and a 31% enhancement in the imaging depth Z_threshold.

  15. Image processing operations achievable with the Microchannel Spatial Light Modulator

    NASA Astrophysics Data System (ADS)

    Warde, C.; Fisher, A. D.; Thackara, J. I.; Weiss, A. M.

    1980-01-01

    The Microchannel Spatial Light Modulator (MSLM) is a versatile, optically-addressed, highly-sensitive device that is well suited for low-light-level, real-time, optical information processing. It consists of a photocathode, a microchannel plate (MCP), a planar acceleration grid, and an electro-optic plate in proximity focus. A framing rate of 20 Hz with full modulation depth, and 100 Hz with 20% modulation depth has been achieved in a vacuum-demountable LiTaO3 device. A halfwave exposure sensitivity of 2.2 mJ/sq cm and an optical information storage time of more than 2 months have been achieved in a similar gridless LiTaO3 device employing a visible photocathode. Image processing operations such as analog and digital thresholding, real-time image hard clipping, contrast reversal, contrast enhancement, image addition and subtraction, and binary-level logic operations such as AND, OR, XOR, and NOR can be achieved with this device. This collection of achievable image processing characteristics makes the MSLM potentially useful for a number of smart sensor applications.

  16. Nanometric depth resolution from multi-focal images in microscopy.

    PubMed

    Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H

    2011-07-06

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.

  17. Nanometric depth resolution from multi-focal images in microscopy

    PubMed Central

    Dalgarno, Heather I. C.; Dalgarno, Paul A.; Dada, Adetunmise C.; Towers, Catherine E.; Gibson, Gavin J.; Parton, Richard M.; Davis, Ilan; Warburton, Richard J.; Greenaway, Alan H.

    2011-01-01

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels. PMID:21247948

  18. Bayesian depth estimation from monocular natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.

  19. Inverse scattering pre-stack depth imaging and its comparison to some depth migration methods for imaging rich fault complex structure

    NASA Astrophysics Data System (ADS)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Mubarok, Syahrul; Deny, Agus; Widowati, Sri; Kurniadi, Rizal

    2012-06-01

Migration is an important issue in seismic imaging of complex structure. In this decade, depth imaging has become an important tool for producing accurate images, in place of time-domain imaging. The challenge for depth migration methods, however, is revealing the complex structure of the subsurface. There are many depth migration methods, each with its advantages and weaknesses. In this paper, we present our proposed pre-stack depth migration method based on the time-domain inverse scattering wave equation. We hope this method can serve as a solution for imaging complex structures in Indonesia, especially in rich thrusting fault zones. In this research, we develop a recent advance in wave equation migration based on time-domain inverse scattering, which uses a more natural wave propagation model based on scattered waves. This pre-stack depth migration uses a time-domain inverse scattering wave equation based on the Helmholtz equation. To provide true amplitude recovery, an inverse-of-divergence procedure and recovery of transmission loss are incorporated into the pre-stack migration. Benchmarks of the proposed inverse scattering pre-stack depth migration against other migration methods are also presented, namely wave equation pre-stack depth migration, wave equation depth migration, and pre-stack time migration. The inverse scattering pre-stack depth migration successfully imaged a rich fault zone containing extreme dips, producing a seismic image of superior quality. The image quality of the inverse scattering migration is much better than that of the other migration methods.

  20. VISIDEP™: visual image depth enhancement by parallax induction

    NASA Astrophysics Data System (ADS)

    Jones, Edwin R.; McLaurin, A. P.; Cathey, LeConte

    1984-05-01

The usual descriptions of depth perception have traditionally required the simultaneous presentation of disparate views to separate eyes, with the concomitant demand that the resulting binocular parallax be horizontally aligned. Our work suggests that the visual input information is compared in a short-term memory buffer which permits the brain to compute depth as it is normally perceived. However, the mechanism involved is also capable of receiving and processing stereographic information even when it is received monocularly or when identical inputs are simultaneously fed to both eyes. We have also found that the restriction to horizontally displaced images is not a necessary requirement and that improvement in image acceptability is achieved by the use of vertical parallax. Use of these ideas permits the presentation of three-dimensional scenes on flat screens in full color without the encumbrance of glasses or other viewing aids.

  1. A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences

    PubMed Central

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body poses can be estimated through model fitting using dense correspondences between depth data and an articulated human model (local optimization method). Although this usually achieves high accuracy due to dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this key-point based method is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by methods based on key-points and local optimization. Experimental results are shown and a performance comparison is presented to demonstrate the effectiveness of the proposed approach. PMID:22399933

  2. Depth extraction method with high accuracy in integral imaging based on moving array lenslet technique

    NASA Astrophysics Data System (ADS)

    Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing

    2018-03-01

In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within one pitch to get N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum modulus difference (SMD) blur metric is applied to these slice images to obtain the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
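The blur-metric step can be sketched as: compute a sharpness score for each reconstructed slice and take the depth whose slice is sharpest. The SMD variant below (sum of absolute neighbor differences) and the function names are illustrative assumptions:

```python
import numpy as np

def smd(img):
    """Sum-modulus-difference focus measure: total absolute difference
    between neighboring pixels; sharper (in-focus) slices score higher."""
    img = np.asarray(img, float)
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

def best_focus_depth(slices, depths):
    """Pick the reconstruction depth whose slice maximizes the SMD score."""
    scores = [smd(s) for s in slices]
    return depths[int(np.argmax(scores))]
```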

  3. Monocular depth perception using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Hombali, Apoorv; Gorde, Vaibhav; Deshpande, Abhishek

    2011-10-01

This paper primarily exploits some of the more obscure but inherent properties of camera and image to propose a simpler and more efficient way of perceiving depth. The proposed method involves the use of a single stationary camera at an unknown perspective and an unknown height to determine the depth of an object on unknown terrain. To achieve this, a direct correspondence between a pixel in the image and its location in real space has to be formulated. First, a calibration step is undertaken whereby the equation of the plane visible in the field of view is calculated, along with the relative distance between camera and plane, using a set of derived spatial geometric relations coupled with a few intrinsic properties of the system. The depth of an unknown object is then perceived by first extracting the object under observation using a series of image processing steps and then exploiting the aforementioned mapping between pixel and real-space coordinates. The performance of the algorithm is greatly enhanced by the introduction of reinforcement learning, making the system independent of hardware and environment. Furthermore, the depth calculation function is modified with a supervised learning algorithm, giving consistent improvement in results. Thus, the system uses past experience to successively optimize each run. Using the above procedure, a series of experiments and trials is carried out to prove the concept and its efficacy.
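The pixel-to-real-space mapping rests on a standard geometric step: back-project the pixel through the camera intrinsics into a viewing ray and intersect that ray with the calibrated ground plane. A minimal sketch, assuming known intrinsics K and a plane n·X = d with the camera at the origin (the paper's own calibration procedure derives these quantities differently):

```python
import numpy as np

def pixel_to_ground(u, v, K, plane_n, plane_d):
    """Intersect the viewing ray of pixel (u, v) with the plane n.X = d.

    K       : 3x3 camera intrinsics matrix (assumed known)
    plane_n : plane normal in camera coordinates
    plane_d : plane offset, so points X on the plane satisfy n.X = d
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction in camera frame
    n = np.asarray(plane_n, float)
    t = plane_d / (n @ ray)      # scale so the point lies on the plane
    return t * ray               # 3D point on the ground plane
```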

  4. Total variation based image deconvolution for extended depth-of-field microscopy images

    NASA Astrophysics Data System (ADS)

    Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.

    2015-03-01

One approach to a detailed understanding of dynamic cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with a high lateral resolution in the range of a few 100 nm. In a previous study, extended depth-of-field microscopy (EDF microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was proved in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of light. Hence the focal ellipsoid is smeared out and images appear blurred at first. Image restoration by deconvolution using the known point-spread function (PSF) of the optical system is necessary to achieve sharp microscopic images with an extended depth of field. This work focuses on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. This inverse problem is challenging due to the presence of Poisson-distributed and Gaussian noise, and because the PSF used for deconvolution is exact in only one plane within the object. We use nonlinear Total Variation based image restoration techniques, in which different types of noise can be treated properly. Various algorithms are evaluated for artificially generated 3D images as well as for fluorescence measurements of BPAE cells.
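The Total Variation regularization idea can be illustrated with a toy gradient-descent denoiser for min over u of 0.5·||u − f||² + λ·TV(u), using a smoothed TV term. This is a stand-in sketch only: the paper's problem also includes the blur operator (deconvolution) and proper Poisson noise handling, both omitted here, and all parameter names are assumptions:

```python
import numpy as np

def tv_denoise(f, lam=0.1, n_iter=200, tau=0.2, eps=1e-6):
    """Gradient descent on a smoothed-TV regularized denoising energy."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences (replicated boundary).
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)        # smoothed gradient magnitude
        px, py = ux / mag, uy / mag               # normalized gradient field
        # Divergence of (px, py): the TV subgradient direction.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * ((u - f) - lam * div)          # data term + TV smoothing
    return u
```

Edge-preserving smoothing is the point of TV here: unlike quadratic regularizers, it removes noise without penalizing sharp intensity jumps, which is why it suits microscopy restoration.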

  5. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application in a system prototype are presented. For 3D image capturing, the system applies the time-of-flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multilayer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of the optical modulation. The optical shutter device is specially designed and fabricated with low-resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The proposed optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the largest depth-image resolution among the state of the art, which had previously been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth imaging and its capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, the 3D camera system prototype, and image test results.

  6. Comparing Yb-fiber and Ti:Sapphire lasers for depth resolved imaging of human skin (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Balu, Mihaela; Saytashev, Ilyas; Hou, Jue; Dantus, Marcos; Tromberg, Bruce J.

    2016-02-01

    We report on a direct comparison between Ti:Sapphire and Yb fiber lasers for depth-resolved label-free multimodal imaging of human skin. We found that the penetration depth achieved with the Yb laser was 80% greater than for the Ti:Sapphire. Third harmonic generation (THG) imaging with Yb laser excitation provides additional information about skin structure. Our results indicate the potential of fiber-based laser systems for moving into clinical use.

  7. Depth profile measurement with lenslet images of the plenoptic camera

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei

    2018-03-01

    An approach for depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. First, these images are processed directly with a refocusing technique to obtain the depth map, without the need to align and decode the plenoptic image. Then, a linear depth calibration based on the optical structure of the plenoptic camera is applied for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map: unlike in traditional methods, the resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
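
    A linear depth calibration of the kind described can be sketched as a least-squares line fit from raw refocus-derived depths to known metric distances. A hypothetical illustration (function and variable names are ours, not the paper's):

```python
import numpy as np

def calibrate_depth(raw_depths, true_depths):
    """Fit the linear calibration z = a*raw + b from calibration targets at
    known distances; returns a function converting raw depths to metric depth."""
    a, b = np.polyfit(raw_depths, true_depths, deg=1)
    return lambda raw: a * raw + b
```

    With targets at known distances, the returned function interpolates metric depth for any raw value.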

  8. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determination has become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. Because such systems rely on visual image characteristics, image degradations such as random image-capture noise, motion, and quantization effects must be overcome. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing these conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Application of this model shows significant improvements in the restored field. Previous attempts at restoring depth or displacement fields assumed homogeneous characteristics, which resulted in the smoothing of discontinuities; edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field and has been successfully applied to images representative of robotic scenarios. To accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields allows a means of incorporating the temporal

  9. Depth-resolved imaging of capillary networks in retina and choroid using ultrahigh sensitive optical microangiography

    PubMed Central

    Wang, Ruikang K.; An, Lin; Francis, Peter; Wilson, David J.

    2010-01-01

    We demonstrate that depth-resolved, detailed ocular perfusion maps within the retina and choroid can be obtained with ultrahigh-sensitivity optical microangiography (OMAG). As opposed to conventional OMAG, we apply the OMAG algorithm along the slow scanning axis to achieve ultrahigh-sensitivity imaging of the slow flows within capillaries. We use an 840 nm system operating at an imaging rate of 400 frames/sec that requires 3 sec to complete one 3D scan of an ~3×3 mm² area on the retina. We show the superior imaging performance of OMAG in providing functional images of capillary-level microcirculation at different landmarked depths within the retina and choroid that correlate well with standard retinal pathology. PMID:20436605

  10. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    NASA Astrophysics Data System (ADS)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images have greatly increased in the photogrammetric community. In our work, we focus on algorithms that allow online image-based dense depth estimation from video sequences, which enables direct, live structural analysis of the depicted scene. We therefore use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization, parallelized for general-purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect of reaching good performance is the way the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows online computation of depth maps for subsequences of five frames, provided that the relative
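
    The net effect of this sampling strategy — dense plane hypotheses near the camera, sparse ones far away — is the same as sampling uniformly in inverse depth. A minimal sketch of that inverse sampling (not the authors' cross-ratio derivation in image space):

```python
import numpy as np

def inverse_depth_planes(z_near, z_far, n_planes):
    """Sample plane-sweep depths uniformly in inverse depth (disparity),
    yielding dense sampling near the camera and sparse sampling far away."""
    inv = np.linspace(1.0 / z_near, 1.0 / z_far, n_planes)
    return 1.0 / inv
```

    The spacing between consecutive planes grows with distance, in contrast to equidistant sampling.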

  11. Time-of-flight depth image enhancement using variable integration time

    NASA Astrophysics Data System (ADS)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

    Time-of-flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate distance from a large amount of infrared light, which must be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times by exploiting an image fusion scheme originally proposed for color imaging. To calibrate depth differences due to the change of integration time, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into the wavelet domain and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured the moving bar of a metronome with different integration times. The experiment shows that the proposed method effectively removes motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
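
    One simple way to realize such a joint-histogram estimate is to take, for each short-integration depth bin, the most frequent co-located long-integration depth. A rough sketch under that interpretation (names and details are ours, not necessarily the paper's exact procedure):

```python
import numpy as np

def estimate_depth_transfer(d_short, d_long, bins=256):
    """Estimate a lookup table mapping short-integration depths to the
    long-integration depth scale via the per-row mode of the joint histogram."""
    lo = min(d_short.min(), d_long.min())
    hi = max(d_short.max(), d_long.max())
    H, xe, ye = np.histogram2d(d_short.ravel(), d_long.ravel(),
                               bins=bins, range=[[lo, hi], [lo, hi]])
    centers = 0.5 * (ye[:-1] + ye[1:])
    return xe, centers[np.argmax(H, axis=1)]  # bin edges, transfer values
```

    Given two co-registered depth maps, the returned table maps any short-integration depth to its long-integration counterpart.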

  12. Long-wavelength optical coherence tomography at 1.7 µm for enhanced imaging depth

    PubMed Central

    Sharma, Utkarsh; Chang, Ernest W.; Yun, Seok H.

    2009-01-01

    Multiple scattering in a sample presents a significant limitation to achieving meaningful structural information at deeper penetration depths in optical coherence tomography (OCT). Previous studies suggest that the spectral region around 1.7 µm may exhibit reduced scattering coefficients in biological tissues compared to the widely used wavelengths around 1.3 µm. To investigate this long-wavelength region, we developed a wavelength-swept laser at 1.7 µm and conducted OCT, or optical frequency domain imaging (OFDI), for the first time in this spectral range. The constructed laser provides a wide tuning range of 160 nm, from 1.59 to 1.75 µm. When the laser was operated with a reduced tuning range of 95 nm at a repetition rate of 10.9 kHz and an average output power of 12.3 mW, the OFDI imaging system exhibited a sensitivity of about 100 dB and axial and lateral resolutions of 24 µm and 14 µm, respectively. We imaged several phantom and biological samples using 1.3 µm and 1.7 µm OFDI systems and found that the depth-dependent signal decay rate is substantially lower at 1.7 µm in most, if not all, samples. Our results suggest that this imaging window may offer an advantage over shorter wavelengths by increasing penetration depth as well as enhancing image contrast at deeper penetration depths, where multiply scattered photons otherwise dominate over ballistic photons. PMID:19030057

  13. Depth image super-resolution via semi self-taught learning framework

    NASA Astrophysics Data System (ADS)

    Zhao, Furong; Cao, Zhiguo; Xiao, Yang; Zhang, Xiaodi; Xian, Ke; Li, Ruibo

    2017-06-01

    Depth images have recently attracted much attention in computer vision and in high-quality 3D content production for 3DTV and 3D movies. In this paper, we present a new semi-self-taught learning framework for enhancing the resolution of depth maps without using ancillary color image data at the target resolution or multiple aligned depth maps. Our framework consists of cascaded random forests that proceed from coarse to fine results. We learn surface information and structure transformations both from a small set of high-quality depth exemplars and from the input depth map itself across different scales. Considering that edges play an important role in depth map quality, we optimize an effective regularized objective that operates on the output image space and the input edge space in the random forests. Experiments show the effectiveness and superiority of our method against other techniques with or without aligned RGB information.

  14. Long-range and depth-selective imaging of macroscopic targets using low-coherence and wide-field interferometry (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Choi, Wonshik

    2016-03-01

    With the advancement of 3D display technology, 3D imaging of macroscopic objects has drawn much attention, as it provides the content to display. The most widely used imaging methods include depth cameras, which measure time of flight for depth discrimination, and various structured illumination techniques. However, these existing methods have poor depth resolution, which makes imaging complicated structures difficult. To resolve this issue, we propose an imaging system based on low-coherence interferometry and off-axis digital holographic imaging. By using a light source with a coherence length of 200 µm, we achieved a depth resolution of 100 µm. To map macroscopic objects with this high axial resolution, we installed a pair of prisms in the reference beam path for long-range scanning of the optical path length. Specifically, one prism was fixed in position, while the other was mounted on a translation stage and translated parallel to the first. Due to multiple internal reflections between the two prisms, the overall path length was elongated by a factor of 50. In this way, we could cover a depth range of more than 1 meter. In addition, we employed multiple speckle illuminations and incoherent averaging of the acquired holographic images to reduce specular reflections from the target surface. Using this newly developed system, we imaged targets with multiple layers and demonstrated imaging of targets hidden behind scattering layers. The method was also applied to imaging targets located around a corner.

  15. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on visual features extracted from 2-D images, and therefore often lead to unreliable salient-feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. At the next level, interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database containing complex human-human, human-object, and human-surroundings interactions demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained compared with previous methods.

  16. Depth map generation using a single image sensor with phase masks.

    PubMed

    Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki

    2016-06-13

    Conventional stereo matching systems generate a depth map using two or more digital image sensors, whose high cost and bulk make them difficult to use in small camera systems. To solve this problem, this paper presents a stereo matching system that uses a single image sensor with phase masks for phase-difference auto-focusing. A novel phase mask array pattern is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw-format sensor output. The proposed method computes the depth map in four steps: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
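
    Step (iii), block matching for disparity, can be illustrated with a brute-force sum-of-absolute-differences search — a toy single-scale version, not the paper's hierarchical implementation:

```python
import numpy as np

def block_match(left, right, block=3, max_disp=8):
    """Brute-force SAD block matching: for each left-image pixel, find the
    horizontal shift of the right image that best matches a local block."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

    On a synthetic pair where the right image is the left shifted by two pixels, the interior of the disparity map comes out uniformly 2.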

  17. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images.

    PubMed

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer's own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.

  18. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images

    PubMed Central

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J.

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed. PMID:26447793

  19. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
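
    The variance test that separates focus from off-focus points can be sketched as follows, assuming a stack of samples per reconstructed pixel gathered from the elemental images (array shapes and the threshold are our own illustrative choices):

```python
import numpy as np

def classify_focus(samples, threshold):
    """samples: (n_elemental, H, W) intensities that back-project to the same
    reconstructed 3D point at a given depth. In-focus points agree across
    elemental images (low variance); off-focus points do not."""
    var = samples.var(axis=0)
    return var <= threshold  # True where the depth-image pixel is in focus
```

    The per-pixel variance map is embarrassingly parallel, which is what makes the GPU implementation described above effective.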

  20. The influence of structure depth on image blurring of micrometres-thick specimens in MeV transmission electron imaging.

    PubMed

    Wang, Fang; Sun, Ying; Cao, Meng; Nishi, Ryuji

    2016-04-01

    This study investigates the influence of structure depth on image blurring in micrometres-thick films by experiment and simulation with a conventional transmission electron microscope (TEM). First, ultra-high-voltage electron microscope (ultra-HVEM) images of nanometer gold particles embedded in thick epoxy-resin films were acquired experimentally and compared with simulated images. Then, variations in the image blurring of gold particles at different depths were evaluated by calculating the particle diameter. The results showed that image blurring increased as depth decreased, and this depth dependence was more apparent for thicker specimens. Fortunately, larger particle depth involves less image blurring, even for a 10-μm-thick epoxy-resin film. The dependence of 3D reconstruction quality on the depth of particle structures in thick specimens was revealed by electron tomography. The evolution of image blurring with structure depth is determined mainly by multiple elastic scattering effects. Thick specimens of heavier materials produced more blurring due to a larger lateral spread of electrons after scattering from the structure. Nevertheless, increasing the electron energy to 2 MeV can reduce blurring and produce acceptable image quality for thick specimens in the TEM. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Development of Extended-Depth Swept Source Optical Coherence Tomography for Applications in Ophthalmic Imaging of the Anterior and Posterior Eye

    NASA Astrophysics Data System (ADS)

    Dhalla, Al-Hafeez Zahir

    Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides micron-scale resolution of tissue micro-structure over depth ranges of several millimeters. This imaging technique has had a profound effect on the field of ophthalmology, wherein it has become the standard of care for the diagnosis of many retinal pathologies. Applications of OCT in the anterior eye, as well as for imaging of coronary arteries and the gastro-intestinal tract, have also shown promise, but have not yet achieved widespread clinical use. The usable imaging depth of OCT systems is most often limited by one of three factors: optical attenuation, inherent imaging range, or depth-of-focus. The first of these, optical attenuation, stems from the limitation that OCT only detects singly-scattered light. Thus, beyond a certain penetration depth into turbid media, essentially all of the incident light will have been multiply scattered and can no longer be used for OCT imaging. For many applications (especially retinal imaging), optical attenuation is the most restrictive of the three imaging depth limitations. However, for some applications, especially anterior segment, cardiovascular (catheter-based), and GI (endoscopic) imaging, the usable imaging depth is often limited not by optical attenuation but by the inherent imaging depth of the OCT system. This inherent imaging depth, which is specific to Fourier domain OCT, arises from two factors: sensitivity fall-off and the complex conjugate ambiguity. Finally, due to the trade-off between lateral resolution and axial depth-of-focus inherent in diffractive optical systems, additional depth limitations sometimes arise in either high-lateral-resolution or extended-depth OCT imaging systems. The depth-of-focus limitation is most apparent in applications such as adaptive optics (AO-) OCT imaging of the retina and extended-depth imaging of the ocular anterior segment. In this dissertation, techniques for

  2. Layered compression for high-precision depth data.

    PubMed

    Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen

    2015-12-01

    With the development of depth data acquisition technologies, access to high-precision depth data with more than 8 bits per sample has become much easier, and efficiently representing and compressing high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-bit image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel-domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the LSBs layer remains in 8-bit format after absorbing the quantization error from the MSBs layer. For the LSBs layer, a standard 8-bit image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme achieves real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated by this scheme achieve better performance in view synthesis and gesture recognition applications than conventional coding schemes because of the error control algorithm.
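
    The MSB/LSB partition is a bit-plane split of each 16-bit sample. A minimal sketch of the lossless layering (the paper's error-controllable MSB coding is not reproduced here):

```python
import numpy as np

def split_layers(depth16):
    """Partition a 16-bit depth map into two 8-bit layers: the MSBs layer
    carries the coarse depth distribution, the LSBs layer the fine detail."""
    msb = (depth16 >> 8).astype(np.uint8)
    lsb = (depth16 & 0xFF).astype(np.uint8)
    return msb, lsb

def merge_layers(msb, lsb):
    """Losslessly reassemble the 16-bit depth map from the two layers."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
```

    Each 8-bit layer can then be handed to a standard 8-bit codec, and the round trip is exact in the absence of quantization.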

  3. Profiling defect depth in composite materials using thermal imaging NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2018-04-01

    Sonic infrared (SIR) NDE is a relatively new NDE technology that has been demonstrated as a reliable and sensitive method to detect defects. SIR uses ultrasonic excitation with IR imaging to detect defects and flaws in the structures being inspected. An IR camera captures infrared radiation from the target for a period of time covering the ultrasound pulse; this period may be much longer than the pulse, depending on the defect depth and the thermal properties of the materials. With the increasing deployment of composites in modern aerospace and automobile structures, fast, wide-area, and reliable NDE methods are necessary. Impact damage is one of the major concerns in modern composites: damage can occur at a certain depth without any visual indication on the surface, and defect depth information can influence maintenance decisions. Depth profiling relies on the time delays in the captured image sequence. We present our work on defect depth profiling using the temporal information of IR images. An analytical model is introduced to describe heat diffusion from subsurface defects in composite materials, and depth profiling using peak time is introduced as well.

  4. Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers.

    PubMed

    Kim, Yunhee; Choi, Heejin; Kim, Joohwan; Cho, Seong-Woo; Kim, Youngmin; Park, Gilbae; Lee, Byoungho

    2007-06-20

    A depth-enhanced three-dimensional integral imaging system with electrically variable image planes is proposed. To implement the variable image planes, polymer-dispersed liquid-crystal (PDLC) films and a projector are adopted as a new display system for integral imaging. Since the transparencies of the PDLC films are electrically controllable, we can make each film diffuse the projected light successively at a different depth from the lens array. As a result, the proposed method enables electrical control of the location of the image planes and enhances the depth range. The principle of the proposed method is described, and experimental results are presented.

  5. Choroidal vasculature characteristics based choroid segmentation for enhanced depth imaging optical coherence tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qiang; Niu, Sijie; Yuan, Songtao

    Purpose: In clinical research, it is important to measure choroidal thickness when eyes are affected by various diseases. The main purpose is to automatically segment the choroid in enhanced depth imaging optical coherence tomography (EDI-OCT) images with five-B-scan averaging. Methods: The authors present an automated choroid segmentation method based on choroidal vasculature characteristics for EDI-OCT images with five-B-scan averaging. Considering that the large vessels of the Haller's layer neighbor the choroid-sclera junction (CSJ), the authors measured the intensity ascending distance and a maximum intensity image in the axial direction from a smoothed and normalized EDI-OCT image. Then, based on the generated choroidal vessel image, the authors constructed the CSJ cost and constrained the CSJ search neighborhood. Finally, graph search with smoothness constraints was utilized to obtain the CSJ boundary. Results: Experimental results with 49 images from 10 eyes of 8 normal subjects and 270 images from 57 eyes of 44 patients with several stages of diabetic retinopathy and age-related macular degeneration demonstrate that the proposed method can accurately segment the choroid in EDI-OCT images with five-B-scan averaging. The mean choroid thickness difference and overlap ratio between the proposed method and manual segmentation drawn by experts were −11.43 μm and 86.29%, respectively. Conclusions: Good performance was achieved for normal and pathologic eyes, which proves that the method is effective for automated choroid segmentation of EDI-OCT images with five-B-scan averaging.

  6. Improvement of depth resolution on photoacoustic imaging using multiphoton absorption

    NASA Astrophysics Data System (ADS)

    Yamaoka, Yoshihisa; Fujiwara, Katsuji; Takamatsu, Tetsuro

    2007-07-01

    Commercial imaging systems, such as computed tomography and magnetic resonance imaging, are frequently used as powerful tools for observing structures deep within the human body. However, they cannot precisely visualize structures tens of micrometers in size because they lack the spatial resolution. In this presentation, we propose photoacoustic imaging that uses multiphoton absorption to generate ultrasonic waves as a means of improving depth resolution. Since multiphoton absorption occurs only at the focal point and the employed infrared pulses penetrate deep into living tissue, it enables us to extract characteristic features of structures embedded in living tissue. When nanosecond pulses from a 1064-nm Nd:YAG laser were focused on a Rhodamine B/chloroform solution (absorption peak: 540 nm), the peak intensity of the generated photoacoustic signal was proportional to the square of the input pulse energy. This result shows that photoacoustic signals can be induced by two-photon absorption of infrared nanosecond laser pulses and detected by a commercial low-frequency (MHz) transducer. Furthermore, to evaluate the depth resolution of multiphoton photoacoustic imaging, we investigated the dependence of the photoacoustic signal on depth position using a 1-mm-thick phantom in a water bath. We found that the depth resolution of two-photon photoacoustic imaging (1064 nm) is greater than that of one-photon photoacoustic imaging (532 nm). We conclude that evolving multiphoton photoacoustic imaging technology makes feasible the investigation of biomedical phenomena in deep layers of living tissue.
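
    The quadratic dependence on pulse energy is the standard signature used to distinguish two-photon from one-photon excitation; it can be checked with a log-log slope fit. This is a generic sketch, not the authors' analysis code:

```python
import numpy as np

def power_law_exponent(pulse_energies, pa_signals):
    """Least-squares slope of log(signal) vs. log(energy).
    A slope near 2 indicates two-photon absorption; near 1, one-photon."""
    slope, _ = np.polyfit(np.log(pulse_energies), np.log(pa_signals), 1)
    return slope
```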

  7. Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.

    PubMed

    Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso

    2018-07-01

    The availability of 3D players and displays has increased significantly in recent years. Nonetheless, the amount of 3D content has not grown at the same rate. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity, and then creates a representative of each color-depth image cluster that is used as a prior depth map. The appropriate prior depth map for a given color query image is selected by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster, and in turn the associated prior depth map. Finally, this prior estimate is refined through a segmentation-guided filtering that yields the final depth map. This approach has been tested on two publicly available databases and compared with several state-of-the-art algorithms to demonstrate its efficiency.
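
    The cluster-selection step can be sketched as follows. The paper learns an adaptive combination of descriptors; in this hypothetical stand-in, a plain Euclidean distance between feature vectors plays that role:

```python
import numpy as np

def select_prior_depth(query_feat, rep_feats, rep_depths, k=3):
    """Compare the query image's feature vector against each cluster
    representative; the closest of the k nearest representatives
    determines the cluster, whose prior depth map is returned."""
    dists = np.linalg.norm(np.asarray(rep_feats, dtype=float)
                           - np.asarray(query_feat, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]   # K nearest cluster representatives
    best = int(nearest[0])            # the closest one decides the cluster
    return np.asarray(rep_depths)[best]
```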

  8. Cloud Optical Depth Measured with Ground-Based, Uncooled Infrared Imagers

    NASA Technical Reports Server (NTRS)

    Shaw, Joseph A.; Nugent, Paul W.; Pust, Nathan J.; Redman, Brian J.; Piazzolla, Sabino

    2012-01-01

    Recent advances in uncooled, low-cost, long-wave infrared imagers provide excellent opportunities for remotely deployed ground-based remote sensing systems. However, the use of these imagers in demanding atmospheric sensing applications requires that careful attention be paid to characterizing and calibrating the system. We have developed and are using several versions of the ground-based "Infrared Cloud Imager (ICI)" instrument to measure spatial and temporal statistics of clouds and cloud optical depth or attenuation for both climate research and Earth-space optical communications path characterization. In this paper we summarize the ICI instruments and calibration methodology, then show ICI-derived cloud optical depths that are validated using a dual-polarization cloud lidar system for thin clouds (optical depth of approximately 4 or less).

  9. Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina

    NASA Astrophysics Data System (ADS)

    An, Lin; Shen, Tueng T.; Wang, Ruikang K.

    2011-10-01

    This paper presents comprehensive and depth-resolved retinal microvasculature images of the human retina acquired by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Due to its high flow sensitivity, UHS-OMAG is much more sensitive than the traditional OMAG system to tissue motion caused by involuntary movement of the human eye and head. To mitigate these motion artifacts in the final images, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. This new algorithm demonstrates at least 8 to 25 times higher motion tolerance, critical for the UHS-OMAG system to achieve high-quality retinal microvasculature images. Furthermore, the new UHS-OMAG system employs a high-speed line-scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines per B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two imaging protocols are utilized: the first has a low lateral resolution (16 μm) and a wide field of view (4 × 3 mm2 for a single scan and 7 × 8 mm2 for multiple scans), while the second has a high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm2 for a single scan). The imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.

  10. Extending the fundamental imaging-depth limit of multi-photon microscopy by imaging with photo-activatable fluorophores.

    PubMed

    Chen, Zhixing; Wei, Lu; Zhu, Xinxin; Min, Wei

    2012-08-13

    It is highly desirable to be able to optically probe biological activities deep inside live organisms. By employing a spatially confined excitation via a nonlinear transition, multiphoton fluorescence microscopy has become indispensable for imaging scattering samples. However, as the incident laser power drops exponentially with imaging depth due to scattering loss, the out-of-focus fluorescence eventually overwhelms the in-focus signal. The resulting loss of imaging contrast defines a fundamental imaging-depth limit, which cannot be overcome by increasing excitation intensity. Herein we propose to significantly extend this depth limit by multiphoton activation and imaging (MPAI) of photo-activatable fluorophores. The imaging contrast is drastically improved due to the created disparity of bright-dark quantum states in space. We demonstrate this new principle by both analytical theory and experiments on tissue phantoms labeled with synthetic caged fluorescein dye or genetically encodable photoactivatable GFP.

  11. Depth-Resolved Multispectral Sub-Surface Imaging Using Multifunctional Upconversion Phosphors with Paramagnetic Properties

    PubMed Central

    Ovanesyan, Zaven; Mimun, L. Christopher; Kumar, Gangadharan Ajith; Yust, Brian G.; Dannangoda, Chamath; Martirosyan, Karen S.; Sardar, Dhiraj K.

    2015-01-01

    Molecular imaging is a very promising technique for surgical guidance, but it requires advances in the properties of imaging agents and in the methods used to retrieve data from measured multispectral images. In this article, an upconversion material is introduced for subsurface near-infrared imaging and for recovering the depth of the material embedded below biological tissue. The results confirm a significant correlation between the analytical depth estimate of the material under the tissue and the measured ratio of light emitted by the material at two different wavelengths. Experiments with biological tissue samples demonstrate depth-resolved imaging using the rare-earth-doped multifunctional phosphors. In vitro tests reveal no significant toxicity, and magnetic measurements of the phosphors show that the particles are suitable as magnetic resonance imaging agents. Confocal imaging of fibroblast cells with these phosphors reveals their potential for in vivo imaging. The depth-resolved imaging technique with such phosphors has broad implications for real-time intraoperative surgical guidance. PMID:26322519
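
    The two-wavelength ratio encodes depth because tissue attenuates the two emission bands differently. Under a simple Beer-Lambert model (the attenuation coefficients below are illustrative assumptions, not the paper's values), depth follows from the log of the ratio:

```python
import math

def depth_from_ratio(ratio_measured, ratio_at_surface, mu1, mu2):
    """Beer-Lambert ratiometric depth estimate: emissions at two
    wavelengths are attenuated with coefficients mu1 and mu2 (per unit
    depth), so the measured ratio decays as exp(-(mu1 - mu2) * d).
    Solving for d gives the depth of the embedded phosphor."""
    return math.log(ratio_at_surface / ratio_measured) / (mu1 - mu2)
```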

  12. Particle-Image Velocimeter Having Large Depth of Field

    NASA Technical Reports Server (NTRS)

    Bos, Brent

    2009-01-01

    An instrument that functions mainly as a particle-image velocimeter provides data on the sizes and velocities of flying opaque particles. The instrument is being developed as a means of characterizing fluxes of wind-borne dust particles in the Martian atmosphere. The instrument could also be adapted to terrestrial use in measuring sizes and velocities of opaque particles carried by natural winds and industrial gases. Examples of potential terrestrial applications include monitoring of airborne industrial pollutants and airborne particles in mine shafts. The design of this instrument reflects an observation, made in field research, that airborne dust particles derived from soil and rock are opaque enough to be observable with high-contrast bright-field illumination for highly accurate measurements of sizes and shapes. The instrument includes a source of collimated light coupled to an afocal beam expander and an imaging array of photodetectors. When dust particles travel through the collimated beam, they cast shadows. The shadows are magnified by the beam expander and relayed to the array of photodetectors. Inasmuch as the images captured by the array are of dust-particle shadows rather than of the particles themselves, the depth of field of the instrument can be large: about 11 mm, which is larger than the depths of field of prior particle-image velocimeters. The instrument can resolve, and measure the sizes and velocities of, particles having sizes in the approximate range of 1 to 300 μm. For slowly moving particles, data from two image frames are used to calculate velocities. For rapidly moving particles, image smear lengths from a single frame are used in conjunction with particle-size measurement data to determine velocities.
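
    The two velocity recipes in the last sentences reduce to simple kinematics. A minimal sketch (function names and the smear model are illustrative assumptions):

```python
def particle_velocity(pos_a, pos_b, frame_dt):
    """Slow particle: velocity from its centroid position in two
    consecutive frames (units per second, matching the inputs)."""
    return tuple((b - a) / frame_dt for a, b in zip(pos_a, pos_b))

def smear_velocity(smear_length, particle_size, exposure_time):
    """Fast particle: the shadow smear in one frame is the particle
    size plus the distance travelled during the exposure, so the
    travel distance is smear length minus particle size."""
    return (smear_length - particle_size) / exposure_time
```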

  13. Extended depth of field integral imaging using multi-focus fusion

    NASA Astrophysics Data System (ADS)

    Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua

    2018-03-01

    In this paper, we propose a new method for extending the depth of field in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to capture multi-focus elemental images by sweeping the focal plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion on multi-focus elemental images.
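
    The block-based fusion step can be sketched as follows, assuming the stack is already registered (i.e., generalized). Local variance stands in here for whatever focus measure the authors use:

```python
import numpy as np

def fuse_multifocus(images, block=8):
    """Block-based multi-focus fusion sketch: for each block, keep the
    source image with the highest local variance (a simple focus
    measure), producing an all-in-focus composite."""
    h, w = images[0].shape
    fused = np.zeros((h, w), dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tiles = [np.asarray(im, dtype=float)[y:y + block, x:x + block]
                     for im in images]
            sharpest = int(np.argmax([t.var() for t in tiles]))
            fused[y:y + block, x:x + block] = tiles[sharpest]
    return fused
```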

  14. Direct local building inundation depth determination in 3-D point clouds generated from user-generated flood images

    NASA Astrophysics Data System (ADS)

    Griesbaum, Luisa; Marx, Sabrina; Höfle, Bernhard

    2017-07-01

    In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Particularly pluvial urban floods, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3-D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest as well as an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth is achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracies.

  15. Contour sensitive saliency and depth application in image retargeting

    NASA Astrophysics Data System (ADS)

    Lu, Hongju; Yue, Pengfei; Zhao, Yanhui; Liu, Rui; Fu, Yuanbin; Zheng, Yuanjie; Cui, Jia

    2018-04-01

    Image retargeting requires preserving important information and limiting edge distortion while increasing or decreasing image size. The major existing content-aware methods perform well; however, two problems should be improved: slight distortion at object edges, and structural distortion in non-salient areas. According to psychological theories, people evaluate image quality through multi-level judgments and comparisons between different areas, considering both image content and image structure. This paper proposes a new criterion: structure preservation in non-salient areas. Observation and image analysis show that slight blur generally exists at the edges of objects. This blur feature is used to estimate the depth cue, named the blur depth descriptor, which can be used in saliency computation for balanced image retargeting results. To keep the structural information in non-salient areas, a salient edge map is introduced into the seam carving process instead of field-based saliency computation. The derivative saliency in the x- and y-directions avoids redundant energy seams around salient objects that would cause structural distortion. Comparison experiments between classical approaches and ours demonstrate the feasibility of our algorithm.

  16. Depth resolved hyperspectral imaging spectrometer based on structured light illumination and Fourier transform interferometry

    PubMed Central

    Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C.

    2014-01-01

    A depth-resolved hyperspectral imaging spectrometer can provide depth-resolved imaging in both the spatial and spectral domains. Images acquired through a standard imaging Fourier transform spectrometer lack depth resolution. By post-processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured-light illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens including a fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated by quantifying spectra from 3D-resolved features in biological specimens. The system has demonstrated a depth resolution of 1.8 μm and a spectral resolution of 7 nm. PMID:25360367
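
    The HiLo idea combines two images per plane: high spatial frequencies of the uniformly illuminated image are inherently sectioned, while low frequencies are kept only where the structured pattern retains contrast (i.e., in focus). The sketch below is a heavily simplified illustration of that decomposition; the filters, weighting, and parameters are assumptions, not the published algorithm:

```python
import numpy as np

def box_blur(img, size=5):
    """Separable box filter used as a crude low-pass (edge pixels are
    attenuated by the 'same'-mode convolution; acceptable for a sketch)."""
    k = np.ones(size) / size
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def hilo(uniform, structured, size=5, eta=1.0):
    """Toy HiLo combination: high-pass of the uniform image plus the
    low-pass weighted by local contrast of the structured image."""
    uniform = np.asarray(uniform, dtype=float)
    structured = np.asarray(structured, dtype=float)
    lo_u = box_blur(uniform, size)
    hi = uniform - lo_u                     # sectioned high-frequency part
    ratio = structured / (uniform + 1e-9)   # pattern contrast survives in focus
    local_contrast = np.abs(ratio - box_blur(ratio, size))
    weight = box_blur(local_contrast, size)
    return hi + eta * weight * lo_u
```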

  17. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    PubMed

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating perception of depth rely solely on one type of depth cue based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues natural stereoscopic images were used in this study. Using slow cortical potentials and source localization we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility to separate the processing of different depth cues.

  18. Chromatic confocal microscopy for multi-depth imaging of epithelial tissue

    PubMed Central

    Olsovsky, Cory; Shelton, Ryan; Carrasco-Zevallos, Oscar; Applegate, Brian E.; Maitland, Kristen C.

    2013-01-01

    We present a novel chromatic confocal microscope capable of volumetric reflectance imaging of microstructure in non-transparent tissue. Our design takes advantage of the chromatic aberration of aspheric lenses that are otherwise well corrected. Strong chromatic aberration, generated by multiple aspheres, longitudinally disperses supercontinuum light onto the sample. The backscattered light detected with a spectrometer is therefore wavelength encoded and each spectrum corresponds to a line image. This approach obviates the need for traditional axial mechanical scanning techniques that are difficult to implement for endoscopy and susceptible to motion artifact. A wavelength range of 590-775 nm yielded a >150 µm imaging depth with ~3 µm axial resolution. The system was further demonstrated by capturing volumetric images of buccal mucosa. We believe these represent the first microstructural images in non-transparent biological tissue using chromatic confocal microscopy that exhibit long imaging depth while maintaining acceptable resolution for resolving cell morphology. Miniaturization of this optical system could bring enhanced speed and accuracy to endomicroscopic in vivo volumetric imaging of epithelial tissue. PMID:23667789

  19. Photoacoustics and speed-of-sound dual mode imaging with a long depth-of-field by using annular ultrasound array.

    PubMed

    Ding, Qiuning; Tao, Chao; Liu, Xiaojun

    2017-03-20

    Speed of sound and optical absorption reflect the structure and function of tissues from different aspects. A dual-mode microscopy system based on a concentric annular ultrasound array is proposed to simultaneously acquire long depth-of-field images of the speed of sound and optical absorption of inhomogeneous samples. First, the speed of sound is decoded from the signal delay between the elements of the annular array. The measured speed of sound can not only be used as an image contrast, but also improve the resolution and spatial-localization accuracy of the photoacoustic image in acoustically inhomogeneous media. Second, benefitting from the dynamic focusing of the annular array and the measured speed of sound, an advanced acoustic-resolution photoacoustic microscopy with precise positioning and a long depth of field is achieved. The performance of the dual-mode imaging system has been experimentally examined using a custom-made annular array. The proposed dual-mode microscopy may be significant for monitoring physiological and pathological processes.
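
    Decoding the speed of sound from inter-element delays reduces to geometry: elements at different radii see different path lengths to an on-axis source. A simplified point-source sketch (the real system calibrates over many elements; this two-element version is an assumption):

```python
import math

def speed_of_sound(r_inner, r_outer, source_depth, delay):
    """Speed of sound from the arrival-time difference between two
    annular elements at radii r_inner and r_outer for a photoacoustic
    source at `source_depth` on the array axis."""
    path_diff = (math.hypot(r_outer, source_depth)
                 - math.hypot(r_inner, source_depth))
    return path_diff / delay
```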

  20. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.

  1. Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor

    NASA Astrophysics Data System (ADS)

    Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.

    2018-04-01

    An RGB-D camera captures depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera; then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, as the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, according to the registration parameters after ICP, the 3D scene from the RGB images can be registered well to the 3D scene from the depth images, and the fused point cloud can be obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrate the feasibility of the proposed method.

  2. All-near-infrared multiphoton microscopy interrogates intact tissues at deeper imaging depths than conventional single- and two-photon near-infrared excitation microscopes

    PubMed Central

    Sarder, Pinaki; Yazdanfar, Siavash; Akers, Walter J.; Tang, Rui; Sudlow, Gail P.; Egbulefu, Christopher

    2013-01-01

    The era of molecular medicine has ushered in the development of microscopic methods that can report molecular processes in thick tissues with high spatial resolution. A commonality in deep-tissue microscopy is the use of near-infrared (NIR) lasers with single- or multiphoton excitation. However, the relationship between different NIR excitation microscopic techniques and achievable imaging depths in tissue has not been established. We compared these depth limits for three NIR excitation techniques: NIR single-photon confocal microscopy (NIR SPCM), NIR multiphoton excitation with visible detection (NIR/VIS MPM), and all-NIR multiphoton excitation with NIR detection (NIR/NIR MPM). Homologous cyanine dyes provided the fluorescence. Intact kidneys were harvested after administration of kidney-clearing cyanine dyes in mice. NIR SPCM and NIR/VIS MPM achieved a similar maximum imaging depth of ∼100 μm. NIR/NIR MPM enabled a greater than fivefold imaging depth (>500 μm) in the harvested kidneys. Although NIR/NIR MPM used 1550-nm excitation, where water absorption is relatively high, cell viability and histology studies demonstrate that the laser did not induce photothermal damage at the low laser powers used for kidney imaging. This study provides guidance on the imaging depth capabilities of NIR excitation-based microscopic techniques and reveals the potential to multiplex information using these platforms. PMID:24150231

  3. Fast processing of microscopic images using object-based extended depth of field.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades

    2016-12-22

    Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie at different depths of field, necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depth of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good-contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast, followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all objects in focus were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time.
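
    The object-based idea, restricting the expensive per-pixel comparison to foreground regions, can be illustrated with a toy version. The threshold-based foreground mask and Laplacian sharpness measure below are stand-ins for the paper's object-region and good-contrast modules:

```python
import numpy as np

def laplacian_abs(img):
    """Discrete Laplacian magnitude as a per-pixel sharpness measure."""
    return np.abs(4 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                          - np.roll(img, 1, 1) - np.roll(img, -1, 1))

def oedof(stack, fg_threshold=10.0):
    """Object-based EDoF sketch: only foreground pixels (bright in at
    least one frame) are compared across the focal stack; background
    pixels are simply copied from the first frame, saving work."""
    stack = np.asarray(stack, dtype=float)
    foreground = stack.max(axis=0) > fg_threshold
    sharpness = np.stack([laplacian_abs(f) for f in stack])
    best = np.argmax(sharpness, axis=0)   # sharpest frame per pixel
    out = stack[0].copy()
    r, c = np.nonzero(foreground)
    out[r, c] = stack[best[r, c], r, c]
    return out
```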

  4. Depth-resolved incoherent and coherent wide-field high-content imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    So, Peter T.

    2016-03-01

    Recent advances in depth-resolved wide-field imaging techniques have enabled many high-throughput applications in biology and medicine. Depth-resolved imaging of incoherent signals can be readily accomplished with structured light illumination or nonlinear temporal focusing. The integration of these high-throughput systems with novel spectroscopic resolving elements further enables high-content information extraction. We will introduce a novel near-common-path interferometer and demonstrate its uses in toxicology and cancer biology applications. The extension of incoherent depth-resolved wide-field imaging to coherent modalities is non-trivial. Here, we will cover recent advances in wide-field 3D-resolved mapping of refractive index, absorbance, and vibronic components in biological specimens.

  5. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimal depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of the cameras' zoom lenses are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning the stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.

  6. Deep learning-based depth estimation from a synthetic endoscopy image training set

    NASA Astrophysics Data System (ADS)

    Mahmood, Faisal; Durr, Nicholas J.

    2018-03-01

    Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through an endoscopic colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, in part due to the lack of depth information and contrast of the surface of the colon. Estimating depth using conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can easily be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since there is limited availability of colonoscopy images with ground-truth depth maps and colon texture is highly patient-specific, we generated training images using a synthetic, texture-free colon phantom to train our models. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated Computer Aided Detection (CAD) to assist in identifying lesions.

  7. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging

    PubMed Central

    Manley, Suliana

    2015-01-01

    Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term `wobble`, results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific, lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope’s pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample. PMID:26600467

  8. Very high frame rate volumetric integration of depth images on mobile devices.

    PubMed

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, to give users freedom of movement and instantaneous reconstruction feedback, nevertheless remains challenging. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system achieves frame rates of up to 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
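
    The underlying voxel block hashing idea can be sketched as follows (a toy illustration, not the authors' optimised GPU implementation): world space is divided into small voxel blocks, only blocks near the observed surface are ever allocated, and blocks are located through a spatial hash.

```python
def block_hash(bx, by, bz, table_size=1 << 20):
    """Spatial hash over block coordinates (constants from the commonly
    used Teschner et al. hash; exact values vary by implementation)."""
    return ((bx * 73856093) ^ (by * 19349669) ^ (bz * 83492791)) % table_size

class VoxelBlockMap:
    """Minimal sketch: 8x8x8 TSDF blocks allocated on demand."""
    BLOCK = 8

    def __init__(self):
        self.blocks = {}  # (bx, by, bz) -> flat TSDF array

    def voxel(self, x, y, z):
        """Return the block key and flat in-block index for a voxel."""
        key = (x // self.BLOCK, y // self.BLOCK, z // self.BLOCK)
        if key not in self.blocks:
            # lazily allocate, initialised to the truncation value
            self.blocks[key] = [1.0] * self.BLOCK ** 3
        lx, ly, lz = x % self.BLOCK, y % self.BLOCK, z % self.BLOCK
        return key, (lz * self.BLOCK + ly) * self.BLOCK + lx

vmap = VoxelBlockMap()
key, idx = vmap.voxel(17, 3, 9)
print(key, idx)  # (2, 0, 1) 89
```

    In a real pipeline the Python dictionary would be replaced by the hash table above, resolving collisions explicitly, so that lookup and allocation run in parallel on the GPU.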

  9. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the brain's strongest depth cue, namely retinal disparity. This can be achieved by capturing images with two horizontally separated cameras. Objects at different depths will be projected with different horizontal displacements on the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
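
    The depth-to-disparity mapping that such a characterization builds on can be sketched for a rectified camera pair (baseline and focal length values below are illustrative):

```python
def disparity_px(depth_m, baseline_m, focal_px):
    """Horizontal disparity (pixels) of a point at a given depth for a
    rectified stereo pair: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

# nearer objects produce larger disparity, the cue the brain fuses as depth
print(disparity_px(2.0, 0.065, 1000.0))  # 32.5
print(disparity_px(8.0, 0.065, 1000.0))  # 8.125
```

    A "depth transfer curve" in this spirit would compare the disparity a capture system actually records against this ideal relation across a range of known depths.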

  10. Design of high-performance adaptive objective lens with large optical depth scanning range for ultrabroad near infrared microscopic imaging

    PubMed Central

    Lan, Gongpu; Mauger, Thomas F.; Li, Guoqiang

    2015-01-01

    We report on the theory and design of an adaptive objective lens for ultra-broadband near infrared light imaging with a large dynamic optical depth scanning range, realized by using an embedded tunable lens. Such a lens can find wide application in deep tissue biomedical imaging systems, such as confocal microscopy, optical coherence tomography (OCT), and two-photon microscopy, both in vivo and ex vivo. This design is based on, but not limited to, a home-made prototype of a liquid-filled membrane lens with a clear aperture of 8 mm and a thickness of 2.55 mm to 3.18 mm. It is beneficial to have an adaptive objective lens whose depth scanning range exceeds its focal length zoom range, since this keeps the magnification of the whole system, the numerical aperture (NA), the field of view (FOV), and the resolution more consistent. To achieve this goal, a systematic theory is presented, for the first time to our knowledge, by inserting the varifocal lens between a front and a back solid lens group. The designed objective has a compact size (10 mm diameter and 15 mm length), an ultrabroad working bandwidth (760 nm - 920 nm), a large depth scanning range (7.36 mm in air, 1.533 times the focal length zoom range of 4.8 mm in air), and a FOV of about 1 mm × 1 mm. Diffraction-limited performance is achieved within this ultrabroad bandwidth through the full scanning depth (the resolution is 2.22 μm - 2.81 μm, calculated at a wavelength of 800 nm with an NA of 0.214 - 0.171). The chromatic focal shift is within the depth of focus (field). The chromatic difference in distortion is nearly zero and the maximum distortion is less than 0.05%. PMID:26417508

  11. Depth-section imaging of swine kidney by spectrally encoded microscopy

    NASA Astrophysics Data System (ADS)

    Liao, Jiuling; Gao, Wanrong

    2016-10-01

    The kidneys are essential regulatory organs whose main function is to regulate the balance of electrolytes in the blood, along with maintaining pH homeostasis. The study of the microscopic structure of the kidney will help identify kidney diseases associated with specific renal histology changes. Spectrally encoded microscopy (SEM) is a new reflectance microscopic imaging technique in which a grating is used to illuminate different positions along a line on the sample with different wavelengths, reducing system size and imaging time. In this paper, a SEM device is described which is based on a superluminescent diode source and a home-built spectrometer. The lateral resolution was measured by imaging a USAF resolution target. The axial response curve was obtained by scanning a mirror axially through the focal plane. To test the feasibility of using SEM for depth-section imaging of excised swine kidney tissue, images of the samples were acquired by scanning the sample at 10 μm per step along the depth direction. Architectural features of the kidney tissue, including glomeruli and blood vessels, could be clearly visualized in the SEM images. Results from this study suggest that SEM may be useful for locating regions where kidney disease or cancer is likely.

  12. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure, where a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the spatial resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternately. Most of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.

  13. Pre-stack depth Migration imaging of the Hellenic Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hussni, S. G.; Becel, A.; Schenini, L.; Laigle, M.; Dessa, J. X.; Galve, A.; Vitard, C.

    2017-12-01

    In 365 AD, a major M>8 tsunamigenic earthquake occurred along the southwestern segment of the Hellenic subduction zone. Although this is the largest seismic event ever reported in Europe, some fundamental questions remain regarding the deep geometry of the interplate megathrust, as well as of other faults within the overriding plate potentially connected to it. The main objective here is to image those deep structures, whose depths range between 15 and 45 km, using leading-edge seismic reflection equipment. To this end, a 210-km-long multichannel seismic profile was acquired with the 8-km-long streamer and the 6600 cu. in. source of R/V Marcus Langseth. This was realized at the end of 2015, during the SISMED cruise. The survey was made possible through a collective effort gathering several labs (Géoazur, LDEO, ISTEP, ENS-Paris, EOST, LDO, Dpt. Geosciences of Pau Univ.). A preliminary processing sequence was first applied using the Geovation software of CGG, which yielded a post-stack time migration of the collected data, as well as a pre-stack time migration obtained with a model derived from velocity analyses. Using Paradigm software, a pre-stack depth migration was subsequently carried out. This step required some tuning of the pre-processing sequence in order to improve multiple removal and noise suppression, and to better reveal the true geometry of reflectors at depth. This iteration of pre-processing included the use of the parabolic Radon transform, FK filtering, and time-variant band-pass filtering. An initial velocity model was built using depth-converted RMS velocities obtained from SISMED data for the sedimentary layer, complemented at depth with a smooth version of the tomographic velocities derived from coincident wide-angle data acquired during the 2012 ULYSSE survey. We then performed a Kirchhoff pre-stack depth migration with traveltimes calculated using the eikonal equation. The velocity model was then tuned through residual velocity analyses to flatten reflections in common-image gathers.

  14. High resolution axicon-based endoscopic FD OCT imaging with a large depth range

    NASA Astrophysics Data System (ADS)

    Lee, Kye-Sung; Hurley, William; Deegan, John; Dean, Scott; Rolland, Jannick P.

    2010-02-01

    Endoscopic imaging in tubular structures, such as the tracheobronchial tree, could benefit from imaging optics with an extended depth of focus (DOF). Such optics could accommodate the varying sizes of tubular structures across patients and along the tree within a single patient. In this paper, we demonstrate an extended DOF without sacrificing resolution, showing rotational images of biological tubular samples with 2.5 μm axial resolution, 10 μm lateral resolution, and > 4 mm depth range using a custom-designed probe.

  15. Three-dimensional anterior segment imaging in patients with type 1 Boston Keratoprosthesis with switchable full depth range swept source optical coherence tomography

    PubMed Central

    Poddar, Raju; Cortés, Dennis E.; Werner, John S.; Mannis, Mark J.

    2013-01-01

    A high-speed (100 kHz A-scan rate) complex conjugate resolved 1 μm swept source optical coherence tomography (SS-OCT) system using coherence revival of the light source is suitable for dense three-dimensional (3-D) imaging of the anterior segment. The short acquisition time helps to minimize the influence of motion artifacts. The extended depth range of the SS-OCT system allows topographic analysis of clinically relevant images of the entire depth of the anterior segment of the eye. Patients with the type 1 Boston Keratoprosthesis (KPro) require evaluation of the full anterior segment depth. Current commercially available OCT systems are not suitable for this application due to limited acquisition speed, resolution, and axial imaging range. Moreover, most commonly used research-grade and some clinical OCT systems implement a commercially available swept source (Axsun) that offers only a 3.7 mm imaging range (in air) in its standard configuration. We describe the implementation of a common swept laser with a built-in k-clock to allow phase-stable imaging in both a low range and a high range, 3.7 and 11.5 mm in air, respectively, without the need to build an external MZI k-clock. As a result, the 3-D morphology of the KPro position with respect to the surrounding tissue could be investigated in vivo both at high resolution and over a large depth range to achieve noninvasive and precise evaluation of the success of the surgical procedure. PMID:23912759

  16. Monte Carlo simulation of the spatial resolution and depth sensitivity of two-dimensional optical imaging of the brain

    PubMed Central

    Tian, Peifang; Devor, Anna; Sakadžić, Sava; Dale, Anders M.; Boas, David A.

    2011-01-01

    Absorption- or fluorescence-based two-dimensional (2-D) optical imaging is widely employed in functional brain imaging. The image is a weighted sum of the real signal from the tissue at different depths. This weighting function is defined as "depth sensitivity." Characterizing depth sensitivity and spatial resolution is important for better interpreting functional imaging data. However, due to light scattering and absorption in biological tissues, our knowledge of these is incomplete. We use Monte Carlo simulations to carry out a systematic study of spatial resolution and depth sensitivity for 2-D optical imaging methods with configurations typically encountered in functional brain imaging. We found the following: (i) the spatial resolution is <200 μm for numerical aperture (NA) ≤ 0.2 or focal plane depth ≤ 300 μm. (ii) More than 97% of the signal comes from the top 500 μm of the tissue. (iii) For activated columns with lateral size larger than the spatial resolution, changing the NA and focal plane depth does not affect depth sensitivity. (iv) For either smaller columns or large columns covered by surface vessels, increasing the NA and/or focal plane depth may improve depth sensitivity at deeper layers. Our results provide valuable guidance for the optimization of optical imaging systems and data interpretation. PMID:21280912

  17. Predefined Redundant Dictionary for Effective Depth Maps Representation

    NASA Astrophysics Data System (ADS)

    Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi

    2016-01-01

    The multi-view video plus depth (MVD) video format consists of two components, texture and depth map, whose combination enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires compression for storage and especially for transmission. Conventional codecs are efficient for texture image compression but not for the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at object boundaries. Preserving these characteristics is important to enable high quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments confirm the effectiveness of the approach at producing sparse representations, and its competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and to offer an advantage over the ongoing 3D High Efficiency Video Coding (3D-HEVC) compression standard, particularly at medium and high bitrates.
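
    Greedy selection over a dictionary, as referenced above, is typically orthogonal matching pursuit (OMP). A minimal sketch of OMP (using a toy identity dictionary for illustration, not the paper's mixed dictionaries):

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom
    (column of D, assumed unit-norm) most correlated with the residual,
    then re-fit coefficients over all selected atoms by least squares."""
    residual = y.astype(float).copy()
    idx = []
    for _ in range(n_atoms):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    return idx, coef

# toy dictionary: identity atoms, so the 2-sparse code is recovered exactly
D = np.eye(5)
y = np.array([0.0, 3.0, 0.0, -2.0, 0.0])
idx, coef = omp(D, y, 2)
print(sorted(idx))  # [1, 3]
```

    For depth maps the gain comes from choosing atoms that match both smooth regions and sharp edges, which is where the mixed dictionaries of the paper come in.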

  18. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Conditional Random Field Model.

    PubMed

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information from the image, a continuous pairwise CRF is then established and used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.

  19. Analysis of airborne imaging spectrometer data for the Ruby Mountains, Montana, by use of absorption-band-depth images

    NASA Technical Reports Server (NTRS)

    Brickey, David W.; Crowley, James K.; Rowan, Lawrence C.

    1987-01-01

    Airborne Imaging Spectrometer-1 (AIS-1) data were obtained for an area of amphibolite-grade metamorphic rocks that has moderate rangeland vegetation cover. Although rock exposures are sparse and patchy at this site, soils are visible through the vegetation and typically comprise 20 to 30 percent of the surface area. Channel-averaged band-depth images were produced for diagnostic soil and rock absorption bands. Sets of three such images were combined to produce color-composite band-depth images. This relatively simple approach did not require extensive calibration efforts and was effective for discerning a number of spectrally distinctive rocks and soils, including soils having high talc concentrations. The results show that the high spectral and spatial resolution of AIS-1 and future sensors holds considerable promise for mapping mineral variations in soil, even in moderately vegetated areas.
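
    Band-depth images are conventionally computed from continuum-removed spectra: depth = 1 - R_band / R_continuum, with the continuum fitted as a straight line between band shoulders. A per-pixel sketch (wavelengths and shoulder positions are illustrative, not from the paper):

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, center, right):
    """Absorption-band depth relative to a straight-line continuum fit
    between two shoulder wavelengths: depth = 1 - R_band / R_continuum."""
    w = np.asarray(wavelengths, float)
    r = np.asarray(reflectance, float)
    rl = np.interp(left, w, r)                        # left shoulder
    rr = np.interp(right, w, r)                       # right shoulder
    r_cont = np.interp(center, [left, right], [rl, rr])  # continuum at band
    r_band = np.interp(center, w, r)                  # measured at band
    return 1.0 - r_band / r_cont

# toy spectrum with a 20%-deep absorption at 2.31 um (talc has a band there)
w = [2.25, 2.31, 2.37]
r = [0.50, 0.40, 0.50]
print(round(band_depth(w, r, 2.25, 2.31, 2.37), 2))  # 0.2
```

    Mapping this value over every pixel for three different absorption bands, then assigning the three maps to R, G and B, yields a color-composite band-depth image of the kind described above.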

  20. Comparison of the depth of an optic nerve head obtained using stereo retinal images and HRT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Aoyama, Akira; Hara, Takeshi; Kakogawa, Masakatsu; Fujita, Hiroshi; Yamamoto, Tetsuya

    2007-03-01

    The analysis of the optic nerve head (ONH) in the retinal fundus is important for the early detection of glaucoma. In this study, we investigate an automatic reconstruction method for producing the 3-D structure of the ONH from a stereo retinal image pair; the depth value of the ONH measured by using this method was compared with measurements from the Heidelberg Retina Tomograph (HRT). We propose a technique to obtain the depth value from the stereo image pair, which consists of four main steps: (1) cutout of the ONH region from the retinal images, (2) registration of the stereo pair, (3) disparity detection, and (4) depth calculation. To evaluate the accuracy of this technique, the shape of the depression of an eyeball phantom that had a circular dent, used to model the ONH, was generated from a stereo image pair and compared with physically measured values. The measurement results obtained with the eyeball phantom were approximately consistent. The depth of the ONH obtained using the stereo retinal images was in accordance with the results obtained using the HRT. These results indicate that stereo retinal images could be useful for assessing the depth of the ONH for the diagnosis of glaucoma.

  1. Penetration depth measurement of near-infrared hyperspectral imaging light for milk powder

    USDA-ARS?s Scientific Manuscript database

    The increasingly common application of near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying penetration depth of NIR hyperspectral imaging ligh...

  2. Pixel-based parametric source depth map for Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Altabella, L.; Boschi, F.; Spinelli, A. E.

    2016-01-01

    Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (e.g., 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multi-view methods. Difficulty in the convergence of 3D algorithms can discourage the use of this technique for obtaining source depth and intensity information. For these reasons, we developed a faster, corrected 2D approach based on multispectral acquisitions to obtain the source depth and its intensity using pixel-based fitting of the source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method for obtaining a parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for the MC simulations and 5-6% for the experimental data. Using this method we are able to obtain reliable information about the depth of a Cerenkov luminescence source with a simple and flexible procedure.

  3. An energy- and depth-dependent model for x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallas, Brandon D.; Boswell, Jonathan S.; Badano, Aldo

    In this paper, we model an x-ray imaging system, paying special attention to the energy- and depth-dependent characteristics of the inputs and interactions: x rays are polychromatic; the interaction depth and the conversion to optical photons are energy dependent; optical scattering and the collection efficiency depend on the depth of interaction. The model we construct is a random function of the point process that begins with the distribution of x rays incident on the phosphor and ends with optical photons being detected by the active area of detector pixels to form an image. We show how the point-process representation can be used to calculate the characteristic statistics of the model. We then simulate a Gd2O2S:Tb phosphor, estimate its characteristic statistics, and proceed with a signal-detection experiment to investigate the impact of the pixel fill factor on detecting spherical calcifications (the signal). The two extremes possible from this experiment are that SNR² does not change with fill factor or changes in proportion to fill factor. In our results, the impact of fill factor is between these extremes, and depends on the diameter of the signal.

  4. Application of simple all-sky imagers for the estimation of aerosol optical depth

    NASA Astrophysics Data System (ADS)

    Kazantzidis, Andreas; Tzoumanikas, Panagiotis; Nikitidou, Efterpi; Salamalikis, Vasileios; Wilbert, Stefan; Prahl, Christoph

    2017-06-01

    Aerosol optical depth is a key atmospheric constituent for direct normal irradiance calculations at concentrating solar power plants. However, aerosol optical depth is typically not measured at the solar plants for financial reasons. With the recent introduction of all-sky imagers for the nowcasting of direct normal irradiance at the plants, a new instrument is available which can be used for the determination of aerosol optical depth at different wavelengths. In this study, we rely on red, green and blue intensities/radiances and on calculations of the saturated area around the Sun, both derived from all-sky images taken with a low-cost surveillance camera at the Plataforma Solar de Almeria, Spain. The aerosol optical depth at 440, 500 and 675 nm is calculated. The results are compared with collocated aerosol optical depth measurements; the mean/median difference and standard deviation are less than 0.01 and 0.03, respectively, at all wavelengths.

  5. A method to generate soft shadows using a layered depth image and warping.

    PubMed

    Im, Yeon-Ho; Han, Chang-Young; Kim, Lee-Sup

    2005-01-01

    We present an image-based method for propagating area light illumination through a Layered Depth Image (LDI) to generate soft shadows from opaque and nonrefractive transparent objects. In our approach, using the depth peeling technique, we render an LDI from a reference light sample on a planar light source. The light illumination of all pixels in the LDI is then determined for all the other sample points via warping, an image-based rendering technique, which approximates ray tracing in our method. We use an image-warping equation and McMillan's warp-ordering algorithm to find the intersections between rays and polygons and the order of those intersections. Experiments for opaque and nonrefractive transparent objects are presented. The results indicate that our approach generates soft shadows quickly and effectively. Advantages and disadvantages of the proposed method are also discussed.

  6. Planarity constrained multi-view depth map reconstruction for urban scenes

    NASA Astrophysics Data System (ADS)

    Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie

    2018-05-01

    Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes, where apparent man-made regular shapes may be present. To address this challenge, this paper proposes a planarity-constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes is first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms popular multi-view depth map reconstruction methods, with an accuracy two times better on the aerial datasets, and achieves an outcome comparable to the state of the art for ground images. As expected, PMVD is able to preserve the planarity of piecewise-flat structures in urban scenes and restore the edges in depth-discontinuous areas.

  7. RGB-D depth-map restoration using smooth depth neighborhood supports

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Xue, Haoyang; Yu, Zhongjie; Wu, Qiang; Yang, Jie

    2015-05-01

    A method to restore the depth map of an RGB-D image using smooth depth neighborhood (SDN) supports is presented. The SDN supports are computed from the color image corresponding to the depth map. Compared with the most widely used square supports, the proposed SDN supports capture the local structure of the object well: only pixels with similar depth values are allowed to be included in the support. We combine our SDN supports with the joint bilateral filter (JBF) to form the SDN-JBF and use it to restore depth maps. Experimental results show that our SDN-JBF can not only rectify misaligned depth pixels but also preserve sharp depth discontinuities.
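
    For reference, the baseline joint bilateral filter over a depth map with a registered color/intensity guide can be sketched as follows (this is the plain JBF, not the authors' SDN-restricted variant; window size and sigmas are illustrative):

```python
import numpy as np

def joint_bilateral_depth(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Joint bilateral filter: smooth the depth map with spatial weights
    plus range weights taken from the guide image, so that depth edges
    aligned with guide edges are preserved."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            dy, dx = np.mgrid[y0:y1, x0:x1]
            ws = np.exp(-((dy - y) ** 2 + (dx - x) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-((guide[y0:y1, x0:x1] - guide[y, x]) ** 2)
                        / (2 * sigma_r ** 2))
            wgt = ws * wr
            out[y, x] = np.sum(wgt * depth[y0:y1, x0:x1]) / np.sum(wgt)
    return out

# a sharp edge in the guide keeps the depth discontinuity sharp
guide = np.zeros((6, 6)); guide[:, 3:] = 255.0
depth = np.zeros((6, 6)); depth[:, 3:] = 100.0
restored = joint_bilateral_depth(depth, guide)
print(round(restored[3, 1], 1), round(restored[3, 4], 1))
```

    The SDN idea adds a further restriction on top of this: pixels whose depths differ too much are excluded from the support entirely, rather than merely down-weighted by the guide.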

  8. Theoretical performance model for single image depth from defocus.

    PubMed

    Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme

    2014-12-01

    In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.

  9. A 20 Mfps high frame-depth CMOS burst-mode imager with low power in-pixel NMOS-only passive amplifier

    NASA Astrophysics Data System (ADS)

    Wu, L.; San Segundo Bello, D.; Coppejans, P.; Craninckx, J.; Wambacq, P.; Borremans, J.

    2017-02-01

    This paper presents a 20 Mfps 32 × 84 pixel CMOS burst-mode imager featuring a high frame depth with a passive in-pixel amplifier. Compared to CCD alternatives, CMOS burst-mode imagers are attractive for their low power consumption and integration of circuitry such as ADCs. Due to storage capacitor size and its noise limitations, CMOS burst-mode imagers usually suffer from a lower frame depth than CCD implementations. In order to capture fast transitions over a longer time span, an in-pixel CDS technique has been adopted to reduce the required memory cells for each frame by half. Moreover, integrated with in-pixel CDS, an in-pixel NMOS-only passive amplifier alleviates the kTC noise requirements of the memory bank, allowing the usage of smaller capacitors. Specifically, a dense 108-cell MOS memory bank (10 fF/cell) has been implemented inside a 30 μm pitch pixel, with an area of 25 × 30 μm² occupied by the memory bank. Applying in-pixel CDS and amplification yields an improvement of about 4× in frame depth per pixel area. With the amplifier's gain of 3.3, an FD input-referred RMS noise of 1 mV is achieved at 20 Mfps operation. While the amplification is done without burning DC current, including the pixel source follower biasing, the full pixel consumes 10 μA at a 3.3 V supply voltage at full speed. The chip has been fabricated in imec's 130 nm CMOS CIS technology.

  10. Multispectral near-infrared reflectance and transillumination imaging of occlusal carious lesions: variations in lesion contrast with lesion depth

    NASA Astrophysics Data System (ADS)

    Simon, Jacob C.; Curtis, Donald A.; Darling, Cynthia L.; Fried, Daniel

    2018-02-01

    In vivo and in vitro studies have demonstrated that near-infrared (NIR) light at λ = 1300-1700 nm can be used to acquire high contrast images of enamel demineralization without interference from stains. The objective of this study was to determine whether a relationship exists between the NIR image contrast of occlusal lesions and the depth of the lesion. Extracted teeth with varying amounts of natural occlusal decay were measured using a multispectral-multimodal NIR imaging system which captures λ = 1300 nm occlusal transillumination and λ = 1500-1700 nm cross-polarized reflectance images. Image analysis software was used to calculate the lesion contrast detected in both images from matched positions of each imaging modality. Samples were serially sectioned across the lesion with a precision saw, and polarized light microscopy was used to measure the respective lesion depth relative to the dentinoenamel junction. Lesion contrast measured from NIR cross-polarized reflectance images positively correlated (p < 0.05) with increasing lesion depth, and a statistically significant difference between inner enamel and dentin lesions was observed. The lateral width of pit and fissure lesions measured in both NIR cross-polarized reflectance and NIR transillumination positively correlated with lesion depth.

  11. Removing the depth-degeneracy in optical frequency domain imaging with frequency shifting

    PubMed Central

    Yun, S. H.; Tearney, G. J.; de Boer, J. F.; Bouma, B. E.

    2009-01-01

    A novel technique using an acousto-optic frequency shifter in optical frequency domain imaging (OFDI) is presented. The frequency shift eliminates the ambiguity between positive and negative differential delays, effectively doubling the interferometric ranging depth while avoiding image cross-talk. A signal processing algorithm is demonstrated to accommodate nonlinearity in the tuning slope of the wavelength-swept OFDI laser source. PMID:19484034

  12. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

We propose a hybrid method for stereo disparity estimation by combining block- and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of the image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected components analysis, and then determining the boundaries' disparities using a sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method for depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8× for 250 stereo video frames (4,096 × 2,304).
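The SAD cost function at the core of the matching step can be sketched on a single toy scanline pair (an illustrative sketch only; the function and array names are hypothetical, and real implementations compare 2-D blocks across full images):

```python
def sad(a, b):
    # Sum of absolute differences between two equal-length pixel blocks
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity(left_row, right_row, x, block=3, max_d=4):
    # Disparity of the block starting at column x of the left scanline:
    # the horizontal shift d that minimizes SAD against the right scanline
    ref = left_row[x:x + block]
    costs = {d: sad(ref, right_row[x - d:x - d + block])
             for d in range(max_d + 1) if x - d >= 0}
    return min(costs, key=costs.get)

# Toy scanline pair: the right view is the left view shifted by 2 px,
# so the true disparity is 2 wherever the block is textured.
left = [0, 0, 10, 50, 90, 50, 10, 0, 0, 0]
right = [10, 50, 90, 50, 10, 0, 0, 0, 0, 0]
assert disparity(left, right, x=3) == 2
```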

  13. Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens.

    PubMed

    Shen, Xin; Javidi, Bahram

    2018-03-01

We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with an enhanced depth range for the 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images with various positions to the micro lens array. Based on resolution priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system can significantly enhance the depth range of a 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and mitigates the vergence-accommodation conflict of a traditional augmented reality display.

  14. Action recognition using multi-scale histograms of oriented gradients based depth motion trail Images

    NASA Astrophysics Data System (ADS)

    Wang, Guanxi; Tie, Yun; Qi, Lin

    2017-07-01

In this paper, we propose a novel approach based on depth maps, computing Multi-Scale Histograms of Oriented Gradients (MSHOG) from sequences of depth maps to recognize actions. Each depth frame in a depth video sequence is projected onto three orthogonal Cartesian planes. Under each projection view, the absolute difference between two consecutive projected maps is accumulated through the depth video sequence to form a Depth Motion Trail Image (DMTI). The MSHOG is then computed from these maps for the representation of an action. In addition, we apply L2-Regularized Collaborative Representation (L2-CRC) to classify actions. We evaluate the proposed approach on the MSR Action3D and MSRGesture3D datasets. Promising experimental results demonstrate the effectiveness of the proposed method.
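The accumulation step that forms a Depth Motion Trail Image can be sketched as follows (a minimal illustration of accumulating absolute differences of consecutive projected maps; the projection onto the three Cartesian planes and the MSHOG descriptor are omitted):

```python
def depth_motion_trail(frames):
    # Accumulate |D_{t+1} - D_t| over a sequence of projected depth maps,
    # so pixels that move often build up a strong motion trail
    h, w = len(frames[0]), len(frames[0][0])
    dmti = [[0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                dmti[i][j] += abs(cur[i][j] - prev[i][j])
    return dmti

# Three 2x2 depth frames: only the top-left pixel moves (1 -> 3 -> 2),
# so its trail accumulates |3-1| + |2-3| = 3 while static pixels stay 0.
frames = [[[1, 5], [5, 5]],
          [[3, 5], [5, 5]],
          [[2, 5], [5, 5]]]
assert depth_motion_trail(frames) == [[3, 0], [0, 0]]
```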

  15. Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.

    PubMed

    Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun

    2017-07-01

In recent years, taking photos and capturing videos with mobile devices has become increasingly popular. Emerging applications based on depth reconstruction have been developed, such as Google Lens Blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it becomes harder still given the unstable image quality and uncontrolled scene conditions of the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. In particular, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues, and to constrain the stereo matching at finer scales. The proposed framework has been evaluated on challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction.

  16. Self-interference fluorescence microscopy with three-phase detection for depth-resolved confocal epi-fluorescence imaging.

    PubMed

    Braaf, Boy; de Boer, Johannes F

    2017-03-20

Three-dimensional confocal fluorescence imaging of in vivo tissues is challenging due to sample motion and limited imaging speeds. In this paper a novel method is therefore presented for scanning confocal epi-fluorescence microscopy with instantaneous depth-sensing based on self-interference fluorescence microscopy (SIFM). A tabletop epi-fluorescence SIFM setup was constructed with an annular phase plate in the emission path to create a spectral self-interference signal whose phase depends on the axial position of a fluorescent sample. A Mach-Zehnder interferometer based on a 3 × 3 fiber-coupler was developed for a sensitive phase analysis of the SIFM signal with three photon-counting detectors instead of a spectrometer. The Mach-Zehnder interferometer created three intensity signals that alternately oscillated as a function of the SIFM spectral phase and therefore encoded directly for the axial sample position. Controlled axial translation of fluorescent microsphere layers showed a linear dependence of the SIFM spectral phase on sample depth over axial image ranges of 500 µm and 80 µm (3.9× the Rayleigh range) for 4× and 10× microscope objectives, respectively. In addition, SIFM was in good agreement with optical coherence tomography depth measurements on a sample with indocyanine-green-filled capillaries placed at multiple depths. High-resolution SIFM imaging applications are demonstrated for fluorescence angiography on a dye-filled capillary blood vessel phantom and for autofluorescence imaging on an ex vivo fly eye.

  17. Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images.

    PubMed

    Tian, Jing; Marziliano, Pina; Baskaran, Mani; Tun, Tin Aung; Aung, Tin

    2013-03-01

Enhanced Depth Imaging (EDI) optical coherence tomography (OCT) provides high-definition cross-sectional images of the choroid in vivo, and hence is used in many clinical studies. However, quantification of the choroid depends on manual labeling of two boundaries, Bruch's membrane and the choroidal-scleral interface. This labeling process is tedious and subject to inter-observer differences; hence, automatic segmentation of the choroid layer is highly desirable. In this paper, we present a fast and accurate algorithm that segments the choroid automatically. Bruch's membrane is detected by searching for the pixel with the largest gradient value above the retinal pigment epithelium (RPE), and the choroidal-scleral interface is delineated by finding the shortest path through the graph formed by valley pixels using Dijkstra's algorithm. Experiments comparing the automatic segmentation results with manual labelings were conducted on 45 EDI-OCT images; the average Dice's coefficient is 90.5%, which shows good consistency of the algorithm with the manual labelings. The processing time for each image is about 1.25 seconds.
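The shortest-path step can be sketched with a textbook Dijkstra search over a small cost grid, restricted to column-by-column moves as is typical for boundary extraction (an illustrative sketch, not the authors' implementation; here low pixel cost stands in for the "valley" pixels of the abstract):

```python
import heapq

def shortest_boundary(cost):
    # Dijkstra over a pixel grid: minimum-cost path from any pixel in the
    # leftmost column to any pixel in the rightmost column, moving one
    # column right per step (to the same row or a vertical neighbor).
    h, w = len(cost), len(cost[0])
    dist = {(r, 0): cost[r][0] for r in range(h)}
    pq = [(d, rc) for rc, d in dist.items()]
    heapq.heapify(pq)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        if c == w - 1:
            return d  # first pop in the last column is the optimum
        for dr in (-1, 0, 1):
            nr, nc = r + dr, c + 1
            if 0 <= nr < h:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

# Cheap pixels (cost 1) zig-zag through the grid; the best path costs 3.
assert shortest_boundary([[1, 9, 1], [9, 1, 9], [1, 9, 1]]) == 3
```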

  18. Underwater image enhancement through depth estimation based on random forest

    NASA Astrophysics Data System (ADS)

    Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han

    2017-11-01

    Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.
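Once a transmission estimate is available, a common way to compensate the color cast is to invert the simplified underwater image-formation model; this per-pixel sketch assumes that standard model (with a hypothetical `airlight` value and a floor `t_min` to avoid amplifying noise) and is not necessarily the authors' exact compensation step:

```python
def restore(intensity, airlight, t, t_min=0.1):
    # Invert the simplified image-formation model
    #   I = J * t + A * (1 - t)   =>   J = (I - A) / max(t, t_min) + A
    # where I is the observed pixel, J the scene radiance, A the airlight,
    # and t the transmission along the line of sight.
    t = max(t, t_min)
    return (intensity - airlight) / t + airlight

# Forward-simulate one pixel, then recover the original radiance:
J, A, t = 0.8, 0.9, 0.5
I = J * t + A * (1 - t)
assert abs(restore(I, A, t) - J) < 1e-12
```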

  19. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    PubMed

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
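The typical-error statistic reported here is conventionally computed from test-retest difference scores as sd(diff)/√2; a minimal sketch with made-up illustrative data (not the study's measurements):

```python
import math
import statistics

def typical_error(trial1, trial2):
    # Within-subject typical error from a test-retest pair:
    # the standard deviation of the difference scores divided by sqrt(2),
    # since each difference contains the error of two measurements.
    diffs = [a - b for a, b in zip(trial1, trial2)]
    return statistics.stdev(diffs) / math.sqrt(2)

# Hypothetical repeat volume measurements (litres) for four subjects:
t1 = [10.0, 12.0, 14.0, 16.0]
t2 = [10.2, 11.8, 14.1, 16.3]
assert 0.15 < typical_error(t1, t2) < 0.16
```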

  20. Utility of spatial frequency domain imaging (SFDI) and laser speckle imaging (LSI) to non-invasively diagnose burn depth in a porcine model

    PubMed Central

    Burmeister, David M.; Ponticorvo, Adrien; Yang, Bruce; Becerra, Sandra C.; Choi, Bernard; Durkin, Anthony J.; Christy, Robert J.

    2015-01-01

Surgical intervention of second degree burns is often delayed because of the difficulty in visual diagnosis, which increases the risk of scarring and infection. Non-invasive metrics have shown promise in accurately assessing burn depth. Here, we examine the use of spatial frequency domain imaging (SFDI) and laser speckle imaging (LSI) for predicting burn depth. Contact burn wounds of increasing severity were created on the dorsum of a Yorkshire pig, and wounds were imaged with SFDI/LSI starting immediately after the burn and then daily for the next 4 days. In addition, on each day the burn wounds were biopsied for histological analysis of burn depth, defined by collagen coagulation, apoptosis, and adnexal/vascular necrosis. Histological results show that collagen coagulation progressed from day 0 to day 1, and then stabilized. Imaging the burn wounds with these non-invasive techniques produced metrics that correlate with different predictors of burn depth: collagen coagulation and apoptosis correlated with the SFDI scattering coefficient parameter (μs′), and adnexal/vascular necrosis on the day of burn correlated with blood flow determined by LSI. Therefore, incorporation of the SFDI scattering coefficient and blood flow determined by LSI may provide an algorithm for accurate assessment of the severity of burn wounds in real time. PMID:26138371

  1. Depth-encoded all-fiber swept source polarization sensitive OCT

    PubMed Central

    Wang, Zhao; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Lee, ByungKun; Choi, WooJhon; Potsaid, Benjamin; Liu, Jonathan; Jayaraman, Vijaysekhar; Cable, Alex; Kraus, Martin F.; Liang, Kaicheng; Hornegger, Joachim; Fujimoto, James G.

    2014-01-01

Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of conventional OCT and can assess depth-resolved tissue birefringence in addition to intensity. Most existing PS-OCT systems are relatively complex and their clinical translation remains difficult. We present a simple and robust all-fiber PS-OCT system based on swept source technology and polarization depth-encoding. Polarization multiplexing was achieved using a polarization maintaining fiber. Polarization sensitive signals were detected using fiber based polarization beam splitters, and polarization controllers were used to remove the polarization ambiguity. A simplified post-processing algorithm was proposed for speckle noise reduction, relaxing the demand for phase stability. We demonstrate system designs for both ophthalmic and catheter-based PS-OCT. For ophthalmic imaging, we used an optical clock frequency doubling method to extend the imaging range of a commercially available short cavity light source to improve polarization depth-encoding. For catheter based imaging, we demonstrated 200 kHz PS-OCT imaging using a MEMS-tunable vertical cavity surface emitting laser (VCSEL) and a high speed micromotor imaging catheter. The system was demonstrated in human retina, finger and lip imaging, as well as ex vivo swine esophagus and cardiovascular imaging. The all-fiber PS-OCT is easier to implement and maintain compared to previous PS-OCT systems and can be more easily translated to clinical applications due to its robust design. PMID:25401008

  2. Performance evaluation of extended depth of field microscopy in the presence of spherical aberration and noise

    NASA Astrophysics Data System (ADS)

    King, Sharon V.; Yuan, Shuai; Preza, Chrysanthe

    2018-03-01

Effectiveness of extended depth of field microscopy (EDFM) implementation with wavefront encoding methods is reduced by depth-induced spherical aberration (SA) because this approach relies on a defined point spread function (PSF). Evaluation of the engineered PSF's robustness to SA, when a specific phase mask design is used, is presented in terms of the final restored image quality. Synthetic intermediate images were generated using selected generalized cubic and cubic phase mask designs. Experimental intermediate images were acquired using the same phase mask designs projected from a liquid crystal spatial light modulator. Intermediate images were restored using the penalized space-invariant expectation maximization and the regularized linear least squares algorithms. In the presence of depth-induced SA, systems characterized by radially symmetric PSFs, coupled with model-based computational methods, achieve microscope imaging performance with fewer deviations in structural fidelity (e.g., artifacts) in simulation and experiment, and 50% more accurate positioning of 1-μm beads at 10-μm depth in simulation, than systems with radially asymmetric PSFs. Despite a drop in the signal-to-noise ratio after processing, EDFM is shown to achieve the conventional resolution limit when a model-based reconstruction algorithm with appropriate regularization is used. These trends are also found in images of fixed fluorescently labeled brine shrimp, not adjacent to the coverslip, and fluorescently labeled mitochondria in live cells.

  3. 3D sorghum reconstructions from depth images identify QTL regulating shoot architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mccormick, Ryan F.; Truong, Sandra K.; Mullet, John E.

Dissecting the genetic basis of complex traits is aided by frequent and nondestructive measurements. Advances in range imaging technologies enable the rapid acquisition of three-dimensional (3D) data from an imaged scene. A depth camera was used to acquire images of sorghum (Sorghum bicolor), an important grain, forage, and bioenergy crop, at multiple developmental time points from a greenhouse-grown recombinant inbred line population. A semiautomated software pipeline was developed and used to generate segmented, 3D plant reconstructions from the images. Automated measurements made from 3D plant reconstructions identified quantitative trait loci for standard measures of shoot architecture, such as shoot height, leaf angle, and leaf length, and for novel composite traits, such as shoot compactness. The phenotypic variability associated with some of the quantitative trait loci displayed differences in temporal prevalence; for example, alleles closely linked with the sorghum Dwarf3 gene, an auxin transporter and pleiotropic regulator of both leaf inclination angle and shoot height, influence leaf angle prior to an effect on shoot height. Furthermore, variability in composite phenotypes that measure overall shoot architecture, such as shoot compactness, is regulated by loci underlying component phenotypes like leaf angle. As such, depth imaging is an economical and rapid method to acquire shoot architecture phenotypes in agriculturally important plants like sorghum to study the genetic basis of complex traits.

  4. 3D sorghum reconstructions from depth images identify QTL regulating shoot architecture

    DOE PAGES

    Mccormick, Ryan F.; Truong, Sandra K.; Mullet, John E.

    2016-08-15

Dissecting the genetic basis of complex traits is aided by frequent and nondestructive measurements. Advances in range imaging technologies enable the rapid acquisition of three-dimensional (3D) data from an imaged scene. A depth camera was used to acquire images of sorghum (Sorghum bicolor), an important grain, forage, and bioenergy crop, at multiple developmental time points from a greenhouse-grown recombinant inbred line population. A semiautomated software pipeline was developed and used to generate segmented, 3D plant reconstructions from the images. Automated measurements made from 3D plant reconstructions identified quantitative trait loci for standard measures of shoot architecture, such as shoot height, leaf angle, and leaf length, and for novel composite traits, such as shoot compactness. The phenotypic variability associated with some of the quantitative trait loci displayed differences in temporal prevalence; for example, alleles closely linked with the sorghum Dwarf3 gene, an auxin transporter and pleiotropic regulator of both leaf inclination angle and shoot height, influence leaf angle prior to an effect on shoot height. Furthermore, variability in composite phenotypes that measure overall shoot architecture, such as shoot compactness, is regulated by loci underlying component phenotypes like leaf angle. As such, depth imaging is an economical and rapid method to acquire shoot architecture phenotypes in agriculturally important plants like sorghum to study the genetic basis of complex traits.

  5. Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates

    NASA Astrophysics Data System (ADS)

    Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.

    2010-04-01

Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwelling times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time towards an object is extended beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser. A cyclic, fast-Fourier supported analysis algorithm is used to search for the pattern within return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented. Numerical simulations are performed to investigate the relative merits of the technique.
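The pattern-search idea can be sketched with a circular cross-correlation against a pseudo-random sequence: because a long random code has a sharp autocorrelation peak, the absolute delay is recovered even when it spans many clock periods (a toy sketch with a bipolar code and a noise-free return; the paper uses an FFT-supported cyclic search rather than this brute-force loop):

```python
import random

def circular_xcorr_peak(pattern, returns):
    # Lag of the circular cross-correlation peak between the transmitted
    # pattern and the recorded return sequence
    n = len(pattern)
    best_lag, best_score = 0, float("-inf")
    for lag in range(n):
        score = sum(pattern[(t - lag) % n] * returns[t] for t in range(n))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

random.seed(7)
n = 128
pattern = [random.choice((-1, 1)) for _ in range(n)]  # pseudo-random bipolar code
true_delay = 93  # many clock periods: ambiguous for a short periodic trigger
returns = [pattern[(t - true_delay) % n] for t in range(n)]
assert circular_xcorr_peak(pattern, returns) == true_delay
```

An FFT implementation of the same correlation reduces the cost from O(n²) to O(n log n), which is what makes long codes and kilometer-scale unambiguous ranges practical.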

  6. Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models

    PubMed Central

    Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir

    2016-01-01

    Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. PMID:27775570

  7. Moho Depth Variations in the Northeastern North China Craton Revealed by Receiver Function Imaging

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, L.; Yao, H.; Fang, L.

    2016-12-01

The North China Craton (NCC), one of the oldest cratons in the world, has attracted wide attention in Earth science for decades because of the unusual Mesozoic destruction of its cratonic lithosphere. Understanding the deep processes and mechanism of this craton destruction demands detailed knowledge of the deep structure of the region. In this study, we used two years of teleseismic receiver function data from the North China Seismic Array, consisting of 200 broadband stations deployed in the northeastern NCC, to image the Moho undulation of the region. A 2-D wave-equation-based poststack depth migration method was employed to construct the structural images along 19 profiles, and a pseudo-3D crustal velocity model of the region based on previous ambient noise tomography and receiver function studies was adopted in the migration. We considered both the Ps and PpPs phases; in some cases we also conducted PpSs+PsPs migration using different back-azimuth ranges of the data, and calculated the travel times of all the considered phases to constrain the Moho depths. By combining the structural images along the 19 profiles, we obtained a high-resolution Moho depth map beneath the northeastern NCC. Our results are broadly consistent with previous active-source studies [http://www.craton.cn/data] and show a good correlation of Moho depth with geological and tectonic features. Generally, the Moho depths are distinctly different on opposite sides of the North-South Gravity Lineament. The Moho in the west is deeper than 40 km and rises rapidly from 40 km to 30 km beneath the Taihang Mountain Range in the middle. To the east, in the Bohai Bay Basin, the Moho further shallows to 30-26 km depth and undulates by 3 km, coinciding well with the depressions and uplifts inside the basin. The Moho depth beneath the Yin-Yan Mountains in the north gradually decreases from 42 km in the west to 25 km in the east, varying much more smoothly than that to the south.

  8. Adding polarimetric imaging to depth map using improved light field camera 2.0 structure

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Yang, Yi; Du, Shaojun; Cao, Yu

    2017-06-01

Polarization imaging plays an important role in various fields, especially skylight navigation and target identification, where the imaging system is typically required to have high resolution, broad band, and a single-lens structure. This paper describes such an imaging system based on the light field 2.0 camera structure, which can calculate the polarization state and the depth from a reference plane for every object point within a single shot. The structure, comprising a modified main lens, a multi-quadrant polarizer, a honeycomb-like micro lens array, and a high-resolution CCD, is equivalent to an "eye array" with 3 or more polarizing "glasses" in front of each "eye". Depth can therefore be calculated by matching the relative offset of corresponding patches on neighboring "eyes", and the polarization state from their relative intensity differences; the two resolutions are approximately equal. An application to navigation under a clear sky shows that the method has high accuracy and strong robustness.

  9. A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion

    NASA Astrophysics Data System (ADS)

    Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen

    2017-09-01

In recent years, 3D movies have attracted increasing attention because of their immersive stereoscopic experience. However, 3D content remains scarce, so estimating depth information from video for 2D-to-3D conversion is increasingly important. In this paper, we present a novel algorithm to estimate depth information from a video via scene classification. To obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape, close-up, and linear perspective. For landscape scenes, the image is divided into blocks and depth values are assigned using the relative height cue. For close-up scenes, a saliency-based method is adopted to enhance the foreground, and it is combined with a global depth gradient to generate the final depth map. For linear perspective scenes, vanishing-line detection locates the vanishing point, which is regarded as the farthest point from the viewer and assigned the deepest depth value; every other point is then assigned a depth according to its distance from the vanishing point. Finally, after bilateral filtering, depth-image-based rendering generates the stereoscopic virtual views. Experiments show that the proposed algorithm achieves realistic 3D effects and satisfactory results, with perception scores of the anaglyph images between 6.8 and 7.8.

  10. In vivo volumetric depth-resolved vasculature imaging of human limbus and sclera with 1 μm swept source phase-variance optical coherence angiography

    NASA Astrophysics Data System (ADS)

    Poddar, Raju; Zawadzki, Robert J.; Cortés, Dennis E.; Mannis, Mark J.; Werner, John S.

    2015-06-01

We present in vivo volumetric depth-resolved vasculature images of the anterior segment of the human eye acquired with phase-variance based motion contrast using a high-speed (100 kHz, 10⁵ A-scans/s) swept source optical coherence tomography system (SSOCT). High phase stability SSOCT imaging was achieved by using a computationally efficient phase stabilization approach. The human corneo-scleral junction and sclera were imaged with swept source phase-variance optical coherence angiography and compared with slit lamp images from the same eyes of normal subjects. Different features of the rich vascular system in the conjunctiva and episclera were visualized and described. This system can be used as a potential tool for ophthalmological research to determine changes in the outflow system, which may be helpful for identification of abnormalities that lead to glaucoma.

  11. Extended depth measurement for a Stokes sample imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Dixon, Alexander W.; Taberner, Andrew J.; Nash, Martyn P.; Nielsen, Poul M. F.

    2018-02-01

A non-destructive imaging technique is required for quantifying the anisotropic and heterogeneous structural arrangement of collagen in soft tissue membranes, such as bovine pericardium, which are used in the construction of bioprosthetic heart valves. Previously, our group developed a Stokes imaging polarimeter that measures the linear birefringence of samples in a transmission arrangement. With this device, linear retardance and optic axis orientation can be estimated over a sample using simple vector algebra on Stokes vectors in the Poincaré sphere. However, this method is limited to a single path retardation of a half-wave, limiting the thickness of samples that can be imaged. The polarimeter has been extended to allow illumination with narrow-bandwidth light of controllable wavelength through achromatic lenses and polarization optics. We can now take advantage of the wavelength dependence of relative retardation to remove ambiguities that arise when samples have a single path retardation of a half-wave to full-wave. This effectively doubles the imaging depth of the method. The method has been validated using films of cellulose of varied thickness, and applied to samples of bovine pericardium.

  12. Synthetic light-needle photoacoustic microscopy for extended depth of field (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yang, Jiamiao; Gong, Lei; Xu, Xiao; Hai, Pengfei; Suzuki, Yuta; Wang, Lihong V.

    2017-03-01

Photoacoustic microscopy (PAM) has been extensively applied in biomedical studies because of its ability to visualize tissue morphology and physiology in vivo in three dimensions (3D). However, conventional PAM suffers from a rapidly decreasing resolution away from the focal plane because of the limited depth of focus of an objective lens, which inevitably degrades volumetric imaging quality. Here, we propose a novel method to synthesize an ultra-long light needle that extends a microscope's depth of focus beyond its physical limitations using wavefront engineering. Furthermore, it enables an improved lateral resolution that exceeds the diffraction limit of the objective lens. The virtual light needle can be flexibly synthesized anywhere throughout the imaging volume without mechanical scanning. Benefiting from these advantages, we developed synthetic light-needle photoacoustic microscopy (SLN-PAM) to achieve extended depth of field (DOF), sub-diffraction and motionless volumetric imaging. The DOF of our SLN-PAM system is up to 1800 µm, a more than 30-fold improvement over that of conventional PAM. Our system also achieves a lateral resolution of 1.8 µm (characterized at 532 nm with a 0.1 NA objective), about 50% finer than the Rayleigh diffraction limit. Its superior imaging performance was demonstrated by 3D imaging of both non-biological and biological samples. This extended-DOF, sub-diffraction and motionless 3D PAM will open up new opportunities for potential biomedical applications.

  13. Modeling the depth-sectioning effect in reflection-mode dynamic speckle-field interferometric microscopy

    PubMed Central

    Zhou, Renjie; Jin, Di; Hosseini, Poorya; Singh, Vijay Raj; Kim, Yang-hyo; Kuang, Cuifang; Dasari, Ramachandra R.; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    Unlike most optical coherence microscopy (OCM) systems, dynamic speckle-field interferometric microscopy (DSIM) achieves depth sectioning through the spatial-coherence gating effect. Under high numerical aperture (NA) speckle-field illumination, our previous experiments demonstrated less than 1 μm depth resolution in reflection-mode DSIM, while doubling the diffraction-limited resolution as under structured illumination. However, there has been no physical model that rigorously describes the speckle imaging process, in particular one explaining the sectioning effect under high illumination and imaging NA settings in DSIM. In this paper, we develop such a model based on diffraction tomography theory and speckle statistics. Using this model, we calculate the system response function, which is then used to obtain the depth resolution limit in reflection-mode DSIM. The theoretically calculated depth resolution limit is in excellent agreement with experimental results. We envision that our physical model will not only help in understanding the imaging process in DSIM, but also enable better design of such systems for depth-resolved measurements in biological cells and tissues. PMID:28085800

  14. Ambient Seismic Imaging of Hydraulically Active Fractures at km Depths

    NASA Astrophysics Data System (ADS)

    Malin, P. E.; Sicking, C.

    2017-12-01

    Streaming Depth Images (SDI) of ambient seismic signals, using numerous densely distributed receivers, have revealed their connection to hydraulically active fractures at 0.5 to 5 km depths. Key for this type of imaging is very high-fold stacking over both multiple receivers and periods of a few hours. Also important is suppression of waveforms from fixed, repeating sources such as pumps, generators, and traffic. A typical surface-based ambient SDI survey would use a 3D seismic receiver grid. It would have 1,000 to 4,000 uniformly distributed receivers at a density of 50/km² over the target. If acquired by borehole receivers buried 100 m deep, the density can be dropped by an order of magnitude. We show examples of the acquisition and signal processing scenarios used to produce the ambient images. (Sicking et al., SEG Interpretation, Nov 2017.) While the fracture-fluid source connection of SDI has been verified by drilling and various types of hydraulic tests, the precise nature of the signal's origin is not clear. At the current level of observation, the signals do not have identifiable phases, but can be focused using P wave velocities. Suggested sources are resonances of pressure fluctuations in the fractures, or small, continuous slips on fracture surfaces. In either case, it appears that the driving mechanism is tectonic strain in an inherently unstable crust. Solid earth tides may enhance these strains. We illustrate the value of the ambient SDI method in its industrial application by showing case histories from energy industry and carbon-capture-sequestration projects. These include ambient images taken before, during, and after hydraulic treatments in unconventional reservoirs. The results show not only locations of active fractures, but also their time responses to stimulation and production. Time-lapse ambient imaging can forecast and track events such as well interferences and production changes that can result from nearby treatments.

  15. Double peacock eye optical element for extended focal depth imaging with ophthalmic applications.

    PubMed

    Romero, Lenny A; Millán, María S; Jaroszewicz, Zbigniew; Kolodziejczyk, Andrzej

    2012-04-01

    The aged human eye is commonly affected by presbyopia, and therefore, it gradually loses its capability to form images of objects placed at different distances. Extended depth of focus (EDOF) imaging elements can overcome this inability, despite the introduction of a certain amount of aberration. This paper evaluates the EDOF imaging performance of the so-called peacock eye phase diffractive element, which focuses an incident plane wave into a segment of the optical axis and explores the element's potential use for ophthalmic presbyopia compensation optics. Two designs of the element are analyzed: the single peacock eye, which produces one focal segment along the axis, and the double peacock eye, which is a spatially multiplexed element that produces two focal segments with partial overlapping along the axis. The performances of the peacock eye elements are compared with those of multifocal lenses through numerical simulations as well as optical experiments in the image space. The results demonstrate that the peacock eye elements form sharper images along the focal segment than the multifocal lenses and, therefore, are more suitable for presbyopia compensation. The extreme points of the depth of field in the object space, which represent the remote and the near object points, have been experimentally obtained for both the single and the double peacock eye optical elements. The double peacock eye element has better imaging quality for relatively short and intermediate distances than the single peacock eye, whereas the latter seems better for far distance vision.

  16. Dedicated phantom to study susceptibility artifacts caused by depth electrode in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Garcia, J.; Hidalgo, S. S.; Solis, S. E.; Vazquez, D.; Nuñez, J.; Rodriguez, A. O.

    2012-10-01

    Susceptibility artifacts can degrade magnetic resonance image quality. Electrodes are an important source of artifacts when performing brain imaging. A dedicated phantom was built using a depth electrode to study the susceptibility effects under different pulse sequences. T2-weighted images were acquired with both gradient- and spin-echo sequences. The spin-echo sequences can significantly attenuate the susceptibility artifacts, allowing a straightforward visualization of the regions surrounding the electrode.

  17. Depth of focus enhancement of a modified imaging quasi-fractal zone plate.

    PubMed

    Zhang, Qinqin; Wang, Jingang; Wang, Mingwei; Bu, Jing; Zhu, Siwei; Gao, Bruce Z; Yuan, Xiaocong

    2012-10-01

    We propose a new parameter w for optimization of foci distribution of conventional fractal zone plates (FZPs) with a greater depth of focus (DOF) in imaging. Numerical simulations of DOF distribution on axis directions indicate that the values of DOF can be extended by a factor of 1.5 or more by a modified quasi-FZP. In experiments, we employ a simple object-lens-image-plane arrangement to pick up images at various positions within the DOF of a conventional FZP and a quasi-FZP, respectively. Experimental results show that the parameter w improves foci distribution of FZPs in good agreement with theoretical predictions.

  18. Depth of focus enhancement of a modified imaging quasi-fractal zone plate

    PubMed Central

    Zhang, Qinqin; Wang, Jingang; Wang, Mingwei; Bu, Jing; Zhu, Siwei; Gao, Bruce Z.; Yuan, Xiaocong

    2013-01-01

    We propose a new parameter w for optimization of foci distribution of conventional fractal zone plates (FZPs) with a greater depth of focus (DOF) in imaging. Numerical simulations of DOF distribution on axis directions indicate that the values of DOF can be extended by a factor of 1.5 or more by a modified quasi-FZP. In experiments, we employ a simple object–lens–image-plane arrangement to pick up images at various positions within the DOF of a conventional FZP and a quasi-FZP, respectively. Experimental results show that the parameter w improves foci distribution of FZPs in good agreement with theoretical predictions. PMID:24285908

  19. Full-Depth Coadds of the WISE and First-Year NEOWISE-Reactivation Images

    DOE PAGES

    Meisner, Aaron M.; Lang, Dustin; Schlegel, David J.

    2017-01-03

    The Near Earth Object Wide-field Infrared Survey Explorer (NEOWISE) Reactivation mission released data from its first full year of observations in 2015. This data set includes ~2.5 million exposures in each of W1 and W2, effectively doubling the amount of WISE imaging available at 3.4 μm and 4.6 μm relative to the AllWISE release. In this paper, we have created the first ever full-sky set of coadds combining all publicly available W1 and W2 exposures from both the AllWISE and NEOWISE-Reactivation (NEOWISER) mission phases. We employ an adaptation of the unWISE image coaddition framework, which preserves the native WISE angular resolution and is optimized for forced photometry. By incorporating two additional scans of the entire sky, we not only improve the W1/W2 depths, but also largely eliminate time-dependent artifacts such as off-axis scattered moonlight. We anticipate that our new coadds will have a broad range of applications, including target selection for upcoming spectroscopic cosmology surveys, identification of distant/massive galaxy clusters, and discovery of high-redshift quasars. In particular, our full-depth AllWISE+NEOWISER coadds will be an important input for the Dark Energy Spectroscopic Instrument selection of luminous red galaxy and quasar targets. Our full-depth W1/W2 coadds are already in use within the DECam Legacy Survey (DECaLS) and Mayall z-band Legacy Survey (MzLS) reduction pipelines. Finally, much more work still remains in order to fully leverage NEOWISER imaging for astrophysical applications beyond the solar system.

  20. Experimental study on the sensitive depth of backwards detected light in turbid media.

    PubMed

    Zhang, Yunyao; Huang, Liqing; Zhang, Ning; Tian, Heng; Zhu, Jingping

    2018-05-28

    In the recent past, optical spectroscopy and imaging methods for biomedical diagnosis and target enhancing have been widely researched. The challenge in improving the performance of these methods is to know well the sensitive depth of the backwards detected light. Former research mainly employed Monte Carlo simulations to statistically describe the sensitive depth. An experimental method for investigating the sensitive depth was developed and is presented here. An absorption plate was employed to remove all the light that may have travelled deeper than the plate, leaving only the light which cannot reach the plate. By measuring the received backwards light intensity and the depth between the probe and the plate, the light intensity distribution along the depth dimension can be obtained. The depth with the maximum light intensity was recorded as the sensitive depth. The experimental results showed that the maximum light intensity was nearly the same over a short depth range. It could be deduced that the sensitive depth was a range, rather than a single depth. This sensitive depth range, as well as its central depth, increased consistently with increasing source-detection distance. Relationships between sensitive depth and optical properties were also investigated. The reduced scattering coefficient affects the central sensitive depth and the range of the sensitive depth more than the absorption coefficient does, so the two cannot simply be summed into a reduced extinction coefficient to describe the sensitive depth. This study provides an efficient method for the investigation of sensitive depth. It may facilitate the development of spectroscopy and imaging techniques for biomedical diagnosis and underwater imaging.
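    The finding that the sensitive depth is a band rather than a point lends itself to a simple numerical illustration: given an intensity-versus-depth curve, report every depth whose intensity lies within a small fraction of the maximum, plus the band's center. The function name, tolerance, and synthetic flat-topped curve below are illustrative assumptions, not the authors' processing.

```python
import numpy as np

def sensitive_depth_range(depths, intensities, tol=0.02):
    """Return ((min, max), center) of all depths whose intensity is
    within a fractional tolerance `tol` of the maximum, reflecting the
    observation that the sensitive depth is a range, not a single depth.
    Illustrative sketch only."""
    i_max = intensities.max()
    mask = intensities >= (1 - tol) * i_max
    band = depths[mask]
    return (band.min(), band.max()), band.mean()

# synthetic intensity-vs-depth curve with a flat top around 1.2 mm
depths = np.linspace(0.0, 3.0, 61)            # mm
inten = np.exp(-((depths - 1.2) / 0.8)**4)    # flat-topped peak
(lo, hi), center = sensitive_depth_range(depths, inten)
print(lo, hi, round(center, 2))
```

    With a sharply peaked curve the band collapses toward a single depth; the flat top here mimics the near-constant maximum the experiment observed.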

  1. Real-time depth processing for embedded platforms

    NASA Astrophysics Data System (ADS)

    Rahnama, Oscar; Makarov, Aleksej; Torr, Philip

    2017-05-01

    Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (i.e. LiDAR, infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications, which are often constrained by power consumption, obtaining accurate results in real time is a challenge. We demonstrate a computationally and memory-efficient implementation of a stereo block-matching algorithm on an FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster-scan readout of modern digital image sensors.
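    As a rough illustration of the block-matching family of algorithms (not the paper's FPGA pipeline), a minimal sum-of-absolute-differences matcher can be written in a few lines of NumPy; `max_disp` and `block` are hypothetical parameters:

```python
import numpy as np

def block_match(left, right, max_disp=16, block=5):
    """Minimal SAD stereo block matcher: for each left-image pixel,
    slide a patch across candidate disparities in the right image and
    keep the disparity with the lowest sum of absolute differences."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y-r:y+r+1, x-r:x+r+1].astype(np.int32)
            costs = [np.abs(patch - right[y-r:y+r+1, x-d-r:x-d+r+1].astype(np.int32)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# right image is the left image shifted left by 3 px: expect disparity ~3
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (20, 40), dtype=np.uint8)
right = np.roll(left, -3, axis=1)   # a left pixel at x appears at x-3 in right
d = block_match(left, right, max_disp=8, block=5)
print(int(np.median(d[2:-2, 10:-5])))
```

    A hardware implementation streams this same cost computation over raster-ordered pixels instead of random-access arrays, which is what makes the in-stream FPGA approach power-efficient.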

  2. X-RAY IMAGING Achieving the third dimension using coherence

    DOE PAGES

    Robinson, Ian; Huang, Xiaojing

    2017-01-25

    X-ray imaging is extensively used in medicine and materials science. Traditionally, the depth dimension is obtained by turning the sample to gain different views. The famous penetrating properties of X-rays mean that projection views of the subject sample can be readily obtained in the linear absorption regime. 180 degrees of projections can then be combined using computed tomography (CT) methods to obtain a full 3D image, a technique extensively used in medical imaging. In the work now presented in Nature Materials, Stephan Hruszkewycz and colleagues have demonstrated genuine 3D imaging by a new method called 3D Bragg projection ptychography. Their approach combines the 'side view' capability of using Bragg diffraction from a crystalline sample with the coherence capabilities of ptychography. Thus, it results in a 3D image from a 2D raster scan of a coherent beam across a sample that does not have to be rotated.

  3. Development and evaluation of a hand tracker using depth images captured from an overhead perspective.

    PubMed

    Czarnuch, Stephen; Mihailidis, Alex

    2015-03-27

    We present the development and evaluation of a robust hand tracker based on single overhead depth images for use in the COACH, an assistive technology for people with dementia. The new hand tracker was designed to overcome limitations experienced by the COACH in previous clinical trials. We train a random decision forest classifier using ∼5000 manually labeled, unbalanced, training images. Hand positions from the classifier are translated into task actions based on proximity to environmental objects. Tracker performance is evaluated using a large set of ∼24 000 manually labeled images captured from 41 participants in a fully-functional washroom, and compared to the system's previous colour-based hand tracker. Precision and recall were 0.994 and 0.938 for the depth tracker compared to 0.981 and 0.822 for the colour tracker with the current data, and 0.989 and 0.466 in the previous study. The improved tracking performance supports integration of the depth-based tracker into the COACH toward unsupervised, real-world trials. Implications for Rehabilitation: The COACH is an intelligent assistive technology that can enable people with cognitive disabilities to stay at home longer, supporting the concept of aging-in-place. Automated prompting systems, a type of intelligent assistive technology, can help to support the independent completion of activities of daily living, increasing the independence of people with cognitive disabilities while reducing the burden of care experienced by caregivers. Robust motion tracking using depth imaging supports the development of intelligent assistive technologies like the COACH. Robust motion tracking also has application to other forms of assistive technologies including gaming, human-computer interaction and automated assessments.

  4. The Effects of Multimedia Learning on Thai Primary Pupils' Achievement in Size and Depth of Vocabulary Knowledge

    ERIC Educational Resources Information Center

    Jingjit, Mathukorn

    2015-01-01

    This study aims to obtain more insight regarding the effect of multimedia learning on third grade of Thai primary pupils' achievement in Size and Depth Vocabulary of English. A quasi-experiment is applied using "one group pretest-posttest design" combined with "time series design," as well as data triangulation. The sample…

  5. Maximum imaging depth comparison in porcine vocal folds using 776-nm vs. 1552-nm excitation wavelengths

    NASA Astrophysics Data System (ADS)

    Yildirim, Murat; Ferhanoglu, Onur; Kobler, James B.; Zeitels, Steven M.; Ben-Yakar, Adela

    2013-02-01

    Vocal fold scarring is one of the major causes of voice disorders and may arise from overuse or post-surgical wound healing. One promising treatment utilizes the injection of soft biomaterials aimed at restoring viscoelasticity of the outermost vibratory layer of the vocal fold, the superficial lamina propria (SLP). However, the density of the tissue and the required injection pressure impair proper localization of the injected biomaterial in the SLP. To enhance treatment effectiveness, we are investigating a technique to image and ablate sub-epithelial planar voids in vocal folds using ultrafast laser pulses to better localize the injected biomaterial. It is challenging to optimize the excitation wavelength to perform imaging and ablation at depths suitable for clinical use. Here, we compare the maximum imaging depth of two-photon autofluorescence and second-harmonic generation with that of third-harmonic generation imaging for healthy porcine vocal folds. We used a home-built inverted nonlinear scanning microscope together with a high repetition rate (2 MHz) ultrafast fiber laser (Raydiance Inc.). We acquired both two-photon autofluorescence and second-harmonic generation signals using the 776 nm excitation wavelength, and third-harmonic generation signals using the 1552 nm excitation wavelength. We observed that the maximum imaging depth of 114 μm at the 776 nm wavelength is significantly improved to 205 μm when third-harmonic generation is employed at the 1552 nm wavelength, without any observable damage to the tissue.

  6. Extending the depth of field with chromatic aberration for dual-wavelength iris imaging.

    PubMed

    Fitzgerald, Niamh M; Dainty, Christopher; Goncharov, Alexander V

    2017-12-11

    We propose a method of extending the depth of field to twice that achievable by conventional lenses for the purpose of a low-cost iris-recognition front-facing camera in mobile phones. By introducing intrinsic primary chromatic aberration in the lens, the depth of field is doubled by means of dual-wavelength illumination. The lens parameters (radius of curvature, optical power) can be found analytically by using paraxial raytracing. The effective range of distances covered increases with the dispersion of the chosen glass and with the distance of the near object point.
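    The doubling idea can be sketched with the paraxial thin-lens equation: a fixed sensor plane is conjugate to a different object distance at each wavelength because the lens power (n(λ) − 1)·K disperses, so dual-wavelength illumination yields two distinct in-focus object distances from a single lens. The indices and dimensions below are hypothetical, not the paper's design.

```python
def conjugate_object_distance(f, s_img):
    """Gaussian thin-lens equation 1/s_obj + 1/s_img = 1/f, solved for
    the object distance that is sharply imaged onto a sensor fixed at
    s_img behind the lens (all distances positive, in mm)."""
    return 1.0 / (1.0/f - 1.0/s_img)

# hypothetical dispersive singlet: indices and geometry are illustrative
n_red, n_blue = 1.51, 1.53
K = 0.25 / (n_red - 1.0)             # curvature term chosen so f_red = 4 mm
f_red = 1.0 / ((n_red - 1.0) * K)    # 4.0 mm
f_blue = 1.0 / ((n_blue - 1.0) * K)  # shorter: blue is refracted more strongly
s_img = 4.1                          # fixed sensor distance (mm)
s_red = conjugate_object_distance(f_red, s_img)
s_blue = conjugate_object_distance(f_blue, s_img)
print(round(s_red, 1), round(s_blue, 1))   # two distinct in-focus distances
```

    Each wavelength carries its own depth of field around its conjugate distance, so illuminating with both effectively stitches the two ranges together.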

  7. Principal component analysis of TOF-SIMS spectra, images and depth profiles: an industrial perspective

    NASA Astrophysics Data System (ADS)

    Pacholski, Michaeleen L.

    2004-06-01

    Principal component analysis (PCA) has been successfully applied to time-of-flight secondary ion mass spectrometry (TOF-SIMS) spectra, images and depth profiles. Although SIMS spectral data sets can be small (in comparison to datasets typically discussed in the literature from other analytical techniques such as gas or liquid chromatography), each spectrum has thousands of ions, resulting in what can be a difficult comparison of samples. Analysis of industrially derived samples means the identity of most surface species is unknown a priori, and samples must be analyzed rapidly to satisfy customer demands. PCA enables rapid assessment of spectral differences (or lack thereof) between samples and identification of chemically different areas on sample surfaces for images. Depth profile analysis helps define interfaces and identify low-level components in the system.
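    The kind of rapid sample comparison described here can be sketched with a generic SVD-based PCA; the toy "spectra" and function below are illustrative and not tied to any particular TOF-SIMS software:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """PCA of a (samples x channels) matrix via SVD: mean-center,
    decompose, and return each sample's scores on the leading
    principal components."""
    centered = spectra - spectra.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u[:, :n_components] * s[:n_components]

# two groups of toy spectra differing in a single peak
rng = np.random.default_rng(1)
base = rng.random(100)
group_a = base + rng.normal(0, 0.01, (5, 100))
group_b = base + rng.normal(0, 0.01, (5, 100))
group_b[:, 40] += 1.0                     # extra peak in group B only
scores = pca_scores(np.vstack([group_a, group_b]))
# PC1 captures the group difference: opposite-sign scores for A vs B
print((scores[:5, 0] * scores[5:, 0] < 0).all())
```

    With thousands of mass channels per spectrum, this projection onto a few components is what makes an at-a-glance comparison of many samples practical.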

  8. Overcoming sampling depth variations in the analysis of broadband hyperspectral images of breast tissue (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kho, Esther; de Boer, Lisanne L.; Van de Vijver, Koen K.; Sterenborg, Henricus J. C. M.; Ruers, Theo J. M.

    2017-02-01

    Worldwide, up to 40% of breast conserving surgeries require additional operations due to positive resection margins. We propose to reduce this percentage by using hyperspectral imaging for resection margin assessment during surgery. Spectral hypercubes were collected from 26 freshly excised breast specimens with a pushbroom camera (900-1700 nm). Computer simulations of the penetration depth in breast tissue suggest a strong variation in sampling depth (0.5-10 mm) over this wavelength range. This was confirmed with a breast-tissue-mimicking phantom study. Smaller penetration depths are observed in wavelength regions with high water and/or fat absorption. Consequently, tissue classification based on spectral analysis over the whole wavelength range becomes complicated. This is especially a problem in highly inhomogeneous human tissue. We developed a method, called derivative imaging, which allows accurate tissue analysis without the impediment of dissimilar sampling volumes. A few assumptions were made based on previous research. First, the spectra acquired with our camera from breast tissue are mainly shaped by fat and water absorption. Second, tumor tissue contains less fat and more water than healthy tissue. Third, scattering slopes of different tissue types are assumed to be alike. In derivative imaging, derivatives are calculated between wavelengths a few nanometers apart, ensuring similar penetration depths. The wavelength choice determines the accuracy of the method and the resolution. Preliminary results on 3 breast specimens indicate a classification accuracy of 93% when using wavelength regions characterized by water and fat absorption. The sampling depths at these regions are 1 mm and 5 mm.
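    The derivative-imaging idea, differencing two bands only a few nanometers apart so that both share a similar sampling depth, can be sketched as follows; the band spacing and normalization are illustrative assumptions rather than the authors' exact recipe:

```python
import numpy as np

def derivative_image(hypercube, wavelengths, lam, dlam=4.0):
    """Spectral-derivative image: difference between two bands a few
    nanometres apart (hence similar sampling depths), normalised by
    the wavelength gap.  `hypercube` is (rows, cols, bands)."""
    i1 = int(np.argmin(np.abs(wavelengths - lam)))
    i2 = int(np.argmin(np.abs(wavelengths - (lam + dlam))))
    return (hypercube[:, :, i2] - hypercube[:, :, i1]) / (wavelengths[i2] - wavelengths[i1])

# toy hypercube: 8x8 pixels, bands every 2 nm from 1150 nm, linear spectra
wl = 1150 + 2.0 * np.arange(50)
cube = np.linspace(0, 1, 50)[None, None, :] * np.ones((8, 8, 1))
d = derivative_image(cube, wl, lam=1200.0)
print(round(float(d[0, 0]), 4))
```

    Because both bands probe nearly the same tissue volume, the derivative reflects local spectral slope (fat/water absorption features) rather than differences in sampling depth.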

  9. Hybrid Imaging for Extended Depth of Field Microscopy

    NASA Astrophysics Data System (ADS)

    Zahreddine, Ramzi Nicholas

    An inverse relationship exists in optical systems between the depth of field (DOF) and the minimum resolvable feature size. This trade-off is especially detrimental in high numerical aperture microscopy systems where resolution is pushed to the diffraction limit resulting in a DOF on the order of 500 nm. Many biological structures and processes of interest span over micron scales resulting in significant blurring during imaging. This thesis explores a two-step computational imaging technique known as hybrid imaging to create extended DOF (EDF) microscopy systems with minimal sacrifice in resolution. In the first step a mask is inserted at the pupil plane of the microscope to create a focus invariant system over 10 times the traditional DOF, albeit with reduced contrast. In the second step the contrast is restored via deconvolution. Several EDF pupil masks from the literature are quantitatively compared in the context of biological microscopy. From this analysis a new mask is proposed, the incoherently partitioned pupil with binary phase modulation (IPP-BPM), that combines the most advantageous properties from the literature. Total variation regularized deconvolution models are derived for the various noise conditions and detectors commonly used in biological microscopy. State of the art algorithms for efficiently solving the deconvolution problem are analyzed for speed, accuracy, and ease of use. The IPP-BPM mask is compared with the literature and shown to have the highest signal-to-noise ratio and lowest mean square error post-processing. A prototype of the IPP-BPM mask is fabricated using a combination of 3D femtosecond glass etching and standard lithography techniques. The mask is compared against theory and demonstrated in biological imaging applications.

  10. Low-Achieving Readers, High Expectations: Image Theatre Encourages Critical Literacy

    ERIC Educational Resources Information Center

    Rozansky, Carol Lloyd; Aagesen, Colleen

    2010-01-01

    Students in an eighth-grade, urban, low-achieving reading class were introduced to critical literacy through engagement in Image Theatre. Developed by liberatory dramatist Augusto Boal, Image Theatre gives participants the opportunity to examine texts in the triple role of interpreter, artist, and sculptor (i.e., image creator). The researchers…

  11. A coaxially focused multi-mode beam for optical coherence tomography imaging with extended depth of focus (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yin, Biwei; Liang, Chia-Pin; Vuong, Barry; Tearney, Guillermo J.

    2017-02-01

    Conventional OCT images, obtained using a focused Gaussian beam, have a lateral resolution of approximately 30 μm and a depth of focus (DOF) of 2-3 mm, defined as the confocal parameter (twice the Gaussian beam Rayleigh range). Improvement of lateral resolution without sacrificing imaging range requires techniques that can extend the DOF. Previously, we described a self-imaging wavefront-division optical system that provided an estimated one order of magnitude DOF extension. In this study, we further investigate the properties of the coaxially focused multi-mode (CAFM) beam created by this self-imaging wavefront-division optical system and demonstrate its feasibility for real-time biological tissue imaging. Gaussian beam and CAFM beam fiber-optic probes with similar numerical apertures (objective NA≈0.5) were fabricated, providing lateral resolutions of approximately 2 μm. Rigorous lateral resolution characterization over depth was performed for both probes. The CAFM beam probe was found to provide a DOF approximately one order of magnitude greater than that of the Gaussian beam probe. By incorporating the CAFM beam fiber-optic probe into a μOCT system with 1.5 μm axial resolution, we were able to acquire cross-sectional images of swine small intestine ex vivo, enabling the visualization of subcellular structures and providing high-quality OCT images over more than a 300 μm depth range.

  12. Novel dental dynamic depth profilometric imaging using simultaneous frequency-domain infrared photothermal radiometry and laser luminescence

    NASA Astrophysics Data System (ADS)

    Nicolaides, Lena; Mandelis, Andreas

    2000-01-01

    A high-spatial-resolution dynamic experimental imaging setup, which can provide simultaneous measurements of laser- induced frequency-domain infrared photothermal radiometric and luminescence signals from defects in teeth, has been developed for the first time. The major findings of this work are: (1) radiometric images are complementary to (anticorrelated with) luminescence images, as a result of the nature of the two physical signal generation processes; (2) the radiometric amplitude exhibits much superior dynamic (signal resolution) range to luminescence in distinguishing between intact and cracked sub-surface structures in the enamel; (3) the radiometric signal (amplitude and phase) produces dental images with much better defect localization, delineation, and resolution; (4) radiometric images (amplitude and phase) at a fixed modulation frequency are depth profilometric, whereas luminescence images are not; and (5) luminescence frequency responses from enamel and hydroxyapatite exhibit two relaxation lifetimes, the longer of which (approximately ms) is common to all and is not sensitive to the defect state and overall quality of the enamel. Simultaneous radiometric and luminescence frequency scans for the purpose of depth profiling were performed and a quantitative theoretical two-lifetime rate model of dental luminescence was advanced.

  13. Depth Perception and the History of Three-Dimensional Art: Who Produced the First Stereoscopic Images?

    PubMed Central

    2017-01-01

    The history of the expression of three-dimensional structure in art can be traced from the use of occlusion in Palaeolithic cave paintings, through the use of shadow in classical art, to the development of perspective during the Renaissance. However, the history of the use of stereoscopic techniques is controversial. Although the first undisputed stereoscopic images were presented by Wheatstone in 1838, it has been claimed that two sketches by Jacopo Chimenti da Empoli (c. 1600) can be fused to yield an impression of stereoscopic depth, while others suggest that Leonardo da Vinci’s Mona Lisa is the world’s first stereogram. Here, we report the first quantitative study of perceived depth in these works, in addition to more recent works by Salvador Dalí. To control for the contribution of monocular depth cues, ratings of the magnitude and coherence of depth were recorded for both stereoscopic and pseudoscopic presentations, with a genuine contribution of stereoscopic cues revealed by a difference between these scores. Although effects were clear for Wheatstone and Dalí’s images, no such effects could be found for works produced earlier. As such, we have no evidence to reject the conventional view that the first producer of stereoscopic imagery was Sir Charles Wheatstone. PMID:28203349

  14. Depth Perception and the History of Three-Dimensional Art: Who Produced the First Stereoscopic Images?

    PubMed

    Brooks, Kevin R

    2017-01-01

    The history of the expression of three-dimensional structure in art can be traced from the use of occlusion in Palaeolithic cave paintings, through the use of shadow in classical art, to the development of perspective during the Renaissance. However, the history of the use of stereoscopic techniques is controversial. Although the first undisputed stereoscopic images were presented by Wheatstone in 1838, it has been claimed that two sketches by Jacopo Chimenti da Empoli (c. 1600) can be fused to yield an impression of stereoscopic depth, while others suggest that Leonardo da Vinci's Mona Lisa is the world's first stereogram. Here, we report the first quantitative study of perceived depth in these works, in addition to more recent works by Salvador Dalí. To control for the contribution of monocular depth cues, ratings of the magnitude and coherence of depth were recorded for both stereoscopic and pseudoscopic presentations, with a genuine contribution of stereoscopic cues revealed by a difference between these scores. Although effects were clear for Wheatstone and Dalí's images, no such effects could be found for works produced earlier. As such, we have no evidence to reject the conventional view that the first producer of stereoscopic imagery was Sir Charles Wheatstone.

  15. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2010-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object, such as an anatomical feature. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the anatomical feature; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.

  16. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Guan, Chun (Inventor); Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor)

    2008-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping may be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interfaces with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.

  17. Imaging the Juan de Fuca subducting plate using 3D Kirchhoff Prestack Depth Migration

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Bodin, T.; Allen, R. M.; Tauzin, B.

    2014-12-01

    We propose a new receiver function migration method to image the subducting plate in the western US that utilizes USArray and regional network data. While the well-developed CCP (common conversion point) poststack migration is commonly used for such imaging, our method applies a 3D prestack depth migration approach. The traditional CCP and poststack depth mapping approaches implement the ray tracing and moveout correction for the incoming teleseismic plane wave based on a 1D earth reference model and the assumption of horizontal discontinuities. Although this works well in mapping the reflection position of relatively flat discontinuities (such as the Moho or the LAB), CCP is known to give poor results in the presence of lateral volumetric velocity variations and dipping layers. Instead of making the flat-layer assumption and a 1D moveout correction, seismic rays are traced in a 3D tomographic model with the Fast Marching Method. With travel time information stored, our Kirchhoff migration distributes the amplitude of the receiver function at a given time over all possible conversion points (i.e., along a semi-ellipse) on the output migrated depth section. The migrated reflectors will appear where the semicircles constructively interfere, whereas destructive interference will cancel out noise. Synthetic tests show that in the case of a horizontal discontinuity, the prestack Kirchhoff migration gives similar results to CCP, but without spurious multiples, as this energy is stacked destructively and cancels out. For 45-degree and 60-degree dipping discontinuities, it also performs better in terms of imaging the correct boundary position and dip angle. This is especially useful in the western US, beneath which the Juan de Fuca plate has subducted to ~450 km with a dipping angle that may exceed 50 degrees. 
While the traditional CCP method will underestimate the dipping angle, our proposed imaging method will provide an accurate 3D subducting plate image without
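    The idea of smearing each recorded amplitude along its isochron and letting the smears interfere constructively at true reflectors can be shown with a toy constant-velocity, zero-offset Kirchhoff (diffraction-stack) migration, where the isochrons reduce to semicircles. All geometry and velocity values below are invented for illustration:

```python
import numpy as np

v = 2000.0                      # m/s, assumed constant velocity
dt = 0.001                      # s, sample interval
nt = 1000
xs = np.linspace(0, 1000, 51)   # coincident source/receiver positions (zero offset)
scatterer = (500.0, 400.0)      # true (x, z) of a point diffractor, metres

# Synthetic zero-offset data: one spike per trace at the diffractor's
# two-way traveltime.
data = np.zeros((len(xs), nt))
for i, x in enumerate(xs):
    t = 2 * np.hypot(x - scatterer[0], scatterer[1]) / v
    data[i, int(np.round(t / dt))] = 1.0

# Migration: for every image point, stack each trace's amplitude at the
# predicted two-way time; spikes smear along semicircles and only add up
# coherently at the true scatterer location.
xi = np.linspace(0, 1000, 101)
zi = np.linspace(100, 700, 61)
image = np.zeros((len(zi), len(xi)))
for i, x in enumerate(xs):
    for iz, z in enumerate(zi):
        t = 2 * np.hypot(xi - x, z) / v
        it = np.round(t / dt).astype(int)
        ok = it < nt
        image[iz, ok] += data[i, it[ok]]

iz, ix = np.unravel_index(np.argmax(image), image.shape)
print(xi[ix], zi[iz])           # → 500.0 400.0  (the true scatterer)
```

The stacked image peaks at the true diffractor position because only there do all 51 semicircular smears coincide; elsewhere they interfere destructively, which is the mechanism the abstract invokes for suppressing multiples.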

  18. Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy

    PubMed Central

    Quirin, Sean; Vladimirov, Nikita; Yang, Chao-Tsung; Peterka, Darcy S.; Yuste, Rafael; Ahrens, Misha B.

    2016-01-01

    Increasing the volumetric imaging speed of light-sheet microscopy will improve its ability to detect fast changes in neural activity. Here, a system is introduced for brain-wide imaging of neural activity in the larval zebrafish by coupling structured illumination with cubic phase extended depth-of-field (EDoF) pupil encoding. This microscope enables faster light-sheet imaging and facilitates arbitrary plane scanning—removing constraints on acquisition speed, alignment tolerances, and physical motion near the sample. The usefulness of this method is demonstrated by performing multi-plane calcium imaging in the fish brain with a 416 × 832 × 160 µm field of view at 33 Hz. The optomotor response behavior of the zebrafish is monitored at high speeds, and time-locked correlations of neuronal activity are resolved across its brain. PMID:26974063

  19. A range/depth modulation transfer function (RMTF) framework for characterizing 3D imaging LADAR performance

    NASA Astrophysics Data System (ADS)

    Staple, Bevan; Earhart, R. P.; Slaymaker, Philip A.; Drouillard, Thomas F., II; Mahony, Thomas

    2005-05-01

    3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in 3-dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined, is improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system's ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework utilizes only those terminologies proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new, Range-Based, Slanted Knife-Edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enable improved system standardization.
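    The knife-edge recipe that the RMTF extends into the range dimension follows the classic intensity-domain procedure: differentiate the edge spread function (ESF) to get the line spread function (LSF), then take the normalized magnitude of its Fourier transform. A sketch with a synthetic Gaussian-blurred edge standing in for real sensor data (the paper's exact range-based procedure is not reproduced here):

```python
import numpy as np
from math import erf

x = np.linspace(-5, 5, 512)     # position across the edge (arbitrary units)
sigma = 0.8                     # assumed edge blur, same units as x

# Edge spread function: an ideal step blurred by a Gaussian of width sigma.
esf = np.array([0.5 * (1 + erf(xi / (sigma * np.sqrt(2)))) for xi in x])

# Line spread function: derivative of the edge profile.
lsf = np.gradient(esf, x)

# MTF: magnitude of the LSF spectrum, normalized to unity at zero frequency.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
# For a Gaussian blur, mtf[k] ≈ exp(-2 * pi**2 * sigma**2 * f_k**2),
# with f_k ≈ k / 10 cycles per unit over this 10-unit window.
```

For the RMTF the same pipeline would be applied to a depth (range) profile across a slanted 3D edge target rather than to an intensity profile.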

  20. Exploring High-Achieving Students' Images of Mathematicians

    ERIC Educational Resources Information Center

    Aguilar, Mario Sánchez; Rosas, Alejandro; Zavaleta, Juan Gabriel Molina; Romo-Vázquez, Avenilde

    2016-01-01

    The aim of this study is to describe the images that a group of high-achieving Mexican students hold of mathematicians. For this investigation, we used a research method based on the Draw-A-Scientist Test (DAST) with a sample of 63 Mexican high school students. The group of students' pictorial and written descriptions of mathematicians assisted us…

  1. Using axicons for depth discrimination in excitation-emission laser scanning imaging systems

    NASA Astrophysics Data System (ADS)

    Iglesias, Ignacio

    2017-10-01

    Besides generating good approximations to zero-order Bessel beams, an axicon lens coupled to a spatial filter can be used to collect light while preserving information on the depth coordinate of the source location. To demonstrate the principle, we describe an experimental excitation-emission fluorescence imaging system that uses an axicon twice: to generate an excitation Bessel beam and to collect the emitted light.

  2. Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging.

    PubMed

    Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho

    2004-12-01

    A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.

  3. Diffuse Optical Imaging and Spectroscopy of the Human Breast for Quantitative Oximetry with Depth Resolution

    NASA Astrophysics Data System (ADS)

    Yu, Yang

    Near-infrared spectral imaging for breast cancer diagnostics and monitoring has been a hot research topic for the past decade. Here we present instrumentation for diffuse optical imaging of breast tissue with a tandem scan of a single source-detector pair with broadband light in transmission geometry for tissue oximetry. The efforts to develop the continuous-wave (CW) domain instrument are described, and a frequency-domain (FD) system is also used to measure the bulk tissue optical properties and the breast thickness distribution. We also describe the efforts to improve the data processing codes in the 2D spatial domain for better noise suppression, contrast enhancement, and spectral analysis. We developed a paired-wavelength approach, which is based on finding pairs of wavelengths that feature the same optical contrast, to quantify the tissue oxygenation for the absorption structures detected in the 2D structural image. A total of eighteen subjects, two of whom were bearing breast cancer on their right breasts, were measured with this hybrid CW/FD instrument, and the data were processed with the improved algorithms. We obtained an average tissue oxygenation value of 87% +/- 6% from the healthy breasts, significantly higher than that measured in the diseased breasts (69% +/- 14%) (p < 0.01). For the two diseased breasts, the tumor areas bear hypoxia signatures versus the remainder of the breast, with oxygenation values of 49 +/- 11% (diseased region) vs. 61 +/- 16% (healthy regions) for the breast with invasive ductal carcinoma, and 58 +/- 8% (diseased region) vs. 77 +/- 11% (healthy regions) for ductal carcinoma in situ. Our subjects came from various ethnic/racial backgrounds, and two-thirds of our subjects were less than thirty years old, indicating a potential to apply optical mammography to a broad population. The second part of this thesis covers the topic of depth discrimination, which is lacking with our single source-detector scan system. Based on an off

  4. Comparison of Coincident Multiangle Imaging Spectroradiometer and Moderate Resolution Imaging Spectroradiometer Aerosol Optical Depths over Land and Ocean Scenes Containing Aerosol Robotic Network Sites

    NASA Technical Reports Server (NTRS)

    Abdou, Wedad A.; Diner, David J.; Martonchik, John V.; Bruegge, Carol J.; Kahn, Ralph A.; Gaitley, Barbara J.; Crean, Kathleen A.; Remer, Lorraine A.; Holben, Brent

    2005-01-01

    The Multiangle Imaging Spectroradiometer (MISR) and the Moderate Resolution Imaging Spectroradiometer (MODIS), launched on 18 December 1999 aboard the Terra spacecraft, are making global observations of top-of-atmosphere (TOA) radiances. Aerosol optical depths and particle properties are independently retrieved from these radiances using methodologies and algorithms that make use of the instruments' corresponding designs. This paper compares instantaneous optical depths retrieved from simultaneous and collocated radiances measured by the two instruments at locations containing sites within the Aerosol Robotic Network (AERONET). A set of 318 MISR and MODIS images, obtained during the months of March, June, and September 2002 at 62 AERONET sites, were used in this study. The results show that over land, MODIS aerosol optical depths at 470 and 660 nm are larger than those retrieved from MISR by about 35% and 10% on average, respectively, when all land surface types are included in the regression. The differences decrease when coastal and desert areas are excluded. For optical depths retrieved over ocean, MISR is on average about 0.1 and 0.05 higher than MODIS in the 470 and 660 nm bands, respectively. Part of this difference is due to radiometric calibration and is reduced to about 0.01 and 0.03 when recently derived band-to-band adjustments in the MISR radiometry are incorporated. Comparisons with AERONET data show similar patterns.

  5. Performance comparison between 8 and 14 bit-depth imaging in polarization-sensitive swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragoda, Deepa K.; Matcher, Stephen J.

    2011-03-01

    We compare true 8 and 14 bit-depth imaging of SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3 μm wavelength by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of an equine tendon sample indicate no significant differences between images acquired by the two DAQ boards, suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when the image data sampled with the 14-bit DAQ from human finger skin are artificially bit-reduced during post-processing. In agreement with previously reported results, we observe that in our system the real-world 8-bit image shows more artifacts than the image obtained by numerically truncating the raw 14-bit data to 8 bits, especially in low-intensity image areas. This is due to the higher noise floor and reduced dynamic range of the 8-bit DAQ. One possible disadvantage of 8-bit acquisition is thus a reduced imaging dynamic range, which can manifest itself as an increase in image artifacts due to strong Fresnel reflections.
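    The numerical truncation experiment mentioned above (reducing raw 14-bit samples to 8 bits in post-processing) amounts to keeping the most significant bits; a sketch with simulated samples, not OCT data:

```python
import numpy as np

# Simulated 14-bit ADC samples (illustrative values).
rng = np.random.default_rng(1)
raw14 = rng.integers(0, 2**14, size=100_000)

trunc8 = raw14 >> 6          # keep the 8 most significant bits
restored = trunc8 << 6       # back on the 14-bit scale for comparison

# Dropping 6 bits bounds the quantization error at 2**6 - 1 = 63 LSBs.
err = raw14 - restored
assert err.min() >= 0 and err.max() <= 63

# Quantization dynamic-range penalty of 8-bit vs 14-bit: 6 bits ≈ 36 dB.
dr_loss = 20 * np.log10(2**14 / 2**8)
print(round(dr_loss, 1))     # → 36.1
```

This captures only the quantization arithmetic; as the abstract notes, a real 8-bit DAQ also differs from numerical truncation through its higher analog noise floor.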

  6. Prestack depth migration for complex 2D structure using phase-screen propagators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, P.; Huang, Lian-Jie; Burch, C.

    1997-11-01

    We present results for the phase-screen propagator method applied to prestack depth migration of the Marmousi synthetic data set. The data were migrated as individual common-shot records and the resulting partial images were superposed to obtain the final complete image. Tests were performed to determine the minimum number of frequency components required to achieve the best quality image, and this in turn provided estimates of the minimum computing time. Running on a single-processor SUN SPARC Ultra I, high-quality images were obtained in as little as 8.7 CPU hours and adequate images were obtained in as little as 4.4 CPU hours. Different methods were tested for choosing the reference velocity used for the background phase-shift operation and for defining the slowness perturbation screens. Although the depths of some of the steeply dipping, high-contrast features were shifted slightly, the overall image quality was fairly insensitive to the choice of the reference velocity. Our tests show the phase-screen method to be a reliable and fast algorithm for imaging complex geologic structures, at least for complex 2D synthetic data where the velocity model is known.
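    One depth step of a phase-screen propagator can be sketched as follows: a wavenumber-domain phase shift handles diffraction through the reference velocity, and a thin space-domain "screen" corrects the phase for the local slowness perturbation. Grid, frequency, and velocity values here are illustrative only:

```python
import numpy as np

nx, dx = 256, 10.0          # lateral grid: 256 points, 10 m spacing
dz = 10.0                   # depth step, m
f = 25.0                    # Hz, one monochromatic component
v_ref = 2000.0              # reference velocity for the background phase shift
v = np.full(nx, 2000.0)
v[100:150] = 2500.0         # lateral velocity perturbation

w = 2 * np.pi * f
kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
# Vertical wavenumber in the reference medium (evanescent part clipped to 0).
kz = np.sqrt(np.maximum((w / v_ref) ** 2 - kx ** 2, 0.0))

field = np.zeros(nx, dtype=complex)
field[nx // 2] = 1.0        # point source at the surface

# Step 1: background phase shift in the wavenumber domain (diffraction).
field = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * dz))
# Step 2: thin phase screen for the slowness perturbation (space domain).
field *= np.exp(1j * w * (1.0 / v - 1.0 / v_ref) * dz)
```

Both steps multiply by unit-magnitude phase factors, so the step conserves total power; a full migration repeats this per depth level and frequency, which is why limiting the number of frequency components directly limits the run time quoted above.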

  7. Depth of focus extended microscope configuration for imaging of incorporated groups of molecules, DNA constructs and clusters inside bacterial cells

    NASA Astrophysics Data System (ADS)

    Fessl, Tomas; Ben-Yaish, Shai; Vacha, Frantisek; Adamec, Frantisek; Zalevsky, Zeev

    2009-07-01

    Imaging of small objects such as single molecules, DNA clusters and single bacterial cells is problematic not only because of the lateral resolution obtainable with existing microscopy but also, and just as fundamentally, because of the lack of sufficient axial depth of focus to keep the full object in focus simultaneously. Extension of the depth of focus is also helpful for single-molecule steady-state FRET measurements. In this technique it is crucial to obtain data from many well-focused molecules, which are often located at different axial depths. In this paper we present an all-optical, real-time technique for extending the depth of focus that may be incorporated into any high-NA microscope system and used for the above-mentioned applications. We demonstrate experimentally how, after integration of a special optical element into the high-NA 100× objective lens of a single-molecule imaging microscope system, the depth of focus is significantly improved while maintaining the same lateral resolution in imaging applications of incorporated groups of molecules, DNA constructs and clusters inside bacterial cells.

  8. Forward-looking infrared imaging predicts ultimate burn depth in a porcine vertical injury progression model.

    PubMed

    Miccio, Joseph; Parikh, Shruti; Marinaro, Xavier; Prasad, Atulya; McClain, Steven; Singer, Adam J; Clark, Richard A F

    2016-03-01

    Current methods of assessing burn depth are limited and are primarily based on visual assessments by burn surgeons. This technique has been shown to have only 60% accuracy, and a more accurate, simple, noninvasive method is needed to determine burn wound depth. Forward-looking infrared (FLIR) thermography is both noninvasive and user-friendly, with the potential to rapidly assess burn depth. The purpose of this paper is to determine if early changes in burn temperature (first 3 days) can be a predictor of burn depth as assessed by vertical scarring 28 days after injury. While under general anesthesia, 20 burns were created on the backs of each of two female Yorkshire swine using a 2.5 cm × 2.5 cm × 7.5 cm, 150 g aluminum bar, for a total of 40 burns. FLIR imaging was performed at both early (1, 2 and 3 days) and late (7, 10, 14, 17, 21, 24 and 28 days) time points. Burns were imaged from a height of 12 inches above the skin surface. FLIR ExaminIR© software was used to examine the infrared thermographs. One hundred temperature points from burn edge to edge across the center of the burn were collected for each burn at all time points and were exported as a comma-separated values (CSV) file. The CSV file was processed and analyzed using a MATLAB program. The temperature profiles through the center of the burns generated parabola-like curves. The lowest temperature (temperature minimum) and a line midway between the temperature minimum and the ambient skin temperature at the burn edges were defined, and the area of the curve was calculated (the "temperature half-area"). Half-area values 2 days after burn had higher correlations with scar depth than did the minimum temperatures. However, burns that became warmer from 1 day to 2 days after injury had a lower scar depth than burns that became cooler, and this trend was best predicted by temperature minima. When data were analyzed as a diagnostic test for sensitivity and specificity using >3mm scarring, i.e. a full-thickness burn, as a clinically
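    The "temperature half-area" metric described above can be sketched directly: take the temperature profile across the burn centre, find its minimum, draw a line midway between that minimum and the ambient edge temperature, and sum the area of the profile lying below that line. The study's MATLAB implementation is not reproduced here, so the details below are assumptions:

```python
import numpy as np

def temperature_half_area(profile):
    """Area between the half-way line and the cooler burn-centre profile."""
    profile = np.asarray(profile, dtype=float)
    t_min = profile.min()
    ambient = 0.5 * (profile[0] + profile[-1])   # edge temperatures as ambient
    halfway = 0.5 * (t_min + ambient)            # line midway to the minimum
    below = np.clip(halfway - profile, 0.0, None)
    return float(below.sum())                    # discrete area, index units

# Parabola-like 100-point profiles: cooler centres mimic deeper injury.
x = np.linspace(-1, 1, 100)
shallow = 33.0 - 4.0 * (1 - x**2)    # centre 4 °C below the 33 °C edges
deep = 33.0 - 6.0 * (1 - x**2)       # centre 6 °C below the edges

assert temperature_half_area(deep) > temperature_half_area(shallow) > 0
```

The metric grows with both the depth and the width of the cool region, which is why it can outperform the single-point temperature minimum as a correlate of scar depth.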

  9. Understanding and improving optical coherence tomography imaging depth in selective laser sintering nylon 12 parts and powder

    NASA Astrophysics Data System (ADS)

    Lewis, Adam D.; Katta, Nitesh; McElroy, Austin; Milner, Thomas; Fish, Scott; Beaman, Joseph

    2018-04-01

    Optical coherence tomography (OCT) has shown promise as a process sensor in selective laser sintering (SLS) due to its ability to yield depth-resolved data not attainable with conventional sensors. However, OCT images of nylon 12 powder and nylon 12 components fabricated via SLS contain artifacts that have not been previously investigated in the literature. A better understanding of light interactions with SLS powder and components is foundational for further research expanding the utility of OCT imaging in SLS and other additive manufacturing (AM) sensing applications. Specifically, in this work, nylon powder and sintered parts were imaged in air and in an index matching liquid. Subsequent image analysis revealed the cause of "signal-tail" OCT image artifacts to be a combination of both inter and intraparticle multiple-scattering and reflections. Then, the OCT imaging depth of nylon 12 powder and the contrast-to-noise ratio of a sintered part were improved through the use of an index matching liquid. Finally, polymer crystals were identified as the main source of intraparticle scattering in nylon 12 powder. Implications of these results on future research utilizing OCT in SLS are also given.

  10. Imaging photoplethysmography for clinical assessment of cutaneous microcirculation at two different depths

    NASA Astrophysics Data System (ADS)

    Marcinkevics, Zbignevs; Rubins, Uldis; Zaharans, Janis; Miscuks, Aleksejs; Urtane, Evelina; Ozolina-Moll, Liga

    2016-03-01

    A bispectral imaging photoplethysmography (iPPG) system for clinical assessment of cutaneous microcirculation at two different depths is proposed. The iPPG system has been developed and evaluated under in vivo conditions during various tests: (1) topical application of a vasodilatory liniment on the skin, (2) local skin heating, (3) arterial occlusion, and (4) regional anesthesia. The device has been validated against measurements from a laser Doppler imager (LDI) as a reference. The hardware comprises four bispectral light sources (530 and 810 nm) for uniform illumination of the skin, a video camera, and a control unit for triggering the system. The PPG signals were calculated and the changes in perfusion index (PI) were obtained during the tests. The results showed convincing correlations between PI obtained by iPPG and by LDI in the (1) topical liniment (r = 0.98) and (2) heating (r = 0.98) tests. The topical liniment and local heating tests revealed good selectivity of the system for superficial microcirculation monitoring. It is confirmed that the iPPG system could be used for assessment of cutaneous perfusion at two different depths, corresponding to morphologically and functionally different vascular networks, and thus utilized in clinics as a cost-effective alternative to the LDI.
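    A perfusion index of the kind computed from PPG signals is commonly defined as the ratio of the pulsatile (AC) to the steady (DC) component of the intensity signal. The sketch below uses a simulated signal and assumed frame rate, since the cited system's exact processing chain is not reproduced here:

```python
import numpy as np

fs = 100.0                                    # frames per second (assumed)
t = np.arange(0, 10, 1 / fs)

# Simulated skin intensity: steady DC level with a 1.2 Hz cardiac pulsation.
signal = 100.0 + 2.0 * np.sin(2 * np.pi * 1.2 * t)

dc = signal.mean()                            # steady (DC) component
ac = signal.max() - signal.min()              # peak-to-peak pulsatile amplitude
perfusion_index = 100.0 * ac / dc             # PI in percent

print(round(perfusion_index, 1))              # → 4.0
```

In practice the AC amplitude is usually extracted per heartbeat after band-pass filtering rather than as a raw peak-to-peak value, and the two wavelengths (530 and 810 nm) yield separate PI traces probing different tissue depths.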

  11. Imaging photoplethysmography for clinical assessment of cutaneous microcirculation at two different depths.

    PubMed

    Marcinkevics, Zbignevs; Rubins, Uldis; Zaharans, Janis; Miscuks, Aleksejs; Urtane, Evelina; Ozolina-Moll, Liga

    2016-03-01

    A bispectral imaging photoplethysmography (iPPG) system for clinical assessment of cutaneous microcirculation at two different depths is proposed. The iPPG system has been developed and evaluated under in vivo conditions during various tests: (1) topical application of a vasodilatory liniment on the skin, (2) local skin heating, (3) arterial occlusion, and (4) regional anesthesia. The device has been validated against measurements from a laser Doppler imager (LDI) as a reference. The hardware comprises four bispectral light sources (530 and 810 nm) for uniform illumination of the skin, a video camera, and a control unit for triggering the system. The PPG signals were calculated and the changes in perfusion index (PI) were obtained during the tests. The results showed convincing correlations between PI obtained by iPPG (530 nm) and by LDI in the (1) topical liniment (r = 0.98) and (2) heating (r = 0.98) tests. The topical liniment and local heating tests revealed good selectivity of the system for superficial microcirculation monitoring. It is confirmed that the iPPG system could be used for assessment of cutaneous perfusion at two different depths, corresponding to morphologically and functionally different vascular networks, and thus utilized in clinics as a cost-effective alternative to the LDI.

  12. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  13. Noninvasive measurement of burn wound depth applying infrared thermal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jaspers, Mariëlle E.; Maltha, Ilse M.; Klaessens, John H.; Vet, Henrica C.; Verdaasdonk, Rudolf M.; Zuijlen, Paul P.

    2016-02-01

    In burn wounds, early discrimination between the different depths plays an important role in the treatment strategy. The remaining vasculature in the wound determines its healing potential. Non-invasive measurement tools that can identify the vascularization are therefore considered to be of high diagnostic importance. Thermography is a non-invasive technique that can accurately measure the temperature distribution over a large skin or tissue area; this temperature distribution is a measure of the perfusion of that area. The aim of this study was to investigate the clinimetric properties (i.e. reliability and validity) of thermography for measuring burn wound depth. In a cross-sectional study of 50 burn wounds in 35 patients, the inter-observer reliability and the validity between thermography and Laser Doppler Imaging were studied. With ROC curve analyses, the ΔT cut-off points for different burn wound depths were determined. The inter-observer reliability, expressed by an intra-class correlation coefficient of 0.99, was found to be excellent. In terms of validity, a ΔT cut-off point of 0.96°C (sensitivity 71%; specificity 79%) differentiates between a superficial partial-thickness and a deep partial-thickness burn. A ΔT cut-off point of -0.80°C (sensitivity 70%; specificity 74%) could differentiate between a deep partial-thickness and a full-thickness burn wound. This study demonstrates that thermography is a reliable method for the assessment of burn wound depth. In addition, thermography was reasonably able to discriminate among different burn wound depths, indicating its potential use as a diagnostic tool in clinical burn practice.
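    Cut-off points with paired sensitivity/specificity values, as reported above, are the typical output of an ROC analysis. A sketch that picks a ΔT cut-off by maximizing the Youden index (sensitivity + specificity - 1) on synthetic, labelled ΔT values; the study's actual measurements are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic ΔT values: superficial partial-thickness burns run warmer
# (relative to reference skin) than deeper burns.
superficial = rng.normal(1.5, 0.6, 25)    # label 1
deep = rng.normal(0.3, 0.6, 25)           # label 0
values = np.concatenate([superficial, deep])
labels = np.concatenate([np.ones(25), np.zeros(25)])

best_cut, best_j = None, -1.0
for cut in np.sort(values):
    pred = values >= cut                   # call "superficial" above the cut-off
    sens = np.mean(pred[labels == 1])      # true-positive rate
    spec = np.mean(~pred[labels == 0])     # true-negative rate
    j = sens + spec - 1                    # Youden index
    if j > best_j:
        best_cut, best_j = cut, j

print(round(float(best_cut), 2), round(float(best_j), 2))
```

Sweeping every observed value as a candidate threshold traces the ROC curve; the Youden-optimal point is one common choice, though a clinical application may instead weight sensitivity and specificity unequally.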

  14. Quantitative depth resolved microcirculation imaging with optical coherence tomography angiography (Part II): Microvascular network imaging.

    PubMed

    Gao, Wanrong

    2017-04-17

    In this work, we review the main phenomena that have been explored in OCT angiography to image the vessels of the microcirculation within living tissues, with emphasis on how the different processing algorithms were derived to circumvent specific limitations. Parameters are then discussed that can quantitatively describe the depth-resolved microvascular network for possible clinical diagnosis applications. Finally, future directions in continuing OCT development are discussed. This article is protected by copyright. All rights reserved.

  15. Theory of reflectivity blurring in seismic depth imaging

    NASA Astrophysics Data System (ADS)

    Thomson, C. J.; Kitchenside, P. W.; Fletcher, R. P.

    2016-05-01

    A subsurface extended image gather obtained during controlled-source depth imaging yields a blurred kernel of an interface reflection operator. This reflectivity kernel or reflection function comprises the interface plane-wave reflection coefficients and so, in principle, the gather contains amplitude versus offset or angle information. We present a modelling theory for extended image gathers that accounts for variable illumination and blurring, under the assumption of a good migration-velocity model. The method involves forward modelling as well as migration or back propagation so as to define a receiver-side blurring function, which contains the effects of the detector array for a given shot. Composition with the modelled incident wave and summation over shots then yields an overall blurring function that relates the reflectivity to the extended image gather obtained from field data. The spatial evolution or instability of blurring functions is a key concept, and there is generally not just spatial blurring in the apparent reflectivity, but also slowness or angle blurring. Gridded blurring functions can be estimated with, for example, a reverse-time migration modelling engine. A calibration step is required to account for ad hoc band-limitedness in the modelling, and the method also exploits blurring-function reciprocity. To demonstrate the concepts, we show numerical examples of various quantities using the well-known SIGSBEE test model and a simple salt-body overburden model, both for 2-D. The moderately strong slowness/angle blurring in the latter model suggests that the effect on amplitude versus offset or angle analysis should be considered in more realistic structures. Although the description and examples are for 2-D, the extension to 3-D is conceptually straightforward. The computational cost of overall blurring functions implies their targeted use for the foreseeable future, for example, in reservoir characterization. The description is for scalar

  16. 50% duty cycle may be inappropriate to achieve a sufficient chest compression depth when cardiopulmonary resuscitation is performed by female or light rescuers.

    PubMed

    Lee, Chang Jae; Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Choi, Sung Wook; Kim, Ok Jun

    2015-03-01

    Current guidelines for cardiopulmonary resuscitation recommend chest compressions (CC) during 50% of the duty cycle (DC), in part because of the ease with which individuals may learn to achieve it with practice. However, no consideration has been given to a possible interaction between DC and the depth of CC, which has been the subject of recent study. Our aim was to determine whether 50% DC is inappropriate for achieving sufficient chest compression depth for female and light rescuers. Previously collected CC data, performed by senior medical students guided by metronome sounds with various down-stroke patterns and rates, were included in the analysis. Multiple linear regression analysis was performed to determine the association of average compression depth (ACD) with average compression rate (ACR), DC, and the physical characteristics of the performers. Expected ACD was calculated for various settings. DC, ACR, body weight, male sex, and self-assessed physical strength were significantly associated with ACD in multivariate analysis. Based on our calculations, with 50% DC, only men with an ACR of 140/min or faster, or with body weight over 74 kg and an ACR of 120/min, can achieve sufficient ACD. A shorter DC is independently correlated with deeper CC during simulated cardiopulmonary resuscitation. The optimal DC recommended in current guidelines may be inappropriate for achieving sufficient compression depth, especially for female or lighter-weight rescuers.
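    The multiple linear regression step can be sketched with synthetic data: model average compression depth from duty cycle, compression rate, and body weight, and recover the assumed coefficients by least squares. None of the numbers below come from the study:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
dc = rng.uniform(30, 70, n)          # duty cycle, percent
acr = rng.uniform(100, 140, n)       # average compression rate, per minute
weight = rng.uniform(50, 90, n)      # rescuer body weight, kg

# Assumed true relationship: shorter DC, faster rate, and heavier rescuer
# all deepen compressions (coefficients are invented for illustration).
acd = 60 - 0.3 * dc + 0.15 * acr + 0.25 * weight + rng.normal(0, 2, n)

# Ordinary least squares: intercept plus the three predictors.
X = np.column_stack([np.ones(n), dc, acr, weight])
coef, *_ = np.linalg.lstsq(X, acd, rcond=None)
print(np.round(coef, 2))             # ≈ [60, -0.3, 0.15, 0.25]
```

The negative DC coefficient reproduces the study's qualitative finding (shorter duty cycle, deeper compressions); the fitted model can then be evaluated at fixed DC = 50% to ask which ACR/weight combinations reach a target depth, as the abstract describes.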

  17. 50% duty cycle may be inappropriate to achieve a sufficient chest compression depth when cardiopulmonary resuscitation is performed by female or light rescuers

    PubMed Central

    Lee, Chang Jae; Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Choi, Sung Wook; Kim, Ok Jun

    2015-01-01

    Objective Current guidelines for cardiopulmonary resuscitation recommend chest compressions (CC) during 50% of the duty cycle (DC), in part because of the ease with which individuals may learn to achieve it with practice. However, no consideration has been given to a possible interaction between DC and the depth of CC, which has been the subject of recent study. Our aim was to determine whether 50% DC is inappropriate for achieving sufficient chest compression depth for female and light rescuers. Methods Previously collected CC data, performed by senior medical students guided by metronome sounds with various down-stroke patterns and rates, were included in the analysis. Multiple linear regression analysis was performed to determine the association of average compression depth (ACD) with average compression rate (ACR), DC, and physical characteristics of the performers. Expected ACD was calculated for various settings. Results DC, ACR, body weight, male sex, and self-assessed physical strength were significantly associated with ACD in multivariate analysis. Based on our calculations, with 50% DC, only men with an ACR of 140/min or faster, or with body weight over 74 kg and an ACR of 120/min, can achieve sufficient ACD. Conclusion A shorter DC is independently correlated with deeper CC during simulated cardiopulmonary resuscitation. The optimal DC recommended in current guidelines may be inappropriate for achieving sufficient compression depth, especially for female or lighter-weight rescuers. PMID:27752567

  18. Extended depth of focus tethered capsule OCT endomicroscopy for upper gastrointestinal tract imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Vuong, Barry; Yin, Biwei; Beaulieu-Ouellet, Emilie; Liang, Chia Pin; Beatty, Matthew; Singh, Kanwarpal; Dong, Jing; Grant, Catriona N.; Rosenberg, Mireille; Tearney, Guillermo J.

    2017-02-01

    Endoscopy, the current standard of care for the diagnosis of upper gastrointestinal (GI) diseases, is not ideal as a screening tool because it is costly, necessitates a team of medically trained personnel, and typically requires that the patient be sedated. Endoscopy is also a superficial macroscopic imaging modality and therefore is unable to provide detailed information on the subsurface microscopic structure that is required to render a precise tissue diagnosis. We have overcome these limitations through the development of an optical coherence tomography tethered capsule endomicroscopy (OCT-TCE) imaging device. The OCT-TCE device has a pill-like form factor with an optically clear wall to allow the contained opto-mechanical components to scan the OCT beam along the circumference of the esophagus. Once swallowed, the OCT-TCE device traverses the esophagus naturally via peristalsis and multiple cross-sectional OCT images are obtained at 30-40 μm lateral resolution by 7 μm axial resolution. While this spatial resolution enables differentiation of squamous versus columnar mucosa, crucial microstructural features such as goblet cells (~10 μm), which signify intestinal metaplasia in Barrett's esophagus (BE), and enlarged nuclei that are indicative of dysplasia cannot be resolved with the current OCT-TCE technology. In this work we demonstrate a novel design of a high lateral resolution OCT-TCE device with an extended depth of focus (EDOF). The EDOF is created by use of self-imaging wavefront division multiplexing that produces multiple focused modes at different depths into the sample. The overall size of the EDOF TCE is similar to that of the previous OCT-TCE device (~11 mm by 26 mm) but with a lateral resolution of 8 μm over a depth range of 2 mm. Preliminary esophageal and intestinal imaging using these EDOF optics demonstrates an improvement in the ability to resolve tissue morphology, including individual glands and cells. These results suggest that the use of EDOF optics may be a

  19. Airborne imaging spectrometer data of the Ruby Mountains, Montana: Mineral discrimination using relative absorption band-depth images

    USGS Publications Warehouse

    Crowley, J.K.; Brickey, D.W.; Rowan, L.C.

    1989-01-01

    Airborne imaging spectrometer data collected in the near-infrared (1.2-2.4 μm) wavelength range were used to study the spectral expression of metamorphic minerals and rocks in the Ruby Mountains of southwestern Montana. The data were analyzed by using a new data enhancement procedure: the construction of relative absorption band-depth (RBD) images. RBD images, like band-ratio images, are designed to detect diagnostic mineral absorption features, while minimizing reflectance variations related to topographic slope and albedo differences. To produce an RBD image, several data channels near an absorption band shoulder are summed and then divided by the sum of several channels located near the band minimum. RBD images are both highly specific and sensitive to the presence of particular mineral absorption features. Further, the technique does not distort or subdue spectral features as sometimes occurs when using other data normalization methods. By using RBD images, a number of rock and soil units were distinguished in the Ruby Mountains, including weathered quartz-feldspar pegmatites, marbles of several compositions, and soils developed over poorly exposed mica schists. The RBD technique is especially well suited for detecting weak near-infrared spectral features produced by soils, which may permit improved mapping of subtle lithologic and structural details in semiarid terrains. The observation of soils rich in talc, an important industrial commodity in the study area, also indicates that RBD images may be useful for mineral exploration.
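    The RBD construction described above reduces to simple band arithmetic. A minimal sketch, assuming a calibrated reflectance cube of shape (rows, cols, bands); the band indices are hypothetical, and in practice they bracket a known mineral absorption feature:

```python
import numpy as np

def rbd_image(cube, shoulder_bands, minimum_bands):
    """Sum the channels near the absorption shoulders and divide by the sum
    of channels near the band minimum; larger values flag a deeper feature."""
    shoulder = cube[:, :, shoulder_bands].sum(axis=2)
    minimum = cube[:, :, minimum_bands].sum(axis=2)
    return shoulder / np.maximum(minimum, 1e-12)  # guard against divide-by-zero

# Tiny synthetic demo: pixel (0, 0) absorbs in the band-minimum channels.
cube = np.ones((2, 2, 10))
cube[0, 0, 4:6] = 0.5
rbd = rbd_image(cube, shoulder_bands=[2, 3, 6, 7], minimum_bands=[4, 5])
```

    Because the ratio uses nearby channels on both sides of the feature, slow multiplicative variations (slope shading, albedo) largely cancel, which is the stated advantage over absolute band depth.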

  20. Validation of MODIS Aerosol Optical Depth Retrieval Over Land

    NASA Technical Reports Server (NTRS)

    Chu, D. A.; Kaufman, Y. J.; Ichoku, C.; Remer, L. A.; Tanre, D.; Holben, B. N.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Aerosol optical depths are derived operationally for the first time over land in the visible wavelengths by MODIS (Moderate Resolution Imaging Spectroradiometer) onboard the EOS Terra spacecraft. More than 300 Sun photometer data points from more than 30 AERONET (Aerosol Robotic Network) sites globally were used in validating the aerosol optical depths obtained during July-September 2000. Excellent agreement is found, with retrieval errors within Δτ = ±0.05 ± 0.20τ, as predicted, over (partially) vegetated surfaces, consistent with pre-launch theoretical analysis and aircraft field experiments. In coastal and semi-arid regions, larger errors are caused predominantly by the uncertainty in evaluating the surface reflectance. The excellent fit was achieved despite the ongoing improvements in instrument characterization and calibration. These results show that MODIS-derived aerosol optical depths can be used quantitatively in many applications, with caution regarding residual cloud, snow/ice, and water contamination.
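    Checking retrievals against the quoted envelope Δτ = ±(0.05 + 0.20τ) is a one-line comparison per matched pair. A sketch with made-up optical depth pairs, not the validation data:

```python
import numpy as np

def within_envelope(tau_modis, tau_sunphot):
    """True where the MODIS retrieval falls inside ±(0.05 + 0.20*τ) of the
    Sun photometer value, with τ taken from the Sun photometer."""
    tau_sunphot = np.asarray(tau_sunphot, dtype=float)
    err = np.abs(np.asarray(tau_modis, dtype=float) - tau_sunphot)
    return err <= 0.05 + 0.20 * tau_sunphot

# Hypothetical matched pairs (AERONET value, MODIS retrieval).
tau_aeronet = np.array([0.10, 0.25, 0.50, 1.00])
tau_modis = np.array([0.13, 0.22, 0.70, 1.10])
hits = within_envelope(tau_modis, tau_aeronet)
```

    The fraction of points inside the envelope (`hits.mean()`) is the usual summary statistic for this kind of validation.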

  1. Depth-resolved imaging of colon tumor using optical coherence tomography and fluorescence laminar optical tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tang, Qinggong; Frank, Aaron; Wang, Jianting; Chen, Chao-wei; Jin, Lily; Lin, Jon; Chan, Joanne M.; Chen, Yu

    2016-03-01

    Early detection of neoplastic changes remains a critical challenge in clinical cancer diagnosis and treatment. Many cancers arise from epithelial layers such as those of the gastrointestinal (GI) tract. Current standard endoscopic technology is unable to detect those subsurface lesions. Since cancer development is associated with both morphological and molecular alterations, imaging technologies that can quantitatively image tissue morphological and molecular biomarkers and assess the depth extent of a lesion in real time, without the need for tissue excision, would be a major advance in GI cancer diagnostics and therapy. In this research, we investigated the feasibility of multi-modal optical imaging, combining high-resolution optical coherence tomography (OCT) and depth-resolved high-sensitivity fluorescence laminar optical tomography (FLOT), for structural and molecular imaging. An APC (adenomatous polyposis coli) mouse model was imaged using OCT and FLOT, and the correlated histopathological diagnosis was obtained. Quantitative structural (the scattering coefficient) and molecular (fluorescence intensity) imaging parameters from OCT and FLOT images were developed for multi-parametric analysis. This multi-modal imaging method demonstrated the feasibility of more accurate diagnosis, with a sensitivity (specificity) of 87.4% (87.3%), yielding the largest area under the receiver operating characteristic (ROC) curve. This project results in a new non-invasive multi-modal imaging platform for improved GI cancer detection, which is expected to have a major impact on detection, diagnosis, and characterization of GI cancers, as well as a wide range of epithelial cancers.

  2. Seismic imaging of slab metamorphism and genesis of intermediate-depth intraslab earthquakes

    NASA Astrophysics Data System (ADS)

    Hasegawa, Akira; Nakajima, Junichi

    2017-12-01

    We review studies of intermediate-depth seismicity and seismic imaging of the interior of subducting slabs in relation to slab metamorphism and their implications for the genesis of intermediate-depth earthquakes. Intermediate-depth events form a double seismic zone in the depth range of c. 40-180 km; they occur only at locations where hydrous minerals are present, and are particularly concentrated along dehydration reaction boundaries. Recent studies have revealed detailed spatial distributions of these events and a close relationship with slab metamorphism. Pressure-temperature paths of the crust for cold slabs encounter facies boundaries with large H2O production rates and positive total volume change, which are expected to cause highly active seismicity near the facies boundaries. A belt of upper-plane seismicity in the crust, nearly parallel to the 80-90 km depth contours of the slab surface, has been detected in the cold Pacific slab beneath eastern Japan, and is probably caused by slab crust dehydration with a large H2O production rate. A seismic low-velocity layer in the slab crust persists down to the depth of this upper-plane seismic belt, which provides evidence for a dehydration phase transformation at this depth. Similar low-velocity subducting crust closely related to intraslab seismicity has been detected in several other subduction zones. Seismic tomography studies in NE Japan and northern Chile also revealed the presence of a P-wave low-velocity layer along the lower plane of a double seismic zone. However, in contrast to predictions based on serpentinized mantle, S-wave velocity along this layer is not low. Seismic anisotropy and pore aspect ratio may play a role in generating this unique structure. Although further validation is required, observations of these distinct low P-wave velocities along the lower seismic plane suggest the presence of hydrated rocks or fluids within that layer. These observations support the hypothesis that dehydration

  3. Single grating x-ray imaging for dynamic biological systems

    NASA Astrophysics Data System (ADS)

    Morgan, Kaye S.; Paganin, David M.; Parsons, David W.; Donnelley, Martin; Yagi, Naoto; Uesugi, Kentaro; Suzuki, Yoshio; Takeuchi, Akihisa; Siu, Karen K. W.

    2012-07-01

    Biomedical studies are already benefiting from the excellent contrast offered by phase contrast x-ray imaging, but live imaging work presents several challenges. Living samples make it particularly difficult to achieve high resolution, sensitive phase contrast images, as exposures must be short and cannot be repeated. We therefore present a single-exposure, high-flux method of differential phase contrast imaging [1, 2, 3] in the context of imaging live airways for Cystic Fibrosis (CF) treatment assessment [4]. The CF study seeks to non-invasively observe the liquid lining the airways, which should increase in depth in response to effective treatments. Both high spatial resolution and sensitivity are required in order to track micron size changes in a liquid that is not easily differentiated from the tissue on which it lies. Our imaging method achieves these goals by using a single attenuation grating or grid as a reference pattern, and analyzing how the sample deforms the pattern to quantitatively retrieve the phase depth of the sample. The deformations are mapped at each pixel in the image using local cross-correlations comparing each 'sample and pattern' image with a reference 'pattern only' image taken before the sample is introduced. This produces a differential phase image, which may be integrated to give the sample phase depth.
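    The core of the analysis, locating how far the sample has shifted the reference pattern, can be sketched with a 1-D cross-correlation. The pattern period and shift below are illustrative, not the experimental parameters; the real method does this per pixel in local 2-D windows:

```python
import numpy as np

def local_shift(sample_img, ref_img):
    """Return the integer shift (in pixels) that best aligns the reference
    pattern to the 'sample and pattern' signal, via cross-correlation."""
    s = sample_img - sample_img.mean()
    r = ref_img - ref_img.mean()
    corr = np.correlate(s, r, mode="full")
    return int(np.argmax(corr)) - (len(r) - 1)

x = np.arange(256)
ref = np.sin(2 * np.pi * x / 16)   # 'pattern only' reference (grid image)
sample = np.roll(ref, 3)           # sample deflects the pattern by 3 pixels
shift = local_shift(sample, ref)   # recovers the 3-pixel deformation
```

    Mapping such shifts at every pixel yields the differential phase image, which is then integrated to give the sample's phase depth, as the abstract describes.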

  4. Large field-of-view and depth-specific cortical microvascular imaging underlies regional differences in ischemic brain

    NASA Astrophysics Data System (ADS)

    Qin, Jia; Shi, Lei; Dziennis, Suzan; Wang, Ruikang K.

    2014-02-01

    The ability to non-invasively monitor and quantify blood flow, blood vessel morphology, oxygenation and tissue morphology is important for improved diagnosis, treatment and management of various neurovascular disorders, e.g., stroke. Currently, no imaging technique is available that can satisfactorily extract these parameters from in vivo microcirculatory tissue beds with a large field of view and sufficient resolution at a defined depth, without any harm to the tissue. To enable more effective therapeutics, we need to determine the area of brain that is damaged but not yet dead after focal ischemia. Here we develop an integrated multi-functional imaging system, in which SDW-LSCI (synchronized dual-wavelength laser speckle imaging) is used as a guiding tool for OMAG (optical microangiography) to investigate the fine detail of tissue hemodynamics, such as vessel flow, profile, and flow direction. We determine the utility of the integrated system for serial monitoring of the aforementioned parameters in experimental stroke, middle cerebral artery occlusion (MCAO) in mice. For 90 min MCAO, onsite and 24 hours following reperfusion, we use SDW-LSCI to determine distinct flow and oxygenation variations for differentiation of the infarction, peri-infarct, reduced-flow and contralateral regions. The blood volumes are quantifiable and distinct in the aforementioned regions. We also demonstrate that the behaviors of flow and flow direction in the arteries connected to the MCA play an important role in the time course of MCAO. These achievements may improve our understanding of vascular involvement under pathologic and physiological conditions, and ultimately facilitate clinical diagnosis, monitoring and therapeutic intervention of neurovascular diseases, such as ischemic stroke.

  5. Feasibility of imaging epileptic seizure onset with EIT and depth electrodes.

    PubMed

    Witkowska-Wrobel, Anna; Aristovich, Kirill; Faulkner, Mayo; Avery, James; Holder, David

    2018-06-01

    Imaging ictal and interictal activity with Electrical Impedance Tomography (EIT) using intracranial electrode mats has been demonstrated in animal models of epilepsy. In human epilepsy subjects undergoing presurgical evaluation, depth electrodes are often preferred. The purpose of this work was to evaluate the feasibility of using EIT to localise epileptogenic areas with intracranial electrodes in humans. The accuracy of localisation of the ictal onset zone was evaluated in computer simulations using 9M element FEM models derived from three subjects. 5 mm radius perturbations imitating a single seizure onset event were placed in several locations forming two groups: under depth electrode coverage and in the contralateral hemisphere. Simulations were made for impedance changes of 1% expected for neuronal depolarisation over milliseconds and 10% for cell swelling over seconds. Reconstructions were compared with EEG source modelling for a radially orientated dipole with respect to the closest EEG recording contact. The best accuracy of EIT was obtained using all depth and 32 scalp electrodes, greater than the equivalent accuracy with EEG inverse source modelling. The localisation error was 5.2 ± 1.8, 4.3 ± 0 and 46.2 ± 25.8 mm for perturbations within the volume enclosed by depth electrodes, and 29.6 ± 38.7, 26.1 ± 36.2, 54.0 ± 26.2 mm for those without (EIT 1%, 10% change, EEG source modelling; n = 15 in 3 subjects, p < 0.01). As EIT was insensitive to source dipole orientation, all 15 perturbations within the volume enclosed by depth electrodes were localised, whereas the standard clinical method of visual inspection of EEG voltages only localised 8 out of 15 cases. This suggests that adding EIT to SEEG measurements could be beneficial in localising the onset of seizures.

  6. Dual-radiolabeled nanoparticle probes for depth-independent in vivo imaging of enzyme activation

    NASA Astrophysics Data System (ADS)

    Black, Kvar C. L.; Zhou, Mingzhou; Sarder, Pinaki; Kuchuk, Maryna; Al-Yasiri, Amal Y.; Gunsten, Sean P.; Liang, Kexian; Hennkens, Heather M.; Akers, Walter J.; Laforest, Richard; Brody, Steven L.; Cutler, Cathy S.; Achilefu, Samuel

    2018-02-01

    Quantitative and noninvasive measurement of protease activities has remained an imaging challenge in deep tissues such as the lungs. Here, we designed a dual-radiolabeled probe for reporting the activities of proteases such as matrix metalloproteinases (MMPs) with multispectral single photon emission computed tomography (SPECT) imaging. A gold nanoparticle (NP) was radiolabeled with 125I and 111In and functionalized with an MMP9-cleavable peptide to form a multispectral SPECT imaging contrast agent. In another design, incorporation of 199Au radionuclide into the metal crystal structure of gold NPs provided a superior and stable reference signal in lungs, and 111In was linked to the NP surface via a protease-cleavable substrate, which can serve as an enzyme activity reporter. This work reveals strategies to correlate protease activities with diverse pathologies in a tissue-depth independent manner.

  7. Adaptive Neuro-Fuzzy Inference System (ANFIS)-Based Models for Predicting the Weld Bead Width and Depth of Penetration from the Infrared Thermal Image of the Weld Pool

    NASA Astrophysics Data System (ADS)

    Subashini, L.; Vasudevan, M.

    2012-02-01

    Type 316 LN stainless steel is the major structural material used in the construction of nuclear reactors. Activated-flux tungsten inert gas (A-TIG) welding has been developed to increase the depth of penetration, because the depth of penetration achievable in single-pass TIG welding is limited. Real-time monitoring and control of weld processes is gaining importance because of the requirement for remote welding process technologies. Hence, it is essential to develop computational methodologies based on an adaptive neuro-fuzzy inference system (ANFIS) or artificial neural network (ANN) for predicting and controlling the depth of penetration and weld bead width during A-TIG welding of type 316 LN stainless steel. In the current work, A-TIG welding experiments have been carried out on 6-mm-thick plates of 316 LN stainless steel by varying the welding current. During welding, infrared (IR) thermal images of the weld pool have been acquired in real time, and features have been extracted from them. The welding current values, along with extracted features such as the length and width of the hot spot, the thermal area determined from the Gaussian fit, and the thermal bead width computed from the first-derivative curve, were used as inputs, whereas the measured depth of penetration and weld bead width were used as outputs of the respective models. Accurate ANFIS models have been developed for predicting the depth of penetration and the weld bead width during TIG welding of 6-mm-thick 316 LN stainless steel plates. A good correlation between the measured and predicted values of weld bead width and depth of penetration was observed in the developed models. The performance of the ANFIS models is compared with that of the ANN models.

  8. Use of LANDSAT 8 images for depth and water quality assessment of El Guájaro reservoir, Colombia

    NASA Astrophysics Data System (ADS)

    González-Márquez, Luis Carlos; Torres-Bejarano, Franklin M.; Torregroza-Espinosa, Ana Carolina; Hansen-Rodríguez, Ivette Renée; Rodríguez-Gallegos, Hugo B.

    2018-03-01

    The aim of this study was to evaluate the viability of using Landsat 8 spectral images to estimate water quality parameters and depth in El Guájaro Reservoir. In February and March 2015, two samplings were carried out in the reservoir, coinciding with Landsat 8 image acquisitions. Turbidity, dissolved oxygen, electrical conductivity, pH and depth were evaluated. Through multiple regression analysis between the measured water quality parameters and the reflectance of the pixels corresponding to the sampling stations, statistical models with determination coefficients between 0.6249 and 0.9300 were generated. The results indicate that from a small number of measured parameters we can generate reliable models to estimate the spatial variation of turbidity, dissolved oxygen, pH and depth, as well as the temporal variation of electrical conductivity, so the models generated from Landsat 8 can be used as a tool to facilitate the environmental, economic and social management of the reservoir.

  9. Hand pose estimation in depth image using CNN and random forest

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Cao, Zhiguo; Xiao, Yang; Fang, Zhiwen

    2018-03-01

    Thanks to the availability of low-cost depth cameras like the Microsoft Kinect, 3D hand pose estimation has attracted special research attention in recent years. Due to the large variation in hand viewpoint and the high dimensionality of hand motion, 3D hand pose estimation is still challenging. In this paper we propose a two-stage framework that combines a CNN with a random forest to boost the performance of hand pose estimation. First, we use a standard convolutional neural network (CNN) to regress the hand joints' locations. Second, a random forest refines the joints from the first stage. In the second stage, we propose a pyramid feature which merges the information flow of the CNN. Specifically, we take the rough joints' locations from the first stage, then rotate the convolutional feature maps (and image). After this, for each joint, we map its location to each feature map (and image), crop features around that location from each feature map (and image), and finally feed the extracted features to the random forest for refinement. Experimentally, we evaluate our proposed method on the ICVL dataset and obtain a mean error of about 11 mm; the method also runs in real time on a desktop.

  10. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography.
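    The fixed depth mapping in approach (2) amounts to a linear remap of scene depth into a display depth budget; the dynamic variant re-evaluates the scene range per shot so camera motion or zooming stays inside the same budget. A minimal sketch with hypothetical ranges (depths in arbitrary scene units, display depth in millimetres relative to the screen plane):

```python
def map_depth(z, scene_near, scene_far, disp_near, disp_far):
    """Linearly map a scene depth z into the display's perceived-depth range.

    disp_near/disp_far bound the comfortable depth budget of the target
    display (negative = in front of the screen, positive = behind it).
    """
    t = (z - scene_near) / (scene_far - scene_near)   # normalize to [0, 1]
    return disp_near + t * (disp_far - disp_near)

# A dynamic variant would recompute scene_near/scene_far for each shot
# before calling map_depth, keeping the mapped depth inside the same budget.
mapped = map_depth(25.0, scene_near=10.0, scene_far=100.0,
                   disp_near=-20.0, disp_far=60.0)
```

    Clamping `t` to [0, 1] before the remap is a common extra safeguard when foreground objects briefly leave the measured scene range.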

  11. Depth-estimation-enabled compound eyes

    NASA Astrophysics Data System (ADS)

    Lee, Woong-Bi; Lee, Heung-No

    2018-04-01

    Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
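    The underlying geometry of ranging from receptive-field disparity can be illustrated with a two-view triangulation. This is only a hedged sketch: the baseline and angle are invented numbers, and the actual COMPU-EYE method solves a multi-ommatidium reconstruction rather than this simple two-field case.

```python
import math

def depth_from_disparity(baseline_mm, disparity_rad):
    """Approximate object distance from the angular disparity between two
    overlapping receptive fields separated by a small baseline."""
    return baseline_mm / math.tan(disparity_rad)

# Hypothetical example: 2 mm baseline, 1 degree of angular disparity.
distance_mm = depth_from_disparity(2.0, math.radians(1.0))
```

    As in ordinary stereo, distance grows as disparity shrinks, so range precision degrades quickly for far objects given the tiny baselines of compound eyes.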

  12. Robust Fusion of Color and Depth Data for RGB-D Target Tracking Using Adaptive Range-Invariant Depth Models and Spatio-Temporal Consistency Constraints.

    PubMed

    Xiao, Jingjing; Stolkin, Rustam; Gao, Yuqing; Leonardis, Ales

    2017-09-06

    This paper presents a novel robust method for single target tracking in RGB-D images, and also contributes a substantial new benchmark dataset for evaluating RGB-D trackers. While a target object's color distribution is reasonably motion-invariant, this is not true for the target's depth distribution, which continually varies as the target moves relative to the camera. It is therefore nontrivial to design target models which can fully exploit (potentially very rich) depth information for target tracking. For this reason, much of the previous RGB-D literature relies on color information for tracking, while exploiting depth information only for occlusion reasoning. In contrast, we propose an adaptive range-invariant target depth model, and show how both depth and color information can be fully and adaptively fused during the search for the target in each new RGB-D image. We introduce a new, hierarchical, two-layered target model (comprising local and global models) which uses spatio-temporal consistency constraints to achieve stable and robust on-the-fly target relearning. In the global layer, multiple features, derived from both color and depth data, are adaptively fused to find a candidate target region. In ambiguous frames, where one or more features disagree, this global candidate region is further decomposed into smaller local candidate regions for matching to local-layer models of small target parts. We also note that conventional use of depth data, for occlusion reasoning, can easily trigger false occlusion detections when the target moves rapidly toward the camera. To overcome this problem, we show how combining target information with contextual information enables the target's depth constraint to be relaxed. Our adaptively relaxed depth constraints can robustly accommodate large and rapid target motion in the depth direction, while still enabling the use of depth data for highly accurate reasoning about occlusions. For evaluation, we introduce a new RGB

  13. Non-contrast magnetic resonance imaging for bladder cancer: fused high b value diffusion-weighted imaging and T2-weighted imaging helps evaluate depth of invasion.

    PubMed

    Lee, Minsu; Shin, Su-Jin; Oh, Young Taik; Jung, Dae Chul; Cho, Nam Hoon; Choi, Young Deuk; Park, Sung Yoon

    2017-09-01

    To investigate the utility of fused high b value diffusion-weighted imaging (DWI) and T2-weighted imaging (T2WI) for evaluating depth of invasion in bladder cancer. We included 62 patients with magnetic resonance imaging (MRI) and surgically confirmed urothelial carcinoma in the urinary bladder. An experienced genitourinary radiologist analysed the depth of invasion (T stage <2 or ≥2) using T2WI, DWI, T2WI plus DWI, and fused DWI and T2WI (fusion MRI). Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and accuracy were investigated. Area under the curve (AUC) was analysed to identify T stage ≥2. The rate of patients with surgically confirmed T stage ≥2 was 41.9% (26/62). Sensitivity, specificity, PPV, NPV and accuracy were 50.0%, 55.6%, 44.8%, 60.6% and 53.2%, respectively, with T2WI; 57.7%, 77.8%, 65.2%, 71.8% and 69.4%, respectively, with DWI; 65.4%, 80.6%, 70.8%, 76.3% and 74.2%, respectively, with T2WI plus DWI and 80.8%, 77.8%, 72.4%, 84.9% and 79.0%, respectively, with fusion MRI. AUC was 0.528 with T2WI, 0.677 with DWI, 0.730 with T2WI plus DWI and 0.793 with fusion MRI for T stage ≥2. Fused high b value DWI and T2WI may be a promising non-contrast MRI technique for assessing depth of invasion in bladder cancer. • Accuracy of fusion MRI was 79.0% for T stage ≥2 in bladder cancer. • AUC of fusion MRI was 0.793 for T stage ≥2 in bladder cancer. • Diagnostic performance of fusion MRI was comparable with T2WI plus DWI. • As a non-contrast MRI technique, fusion MRI is useful for bladder cancer.

  14. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-Optimized Co-adds Over 300 deg$^2$ in Five Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg^2 on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2" diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1" in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg^2 of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
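    The depth-optimized weighting scheme can be sketched as an inverse-variance co-add that also emits the per-pixel weight image. The frame values and noise levels below are illustrative; the real pipeline folds seeing and sky transparency into the weights as well:

```python
import numpy as np

def coadd(frames, sky_sigma):
    """Inverse-variance weighted co-addition of registered single-epoch frames.

    Returns the co-added science image and the summed weight image that
    records the relative weight accumulated at each pixel.
    """
    frames = np.asarray(frames, dtype=float)
    w = 1.0 / np.square(np.asarray(sky_sigma, dtype=float))  # inverse variance
    w = w[:, None, None] * np.ones_like(frames)              # per-pixel weights
    science = (w * frames).sum(axis=0) / w.sum(axis=0)
    return science, w.sum(axis=0)

# Two hypothetical 4x4 frames; the second is twice as noisy, so it is
# down-weighted by a factor of four in the stack.
frames = [np.full((4, 4), 10.0), np.full((4, 4), 14.0)]
sci, wt = coadd(frames, sky_sigma=[1.0, 2.0])
```

    Releasing `wt` alongside `sci`, as the record describes, lets downstream photometry propagate the spatially varying effective depth of the stack.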

  15. The Relationship between University Students' Academic Achievement and Perceived Organizational Image

    ERIC Educational Resources Information Center

    Polat, Soner

    2011-01-01

    The purpose of present study was to determine the relationship between university students' academic achievement and perceived organizational image. The sample of the study was the senior students at the faculties and vocational schools in Umuttepe Campus at Kocaeli University. Because the development of organizational image is a long process, the…

  16. Achieving interlocking nails without using an image intensifier

    PubMed Central

    Ogunlusi, Johnson D.; Ine, Henry R.

    2006-01-01

Interlocking nailing is commonly performed using an image intensifier. Image intensifiers are expensive and not readily available in most resource-poor countries of the world. The aim of this study was to achieve interlocking nailing without the use of an image intensifier. This is a prospective descriptive analysis of 40 consecutive cases seen with shaft fractures of the humerus, femur, and tibia. Fracture fixation was done using Surgical Implant Generation Network (SIGN) nails. Forty limbs in 34 patients were studied. There were 12 females and 22 males, giving a ratio of 1:2. The mean age was 35.75±13.16 years and the range was 17–70 years. The studied bones were: humerus 10%, femur 65%, and tibia 25%. The fracture lines were: transverse 40%, oblique 15%, and comminuted 45%. Fracture grades were: closed 90%, grade I 5%, grade II 2.5%, and grade IIIA 2.5%. Surgical approaches were: antegrade 62.5% and retrograde 37.5%. Indications for fixation were: recent fracture 92.5%, non-union 5%, and malunion 3%. Methods of reduction were: open 85% and closed 15%. The mean follow-up period was 1.50±0.78 years. The union time averaged 3 months. The main complication was screw loosening due to severe osteoporosis in one case. It is, therefore, concluded that, with the aid of external jigs and slot finders, interlocking nailing can be achieved without an image intensifier. PMID:17039384

  17. Three dimensional live-cell STED microscopy at increased depth using a water immersion objective

    NASA Astrophysics Data System (ADS)

    Heine, Jörn; Wurm, Christian A.; Keller-Findeisen, Jan; Schönle, Andreas; Harke, Benjamin; Reuss, Matthias; Winter, Franziska R.; Donnert, Gerald

    2018-05-01

Modern fluorescence superresolution microscopes are capable of imaging living cells on the nanometer scale. One such technique is stimulated emission depletion (STED), which increases the microscope's resolution many times over in both the lateral and axial directions. To achieve these high resolutions not only close to the coverslip but also at greater depths, the choice of objective becomes crucial. Oil immersion objectives have frequently been used for STED imaging since their high numerical aperture (NA) leads to high spatial resolutions. But during live-cell imaging, especially at great penetration depths, these objectives have a distinct disadvantage: the refractive index mismatch between the immersion oil and the usually aqueous embedding media of living specimens results in unwanted spherical aberrations. These aberrations distort the point spread functions (PSFs). Notably, during z- and 3D-STED imaging, the resolution increase along the optical axis is severely hampered, if possible at all. To overcome this limitation, we here use a water immersion objective in combination with a spatial light modulator for z-STED measurements of living samples at great depths. This compact design allows switching between objectives without having to adapt the STED beam path and enables on-the-fly alterations of the STED PSF to correct for aberrations. Furthermore, we derive the influence of the NA on the axial STED resolution theoretically and experimentally. We show under live-cell imaging conditions that a water immersion objective yields far superior results to an oil immersion objective at penetration depths of 5-180 μm.

  18. Long-depth imaging of specific gene expressions in whole-mount mouse embryos with single-photon excitation confocal fluorescence microscopy and FISH.

    PubMed

    Palmes-Saloma, C; Saloma, C

    2000-07-01

Long-depth imaging of specific gene expression in the midgestation whole-mount mouse embryo (WME) is demonstrated with single-photon excitation (1PE) confocal fluorescence microscopy and fluorescence in situ hybridization. Expression domains of Pax-6 mRNA transcripts were labeled with an in situ hybridization probe, an RNA sequence complementary to the cloned gene fragment, and were rendered visible using two fluorochrome-conjugated antibodies that fluoresce at peak wavelengths of lambda(F) = 0.525 microm and lambda(F) = 0.580 microm, respectively. Distributions of Pax-6 mRNA domains as deep as 1000 microm in the day 9.5 WME were imaged with a long-working-distance (13.6 mm) objective lens (magnification 5x). The scattering problem posed by the optically thick WME sample is alleviated by careful control of the detector pinhole size and the application of simple but fast postdetection image enhancement techniques, such as space and wavelength averaging, to produce high-quality fluorescence images. A three-dimensional reconstruction that clearly shows the Pax-6 mRNA expression domains in the forebrain, diencephalon, optic cup, and spinal cord of the day 9.5 WME is obtained. The advantages of 1PE confocal fluorescence imaging over two-photon excitation fluorescence imaging are discussed for the case of long-depth imaging in highly scattering media. Imaging in midgestation WMEs at optical depths of more than 350 microm has not yet been realized with two-photon fluorescence excitation. Copyright 2000 Academic Press.

  19. MassImager: A software for interactive and in-depth analysis of mass spectrometry imaging data.

    PubMed

    He, Jiuming; Huang, Luojiao; Tian, Runtao; Li, Tiegang; Sun, Chenglong; Song, Xiaowei; Lv, Yiwei; Luo, Zhigang; Li, Xin; Abliz, Zeper

    2018-07-26

Mass spectrometry imaging (MSI) has become a powerful tool to probe molecular events in biological tissue. However, one of the biggest challenges remains the lack of easy-to-use data-processing software for discovering the underlying biological information in complicated and huge MSI datasets. Here, a user-friendly and full-featured MSI software package named MassImager, comprising three subsystems (Solution, Visualization and Intelligence), is developed, focusing on interactive visualization, in-situ biomarker discovery and artificial-intelligence-assisted pathological diagnosis. Simplified data preprocessing and high-throughput MSI data exchange and serialization jointly guarantee quick reconstruction of ion images and rapid analysis of datasets of dozens of gigabytes. It also offers diverse self-defined operations for visual processing, including multiple ion visualization, multiple channel superposition, image normalization, visual resolution enhancement and image filtering. Regions-of-interest analysis can be performed precisely through interactive visualization between the ion images and mass spectra, together with an overlaid optical image guide, to directly find region-specific biomarkers. Moreover, automatic pattern recognition can be achieved immediately upon supervised or unsupervised multivariate statistical modeling. Clear discrimination between cancer tissue and adjacent tissue within an MSI dataset can be seen in the generated pattern image, which shows great potential for visual in-situ biomarker discovery and artificial-intelligence-assisted pathological diagnosis of cancer. All the features are integrated in MassImager to provide a deep MSI processing solution at the in-situ metabolomics level for biomarker discovery and future clinical pathological diagnosis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  20. A Flexible Annular-Array Imaging Platform for Micro-Ultrasound

    PubMed Central

    Qiu, Weibao; Yu, Yanyan; Chabok, Hamid Reza; Liu, Cheng; Tsang, Fu Keung; Zhou, Qifa; Shung, K. Kirk; Zheng, Hairong; Sun, Lei

    2013-01-01

Micro-ultrasound is an invaluable imaging tool for many clinical and preclinical applications requiring high resolution (approximately several tens of micrometers). Imaging systems for micro-ultrasound, including single-element imaging systems and linear-array imaging systems, have been developed extensively in recent years. Single-element systems are cheaper, but linear-array systems give much better image quality at a higher expense. Annular-array-based systems provide a third alternative, striking a balance between image quality and expense. This paper presents the development of a novel programmable and real-time annular-array imaging platform for micro-ultrasound. It supports multi-channel dynamic beamforming techniques for large-depth-of-field imaging. The major image processing algorithms were implemented with novel field-programmable gate array technology for high speed and flexibility. Real-time imaging was achieved by fast processing algorithms and a high-speed data transfer interface. The platform utilizes a printed circuit board scheme incorporating state-of-the-art electronics for compactness and cost effectiveness. Extensive tests including hardware, algorithms, wire phantom, and tissue-mimicking phantom measurements were conducted to demonstrate the good performance of the platform. The calculated contrast-to-noise ratio (CNR) of the tissue phantom measurements was higher than 1.2 over the 3.8 to 8.7 mm imaging depth range. The platform supported more than 25 images per second for real-time image acquisition. The depth of field showed about a 2.5-fold improvement compared to single-element transducer imaging. PMID:23287923
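
The CNR figure quoted above can be computed from target and background regions of interest. The definition used in the sketch below, CNR = |μt − μb| / √(σt² + σb²), is one common variant; the abstract does not state which form the authors used.

```python
import numpy as np

# Sketch: contrast-to-noise ratio between a target ROI and a background ROI
# in a B-mode ultrasound image. This particular CNR definition is an
# assumption; several variants exist in the literature.

def cnr(target_roi, background_roi):
    t = np.asarray(target_roi, dtype=float).ravel()
    b = np.asarray(background_roi, dtype=float).ravel()
    return abs(t.mean() - b.mean()) / np.sqrt(t.var() + b.var())

print(cnr([2, 2, 4, 4], [0, 0, 2, 2]))  # mean gap 2, combined std sqrt(2)
```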

  1. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis.

    PubMed

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-06-28

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.
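
The core of the breathing-rate estimation described above is a spectral analysis of the depth-sensor chest-motion signal. A minimal sketch follows; the 25 Hz sampling rate and the synthetic 0.25 Hz (15 breaths/min) signal are illustrative assumptions, not the paper's acquisition parameters.

```python
import numpy as np

# Sketch: estimate a periodic rate (breathing or heart) from a 1-D sensor
# signal by locating the dominant peak of its amplitude spectrum.

def dominant_rate_per_min(signal, fs):
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))  # drop DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return 60.0 * freqs[np.argmax(spectrum)]

fs = 25.0                                   # depth-sensor frame rate (assumed)
t = np.arange(0, 40, 1 / fs)                # 40 s of data
chest_depth = np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz breathing motion
print(dominant_rate_per_min(chest_depth, fs))  # -> 15.0 breaths/min
```

Real Kinect depth data would first need ROI averaging and denoising, as the abstract notes, before this spectral step is meaningful.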

  2. Review of spectral imaging technology in biomedical engineering: achievements and challenges.

    PubMed

    Li, Qingli; He, Xiaofu; Wang, Yiting; Liu, Hongying; Xu, Dongrong; Guo, Fangmin

    2013-10-01

    Spectral imaging is a technology that integrates conventional imaging and spectroscopy to get both spatial and spectral information from an object. Although this technology was originally developed for remote sensing, it has been extended to the biomedical engineering field as a powerful analytical tool for biological and biomedical research. This review introduces the basics of spectral imaging, imaging methods, current equipment, and recent advances in biomedical applications. The performance and analytical capabilities of spectral imaging systems for biological and biomedical imaging are discussed. In particular, the current achievements and limitations of this technology in biomedical engineering are presented. The benefits and development trends of biomedical spectral imaging are highlighted to provide the reader with an insight into the current technological advances and its potential for biomedical research.

  3. Diving depths

    NASA Astrophysics Data System (ADS)

    Clanet, Christophe; Guillet, Thibault; Coux, Martin; Quéré, David

    2017-11-01

Many seabirds (gannets, pelicans, gulls, albatrosses) dive into water at high speeds (25 m/s) in order to capture underwater prey. Diving depths of 20 body lengths are reported in the literature. This value is much larger than that achieved by humans, which is typically of the order of 3. We study this difference by comparing the vertical impact of slender vs bluff bodies. We quantify the influence of wetting and of geometry on the trajectory and discuss the different laws that govern the diving depth.

  4. Scientific Achievements of Global ENA Imaging and Future Outlook

    NASA Astrophysics Data System (ADS)

    Brandt, P. C.; Stephens, G. K.; Hsieh, S. Y. W.; Demajistre, R.; Gkioulidou, M.

    2017-12-01

    Energetic Neutral Atom (ENA) imaging is the only technique that can capture the instantaneous global state of energetic ion distributions in planetary magnetospheres and from the heliosheath. In particular at Earth, ENA imaging has been used to diagnose the morphology and dynamics of the ring current and plasma sheet down to several minutes time resolution and is therefore a critical tool to validate global ring current physics models. However, this requires a detailed understanding for how ENAs are produced from the ring current and inversion techniques that are thoroughly validated against in-situ measurements. To date, several missions have carried out planetary and heliospheric ENA imaging including Cassini, JUICE, IBEX of the heliosphere, and POLAR, Astrid-1, Double Star, TWINS and IMAGE of the terrestrial magnetosphere. Because of their path-finding successes, a future global-imaging mission concept, MEDICI, has been recommended in the Heliophysics Decadal Survey. Its core mission consists of two satellites in one circular, near-polar orbit beyond the radiation belts at around 8 RE, with ENA, EUV and FUV cameras. This recommendation has driven the definition of smaller mission concepts that address specific science aspects of the MEDICI concept. In this presentation, we review the past scientific achievements of ENA imaging with a focus on the terrestrial magnetosphere from primarily the NASA IMAGE and the TWINS missions. The highlighted achievements include the storm, sub-storm and quiet-time morphology, dynamics and pitch-angle distributions of the ring current, global differential acceleration of protons versus O+ ions, the structure of the global electrical current systems associated with the plasma pressure of protons and O+ ions up to around 200 keV, and the relation between ring current and plasmasphere. We discuss the need for future global observations of the ring current, plasma sheet and magnetosheath ion distributions based and derive their

  5. Probing neural tissue with airy light-sheet microscopy: investigation of imaging performance at depth within turbid media

    NASA Astrophysics Data System (ADS)

    Nylk, Jonathan; McCluskey, Kaley; Aggarwal, Sanya; Tello, Javier A.; Dholakia, Kishan

    2017-02-01

Light-sheet microscopy (LSM) has received great interest for fluorescence imaging applications in biomedicine as it facilitates three-dimensional visualisation of large sample volumes with high spatiotemporal resolution whilst minimising irradiation of, and photo-damage to, the specimen. Despite these advantages, LSM can only visualise superficial layers of turbid tissues, such as mammalian neural tissue. Propagation-invariant light modes have played a key role in the development of high-resolution LSM techniques as they overcome the natural divergence of a Gaussian beam, enabling uniform and thin light-sheets over large distances. Most notably, Bessel and Airy beam-based light-sheet imaging modalities have been demonstrated. In the single-photon excitation regime and in lightly scattering specimens, Airy-LSM has given competitive performance with advanced Bessel-LSM techniques. Airy and Bessel beams share the property of self-healing, the ability of the beam to regenerate its transverse beam profile after propagation around an obstacle. Bessel-LSM techniques have been shown to increase the penetration depth of the illumination into turbid specimens, but this effect has been understudied in biologically relevant tissues, particularly for Airy beams. It is expected that Airy-LSM will give a similar enhancement over Gaussian-LSM. In this paper, we report on the comparison of Airy-LSM and Gaussian-LSM imaging modalities within cleared and non-cleared mouse brain tissue. In particular, we examine image quality versus tissue depth by quantitative spatial Fourier analysis of neural structures in virally transduced fluorescent tissue sections, showing a three-fold enhancement at 50 μm depth into non-cleared tissue with Airy-LSM. Complementary analysis is performed by resolution measurements in bead-injected tissue sections.

  6. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. Firstly, the depth information of the reference viewpoint image is quickly obtained; during this process, SAD is chosen as the similarity measure function. Then the reference image is layered and the parallax is calculated based on the depth information. Using the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision requirements on the depth map and complex mapping operations. Experiments show that this algorithm can achieve the synthesis of virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very impressive. The results show that this method achieves satisfactory image quality: the average SSIM value relative to real viewpoint images reaches 0.9525, the PSNR reaches 38.353 and the image histogram similarity reaches 93.77%.
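
The PSNR figure used above to score rendered virtual views is a standard mean-squared-error metric. A minimal sketch, assuming 8-bit images (peak value 255):

```python
import numpy as np

# Sketch: PSNR between a rendered virtual view and the corresponding real
# view, PSNR = 10 * log10(peak^2 / MSE). Higher is better; identical images
# give infinite PSNR.

def psnr(reference, rendered, peak=255.0):
    mse = np.mean((np.asarray(reference, float) - np.asarray(rendered, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((4, 4))
b = a + 1.0                  # every pixel off by 1 -> MSE = 1
print(round(psnr(a, b), 2))  # -> 48.13
```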

  7. Modelling of influence of spherical aberration coefficients on depth of focus of optical systems

    NASA Astrophysics Data System (ADS)

    Pokorný, Petr; Šmejkal, Filip; Kulmon, Pavel; Mikš, Antonín.; Novák, Jiří; Novák, Pavel

    2017-06-01

This contribution describes how to model the influence of spherical aberration coefficients on the depth of focus of optical systems. Analytical formulas for the calculation of the beam's caustics are presented. The conditions on the aberration coefficients are derived for two cases, when we require that either the Strehl definition or the gyration radius should be identical in two symmetrically placed planes with respect to the paraxial image plane. One can calculate the maximum depth of focus and the minimum diameter of the circle of confusion of the optical system corresponding to the chosen conditions. This contribution helps to understand how spherical aberration may affect the depth of focus and how to design an optical system with the required depth of focus. One can perform computer modelling and design of the optical system and its spherical aberration in order to achieve the required depth of focus.

  8. The depth estimation of 3D face from single 2D picture based on manifold learning constraints

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia

    2018-04-01

The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; reconstructing the 3D face depth information from the selected optimal subset greatly reduces the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2 dimensions. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the pre-reduction feature point information of each cluster center is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, yielding the final depth estimates at greatly reduced computational complexity. Compared with the traditional traversal search estimation method, the proposed method reduces the error rate by 0.49, and the number of searches decreases with the change of category. In order to validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
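
The subset-selection step above reduces to a nearest-centroid decision: compare the query image's feature vector with each K-means cluster centre and search only the winning cluster's subset. A minimal sketch (the 2-D centres and query below are illustrative assumptions, not data from the paper):

```python
import numpy as np

# Sketch: pick the cluster whose centre is nearest (Euclidean distance) to the
# query feature vector; only that cluster's subset is then searched for depth
# estimation, avoiding a full traversal of the database.

def nearest_cluster(query, centres):
    d = np.linalg.norm(np.asarray(centres) - np.asarray(query), axis=1)
    return int(np.argmin(d))

centres = np.array([[0.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
print(nearest_cluster([9.0, 8.0], centres))  # -> 1 (closest centre)
```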

  9. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg{sup 2} IN FIVE FILTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.

We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg^2 on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg^2 of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  10. Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters

    NASA Technical Reports Server (NTRS)

    Bos, Brent; Memarsadeghi, Nargess; Kizhner, Semion; Antonille, Scott

    2013-01-01

A large depth-of-field particle image velocimeter (PIV) is designed to characterize dynamic dust environments on planetary surfaces. This instrument detects lofted dust particles, and senses the number of particles per unit volume, measuring their sizes, velocities (both speed and direction), and shape factors when the particles are large. To measure these particle characteristics in-flight, the instrument gathers two-dimensional image data at a high frame rate, typically >4,000 Hz, generating large amounts of data for every second of operation, approximately 6 GB/s. To characterize a planetary dust environment that is dynamic, the instrument would have to operate for at least several minutes during an observation period, easily producing more than a terabyte of data per observation. Given current technology, this amount of data would be very difficult to store onboard a spacecraft, and downlink to Earth. Since 2007, innovators have been developing an autonomous image analysis algorithm architecture for the PIV instrument to greatly reduce the amount of data that it has to store and downlink. The algorithm analyzes PIV images and automatically reduces the image information down to only the particle measurement data that is of interest, reducing the amount of data that is handled by a factor of more than 10^3. The state of development for this innovation is now fairly mature, with a functional algorithm architecture, along with several key pieces of algorithm logic, that has been proven through field test data acquired with a proof-of-concept PIV instrument.
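
The data volumes quoted above can be checked with back-of-the-envelope arithmetic. The 5-minute observation duration below is an illustrative assumption ("at least several minutes" in the text):

```python
# Sanity check: 6 GB/s sustained over a 5-minute observation, then a >10^3
# onboard reduction by the autonomous image analysis algorithm.

rate_gb_per_s = 6.0
duration_s = 5 * 60
raw_tb = rate_gb_per_s * duration_s / 1000.0    # raw volume in TB (decimal)
reduced_gb = rate_gb_per_s * duration_s / 1e3   # volume after 10^3 reduction, GB
print(raw_tb, reduced_gb)  # 1.8 TB raw -> ~1.8 GB after reduction
```

This matches the abstract's claim of "more than a terabyte per observation" and shows why a 10^3 reduction brings the downlink volume into a practical range.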

  11. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers.

    PubMed

    Buyel, Johannes F; Gruchow, Hannah M; Fischer, Rainer

    2015-01-01

The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m^-2 when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m^-2 with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins.

  12. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers

    PubMed Central

    Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer

    2015-01-01

The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m^-2 when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m^-2 with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037

  13. Diffraction enhanced kinetic depth X-ray imaging

    NASA Astrophysics Data System (ADS)

    Dicken, A.

An increasing number of fields would benefit from a single analytical probe that can characterise bulk objects that vary in morphology and/or material composition. These fields include security screening, medicine and material science. In this study the X-ray region is shown to be an effective probe for the characterisation of materials. The most prominent analytical techniques that utilise X-radiation are reviewed. The study then focuses on methods of amalgamating the three-dimensional power of kinetic depth X-ray (KDEX) imaging with the materials discrimination of angular dispersive X-ray diffraction (ADXRD), thus providing KDEX with a much needed material-specific counterpart. A knowledge of the sample position is essential for the correct interpretation of diffraction signatures. Two different sensor geometries (i.e. circumferential and linear) that are able to collect and interpret multiple unknown material diffraction patterns and attribute them to their respective loci within an inspection volume are investigated. The circumferential and linear detector geometries are hypothesised, simulated and then tested in an experimental setting, with the latter demonstrating a greater ability to discern between mixed diffraction patterns produced by differing materials. Factors known to confound the linear diffraction method, such as sample thickness and radiation energy, have been explored and quantified, with a possible means of mitigation being identified (i.e. increasing the sample-to-detector distance). A series of diffraction patterns (following the linear diffraction approach) were obtained from a single phantom object that was simultaneously interrogated via KDEX imaging. Areas containing diffraction signatures matched from a threat library have been highlighted in the KDEX imagery via colour encoding, and the match index is inferred by intensity. This union is the first example of its kind and is called diffraction enhanced KDEX imagery. Finally an additional

  14. Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding

    NASA Astrophysics Data System (ADS)

    Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito

    2015-02-01

    Optical transfer functions (OTFs) on various directional spatial frequency axes for a cubic phase mask (CPM) with circular and square apertures are investigated. Although the OTF has no zero points, it comes very close to zero for a circular aperture at low frequencies on the diagonal axis, which results in degradation of the restored images. The reason for the close-to-zero value in the OTF is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid the close-to-zero condition, a square aperture with a CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent de-blurred images over a large depth of field.
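
    The OTF comparison above can be reproduced numerically: build a pupil with a cubic phase, take the incoherent PSF, and Fourier-transform it to get the OTF on the diagonal axis. The cubic coefficient, grid size and aperture half-widths below are illustrative assumptions, not the paper's optimized values.

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
alpha = 30.0  # cubic phase strength (assumed, not the optimized value)

def mtf(aperture):
    pupil = aperture * np.exp(1j * alpha * (X**3 + Y**3))  # CPM phase
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2   # incoherent PSF
    otf = np.fft.fft2(psf)                                 # OTF = FT of PSF
    return np.abs(otf) / np.abs(otf).max()                 # normalized MTF

square = ((np.abs(X) <= 0.9) & (np.abs(Y) <= 0.9)).astype(float)
circle = (X**2 + Y**2 <= 0.9**2).astype(float)

# Sample the MTF along the diagonal frequency axis, where the circular
# aperture is expected to dip toward zero at low frequencies.
m_sq, m_ci = mtf(square), mtf(circle)
diag_sq = np.array([m_sq[k, k] for k in range(20)])
diag_ci = np.array([m_ci[k, k] for k in range(20)])
print(diag_ci.min(), diag_sq.min())
```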

  15. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing.

    PubMed

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-09-30

    UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently consumer grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species ratios of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently data of both high spatial and spectral resolution is not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R²-values up to 0.77) corresponded with the OBRA findings.
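
    The OBRA idea can be sketched in a few lines: for every band pair, regress the log band ratio against known depth and keep the pair with the highest R². The reflectance matrix, depths and attenuation below are synthetic stand-ins, not the study's field data.

```python
import numpy as np

# Synthetic stand-in data: 50 samples, 6 spectral bands; band 5 plays the
# role of a NIR band whose reflectance decays with depth.
rng = np.random.default_rng(0)
n_samples, n_bands = 50, 6
depth = rng.uniform(0.1, 1.0, n_samples)             # depth (m), synthetic
refl = rng.uniform(0.05, 0.5, (n_samples, n_bands))  # base reflectances
refl[:, 5] *= np.exp(-6.0 * depth)                   # fake NIR attenuation

# OBRA sweep: best (i, j) pair by R² of ln(R_i / R_j) vs. depth.
best = (None, -np.inf)
for i in range(n_bands):
    for j in range(n_bands):
        if i == j:
            continue
        ratio = np.log(refl[:, i] / refl[:, j])
        r2 = np.corrcoef(ratio, depth)[0, 1] ** 2
        if r2 > best[1]:
            best = ((i, j), r2)
print(best)  # band pair with the strongest depth association
```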

  16. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis

    PubMed Central

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-01-01

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687
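
    The spectral-analysis step for breathing-rate estimation can be sketched as locating the dominant FFT peak of a chest-depth time series. The signal below is synthetic (0.25 Hz breathing at an assumed 30 fps depth-frame rate), not Kinect data.

```python
import numpy as np

fs = 30.0                     # assumed depth-sensor frame rate (Hz)
t = np.arange(0, 60, 1 / fs)  # 60 s recording
# Synthetic thorax depth: 0.25 Hz breathing plus sensor noise (mm).
depth_mm = (5.0 * np.sin(2 * np.pi * 0.25 * t)
            + 0.5 * np.random.default_rng(1).normal(size=t.size))

sig = depth_mm - depth_mm.mean()
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(sig.size, d=1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)         # plausible breathing band
f_breath = freqs[band][np.argmax(spectrum[band])]
print(round(f_breath * 60))  # → 15 breaths per minute
```

The same peak-picking applies to the mouth-area intensity signal for heart rate, with a higher frequency band.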

  17. Seismic depth imaging of sequence boundaries beneath the New Jersey shelf

    NASA Astrophysics Data System (ADS)

    Riedel, M.; Reiche, S.; Aßhoff, K.; Buske, S.

    2018-06-01

    Numerical modelling of fluid flow and transport processes relies on a well-constrained geological model, which is usually provided by seismic reflection surveys. In the New Jersey shelf area a large number of 2D seismic profiles provide an extensive database for constructing a reliable geological model. However, for the purpose of modelling groundwater flow, the seismic data need to be depth-converted, which is usually accomplished using complementary data from borehole logs. Due to the limited availability of such data in the New Jersey shelf, we propose a two-stage processing strategy with particular emphasis on reflection tomography and pre-stack depth imaging. We apply this workflow to a seismic section crossing the entire New Jersey shelf. Due to the tomography-based velocity modelling, the processing flow does not depend on the availability of borehole logging data. Nonetheless, we validate our results by comparing the migrated depths of selected geological horizons to borehole core data from the IODP Expedition 313 drill sites, located at three positions along our seismic line. The comparison shows that in the top 450 m of the migrated section, most of the selected reflectors were positioned with an accuracy close to the seismic resolution limit (≈ 4 m) for that data. For deeper layers the accuracy still remains within one seismic wavelength for the majority of the tested horizons. These results demonstrate that the processed seismic data provide a reliable basis for constructing a hydrogeological model. Furthermore, the proposed workflow can be applied to other seismic profiles in the New Jersey shelf, which will lead to an even better constrained model.
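
    The basic step behind any such depth conversion is turning reflector two-way times into depths through an interval-velocity model. A minimal sketch, with illustrative velocities and times rather than values from the New Jersey survey:

```python
import numpy as np

# Interval velocities (m/s) and two-way traveltime spent in each layer (s).
layer_v = np.array([1500.0, 1800.0, 2100.0])
layer_twt = np.array([0.2, 0.3, 0.25])

# Depth of each interface = cumulative sum of v * (one-way time per layer).
depths = np.cumsum(layer_v * layer_twt / 2.0)
print(depths)  # cumulative interface depths in metres (≈ 150, 420, 682.5)
```

Tomography-based workflows refine `layer_v` from the seismic data itself, which is what removes the dependence on borehole logs.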

  18. Correlation Plenoptic Imaging.

    PubMed

    D'Angelo, Milena; Pepe, Francesco V; Garuccio, Augusto; Scarcelli, Giuliano

    2016-06-03

    Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.

  19. Correlation Plenoptic Imaging

    NASA Astrophysics Data System (ADS)

    D'Angelo, Milena; Pepe, Francesco V.; Garuccio, Augusto; Scarcelli, Giuliano

    2016-06-01

    Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.

  20. 3D receiver function Kirchhoff depth migration image of Cascadia subduction slab weak zone

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Allen, R. M.; Bodin, T.; Tauzin, B.

    2016-12-01

    We have developed a computationally efficient algorithm for applying 3D Kirchhoff depth migration to teleseismic receiver function data. By combining the primary PS arrival with later multiple arrivals, we are able to reveal better knowledge of the Earth's discontinuity structure (transmission and reflection). This method is highly useful compared with the traditional CCP method when dipping structure, such as a subduction slab, is encountered during imaging. We apply our method to regional Cascadia subduction zone receiver function data and obtain a high-resolution 3D migration image, for both primaries and multiples. The image shows a clear slab weak zone (slab hole) in the upper plate boundary under Northern California and the whole of Oregon. Compared with previous 2D receiver function images from 2D arrays (CAFE and CASC93), the position of the weak zone shows interesting coherency. This weak zone is also coherent with missing local seismicity and rising heat, which leads us to consider and compare the ocean plate structure and the hydraulic fluid processes during the formation and migration of the subduction slab.

  1. Multispectral near-IR reflectance imaging of simulated early occlusal lesions: Variation of lesion contrast with lesion depth and severity

    PubMed Central

    Simon, Jacob C.; Chan, Kenneth H.; Darling, Cynthia L.; Fried, Daniel

    2014-01-01

    Background and Objectives Early demineralization appears with high contrast at near-IR wavelengths due to a ten to twenty fold difference in the magnitude of light scattering between sound and demineralized enamel. Water absorption in the near-IR has a significant effect on the lesion contrast and the highest contrast has been measured in spectral regions with higher water absorption. The purpose of this study was to determine how the lesion contrast changes with lesion severity and depth for different spectral regions in the near-IR and compare that range of contrast with visible reflectance and fluorescence. Materials and Methods Forty-four human molars were used in this in vitro study. Teeth were painted with an acid-resistant varnish, leaving a 4×4 mm window on the occlusal surface of each tooth exposed for demineralization. Artificial lesions were produced in the unprotected windows after 12–48 hr exposure to a demineralizing solution at pH-4.5. Near-IR reflectance images were acquired over several near-IR spectral distributions, visible light reflectance, and fluorescence with 405-nm excitation and detection at wavelengths greater than 500-nm. Crossed polarizers were used for reflectance measurements to reduce interference from specular reflectance. Cross polarization optical coherence tomography (CP-OCT) was used to non-destructively assess the depth and severity of demineralization in each sample window. Matching two dimensional CP-OCT images of the lesion depth and integrated reflectivity were compared with the reflectance and fluorescence images to determine how accurately the variation in the lesion contrast represents the variation in the lesion severity. Results Artificial lesions appear more uniform on tooth surfaces exposed to an acid challenge at visible wavelengths than they do in the near-IR. Measurements of the lesion depth and severity using CP-OCT show that the lesion severity varies markedly across the sample windows and that the lesion

  2. Capturing Motion and Depth Before Cinematography.

    PubMed

    Wade, Nicholas J

    2016-01-01

    Visual representations of biological states have traditionally faced two problems: they lacked motion and depth. Attempts were made to supply these wants over many centuries, but the major advances were made in the early-nineteenth century. Motion was synthesized by sequences of slightly different images presented in rapid succession and depth was added by presenting slightly different images to each eye. Apparent motion and depth were combined some years later, but they tended to be applied separately. The major figures in this early period were Wheatstone, Plateau, Horner, Duboscq, Claudet, and Purkinje. Others later in the century, like Marey and Muybridge, were stimulated to extend the uses to which apparent motion and photography could be applied to examining body movements. These developments occurred before the birth of cinematography, and significant insights were derived from attempts to combine motion and depth.

  3. Confocal spectroscopic imaging measurements of depth dependent hydration dynamics in human skin in-vivo

    NASA Astrophysics Data System (ADS)

    Behm, P.; Hashemi, M.; Hoppe, S.; Wessel, S.; Hagens, R.; Jaspers, S.; Wenck, H.; Rübhausen, M.

    2017-11-01

    We present confocal spectroscopic imaging measurements applied to in-vivo studies to determine the depth dependent hydration profiles of human skin. The observed spectroscopic signal covers the spectral range from 810 nm to 2100 nm, allowing relevant absorption signals to be probed that can be associated with e.g. lipid and water-absorption bands. We employ a spectrally sensitive autofocus mechanism that allows ultrafast focusing of the measurement spot on the skin and subsequently probes the evolution of the absorption bands as a function of depth. We determine the change of the water concentration in m%. The water concentration follows a sigmoidal behavior, with an increase of the water content of about 70% within 5 μm at a depth of about 14 μm. We have applied our technique to study the hydration dynamics of skin before and after treatment with different concentrations of glycerol, indicating that an increase of the glycerol concentration leads to an enhanced water concentration in the stratum corneum. Moreover, in contrast to traditional corneometry, we have found that the application of Aluminium Chlorohydrate has no impact on the hydration of skin.
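
    The sigmoidal profile described above can be written down directly as a logistic function of depth: a ~70 m% rise over ~5 μm centred near 14 μm. The baseline value is an assumption for illustration; only the rise, width and centre are read off the abstract.

```python
import numpy as np

def water_content(z_um, w_min=10.0, w_rise=70.0, z0=14.0, width=5.0):
    """Logistic hydration profile vs. depth z (µm).

    `w_min` (baseline m%) is an assumed value; the rise amplitude,
    transition width and centre depth follow the abstract.
    """
    return w_min + w_rise / (1.0 + np.exp(-4.0 * (z_um - z0) / width))

z = np.linspace(0, 30, 7)
print(np.round(water_content(z), 1))  # monotone rise across ~14 µm
```

Fitting such a curve to the measured absorption-band depth series yields the reported concentration change.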

  4. Improving depth estimation from a plenoptic camera by patterned illumination

    NASA Astrophysics Data System (ADS)

    Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.

    2015-05-01

    Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device, and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, synthetically manipulate the aperture and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration, and in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth-maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.

  5. Z-depth integration: a new technique for manipulating z-depth properties in composited scenes

    NASA Astrophysics Data System (ADS)

    Steckel, Kayla; Whittinghill, David

    2014-02-01

    This paper presents a new technique in the production pipeline of asset creation for virtual environments called Z-Depth Integration (ZeDI). ZeDI is intended to reduce the time required to place elements at the appropriate z-depth within a scene. Though ZeDI is intended for use primarily in two-dimensional scene composition, depth-dependent "flat" animated objects are often critical elements of augmented and virtual reality applications (AR/VR). ZeDI is derived from "deep image compositing", a capacity implemented within the OpenEXR file format. In order to trick the human eye into perceiving overlapping scene elements as being in front of or behind one another, the developer must manually manipulate which pixels of an element are visible in relation to other objects embedded within the environment's image sequence. ZeDI improves on this process by providing a means for interacting with procedurally extracted z-depth data from a virtual environment scene. By streamlining the process of defining objects' depth characteristics, it is expected that the time and energy required for developers to create compelling AR/VR scenes will be reduced. In the proof of concept presented in this manuscript, ZeDI is implemented for pre-rendered virtual scene construction via an Adobe After Effects plug-in.
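
    The per-pixel operation ZeDI automates can be sketched directly: given each layer's colour and z-depth buffers, show whichever layer is nearest the camera at every pixel. The 2×2 layers below are synthetic examples.

```python
import numpy as np

# Two flat layers with per-pixel z-depth buffers (smaller z = closer).
color_a = np.full((2, 2, 3), [255, 0, 0])   # red layer
color_b = np.full((2, 2, 3), [0, 0, 255])   # blue layer
z_a = np.array([[1.0, 5.0], [1.0, 5.0]])
z_b = np.array([[3.0, 3.0], [3.0, 3.0]])

# Per-pixel visibility mask, broadcast over the colour channels.
front = (z_a <= z_b)[..., None]
composite = np.where(front, color_a, color_b)
print(composite[0, 0], composite[0, 1])  # red wins at (0,0), blue at (0,1)
```

Deep-image compositing generalizes this to many samples per pixel; ZeDI's contribution is extracting the z buffers procedurally rather than hand-painting the masks.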

  6. Tunable semiconductor laser at 1025-1095 nm range for OCT applications with an extended imaging depth

    NASA Astrophysics Data System (ADS)

    Shramenko, Mikhail V.; Chamorovskiy, Alexander; Lyu, Hong-Chou; Lobintsov, Andrei A.; Karnowski, Karol; Yakubovich, Sergei D.; Wojtkowski, Maciej

    2015-03-01

    A tunable semiconductor laser for the 1025-1095 nm spectral range is developed based on an InGaAs semiconductor optical amplifier and a narrow band-pass acousto-optic tunable filter in a fiber ring cavity. Mode-hop-free sweeping with tuning speeds of up to 10⁴ nm/s was demonstrated. The instantaneous linewidth is in the range of 0.06-0.15 nm, side-mode suppression is up to 50 dB and the polarization extinction ratio exceeds 18 dB. Optical power in the output single-mode fiber reaches 20 mW. The laser was used in an OCT system for imaging a contact lens immersed in a 0.5% intralipid solution. The cross-section image provided an imaging depth of more than 5 mm.

  7. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, and thus finding stereo correspondences is enhanced.
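
    The "Kalman-like" per-pixel update described above amounts to inverse-variance-weighted fusion of the current depth estimate with each new micro-image observation. A minimal sketch with illustrative numbers:

```python
def fuse(d, var, d_obs, var_obs):
    """Inverse-variance-weighted fusion of two depth estimates.

    Returns the fused depth and its (reduced) variance; the more certain
    estimate pulls the result harder.
    """
    var_new = 1.0 / (1.0 / var + 1.0 / var_obs)
    d_new = var_new * (d / var + d_obs / var_obs)
    return d_new, var_new

d, var = 2.0, 0.5                 # current probabilistic depth pixel
d, var = fuse(d, var, 2.4, 0.25)  # observation from another micro-image
print(d, var)  # fused depth lies nearer 2.4; variance drops below both
```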

  8. Improved depth estimation with the light field camera

    NASA Astrophysics Data System (ADS)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro, Inc. also provides a depth estimation from a single-shot capture with its light field cameras, such as the Lytro Illum. This Lytro depth estimation, which contains much correct depth information, can be used for higher quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimation by combining defocus, correspondence and Lytro depth estimations. We analyze 2D epipolar images (EPIs) to get defocus and correspondence depth maps. Defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from EPIs. Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field display.
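
    One simple way to realize the three-cue combination described above is a per-pixel confidence-weighted average of the defocus, correspondence and Lytro depth maps. The paper's actual combination may differ; the maps and confidences below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (4, 4)
# Three depth cues (synthetic) and a per-pixel confidence for each.
maps = [rng.uniform(1, 2, shape) for _ in range(3)]
confs = [rng.uniform(0.1, 1.0, shape) for _ in range(3)]

# Confidence-weighted average: pixels where a cue is unreliable
# (low confidence) contribute little to the fused depth.
fused = sum(c * d for c, d in zip(confs, maps)) / sum(confs)
print(fused.round(2))
```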

  9. Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Wimmer, Michael

    2016-02-01

    With the enormous advances of the acquisition technology over the last years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method for texturing a set of depth maps in a preprocessing step and stitching them at runtime has been proposed to represent large scenes. However, the rendering performance of this method is strongly dependent on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method to break these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from image cameras and then perform a graph-cut based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify for each view ray which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.

  10. Joint optic disc and cup boundary extraction from monocular fundus images.

    PubMed

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. A method of extending the depth of focus of the high-resolution X-ray imaging system employing optical lens and scintillator: a phantom study.

    PubMed

    Li, Guang; Luo, Shouhua; Yan, Yuling; Gu, Ning

    2015-01-01

    The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. Based on this advantage, it can effectively image tissues, cells and many other small samples, especially calcification in the vasculature or in the glomerulus. In general, the thickness of the scintillator should be several micrometers or even in the nanometer range, because it strongly affects the achievable resolution. However, it is difficult to make the scintillator so thin, and additionally a thin scintillator may greatly reduce the efficiency of collecting photons. In this paper, we propose an approach to extend the depth of focus (DOF) to solve these problems. We first develop equation sets by deducing the relationship between the high-resolution image generated by the scintillator and the blurred image degraded by defocus, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. By using a 20 μm thick unmatched scintillator in place of the 1 μm thick matched one, we simulated a high-resolution X-ray imaging system and obtained a degraded, blurred image. Based on the proposed algorithm, we recovered the blurred image, and the experimental result showed that the algorithm performs well in recovering image blur caused by the unmatched thickness of the scintillator. The proposed method is shown to be able to efficiently recover images degraded by defocus. However, the quality of the recovered image, especially of a low-contrast image, depends on the noise level of the degraded blurred image, so there is room for improvement, and the corresponding denoising algorithm is worthy of further study and discussion.
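
    The recovery loop above can be sketched as a POCS-style iteration: a data-consistency step toward the blurred observation alternated with projection onto a constraint set (here, non-negativity). A 3×3 box blur stands in for the defocus kernel, and the TV-denoising step used in the paper is omitted for brevity; everything below is a toy illustration.

```python
import numpy as np

def blur(img, k):
    """3x3 correlation with edge padding (toy defocus model)."""
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

kernel = np.ones((3, 3)) / 9.0        # toy defocus PSF
truth = np.zeros((16, 16))
truth[8, 8] = 1.0                     # point-like calcification
y = blur(truth, kernel)               # degraded (blurred) observation

x = np.zeros_like(y)
for _ in range(200):
    # Data-consistency (Landweber) step toward y = h * x ...
    x = x + 0.5 * blur(y - blur(x, kernel), kernel)
    # ... then projection onto the convex set {x >= 0}.
    x = np.clip(x, 0.0, None)
print(float(x[8, 8]))  # peak partially restored toward 1.0
```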

  12. Monocular Depth Perception and Robotic Grasping of Novel Objects

    DTIC Science & Technology

    2009-06-01

    resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we...supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show

  13. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing

    PubMed Central

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-01-01

    UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently consumer grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratio of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently data of both high spatial and spectral resolution is not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R²-values up to 0.77) corresponded with the OBRA findings.

  14. Structure-aware depth super-resolution using Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon

    2015-03-01

    This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution under the assumption that pixels with similar color likely belong to similar depths. This assumption might induce texture transferring from the color image into the depth map and an edge blurring artifact at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model, in which an estimated depth map is considered as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
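
    The fixed-point idea above, computing affinities from the current depth estimate and re-solving, can be sketched in 1-D: smooth a noisy depth profile with neighbour weights recomputed from the evolving estimate each iteration, so depth edges are preserved while flat regions are denoised. The weight model and sigma are illustrative assumptions, not the paper's GMM prior.

```python
import numpy as np

def refine(depth, iters=5, sigma=0.1):
    """Fixed-point smoothing with depth-derived affinities (1-D toy)."""
    d = depth.astype(float).copy()
    for _ in range(iters):
        # Affinity to left/right neighbour, recomputed from current d.
        w_l = np.exp(-((d - np.roll(d, 1)) ** 2) / (2 * sigma**2))
        w_r = np.exp(-((d - np.roll(d, -1)) ** 2) / (2 * sigma**2))
        d = (d + w_l * np.roll(d, 1) + w_r * np.roll(d, -1)) / (1 + w_l + w_r)
    return d

noisy = np.array([1.0, 1.02, 0.98, 1.01, 3.0, 3.02, 2.99, 3.01])  # step edge
refined = refine(noisy)
print(np.round(refined, 2))  # noise smoothed, depth edge at index 4 preserved
```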

  15. Scheimpflug with computational imaging to extend the depth of field of iris recognition systems

    NASA Astrophysics Data System (ADS)

    Sinharoy, Indranil

    Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances outside which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits iris image capture to a small volume, the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is surprisingly simple; yet it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. These models revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level. The net reduction in capture time can
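    AFS as described also involves lens rotation and analytic registration; as a hedged illustration, the final blend step (keeping the locally sharpest pixel from an already-registered stack) might look like the sketch below, where the Laplacian sharpness measure is an assumption, not the author's method:

```python
import numpy as np

def focus_stack(stack):
    """Blend step of focus stacking (sketch): for each pixel, take the value
    from the image in the pre-registered stack with the highest local
    sharpness, measured here by the absolute discrete Laplacian."""
    stack = np.asarray(stack, dtype=float)              # (n_images, H, W)
    sharp = np.zeros_like(stack)
    sharp[:, 1:-1, 1:-1] = np.abs(
        stack[:, :-2, 1:-1] + stack[:, 2:, 1:-1]
        + stack[:, 1:-1, :-2] + stack[:, 1:-1, 2:]
        - 4.0 * stack[:, 1:-1, 1:-1])
    best = np.argmax(sharp, axis=0)                     # (H, W) index of sharpest image
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Toy stack: image 0 contains the only sharp feature, image 1 is featureless.
img0 = np.zeros((16, 16)); img0[5, 5] = 10.0
img1 = np.ones((16, 16))
fused = focus_stack([img0, img1])
```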

  16. Noninvasive methods for determining lesion depth from vesicant exposure.

    PubMed

    Braue, Ernest H; Graham, John S; Doxzon, Bryce F; Hanssen, Kelly A; Lumpkin, Horace L; Stevenson, Robert S; Deckert, Robin R; Dalal, Stephen J; Mitcheltree, Larry W

    2007-01-01

    Before sulfur mustard (HD) injuries can be effectively treated, an assessment of lesion depth must occur. Accurate depth assessment is important because it dictates how aggressive treatment needs to be to minimize or prevent cosmetic and functional deficits. Depth of injury typically is assessed by physical examination. Diagnosing very superficial and very deep lesions is relatively easy for the experienced burn surgeon. Lesions of intermediate depth, however, are often problematic in determining the need for grafting. This study was a preliminary evaluation of two noninvasive bioengineering methodologies, laser Doppler perfusion imaging (LDPI) and indocyanine green fluorescence imaging (ICGFI), to determine their ability to accurately diagnose the depth of sulfur mustard lesions in a weanling swine model. Histological evaluation was used to assess the accuracy of the imaging techniques in determining burn depth. Six female weanling swine (8-12 kg) were exposed to 400 μl of neat sulfur mustard on six ventral sites for 2, 8, 30, or 60 minutes. This exposure regimen produced lesions of varying depths, from superficial to deep dermal. Evaluations of lesion depth using the bioengineering techniques were conducted at 24, 48, and 72 hours after exposure. After euthanasia at 72 hours after exposure, skin biopsies were taken from each site and processed for routine hematoxylin and eosin histological evaluation to determine the true depth of the lesion. Results demonstrated that LDPI and ICGFI were useful tools to characterize skin perfusion and provided a good estimate of HD lesion depth. The traditional LDPI and the novel prototype ICGFI instrumentation used in this study produced images of blood flow through skin lesions, which provided a useful assessment of burn depth. LDPI and ICGFI accurately predicted the need for aggressive treatment (30- and 60-minute HD lesions) and nonaggressive treatment (2- and 8-minute HD lesions) for the lesions generated in this study.
Histological

  17. Enhanced depth imaging optical coherence tomography of choroidal metastasis in 14 eyes.

    PubMed

    Al-Dahmash, Saad A; Shields, Carol L; Kaliki, Swathi; Johnson, Timothy; Shields, Jerry A

    2014-08-01

    To describe the imaging features of choroidal metastasis using enhanced depth imaging optical coherence tomography (EDI-OCT). This retrospective observational case series included 31 eyes with choroidal metastasis. Spectral domain EDI-OCT was performed using Heidelberg Spectralis HRA + OCT. The main outcome measures were imaging features by EDI-OCT. Of 31 eyes with choroidal metastasis imaged with EDI-OCT, 14 (45%) eyes displayed image detail suitable for study. The metastasis originated from carcinoma of the breast (n = 7, 50%), lung (n = 5, 36%), pancreas (n = 1, 7%), and thyroid gland (n = 1, 7%). The mean tumor basal diameter was 6.4 mm, and mean thickness was 2.3 mm by B-scan ultrasonography. The tumor location was submacular in 6 (43%) eyes and extramacular in 8 (57%) eyes. By EDI-OCT, the mean tumor thickness was 987 μm. The most salient EDI-OCT features of the metastasis included anterior compression/obliteration of the overlying choriocapillaris (n = 13, 93%), an irregular (lumpy bumpy) anterior contour (n = 9, 64%), and posterior shadowing (n = 12, 86%). Overlying retinal pigment epithelial abnormalities were noted (n = 11, 78%). Outer retinal features included structural loss of the interdigitation of the cone outer segment tips (n = 9, 64%), the ellipsoid portion of photoreceptors (n = 8, 57%), external limiting membrane (n = 4, 29%), outer nuclear layer (n = 1, 7%), and outer plexiform layer (n = 1, 7%). The inner retinal layers (inner nuclear layer to nerve fiber layer) were normal. Subretinal fluid (n = 11, 79%), subretinal lipofuscin pigment (n = 1, 7%), and intraretinal edema (n = 2, 14%) were identified. The EDI-OCT of choroidal metastasis shows a characteristic lumpy bumpy anterior tumor surface and outer retinal layer disruption with preservation of inner retinal layers.

  18. Wide field video-rate two-photon imaging by using spinning disk beam scanner

    NASA Astrophysics Data System (ADS)

    Maeda, Yasuhiro; Kurokawa, Kazuo; Ito, Yoko; Wada, Satoshi; Nakano, Akihiko

    2018-02-01

    Microscope technologies offering a wider field of view, deeper penetration depth, higher spatial resolution, and higher imaging speed are required to investigate the intercellular dynamics and interactions of molecules and organelles in cells and tissues in more detail. Two-photon microscopy with a near-infrared (NIR) femtosecond laser is one technique that improves penetration depth and spatial resolution. However, video-rate or high-speed imaging over a wide field of view is difficult with a conventional two-photon microscope because it uses point-to-point scanning. In this study, we developed a two-photon microscope with a spinning-disk beam scanner and a femtosecond NIR fiber laser with around 10 W average power to meet the above requirements. The laser consists of an oscillator based on a mode-locked Yb fiber laser, a two-stage pre-amplifier, a main amplifier based on a Yb-doped photonic crystal fiber (PCF), and a pulse compressor with a pair of gratings. It generates a beam with up to 10 W average power, 300 fs pulse width, and 72 MHz repetition rate. The beam is incident on a spinning-disk beam scanner (Yokogawa Electric) optimized for two-photon imaging. Using this system, we obtained 3D images with over 1 mm penetration depth and video-rate images with a 350 × 350 μm field of view from the root of Arabidopsis thaliana.

  19. Multidepth imaging by chromatic dispersion confocal microscopy

    NASA Astrophysics Data System (ADS)

    Olsovsky, Cory A.; Shelton, Ryan L.; Saldua, Meagan A.; Carrasco-Zevallos, Oscar; Applegate, Brian E.; Maitland, Kristen C.

    2012-03-01

    Confocal microscopy has shown potential as an imaging technique to detect precancer. Imaging cellular features throughout the depth of epithelial tissue may provide useful information for diagnosis. However, the current in vivo axial scanning techniques for confocal microscopy are cumbersome, time-consuming, and restrictive when attempting to reconstruct volumetric images acquired in breathing patients. Chromatic dispersion confocal microscopy (CDCM) exploits severe longitudinal chromatic aberration in the system to axially disperse light from a broadband source and, ultimately, spectrally encode high resolution images along the depth of the object. Hyperchromat lenses are designed to have severe and linear longitudinal chromatic aberration, but have not yet been used in confocal microscopy. We use a hyperchromat lens in a stage-scanning confocal microscope to demonstrate the capability to simultaneously capture information at multiple depths without mechanical scanning. A photonic crystal fiber pumped with an 830 nm wavelength Ti:Sapphire laser was used as a supercontinuum source, and a spectrometer was used as the detector. The chromatic aberration and magnification in the system give a focal shift of 140 μm after the objective lens and an axial resolution of 5.2-7.6 μm over the wavelength range from 585 nm to 830 nm. A 400 × 400 × 140 μm³ volume of pig cheek epithelium was imaged in a single X-Y scan. Nuclei can be seen at several depths within the epithelium. The capability of this technique to achieve simultaneous high resolution confocal imaging at multiple depths may reduce imaging time and motion artifacts and enable volumetric reconstruction of in vivo confocal images of the epithelium.
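    Under the linearity stated above (a 140 μm focal shift across 585-830 nm), the spectral depth encoding reduces to a simple wavelength-to-depth map; the function below is an illustrative sketch, not the authors' calibration:

```python
def wavelength_to_depth_um(wl_nm, wl_min=585.0, wl_max=830.0, shift_um=140.0):
    """Map a detected wavelength (nm) to its relative focal depth (um),
    assuming a linear longitudinal chromatic focal shift across the band
    (140 um between 585 nm and 830 nm, as reported above)."""
    return (wl_nm - wl_min) / (wl_max - wl_min) * shift_um
```

Each spectrometer pixel then corresponds to one depth plane, which is what allows a single X-Y scan to capture a volume.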

  20. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation

    PubMed Central

    Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei

    2017-01-01

    Depth image-based rendering (DIBR), which renders virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D-to-3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR inevitably leads to holes in the resulting 3D image where newly exposed areas appear. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform for its low complexity and high efficiency. First, our framework integrates hybrid constraints, including scene structure, edge consistency, and visual saliency information, in the transformed domain to implicitly improve the performance of depth map preprocessing. Then, adaptive smoothing localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Unlike other similar methods, the proposed method simultaneously achieves hole filling, edge correction, and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it yields visually satisfactory results with less computational complexity for high-quality 2D-to-3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method. PMID:28407027
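    The domain transform that inspired this framework can be illustrated with a minimal 1-D recursive edge-aware filter in the style of Gastal and Oliveira; the parameters and guide signal here are assumptions for demonstration, not the paper's full structure-aided framework:

```python
import numpy as np

def dt_recursive_filter_1d(signal, guide, sigma_s=10.0, sigma_r=0.1):
    """One forward+backward pass of a domain-transform recursive filter on a
    1-D signal: neighbours separated by a strong edge in `guide` lie far apart
    in the transformed domain and are therefore barely mixed."""
    out = np.asarray(signal, dtype=float).copy()
    guide = np.asarray(guide, dtype=float)
    # transformed-domain distance between adjacent samples
    d = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(guide))
    w = np.exp(-np.sqrt(2.0) / sigma_s) ** d   # per-gap feedback weight
    for i in range(1, len(out)):               # left-to-right pass
        out[i] += w[i - 1] * (out[i - 1] - out[i])
    for i in range(len(out) - 2, -1, -1):      # right-to-left pass
        out[i] += w[i] * (out[i + 1] - out[i])
    return out

# Noisy step edge: the flat parts are smoothed while the edge survives.
rng = np.random.default_rng(1)
step = np.where(np.arange(100) < 50, 0.0, 1.0)
noisy = step + 0.05 * rng.standard_normal(100)
smoothed = dt_recursive_filter_1d(noisy, step)
```

The key design point is that the feedback weight collapses to ~0 across a depth edge, which is what lets a depth map be smoothed without blurring its boundaries.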

  1. Predictive models of turbidity and water depth in the Doñana marshes using Landsat TM and ETM+ images.

    PubMed

    Bustamante, Javier; Pacios, Fernando; Díaz-Delgado, Ricardo; Aragonés, David

    2009-05-01

    We have used Landsat-5 TM and Landsat-7 ETM+ images together with simultaneous ground-truth data at sample points in the Doñana marshes to predict water turbidity and depth from band reflectance using Generalized Additive Models. We have point samples for 12 different dates simultaneous with 7 Landsat-5 and 5 Landsat-7 overpasses. The best model for water turbidity in the marsh explained 38% of variance in ground-truth data and included as predictors band 3 (630-690 nm), band 5 (1550-1750 nm) and the ratio between bands 1 (450-520 nm) and 4 (760-900 nm). Water turbidity is easier to predict for water bodies like the Guadalquivir River and artificial ponds that are deep and not affected by bottom soil reflectance and aquatic vegetation. For the latter, a simple model using band 3 reflectance explains 78.6% of the variance. Water depth is easier to predict than turbidity. The best model for water depth in the marsh explains 78% of the variance and includes as predictors band 1, band 5, the ratio between band 2 (520-600 nm) and band 4, and bottom soil reflectance in band 4 in September, when the marsh is dry. The water turbidity and water depth models have been developed in order to reconstruct historical changes in Doñana wetlands during the last 30 years using the Landsat satellite images time series.

  2. Depth profilometry via multiplexed optical high-coherence interferometry.

    PubMed

    Kazemzadeh, Farnoud; Wong, Alexander; Behr, Bradford B; Hajian, Arsen R

    2015-01-01

    Depth profilometry involves the measurement of the depth profile of objects and has significant potential for various industrial applications that benefit from non-destructive sub-surface profiling, such as defect detection, corrosion assessment, and dental assessment, to name a few. In this study, we investigate the feasibility of depth profilometry using a Multiplexed Optical High-Coherence Interferometry (MOHI) instrument. The MOHI instrument utilizes the spatial coherence of a laser and the interferometric properties of light to probe the reflectivity of a sample as a function of depth. The axial and lateral resolutions, as well as the imaging depth, are decoupled in the MOHI instrument. The instrument is capable of multiplexing interferometric measurements into 480 one-dimensional interferograms at a location on the sample and is built with axial and lateral resolutions of 40 μm at a maximum imaging depth of 700 μm. Preliminary results, in which a piece of sand-blasted aluminum, an NBK7 glass piece, and an optical phantom were successfully probed to produce depth profiles, demonstrate the feasibility of such an instrument for performing depth profilometry.

  3. Depth Profilometry via Multiplexed Optical High-Coherence Interferometry

    PubMed Central

    Kazemzadeh, Farnoud; Wong, Alexander; Behr, Bradford B.; Hajian, Arsen R.

    2015-01-01

    Depth profilometry involves the measurement of the depth profile of objects and has significant potential for various industrial applications that benefit from non-destructive sub-surface profiling, such as defect detection, corrosion assessment, and dental assessment, to name a few. In this study, we investigate the feasibility of depth profilometry using a Multiplexed Optical High-Coherence Interferometry (MOHI) instrument. The MOHI instrument utilizes the spatial coherence of a laser and the interferometric properties of light to probe the reflectivity of a sample as a function of depth. The axial and lateral resolutions, as well as the imaging depth, are decoupled in the MOHI instrument. The instrument is capable of multiplexing interferometric measurements into 480 one-dimensional interferograms at a location on the sample and is built with axial and lateral resolutions of 40 μm at a maximum imaging depth of 700 μm. Preliminary results, in which a piece of sand-blasted aluminum, an NBK7 glass piece, and an optical phantom were successfully probed to produce depth profiles, demonstrate the feasibility of such an instrument for performing depth profilometry. PMID:25803289

  4. Computational-optical microscopy for 3D biological imaging beyond the diffraction limit

    NASA Astrophysics Data System (ADS)

    Grover, Ginni

    In recent years, super-resolution imaging has become an important fluorescent microscopy tool. It has enabled imaging of structures smaller than the optical diffraction limit with resolution below 50 nm. Extension to high-resolution volume imaging has been achieved by integration with various optical techniques. In this thesis, the development of a fluorescent microscope enabling high-resolution, extended-depth, three-dimensional (3D) imaging is discussed, achieved by integrating computational methods with optical systems. In the first part of the thesis, point spread function (PSF) engineering for volume imaging is discussed. A class of PSFs, referred to as double-helix (DH) PSFs, is generated. These PSFs exhibit two focused spots in the image plane which rotate about the optical axis, encoding depth in the rotation of the image. They extend the depth of field by a factor of up to ~5. Precision performance of the DH-PSFs, based on an information-theoretical analysis, is compared with other 3D methods, with the conclusion that the DH-PSFs provide the best precision and the longest depth of field. Out of various possible DH-PSFs, a PSF suitable for super-resolution microscopy is obtained. The DH-PSFs are implemented in imaging systems, such as a microscope, with a special phase modulation at the pupil plane. Surface-relief elements which are polarization-insensitive and ~90% light efficient are developed for phase modulation. The photon-efficient DH-PSF microscopes thus developed are used, along with optimal position estimation algorithms, for tracking and super-resolution imaging in 3D. Imaging at depths of field of up to 2.5 μm is achieved without focus scanning. Microtubules were imaged with a 3D resolution of (6, 9, 39) nm, which is in close agreement with the theoretical limit. A quantitative study of the co-localization of two proteins in volume was conducted in live bacteria. In the last part of the thesis practical aspects of the DH-PSF microscope are

  5. Compensation method for the influence of angle of view on animal temperature measurement using thermal imaging camera combined with depth image.

    PubMed

    Jiao, Leizi; Dong, Daming; Zhao, Xiande; Han, Pengcheng

    2016-12-01

    In this study, we proposed an animal surface temperature measurement method based on a Kinect sensor and an infrared thermal imager to facilitate the screening of animals with febrile diseases. Due to the random motion and small surface temperature variation of animals, the influence of the angle of view on temperature measurement is significant. The method proposed in the present study can compensate for the temperature measurement error caused by the angle of view. First, we analyzed the relationship between measured temperature and angle of view and established a mathematical model for compensating the influence of the angle of view, with a correlation coefficient above 0.99. Second, a fusion method for the depth and infrared thermal images was established for synchronous image capture with the Kinect sensor and infrared thermal imager, and the angle of view of each pixel was calculated. According to the experimental results, without compensation, the temperature image measured at angles of view of 74° to 76° differed by more than 2°C from that measured at an angle of view of 0°. After compensation, the temperature difference range was only 0.03-1.2°C. This method is applicable for real-time compensation of errors caused by the angle of view during temperature measurement with an infrared thermal imager. Copyright © 2016 Elsevier Ltd. All rights reserved.
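    The paper's fitted compensation model is not reproduced here; the sketch below only illustrates the general idea of estimating a per-pixel viewing angle from a depth image and dividing out a hypothetical angular response (the polynomial coefficients, pixel pitch, and parallel-ray geometry are all placeholders):

```python
import numpy as np

def view_angles_deg(depth_m, pixel_pitch_m=0.002):
    """Per-pixel angle (degrees) between the surface normal and the optical
    axis, estimated from a depth image by finite differences. Parallel-ray
    approximation; the pixel pitch is an illustrative placeholder."""
    dz_dy, dz_dx = np.gradient(np.asarray(depth_m, dtype=float), pixel_pitch_m)
    nz = 1.0 / np.sqrt(dz_dx ** 2 + dz_dy ** 2 + 1.0)   # z-component of unit normal
    return np.degrees(np.arccos(nz))

def compensate_temperature(temp_c, angle_deg, coeffs=(1.0, 0.0, -2e-5)):
    """Hypothetical compensation T0 = T / f(theta), where f is a fitted
    angular-response polynomial; `coeffs` stand in for a real calibration."""
    f = np.polyval(coeffs[::-1], angle_deg)             # f = c0 + c1*t + c2*t^2
    return temp_c / f

# A plane tilted 30 degrees about the vertical axis:
x = np.arange(32) * 0.002
tilted = np.tile(np.tan(np.radians(30.0)) * x, (32, 1))
angles = view_angles_deg(tilted)
```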

  6. Catchment-scale snow depth monitoring with balloon photogrammetry

    NASA Astrophysics Data System (ADS)

    Durand, M. T.; Li, D.; Wigmore, O.; Vanderjagt, B. J.; Molotch, N. P.; Bales, R. C.

    2016-12-01

    Field campaigns and permanent in-situ facilities provide extensive measurements of snowpack properties at catchment (or smaller) scales, and have consistently improved our understanding of snow processes and the estimation of snow water resources. However, snow depth, one of the most important snow states, has been measured almost entirely with discrete point-scale samplings in field measurements; spatiotemporally continuous snow depth measurements are nearly nonexistent, mainly due to the high cost of airborne flights and the ban on Unmanned Aerial Systems in many areas (e.g. in all the national parks). In this study, we estimate spatially continuous snow depth from photogrammetric reconstruction of aerial photos taken from a weather balloon. The study was conducted in a 0.2 km² watershed in Wolverton, Sequoia National Park, California. We tied a point-and-shoot camera to a helium-inflated weather balloon to take aerial images; the camera was scripted to automatically capture images every 3 seconds and to record the camera position and orientation at the imaging times using a built-in GPS. With the 2D images of the snow-covered ground and the camera position and orientation data, the 3D coordinates of the snow surface were reconstructed at 10 cm resolution using the photogrammetry software PhotoScan. Similar measurements were taken for the snow-free ground after snowmelt, and the snow depth was estimated from the difference between the snow-on and snow-off measurements. Comparing the photogrammetrically estimated snow depths with 32 manually measured depths, taken at the same time as the snow-on balloon flight, we find the RMSE of the photogrammetric snow depth is 7 cm, which is 2% of the long-term peak snow depth in the study area. This study suggests that balloon photogrammetry is a repeatable, economical, simple, and environmentally friendly method to continuously monitor snow at small scales.
Spatiotemporally continuous snow depth could be regularly measured in
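    The snow-on/snow-off differencing and the RMSE check against manual probes can be sketched in a few lines (illustrative only; the real workflow operates on co-registered PhotoScan surface models):

```python
import numpy as np

def snow_depth(dsm_snow_on, dsm_snow_off):
    """Snow depth as the difference of co-registered snow-on and snow-off
    digital surface models; negative differences (noise) are clipped to zero."""
    diff = np.asarray(dsm_snow_on, dtype=float) - np.asarray(dsm_snow_off, dtype=float)
    return np.clip(diff, 0.0, None)

def rmse(predicted, observed):
    """Root-mean-square error against manual probe measurements."""
    p = np.asarray(predicted, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((p - o) ** 2)))

depths = snow_depth([[2.0, 1.5]], [[1.0, 1.6]])   # 1.0 m of snow; -0.1 m clipped to 0
```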

  7. Penetration depth of photons in biological tissues from hyperspectral imaging in shortwave infrared in transmission and reflection geometries.

    PubMed

    Zhang, Hairong; Salo, Daniel; Kim, David M; Komarov, Sergey; Tai, Yuan-Chuan; Berezin, Mikhail Y

    2016-12-01

    Measurement of photon penetration in biological tissues is a central theme in optical imaging. A great number of endogenous tissue factors such as absorption, scattering, and anisotropy affect the path of photons in tissue, making it difficult to predict the penetration depth at different wavelengths. Traditional studies evaluating photon penetration at different wavelengths are focused on tissue spectroscopy that does not take into account the heterogeneity within the sample. This is especially critical in the shortwave infrared, where the individual vibration-based absorption properties of tissue molecules are affected by nearby tissue components. We have explored the depth penetration in biological tissues from 900 to 1650 nm using Monte Carlo simulation and a hyperspectral imaging system with Michelson spatial contrast as a metric of light penetration. Chromatic aberration-free hyperspectral images in transmission and reflection geometries were collected with a spectral resolution of 5.27 nm and a total acquisition time of 3 min. The relatively short recording time minimized artifacts from sample drying. Results from both transmission and reflection geometries consistently revealed that the highest spatial contrast for deep tissue lies within 1300 to 1375 nm; however, in heavily pigmented tissue such as the liver, the range 1550 to 1600 nm is also prominent.
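    Michelson spatial contrast, the penetration metric used above, is straightforward to compute per band of a hyperspectral cube; the (H, W, n_bands) layout below is an assumption:

```python
import numpy as np

def michelson_contrast(cube):
    """Michelson spatial contrast per spectral band of a hyperspectral cube
    with assumed layout (H, W, n_bands): (Imax - Imin) / (Imax + Imin)."""
    cube = np.asarray(cube, dtype=float)
    imax = cube.max(axis=(0, 1))
    imin = cube.min(axis=(0, 1))
    return (imax - imin) / (imax + imin)

# Band 0 spans values 1..3 (contrast 0.5); band 1 is uniform (contrast 0).
cube = np.zeros((2, 2, 2))
cube[..., 0] = [[1.0, 3.0], [2.0, 2.0]]
cube[..., 1] = 2.0
contrast = michelson_contrast(cube)
```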

  8. Depth profiling and imaging capabilities of an ultrashort pulse laser ablation time of flight mass spectrometer

    PubMed Central

    Cui, Yang; Moore, Jerry F.; Milasinovic, Slobodan; Liu, Yaoming; Gordon, Robert J.; Hanley, Luke

    2012-01-01

    An ultrafast laser ablation time-of-flight mass spectrometer (AToF-MS) and associated data acquisition software that permit imaging at micron-scale resolution and sub-micron-scale depth profiling are described. The ion funnel-based source of this instrument can be operated at pressures ranging from 10⁻⁸ to ~0.3 mbar. Mass spectra may be collected and stored at a rate of 1 kHz by the data acquisition system, allowing the instrument to be coupled with standard commercial Ti:sapphire lasers. The capabilities of the AToF-MS instrument are demonstrated on metal foils and semiconductor wafers using a Ti:sapphire laser emitting 800 nm, ~75 fs pulses at 1 kHz. Results show that elemental quantification and depth profiling are feasible with this instrument. PMID:23020378

  9. Optical cryptography with biometrics for multi-depth objects.

    PubMed

    Yan, Aimin; Wei, Yang; Hu, Zhijuan; Zhang, Jingtao; Tsang, Peter Wai Ming; Poon, Ting-Chung

    2017-10-11

    We propose an optical cryptosystem for encrypting images of multi-depth objects based on the combination of the optical heterodyne technique and fingerprint keys. Optical heterodyning requires two optical beams to be mixed. For encryption, each optical beam is modulated by an optical mask containing the fingerprint of either the person sending or the person receiving the image. The pair of optical masks are taken as the encryption keys. Subsequently, the two beams are used to scan over a multi-depth 3-D object to obtain an encrypted hologram. During the decryption process, each sectional image of the 3-D object is recovered by convolving its encrypted hologram (through numerical computation) with the encrypted hologram of a pinhole image positioned at the same depth as the sectional image. Our proposed method has three major advantages. First, the lost-key situation can be avoided with the use of fingerprints as the encryption keys. Second, the method can be applied to encrypt 3-D images for subsequent decryption of sectional images. Third, since optical heterodyne scanning is employed to encrypt a 3-D object, the optical system is incoherent, resulting in a negligible amount of speckle noise upon decryption. To the best of our knowledge, this is the first time optical cryptography of 3-D object images has been demonstrated in an incoherent optical system with biometric keys.

  10. Learning in Depth: Students as Experts

    ERIC Educational Resources Information Center

    Egan, Kieran; Madej, Krystina

    2009-01-01

    Nearly everyone who has tried to describe an image of the educated person, from Plato to the present, includes at least two requirements: first, educated people must be widely knowledgeable and, second, they must know something in depth. The authors would like to advocate a somewhat novel approach to "learning in depth" (LiD) that seems…

  11. Anatomy of the western Java plate interface from depth-migrated seismic images

    NASA Astrophysics Data System (ADS)

    Kopp, H.; Hindle, D.; Klaeschen, D.; Oncken, O.; Reichert, C.; Scholl, D.

    2009-11-01

    Newly pre-stack depth-migrated seismic images resolve the structural details of the western Java forearc and plate interface. The structural segmentation of the forearc into discrete mechanical domains correlates with distinct deformation styles. Approximately 2/3 of the trench sediment fill is detached and incorporated into frontal prism imbricates, while the floor sequence is underthrust beneath the décollement. Western Java, however, differs markedly from margins such as Nankai or Barbados, where a uniform, continuous décollement reflector has been imaged. In our study area, the plate interface reveals a spatially irregular, nonlinear pattern characterized by the morphological relief of subducted seamounts and thicker than average patches of underthrust sediment. The underthrust sediment is associated with a low velocity zone as determined from wide-angle data. Active underplating is not resolved, but likely contributes to the uplift of the large bivergent wedge that constitutes the forearc high. Our profile is located 100 km west of the 2006 Java tsunami earthquake. The heterogeneous décollement zone regulates the friction behavior of the shallow subduction environment where the earthquake occurred. The alternating pattern of enhanced frictional contact zones associated with oceanic basement relief and weak material patches of underthrust sediment influences seismic coupling and possibly contributed to the heterogeneous slip distribution. Our seismic images resolve a steeply dipping splay fault, which originates at the décollement and terminates at the sea floor and which potentially contributes to tsunami generation during co-seismic activity.

  12. Anatomy of the western Java plate interface from depth-migrated seismic images

    USGS Publications Warehouse

    Kopp, H.; Hindle, D.; Klaeschen, D.; Oncken, O.; Reichert, C.; Scholl, D.

    2009-01-01

    Newly pre-stack depth-migrated seismic images resolve the structural details of the western Java forearc and plate interface. The structural segmentation of the forearc into discrete mechanical domains correlates with distinct deformation styles. Approximately 2/3 of the trench sediment fill is detached and incorporated into frontal prism imbricates, while the floor sequence is underthrust beneath the décollement. Western Java, however, differs markedly from margins such as Nankai or Barbados, where a uniform, continuous décollement reflector has been imaged. In our study area, the plate interface reveals a spatially irregular, nonlinear pattern characterized by the morphological relief of subducted seamounts and thicker than average patches of underthrust sediment. The underthrust sediment is associated with a low velocity zone as determined from wide-angle data. Active underplating is not resolved, but likely contributes to the uplift of the large bivergent wedge that constitutes the forearc high. Our profile is located 100 km west of the 2006 Java tsunami earthquake. The heterogeneous décollement zone regulates the friction behavior of the shallow subduction environment where the earthquake occurred. The alternating pattern of enhanced frictional contact zones associated with oceanic basement relief and weak material patches of underthrust sediment influences seismic coupling and possibly contributed to the heterogeneous slip distribution. Our seismic images resolve a steeply dipping splay fault, which originates at the décollement and terminates at the sea floor and which potentially contributes to tsunami generation during co-seismic activity. © 2009 Elsevier B.V.

  13. Extended depth of field in an intrinsically wavefront-encoded biometric iris camera

    NASA Astrophysics Data System (ADS)

    Bergkoetter, Matthew D.; Bentley, Julie L.

    2014-12-01

    This work describes a design process which greatly increases the depth of field of a simple three-element lens system intended for biometric iris recognition. The system is optimized to produce a point spread function which is insensitive to defocus, so that recorded images may be deconvolved without knowledge of the exact object distance. This is essentially a variation on the technique of wavefront encoding, however the desired encoding effect is achieved by aberrations intrinsic to the lens system itself, without the need for a pupil phase mask.
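    Deconvolving recorded images with a defocus-insensitive PSF, as described above, can be illustrated with a standard Wiener filter; this is a generic sketch, not the author's restoration pipeline, and the regularizer k is a placeholder:

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-2):
    """Frequency-domain Wiener deconvolution (generic sketch): divide out the
    system PSF with a noise-to-signal regularizer k."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # PSF centred at the origin
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(W * G))

# Sanity check with a delta PSF: the output equals the image scaled by 1/(1+k).
image = np.arange(64, dtype=float).reshape(8, 8)
psf = np.zeros((8, 8)); psf[4, 4] = 1.0
restored = wiener_deconvolve(image, psf)
```

Because the engineered PSF is insensitive to defocus, a single filter of this form can be applied without knowing the exact object distance.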

  14. Quantifying how the combination of blur and disparity affects the perceived depth

    NASA Astrophysics Data System (ADS)

    Wang, Junle; Barkowsky, Marcus; Ricordel, Vincent; Le Callet, Patrick

    2011-03-01

    The influence of a monocular depth cue, blur, on the apparent depth of stereoscopic scenes is studied in this paper. When 3D images are shown on a planar stereoscopic display, binocular disparity becomes a pre-eminent depth cue, but it simultaneously induces a conflict between accommodation and vergence, which is often considered a main cause of visual discomfort. If we limit this visual discomfort by decreasing the disparity, the apparent depth also decreases. We propose to decrease the (binocular) disparity of 3D presentations and to reinforce (monocular) cues to compensate for the loss of perceived depth and keep the apparent depth unaltered. We conducted a subjective experiment using a two-alternative forced-choice task. Observers were required to identify the larger perceived depth in a pair of 3D images with/without blur. By fitting the results to a psychometric function, we obtained points of subjective equality in terms of disparity. We found that when blur is added to the background of the image, the viewer perceives larger depth compared to images without any background blur. The increase in perceived depth can be considered a function of the relative distance between the foreground and background, while it is insensitive to the distance between the viewer and the depth plane at which the blur is added.
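    Fitting a psychometric function to two-alternative forced-choice data to obtain a point of subjective equality can be sketched as a simple least-squares logistic fit (a coarse grid search here; the authors' exact fitting procedure is not specified):

```python
import numpy as np

def fit_pse(levels, p_chose_test):
    """Least-squares logistic fit over a coarse parameter grid; returns the
    point of subjective equality (the level where the curve crosses 0.5).
    A sketch of the analysis, not the authors' exact procedure."""
    levels = np.asarray(levels, dtype=float)
    p = np.asarray(p_chose_test, dtype=float)
    best_err, best_mu = np.inf, None
    for mu in np.linspace(levels.min(), levels.max(), 201):
        for s in np.linspace(0.05, 5.0, 100):
            pred = 1.0 / (1.0 + np.exp(-(levels - mu) / s))
            err = np.sum((p - pred) ** 2)
            if err < best_err:
                best_err, best_mu = err, mu
    return best_mu

# Synthetic 2AFC proportions generated from a logistic with PSE = 1.2:
levels = np.linspace(0.0, 3.0, 13)
p_obs = 1.0 / (1.0 + np.exp(-(levels - 1.2) / 0.5))
pse = fit_pse(levels, p_obs)
```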

  15. Prototype pre-clinical PET scanner with depth-of-interaction measurements using single-layer crystal array and single-ended readout

    NASA Astrophysics Data System (ADS)

    Lee, Min Sun; Kim, Kyeong Yun; Ko, Guen Bae; Lee, Jae Sung

    2017-05-01

    In this study, we developed a proof-of-concept prototype PET system using a pair of depth-of-interaction (DOI) PET detectors based on the proposed DOI-encoding method and digital silicon photomultipliers (dSiPMs). Our cost-effective DOI measurement method is based on a triangular-shaped reflector that requires only a single-layer pixelated crystal and single-ended signal readout. The DOI detector consisted of an 18  ×  18 array of unpolished LYSO crystals (1.47  ×  1.47  ×  15 mm3) wrapped with triangular-shaped reflectors. The DOI information was encoded by the depth-dependent light distribution tailored by the reflector geometry, and DOI correction was performed using four-step depth calibration data and maximum-likelihood (ML) estimation. The detector pair and the object were placed on two motorized rotation stages to demonstrate a 12-block ring PET geometry with an 11.15 cm diameter. Spatial resolution was measured, and phantom and animal imaging studies were performed to investigate imaging performance. All images were reconstructed with and without DOI correction to examine the impact of our DOI measurement. The two dSiPM-based DOI PET detectors showed good physical performance: peak-to-valley ratios of 2.82 and 3.09, energy resolutions of 14.30% and 18.95%, and DOI resolutions of 4.28 and 4.24 mm averaged over all crystals and depths. Sub-millimeter spatial resolution was achieved at the center of the field of view (FOV). After applying the ML-based DOI correction, a maximum improvement of 36.92% was achieved in radial spatial resolution, and uniform resolution was observed within 5 cm of the transverse PET FOV. We successfully acquired phantom and animal images with improved spatial resolution and contrast by using the DOI measurement. The proposed DOI-encoding method was successfully demonstrated at the system level and exhibited good performance, showing its feasibility for animal PET applications with high spatial resolution.
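
    The ML-based depth estimate can be sketched as picking, among calibrated depth bins, the one whose mean light distribution best explains a measurement. The independent-Gaussian noise model, the four features, and the calibration values below are assumptions for illustration, echoing the four-step calibration.

```python
import numpy as np

def ml_depth(signal, calib_means, calib_stds):
    """Return the index of the calibrated depth bin that maximizes the
    log-likelihood of `signal` under an independent-Gaussian model."""
    ll = -np.sum(((signal[None, :] - calib_means) / calib_stds) ** 2, axis=1)
    return int(np.argmax(ll))

# Hypothetical calibration: mean light-distribution features for each of
# four depth steps, with unit noise (not measured detector data).
calib_means = np.array([[9.0, 1.0, 1.0, 1.0],
                        [6.0, 4.0, 1.0, 1.0],
                        [4.0, 6.0, 1.0, 1.0],
                        [1.0, 9.0, 1.0, 1.0]])
calib_stds = np.ones_like(calib_means)
```

The reflector geometry makes the light distribution depth-dependent, which is what gives these per-depth templates their discriminating power.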

  16. Confocal Imaging of the Embryonic Heart: How Deep?

    NASA Astrophysics Data System (ADS)

    Miller, Christine E.; Thompson, Robert P.; Bigelow, Michael R.; Gittinger, George; Trusk, Thomas C.; Sedmera, David

    2005-06-01

    Confocal microscopy allows for optical sectioning of tissues, thus obviating the need for physical sectioning and subsequent registration to obtain a three-dimensional representation of tissue architecture. However, practicalities such as tissue opacity, light penetration, and detector sensitivity have usually limited the available depth of imaging to 200 μm. With the emergence of newer, more powerful systems, we attempted to push these limits to those dictated by the working distance of the objective. We used whole-mount immunohistochemical staining followed by clearing with benzyl alcohol-benzyl benzoate (BABB) to visualize three-dimensional myocardial architecture. Confocal imaging of entire chick embryonic hearts up to a depth of 1.5 mm with voxel dimensions of 3 μm was achieved with a 10× dry objective. For the purpose of screening for congenital heart defects, we used endocardial painting with fluorescently labeled poly-L-lysine and imaged BABB-cleared hearts with a 5× objective up to a depth of 2 mm. Two-photon imaging of whole-mount specimens stained with Hoechst nuclear dye produced clear images all the way through stage 29 hearts without significant signal attenuation. Thus, currently available systems allow confocal imaging of fixed samples to previously unattainable depths, the current limiting factors being objective working distance, antibody penetration, specimen autofluorescence, and incomplete clearing.

  17. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide-field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves accuracy by 50% while simultaneously reducing processing time by 64% using comparable computational resources. PMID:26504634
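
    The orthonormal-basis idea can be sketched with an SVD-based principal component decomposition of a stack of depth-variant PSFs, so that each PSF is represented as a mean plus a few coefficients times shared component images. The synthetic Gaussian PSFs in the test are illustrative, not microscope data.

```python
import numpy as np

def psf_principal_components(psfs, k):
    """Represent a stack of depth-variant PSFs (n_depths, h, w) with the
    top-k principal components: each PSF ~= mean + coeffs @ basis."""
    n = psfs.shape[0]
    flat = psfs.reshape(n, -1)
    mean = flat.mean(axis=0)
    u, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
    basis = vt[:k]                      # orthonormal component images
    coeffs = (flat - mean) @ basis.T    # per-depth expansion coefficients
    return mean, basis, coeffs
```

Restoring with a handful of components instead of one PSF per depth is what makes the DV deconvolution tractable for thick samples.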

  18. Exploiting chromatic aberration to spectrally encode depth in reflectance confocal microscopy

    NASA Astrophysics Data System (ADS)

    Carrasco-Zevallos, Oscar; Shelton, Ryan L.; Olsovsky, Cory; Saldua, Meagan; Applegate, Brian E.; Maitland, Kristen C.

    2011-06-01

    We present chromatic confocal microscopy as a technique to axially scan the sample by spectrally encoding depth information to avoid mechanical scanning of the lens or sample. We have achieved an 800 μm focal shift over a range of 680-1080 nm using a hyperchromat lens as the imaging lens. A more complex system that incorporates a water immersion objective to improve axial resolution was built and tested. We determined that increasing objective magnification decreases chromatic shift while improving axial resolution. Furthermore, collimating after the hyperchromat at longer wavelengths yields an increase in focal shift.

  19. Depth estimation using a lightfield camera

    NASA Astrophysics Data System (ADS)

    Roper, Carissa

    The latest innovation to camera design has come in the form of the lightfield, or plenoptic, camera that captures 4-D radiance data rather than just the 2-D scene image via microlens arrays. With the spatial and angular light ray data now recorded on the camera sensor, it is feasible to construct algorithms that can estimate depth of field in different portions of a given scene. There are limitations to the precision due to hardware structure and the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on the commercially available plenoptic camera.

  20. A method of extending the depth of focus of the high-resolution X-ray imaging system employing optical lens and scintillator: a phantom study

    PubMed Central

    2015-01-01

    Background: The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens, and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. It can therefore effectively image tissues, cells, and many other small samples, especially calcification in the vasculature or glomerulus. In general, the scintillator should be several micrometers or even nanometers thick, because its thickness strongly affects the achievable resolution. However, it is difficult to make the scintillator so thin, and a thin scintillator may greatly reduce the efficiency of collecting photons. Methods: In this paper, we propose an approach to extend the depth of focus (DOF) to solve these problems. We first develop equation sets by deducing the relationship between the high-resolution image generated by the scintillator and the blurred image degraded by defect of focus, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. Results: By using a 20 μm thick mismatched scintillator in place of the 1 μm thick matched one, we simulated a high-resolution X-ray imaging system and obtained a degraded, blurred image. Using the proposed algorithm, we recovered the blurred image, and the experimental results showed that the algorithm performs well in recovering image blur caused by a mismatched scintillator thickness. Conclusions: The proposed method is shown to efficiently recover images degraded by defect of focus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded, blurred image, so there is room for improvement, and corresponding denoising algorithms are worth further study. PMID:25602532
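
    The projection idea can be sketched very minimally: alternate a data-consistency gradient step with projection onto the convex set of nonnegative images. This is a simplified Landweber-plus-positivity scheme, not the paper's full POCS-plus-total-variation solver, and the box PSF in the test is an assumption.

```python
import numpy as np

def pocs_deblur(blurred, psf, n_iter=100, step=1.0):
    """Alternate a data-consistency gradient step with projection onto
    the convex set of nonnegative images (a simplified POCS scheme)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    f = blurred.copy()
    for _ in range(n_iter):
        resid = blurred - np.real(np.fft.ifft2(np.fft.fft2(f) * H))
        f = f + step * np.real(np.fft.ifft2(np.fft.fft2(resid) * np.conj(H)))
        f = np.clip(f, 0.0, None)  # projection: images are nonnegative
    return f
```

Each constraint set (data consistency, nonnegativity, and in the paper bounded total variation) is convex, so alternating projections converge toward their intersection.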

  1. Depth measurements through controlled aberrations of projected patterns.

    PubMed

    Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim

    2012-03-12

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments, without major modifications to current cameras, is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require that such an imaging system have coincident imaging and illumination paths, and that any three-dimensional modification to a camera still permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern, we create two different focus depths for the horizontal and vertical features of the pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing designed around the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present details of the construction, calibration, and images produced by this system. The link between projected pattern design and image processing algorithms is discussed.

  2. Deep Tissue Photoacoustic Imaging Using a Miniaturized 2-D Capacitive Micromachined Ultrasonic Transducer Array

    PubMed Central

    Kothapalli, Sri-Rajasekhar; Ma, Te-Jen; Vaithilingam, Srikant; Oralkan, Ömer

    2014-01-01

    In this paper, we demonstrate 3-D photoacoustic imaging (PAI) of light absorbing objects embedded as deep as 5 cm inside strong optically scattering phantoms using a miniaturized (4 mm × 4 mm × 500 µm), 2-D capacitive micromachined ultrasonic transducer (CMUT) array of 16 × 16 elements with a center frequency of 5.5 MHz. Two-dimensional tomographic images and 3-D volumetric images of the objects placed at different depths are presented. In addition, we studied the sensitivity of CMUT-based PAI to the concentration of indocyanine green dye at 5 cm depth inside the phantom. Under optimized experimental conditions, the objects at 5 cm depth can be imaged with SNR of about 35 dB and a spatial resolution of approximately 500 µm. Results demonstrate that CMUTs with integrated front-end amplifier circuits are an attractive choice for achieving relatively high depth sensitivity for PAI. PMID:22249594

  3. Study on super-resolution three-dimensional range-gated imaging technology

    NASA Astrophysics Data System (ADS)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

    Range-gated three-dimensional imaging technology has been a research hotspot in recent years because of its high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on a study of the principle of the intensity-related method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of imaging depth and distance to achieve different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. This paper analyzes the calculation of 3D point clouds based on the triangle method; a 15 m depth slice of the target's 3D point cloud was obtained using two frame images, with a distance precision better than 0.5 m. The influence of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy is analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.
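
    The triangle-method range calculation can be sketched as mapping the intensity ratio of two delayed gate images to a depth within the gate: in the overlap of two triangular range-intensity profiles, a target's position determines how its return splits between the gates. The gate parameters below are illustrative, not the experiment's settings.

```python
def range_from_gate_ratio(i1, i2, gate_start, gate_depth):
    """Estimate range inside a gate from two delayed gate images.

    i1, i2      -- pixel intensities from the earlier and later gate
    gate_start  -- range (m) at which the gate overlap begins (assumed)
    gate_depth  -- depth (m) spanned by the overlap region (assumed)
    """
    r = i2 / (i1 + i2)  # 0 at the near edge, 1 at the far edge
    return gate_start + gate_depth * r
```

Applying this per pixel to the two frame images yields the depth slice of the 3D point cloud.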

  4. UTILIZATION OF FUNDUS AUTOFLUORESCENCE, SPECTRAL DOMAIN OPTICAL COHERENCE TOMOGRAPHY, AND ENHANCED DEPTH IMAGING IN THE CHARACTERIZATION OF BIETTI CRYSTALLINE DYSTROPHY IN DIFFERENT STAGES.

    PubMed

    Li, Qian; Li, Yang; Zhang, Xiaohui; Xu, Zhangxing; Zhu, Xiaoqing; Ma, Kai; She, Haicheng; Peng, Xiaoyan

    2015-10-01

    To characterize Bietti crystalline dystrophy (BCD) in different stages using multiple imaging modalities. Sixteen participants clinically diagnosed as BCD were included in the retrospective study and were categorized into 3 stages according to fundus photography. Eleven patients were genetically confirmed. Fundus autofluorescence, spectral domain optical coherence tomography, and enhanced depth imaging features of BCD were analyzed. On fundus autofluorescence, the abnormal autofluorescence was shown to enlarge in area and decrease in intensity with stages. Using spectral domain optical coherence tomography, the abnormalities in Stage 1 were observed to localize in outer retinal layers, whereas in Stage 2 and Stage 3, more extensive retinal atrophy was seen. In enhanced depth imaging, the subfoveal choroidal layers were delineated clearly in Stage 1; in Stage 2, destructions were primarily found in the choriocapillaris with associated alterations in the outer vessels; Stage 3 BCD displayed severe choroidal thinning. Choroidal neovascularization and macular edema were exhibited with high incidence. IVS6-8del17bp/inGC of the CYP4V2 gene was the most common mutant allele. Noninvasive fundus autofluorescence, spectral domain optical coherence tomography, and enhanced depth imaging may help to characterize the chorioretinal pathology of BCD at different degrees, and therefore, we propose staging of BCD depending on those methods. Physicians should be cautious of the vision-threatening complications of the disease.

  5. A depth-of-interaction PET detector using mutual gain-equalized silicon photomultiplier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, W.; Weisenberger, A. G.; Dong, H.; Kross, Brian; Lee, S.; McKisson, J.; Zorn, Carl

    We developed a prototype high-resolution, high-efficiency depth-encoding detector for PET applications based on dual-ended readout of a LYSO array with two silicon photomultipliers (SiPMs). Flood images, energy resolution, and depth-of-interaction (DOI) resolution were measured for a LYSO array - 0.7 mm in crystal pitch and 10 mm in thickness - with four unpolished parallel sides. Flood images were obtained in which each individual crystal element in the array is resolved. The energy resolution of the entire array was measured to be 33%, while that of individual crystal pixel elements, utilizing the signal from both sides, ranged from 23.3% to 27%. By applying a mutual-gain equalization method, a DOI resolution of 2 mm for the crystal array was obtained in the experiments, while simulations indicate that ~1 mm DOI resolution could possibly be achieved. The experimental DOI resolution can be further improved with revised detector support electronics having better energy resolution. This study provides a detailed detector calibration and DOI response characterization of dual-ended readout SiPM-based PET detectors, which will be important in the design and calibration of a future PET scanner.
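
    In a dual-ended readout, the interaction depth is commonly estimated from the light-sharing ratio between the two SiPMs. A minimal sketch with a linear per-crystal calibration is shown below; the slope and intercept are hypothetical values, not the study's calibration.

```python
def doi_from_dual_readout(a_near, a_far, slope, intercept):
    """Depth of interaction from the dual-ended light-sharing ratio.

    a_near, a_far    -- signal amplitudes from the two crystal ends
    slope, intercept -- per-crystal linear calibration (hypothetical);
                        e.g. slope=20, intercept=-5 maps ratios in
                        [0.25, 0.75] onto depths 0-10 mm.
    """
    r = a_near / (a_near + a_far)
    return slope * r + intercept
```

Gain equalization between the two SiPMs matters precisely because any mismatch biases this ratio, and hence the depth estimate.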

  6. Method and apparatus to measure the depth of skin burns

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.

    2002-01-01

    A new device for measuring the depth of surface tissue burns based on the rate at which the skin temperature responds to a sudden differential temperature stimulus. This technique can be performed without physical contact with the burned tissue. In one implementation, time-dependent surface temperature data is taken from subsequent frames of a video signal from an infrared-sensitive video camera. When a thermal transient is created, e.g., by turning off a heat lamp directed at the skin surface, the following time-dependent surface temperature data can be used to determine the skin burn depth. Imaging and non-imaging versions of this device can be implemented, thereby enabling laboratory-quality skin burn depth imagers for hospitals as well as hand-held skin burn depth sensors the size of a small pocket flashlight for field use and triage.

  7. A small-molecule dye for NIR-II imaging

    NASA Astrophysics Data System (ADS)

    Antaris, Alexander L.; Chen, Hao; Cheng, Kai; Sun, Yao; Hong, Guosong; Qu, Chunrong; Diao, Shuo; Deng, Zixin; Hu, Xianming; Zhang, Bo; Zhang, Xiaodong; Yaghi, Omar K.; Alamparambil, Zita R.; Hong, Xuechuan; Cheng, Zhen; Dai, Hongjie

    2016-02-01

    Fluorescent imaging of biological systems in the second near-infrared window (NIR-II) can probe tissue at centimetre depths and achieve micrometre-scale resolution at depths of millimetres. Unfortunately, all current NIR-II fluorophores are excreted slowly and are largely retained within the reticuloendothelial system, making clinical translation nearly impossible. Here, we report a rapidly excreted NIR-II fluorophore (~90% excreted through the kidneys within 24 h) based on a synthetic 970-Da organic molecule (CH1055). The fluorophore outperformed indocyanine green (ICG), a clinically approved NIR-I dye, in resolving mouse lymphatic vasculature and sentinel lymphatic mapping near a tumour. High levels of uptake of PEGylated-CH1055 dye were observed in brain tumours in mice, suggesting that the dye was detected at a depth of ~4 mm. The CH1055 dye also allowed targeted molecular imaging of tumours in vivo when conjugated with anti-EGFR Affibody. Moreover, a superior tumour-to-background signal ratio allowed precise image-guided tumour-removal surgery.

  8. Impact of Depth and Breadth of Student Involvement on Academic Achievement

    ERIC Educational Resources Information Center

    Ivanova, Albena; Moretti, Anthony

    2018-01-01

    We investigate the direct and interaction effects of breadth and depth of student involvement in campus activities on student grade point average. Using data from the Student Engagement Transcripts on 475 students and ordinary least squares regression, we provide evidence for both direct and interaction effects. A more detailed analysis of the…

  9. The Relation of Self-Image to Academic Placement and Achievement in Hearing-Impaired Students.

    ERIC Educational Resources Information Center

    Gans, Jennifer

    The relationship between self-image and academic placement and achievement was studied with 1,072 Colorado students (ages 5-20) with hearing impairments. It was found that students who are hearing impaired with good English language skills have a more positive self-image than those whose language skills are below average. The relation between…

  10. Attenuated total reflection-Fourier transform infrared imaging of large areas using inverted prism crystals and combining imaging and mapping.

    PubMed

    Chan, K L Andrew; Kazarian, Sergei G

    2008-10-01

    Attenuated total reflection-Fourier transform infrared (ATR-FT-IR) imaging is a very useful tool for capturing chemical images of various materials due to the simple sample preparation and the ability to measure wet samples or samples in an aqueous environment. However, the size of the array detector used for image acquisition is often limited, and there is usually a trade-off between spatial resolution and the field of view (FOV). The combination of mapping and imaging can be used to acquire images with a larger FOV without sacrificing spatial resolution. Previous attempts have demonstrated this using an infrared microscope and a germanium hemispherical ATR crystal to achieve images of up to 2.5 mm x 2.5 mm, but with varying spatial resolution and depth of penetration across the imaged area. In this paper, we demonstrate a combination of mapping and imaging with a different approach, using an external optics housing for large ATR accessories and inverted ATR prisms to achieve ATR-FT-IR images with a large FOV and reasonable spatial resolution. The results show that a FOV of 10 mm x 14 mm can be obtained with a spatial resolution of approximately 40-60 μm when using an accessory that gives no magnification. A FOV of 1.3 mm x 1.3 mm can be obtained with a spatial resolution of approximately 15-20 μm when using a diamond ATR imaging accessory with 4x magnification. No significant change in image quality, such as spatial resolution or depth of penetration, was observed across the whole FOV with this method, and the measurement time was approximately 15 minutes for an image consisting of 16 image tiles.

  11. Penetration depth of photons in biological tissues from hyperspectral imaging in shortwave infrared in transmission and reflection geometries

    PubMed Central

    Zhang, Hairong; Salo, Daniel; Kim, David M.; Komarov, Sergey; Tai, Yuan-Chuan; Berezin, Mikhail Y.

    2016-01-01

    Measurement of photon penetration in biological tissues is a central theme in optical imaging. A great number of endogenous tissue factors such as absorption, scattering, and anisotropy affect the path of photons in tissue, making it difficult to predict the penetration depth at different wavelengths. Traditional studies evaluating photon penetration at different wavelengths are focused on tissue spectroscopy that does not take into account the heterogeneity within the sample. This is especially critical in shortwave infrared where the individual vibration-based absorption properties of the tissue molecules are affected by nearby tissue components. We have explored the depth penetration in biological tissues from 900 to 1650 nm using Monte Carlo simulation and a hyperspectral imaging system with Michelson spatial contrast as a metric of light penetration. Chromatic aberration-free hyperspectral images in transmission and reflection geometries were collected with a spectral resolution of 5.27 nm and a total acquisition time of 3 min. The relatively short recording time minimized artifacts from sample drying. Results from both transmission and reflection geometries consistently revealed that the highest spatial contrast in the wavelength range for deep tissue lies within 1300 to 1375 nm; however, in heavily pigmented tissue such as the liver, the range 1550 to 1600 nm is also prominent. PMID:27930773
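
    The Michelson spatial contrast metric used here is simple to state: the modulation of an intensity profile across a resolution target, per wavelength band. A minimal sketch:

```python
import numpy as np

def michelson_contrast(profile):
    """Michelson spatial contrast of a 1-D intensity profile:
    (I_max - I_min) / (I_max + I_min), in [0, 1]."""
    hi, lo = np.max(profile), np.min(profile)
    return (hi - lo) / (hi + lo)
```

Evaluating this per spectral band of the hyperspectral cube ranks wavelengths by how well target structure survives transit through the tissue, which is how the 1300-1375 nm window emerges.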

  12. The effect of glenoid cavity depth on rotator cuff tendinitis.

    PubMed

    Malkoc, Melih; Korkmaz, Ozgur; Ormeci, Tugrul; Sever, Cem; Kara, Adna; Mahirogulları, Mahir

    2016-03-01

    Some of the most important causes of shoulder pain are inflammation and degenerative changes in the rotator cuff (RC). Magnetic resonance imaging (MRI) is a noninvasive and safe imaging modality that can be used to evaluate cuff tendinopathy. In this study, we evaluated the relationship between glenoid cavity depth and cuff tendinopathy and investigated the role of glenoid cavity depth in the pathogenesis of cuff tendinopathy. We retrospectively evaluated 215 patients who underwent MRI. Of these, 60 patients showed cuff tendinopathy (group A) and 54 patients showed no pathology (group B). Glenoid cavity depth was calculated in the coronal and transverse planes. For group A, the mean axial depth was 1.7 ± 0.9 and the mean coronal depth 3.8 ± 0.9; for group B, the mean axial depth was 3.5 ± 0.7 and the mean coronal depth 1.5 ± 0.8. There were significant differences in axial and coronal depths between the two groups. High coronal and low axial depth of the glenoid cavity can be used to diagnose RC tendinitis.

  13. Extended depth of field system for long distance iris acquisition

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Lin; Hsieh, Sheng-Hsun; Hung, Kuo-En; Yang, Shi-Wen; Li, Yung-Hui; Tien, Chung-Hao

    2012-10-01

    Using biometric signatures for identity recognition has been practiced for centuries. Recently, iris recognition systems have attracted much attention due to their high accuracy and stability. The texture of the iris provides a signature that is unique to each subject. Currently, most commercial iris recognition systems acquire images at less than 50 cm, a serious constraint that must be overcome for applications such as airport access or entrances requiring a high turnover rate. To capture iris patterns from a distance, in this study we developed a telephoto imaging system with image processing techniques. By using a cubic phase mask positioned in front of the camera, the point spread function was kept constant over a wide range of defocus. With an adequate decoding filter, the blurred image was restored, and a working distance between subject and camera of over 3 m was achieved with a 500 mm focal length and an F/6.3 aperture. Simulation and experimental results validated the proposed scheme: the depth of focus of the iris camera was extended threefold over traditional optics while maintaining sufficient recognition accuracy.
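
    The defocus-invariance comes from the cubic phase profile applied across the pupil. A minimal sketch of such a pupil function is below; the grid size and the phase strength `alpha` are illustrative assumptions, not the system's design values.

```python
import numpy as np

def cubic_phase_pupil(n=64, alpha=20.0):
    """Pupil function P(x, y) = A(x, y) * exp(i * alpha * (x^3 + y^3)):
    a circular aperture carrying the cubic phase that renders the PSF
    approximately invariant to defocus."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2) <= 1.0
    return aperture * np.exp(1j * alpha * (X**3 + Y**3))
```

Propagating this pupil to the image plane (e.g. via an FFT) yields the elongated, defocus-stable PSF that the decoding filter then inverts.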

  14. Multi-contrast light profile microscopy for the depth-resolved imaging of the properties of multi-ply thin films.

    PubMed

    Power, J F

    2009-06-01

    Light profile microscopy (LPM) is a direct method for the spectral depth imaging of thin film cross-sections on the micrometer scale. LPM uses a perpendicular viewing configuration that directly images a source beam propagated through a thin film. Images are formed in dark field contrast, which is highly sensitive to subtle interfacial structures that are invisible to reference methods. The independent focusing of illumination and imaging systems allows multiple registered optical sources to be hosted on a single platform. These features make LPM a powerful multi-contrast (MC) imaging technique, demonstrated in this work with six modes of imaging in a single instrument, based on (1) broad-band elastic scatter; (2) laser excited wideband luminescence; (3) coherent elastic scatter; (4) Raman scatter (three channels with RGB illumination); (5) wavelength resolved luminescence; and (6) spectral broadband scatter, resolved in immediate succession. MC-LPM integrates Raman images with a wider optical and morphological picture of the sample than prior art microprobes. Currently, MC-LPM resolves images at an effective spectral resolution better than 9 cm(-1), at a spatial resolution approaching 1 μm, with optics that operate in air at half the maximum numerical aperture of the prior art microprobes.

  15. Diaphragm depth in normal subjects.

    PubMed

    Shahgholi, Leili; Baria, Michael R; Sorenson, Eric J; Harper, Caitlin J; Watson, James C; Strommen, Jeffrey A; Boon, Andrea J

    2014-05-01

    Needle electromyography (EMG) of the diaphragm carries a potential risk of pneumothorax. Knowing the approximate depth of the diaphragm should increase the test's safety and accuracy. Distances from the skin to the diaphragm and from the outer surface of the rib to the diaphragm were measured using B-mode ultrasound in 150 normal subjects. When measured at the lower intercostal spaces, diaphragm depth varied between 0.78 and 4.91 cm beneath the skin surface and between 0.25 and 1.48 cm below the outer surface of the rib. Using linear regression modeling, body mass index (BMI) could be used to predict diaphragm depth from the skin to within an average of 1.15 mm. Diaphragm depth from the skin can vary by more than 4 cm. When image guidance is not available to enhance the accuracy and safety of diaphragm EMG, the depth of the diaphragm can be reliably predicted from BMI. Copyright © 2013 Wiley Periodicals, Inc.
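
    The BMI-based prediction is an ordinary least-squares line fit. A minimal sketch follows; the (BMI, depth) pairs are hypothetical illustrative data, and the fitted coefficients are not the study's published regression model.

```python
import numpy as np

# Hypothetical (BMI, skin-to-diaphragm depth in cm) pairs -- illustrative
# only; the study's regression coefficients are not reproduced here.
bmi = np.array([18.0, 22.0, 25.0, 28.0, 32.0, 36.0])
depth_cm = np.array([1.0, 1.6, 2.1, 2.6, 3.4, 4.1])

# Degree-1 least-squares fit: depth ~= slope * BMI + intercept
slope, intercept = np.polyfit(bmi, depth_cm, 1)

def predict_depth(b):
    """Predicted diaphragm depth (cm) from BMI via the fitted line."""
    return slope * b + intercept
```

With real calibration data, such a line lets a clinician estimate needle depth when ultrasound guidance is unavailable.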

  16. Application of preconditioned alternating direction method of multipliers in depth from focal stack

    NASA Astrophysics Data System (ADS)

    Javidnia, Hossein; Corcoran, Peter

    2018-03-01

    A post-capture refocusing effect in smartphone cameras is achievable using focal stacks. However, the accuracy of this effect depends entirely on how the depth layers in the stack are combined. The accuracy of the extended-depth-of-field effect in this application can be improved significantly by computing an accurate depth map, which has been an open problem for decades. To tackle this issue, a framework is proposed based on a preconditioned alternating direction method of multipliers for depth from the focal stack and synthetic defocus applications. In addition to providing high structural accuracy, the optimization function of the proposed framework converges faster and better than state-of-the-art methods. A qualitative evaluation was performed on 21 sets of focal stacks, and the optimization function was compared against five other methods. Ten light field image sets were then transformed into focal stacks for quantitative evaluation. Preliminary results indicate that the proposed framework performs better than current state-of-the-art methods in terms of structural accuracy and optimization.
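
    For intuition only, a much simpler baseline than the paper's preconditioned-ADMM solver: a per-pixel focus measure maximized across the stack. The modified-Laplacian focus measure below is a common choice assumed here, not the paper's method.

```python
import numpy as np

def focus_measure(img):
    """Modified-Laplacian focus measure on the image interior."""
    c = img[1:-1, 1:-1]
    return (np.abs(2 * c - img[:-2, 1:-1] - img[2:, 1:-1]) +
            np.abs(2 * c - img[1:-1, :-2] - img[1:-1, 2:]))

def depth_from_stack(stack):
    """Per-pixel index of the sharpest layer in a focal stack."""
    fm = np.stack([focus_measure(s) for s in stack])
    return np.argmax(fm, axis=0)
```

Such a winner-take-all depth map is noisy at textureless pixels, which is exactly the regularization gap that optimization-based frameworks like the proposed ADMM variant address.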

  17. Integrating concept ontology and multitask learning to achieve more effective classifier training for multilevel image annotation.

    PubMed

    Fan, Jianping; Gao, Yuli; Luo, Hangzai

    2008-03-01

    In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To achieve more sufficient representation of various visual properties of the images, both the global visual features and the local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users on selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypotheses assessment. Our experiments on large-scale image collections have also obtained very positive results.

  18. Clear-cornea cataract surgery: pupil size and shape changes, along with anterior chamber volume and depth changes. A Scheimpflug imaging study.

    PubMed

    Kanellopoulos, Anastasios John; Asimellis, George

    2014-01-01

    To investigate, by high-precision digital analysis of data provided by Scheimpflug imaging, changes in pupil size and shape and anterior chamber (AC) parameters following cataract surgery. The study group (86 eyes; patient age 70.58±10.33 years) underwent cataract removal surgery with in-the-bag intraocular lens implantation (pseudophakic). A control group of 75 healthy eyes (patient age 51.14±16.27 years) was employed for comparison. Scheimpflug imaging (preoperatively and 3 months postoperatively) was employed to investigate central corneal thickness, AC depth, and AC volume. In addition, by digitally analyzing the black-and-white dotted-line pupil edge marking in the Scheimpflug "large maps," the horizontal and vertical pupil diameters were individually measured and the pupil eccentricity was calculated. The correlations between AC depth and pupil shape parameters versus patient age, as well as the postoperative AC and pupil size and shape changes, were investigated. Compared to preoperative measurements, AC depth and AC volume of the pseudophakic eyes increased by 0.99±0.46 mm (39%; P<0.001) and 43.57±24.59 mm(3) (36%; P<0.001), respectively. Pupil size analysis showed that the horizontal pupil diameter was reduced by 0.27±0.22 mm (-9.7%; P=0.001) and the vertical pupil diameter by 0.32±0.24 mm (-11%; P<0.001). Pupil eccentricity was reduced by 39.56% (P<0.001). Cataract extraction surgery appears to affect pupil size and shape, possibly in correlation with the increase in AC depth. This novel investigation based on digital analysis of Scheimpflug imaging data suggests that the postoperative photopic pupil is smaller and more circular. These changes appear to be more significant with increasing patient age.
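
The abstract does not state how eccentricity was computed from the two measured diameters; a plausible sketch, treating the pupil as an ellipse (our assumption, not the authors' published formula), is:

```python
import math

def pupil_eccentricity(d_horizontal, d_vertical):
    """Eccentricity of an ellipse fitted to the pupil from its two diameters.
    Returns 0 for a perfectly circular pupil."""
    a = max(d_horizontal, d_vertical) / 2.0  # semi-major axis
    b = min(d_horizontal, d_vertical) / 2.0  # semi-minor axis
    return math.sqrt(1.0 - (b / a) ** 2)
```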

  19. Depth map occlusion filling and scene reconstruction using modified exemplar-based inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Marchuk, V. I.; Fisunov, A. V.; Tokareva, S. V.; Egiazarian, K. O.

    2015-03-01

    RGB-D sensors are relatively inexpensive and commercially available off-the-shelf. However, owing to their low complexity, the depth maps they produce exhibit several artifacts, such as holes, misalignment between the depth and color images, and a lack of sharp object boundaries. Depth maps generated by Kinect cameras also contain a significant amount of missing pixels and strong noise, limiting their usability in many computer vision applications. In this paper, we present an efficient hole filling and damaged region restoration method that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on a modified exemplar-based inpainting and LPA-ICI filtering, exploiting the correlation between color and depth values in local image neighborhoods. As a result, edges of the objects are sharpened and aligned with the objects in the color image. Several examples considered in this paper show the effectiveness of the proposed approach for large hole removal as well as recovery of small regions on several test depth maps. We perform a comparative study and show that, statistically, the proposed algorithm delivers superior quality results compared to existing algorithms.

  20. Action recognition in depth video from RGB perspective: A knowledge transfer manner

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen

    2018-03-01

    Recognizing human actions across different video modalities has become a highly promising trend in video analysis. In this paper, we propose a method for human action recognition that transfers knowledge from RGB video to depth video using domain adaptation: features learned from RGB videos are used to recognize actions in depth videos. More specifically, we take three steps to solve this problem. First, unlike a still image, a video contains both spatial and temporal information; to encode this information compactly, the dynamic image method is used to represent each RGB or depth video as a single image, so that most image feature extraction methods become applicable to video. Second, since each video is represented as an image, a standard CNN model can be used for training and testing, and the CNN can also serve as a feature extractor thanks to its powerful representational ability. Third, because RGB and depth videos belong to two different domains, domain adaptation is applied, for the first time in this setting, to bring the two feature domains closer together, so that the features learned by the RGB model can be used directly for depth video classification. We evaluate the proposed method on a large RGB-D action dataset (NTU RGB-D) and obtain an accuracy improvement of more than 2% by using domain adaptation from RGB to depth action recognition.
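
The dynamic-image step can be sketched with the approximate rank-pooling weights reported in the literature (the closed-form coefficients alpha_t = 2t - T - 1 and the function name are assumptions based on Bilen et al., not details from this paper):

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a video into one image with approximate rank-pooling
    weights; frames is a list of (H, W) arrays in temporal order."""
    T = len(frames)
    alpha = 2 * np.arange(1, T + 1) - T - 1  # e.g. [-2, 0, 2] for T = 3
    return np.tensordot(alpha, np.asarray(frames, dtype=float), axes=1)
```

The resulting single image can then be fed to any standard image CNN, as the abstract describes.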

  1. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    NASA Astrophysics Data System (ADS)

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high-speed imaging and depth-profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at a high repetition rate, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometry imaging and various other experiments in laser physics, physical chemistry, and surface science.

  2. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high-speed imaging and depth-profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at a high repetition rate, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometry imaging and various other experiments in laser physics, physical chemistry, and surface science.

  3. Lip boundary detection techniques using color and depth information

    NASA Astrophysics Data System (ADS)

    Kim, Gwang-Myung; Yoon, Sung H.; Kim, Jung H.; Hur, Gi Taek

    2002-01-01

    This paper presents our approach to using a stereo camera to obtain 3-D image data for improving existing lip boundary detection techniques. We show that depth information as provided by our approach can significantly improve boundary detection systems. Our system detects the face and mouth area in the image by using color, geometric location, and additional depth information for the face. Initially, color and depth information are used to localize the face. We then determine the lip region from the intensity information and the detected eye locations. The system has successfully been used to extract approximate lip regions using RGB color information of the mouth area. Merely using color information is not robust because the quality of the results may vary with lighting conditions, background, and skin tone. To overcome this problem, we used a stereo camera to obtain 3-D facial images. 3-D data constructed from the depth information, along with color information, can provide more accurate lip boundary detection than color-only techniques.

  4. A novel approach for automatic snow depth estimation using UAV-taken images without ground control points

    NASA Astrophysics Data System (ADS)

    Mizinski, Bartlomiej; Niedzielski, Tomasz

    2017-04-01

    Recent developments in snow depth reconstruction based on remote sensing include the use of photographs of snow-covered terrain taken by unmanned aerial vehicles (UAVs). Several approaches utilize visible-light photos (RGB) or near-infrared images (NIR). Most of the methods in question reconstruct the digital surface model (DSM) of the snow-covered area using the Structure-from-Motion (SfM) algorithm and stereo-vision software. Having reconstructed this DSM, it is straightforward to calculate the snow depth map as the difference between the DSM of the snow-covered terrain and the snow-free DSM, known as the reference surface. For this procedure to work, high spatial accuracy of the two DSMs must be ensured. Traditionally, this is done using ground control points (GCPs), either artificial markers or natural terrain features visible in the aerial images, whose coordinates are measured in the field with a Global Navigation Satellite System (GNSS) receiver by qualified personnel. These field measurements can be time-consuming (GCPs must be well distributed over the study area, so the field experts must travel long distances) and dangerous (the experts may be exposed to avalanche risk or cold). There is thus a need for methods that enable automatic snow depth map production without GCPs. One such attempt is presented in this paper, which describes a novel method based on real-time processing of snow-covered and snow-free dense point clouds produced by SfM. A two-stage georeferencing is proposed: the initial (low-accuracy) stage assigns true geographic, and subsequently projected, coordinates to the two dense point clouds, while in the final (high-accuracy) stage the initially registered dense point clouds are matched using the iterative closest point (ICP) algorithm.
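
The core DSM-differencing step described above can be sketched as follows (a minimal illustration on co-registered grids; the SfM reconstruction and ICP registration stages are omitted, and the function name is ours):

```python
import numpy as np

def snow_depth_map(dsm_snow, dsm_reference):
    """Snow depth as the difference of two co-registered DSMs.
    Negative values (noise, erosion, registration error) are clipped to zero."""
    return np.maximum(dsm_snow - dsm_reference, 0.0)
```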

  5. Analysis of the potential for non-invasive imaging of oxygenation at heart depth, using ultrasound optical tomography (UOT) or photo-acoustic tomography (PAT).

    PubMed

    Walther, Andreas; Rippe, Lars; Wang, Lihong V; Andersson-Engels, Stefan; Kröll, Stefan

    2017-10-01

    Despite the important medical implications, finding optical non-invasive techniques that can image deep organs in humans remains an open challenge. Addressing this, photo-acoustic tomography (PAT) has received a great deal of attention in the past decade, owing to favorable properties such as high contrast and high spatial resolution. However, even with optimal components, PAT cannot penetrate beyond a few centimeters, which remains an important limitation of the technique. Here, we calculate the absorption contrast levels for PAT and for ultrasound optical tomography (UOT) and compare them to their relevant noise sources as a function of imaging depth. The results indicate that a new development in optical filters, based on rare-earth-ion crystals, can push the UOT technique significantly ahead of PAT. Such filters allow the contrast-to-noise ratio for UOT to be up to three orders of magnitude better than for PAT at depths of a few centimeters into tissue. This also translates into a significant increase in the imaging depth of UOT compared to PAT, enabling deep organs in humans to be imaged in real time. Furthermore, such spectral hole-burning filters are not sensitive to speckle decorrelation from the tissue and can operate at nearly any angle of incident light, allowing good light collection. We theoretically demonstrate the improved performance in the medically important case of non-invasive optical imaging of the oxygenation level of the frontal part of the human myocardial tissue. Our results indicate that further studies on UOT are of interest and that the technique may have a large impact on future directions in biomedical optics.

  6. Volumetric 3D display with multi-layered active screens for enhanced the depth perception (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook

    2016-09-01

    Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various perceptual cues, such as accommodation, binocular parallax, convergence, and motion parallax, are used to recognize a 3D image. Glasses-type 3D displays use only binocular disparity among the 3D depth cues; this causes visual fatigue and headaches due to the accommodation-vergence conflict and distorted depth perception. Holographic and volumetric displays are therefore expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all the depth-perception factors, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays represent images using voxels, which occupy physical volume, but large amounts of data are required to represent depth information with voxels. To encode 3D information simply, a compact depth-fused 3D (DFD) display is introduced, which creates a polarization-distributed depth map (PDDM) image containing both a 2D color image and a depth image. In this paper, a new volumetric 3D display system is presented that uses PDDM images controlled by a polarization controller. To generate the PDDM image, the polarization state of light passing through a spatial light modulator (SLM) was analyzed with Stokes parameters as a function of gray level. Based on this analysis, a polarization controller was designed to convert the PDDM image into sectioned depth images. After synchronizing the PDDM images with the active screens, a reconstructed 3D image is realized. Acknowledgment: This work was supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea.

  7. IP Subsurface Imaging in the Presence of Buried Steel Infrastructure

    NASA Astrophysics Data System (ADS)

    Smart, N. H.; Everett, M. E.

    2017-12-01

    The purpose of this research is to explore the use of induced polarization to image closely spaced steel columns at a controlled test site. Texas A&M University's Riverside Campus (RELLIS) was used as a controlled test site to examine the difference between actual and remotely sensed depths. Known borehole depths and soil composition made this site ideal. The subsurface metal structures were assessed using a combination of electrical resistivity (ER) and induced polarization (IP), and the data were later processed using inversion. Surveys were laid out with reference to the known locations and depths of the steel structures in order to maximize control-data quality. Comparing known and remotely sensed foundation depths raises a series of questions about how the percent error between imaged and actual depths can be reduced, which we draw from the results of our survey by comparing them with the known depth and width of the metal beams. Because RELLIS offers a controlled setting for this research, survey geometry and inversion parameters can be tuned to achieve optimal results and resolution.

  8. Smartphone-Based Android app for Determining UVA Aerosol Optical Depth and Direct Solar Irradiances.

    PubMed

    Igoe, Damien P; Parisi, Alfio; Carter, Brad

    2014-01-01

    This research describes the development and evaluation of the accuracy and precision of an Android app specifically designed, written, and installed on a smartphone for detecting and quantifying incident solar UVA radiation and, subsequently, aerosol optical depth at 340 and 380 nm. Earlier studies demonstrated that a smartphone image sensor can detect UVA radiation and that its responsivity can be calibrated to measured direct solar irradiance. The current research performs the data collection, calibration, processing, calculations, and display all on a smartphone. A very strong coefficient of determination of 0.98 was achieved when the digital response was recalibrated and compared to the Microtops sun photometer direct UVA irradiance observations. The mean percentage discrepancy for derived direct solar irradiance was only 4% and 6% for observations at 380 and 340 nm, respectively, lessening with decreasing solar zenith angle. An 8% mean discrepancy was observed when comparing aerosol optical depth, also decreasing as the solar zenith angle decreased. The results indicate that a specifically designed Android app linking and using a smartphone image sensor, calendar, and clock, with additional external narrow-bandpass and neutral-density filters, can be used as a field sensor to evaluate both direct solar UVA irradiance and low aerosol optical depths for areas with low aerosol loads. © 2013 The American Society of Photobiology.
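
A hedged sketch of the Beer-Lambert retrieval implied above (the app's exact calibration is not given in the abstract; the plane-parallel airmass approximation and all names here are our assumptions):

```python
import math

def aerosol_optical_depth(v, v0, sza_deg, tau_rayleigh):
    """Total optical depth from the Beer-Lambert law, minus the Rayleigh
    contribution. v is the measured direct-sun signal, v0 the calibrated
    extraterrestrial signal, sza_deg the solar zenith angle in degrees."""
    airmass = 1.0 / math.cos(math.radians(sza_deg))  # plane-parallel approx.
    tau_total = math.log(v0 / v) / airmass
    return tau_total - tau_rayleigh
```

A real retrieval would also subtract ozone absorption and use a refined airmass formula at large zenith angles.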

  9. Optical drug monitoring: photoacoustic imaging of nanosensors to monitor therapeutic lithium in vivo.

    PubMed

    Cash, Kevin J; Li, Chiye; Xia, Jun; Wang, Lihong V; Clark, Heather A

    2015-02-24

    Personalized medicine could revolutionize how primary care physicians treat chronic disease and how researchers study fundamental biological questions. To realize this goal, we need to develop more robust, modular tools and imaging approaches for in vivo monitoring of analytes. In this report, we demonstrate that synthetic nanosensors can measure physiologic parameters with photoacoustic contrast, and we apply that platform to continuously track lithium levels in vivo. Photoacoustic imaging achieves imaging depths that are unattainable with fluorescence or multiphoton microscopy. We validated the photoacoustic results, which illustrate the superior imaging depth and quality of photoacoustic imaging, against optical measurements. This powerful combination of techniques will unlock the ability to measure analyte changes in deep tissue and will open up photoacoustic imaging as a diagnostic tool for continuous physiological tracking of a wide range of analytes.

  10. 70 nm resolution in subsurface optical imaging of silicon integrated-circuits using pupil-function engineering

    NASA Astrophysics Data System (ADS)

    Serrels, K. A.; Ramsay, E.; Reid, D. T.

    2009-02-01

    We present experimental evidence for the resolution-enhancing effect of an annular pupil-plane aperture when performing nonlinear imaging in the vectorial-focusing regime through manipulation of the focal spot geometry. By acquiring two-photon optical beam-induced current images of a silicon integrated-circuit using solid-immersion-lens microscopy at 1550 nm we achieved 70 nm resolution. This result demonstrates a reduction in the minimum effective focal spot diameter of 36%. In addition, the annular-aperture-induced extension of the depth-of-focus causes an observable decrease in the depth contrast of the resulting image and we explain the origins of this using a simulation of the imaging process.

  11. Anterior segment biometry during accommodation imaged with ultra-long scan depth optical coherence tomography

    PubMed Central

    Du, Chixin; Shen, Meixiao; Li, Ming; Zhu, Dexi; Wang, Michael R.; Wang, Jianhua

    2012-01-01

    Purpose To measure by ultra-long scan depth optical coherence tomography (UL-OCT) dimensional changes in the anterior segment of human eyes during accommodation. Design Evaluation of diagnostic test or technology. Participants Forty-one right eyes of healthy subjects with a mean age of 34 years (range, 22–41 years) and a mean refraction of −2.5±2.6 diopters (D) were imaged in two repeated measurements at minimal and maximal accommodation. Methods A specially designed UL-OCT instrument was used to image from the front surface of the cornea to the back surface of the crystalline lens. Custom software corrected the optical distortion of the images and yielded the biometric measurements. The coefficient of repeatability (COR) and the intraclass correlation coefficient (ICC) were calculated to evaluate the repeatability and reliability. Main Outcome Measures Anterior segment parameters and associated repeatability and reliability upon accommodation. The dimensional results included central corneal thickness (CCT), anterior chamber depth and width (ACD, ACW), pupil diameter (PD), lens thickness (LT), anterior segment length (ASL=ACD+LT), lens central position (LCP=ACD+1/2LT), and horizontal radii of the lens anterior and posterior surface curvatures (LAC, LPC). Results Repeated measurements of each variable within each accommodative state did not differ significantly (P>0.05). The CORs and ICCs for CCT, ACW, ACD, LT, LCP, and ASL were excellent (1.2% to 3.59% and 0.998 to 0.877, respectively). They were higher for PD (18.90% to 21.63% and 0.880 to 0.874, respectively), and moderate for LAC and LPC (34.86% to 42.72% and 0.669 to 0.251, respectively) in the two accommodative states. Compared to minimal accommodation, PD, ACD, LAC, LPC, and LCP decreased and LT and ASL increased significantly at maximal accommodation (P<0.05), while CCT and ACW did not change (P>0.05). Conclusions UL-OCT measured changes in anterior segment dimensions during accommodation with
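
The derived parameters defined in the abstract (ASL = ACD + LT, LCP = ACD + 1/2 LT) are straightforward to compute from the two primary measurements; a minimal sketch (the function name is ours):

```python
def derived_biometry(acd, lt):
    """Anterior segment length and lens central position from the
    abstract's definitions: ASL = ACD + LT, LCP = ACD + LT/2.
    All values in millimetres."""
    asl = acd + lt
    lcp = acd + lt / 2.0
    return asl, lcp
```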

  12. Matching and correlation computations in stereoscopic depth perception.

    PubMed

    Doi, Takahiro; Tanabe, Seiji; Fujita, Ichiro

    2011-03-02

    A fundamental task of the visual system is to infer depth by using binocular disparity. To encode binocular disparity, the visual cortex performs two distinct computations: one detects matched patterns in paired images (matching computation); the other constructs the cross-correlation between the images (correlation computation). How the two computations are used in stereoscopic perception is unclear. We dissociated their contributions in near/far discrimination by varying the magnitude of the disparity across separate sessions. For small disparity (0.03°), subjects performed at chance level with a binocularly opposite-contrast (anti-correlated) random-dot stereogram (RDS) but improved their performance with the proportion of contrast-matched (correlated) dots. For large disparity (0.48°), the direction of perceived depth reversed with an anti-correlated RDS relative to that for a correlated one. Neither reversed nor normal depth was perceived when anti-correlation was applied to half of the dots. We explain the decision process as a weighted average of the two computations, with the relative weight of the correlation computation increasing with the disparity magnitude. We conclude that the matching computation dominates fine depth perception, while both computations contribute to coarser depth perception. Thus, stereoscopic depth perception recruits different computations depending on the disparity magnitude.
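
The correlation computation can be illustrated with a minimal 1-D sketch (our simplification for illustration; the study used 2-D random-dot stereograms, and the names below are not from the paper):

```python
import numpy as np

def correlation_disparity(left, right, max_d):
    """Correlation computation: pick the shift of the right-eye signal
    that maximizes its cross-correlation with the left-eye signal."""
    n = len(left)
    scores = [float(np.dot(right[d:], left[:n - d])) for d in range(max_d + 1)]
    return int(np.argmax(scores))
```

A matching computation would instead score only exact pattern agreement, which is what makes the two mechanisms dissociable with anti-correlated stimuli.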

  13. Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera

    NASA Astrophysics Data System (ADS)

    Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.

    2007-09-01

    We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements, heterodyning intensity-modulated illumination with a gain-modulated intensified digital video camera. Sub-millimetre precision to beyond 5 m, and 2 mm precision out to 12 m, have been achieved. In this paper, we describe the new sub-millimetre-class range imaging system in detail and review the aspects that have been instrumental in achieving high-precision ranging. We also present the results of performance characterization experiments and a method for resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
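
The phase-to-range conversion underlying indirect time-of-flight, and the range-ambiguity interval the abstract mentions, both follow from the modulation frequency; a minimal sketch (constants and names are ours, not the authors' implementation):

```python
import math

C = 299792458.0  # speed of light, m/s

def phase_to_range(phase_rad, f_mod_hz):
    """Convert measured envelope phase to range for indirect time-of-flight.
    Range is unambiguous only out to c / (2 * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def ambiguity_interval(f_mod_hz):
    """Maximum unambiguous range for a given modulation frequency."""
    return C / (2.0 * f_mod_hz)
```

Resolving targets beyond the ambiguity interval requires a second modulation frequency or a phase-unwrapping scheme, as the paper addresses.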

  14. System and method for controlling depth of imaging in tissues using fluorescence microscopy under ultraviolet excitation following staining with fluorescing agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demos, Stavros; Levenson, Richard

    The present disclosure relates to a method for analyzing tissue specimens. In one implementation the method involves obtaining a tissue sample and exposing the sample to one or more fluorophores as contrast agents to enhance contrast of subcellular compartments of the tissue sample. The tissue sample is illuminated by an ultraviolet (UV) light having a wavelength between about 200 nm to about 400 nm, with the wavelength being selected to result in penetration to only a specified depth below a surface of the tissue sample. Inter-image operations between images acquired under different imaging parameters allow for improvement of the image quality via removal of unwanted image components. A microscope may be used to image the tissue sample and provide the image to an image acquisition system that makes use of a camera. The image acquisition system may create a corresponding image that is transmitted to a display system for processing and display.

  15. Fiber-optic annular detector array for large depth of field photoacoustic macroscopy.

    PubMed

    Bauer-Marschallinger, Johannes; Höllinger, Astrid; Jakoby, Bernhard; Burgholzer, Peter; Berer, Thomas

    2017-03-01

    We report on a novel imaging system for large depth of field photoacoustic scanning macroscopy. Instead of commonly used piezoelectric transducers, fiber-optic based ultrasound detection is applied. The optical fibers are shaped into rings and mainly receive ultrasonic signals stemming from the ring symmetry axes. Four concentric fiber-optic rings with varying diameters are used in order to increase the image quality. Imaging artifacts, originating from the off-axis sensitivity of the rings, are reduced by coherence weighting. We discuss the working principle of the system and present experimental results on tissue mimicking phantoms. The lateral resolution is estimated to be below 200 μm at a depth of 1.5 cm and below 230 μm at a depth of 4.5 cm. The minimum detectable pressure is in the order of 3 Pa. The introduced method has the potential to provide larger imaging depths than acoustic resolution photoacoustic microscopy and an imaging resolution similar to that of photoacoustic computed tomography.

  16. Achieving desired images while avoiding undesired images: exploring the role of self-monitoring in impression management.

    PubMed

    Turnley, W H; Bolino, M C

    2001-04-01

    A study was conducted to test the hypothesis that high self-monitors more effectively manage impressions than low self-monitors do. Students in work groups indicated the extent to which they used 5 impression-management tactics over the course of a semester-long project. At the project's conclusion, students provided their perceptions of the other members of their group. The relationship between impression management and image favorability was then examined across 339 student-student dyads. The results generally suggest that high self-monitors can use impression-management tactics more effectively than can low self-monitors. In particular, high self-monitors appear to be more adept than low self-monitors at using ingratiation, self-promotion, and exemplification to achieve favorable images among their colleagues.

  17. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high-speed imaging and depth-profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at a high repetition rate, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometry imaging and various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  18. Dynamic Transmit-Receive Beamforming by Spatial Matched Filtering for Ultrasound Imaging with Plane Wave Transmission.

    PubMed

    Chen, Yuling; Lou, Yang; Yen, Jesse

    2017-07-01

    During conventional ultrasound imaging, the need for multiple transmissions for one image and the time of flight for a desired imaging depth limit the frame rate of the system. Using a single plane wave pulse during each transmission followed by parallel receive processing allows for high frame rate imaging. However, image quality is degraded because of the lack of transmit focusing. Beamforming by spatial matched filtering (SMF) is a promising method which focuses ultrasonic energy using spatial filters constructed from the transmit-receive impulse response of the system. Studies by other researchers have shown that SMF beamforming can provide dynamic transmit-receive focusing throughout the field of view. In this paper, we apply SMF beamforming to plane wave transmissions (PWTs) to achieve both dynamic transmit-receive focusing at all imaging depths and high imaging frame rate (>5000 frames per second). We demonstrated the capability of the combined method (PWT + SMF) of achieving two-way focusing mathematically through analysis based on the narrowband Rayleigh-Sommerfeld diffraction theory. Moreover, the broadband performance of PWT + SMF was quantified in terms of lateral resolution and contrast from both computer simulations and experimental data. Results were compared between SMF beamforming and conventional delay-and-sum (DAS) beamforming in both simulations and experiments. At an imaging depth of 40 mm, simulation results showed a 29% lateral resolution improvement and a 160% contrast improvement with PWT + SMF. These improvements were 17% and 48% for experimental data with noise.
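
For comparison with SMF, the conventional DAS delay computation for a 0-degree plane-wave transmit can be sketched as follows (the geometry, sound speed, and names are our assumptions for illustration, not the paper's code):

```python
import numpy as np

SOUND_SPEED = 1540.0  # m/s, typical soft-tissue value

def pwt_das_delays(x, z, elem_x):
    """Two-way delays for DAS beamforming with a 0-degree plane-wave
    transmit: the wavefront reaches depth z at t = z/c, and the echo
    returns over the point-to-element distance. x, z in metres; elem_x
    is an array of element lateral positions."""
    return (z + np.sqrt(z ** 2 + (x - elem_x) ** 2)) / SOUND_SPEED
```

DAS then sums the channel signals sampled at these delays; SMF instead filters each channel with the spatial transmit-receive impulse response, which is what restores transmit focusing.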

  19. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.

  20. Latest advances in molecular imaging instrumentation.

    PubMed

    Pichler, Bernd J; Wehrl, Hans F; Judenhofer, Martin S

    2008-06-01

    This review concentrates on the latest advances in molecular imaging technology, including PET, MRI, and optical imaging. In PET, significant improvements in tumor detection and image resolution have been achieved by introducing new scintillation materials, iterative image reconstruction, and correction methods. These advances enabled the first clinical scanners capable of time-of-flight detection and incorporating point-spread-function reconstruction to compensate for depth-of-interaction effects. In the field of MRI, the most important developments in recent years have mainly been MRI systems with higher field strengths and improved radiofrequency coil technology. Hyperpolarized imaging, functional MRI, and MR spectroscopy provide molecular information in vivo. A special focus of this review article is multimodality imaging and, in particular, the emerging field of combined PET/MRI.

  1. High-frequency ultrasound annular array imaging. Part II: digital beamformer design and imaging.

    PubMed

    Hu, Chang-Hong; Snook, Kevin A; Cao, Pei-Jie; Shung, K Kirk

    2006-02-01

    This is the second part of a two-paper series reporting a recent effort in the development of a high-frequency annular array ultrasound imaging system. In this paper an imaging system composed of a six-element, 43 MHz annular array transducer, a six-channel analog front-end, a field programmable gate array (FPGA)-based beamformer, and a digital signal processor (DSP) microprocessor-based scan converter will be described. A computer is used as the interface for image display. The beamformer that applies delays to the echoes for each channel is implemented with the strategy of combining the coarse and fine delays. The coarse delays that are integer multiples of the clock periods are achieved by using a first-in-first-out (FIFO) structure, and the fine delays are obtained with a fractional delay (FD) filter. Using this principle, dynamic receiving focusing is achieved. The image from a wire phantom obtained with the imaging system was compared to that from a prototype ultrasonic backscatter microscope with a 45 MHz single-element transducer. The improved lateral resolution and depth of field from the wire phantom image were observed. Images from an excised rabbit eye sample also were obtained, and fine anatomical structures were discerned.

  2. Achieving quality in cardiovascular imaging: proceedings from the American College of Cardiology-Duke University Medical Center Think Tank on Quality in Cardiovascular Imaging.

    PubMed

    Douglas, Pamela; Iskandrian, Ami E; Krumholz, Harlan M; Gillam, Linda; Hendel, Robert; Jollis, James; Peterson, Eric; Chen, Jersey; Masoudi, Frederick; Mohler, Emile; McNamara, Robert L; Patel, Manesh R; Spertus, John

    2006-11-21

    Cardiovascular imaging has enjoyed both rapid technological advances and sustained growth, yet less attention has been focused on quality than in other areas of cardiovascular medicine. To address this deficit, representatives from cardiovascular imaging societies, private payers, government agencies, the medical imaging industry, and experts in quality measurement met, and this report provides an overview of the discussions. A consensus definition of quality in imaging and a convergence of opinion on quality measures across imaging modalities was achieved and are intended to be the start of a process culminating in the development, dissemination, and adoption of quality measures for all cardiovascular imaging modalities.

  3. Estimating Lunar Pyroclastic Deposit Depth from Imaging Radar Data: Applications to Lunar Resource Assessment

    NASA Technical Reports Server (NTRS)

    Campbell, B. A.; Stacy, N. J.; Campbell, D. B.; Zisk, S. H.; Thompson, T. W.; Hawke, B. R.

    1992-01-01

    Lunar pyroclastic deposits represent one of the primary anticipated sources of raw materials for future human settlements. These deposits are fine-grained volcanic debris layers produced by explosive volcanism contemporaneous with the early stage of mare infilling. There are several large regional pyroclastic units on the Moon (for example, the Aristarchus Plateau, Rima Bode, and Sulpicius Gallus formations), and numerous localized examples, which often occur as dark-halo deposits around endogenic craters (such as in the floor of Alphonsus Crater). Several regional pyroclastic deposits were studied with spectral reflectance techniques: the Aristarchus Plateau materials were found to be a relatively homogeneous blanket of iron-rich glasses. One such deposit was sampled at the Apollo 17 landing site, and was found to have ferrous oxide and titanium dioxide contents of 12 percent and 5 percent, respectively. While the areal extent of these deposits is relatively well defined from orbital photographs, their depths have been constrained only by a few studies of partially filled impact craters and by imaging radar data. A model for radar backscatter from mantled units applicable to both 70-cm and 12.6-cm wavelength radar data is presented. Depth estimates from such radar observations may be useful in planning future utilization of lunar pyroclastic deposits.

  4. Motionless active depth from defocus system using smart optics for camera autofocus applications

    NASA Astrophysics Data System (ADS)

    Amin, M. Junaid; Riza, Nabeel A.

    2016-04-01

    This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.

  5. Photothermal optical coherence tomography for depth-resolved imaging of mesenchymal stem cells via single wall carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Subhash, Hrebesh M.; Connolly, Emma; Murphy, Mary; Barron, Valerie; Leahy, Martin

    2014-03-01

    The progress in stem cell research over the past decade holds promise and potential to address many unmet clinical therapeutic needs. Tracking stem cells with modern imaging modalities is critically needed for optimizing stem cell therapy, as it offers insight into various underlying biological processes such as cell migration, engraftment, homing, differentiation, and function. In this study we report the feasibility of photothermal optical coherence tomography (PT-OCT) for imaging human mesenchymal stem cells (hMSCs) labeled with single-walled carbon nanotubes (SWNTs) for in vitro cell tracking in three-dimensional scaffolds. PT-OCT is a functional extension of conventional OCT with the added capability of localized detection of absorbing targets against a scattering background, providing depth-resolved molecular contrast imaging. A 91 kHz line rate, spectral-domain PT-OCT system at 1310 nm was developed to detect the photothermal signal generated by an 800 nm excitation laser. In general, MSCs do not have obvious optical absorption properties and cannot be directly visualized using PT-OCT imaging. However, the optical absorption properties of hMSCs can be modified by labeling with SWNTs. Using this approach, hMSCs were labeled with SWNTs and the cell distribution imaged in a 3D polymer scaffold using PT-OCT.

  6. Flexible non-diffractive vortex microscope for three-dimensional depth-enhanced super-localization of dielectric, metal and fluorescent nanoparticles

    NASA Astrophysics Data System (ADS)

    Bouchal, Petr; Bouchal, Zdeněk

    2017-10-01

    In the past decade, probe-based super-resolution using temporally resolved localization of emitters became a groundbreaking imaging strategy in fluorescence microscopy. Here we demonstrate a non-diffractive vortex microscope (NVM), enabling three-dimensional super-resolution fluorescence imaging and localization and tracking of metal and dielectric nanoparticles. The NVM benefits from vortex non-diffractive beams (NBs) creating a double-helix point spread function that rotates under defocusing while maintaining its size and shape unchanged. Using intrinsic properties of the NBs, the dark-field localization of weakly scattering objects is achieved in a large axial range exceeding the depth of field of the microscope objective up to 23 times. The NVM was developed using an upright Nikon Eclipse E600 microscope operating with a spiral lithographic mask optimized using Fisher information and built into an add-on imaging module or microscope objective. In evaluating the axial localization accuracy, root mean square errors below 18 nm and 280 nm were verified over depth ranges of 3.5 μm and 13.6 μm, respectively. Subwavelength gold and polystyrene beads were localized with isotropic precision below 10 nm in the axial range of 3.5 μm, with the axial precision reduced to 30 nm in the extended range of 13.6 μm. In the fluorescence imaging, localization with isotropic precision below 15 nm was demonstrated in the range of 2.5 μm, whereas in the range of 8.3 μm, a precision of 15 nm laterally and 30-50 nm axially was achieved. The tracking of nanoparticles undergoing Brownian motion was demonstrated in a volume of 14 × 10 × 16 μm³. Applicability of the NVM was tested by fluorescence imaging of LW13K2 cells and localization of cellular proteins.

  7. Optical Drug Monitoring: Photoacoustic Imaging of Nanosensors to Monitor Therapeutic Lithium In Vivo

    PubMed Central

    Cash, Kevin J.; Li, Chiye; Xia, Jun; Wang, Lihong V.; Clark, Heather A.

    2015-01-01

    Personalized medicine could revolutionize how primary care physicians treat chronic disease and how researchers study fundamental biological questions. To realize this goal, we need to develop more robust, modular tools and imaging approaches for in vivo monitoring of analytes. In this report, we demonstrate that synthetic nanosensors can measure physiologic parameters with photoacoustic contrast, and we apply that platform to continuously track lithium levels in vivo. Photoacoustic imaging achieves imaging depths that are unattainable with fluorescence or multiphoton microscopy. We validated with optical measurements the photoacoustic results, which illustrate the superior imaging depth and quality of photoacoustic imaging. This powerful combination of techniques will unlock the ability to measure analyte changes in deep tissue and will open up photoacoustic imaging as a diagnostic tool for continuous physiological tracking of a wide range of analytes. PMID:25588028

  8. Depth-Based Detection of Standing-Pigs in Moving Noise Environments.

    PubMed

    Kim, Jinseong; Chung, Yeonwoo; Choi, Younchang; Sa, Jaewon; Kim, Heegon; Chung, Yongwha; Park, Daihee; Kim, Hakjae

    2017-11-29

    In a surveillance camera environment, the detection of standing-pigs in real-time is an important issue towards the final goal of 24-h tracking of individual pigs. In this study, we focus on depth-based detection of standing-pigs with "moving noises", which appear every night in a commercial pig farm, but have not been reported yet. We first apply a spatiotemporal interpolation technique to remove the moving noises occurring in the depth images. Then, we detect the standing-pigs by utilizing the undefined depth values around them. Our experimental results show that this method is effective for detecting standing-pigs at night, in terms of both cost-effectiveness (using a low-cost Kinect depth sensor) and accuracy (i.e., 94.47%), even with severe moving noises occluding up to half of an input depth image. Furthermore, without any time-consuming technique, the proposed method can be executed in real-time.
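
    The idea of interpolating over undefined depth values can be sketched as follows. This is an illustrative simplification, not the authors' spatiotemporal method: it fills dropouts (depth 0, as returned by low-cost depth sensors) with a per-pixel temporal median over neighboring frames, and the frame count and depth values are invented.

```python
import numpy as np

def fill_depth_temporal(frames):
    """frames: (T, H, W) depth stack; 0 marks an undefined depth sample."""
    frames = frames.astype(np.float32)
    frames[frames == 0] = np.nan
    # Per-pixel temporal median, ignoring undefined samples.
    filled = np.nanmedian(frames, axis=0)
    return np.nan_to_num(filled, nan=0.0)   # pixels never valid stay 0

# Toy stack: constant 1000 mm depth with scattered dropouts in two frames.
stack = np.full((5, 4, 4), 1000, dtype=np.uint16)
stack[0, 1, 1] = 0
stack[2, 1, 1] = 0
out = fill_depth_temporal(stack)
```

    A real implementation would also exploit spatial neighborhoods; the temporal median alone already recovers pixels that are valid in most frames.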

  9. High-speed image processing system and its micro-optics application

    NASA Astrophysics Data System (ADS)

    Ohba, Kohtaro; Ortega, Jesus C. P.; Tanikawa, Tamio; Tanie, Kazuo; Tajima, Kenji; Nagai, Hiroshi; Tsuji, Masataka; Yamada, Shigeru

    2003-07-01

    In this paper, a new application system with high-speed photography, i.e. an observational system for tele-micro-operation, is proposed, based on a dynamic focusing system and a high-speed image processing system using the "Depth From Focus (DFF)" criterion. In micro-operations such as microsurgery and DNA manipulation, the small depth of focus of the microscope hinders observation. For example, if the focus is on the object, the actuator cannot be seen with the microscope; conversely, if the focus is on the actuator, the object cannot be observed. In this sense, the "all-in-focus image," which holds in-focus texture over the entire image, is useful for observing microenvironments under the microscope. It is also important to obtain the "depth map," which can show the 3D micro virtual environment in real time so that micro objects can be manipulated intuitively. To realize real-time micro-operation with the DFF criterion, which has to integrate several images to obtain the "all-in-focus image" and "depth map," an image capture and processing system running at no less than 240 frames per second is required. This paper first briefly reviews the "depth from focus" criterion used to achieve the all-in-focus image and the 3D microenvironment reconstruction simultaneously. After discussing the problems with our past system, a new frame-rate system is constructed with a high-speed video camera and FPGA hardware at 240 frames per second. To apply this system to a real microscope, a new "ghost filtering" technique for reconstructing the all-in-focus image is proposed. Finally, micro observation shows the validity of this system.
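
    The DFF criterion described above can be sketched directly: a focus measure is evaluated per pixel across the focal stack, and its argmax yields both a depth map and the all-in-focus image. This is a minimal illustration with an assumed focus measure (absolute Laplacian response), not the paper's FPGA implementation or its ghost-filtering step.

```python
import numpy as np

def laplacian(img):
    # 4-neighbour discrete Laplacian with edge padding.
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * p[1:-1, 1:-1]

def depth_from_focus(stack):
    """stack: (N, H, W) images of the same scene focused at N depths."""
    sharp = np.stack([np.abs(laplacian(f)) for f in stack])   # focus measure
    depth = np.argmax(sharp, axis=0)                          # per-pixel best focus
    aif = np.take_along_axis(stack, depth[None], axis=0)[0]   # all-in-focus image
    return depth, aif

# Toy focal stack: frame 0 is sharp (textured) on the left half,
# frame 1 is sharp on the right half; out-of-focus regions are flat.
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(np.float64)
f0 = np.full((8, 8), 0.5)
f0[:, :4] = checker[:, :4]
f1 = np.full((8, 8), 0.5)
f1[:, 4:] = checker[:, 4:]
depth, aif = depth_from_focus(np.stack([f0, f1]))
```

    The depth map assigns each pixel the index of the frame where local sharpness peaks, which is exactly the per-pixel integration over the stack that motivates the 240 frames per second requirement.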

  10. TU-AB-BRA-12: Quality Assurance of An Integrated Magnetic Resonance Image Guided Adaptive Radiotherapy Machine Using Cherenkov Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreozzi, J; Bruza, P; Saunders, S

    Purpose: To investigate the viability of using Cherenkov imaging as a fast and robust method for quality assurance tests in the presence of a magnetic field, where other instruments can be limited. Methods: Water tank measurements were acquired from a clinically utilized adaptive magnetic resonance image guided radiation therapy (MR-IGRT) machine with three multileaf-collimator equipped 60Co sources. Cherenkov imaging used an intensified charge coupled device (ICCD) camera placed 3.5 m from the treatment isocenter, looking down the bore of the 0.35 T MRI into a water tank. Images were post-processed to make quantitative comparisons of Cherenkov light intensity with both film and treatment planning system predictions, in terms of percent depth dose curves as well as lateral beam profile measurements. A TG-119 commissioning test plan (C4: C-Shape) was imaged in real-time at 6.33 frames per second to investigate the temporal and spatial resolution of the Cherenkov imaging technique. Results: A 0.33 mm/pixel Cherenkov image resolution was achieved across 1024×1024 pixels in this setup. Analysis of the Cherenkov image of a 10.5×10.5 cm treatment beam in the water tank successfully measured the beam width at the depth of maximum dose within 1.2% of the film measurement at the same point. The percent depth dose curve for the same beam was on average within 2% of ionization chamber measurements for corresponding depths between 3–100 mm. Cherenkov video of the TG-119 test plan provided qualitative agreement with the treatment planning system dose predictions, and a novel temporal verification of the treatment. Conclusions: Cherenkov imaging was successfully used to make QA measurements of percent depth dose curves and cross beam profiles of MR-IGRT radiotherapy machines after only several seconds of beam-on time and data capture; both curves were extracted from the same data set. Video-rate imaging of a dynamic treatment plan provided new information regarding

  11. Missing depth cues in virtual reality limit performance and quality of three dimensional reaching movements

    PubMed Central

    Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter

    2018-01-01

    Background Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and quality of reaching movements. Methods We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which were realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second and third screen groups, respectively. Additionally, they could rely on stereopsis and motion parallax due to head movements. Results and conclusion All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance. Only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is in the main scope of a study, we suggest applying a head-mounted display. Otherwise, when two-dimensional screens are used, achievable performance

  12. 3D imaging of cleared human skin biopsies using light-sheet microscopy: A new way to visualize in-depth skin structure.

    PubMed

    Abadie, S; Jardet, C; Colombelli, J; Chaput, B; David, A; Grolleau, J-L; Bedos, P; Lobjois, V; Descargues, P; Rouquette, J

    2018-05-01

    Human skin is composed of the superimposition of tissue layers of various thicknesses and components. Histological staining of skin sections is the benchmark approach to analyse the organization and integrity of human skin biopsies; however, this approach does not allow 3D tissue visualization. Alternatively, confocal or two-photon microscopy is an effective approach to perform fluorescence-based 3D imaging. However, owing to light scattering, these methods display limited light penetration in depth. The objectives of this study were therefore to combine optical clearing and light-sheet fluorescence microscopy (LSFM) to perform in-depth optical sectioning of 5 mm-thick human skin biopsies and generate 3D images of entire human skin biopsies. A benzyl alcohol and benzyl benzoate solution was used to successfully optically clear entire formalin-fixed human skin biopsies, making them transparent. In-depth optical sectioning was performed with LSFM on the basis of tissue-autofluorescence observations. 3D image analysis of optical sections generated with LSFM was performed by using the Amira® software. This new approach allowed us to observe in situ the different layers and compartments of human skin, such as the stratum corneum, the dermis and epidermal appendages. With this approach, we easily performed 3D reconstruction to visualise an entire human skin biopsy. Finally, we demonstrated that this method is useful to visualise and quantify histological anomalies, such as epidermal hyperplasia. The combination of optical clearing and LSFM has new applications in dermatology and dermatological research by allowing 3D visualization and analysis of whole human skin biopsies. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Long-range depth profiling of camouflaged targets using single-photon detection

    NASA Astrophysics Data System (ADS)

    Tobin, Rachael; Halimi, Abderrahim; McCarthy, Aongus; Ren, Ximing; McEwan, Kenneth J.; McLaughlin, Stephen; Buller, Gerald S.

    2018-03-01

    We investigate the reconstruction of depth and intensity profiles from data acquired using a custom-designed time-of-flight scanning transceiver based on the time-correlated single-photon counting technique. The system had an operational wavelength of 1550 nm and used a Peltier-cooled InGaAs/InP single-photon avalanche diode detector. Measurements were made of human figures, in plain view and obscured by camouflage netting, from a stand-off distance of 230 m in daylight using only submilliwatt average optical powers. These measurements were analyzed using a pixelwise cross correlation approach and compared to analysis using a bespoke algorithm designed for the restoration of multilayered three-dimensional light detection and ranging images. This algorithm is based on the optimization of a convex cost function composed of a data fidelity term and regularization terms, and the results obtained show that it achieves significant improvements in image quality for multidepth scenarios and for reduced acquisition times.

  14. Full range line-field parallel swept source imaging utilizing digital refocusing

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-12-01

    We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.
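
    The defocus-correcting Fourier phase term can be sketched with a scalar angular-spectrum model. This is an illustrative simplification, not the LPSI algorithm: the wavelength, pixel pitch, and defocus distance are assumed, and a numerically defocused random complex field stands in for real interferometric data.

```python
import numpy as np

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum propagation of a complex field over distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(kz2, 0.0))
    H = np.exp(1j * kz * dz)           # propagation transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(1)
field = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
# Defocus the field numerically, then refocus it by applying the
# conjugate phase (negative propagation distance).
blurred = propagate(field, dz=100e-6, wavelength=1.3e-6, dx=5e-6)
refocused = propagate(blurred, dz=-100e-6, wavelength=1.3e-6, dx=5e-6)
```

    Because the conjugate phase exactly cancels the defocus phase for every propagating spatial frequency, the refocused field matches the original to numerical precision; this is the mechanism by which complex-valued interferometric data permits in-focus depth range extension.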

  15. Quantitative, depth-resolved determination of particle motion using multi-exposure, spatial frequency domain laser speckle imaging.

    PubMed

    Rice, Tyler B; Kwan, Elliott; Hayakawa, Carole K; Durkin, Anthony J; Choi, Bernard; Tromberg, Bruce J

    2013-01-01

    Laser Speckle Imaging (LSI) is a simple, noninvasive technique for rapid imaging of particle motion in scattering media such as biological tissue. LSI is generally used to derive a qualitative index of relative blood flow due to unknown impact from several variables that affect speckle contrast. These variables may include optical absorption and scattering coefficients, multi-layer dynamics including static, non-ergodic regions, and systematic effects such as laser coherence length. In order to account for these effects and move toward quantitative, depth-resolved LSI, we have developed a method that combines Monte Carlo modeling, multi-exposure speckle imaging (MESI), spatial frequency domain imaging (SFDI), and careful instrument calibration. Monte Carlo models were used to generate total and layer-specific fractional momentum transfer distributions. This information was used to predict speckle contrast as a function of exposure time, spatial frequency, layer thickness, and layer dynamics. To verify with experimental data, controlled phantom experiments with characteristic tissue optical properties were performed using a structured light speckle imaging system. Three main geometries were explored: 1) diffusive dynamic layer beneath a static layer, 2) static layer beneath a diffusive dynamic layer, and 3) directed flow (tube) submerged in a dynamic scattering layer. Data fits were performed using the Monte Carlo model, which accurately reconstructed the type of particle flow (diffusive or directed) in each layer, the layer thickness, and absolute flow speeds to within 15% or better.
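
    The speckle contrast statistic underlying LSI can be computed directly. This toy example is not the MESI/SFDI pipeline: it only evaluates the local contrast K = σ/μ in sliding windows and uses frame averaging as a stand-in for motion blur, which lowers K toward 1/√N for N averaged frames.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=7):
    """Local speckle contrast K = std/mean over sliding win x win windows."""
    w = sliding_window_view(img, (win, win))
    return w.std(axis=(-1, -2)) / w.mean(axis=(-1, -2))

rng = np.random.default_rng(0)
# Fully developed speckle intensity is exponentially distributed (K near 1).
static = rng.exponential(1.0, (64, 64))
# Averaging 20 independent speckle realizations mimics blurring by motion
# during the exposure, reducing the measured contrast.
moving = rng.exponential(1.0, (20, 64, 64)).mean(axis=0)

K_static = speckle_contrast(static).mean()
K_moving = speckle_contrast(moving).mean()
```

    MESI extends this by measuring K at several exposure times and fitting a speckle decorrelation model; the point illustrated here is only that particle motion systematically lowers the windowed contrast.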

  16. Snow Depth Depicted on Mt. Lyell by NASA Airborne Snow Observatory

    NASA Image and Video Library

    2013-05-02

    A natural color image of Mt. Lyell, the highest point in the Tuolumne River Basin (top image), is compared with a three-dimensional color composite image of Mt. Lyell from the NASA Airborne Snow Observatory depicting snow depth (bottom image).

  17. Oriented modulation for watermarking in direct binary search halftone images.

    PubMed

    Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der

    2012-09-01

    In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding, while maintaining high image quality. This technique is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel oriented high-efficient direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, the least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and the naïve Bayes classifier is used to analyze the extracted features and ultimately to decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and realizes the processing efficiency and robustness to be adapted in printing applications.

  18. Automatic laser welding and milling with in situ inline coherent imaging.

    PubMed

    Webster, P J L; Wright, L G; Ji, Y; Galbraith, C M; Kinross, A W; Van Vlack, C; Fraser, J M

    2014-11-01

    Although new affordable high-power laser technologies enable many processing applications in science and industry, depth control remains a serious technical challenge. In this Letter we show that inline coherent imaging (ICI), with line rates up to 312 kHz and microsecond-duration capture times, is capable of directly measuring laser penetration depth, in a process as violent as kW-class keyhole welding. We exploit ICI's high speed, high dynamic range, and robustness to interference from other optical sources to achieve automatic, adaptive control of laser welding, as well as ablation, achieving 3D micron-scale sculpting in vastly different heterogeneous biological materials.

  19. Wavelet-based statistical classification of skin images acquired with reflectance confocal microscopy

    PubMed Central

    Halimi, Abdelghafour; Batatia, Hadj; Le Digabel, Jimmy; Josse, Gwendal; Tourneret, Jean Yves

    2017-01-01

    Detecting skin lentigo in reflectance confocal microscopy images is an important and challenging problem. This imaging modality has not yet been widely investigated for this problem, and few automatic processing techniques exist. They are mostly based on machine learning approaches and rely on numerous classical image features that lead to high computational costs given the very large resolution of these images. This paper presents a detection method with very low computational complexity that is able to identify the skin depth at which the lentigo can be detected. The proposed method performs multiresolution decomposition of the image obtained at each skin depth. The distribution of image pixels at a given depth can be approximated accurately by a generalized Gaussian distribution whose parameters depend on the decomposition scale, resulting in a very-low-dimension parameter space. SVM classifiers are then investigated to classify the scale parameter of this distribution, allowing real-time detection of lentigo. The method is applied to 45 healthy and lentigo patients from a clinical study, where a sensitivity of 81.4% and a specificity of 83.3% are achieved. Our results show that lentigo is identifiable at depths between 50 μm and 60 μm, corresponding to the average location of the dermoepidermal junction. This result is in agreement with the clinical practices that characterize lentigo by assessing the disorganization of the dermoepidermal junction. PMID:29296480
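
    The low-dimensional feature used here, the scale parameter of a generalized Gaussian distribution (GGD), can be estimated by simple moment matching when the shape parameter is known. This is an illustrative sketch with assumed parameters, not the paper's estimator: for shape β, the GGD variance is α²Γ(3/β)/Γ(1/β), so α follows from the sample variance.

```python
import numpy as np
from math import gamma

def ggd_scale(x, beta):
    """Moment-matching estimate of the GGD scale alpha for known shape beta."""
    return np.sqrt(np.var(x) * gamma(1.0 / beta) / gamma(3.0 / beta))

rng = np.random.default_rng(0)
# beta = 1 reduces the GGD to a Laplace distribution, whose numpy scale
# parameter b equals alpha; here alpha = 2 is the ground truth.
coeffs = rng.laplace(0.0, 2.0, 200_000)   # stand-in "wavelet coefficients"
alpha_hat = ggd_scale(coeffs, beta=1.0)
```

    In the paper's setting, such scale estimates computed per decomposition level and per skin depth form the feature vector fed to the SVM classifier.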

  20. Ultra-deep Large Binocular Camera U-band Imaging of the GOODS-North Field: Depth Versus Resolution

    NASA Astrophysics Data System (ADS)

    Ashcraft, Teresa A.; Windhorst, Rogier A.; Jansen, Rolf A.; Cohen, Seth H.; Grazian, Andrea; Paris, Diego; Fontana, Adriano; Giallongo, Emanuele; Speziali, Roberto; Testa, Vincenzo; Boutsia, Konstantina; O’Connell, Robert W.; Rutkowski, Michael J.; Ryan, Russell E.; Scarlata, Claudia; Weiner, Benjamin

    2018-06-01

    We present a study of the trade-off between depth and resolution using a large number of U-band imaging observations in the GOODS-North field from the Large Binocular Camera (LBC) on the Large Binocular Telescope (LBT). Having acquired over 30 hr of data (315 images with 5–6 minute exposures), we generated multiple image mosaics, starting with the best atmospheric seeing images (FWHM ≲ 0.″8), which constitute ∼10% of the total data set. For subsequent mosaics, we added in data with larger seeing values until the final, deepest mosaic included all images with FWHM ≲ 1.″8 (∼94% of the total data set). From the mosaics, we made object catalogs to compare the optimal-resolution, yet shallower image to the lower-resolution but deeper image. We show that the number counts for both images are ∼90% complete to U_AB ≲ 26 mag. Fainter than U_AB ∼ 27 mag, the object counts from the optimal-resolution image start to drop off dramatically (by 90% between U_AB = 27 and 28 mag), while the deepest image, with better surface-brightness sensitivity (μ_U(AB) ≲ 32 mag arcsec⁻²), shows a more gradual drop (10% between U_AB ≃ 27 and 28 mag). For the brightest galaxies within the GOODS-N field, structure and clumpy features within the galaxies are more prominent in the optimal-resolution image compared to the deeper mosaics. We conclude that for studies of brighter galaxies and features within them, the optimal-resolution image should be used. However, to fully explore and understand the faintest objects, the deeper imaging with lower resolution is also required. Finally, we find, for 220 brighter galaxies with U_AB ≲ 23 mag, only marginal differences in total flux between the optimal-resolution and lower-resolution light profiles to μ_U(AB) ≲ 32 mag arcsec⁻². In only 10% of the cases are the total-flux differences larger than 0.5 mag. This helps constrain how much flux can be missed from galaxy outskirts, which is important for studies of the

  1. High resolution and deep tissue imaging using a near infrared acoustic resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Moothanchery, Mohesh; Sharma, Arunima; Periyasamy, Vijitha; Pramanik, Manojit

    2018-02-01

It is always a great challenge for purely optical techniques to maintain good resolution and imaging depth at the same time. Photoacoustic imaging is an emerging technique that can overcome this limitation through pulsed light illumination and acoustic detection. Here, we report a near-infrared acoustic-resolution photoacoustic microscopy (NIR-AR-PAM) system with a 30 MHz transducer and 1064 nm illumination, which can achieve a lateral resolution of around 88 μm and an imaging depth of 9.2 mm. Compared to visible light, an NIR beam can penetrate deeper into biological tissue due to weaker optical attenuation. In this work, we also demonstrated the in vivo imaging capability of NIR-AR-PAM by near-infrared detection of the sentinel lymph node (SLN) with black ink as an exogenous photoacoustic contrast agent in a rodent model.

  2. An efficient hole-filling method based on depth map in 3D view generation

    NASA Astrophysics Data System (ADS)

    Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

A new virtual view is synthesized through depth image based rendering (DIBR) using a single color image and its associated depth map in 3D view generation. Holes are unavoidably generated in the 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address this problem. First, we improve the DIBR process by proposing a one-to-four (OTF) algorithm. The "z-buffer" algorithm is used to solve the overlap problem. Then, based on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses the information in the depth map to handle the image after DIBR. To improve the accuracy of the virtual image, inpainting starts from the background side. In the calculation of the priority, in addition to the confidence term and the data term, we add a depth term. In the search for the most similar patch in the source region, we define a depth similarity to improve the accuracy of the search. Experimental results show that the proposed method can effectively improve the quality of the 3D virtual view both subjectively and objectively.
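The priority computation described in this abstract can be illustrated in a few lines. The exact weighting of the added depth term is not specified here, so the multiplicative combination below and the toy values are illustrative assumptions (the classic Criminisi priority is P(p) = C(p)·D(p)):

```python
import numpy as np

def priority(C, D, Z):
    """Hypothetical fill-front priority: the classic Criminisi priority is
    P = C * D (confidence x data/isophote term); a depth term Z favoring
    background (far) patches is folded in here as an extra factor."""
    return C * D * Z

# Toy example: two candidate patch centers on the fill front.
C = np.array([0.9, 0.6])   # confidence term (fraction of known pixels)
D = np.array([0.5, 0.8])   # data term (isophote strength at the front)
Z = np.array([0.2, 0.9])   # normalized depth term (0 = near, 1 = far)
p = priority(C, D, Z)
print(int(np.argmax(p)))   # 1 -> the background-side patch is filled first
```

Filling from the background side first, as the abstract describes, falls out naturally once far patches receive higher priority.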

  3. 40 MHz high-frequency ultrafast ultrasound imaging.

    PubMed

    Huang, Chih-Chung; Chen, Pei-Yu; Peng, Po-Hsun; Lee, Po-Yang

    2017-06-01

Ultrafast high-frame-rate ultrasound imaging based on coherent plane-wave compounding has been developed for many biomedical applications. Most coherent plane-wave compounding systems operate at 3-15 MHz, and the image resolution in this frequency range is not sufficient for visualizing tissue microstructure. Therefore, the purpose of this study was to implement high-frequency ultrafast ultrasound imaging operating at 40 MHz. Plane-wave compounding imaging and conventional multifocus B-mode imaging were performed using the Field II toolbox of MATLAB in a simulation study. In experiments, plane-wave compounding images were obtained from a 256-channel ultrasound research platform with a 40 MHz array transducer. All images were produced from point-spread functions and cyst phantoms. The in vivo experiment was performed on a zebrafish. Since high-frequency ultrasound exhibits lower penetration, chirp excitation was applied to increase the imaging depth in simulation. The simulation results showed that a lateral resolution of up to 66.93 μm and a contrast of up to 56.41 dB were achieved when compounding 75 plane-wave angles. The experimental results showed that a lateral resolution of up to 74.83 μm and a contrast of up to 44.62 dB were achieved when compounding 75 plane-wave angles. The dead zone and compounding noise are about 1.2 mm and 2.0 mm in depth, respectively, for experimental compounding imaging. The structure of the zebrafish heart was observed clearly using plane-wave compounding imaging. The use of fewer than 23 angles for compounding allowed a frame rate higher than 1000 frames per second. However, compounding imaging exhibits a similar lateral resolution of about 72 μm once more than 10 plane-wave angles are used. This study demonstrates the highest operating frequency for ultrafast high-frame-rate ultrasound imaging. © 2017 American Association of Physicists in Medicine.
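The frame-rate trade-off mentioned at the end follows from one transmit event per plane-wave angle; a minimal sketch (the 30 kHz pulse-repetition frequency is an assumed value, not taken from the abstract):

```python
def compound_frame_rate(prf_hz: float, n_angles: int) -> float:
    """Frame rate of coherent plane-wave compounding: each compounded
    frame requires n_angles successive plane-wave transmissions."""
    return prf_hz / n_angles

# With an assumed PRF of 30 kHz, 23 compounding angles still exceed
# the 1000 frames-per-second figure quoted in the abstract.
print(compound_frame_rate(30_000, 23))  # ~1304 fps
```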

  4. Integrated interpretation of overlapping AEM datasets achieved through standardisation

    NASA Astrophysics Data System (ADS)

    Sørensen, Camilla C.; Munday, Tim; Heinson, Graham

    2015-12-01

    Numerous airborne electromagnetic surveys have been acquired in Australia using a variety of systems. It is not uncommon to find two or more surveys covering the same ground, but acquired using different systems and at different times. Being able to combine overlapping datasets and get a spatially coherent resistivity-depth image of the ground can assist geological interpretation, particularly when more subtle geophysical responses are important. Combining resistivity-depth models obtained from the inversion of airborne electromagnetic (AEM) data can be challenging, given differences in system configuration, geometry, flying height and preservation or monitoring of system acquisition parameters such as waveform. In this study, we define and apply an approach to overlapping AEM surveys, acquired by fixed wing and helicopter time domain electromagnetic (EM) systems flown in the vicinity of the Goulds Dam uranium deposit in the Frome Embayment, South Australia, with the aim of mapping the basement geometry and the extent of the Billeroo palaeovalley. Ground EM soundings were used to standardise the AEM data, although results indicated that only data from the REPTEM system needed to be corrected to bring the two surveys into agreement and to achieve coherent spatial resistivity-depth intervals.

  5. Hydrologic controls on equilibrium soil depths

    NASA Astrophysics Data System (ADS)

    Nicótina, L.; Tarboton, D. G.; Tesfa, T. K.; Rinaldo, A.

    2011-04-01

This paper deals with modeling the mutual feedbacks between runoff production and the geomorphological processes and attributes that lead to patterns of equilibrium soil depth. Our primary goal is to describe spatial patterns of soil depth resulting from long-term interactions between hydrologic forcings and soil production, erosion, and sediment transport processes under the framework of landscape dynamic equilibrium. Another goal is to set the premises for exploring the role of soil depths in shaping the hydrologic response of a catchment. The relevance of the study stems from the major improvement in hydrologic predictions for ungauged basins that would be achieved by directly using soil depths derived from remotely measured and objectively manipulated geomorphic features. Hydrological processes are here described by explicitly accounting for local soil depths and detailed catchment topography. Geomorphological processes are described by means of well-studied geomorphic transport laws. The modeling approach is applied to the semiarid Dry Creek Experimental Watershed, located near Boise, Idaho. Modeled soil depths are compared with field data obtained from an extensive survey of the catchment. Our results show the ability of the model to describe properly the mean soil depth and the broad features of the distribution of measured data. However, local comparisons show significant scatter whose origins are discussed.

  6. Improved Boundary Layer Depth Retrievals from MPLNET

    NASA Technical Reports Server (NTRS)

    Lewis, Jasper R.; Welton, Ellsworth J.; Molod, Andrea M.; Joseph, Everette

    2013-01-01

Continuous lidar observations of planetary boundary layer (PBL) depth have been made at the Micropulse Lidar Network (MPLNET) site in Greenbelt, MD since April 2001. However, because of issues with the operational PBL depth algorithm, the data are not reliable for determining seasonal and diurnal trends. Therefore, an improved PBL depth algorithm has been developed which uses a combination of the wavelet technique and image processing. The new algorithm is less susceptible to contamination by clouds and residual layers and, in general, produces lower PBL depths. A 2010 comparison shows that the operational algorithm overestimates the daily mean PBL depth relative to the improved algorithm (1.85 and 1.07 km, respectively). The improved MPLNET PBL depths are validated against radiosonde comparisons, which suggest the algorithm performs well in determining the depth of a fully developed PBL. A comparison with the Goddard Earth Observing System, version 5 (GEOS-5) model suggests that the model may underestimate the maximum daytime PBL depth by 410 m during the spring and summer. The best agreement between MPLNET and GEOS-5 occurred during the fall, and they differed the most in the winter.
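The wavelet technique referred to above is commonly implemented as a Haar covariance transform of the lidar backscatter profile, with the PBL top taken where the transform peaks (the largest backscatter decrease). This is a generic sketch on synthetic data, not the MPLNET production algorithm:

```python
import numpy as np

def haar_covariance(profile, z, a):
    """Haar wavelet covariance transform: for each candidate altitude b,
    compare mean backscatter in [b - a/2, b) against [b, b + a/2];
    a sharp decrease (aerosol-laden PBL below clean free troposphere)
    gives a large positive value."""
    W = np.full(z.shape, np.nan)
    for i, b in enumerate(z):
        below = (z >= b - a / 2) & (z < b)
        above = (z >= b) & (z <= b + a / 2)
        if below.any() and above.any():
            W[i] = profile[below].mean() - profile[above].mean()
    return W

# Synthetic profile: strong aerosol backscatter below 1200 m, clean air above.
z = np.linspace(0, 3000, 301)                        # altitude grid [m]
prof = np.where(z < 1200, 1.0, 0.2) + 0.01 * np.sin(z / 50)
W = haar_covariance(prof, z, a=300.0)
pbl_top = z[np.nanargmax(W)]
print(pbl_top)  # recovers the 1200 m layer top
```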

  7. Focusing and depth of field in photography: application in dermatology practice.

    PubMed

    Taheri, Arash; Yentzer, Brad A; Feldman, Steven R

    2013-11-01

    Conventional photography obtains a sharp image of objects within a given 'depth of field'; objects not within the depth of field are out of focus. In recent years, digital photography revolutionized the way pictures are taken, edited, and stored. However, digital photography does not result in a deeper depth of field or better focusing. In this article, we briefly review the concept of depth of field and focus in photography as well as new technologies in this area. A deep depth of field is used to have more objects in focus; a shallow depth of field can emphasize a subject by blurring the foreground and background objects. The depth of field can be manipulated by adjusting the aperture size of the camera, with smaller apertures increasing the depth of field at the cost of lower levels of light capture. Light-field cameras are a new generation of digital cameras that offer several new features, including the ability to change the focus on any object in the image after taking the photograph. Understanding depth of field and camera technology helps dermatologists to capture their subjects in focus more efficiently. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
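The aperture/depth-of-field relationship described above can be made concrete with the standard thin-lens formulas (hyperfocal distance H = f²/(N·c) + f); the numbers below (50 mm lens, 0.03 mm circle of confusion, 2 m subject distance) are illustrative, not taken from the article:

```python
def depth_of_field(f, N, c, s):
    """Near/far limits of acceptable sharpness (all lengths in mm).
    f: focal length, N: f-number (aperture), c: circle of confusion,
    s: subject distance."""
    H = f**2 / (N * c) + f                       # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

# Stopping down from f/2.8 to f/11 widens the zone of sharpness,
# at the cost of light capture, as the review notes.
n1, f1 = depth_of_field(50, 2.8, 0.03, 2000)
n2, f2 = depth_of_field(50, 11, 0.03, 2000)
print(round(f1 - n1), round(f2 - n2))  # DOF in mm: narrow vs. several times wider
```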

  8. Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy

    PubMed Central

    Greenbaum, Alon; Luo, Wei; Su, Ting-Wei; Göröcs, Zoltán; Xue, Liang; Isikman, Serhan O; Coskun, Ahmet F; Mudanyali, Onur; Ozcan, Aydogan

    2012-01-01

    We discuss unique features of lens-free computational imaging tools and report some of their emerging results for wide-field on-chip microscopy, such as the achievement of a numerical aperture (NA) of ~0.8–0.9 across a field of view (FOV) of more than 20 mm2 or an NA of ~0.1 across a FOV of ~18 cm2, which corresponds to an image with more than 1.5 gigapixels. We also discuss the current challenges that these computational on-chip microscopes face, shedding light on their future directions and applications. PMID:22936170

  9. Enhanced Depth SD-OCT Images Reveal Characteristic Choroidal Changes in Patients With Vogt-Koyanagi-Harada Disease.

    PubMed

    Li, Mei; Liu, Qiuhui; Luo, Yan; Li, Yonghao; Lin, Shaofen; Lian, Ping; Yang, Qiufen; Li, Xiaofang; Liu, Xialin; Sadda, SriniVas; Lu, Lin

    2016-11-01

To identify characteristic choroidal changes in patients with Vogt-Koyanagi-Harada (VKH) disease at different stages. Fifty-four patients with VKH in the acute uveitic or convalescent stages, 24 patients with central serous chorioretinopathy (CSC), and 54 normal participants were enrolled in this prospective, observational study. Enhanced depth imaging spectral-domain optical coherence tomography scans were captured for all subjects to allow comparison of choroidal morphological findings. Numerous round or oval hyperreflective profiles with hyporeflective cores, corresponding to choroidal vessels, were observed in the choroid of control participants and patients with CSC, whereas the numbers of these profiles were markedly decreased in the choroid of VKH patients in both the acute uveitic and convalescent stages. A reduction in vascular profiles in the choroid is observed in VKH and may aid in differentiation from disorders such as CSC. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:1004-1012.]. Copyright 2016, SLACK Incorporated.

  10. Processing vertical size disparities in distinct depth planes.

    PubMed

    Duke, Philip A; Howard, Ian P

    2012-08-17

    A textured surface appears slanted about a vertical axis when the image in one eye is horizontally enlarged relative to the image in the other eye. The surface appears slanted in the opposite direction when the same image is vertically enlarged. Two superimposed textured surfaces with different horizontal size disparities appear as two surfaces that differ in slant. Superimposed textured surfaces with equal and opposite vertical size disparities appear as a single frontal surface. The vertical disparities are averaged. We investigated whether vertical size disparities are averaged across two superimposed textured surfaces in different depth planes or whether they induce distinct slants in the two depth planes. In Experiment 1, two superimposed textured surfaces with different vertical size disparities were presented in two depth planes defined by horizontal disparity. The surfaces induced distinct slants when the horizontal disparity was more than ±5 arcmin. Thus, vertical size disparities are not averaged over surfaces with different horizontal disparities. In Experiment 2 we confirmed that vertical size disparities are processed in surfaces away from the horopter, so the results of Experiment 1 cannot be explained by the processing of vertical size disparities in a fixated surface only. Together, these results show that vertical size disparities are processed separately in distinct depth planes. The results also suggest that vertical size disparities are not used to register slant globally by their effect on the registration of binocular direction of gaze.

  11. Detailed imaging of flowing structures at depth using microseismicity: a tool for site investigation?

    NASA Astrophysics Data System (ADS)

    Pytharouli, S.; Lunn, R. J.; Shipton, Z. K.

    2011-12-01

Field evidence shows that faults and fractures can act as focused pathways or barriers for fluid migration. This is an important property for modern engineering problems, e.g., CO2 sequestration, geological radioactive waste disposal, geothermal energy exploitation, land reclamation and remediation. Such applications require detailed characterization of the location, orientation and hydraulic properties of existing fractures. These investigations are costly, requiring the hire of expensive equipment (excavators or drill rigs), which incurs standing charges when not in use. In addition, they only provide information for discrete sample 'windows'. Non-intrusive methods have the ability to gather information across an entire area. Methods including electrical resistivity/conductivity and ground penetrating radar (GPR) have been used as tools for site investigations. Their imaging ability is often restricted by unfavourable on-site conditions, e.g., GPR is not useful in cases where a layer of clay or reinforced concrete is present. Our research has shown that high quality seismic data can be successfully used in the detailed imaging of sub-surface structures at depth; using induced microseismicity data recorded beneath the Açu reservoir in Brazil, we identified orientations and values of average permeability of open shear fractures at depths up to 2.5 km. Could microseismicity also provide information on fracture width in terms of stress drops? First results from numerical simulations showed that higher stress drop values correspond to narrower fractures. These results were consistent with geological field observations. This study highlights the great potential of using microseismicity data as a supplementary tool for site investigation. Individual large-scale shear fractures in large rock volumes cannot currently be identified by any other geophysical dataset. The resolution of the method is restricted by the detection threshold of the local

  12. X-ray imaging for security applications

    NASA Astrophysics Data System (ADS)

    Evans, J. Paul

    2004-01-01

    The X-ray screening of luggage by aviation security personnel may be badly hindered by the lack of visual cues to depth in an image that has been produced by transmitted radiation. Two-dimensional "shadowgraphs" with "organic" and "metallic" objects encoded using two different colors (usually orange and blue) are still in common use. In the context of luggage screening there are no reliable cues to depth present in individual shadowgraph X-ray images. Therefore, the screener is required to convert the 'zero depth resolution' shadowgraph into a three-dimensional mental picture to be able to interpret the relative spatial relationship of the objects under inspection. Consequently, additional cognitive processing is required e.g. integration, inference and memory. However, these processes can lead to serious misinterpretations of the actual physical structure being examined. This paper describes the development of a stereoscopic imaging technique enabling the screener to utilise binocular stereopsis and kinetic depth to enhance their interpretation of the actual nature of the objects under examination. Further work has led to the development of a technique to combine parallax data (to calculate the thickness of a target material) with the results of a basis material subtraction technique to approximate the target's effective atomic number and density. This has been achieved in preliminary experiments with a novel spatially interleaved dual-energy sensor which reduces the number of scintillation elements required by 50% in comparison to conventional sensor configurations.

  13. In vivo deep tissue fluorescence imaging of the murine small intestine and colon

    NASA Astrophysics Data System (ADS)

    Crosignani, Viera; Dvornikov, Alexander; Aguilar, Jose S.; Stringari, Chiara; Edwards, Roberts; Mantulin, Williams; Gratton, Enrico

    2012-03-01

Recently we described a novel technical approach with enhanced fluorescence detection capabilities in two-photon microscopy that achieves deep tissue imaging while maintaining micron resolution. This technique was applied to in vivo imaging of the murine small intestine and colon. Individuals with Inflammatory Bowel Disease (IBD), commonly presenting as Crohn's disease or ulcerative colitis, are at increased risk for developing colorectal cancer. We have developed a Giα2 gene knock-out mouse IBD model that develops colitis and colon cancer. The challenge is to study the disease in the whole animal while maintaining high resolution imaging at millimeter depth. In the Giα2-/- mice, we have been successful in imaging Lgr5-GFP positive stem cell reporters, which are found in crypts of niche structures as well as deeper structures, in the small intestine and colon at depths greater than 1 mm. In parallel with these in vivo deep tissue imaging experiments, we have also pursued autofluorescence FLIM imaging of the colon and small intestine, at shallower depths (roughly 160 μm), on commercial two-photon microscopes, with excellent structural correlation (in overlapping tissue regions) between the different technologies.

  14. Analysis of Rapid Multi-Focal Zone ARFI Imaging

    PubMed Central

    Rosenzweig, Stephen; Palmeri, Mark; Nightingale, Kathryn

    2015-01-01

    Acoustic radiation force impulse (ARFI) imaging has shown promise for visualizing structure and pathology within multiple organs; however, because the contrast depends on the push beam excitation width, image quality suffers outside of the region of excitation. Multi-focal zone ARFI imaging has previously been used to extend the region of excitation (ROE), but the increased acquisition duration and acoustic exposure have limited its utility. Supersonic shear wave imaging has previously demonstrated that through technological improvements in ultrasound scanners and power supplies, it is possible to rapidly push at multiple locations prior to tracking displacements, facilitating extended depth of field shear wave sources. Similarly, ARFI imaging can utilize these same radiation force excitations to achieve tight pushing beams with a large depth of field. Finite element method simulations and experimental data are presented demonstrating that single- and rapid multi-focal zone ARFI have comparable image quality (less than 20% loss in contrast), but the multi-focal zone approach has an extended axial region of excitation. Additionally, as compared to single push sequences, the rapid multi-focal zone acquisitions improve the contrast to noise ratio by up to 40% in an example 4 mm diameter lesion. PMID:25643078

  15. In vivo rat deep brain imaging using photoacoustic computed tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lin, Li; Li, Lei; Zhu, Liren; Hu, Peng; Wang, Lihong V.

    2017-03-01

The brain has been likened to a great stretch of unknown territory consisting of a number of unexplored continents. Small animal brain imaging plays an important role in charting that territory. By using 1064 nm illumination from the side, we imaged the full coronal depth of rat brains in vivo. The experiment was performed using a real-time full-ring-array photoacoustic computed tomography (PACT) imaging system, which achieved an imaging depth of 11 mm and a 100 μm radial resolution. Because of the fast imaging speed of the full-ring-array PACT system, no animal motion artifacts were induced. The frame rate of the system was limited by the laser repetition rate (50 Hz). In addition to anatomical imaging of the blood vessels in the brain, we continuously monitored correlations between the two brain hemispheres in one of the coronal planes. The resting states in the coronal plane were measured before and after stroke ligation surgery at a neck artery.

  16. [Research and realization of signal processing algorithms based on FPGA in digital ophthalmic ultrasonography imaging].

    PubMed

    Fang, Simin; Zhou, Sheng; Wang, Xiaochun; Ye, Qingsheng; Tian, Ling; Ji, Jianjun; Wang, Yanqun

    2015-01-01

To design and improve the signal processing algorithms of FPGA-based ophthalmic ultrasonography. We implemented three signal processing modules, a fully parallel distributed dynamic filter, digital quadrature demodulation, and logarithmic compression, using the Verilog HDL hardware description language in Quartus II. Compared to the original system, the hardware cost is reduced, the whole image is clearer, more information about the deep eyeball is contained in the image, and the depth of detection increases from 5 cm to 6 cm. The new algorithms meet the design requirements and achieve the intended system optimization: they effectively improve the image quality of existing equipment.
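Two of the modules above, digital quadrature demodulation and logarithmic compression, are standard ultrasound processing steps and can be prototyped in software before committing to Verilog. A Python sketch of the signal path (the carrier frequency, sampling rate, and dynamic range are assumed values, not from the paper):

```python
import numpy as np

def quadrature_demod(rf, fs, fc):
    """Digital quadrature demodulation: mix the RF line with cos/sin at
    carrier fc, low-pass with a crude moving average, take the envelope."""
    t = np.arange(len(rf)) / fs
    i = rf * np.cos(2 * np.pi * fc * t)
    q = -rf * np.sin(2 * np.pi * fc * t)
    k = int(fs // fc)                    # average over ~one carrier period
    w = np.ones(k) / k
    I = np.convolve(i, w, mode="same")
    Q = np.convolve(q, w, mode="same")
    return 2 * np.sqrt(I**2 + Q**2)      # envelope (factor 2 restores amplitude)

def log_compress(env, dyn_db=60):
    """Logarithmic compression: map the envelope into a dyn_db dynamic
    range, normalized to [0, 1] for display."""
    db = 20 * np.log10(env / env.max() + 1e-12)
    return np.clip((db + dyn_db) / dyn_db, 0, 1)

fs, fc = 100e6, 10e6                     # assumed sampling and probe frequencies
t = np.arange(2048) / fs
rf = np.exp(-((t - 1e-5) ** 2) / (2e-6) ** 2) * np.sin(2 * np.pi * fc * t)
line = log_compress(quadrature_demod(rf, fs, fc))
print(int(np.argmax(line)))             # echo peak lands near sample 1000
```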

  17. Focal depth measurement of scanning helium ion microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Hongxuan, E-mail: Guo.hongxuan@nims.go.jp; Itoh, Hiroshi; Wang, Chunmei

    2014-07-14

When facing the challenges of critical dimension measurement of complicated nanostructures, such as three-dimensional integrated circuits, characterization of the focal depth of microscopes is important. In this Letter, we developed a method for characterizing the focal depth of a scanning helium ion microscope (HIM) by using an atomic force microscope tip characterizer (ATC). The ATC was tilted in a sample chamber at an angle to the scanning plane. Secondary electron images (SEIs) were obtained at different positions of the ATC. The edge resolution of the SEIs shows the nominal diameters of the helium ion beam at different focal levels. With this method, the nominal shapes of the helium ion beams were obtained with different apertures. Our results show that a small aperture is necessary to get high spatial resolution and high depth-of-field images with HIM. This work provides a method for characterizing and improving the performance of HIM.

  18. Focal depth measurement of scanning helium ion microscope

    NASA Astrophysics Data System (ADS)

    Guo, Hongxuan; Itoh, Hiroshi; Wang, Chunmei; Zhang, Han; Fujita, Daisuke

    2014-07-01

When facing the challenges of critical dimension measurement of complicated nanostructures, such as three-dimensional integrated circuits, characterization of the focal depth of microscopes is important. In this Letter, we developed a method for characterizing the focal depth of a scanning helium ion microscope (HIM) by using an atomic force microscope tip characterizer (ATC). The ATC was tilted in a sample chamber at an angle to the scanning plane. Secondary electron images (SEIs) were obtained at different positions of the ATC. The edge resolution of the SEIs shows the nominal diameters of the helium ion beam at different focal levels. With this method, the nominal shapes of the helium ion beams were obtained with different apertures. Our results show that a small aperture is necessary to get high spatial resolution and high depth-of-field images with HIM. This work provides a method for characterizing and improving the performance of HIM.

  19. Spatiotemporal Characteristics for the Depth from Luminance Contrast

    PubMed Central

    Matsubara, Kazuya; Matsumiya, Kazumichi; Shioiri, Satoshi; Takahashi, Shuichi; Hyodo, Yasuhide; Ohashi, Isao

    2011-01-01

Images with higher luminance contrast tend to be perceived as closer in depth. To investigate the spatiotemporal characteristics of this effect, we evaluated the subjective depth of a test stimulus with various spatial and temporal frequencies. For this purpose, the depth of a reference stimulus was matched to that of the test stimulus by changing the binocular disparity. The results showed that the test stimulus was perceived as closer with higher luminance contrast under all conditions. Contrast efficiency was obtained from the contrast that provided the subjective depth for each spatiotemporal frequency. The shape of the contrast efficiency function was spatially low-pass and temporally band-pass. This characteristic differs from the one measured for a detection task, which suggests that only a subset of contrast signals is used for depth from contrast.

  20. Evaluation of the Normal Cochlear Second Interscalar Ridge Angle and Depth on 3D T2-Weighted Images: A Tool for the Diagnosis of Scala Communis and Incomplete Partition Type II.

    PubMed

    Booth, T N; Wick, C; Clarke, R; Kutz, J W; Medina, M; Gorsage, D; Xi, Y; Isaacson, B

    2018-05-01

Cochlear malformations may be subtle on imaging studies. The purpose of this study was to evaluate the angle and depth of the lateral second interscalar ridge, or notch, in ears without sensorineural hearing loss (normal ears) and compare them with ears that have a documented incomplete partition type II malformation. The second interscalar ridge notch angle and depth were measured on MR imaging in normal ears by a single experienced neuroradiologist. The images of normal and incomplete partition II malformation ears were then randomly mixed for 2 novice evaluators to measure both the second interscalar ridge notch angle and depth in a blinded manner. For the mixed group, interobserver agreement was calculated, normal and abnormal ear measurements were compared, and receiver operating characteristic curves were generated. The 94 normal ears had a mean second interscalar ridge angle of 80.86° ± 11.4° and depth of 0.54 ± 0.14 mm, with the 98th percentile at an angle of 101° and a depth of 0.3 mm. In the mixed group, agreement between the 2 readers was excellent, with significant differences in angle and depth found between normal and incomplete partition type II ears on average (P < .001). Receiver operating characteristic cutoffs for delineating normal from abnormal ears were similar for both readers (depth, 0.31/0.34 mm; angle, 114°/104°). A measured angle of >114° and a depth of the second interscalar ridge notch of ≤0.31 mm suggest the diagnosis of incomplete partition type II malformation and scala communis. These measurements can be accurately made by novice readers. © 2018 by American Journal of Neuroradiology.
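The reported cutoffs amount to a simple decision rule; this sketch just encodes the thresholds quoted above (the function name and the example measurements are illustrative):

```python
def suggests_ip2(angle_deg: float, depth_mm: float) -> bool:
    """Apply the ROC-derived cutoffs: a second interscalar ridge notch
    angle > 114 degrees AND depth <= 0.31 mm suggest incomplete
    partition type II / scala communis."""
    return angle_deg > 114 and depth_mm <= 0.31

print(suggests_ip2(80.86, 0.54))   # False: mean normal-ear measurements
print(suggests_ip2(120.0, 0.25))   # True: wide, shallow notch
```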

  1. 110 °C range athermalization of wavefront coding infrared imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong

    2017-09-01

Athermalization over a 110 °C temperature range is important but difficult when designing infrared imaging systems. Our wavefront coding athermalized infrared imaging system adopts an optical phase mask with smaller manufacturing errors and a decoding method based on a shrinkage function. Qualitative experiments demonstrate that our wavefront coding athermalized infrared imaging system has three prominent merits: (1) it works well over a temperature range of 110 °C; (2) it extends the focal depth up to 15.2 times; (3) it achieves a decoded image approximating its corresponding in-focus infrared image, with a mean structural similarity index (MSSIM) value greater than 0.85.

  2. Imaging Mass Spectrometry on the Nanoscale with Cluster Ion Beams

    PubMed Central

    2015-01-01

    Imaging with cluster secondary ion mass spectrometry (SIMS) is reaching a mature level of development. Using a variety of molecular ion projectiles to stimulate desorption, 3-dimensional imaging with the selectivity of mass spectrometry can now be achieved with submicrometer spatial resolution and <10 nm depth resolution. In this Perspective, stock is taken regarding what it will require to routinely achieve these remarkable properties. Issues include the chemical nature of the projectile, topography formation, differential erosion rates, and perhaps most importantly, ionization efficiency. Shortcomings of existing instrumentation are also noted. Speculation about how to successfully resolve these issues is a key part of the discussion. PMID:25458665

  3. Visual Depth from Motion Parallax and Eye Pursuit

    PubMed Central

    Stroyan, Keith; Nawrot, Mark

    2012-01-01

A translating observer viewing a rigid environment experiences “motion parallax,” the relative movement upon the observer’s retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment; however, retinal motion alone cannot mathematically determine the relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In (Nawrot & Stroyan, 2009, Vision Res. 49, p. 1969) we showed mathematically that the ratio of the rate of retinal motion over the rate of smooth eye pursuit determines depth relative to the fixation point in central vision. We also reported on psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case in which objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If time-varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what it is mathematically possible to derive about depth and structure. Our analysis shows that the motion/pursuit ratio determines an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation for analyzing the dynamic geometry of future experiments. PMID:21695531
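The geometry behind the motion/pursuit ratio can be sketched for the simplest case: lateral translation at speed v, fixation at distance f, and a second point at distance f + d (a simplified central-vision illustration, not the paper's full planar analysis):

```latex
% Pursuit rate (angular velocity of the fixated point) and the relative
% retinal motion of the second point:
\frac{d\alpha}{dt} = \frac{v}{f},
\qquad
\frac{d\theta}{dt} = \frac{v}{f} - \frac{v}{f+d} = \frac{v\,d}{f\,(f+d)} .
% Their ratio cancels the unknown translation speed v and recovers
% depth relative to fixation:
r = \frac{d\theta/dt}{d\alpha/dt} = \frac{d}{f+d}
\quad\Longrightarrow\quad
\frac{d}{f} = \frac{r}{1-r} \approx r \ \text{for small } r .
```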

  4. Evaluation of the lamina cribrosa in patients with diabetes mellitus using enhanced depth imaging spectral-domain optical coherence tomography.

    PubMed

    Akkaya, Serkan; Küçük, Bekir; Doğan, Hatice Karaköse; Can, Ertuğrul

    2018-06-01

    To compare the lamina cribrosa thickness and anterior lamina cribrosa depth between patients with and without diabetes mellitus and to investigate the effect of metabolic control and duration of diabetes mellitus on lamina cribrosa thickness and anterior lamina cribrosa depth using enhanced depth imaging spectral-domain optical coherence tomography. A total of 70 patients were enrolled in this cross-sectional study and were divided into the diabetes and control groups. Intraocular pressure, circumpapillary retinal nerve fibre layer thickness, anterior lamina cribrosa depth and lamina cribrosa thickness were compared between the groups. In the control group, the mean intraocular pressure was 14.6 ± 3.1 (mean ± standard deviation) mmHg, mean circumpapillary retinal nerve fibre layer thickness was 105.41 ± 5.86 μm, mean anterior lamina cribrosa depth was 420.3 ± 90.2 μm and mean lamina cribrosa thickness was 248.5 ± 5.4 μm. In the diabetes group, the mean intraocular pressure was 13.9 ± 2.2 mmHg, mean circumpapillary retinal nerve fibre layer thickness was 101.37 ± 10.97 μm, mean anterior lamina cribrosa depth was 351.4 ± 58.6 μm and mean lamina cribrosa thickness was 271.6 ± 33.9 μm. Lamina cribrosa thickness was significantly higher (p < 0.001) and anterior lamina cribrosa depth was significantly lower (p = 0.003) in the diabetes group. There was no statistical difference between the groups with regard to age, spherical equivalent, axial length, circumpapillary retinal nerve fibre layer thickness and intraocular pressure (p = 0.69, 0.26, 0.47, 0.06 and 0.46, respectively). Lamina cribrosa thickness and anterior lamina cribrosa depth were not significantly correlated with duration of diabetes mellitus (lamina cribrosa thickness: r = -0.078, p = 0.643; anterior lamina cribrosa depth: r = -0.062, p = 0.710) or HbA1c levels (lamina cribrosa thickness: r = -0.078, p

  5. Phase pupil functions for focal-depth enhancement derived from a Wigner distribution function.

    PubMed

    Zalvidea, D; Sicre, E E

    1998-06-10

    A method for obtaining phase-retardation functions that give rise to an increase in the image focal depth is proposed. To this end, the Wigner distribution function corresponding to a specific aperture with an associated small depth of focus in image space is conveniently sheared in the phase-space domain to generate a new Wigner distribution function. From this new function, a more uniform on-axis image irradiance can be obtained. The approach is illustrated by comparing the imaging performance of the derived phase function with that of a previously reported logarithmic phase distribution.

  6. X-ray Radiation-Controlled NO-Release for On-Demand Depth-Independent Hypoxic Radiosensitization.

    PubMed

    Fan, Wenpei; Bu, Wenbo; Zhang, Zhen; Shen, Bo; Zhang, Hui; He, Qianjun; Ni, Dalong; Cui, Zhaowen; Zhao, Kuaile; Bu, Jiwen; Du, Jiulin; Liu, Jianan; Shi, Jianlin

    2015-11-16

    Multifunctional stimuli-responsive nanotheranostic systems are highly desirable for realizing simultaneous biomedical imaging and on-demand therapy with minimized adverse effects. Herein, we present the construction of an intelligent X-ray-controlled NO-releasing upconversion nanotheranostic system (termed PEG-USMSs-SNO) by engineering UCNPs with S-nitrosothiol (R-SNO)-grafted mesoporous silica. The PEG-USMSs-SNO is designed to respond sensitively to X-ray radiation by breaking the S-N bond of SNO to release NO, which enables X-ray dose-controlled NO release for on-demand hypoxic radiosensitization in addition to upconversion luminescence imaging through UCNPs in vitro and in vivo. Thanks to the high live-body permeability of X-rays, our PEG-USMSs-SNO may provide a new technique for achieving depth-independent controlled NO release and localized radiotherapy enhancement against deep-seated solid tumors. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Optical coherence microscope for invariant high resolution in vivo skin imaging

    NASA Astrophysics Data System (ADS)

    Murali, S.; Lee, K. S.; Meemon, P.; Rolland, J. P.

    2008-02-01

    A non-invasive, reliable and affordable imaging system capable of detecting skin pathologies such as skin cancer would be a valuable tool for pre-screening and diagnostic applications. Optical Coherence Microscopy (OCM) is emerging as a building block for in vivo optical diagnosis, in which high numerical aperture optics is introduced in the sample arm to achieve high lateral resolution. While high numerical aperture optics enables high lateral resolution at the focal point, dynamic focusing is required to maintain the target lateral resolution throughout the depth of the sample being imaged. In this paper, we demonstrate the ability to dynamically focus in real time, with no moving parts, to a depth of up to 2 mm in skin-equivalent tissue, achieving 3.5 μm lateral resolution throughout an 8 mm³ sample. The built-in dynamic focusing is provided by an addressable liquid lens embedded in custom-designed optics, designed for a broadband laser source of 120 nm bandwidth centered around 800 nm. The imaging probe was designed to be low-cost and portable. Design evaluation and tolerance analysis results show that the probe is robust to manufacturing errors and delivers consistently high performance throughout the imaging volume.

  8. Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement

    PubMed Central

    Hu, Bo; Knill, David C.

    2012-01-01

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer’s retinal image. We investigated how humans use visual feedback about finger depth, provided by binocular and monocular depth cues, to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical for effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and the inherent ambiguity of cast shadows can explain the difference in response time in the binocular conditions and the lack of response in the monocular conditions. PMID:21724567

  9. Depth-varying density and organization of chondrocytes in immature and mature bovine articular cartilage assessed by 3D imaging and analysis.

    PubMed

    Jadin, Kyle D; Wong, Benjamin L; Bae, Won C; Li, Kelvin W; Williamson, Amanda K; Schumacher, Barbara L; Price, Jeffrey H; Sah, Robert L

    2005-09-01

    Articular cartilage is a heterogeneous tissue, with cell density and organization varying with depth from the surface. The objectives of the present study were to establish a method for localizing individual cells in three-dimensional (3D) images of cartilage and quantifying depth-associated variation in cellularity and cell organization at different stages of growth. Accuracy of nucleus localization was high, with 99% sensitivity relative to manual localization. Cellularity (million cells per cm³) decreased from 290, 310, and 150 near the articular surface in fetal, calf, and adult samples, respectively, to 120, 110, and 50 at a depth of 1.0 mm. The distance/angle to the nearest neighboring cell was 7.9 μm/31°, 7.1 μm/31°, and 9.1 μm/31° for cells at the articular surface of fetal, calf, and adult samples, respectively, and increased/decreased to 11.6 μm/31°, 12.0 μm/30°, and 19.2 μm/25° at a depth of 0.7 mm. The methodologies described here may be useful for analyzing the 3D cellular organization of cartilage during growth, maturation, aging, degeneration, and regeneration.
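    The nearest-neighbour distance statistics reported above can be computed once nuclei are localized in 3D. A minimal sketch of the distance part, using synthetic centroids rather than the study's image data:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(50, 3))   # synthetic nuclei centroids (um)

# all pairwise Euclidean distances; mask the diagonal so a cell
# is not counted as its own nearest neighbour
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)

nn_idx = d.argmin(axis=1)       # index of each cell's nearest neighbour
nn_dist = d.min(axis=1)         # distance to that neighbour (um)
mean_nn = nn_dist.mean()        # summary statistic, cf. the depth profiles above
```

    For the cell counts in a real dataset, a k-d tree query would replace the O(n²) distance matrix, and the neighbour angle would be computed from the displacement vector to `nn_idx`.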

  10. Depth-varying density and organization of chondrocytes in immature and mature bovine articular cartilage assessed by 3D imaging and analysis

    NASA Technical Reports Server (NTRS)

    Jadin, Kyle D.; Wong, Benjamin L.; Bae, Won C.; Li, Kelvin W.; Williamson, Amanda K.; Schumacher, Barbara L.; Price, Jeffrey H.; Sah, Robert L.

    2005-01-01

    Articular cartilage is a heterogeneous tissue, with cell density and organization varying with depth from the surface. The objectives of the present study were to establish a method for localizing individual cells in three-dimensional (3D) images of cartilage and quantifying depth-associated variation in cellularity and cell organization at different stages of growth. Accuracy of nucleus localization was high, with 99% sensitivity relative to manual localization. Cellularity (million cells per cm³) decreased from 290, 310, and 150 near the articular surface in fetal, calf, and adult samples, respectively, to 120, 110, and 50 at a depth of 1.0 mm. The distance/angle to the nearest neighboring cell was 7.9 μm/31°, 7.1 μm/31°, and 9.1 μm/31° for cells at the articular surface of fetal, calf, and adult samples, respectively, and increased/decreased to 11.6 μm/31°, 12.0 μm/30°, and 19.2 μm/25° at a depth of 0.7 mm. The methodologies described here may be useful for analyzing the 3D cellular organization of cartilage during growth, maturation, aging, degeneration, and regeneration.

  11. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, for 3D data migration, the resource requirements of the algorithm grow with data size. This challenges its practical implementation even on current-generation high performance computing systems, so a smart parallelization approach is essential for migrating 3D data. The most compute-intensive part of Kirchhoff depth migration is the calculation of traveltime tables, owing to its memory/storage and I/O requirements. In the current research work, we target this area and develop a competent parallel algorithm for poststack and prestack 3D Kirchhoff depth migration using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations for depth-migrating data in a parallel imaging space, using optimized traveltime table computations. This concept makes the algorithm flexible by migrating data in a number of depth iterations that depends on the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes storage, I/O and inter-node communication requirements, making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM-series supercomputer. Optimization, performance and scalability results, along with the migration outcome, show the effectiveness of the parallel algorithm.
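    The flexi-depth idea of sizing depth iterations to the available node memory can be sketched as follows. The slice sizes and memory figures are hypothetical, not taken from the paper; only the partitioning logic is illustrated.

```python
import math

def plan_depth_iterations(n_depth_slices, bytes_per_slice, node_mem_bytes):
    """Split the imaging depth range into iterations that fit node memory,
    in the spirit of the flexi-depth scheme (all sizes hypothetical)."""
    slices_per_iter = max(1, node_mem_bytes // bytes_per_slice)
    n_iter = math.ceil(n_depth_slices / slices_per_iter)
    # half-open (start, end) slice ranges covering the full depth extent
    return [(k * slices_per_iter,
             min((k + 1) * slices_per_iter, n_depth_slices))
            for k in range(n_iter)]

bounds = plan_depth_iterations(n_depth_slices=1000,
                               bytes_per_slice=3 * 2**30,   # 3 GiB per depth slice
                               node_mem_bytes=64 * 2**30)   # 64 GiB per node
```

    Each iteration would then migrate only the traveltime-table and image data for its own depth window, which is what bounds the per-node storage, I/O and communication.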

  12. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    PubMed

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensor technology have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and for generating life logs. The life-logging system is divided into two processes. First, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Second, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared with conventional approaches. Experiments conducted on smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems of elderly people, or examining the indoor activities of people at home, in the office or in hospital.
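    Recognition with per-activity Hidden Markov Models, as described above, typically scores a feature sequence under each trained model and picks the most likely activity. A minimal discrete-HMM sketch with toy hand-set parameters; the paper's skeleton features and trained models are not reproduced here.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()               # per-step scaling avoids underflow
        loglik += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return loglik + np.log(alpha.sum())

# two toy 2-state activity models over 3 quantized pose symbols
pi = np.array([0.6, 0.4])
A_walk = np.array([[0.9, 0.1], [0.2, 0.8]])
B_walk = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])
A_sit  = np.array([[0.5, 0.5], [0.5, 0.5]])
B_sit  = np.array([[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]])

seq = [0, 0, 2, 2, 0]                 # hypothetical quantized skeleton features
scores = {"walk": forward_loglik(seq, pi, A_walk, B_walk),
          "sit":  forward_loglik(seq, pi, A_sit, B_sit)}
best = max(scores, key=scores.get)    # recognition = maximum-likelihood model
```

    In a full system the models would be trained with Baum-Welch on the extracted silhouette/skeleton features, but the recognition step reduces to exactly this per-model likelihood comparison.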

  13. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    PubMed Central

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-01-01

    Recent advancements in depth video sensor technology have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and for generating life logs. The life-logging system is divided into two processes. First, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Second, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared with conventional approaches. Experiments conducted on smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems of elderly people, or examining the indoor activities of people at home, in the office or in hospital. PMID:24991942

  14. The implementation of depth measurement and related algorithms based on binocular vision in embedded AM5728

    NASA Astrophysics Data System (ADS)

    Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan

    2018-01-01

    Depth measurement is the most basic measurement in various machine vision applications, such as automatic driving, unmanned aerial vehicles (UAVs) and robots, and it has a wide range of uses. With the development of image processing technology and improvements in hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms for dual camera calibration, image matching and depth calculation have been studied and implemented on this platform, and the hardware design and the related algorithms of the system have been tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can be up to 25 fps. The experimental results show that the optimal measurement range of the system is 0.5 to 1.5 m, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11 and DMCU hardware platforms, the embedded AM5728 hardware meets real-time depth measurement requirements while maintaining image resolution.
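    The depth calculation step of such a binocular system ultimately rests on the pinhole stereo relation Z = fB/d. A minimal sketch with hypothetical rig parameters (the paper's calibration values are not given in the abstract):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: Z = f * B / d, with disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# hypothetical rig: 700 px focal length, 60 mm baseline
Z = depth_from_disparity(disparity_px=42.0, focal_px=700.0, baseline_m=0.06)

# for a fixed disparity error, the depth error grows with range, which is
# one reason such rigs have an optimal near-field working range
dZ = depth_from_disparity(41.0, 700.0, 0.06) - Z   # effect of a 1 px error
```

    Because Z is inversely proportional to disparity, matching errors of a fixed pixel size cost progressively more depth accuracy at longer ranges, consistent with the 0.5-1.5 m optimal range reported above.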

  15. Optical-domain subsampling for data efficient depth ranging in Fourier-domain optical coherence tomography

    PubMed Central

    Siddiqui, Meena; Vakoc, Benjamin J.

    2012-01-01

    Recent advances in optical coherence tomography (OCT) have led to higher-speed sources that support imaging over longer depth ranges. Limitations in the bandwidth of state-of-the-art acquisition electronics, however, prevent adoption of these advances in clinical applications. Here, we introduce optical-domain subsampling as a method for imaging at high speed and over extended depth ranges, but with a lower acquisition bandwidth than that required by conventional approaches. Optically subsampled laser sources use a discrete set of wavelengths to alias fringe signals along an extended depth range into a bandwidth-limited frequency window. By detecting the complex fringe signals, and under the assumption of a depth-constrained signal, optical-domain subsampling enables recovery of the depth-resolved scattering signal without overlapping artifacts from this bandwidth-limited window. We highlight key principles behind optical-domain subsampled imaging and demonstrate them experimentally using a polygon-filter-based swept-source laser that includes an intra-cavity Fabry-Perot (FP) etalon. PMID:23038343
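    The wrapping of an extended depth range into a bandwidth-limited window can be demonstrated numerically: with complex fringe detection, keeping every R-th wavenumber sample maps a reflector at depth bin z to bin z mod (N/R). A small NumPy sketch of this aliasing principle (an idealized single reflector, not the optical implementation):

```python
import numpy as np

n_full, n_sub = 1024, 128              # full vs. subsampled wavenumber samples
z = np.zeros(n_full)
z[300] = 1.0                           # single reflector at depth bin 300

fringe = np.fft.ifft(z)                # complex fringe across all wavenumbers
sub = fringe[:: n_full // n_sub]       # keep every 8th wavelength (subsampling)
depth = np.abs(np.fft.fft(sub))        # depth profile within the baseband window
peak = int(depth.argmax())             # reflector aliases to 300 mod 128 = 44
```

    As long as the sample's depth extent fits within one 128-bin window (the depth-constrained assumption above), the wrapped position is unambiguous and the full range is recovered with an 8x lower acquisition bandwidth.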

  16. Spectrally resolved chromatic confocal interferometry for one-shot nano-scale surface profilometry with several tens of micrometric depth range

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Chen, Yi-Shiuan; Chang, Yi-Wei; Lin, Shyh-Tsong; Yeh, Sheng Lih

    2013-01-01

    In this research, a new nano-scale measurement methodology based on spectrally resolved chromatic confocal interferometry (SRCCI) was developed for microscopic three-dimensional surface profilometry by integrating chromatic confocal sectioning with spectrally resolved white-light interferometry (SRWLI). The proposed chromatic confocal method (CCM), using broadband white light in combination with a specially designed chromatic dispersion objective, is capable of simultaneously acquiring multiple images over a large range of object depths, performing 3-D surface reconstruction from a single image shot without vertical scanning and correspondingly achieving a measurement depth range of up to hundreds of micrometers. A Linnik-type interferometric configuration based on spectrally resolved white-light interferometry is developed and integrated with the CCM to simultaneously achieve nanoscale axial resolution at the detection point. The white-light interferograms acquired at the exit plane of the spectrometer possess a continuous variation of wavelength along the chromaticity axis, in which the light intensity reaches its peak when the optical path difference between the two optical arms equals zero. To examine the measurement accuracy of the developed system, a pre-calibrated step-height target with a total step height of 10.10 μm was measured. The experimental result shows that the maximum measurement error was less than 0.3% of the overall measured height.

  17. Improved image processing of road pavement defect by infrared thermography

    NASA Astrophysics Data System (ADS)

    Sim, Jun-Gi

    2018-03-01

    This paper aims to achieve improved image processing for the clear identification of defects in damaged road pavement structures using infrared thermography non-destructive testing (NDT). To that end, four types of pavement specimens containing internal defects were fabricated, and results were obtained by heating the specimens with natural light. The results showed that defects located down to a depth of 3 cm could be detected by infrared thermography NDT using the improved image processing method.

  18. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    PubMed

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques described in this paper. The bas-relief style is controlled by choosing a parameter and setting a targeted height. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Unlike previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tool for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.
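    The core of such normal-image-based relief generation is a linear system that makes the depth field's gradients match those implied by the normals. A minimal least-squares sketch on a tiny grid, without the depth-compression and style-control terms of the paper (the dense matrix here stands in for the sparse system):

```python
import numpy as np

def depth_from_normals(nx, ny, nz):
    """Least-squares depth from a normal image: forward differences of z
    are constrained to the gradients p = -nx/nz, q = -ny/nz implied by n."""
    h, w = nz.shape
    p, q = -nx / nz, -ny / nz
    rows, cols, vals, b = [], [], [], []
    idx = lambda i, j: i * w + j
    r = 0
    for i in range(h):                 # dz/dx constraints
        for j in range(w - 1):
            rows += [r, r]; cols += [idx(i, j + 1), idx(i, j)]
            vals += [1.0, -1.0]; b.append(p[i, j]); r += 1
    for i in range(h - 1):             # dz/dy constraints
        for j in range(w):
            rows += [r, r]; cols += [idx(i + 1, j), idx(i, j)]
            vals += [1.0, -1.0]; b.append(q[i, j]); r += 1
    A = np.zeros((r, h * w))
    A[rows, cols] = vals               # a real system would use sparse storage
    zvec, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    zfield = zvec.reshape(h, w)
    return zfield - zfield.mean()      # depth is only determined up to a constant

# sanity check on a tilted plane z = 0.2x + 0.3y (constant normals)
h = w = 4
z = depth_from_normals(np.full((h, w), -0.2),
                       np.full((h, w), -0.3),
                       np.ones((h, w)))
```

    The paper's stylization would add weighted terms to this same system (for example, to compress large depths while keeping fine gradient detail), which is why modeling and stylization can be solved in one pass.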

  19. Depth perception not found in human observers for static or dynamic anti-correlated random dot stereograms.

    PubMed

    Hibbard, Paul B; Scott-Brown, Kenneth C; Haigh, Emma C; Adrain, Melanie

    2014-01-01

    One of the greatest challenges in visual neuroscience is that of linking neural activity with perceptual experience. In the case of binocular depth perception, important insights have been achieved through comparing neural responses and the perception of depth, for carefully selected stimuli. One of the most important types of stimulus that has been used here is the anti-correlated random dot stereogram (ACRDS). In these stimuli, the contrast polarity of one half of a stereoscopic image is reversed. While neurons in cortical area V1 respond reliably to the binocular disparities in ACRDS, they do not create a sensation of depth. This discrepancy has been used to argue that depth perception must rely on neural activity elsewhere in the brain. Currently, the psychophysical results on which this argument rests are not clear-cut. While it is generally assumed that ACRDS do not support the perception of depth, some studies have reported that some people, some of the time, perceive depth in some types of these stimuli. Given the importance of these results for understanding the neural correlates of stereopsis, we studied depth perception in ACRDS using a large number of observers, in order to provide an unambiguous conclusion about the extent to which these stimuli support the perception of depth. We presented observers with random dot stereograms in which correlated dots were presented in a surrounding annulus and correlated or anti-correlated dots were presented in a central circular region. While observers could reliably report the depth of the central region for correlated stimuli, we found no evidence for depth perception in static or dynamic anti-correlated stimuli. Confidence ratings for stereoscopic perception were uniformly low for anti-correlated stimuli, but showed normal variation with disparity for correlated stimuli. These results establish that the inability of observers to perceive depth in ACRDS is a robust phenomenon.

  20. Depth Perception Not Found in Human Observers for Static or Dynamic Anti-Correlated Random Dot Stereograms

    PubMed Central

    Hibbard, Paul B.; Scott-Brown, Kenneth C.; Haigh, Emma C.; Adrain, Melanie

    2014-01-01

    One of the greatest challenges in visual neuroscience is that of linking neural activity with perceptual experience. In the case of binocular depth perception, important insights have been achieved through comparing neural responses and the perception of depth, for carefully selected stimuli. One of the most important types of stimulus that has been used here is the anti-correlated random dot stereogram (ACRDS). In these stimuli, the contrast polarity of one half of a stereoscopic image is reversed. While neurons in cortical area V1 respond reliably to the binocular disparities in ACRDS, they do not create a sensation of depth. This discrepancy has been used to argue that depth perception must rely on neural activity elsewhere in the brain. Currently, the psychophysical results on which this argument rests are not clear-cut. While it is generally assumed that ACRDS do not support the perception of depth, some studies have reported that some people, some of the time, perceive depth in some types of these stimuli. Given the importance of these results for understanding the neural correlates of stereopsis, we studied depth perception in ACRDS using a large number of observers, in order to provide an unambiguous conclusion about the extent to which these stimuli support the perception of depth. We presented observers with random dot stereograms in which correlated dots were presented in a surrounding annulus and correlated or anti-correlated dots were presented in a central circular region. While observers could reliably report the depth of the central region for correlated stimuli, we found no evidence for depth perception in static or dynamic anti-correlated stimuli. Confidence ratings for stereoscopic perception were uniformly low for anti-correlated stimuli, but showed normal variation with disparity for correlated stimuli. These results establish that the inability of observers to perceive depth in ACRDS is a robust phenomenon. PMID:24416195

  1. Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0.

    PubMed

    Jin, Xin; Liu, Li; Chen, Yanqin; Dai, Qionghai

    2017-05-01

    This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. The derivation of the PSF is based on the Fresnel diffraction equation and on image formation analysis of a self-built imaging system, which is divided into two sub-systems to reflect the relay imaging properties of plenoptic camera 2.0. The variations in the PSF caused by changes in object depth and sensor position are analyzed. A mathematical model of the FSPSF is further derived and verified to be depth-invariant. Experiments on real imaging systems demonstrate the consistency between the proposed PSF and actual imaging results.

  2. Depth Perception of Surgeons in Minimally Invasive Surgery.

    PubMed

    Bogdanova, Rositsa; Boulanger, Pierre; Zheng, Bin

    2016-10-01

    Minimally invasive surgery (MIS) poses visual challenges to surgeons. In MIS, binocular disparity is not freely available to surgeons, who must mentally rebuild the 3-dimensional (3D) patient anatomy from a limited number of monoscopic visual cues. The insufficient depth cues in the MIS environment can cause surgeons to misjudge spatial depth, which can lead to performance errors, jeopardizing patient safety. In this article, we first discuss natural human depth perception by exploring the main depth cues available to surgeons in open procedures. Subsequently, we reveal which depth cues are lost in MIS and how surgeons compensate for the incomplete depth presentation. Next, we further expand this knowledge by exploring available solutions for improving depth presentation to surgeons. Here we review innovative approaches (multiple 2D camera assembly, shadow introduction) and devices (3D monitors, head-mounted devices and auto-stereoscopic monitors) for 3D image presentation from the past few years. © The Author(s) 2016.

  3. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable the capture of directional light-ray information, allowing applications such as digital refocusing, depth estimation and multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves a color accuracy of ΔE < 0.01.
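    Color-measurement precision of this kind is quantified by a ΔE color difference in CIELAB space. A minimal sketch using the CIE76 formula; the paper may use a different ΔE variant, and the values below are hypothetical:

```python
def delta_e76(lab1, lab2):
    """CIE76 color difference between two CIELAB (L*, a*, b*) triples."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

measured = (50.0, 10.02, -5.01)    # hypothetical L*a*b* reading from the camera
reference = (50.0, 10.00, -5.00)   # hypothetical colorimeter ground truth
dE = delta_e76(measured, reference)
```

    A ΔE around 1 is commonly cited as the threshold of a just-noticeable color difference, so a measurement accuracy of ΔE < 0.01 is far below what an observer could perceive.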

  4. Femininity, Masculinity, and Body Image Issues among College-Age Women: An In-Depth and Written Interview Study of the Mind-Body Dichotomy

    ERIC Educational Resources Information Center

    Leavy, Patricia; Gnong, Andrea; Ross, Lauren Sardi

    2009-01-01

    In this article we investigate college-age women's body image issues in the context of dominant femininity and its polarization of the mind and body. We use original data collected through seven in-depth interviews and 32 qualitative written interviews with college-age women and men. We coded the data thematically applying feminist approaches to…

  5. Non-Cartesian Parallel Imaging Reconstruction

    PubMed Central

    Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole

    2014-01-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499

  6. The Athena Pancam and Color Microscopic Imager (CMI)

    NASA Technical Reports Server (NTRS)

    Bell, J. F., III; Herkenhoff, K. E.; Schwochert, M.; Morris, R. V.; Sullivan, R.

    2000-01-01

    The Athena Mars rover payload includes two primary science-grade imagers: Pancam, a multispectral, stereo, panoramic camera system, and the Color Microscopic Imager (CMI), a multispectral and variable depth-of-field microscope. Both of these instruments will help to achieve the primary Athena science goals by providing information on the geology, mineralogy, and climate history of the landing site. In addition, Pancam provides important support for rover navigation and target selection for Athena in situ investigations. Here we describe the science goals, instrument designs, and instrument performance of the Pancam and CMI investigations.

  7. Depth-resolved measurements with elliptically polarized reflectance spectroscopy

    PubMed Central

    Bailey, Maria J.; Sokolov, Konstantin

    2016-01-01

    The ability of elliptically polarized reflectance spectroscopy (EPRS) to detect spectroscopic alterations in tissue-mimicking phantoms and in biological tissue in situ is demonstrated. It is shown that there is a linear relationship between light penetration depth and ellipticity. This dependence is used to demonstrate the feasibility of depth-resolved spectroscopic imaging using EPRS. The advantages and drawbacks of EPRS in the evaluation of biological tissue are analyzed and discussed. PMID:27446712

  8. The suitability of lightfield camera depth maps for coordinate measurement applications

    NASA Astrophysics Data System (ADS)

    Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael

    2015-12-01

    Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two Lytro first-generation cameras were evaluated. In addition, a calibration process has been created for the Lytro cameras to deliver three-dimensional output depth maps represented in SI units (metre). The novel results show depth accuracy and repeatability of +10.0 mm to -20.0 mm, and 0.5 mm, respectively. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.
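
    The unit-calibration step described above, mapping the camera's relative grey-scale depth values onto metres, can be sketched as a least-squares fit against targets at known distances (all values below are hypothetical, not the paper's calibration data):

```python
import numpy as np

# Hypothetical calibration targets: relative Lytro depth-map values observed
# for flat targets placed at known distances (metres).
relative_depth = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
true_distance_m = np.array([0.20, 0.35, 0.50, 0.65, 0.80])

# Fit a linear map from the camera's relative depth scale to SI units.
slope, intercept = np.polyfit(relative_depth, true_distance_m, 1)

def to_metres(relative):
    """Convert a relative depth-map value to metres via the fitted map."""
    return slope * relative + intercept
```

    A real calibration would also have to model lens distortion and any non-linear depth response of the camera; a first-order fit is only a starting point.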

  9. Depth of composite polymerization within simulated root canals using light-transmitting posts.

    PubMed

    Lui, J L

    1994-01-01

    In this study, the depth of cure of composite resins cured within simulated root canals by means of light-transmitting plastic posts was compared to that achieved by the conventional light-curing method. Six sizes of posts with diameters of 1.05 mm, 1.20 mm, 1.35 mm, 1.50 mm, 1.65 mm, and 1.80 mm were investigated. In general, the larger the post diameter, the greater was the depth of cure. There were significant differences in the depth of cure between the control and all sizes of posts investigated. There were also significant differences between the various post diameters except for the 1.35 mm and 1.50 mm diameter posts. It was possible to achieve a depth of cure exceeding 11 mm using these light-transmitting posts.

  10. Measurement of Respiration Rate and Depth Through Difference in Temperature Between Skin Surface and Nostril by Using Thermal Image.

    PubMed

    Jeong, Hieyong; Matsuura, Yutaka; Ohno, Yuko

    2017-01-01

    The purpose of the present study was to propose a method to measure respiration rate (RR) and depth at once through the difference in temperature between the skin surface and nostril by using a thermal image. Although there are many devices for contact RR monitoring, subjects can be inconvenienced by having a sensing device in contact with their body. Our algorithm enabled us to construct a breathing periodic function (BPF) under non-contact and non-invasive conditions from temperature differences near the nostril during breathing. As a result, the proposed method was able to classify differences in breathing pattern between normal, deep, and shallow breaths (P < 0.001). These results lead us to conclude that RR and depth are measured simultaneously by the proposed BPF algorithm without any contact or invasive procedure.
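
    The abstract does not spell out how the BPF is processed; as a rough, hypothetical sketch, a nostril-ROI temperature trace already yields a rate (dominant spectral peak) and a depth (temperature swing). The sampling rate and waveform below are synthetic:

```python
import numpy as np

def respiration_rate_and_depth(temps, fs):
    """Estimate respiration rate (breaths/min) and depth (temperature swing)
    from a nostril-ROI temperature time series sampled at fs Hz."""
    x = temps - np.mean(temps)
    # Dominant frequency via FFT gives the respiration rate.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[0] = 0.0  # ignore any residual DC component
    rate_bpm = 60.0 * freqs[np.argmax(spectrum)]
    # Depth as the peak-to-peak temperature swing.
    depth = np.max(x) - np.min(x)
    return rate_bpm, depth

# Synthetic breathing: 15 breaths/min (0.25 Hz), 0.4 degC swing, 40 s at 10 Hz.
fs = 10.0
t = np.arange(0, 40, 1 / fs)
temps = 33.0 + 0.2 * np.sin(2 * np.pi * 0.25 * t)
rate, depth = respiration_rate_and_depth(temps, fs)
```

    Real data would need the ROI to be tracked across frames and the trace to be denoised before this step.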

  11. Optoacoustic imaging of tissue blanching during photodynamic therapy of esophageal cancer

    NASA Astrophysics Data System (ADS)

    Jacques, Steven L.; Viator, John A.; Paltauf, Guenther

    2000-05-01

    Esophageal cancer patients often present a highly inflamed esophagus at the time of treatment by photodynamic therapy. Immediately after treatment, the inflamed vessels have been shut down and the esophagus presents a white surface. Optoacoustic imaging via an optical fiber device can provide a depth profile of the blanching of inflammation. Such a profile may be an indicator of the depth of treatment achieved by the PDT. Our progress toward developing this diagnostic for use in our clinical PDT treatments of esophageal cancer patients is presented.

  12. Human iris three-dimensional imaging at micron resolution by a micro-plenoptic camera.

    PubMed

    Chen, Hao; Woodward, Maria A; Burke, David T; Jeganathan, V Swetha E; Demirci, Hakan; Sick, Volker

    2017-10-01

    A micro-plenoptic system was designed to capture the three-dimensional (3D) topography of the anterior iris surface by simple single-shot imaging. Within a depth-of-field of 2.4 mm, depth resolution of 10 µm can be achieved with accuracy (systematic errors) and precision (random errors) below 20%. We demonstrated the application of our micro-plenoptic imaging system on two healthy irides, an iris with naevi, and an iris with melanoma. The ridges and folds, with height differences of 10~80 µm, on the healthy irides can be effectively captured. The front surface on the iris naevi was flat, and the iris melanoma was 50 ± 10 µm higher than the surrounding iris. The micro-plenoptic imaging system has great potential to be utilized for iris disease diagnosis and continuing, simple monitoring.

  13. Imaging Mass Spectrometry on the Nanoscale with Cluster Ion Beams

    DOE PAGES

    Winograd, Nicholas

    2014-12-02

    Imaging with cluster secondary ion mass spectrometry (SIMS) is reaching a mature level of development. Using a variety of molecular ion projectiles to stimulate desorption, 3-dimensional imaging with the selectivity of mass spectrometry can now be achieved with submicrometer spatial resolution and <10 nm depth resolution. In this Perspective, stock is taken regarding what it will require to routinely achieve these remarkable properties. Some issues include the chemical nature of the projectile, topography formation, differential erosion rates, and perhaps most importantly, ionization efficiency. Shortcomings of existing instrumentation are also noted. One key part of this discussion involves speculation on how best to resolve these issues.

  14. Imaging Mass Spectrometry on the Nanoscale with Cluster Ion Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winograd, Nicholas

    Imaging with cluster secondary ion mass spectrometry (SIMS) is reaching a mature level of development. Using a variety of molecular ion projectiles to stimulate desorption, 3-dimensional imaging with the selectivity of mass spectrometry can now be achieved with submicrometer spatial resolution and <10 nm depth resolution. In this Perspective, stock is taken regarding what it will require to routinely achieve these remarkable properties. Some issues include the chemical nature of the projectile, topography formation, differential erosion rates, and perhaps most importantly, ionization efficiency. Shortcomings of existing instrumentation are also noted. One key part of this discussion involves speculation on how best to resolve these issues.

  15. Plenoptic layer-based modeling for image based rendering.

    PubMed

    Pearson, James; Brookes, Mike; Dragotti, Pier Luigi

    2013-09-01

    Image-based rendering is an attractive alternative to model-based rendering for generating novel views because of its lower complexity and potential for photo-realistic results. To reduce the number of images necessary for alias-free rendering, some geometric information for the 3D scene is normally necessary. In this paper, we present a fast automatic layer-based method for synthesizing an arbitrary new view of a scene from a set of existing views. Our algorithm takes advantage of the knowledge of the typical structure of multiview data to perform occlusion-aware layer extraction. In addition, the number of depth layers used to approximate the geometry of the scene is chosen based on plenoptic sampling theory, with the layers placed non-uniformly to account for the scene distribution. The rendering is achieved using a probabilistic interpolation approach and by extracting the depth layer information on a small number of key images. Numerical results demonstrate that the algorithm is fast and yet is only 0.25 dB away from the ideal performance achieved with the ground-truth knowledge of the 3D geometry of the scene of interest. This indicates that there are measurable benefits from following the predictions of plenoptic theory and that they remain true when translated into a practical system for real world data.
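
    The non-uniform layer placement follows plenoptic sampling theory; a common choice, assumed here purely for illustration, is to space the layers uniformly in inverse depth (disparity):

```python
import numpy as np

def disparity_spaced_layers(z_min, z_max, n_layers):
    """Depth layers spaced uniformly in 1/Z between z_min and z_max."""
    inv = np.linspace(1.0 / z_min, 1.0 / z_max, n_layers)
    return 1.0 / inv

# Five layers covering 1 m to 10 m; layers crowd near the camera,
# where a fixed depth error produces the largest parallax error.
layers = disparity_spaced_layers(1.0, 10.0, 5)
```
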

  16. Depth-Dependent Anisotropies of Amides and Sugar in Perpendicular and Parallel Sections of Articular Cartilage by Fourier Transform Infrared Imaging (FTIRI)

    PubMed Central

    Xia, Yang; Mittelstaedt, Daniel; Ramakrishnan, Nagarajan; Szarko, Matthew; Bidthanapally, Aruna

    2010-01-01

    Full thickness blocks of canine humeral cartilage were microtomed into both perpendicular sections and a series of 100 parallel sections, each 6 μm thick. Fourier Transform Infrared Imaging (FTIRI) was used to image each tissue section eleven times under different infrared polarizations (from 0° to 180° polarization states in 20° increments and with an additional 90° polarization), at a spatial resolution of 6.25 μm and a wavenumber step of 8 cm−1. With increasing depth from the articular surface, amide anisotropies increased in the perpendicular sections and decreased in the parallel sections. Both types of tissue sectioning identified a 90° difference between amide I and amide II in the superficial zone of cartilage. The fibrillar distribution in the parallel sections from the superficial zone was shown to not be random. Sugar had the greatest anisotropy in the upper part of the radial zone in the perpendicular sections. The depth-dependent anisotropic data were fitted with a theoretical equation that contained three signature parameters, which illustrate the arcade structure of collagens with the aid of a fibril model. Infrared imaging of both perpendicular and parallel sections provides the possibility of determining the three-dimensional macromolecular structures in articular cartilage. Being sensitive to the orientation of the macromolecular structure in healthy articular cartilage aids the prospect of detecting the early onset of the tissue degradation that may lead to pathological conditions such as osteoarthritis. PMID:21274999

  17. High speed multiphoton imaging

    NASA Astrophysics Data System (ADS)

    Li, Yongxiao; Brustle, Anne; Gautam, Vini; Cockburn, Ian; Gillespie, Cathy; Gaus, Katharina; Lee, Woei Ming

    2016-12-01

    Intravital multiphoton microscopy has emerged as a powerful technique to visualize cellular processes in vivo. Real-time processes revealed through live imaging provide many opportunities to capture cellular activities in living animals. The typical parameters that determine the performance of multiphoton microscopy are speed, field of view, 3D imaging, and imaging depth; many of these are important for acquiring data in vivo. Here, we provide a full exposition of a flexible polygon-mirror-based high-speed laser-scanning multiphoton imaging system, built around a PCI-6110 card (National Instruments) and a high-speed analog frame grabber card (Matrox Solios eA/XA), which allows rapid adjustment of frame rates from 5 Hz to 50 Hz at 512 × 512 pixels. Furthermore, a motion correction algorithm is used to mitigate motion artifacts. Customized control software called Pscan 1.0 was developed for the system. This is followed by calibration of the imaging performance of the system and a series of quantitative in-vitro and in-vivo imaging experiments in neuronal tissues and mice.
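
    The quoted frame rates translate directly into per-pixel dwell times; the sketch below assumes an ideal raster scan with no scanner turnaround or dead time, which a real polygon-mirror system does have:

```python
def pixel_dwell_ns(frame_rate_hz, width=512, height=512):
    """Dwell time per pixel, in nanoseconds, for an ideal raster scan."""
    return 1e9 / (frame_rate_hz * width * height)

slow = pixel_dwell_ns(5)    # dwell per pixel at 5 Hz frame rate
fast = pixel_dwell_ns(50)   # ten times shorter at 50 Hz
```

    The tenfold shorter dwell at 50 Hz is what makes signal-to-noise, not scanner speed alone, the practical limit on fast multiphoton imaging.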

  18. On the relationships between higher and lower bit-depth system measurements

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Haefner, David P.; Doe, Joshua M.

    2018-04-01

    The quality of an imaging system can be assessed through controlled laboratory objective measurements. Currently, all imaging measurements require some form of digitization in order to evaluate a metric. Depending on the device, the amount of bits available, relative to a fixed dynamic range, will exhibit quantization artifacts. From a measurement standpoint, measurements are desired to be performed at the highest possible bit-depth available. In this correspondence, we described the relationship between higher and lower bit-depth measurements. The limits to which quantization alters the observed measurements will be presented. Specifically, we address dynamic range, MTF, SiTF, and noise. Our results provide guidelines to how systems of lower bit-depth should be characterized and the corresponding experimental methods.
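
    The quantization artifacts discussed above can be illustrated by requantizing a 16-bit signal to 8 bits and comparing the empirical RMS error with the uniform-quantization prediction of step/sqrt(12) (a synthetic sketch, not the authors' measurement procedure):

```python
import numpy as np

step = 2 ** 8  # quantization step when reducing 16-bit data to 8 bits
rng = np.random.default_rng(0)
x = rng.integers(0, 2 ** 16, size=100_000).astype(float)

# Requantize by rounding to the nearest 8-bit level, kept on the 16-bit scale.
x8 = np.round(x / step) * step

rms_error = np.sqrt(np.mean((x - x8) ** 2))
theory = step / np.sqrt(12.0)  # RMS of a uniform rounding error
```

    For a full-scale uniform input the two agree closely; for low-contrast signals the error is no longer signal-independent, which is one reason the correspondence advocates measuring at the highest available bit depth.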

  19. Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frary, R.; Louie, J.; Pullammanappallil, S.

    Roxanna Frary, John N. Louie, Sathish Pullammanappallil, Amy Eisses, 2011, Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect: presented at American Geophysical Union Fall Meeting, San Francisco, Dec. 5-9, abstract T13G-07.

  20. In vivo photoacoustic imaging of mouse embryos

    NASA Astrophysics Data System (ADS)

    Laufer, Jan; Norris, Francesca; Cleary, Jon; Zhang, Edward; Treeby, Bradley; Cox, Ben; Johnson, Peter; Scambler, Pete; Lythgoe, Mark; Beard, Paul

    2012-06-01

    The ability to noninvasively image embryonic vascular anatomy in mouse models is an important requirement for characterizing the development of the normal cardiovascular system and malformations in the heart and vascular supply. Photoacoustic imaging, which can provide high-resolution, noninvasive images of the vasculature based upon optical absorption by endogenous hemoglobin, is well suited to this application. In this study, photoacoustic images of mouse embryos were obtained ex vivo and in vivo. The images show intricate details of the embryonic vascular system to depths of up to 10 mm, which allowed whole embryos to be imaged in situ. To achieve this, an all-optical photoacoustic scanner and a novel time-reversal image reconstruction algorithm, which provide deep-tissue imaging capability while maintaining high spatial resolution and contrast, were employed. This technology may find application as an imaging tool for preclinical embryo studies in developmental biology as well as more generally in preclinical and clinical medicine for studying pathologies characterized by changes in the vasculature.

  1. Axial resolution improvement in spectral domain optical coherence tomography using a depth-adaptive maximum-a-posterior framework

    NASA Astrophysics Data System (ADS)

    Boroomand, Ameneh; Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka

    2015-03-01

    The axial resolution of Spectral Domain Optical Coherence Tomography (SD-OCT) images degrades with scanning depth due to the limited number of pixels and the pixel size of the camera, aberrations in the spectrometer optics, and wavelength-dependent scattering and absorption in the imaged object [1]. Here we propose a novel algorithm which compensates for the depth-dependent axial Point Spread Function (PSF) blur that these factors introduce in SD-OCT images. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model. The aim is to compensate for the depth-dependent axial blur in SD-OCT images and simultaneously suppress the speckle noise which is inherent to all OCT images. Applying the proposed depth-dependent axial resolution enhancement technique to an OCT image of a cucumber considerably improved the axial resolution of the image, especially at greater imaging depths, and allowed for better visualization of cellular membranes and nuclei. Comparing the result of our proposed method with the conventional Lucy-Richardson deconvolution algorithm clearly demonstrates the efficiency of our proposed technique in better visualization and preservation of fine details and structures in the imaged sample, as well as better speckle noise suppression. This illustrates the potential usefulness of our proposed technique as a suitable replacement for hardware approaches, which are often very costly and complicated.
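
    The depth dependence being compensated can be modeled as an axial blur whose width grows with depth; the linear growth of the Gaussian width below is an illustrative assumption, not the paper's PSF model:

```python
import numpy as np

def depth_varying_blur_matrix(n, sigma0=1.0, growth=0.05):
    """Rows are Gaussian axial PSFs whose width grows linearly with depth."""
    z = np.arange(n)
    H = np.empty((n, n))
    for i in range(n):
        sigma = sigma0 + growth * i
        row = np.exp(-0.5 * ((z - i) / sigma) ** 2)
        H[i] = row / row.sum()  # each depth's PSF is normalized
    return H

H = depth_varying_blur_matrix(128)
profile = np.zeros(128)
profile[[20, 100]] = 1.0   # two identical reflectors, shallow and deep
blurred = H @ profile      # the deep reflector comes out lower and broader
```
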

  2. Coupling sky images with radiative transfer models: a new method to estimate cloud optical depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mejia, Felipe A.; Kurtz, Ben; Murray, Keenan

    A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based sky imager (USI) is presented. The radiance red–blue ratio (RRBR) method is motivated from the analysis of simulated images of various τc produced by a radiative transfer model (RTM). From these images the basic parameters affecting the radiance and red–blue ratio (RBR) of a pixel are identified as the solar zenith angle (θ0), τc, solar pixel angle/scattering angle (θs), and pixel zenith angle/view angle (θz). The effects of these parameters are described and the functions for radiance, Iλ(τc, θ0, θs, θz), and red–blue ratio, RBR(τc, θ0, θs, θz), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc, where RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured radiance Iλ,meas(θs, θz), in addition to RBRmeas(θs, θz), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement (ARM) program site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min et al. (2003) method for overcast skies. τc values ranged from 0 to 80, with values over 80 being capped and registered as 80. A τc RMSE of 2.5 between the Min et al. (2003) method and the USI is observed. The MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. In conclusion, the procedure developed here provides a foundation to test and develop other cloud detection algorithms.
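
    The disambiguation at the heart of RRBR can be sketched with synthetic lookup curves standing in for the RTM output; the curve shapes below are illustrative only, not the RTM functions:

```python
import numpy as np

# Synthetic RTM-style lookup tables for one fixed viewing geometry.
tau = np.linspace(0.0, 10.0, 1001)
rbr = 1.0 + 0.8 * tau * np.exp(-tau)   # rises to tau ~ 1, then falls (non-unique)
radiance = 1.0 / (1.0 + tau)           # monotone in tau (illustrative)

def retrieve_tau(rbr_meas, radiance_meas):
    """Pick the tau whose (RBR, radiance) pair best matches the measurement."""
    cost = (rbr - rbr_meas) ** 2 + (radiance - radiance_meas) ** 2
    return tau[np.argmin(cost)]

# A measurement generated at tau = 3.0 is recovered despite the RBR ambiguity.
t_true = 3.0
t_hat = retrieve_tau(1.0 + 0.8 * t_true * np.exp(-t_true), 1.0 / (1.0 + t_true))
```

    Because radiance is monotone in τc in this toy model, adding it to the cost removes the two-candidate ambiguity that RBR alone leaves.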

  3. Coupling sky images with radiative transfer models: a new method to estimate cloud optical depth

    DOE PAGES

    Mejia, Felipe A.; Kurtz, Ben; Murray, Keenan; ...

    2016-08-30

    A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based sky imager (USI) is presented. The radiance red–blue ratio (RRBR) method is motivated from the analysis of simulated images of various τc produced by a radiative transfer model (RTM). From these images the basic parameters affecting the radiance and red–blue ratio (RBR) of a pixel are identified as the solar zenith angle (θ0), τc, solar pixel angle/scattering angle (θs), and pixel zenith angle/view angle (θz). The effects of these parameters are described and the functions for radiance, Iλ(τc, θ0, θs, θz), and red–blue ratio, RBR(τc, θ0, θs, θz), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc, where RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured radiance Iλ,meas(θs, θz), in addition to RBRmeas(θs, θz), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement (ARM) program site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min et al. (2003) method for overcast skies. τc values ranged from 0 to 80, with values over 80 being capped and registered as 80. A τc RMSE of 2.5 between the Min et al. (2003) method and the USI is observed. The MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. In conclusion, the procedure developed here provides a foundation to test and develop other cloud detection algorithms.

  4. A new method for depth profiling reconstruction in confocal microscopy

    NASA Astrophysics Data System (ADS)

    Esposito, Rosario; Scherillo, Giuseppe; Mensitieri, Giuseppe

    2018-05-01

    Confocal microscopy is commonly used to reconstruct depth profiles of chemical species in multicomponent systems and to image nuclear and cellular details in human tissues via image intensity measurements of optical sections. However, the performance of this technique is reduced by inherent effects related to wave diffraction phenomena, refractive index mismatch, and finite beam spot size. All these effects distort the optical wave and cause an image to be captured of a small volume around the desired illuminated focal point within the specimen rather than an image of the focal point itself. The size of this small volume increases with depth, thus causing a further loss of resolution and distortion of the profile. Recently, we proposed a theoretical model that accounts for the above wave distortion and allows for a correct reconstruction of the depth profiles for homogeneous samples. In this paper, this theoretical approach has been adapted for describing the profiles measured from non-homogeneous distributions of emitters inside the investigated samples. The intensity image is built by summing the intensities collected from each of the emitter planes belonging to the illuminated volume, weighted by the emitter concentration. The true emitter concentration distribution is recovered by a new approach that implements this theoretical model in a numerical algorithm based on the Maximum Entropy Method. Comparisons with experimental data and numerical simulations show that this new approach is able to recover the real unknown concentration distribution from experimental profiles with an accuracy better than 3%.
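
    The inverse problem solved by the authors' Maximum Entropy algorithm, recovering an emitter distribution from depth-blurred intensities, can be illustrated with a simpler multiplicative scheme, Richardson-Lucy iteration (explicitly not the paper's MEM algorithm, and with a depth-independent PSF for brevity):

```python
import numpy as np

def blur(profile, kernel):
    return np.convolve(profile, kernel, mode="same")

def richardson_lucy(measured, kernel, n_iter=200):
    """Iteratively recover a non-negative emitter profile from a blurred one."""
    estimate = np.full_like(measured, measured.mean())
    mirrored = kernel[::-1]
    for _ in range(n_iter):
        ratio = measured / np.maximum(blur(estimate, kernel), 1e-12)
        estimate *= blur(ratio, mirrored)
    return estimate

# Synthetic depth profile: two emitter layers, blurred by a Gaussian PSF.
z = np.arange(64, dtype=float)
truth = np.exp(-0.5 * ((z - 20) / 2.0) ** 2) \
    + 0.6 * np.exp(-0.5 * ((z - 45) / 2.0) ** 2)
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
kernel /= kernel.sum()
measured = blur(truth, kernel)

recovered = richardson_lucy(measured, kernel)
```

    The multiplicative update preserves non-negativity, which is also a feature of entropy-based recovery; the MEM approach additionally regularizes the solution, which matters once noise is present.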

  5. Ultra-high-speed variable focus optics for novel applications in advanced imaging

    NASA Astrophysics Data System (ADS)

    Kang, S.; Dotsenko, E.; Amrhein, D.; Theriault, C.; Arnold, C. B.

    2018-02-01

    With the advancement of ultra-fast manufacturing technologies, high speed imaging with high 3D resolution has become increasingly important. Here we show the use of an ultra-high-speed variable focus optical element, the TAG Lens, to enable new ways to acquire 3D information from an object. The TAG Lens uses sound to adjust the index of refraction profile in a liquid and thereby can achieve focal scanning rates greater than 100 kHz. When combined with a high-speed pulsed LED and a high-speed camera, we can exploit this phenomenon to achieve high-resolution imaging through large depths. By combining the image acquisition with digital image processing, we can extract relevant parameters such as tilt and angle information from objects in the image. Due to the high speeds at which images can be collected and processed, we believe this technique can be used as an efficient method of industrial inspection and metrology for high throughput applications.
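
    Selecting a focal plane with the pulsed LED amounts to firing at a fixed phase of the lens's sinusoidal focal sweep; the drive frequency and sweep amplitude below are illustrative assumptions, not the TAG Lens specification:

```python
import math

def focal_offset_mm(phase_delay_s, drive_hz=100e3, sweep_amplitude_mm=0.5):
    """Instantaneous focal-plane offset when the LED fires at a given delay."""
    return sweep_amplitude_mm * math.sin(2 * math.pi * drive_hz * phase_delay_s)

# Firing a quarter period after the zero crossing reaches the sweep extreme.
quarter_period = 1.0 / (4 * 100e3)
z = focal_offset_mm(quarter_period)
```
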

  6. Photoacoustic imaging probe for detecting lymph nodes and spreading of cancer at various depths

    NASA Astrophysics Data System (ADS)

    Lee, Yong-Jae; Jeong, Eun-Ju; Song, Hyun-Woo; Ahn, Chang-Geun; Noh, Hyung Wook; Sim, Joo Yong; Song, Dong Hoon; Jeon, Min Yong; Lee, Susung; Kim, Heewon; Zhang, Meihua; Kim, Bong Kyu

    2017-09-01

    We propose a compact, easy-to-use photoacoustic imaging (PAI) probe structure using a single strand of optical fiber and a beam combiner that doubly reflects acoustic waves, for convenient detection of lymph nodes and cancers. Conventional PAI probes have difficulty detecting lymph nodes just beneath the skin, or simultaneously investigating lymph nodes in both shallow and deep regions, without any supplementary material, because the light and acoustic beams intersect obliquely in the probe. To overcome these limitations and improve convenience, we propose a probe structure in which the illuminating light beam axis coincides with the axis of the ultrasound. The developed PAI probe was able to simultaneously acquire images from shallow to deep regions without the use of any supplementary material. Moreover, the proposed probe had low transmission losses for the light and acoustic beams. Therefore, the proposed PAI probe will be useful for easily detecting lymph nodes and cancers in real clinical settings.

  7. Optical coherence microscopy for deep tissue imaging of the cerebral cortex with intrinsic contrast

    PubMed Central

    Srinivasan, Vivek J.; Radhakrishnan, Harsha; Jiang, James Y.; Barry, Scott; Cable, Alex E.

    2012-01-01

    In vivo optical microscopic imaging techniques have recently emerged as important tools for the study of neurobiological development and pathophysiology. In particular, two-photon microscopy has proved to be a robust and highly flexible method for in vivo imaging in highly scattering tissue. However, two-photon imaging typically requires extrinsic dyes or contrast agents, and imaging depths are limited to a few hundred microns. Here we demonstrate Optical Coherence Microscopy (OCM) for in vivo imaging of neuronal cell bodies and cortical myelination up to depths of ~1.3 mm in the rat neocortex. Imaging does not require the administration of exogenous dyes or contrast agents, and is achieved through intrinsic scattering contrast and image processing alone. Furthermore, using OCM we demonstrate in vivo, quantitative measurements of optical properties (index of refraction and attenuation coefficient) in the cortex, and correlate these properties with laminar cellular architecture determined from the images. Lastly, we show that OCM enables direct visualization of cellular changes during cell depolarization and may therefore provide novel optical markers of cell viability. PMID:22330462
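
    The attenuation-coefficient measurement mentioned above is commonly posed as a single-scattering Beer-Lambert fit, I(z) = I0 * exp(-2 * mu * z); the sketch below uses that generic model, not necessarily the authors' exact fitting procedure:

```python
import numpy as np

def fit_attenuation(z_mm, intensity):
    """Fit I(z) = I0 * exp(-2 * mu * z) by a log-linear least-squares fit.

    Returns mu in 1/mm (the factor 2 accounts for the round trip)."""
    slope, _ = np.polyfit(z_mm, np.log(intensity), 1)
    return -slope / 2.0

# Synthetic OCM depth profile with mu = 0.8 / mm.
z = np.linspace(0.0, 1.2, 50)
profile = 5.0 * np.exp(-2 * 0.8 * z)
mu = fit_attenuation(z, profile)
```

    On real data the fit would be restricted to a depth window within one tissue layer, since mu changes across the laminar architecture the paper correlates it with.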

  8. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we have extended the principles of "variable homography", defined by Zhang and Greenspan, for measuring the height of emergent fibers on glass and non-woven fabrics. This method was defined for working with fabric samples progressing on a conveyor belt. Triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but have reduced the number of acquisitions to a single one by developing an acquisition device characterized by 4 lenses placed in front of a single image sensor. The idea is then to obtain four projected sub-images on a single CCD sensor. The device becomes a plenoptic or light field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.
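
    Although the authors work through a variable-homography formulation, the depth information in the four sub-images ultimately comes from parallax; a minimal pinhole-stereo sketch of that geometry, with hypothetical parameters:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole-stereo depth from the disparity between two sub-images:
    Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical four-lens rig: 1200 px focal length, 8 mm lens spacing.
f_px, baseline = 1200.0, 0.008
z_true = 0.60                        # object at 0.6 m
d = f_px * baseline / z_true         # disparity such a rig would observe
z_est = depth_from_disparity(f_px, baseline, d)
```

    The homography formulation generalizes this by absorbing the calibration of all four lens centers into plane-to-plane mappings, so no explicit rectification is needed.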

  9. Extreme depth-of-field intraocular lenses

    NASA Astrophysics Data System (ADS)

    Baker, Kenneth M.

    1996-05-01

    A new technology brings the full aperture single vision pseudophakic eye's effective hyperfocal distance within the half-meter range. A modulated index IOL containing a subsurface zeroth order coherent microlenticular mosaic defined by an index gradient adds a normalizing function to the vergences or parallactic angles of incoming light rays subtended from field object points and redirects them, in the case of near-field images, to that of far-field images. Along with a scalar reduction of the IOL's linear focal range, this results in an extreme depth of field with a narrow depth of focus and avoids the focal split-up, halo, and inherent reduction in contrast of multifocal IOLs. A high microlenticular spatial frequency, which, while still retaining an anisotropic medium, results in a nearly total zeroth order propagation throughout the visible spectrum. The curved lens surfaces still provide most of the refractive power of the IOL, and the unique holographic fabrication technology is especially suitable not only for IOLs but also for contact lenses, artificial corneas, and miniature lens elements for cameras and other optical devices.

  10. 4D Light Field Imaging System Using Programmable Aperture

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam

    2012-01-01

    Complete depth information can be extracted from analyzing all angles of light rays emanating from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal panel [LC (liquid crystal) panel, similar to ones commonly used in the display industry] to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture. A digital micromirror device (DMD) will replace the liquid crystal. This will be qualified for harsh environments for the 4D light field imaging. This will enable an imager to record near-complete stereo information. The approach to building a proof-of-concept is using existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified. The lens system will be arranged so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any loss of light transmission (which would be expected from an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuations than an LC panel can.
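
    Programming the aperture reduces to writing binary mirror masks to the DMD; a toy sketch of circular aperture masks for multi-viewpoint capture (mask geometry and resolution are illustrative, not the proposed flight design):

```python
import numpy as np

def aperture_mask(rows, cols, center, radius):
    """Binary DMD mask: mirrors 'on' inside a circular aperture."""
    r, c = np.ogrid[:rows, :cols]
    return (r - center[0]) ** 2 + (c - center[1]) ** 2 <= radius ** 2

# Four off-axis apertures, displayed in sequence, emulate four viewpoints
# captured through a single lens onto a single sensor.
views = [aperture_mask(768, 1024, ctr, 60)
         for ctr in [(200, 300), (200, 700), (560, 300), (560, 700)]]
```

    A coded-aperture pattern is just a different boolean array written to the same device, which is what makes the DMD approach flexible.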

  11. iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization.

    PubMed

    Blenkmann, Alejandro O; Phillips, Holly N; Princich, Juan P; Rowe, James B; Bekinschtein, Tristan A; Muravchik, Carlos H; Kochen, Silvia

    2017-01-01

    The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2-3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions.

  12. iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization

    PubMed Central

    Blenkmann, Alejandro O.; Phillips, Holly N.; Princich, Juan P.; Rowe, James B.; Bekinschtein, Tristan A.; Muravchik, Carlos H.; Kochen, Silvia

    2017-01-01

    The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2–3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions. PMID:28303098

  13. Determination of thermal wave reflection coefficient to better estimate defect depth using pulsed thermography

    NASA Astrophysics Data System (ADS)

    Sirikham, Adisorn; Zhao, Yifan; Mehnen, Jörn

    2017-11-01

    Thermography is a promising method for detecting subsurface defects, but accurate measurement of defect depth remains a challenge because thermographic signals are typically corrupted by imaging noise and affected by 3D heat conduction. Existing methods based on numerical models are susceptible to signal noise, and methods based on analytical models require rigorous assumptions that usually cannot be satisfied in practical applications. This paper presents a new method to improve the measurement accuracy of subsurface defect depth by determining the thermal wave reflection coefficient, usually assumed to be known a priori, directly from observed data. This is achieved by introducing a new heat transfer model that includes multiple physical parameters to better describe the observed thermal behaviour in pulsed thermographic inspection. Numerical simulations are used to evaluate the performance of the proposed method against four selected state-of-the-art methods. Results show that the accuracy of depth measurement is improved by up to 10% when the noise level is high and the thermal wave reflection coefficient is low. The feasibility of the proposed method on real data is also validated through a case study characterising flat-bottom holes in carbon fibre reinforced polymer (CFRP) laminates, which have wide application in various sectors of industry.
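    The analytical backbone of such pulsed-thermography models can be sketched with the standard 1-D image-source series for the surface temperature over a defect at depth L. The sketch below is a generic illustration only, not the paper's extended multi-parameter model; the depth and diffusivity values are illustrative assumptions (alpha roughly a CFRP through-thickness diffusivity), and R is the thermal wave reflection coefficient that the paper estimates from data.

    ```python
    import math

    def surface_temperature(t, q_over_e=1.0, L=1e-3, alpha=4e-7, R=1.0, n_terms=50):
        """Surface temperature rise after a flash pulse, over a defect at depth L
        (standard 1-D image-source series):
            T(t) = (Q/e)/sqrt(pi*t) * [1 + 2*sum_n R**n * exp(-(n*L)**2/(alpha*t))]
        q_over_e: absorbed energy over effusivity; alpha: thermal diffusivity;
        R: thermal wave reflection coefficient at the defect interface."""
        series = sum((R ** n) * math.exp(-((n * L) ** 2) / (alpha * t))
                     for n in range(1, n_terms + 1))
        return q_over_e / math.sqrt(math.pi * t) * (1.0 + 2.0 * series)

    # With R = 0 the response reduces to the semi-infinite 1/sqrt(t) cooling
    # curve; a nonzero R makes the curve depart from it at a time governed by
    # the defect depth, which is what depth inversion exploits.
    t_obs = 5.0
    baseline = surface_temperature(t_obs, R=0.0)   # no defect
    defective = surface_temperature(t_obs, R=0.9)  # reflecting defect at 1 mm
    ```

    Fitting R jointly with depth, rather than fixing it in advance, is the change in problem formulation that the abstract describes.
    
    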

  14. Erratum: The MACHO Project: Microlensing Optical Depth toward the Galactic Bulge from Difference Image Analysis

    NASA Astrophysics Data System (ADS)

    Alcock, C.; Allsman, R. A.; Alves, D. R.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Geha, M.; Griest, K.; Lehner, M. J.; Marshall, S. L.; Minniti, D.; Nelson, C. A.; Peterson, B. A.; Popowski, P.; Pratt, M. R.; Quinn, P. J.; Stubbs, C. W.; Sutherland, W.; Tomaney, A. B.; Vandehei, T.; Welch, D. L.

    2001-08-01

    In the paper ``The MACHO Project: Microlensing Optical Depth toward the Galactic Bulge from Difference Image Analysis'' by C. Alcock, R. A. Allsman, D. R. Alves, T. S. Axelrod, A. C. Becker, D. P. Bennett, K. H. Cook, A. J. Drake, K. C. Freeman, M. Geha, K. Griest, M. J. Lehner, S. L. Marshall, D. Minniti, C. A. Nelson, B. A. Peterson, P. Popowski, M. R. Pratt, P. J. Quinn, C. W. Stubbs, W. Sutherland, A. B. Tomaney, T. Vandehei, and D. L. Welch (ApJ, 541, 734 [2000]) an incorrect version of Table 3 was published. A second copy of Table 2 was given as Table 3. The correct version of Table 3 is available in the preprint version of the paper (astro-ph/0002510) and is printed below. This correction does not affect any of the results in the paper.

  15. Feasibility of spatial frequency-domain imaging for monitoring palpable breast lesions

    NASA Astrophysics Data System (ADS)

    Robbins, Constance M.; Raghavan, Guruprasad; Antaki, James F.; Kainerstorfer, Jana M.

    2017-12-01

    In breast cancer diagnosis and therapy monitoring, there is a need for frequent, noninvasive evaluation of disease progression. Breast tumors differ from healthy tissue in mechanical stiffness as well as optical properties, which allows optical methods to detect and monitor breast lesions noninvasively. Spatial frequency-domain imaging (SFDI) is a reflectance-based diffuse optical method that can yield two-dimensional images of absolute optical properties of tissue with an inexpensive and portable system, although its depth penetration is limited. Since the absorption coefficient of breast tissue is relatively low and the tissue is quite flexible, compression of the tissue offers an opportunity to bring stiff, palpable breast lesions within the detection range of SFDI. Sixteen breast tissue-mimicking phantoms were fabricated containing stiffer, more highly absorbing tumor-mimicking inclusions of varying absorption contrast and depth. These phantoms were imaged with an SFDI system at five levels of compression. An increase in absorption contrast was observed with compression, and reliable detection of each inclusion was achieved when compression was sufficient to bring the inclusion center within ˜12 mm of the phantom surface. At the highest compression level, the contrasts achieved with this system were comparable to those measured with single source-detector near-infrared spectroscopy.

  16. Human iris three-dimensional imaging at micron resolution by a micro-plenoptic camera

    PubMed Central

    Chen, Hao; Woodward, Maria A.; Burke, David T.; Jeganathan, V. Swetha E.; Demirci, Hakan; Sick, Volker

    2017-01-01

    A micro-plenoptic system was designed to capture the three-dimensional (3D) topography of the anterior iris surface by simple single-shot imaging. Within a depth of field of 2.4 mm, a depth resolution of 10 µm can be achieved with accuracy (systematic error) and precision (random error) below 20%. We demonstrated the application of our micro-plenoptic imaging system on two healthy irides, an iris with naevi, and an iris with melanoma. The ridges and folds on the healthy irides, with height differences of 10–80 µm, can be effectively captured. The front surface of the iris naevi was flat, and the iris melanoma was 50 ± 10 µm higher than the surrounding iris. The micro-plenoptic imaging system has great potential for iris disease diagnosis and simple, continued monitoring. PMID:29082081

  17. Multicontrast photoacoustic in vivo imaging using near-infrared fluorescent proteins

    NASA Astrophysics Data System (ADS)

    Krumholz, Arie; Shcherbakova, Daria M.; Xia, Jun; Wang, Lihong V.; Verkhusha, Vladislav V.

    2014-02-01

    Non-invasive imaging of biological processes in vivo is invaluable in advancing biology. Photoacoustic tomography is a scalable imaging technique that provides higher resolution at greater depths in tissue than achievable by purely optical methods. Here we report the application of two spectrally distinct near-infrared fluorescent proteins, iRFP670 and iRFP720, engineered from bacterial phytochromes, as photoacoustic contrast agents. iRFPs provide tissue-specific contrast without the need for delivery of any additional substances. Compared to conventional GFP-like red-shifted fluorescent proteins, iRFP670 and iRFP720 demonstrate stronger photoacoustic signals at longer wavelengths, and can be spectrally resolved from each other and hemoglobin. We simultaneously visualized two differently labeled tumors, one with iRFP670 and the other with iRFP720, as well as blood vessels. We acquired images of a mouse as 2D sections of a whole animal, and as localized 3D volumetric images with high contrast and sub-millimeter resolution at depths up to 8 mm. Our results suggest iRFPs are genetically-encoded probes of choice for simultaneous photoacoustic imaging of several tissues or processes in vivo.

  18. Developing terahertz imaging equation and enhancement of the resolution of terahertz images using deconvolution

    NASA Astrophysics Data System (ADS)

    Ahi, Kiarash; Anwar, Mehdi

    2016-04-01

    This paper introduces a novel reconstruction approach for enhancing the resolution of terahertz (THz) images. For this purpose, the THz imaging equation is derived; to the best of our knowledge, this is the first THz imaging equation to be reported. The equation is universal for THz far-field imaging systems and can be used for analyzing, describing and modeling such systems. The geometry and behavior of Gaussian beams in the far-field region imply that the FWHM of a THz beam diverges as its frequency decreases; thus, the resolution of the measurement decreases at lower frequencies. On the other hand, the depth of penetration of THz beams decreases as frequency increases. Roughly speaking, beams below 1.5 THz are transmitted into integrated circuit (IC) packages and similar packaged objects; thus, higher-frequency THz pulses cannot be used to achieve higher-resolution inspection of packaged items. In this paper, after developing the 3-D THz point spread function (PSF) of the scanning THz beam and then the THz imaging equation, THz images are enhanced through deconvolution of the THz PSF from the THz images. As a result, the resolution has been improved several times beyond the physical limitations of the THz measurement setup in the far-field region, and sub-Nyquist images have been achieved. In particular, MSE and SSIM have been improved by 27% and 50%, respectively. Details as small as 0.2 mm were made visible in THz images that originally revealed no details smaller than 2.2 mm; in other words, the resolution of the images was increased roughly tenfold. The accuracy of the reconstructed images was verified against high-resolution X-ray images.
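    The deconvolution step can be illustrated with a minimal frequency-domain Wiener-filter sketch. The Gaussian PSF below is a stand-in assumption for illustration, not the 3-D THz PSF derived in the paper, and the regularization constant k is an arbitrary choice.

    ```python
    import numpy as np

    def gaussian_psf(size, sigma):
        """Centered 2-D Gaussian kernel, normalized to unit sum."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return psf / psf.sum()

    def wiener_deconvolve(image, psf, k=1e-3):
        """Wiener deconvolution: divide the blurred image spectrum by the PSF
        spectrum, regularized by a constant k standing in for the
        noise-to-signal power ratio."""
        H = np.fft.fft2(psf, s=image.shape)   # PSF transfer function
        G = np.fft.fft2(image)                # blurred image spectrum
        W = np.conj(H) / (np.abs(H) ** 2 + k) # Wiener filter
        return np.real(np.fft.ifft2(G * W))

    # Synthetic demo: blur a sharp test target with the PSF, then restore it.
    sharp = np.zeros((64, 64))
    sharp[28:36, 28:36] = 1.0
    psf = gaussian_psf(64, sigma=3.0)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
    restored = wiener_deconvolve(blurred, psf)
    ```

    The regularization suppresses frequencies where the PSF response is weak, which is how deconvolution can push resolution beyond the nominal beam-limited spot without amplifying noise unboundedly.
    
    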

  19. Burn depth determination using high-speed polarization-sensitive Mueller optical coherence tomography with continuous polarization modulation

    NASA Astrophysics Data System (ADS)

    Todorović, Miloš; Ai, Jun; Pereda Cubian, David; Stoica, George; Wang, Lihong

    2006-02-01

    The National Health Interview Survey (NHIS) estimates more than 1.1 million burn injuries per year in the United States, with nearly 15,000 fatalities from wounds and related complications. Polarization-sensitive optical coherence tomography is an imaging modality capable of evaluating burn depths non-invasively. We report on the use of a high-speed, fiber-based Mueller-matrix OCT system with continuous source-polarization modulation for burn depth evaluation. The new system is capable of imaging at near video-quality frame rates (8 frames per second) with a resolution of 10 μm in biological tissue (index of refraction: 1.4) and a sensitivity of 78 dB. The sample arm optics are integrated in a hand-held probe, simplifying the in vivo experiments. The applicability of the system to burn depth determination is demonstrated using biological samples of porcine tendon and porcine skin. The results show an improved imaging depth (1 mm in tendon) and clear localization of the thermally damaged region. The burnt area determined from OCT images compares well with histology, demonstrating the system's potential for burn depth determination.

  20. Super-resolution for asymmetric resolution of FIB-SEM 3D imaging using AI with deep learning.

    PubMed

    Hagita, Katsumi; Higuchi, Takeshi; Jinnai, Hiroshi

    2018-04-12

    Scanning electron microscopy equipped with a focused ion beam (FIB-SEM) is a promising three-dimensional (3D) imaging technique for nano- and meso-scale morphologies. In FIB-SEM, the specimen surface is stripped by an ion beam and imaged by an SEM installed orthogonally to the FIB. The lateral resolution is governed by the SEM, while the depth resolution, i.e., along the FIB milling direction, is determined by the thickness of the stripped thin layer. In most cases, the lateral resolution is superior to the depth resolution; hence, asymmetric resolution is generated in the 3D image. Here, we propose a new super-resolution approach, using image-processing and deep-learning-based methods, for 3D images with such asymmetric resolution, restoring the depth resolution to achieve symmetric resolution. The deep-learning-based method learns from high-resolution sub-images obtained via SEM and recovers low-resolution sub-images parallel to the FIB milling direction. The 3D morphologies of polymeric nano-composites are used as test images, processed by the deep-learning-based method as well as by conventional methods. We find that the former yields superior restoration, particularly as the asymmetry in resolution increases. Our super-resolution approach for images having asymmetric resolution enables a reduction in observation time.

  1. Integration time for the perception of depth from motion parallax.

    PubMed

    Nawrot, Mark; Stroyan, Keith

    2012-04-15

    The perception of depth from relative motion is believed to be a slow process that "builds-up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f≈dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the incumbent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth-sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both retinal image motion and the stimulus window motion for pursuit (mean range=16.6-33.2 ms). This was found for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit or the eye was stationary. A large high-contrast masking stimulus disrupted depth-discrimination for stimulus presentations less than 70-75 ms in both pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed. We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio
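    The model's motion/pursuit ratio can be evaluated with a trivial numeric sketch. The values below are illustrative assumptions, not data from the study:

    ```python
    def relative_depth(f, dtheta_dt, dalpha_dt):
        """Motion/pursuit law from the abstract, d/f ~= dtheta/dalpha:
        f is the fixation distance, dtheta_dt the retinal image motion rate,
        dalpha_dt the pursuit eye-movement rate (same angular units)."""
        return f * (dtheta_dt / dalpha_dt)

    # Illustrative (assumed) values: at a 1.0 m fixation distance, a retinal
    # motion of 0.2 deg/s against a 4.0 deg/s pursuit implies about 0.05 m of
    # relative depth between the fixated point and the moving point.
    d = relative_depth(1.0, 0.2, 4.0)
    ```

    Because the estimate is a ratio of two measured rates, the integration time the study probes determines how much noise enters both the numerator and the denominator.
    
    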

  2. Real-time depth camera tracking with geometrically stable weight algorithm

    NASA Astrophysics Data System (ADS)

    Fu, Xingyin; Zhu, Feng; Qi, Feng; Wang, Mingming

    2017-03-01

    We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weighting method for the iterative closest point algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in the pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation. Our pipeline can be fully parallelized on a GPU and incorporated seamlessly into current real-time depth camera tracking systems. Second, we compare state-of-the-art weighting algorithms and propose a weight degradation algorithm matched to the measurement characteristics of a consumer depth camera. Third, we use Nvidia Kepler shuffle instructions during warp and block reduction to improve the efficiency of our system. Results on the public TUM RGB-D benchmark demonstrate that our camera tracking system achieves state-of-the-art results in both accuracy and efficiency.

  3. Evaluating motion parallax and stereopsis as depth cues for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Braun, Marius; Leiner, Ulrich; Ruschin, Detlef

    2011-03-01

    The perception of space in the real world is based on multifaceted depth cues, most of them monocular, some binocular. The development of 3D displays raises the question of which of these depth cues are predominant and should be simulated computationally in such a panel. Beyond cues based on image content, such as shadows or patterns, stereopsis and depth from motion parallax are the most significant mechanisms supplying observers with depth information. We set up a carefully designed test situation, largely excluding other unwanted distance cues, and then conducted a user test to find out which of these two depth cues is more relevant and whether a combination of both would increase accuracy in a depth estimation task. The trials were conducted using our autostereoscopic "Free2C" displays, which can detect the user's eye position and dynamically steer the image lobes in that direction. At the same time, the eye position was used to update the virtual camera's location, thereby offering motion parallax to the observer. As far as we know, this was the first time such a test has been conducted on an autostereoscopic display without any assistive technologies. Our results showed, in accordance with prior experiments, that both cues are effective; however, stereopsis is an order of magnitude more relevant. Combining both cues improved the precision of distance estimation by another 30-40%.

  4. Blur and the perception of depth at occlusions.

    PubMed

    Zannoli, Marina; Love, Gordon D; Narain, Rahul; Banks, Martin S

    2016-01-01

    The depth ordering of two surfaces, one occluding the other, can in principle be determined from the correlation between the occlusion border's blur and the blur of the two surfaces. If the border is blurred, the blurrier surface is nearer; if the border is sharp, the sharper surface is nearer. Previous research has found that observers do not use this informative cue. We reexamined this finding. Using a multiplane display, we confirmed the previous finding: Our observers did not accurately judge depth order when the blur was rendered and the stimulus presented on one plane. We then presented the same simulated scenes on multiple planes, each at a different focal distance, so the blur was created by the optics of the eye. Performance was now much better, which shows that depth order can be reliably determined from blur information but only when the optical effects are similar to those in natural viewing. We asked what the critical differences were in the single- and multiplane cases. We found that chromatic aberration provides useful information but accommodative microfluctuations do not. In addition, we examined how image formation is affected by occlusions and observed some interesting phenomena that allow the eye to see around and through occluding objects and may allow observers to estimate depth in da Vinci stereopsis, where one eye's view is blocked. Finally, we evaluated how accurately different rendering and displaying techniques reproduce the retinal images that occur in real occlusions. We discuss implications for computer graphics.

  5. Combined in-depth, 3D, en face imaging of the optic disc, optic disc pits and optic disc pit maculopathy using swept-source megahertz OCT at 1050 nm.

    PubMed

    Maertz, Josef; Kolb, Jan Philip; Klein, Thomas; Mohler, Kathrin J; Eibl, Matthias; Wieser, Wolfgang; Huber, Robert; Priglinger, Siegfried; Wolf, Armin

    2018-02-01

    To demonstrate papillary imaging of eyes with optic disc pits (ODP) or optic disc pit associated maculopathy (ODP-M) with ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s, and to generate 3D renderings of the papillary area with 3D volume reconstructions of the ODP and highly resolved en face images from a single densely sampled megahertz-OCT (MHz-OCT) dataset for investigation of ODP characteristics. A 1.68 MHz prototype SS-MHz-OCT system at 1050 nm based on a Fourier-domain mode-locked laser was employed to acquire high-definition 3D datasets with a dense sampling of 1600 × 1600 A-scans over a 45° field of view. Six eyes with ODPs and two further eyes, with glaucomatous alteration or without ocular pathology, are presented. 3D renderings of the deep papillary structures, virtual 3D reconstructions of the ODPs and depth-resolved isotropic en face images were generated using semiautomatic segmentation. 3D rendering and en face imaging of the optic disc, ODPs and ODP-associated pathologies showed a broad spectrum of ODP characteristics; between individuals, the shape of the ODP and the appending pathologies varied considerably. MHz-OCT en face imaging generates distinct top-view images of ODPs and ODP-M. MHz-OCT generates high-resolution images of retinal pathologies associated with ODP-M and allows visualizing ODPs with depths of up to 2.7 mm. Different patterns of ODPs can be visualized in patients for the first time using 3D reconstructions and co-registered high-definition en face images extracted from a single densely sampled 1050 nm MHz-OCT dataset. As the immediate vicinity of the subarachnoid space (SAS) and the site of intrapapillary proliferation are located at the bottom of the ODP, it is crucial to image the complete structure and the whole depth of ODPs. Especially in very deep pits, where non-swept-source OCT fails to reach the bottom, conventional swept-source devices and the MHz-OCT alike are feasible

  6. Image Restoration for Fluorescence Planar Imaging with Diffusion Model

    PubMed Central

    Gong, Yuzhu; Li, Yang

    2017-01-01

    Fluorescence planar imaging (FPI) fails to capture high-resolution images of deep fluorochromes due to photon diffusion. This paper presents an image restoration method to deal with this kind of blurring. The scheme is conceived from a reconstruction method in fluorescence molecular tomography (FMT) with a diffusion model. A new unknown parameter is defined by introducing the first mean value theorem for definite integrals. A system matrix converting this unknown parameter to the blurry image is constructed from the elements of depth conversion matrices related to a chosen plane named the focal plane. Results of phantom and mouse experiments show that the proposed method is capable of reducing the blurring of the FPI image caused by photon diffusion when the depth of the focal plane is chosen within a proper interval around the true depth of the fluorochrome. This method will be helpful for estimating the size of deep fluorochromes. PMID:29279843

  7. Measuring the depth of the caudal epidural space to prevent dural sac puncture during caudal block in children.

    PubMed

    Lee, Hyun Jeong; Min, Ji Young; Kim, Hyun Il; Byon, Hyo-Jin

    2017-05-01

    Caudal blocks are performed through the sacral hiatus to provide pain control in children undergoing lower abdominal surgery. During the block, it is important not to advance the needle too far beyond the sacrococcygeal ligament, to prevent unintended dural puncture. This study used demographic data to establish simple guidelines for predicting a safe needle depth in the caudal epidural space in children. A total of 141 children under 12 years old who had undergone lumbar-sacral magnetic resonance imaging were included. The T2 sagittal image that provided the best view of the sacrococcygeal membrane and the dural sac was chosen. We used a Picture Archiving and Communication System (Centricity® PACS, GE Healthcare Co.) to measure the distance between the sacrococcygeal ligament and the dural sac, the length of the sacrococcygeal ligament, and the maximum depth of the caudal space. There were strong correlations between age, weight, height, and BSA and both the distance between the sacrococcygeal ligament and dural sac and the length of the sacrococcygeal ligament. Based on these findings, a simple formula to calculate the distance between the sacrococcygeal ligament and dural sac was developed: 25 × BSA (mm). This simple formula can accurately estimate the safe depth of the caudal epidural space to prevent unintended dural puncture during caudal block in children. However, further clinical studies based on this formula are needed to substantiate its utility. © 2017 John Wiley & Sons Ltd.
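    As a sketch of how the abstract's 25 × BSA formula might be applied, the snippet below pairs it with the Mosteller BSA estimate. Note the choice of BSA formula is an assumption on our part (the abstract does not say which one the study's regression used), and the example child's measurements are hypothetical.

    ```python
    import math

    def bsa_mosteller(height_cm, weight_kg):
        """Body surface area (m^2) via the Mosteller formula:
        sqrt(height[cm] * weight[kg] / 3600). A common estimate, chosen
        here as an assumption."""
        return math.sqrt(height_cm * weight_kg / 3600.0)

    def ligament_to_dural_sac_mm(height_cm, weight_kg):
        """Predicted distance (mm) from the sacrococcygeal ligament to the
        dural sac, per the abstract's formula: 25 x BSA."""
        return 25.0 * bsa_mosteller(height_cm, weight_kg)

    # Hypothetical example: a 110 cm, 19 kg child.
    # BSA ~ sqrt(110*19/3600) ~ 0.76 m^2, so the margin is roughly 19 mm.
    margin = ligament_to_dural_sac_mm(110, 19)
    ```

    The appeal of the formula is that BSA is already computed routinely in pediatric practice, so the safe-depth estimate needs no imaging at the bedside.
    
    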

  8. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system

    NASA Astrophysics Data System (ADS)

    Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador

    2012-01-01

    A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits it to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of thus making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, thus the whole system acts as a dynamically selfadaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper. Program summary: Program title: DynWFC (Dynamic WaveFront Coding). Catalogue identifier: AEKC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 10 483. No. of bytes in distributed program, including test data, etc.: 2 437 713. Distribution format: tar.gz. Programming language: Labview 8.5 and NI Vision and MinGW C Compiler. Computer: Tested on PC Intel® Pentium®. Operating system: Tested on Windows XP. Classification: 18. Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator

  9. A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light

    PubMed Central

    Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning

    2017-01-01

    Since the release of the Microsoft Kinect, depth information has been used in many fields because of its low cost and easy availability. However, the Kinect and Kinect-like RGB-D sensors show limited performance in certain applications that place high demands on the accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect and two infrared cameras located on either side of the laser projector, to obtain higher-spatial-resolution depth information. We apply a block-matching algorithm to estimate the disparity. To improve the spatial resolution, we reduce the size of the matching blocks, but smaller matching blocks yield lower matching precision. To address this problem, we combine two matching modes (binocular and monocular) in the disparity estimation process. Experimental results show that our method can obtain higher-spatial-resolution depth without loss of range-image quality, compared with the Kinect. Furthermore, our algorithm is implemented on a low-cost hardware platform, and the system supports depth image sequences at a resolution of 1280 × 960 and speeds of up to 60 frames per second. PMID:28397759
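    The binocular matching mode rests on block matching between the two camera views; a naive sum-of-absolute-differences (SAD) reference can be sketched as follows. This is a generic illustration, not the paper's combined binocular/monocular hardware pipeline, and the block size and disparity range are arbitrary choices.

    ```python
    import numpy as np

    def block_match_disparity(left, right, block=7, max_disp=16):
        """Naive SAD block matching: for each left-image pixel, find the
        horizontal shift d of the right-image block that minimizes the sum
        of absolute differences over a block x block window."""
        h, w = left.shape
        r = block // 2
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(r, h - r):
            for x in range(r + max_disp, w - r):
                patch = left[y - r:y + r + 1, x - r:x + r + 1]
                costs = [np.abs(patch - right[y - r:y + r + 1,
                                              x - d - r:x - d + r + 1]).sum()
                         for d in range(max_disp)]
                disp[y, x] = int(np.argmin(costs))
        return disp

    # Synthetic stereo pair: the right view is the left view shifted 4 px,
    # so the true disparity is 4 wherever the random texture is observable.
    rng = np.random.default_rng(1)
    left_img = rng.random((32, 48))
    right_img = np.roll(left_img, -4, axis=1)
    disparity = block_match_disparity(left_img, right_img)
    ```

    Shrinking `block` sharpens spatial resolution but makes the SAD minimum noisier, which is exactly the precision trade-off the abstract addresses by adding the monocular (projector-pattern) mode.
    
    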

  10. Nonlinear spectral imaging of biological tissues

    NASA Astrophysics Data System (ADS)

    Palero, J. A.

    2007-07-01

    The work presented in this thesis demonstrates live high-resolution 3D imaging of tissue in its native state and environment. The nonlinear interaction between focussed femtosecond light pulses and biological tissue results in the emission of natural autofluorescence and a second-harmonic signal. Because biological intrinsic emission is generally very weak and extends from the ultraviolet to the visible spectral range, a broad-spectral-range, high-sensitivity 3D spectral imaging system was developed. Imaging the spectral characteristics of the biological intrinsic emission reveals the structure and biochemistry of the cells and extra-cellular components. By using different methods of visualizing the spectral images, discrimination between different tissue structures is achieved without the use of any stain or fluorescent label. For instance, RGB real-color spectral images of the intrinsic emission of mouse skin tissues show blue cells, green hair follicles, and purple collagen fibers. The color signature of each tissue component is directly related to its characteristic emission spectrum. The results of this study show that skin tissue nonlinear intrinsic emission is mainly due to the autofluorescence of reduced nicotinamide adenine dinucleotide (phosphate), flavins, keratin, melanin, phospholipids, elastin and collagen, and to nonlinear Raman scattering and second-harmonic generation in Type I collagen. In vivo time-lapse spectral imaging is implemented to study metabolic changes in epidermal cells in tissues. Optical scattering in tissues, a key factor in determining the maximum achievable imaging depth, is also investigated in this work.

  11. Restoration of distorted depth maps calculated from stereo sequences

    NASA Technical Reports Server (NTRS)

    Damour, Kevin; Kaufman, Howard

    1991-01-01

    A model-based Kalman estimator is developed for spatial-temporal filtering of noise and other degradations in velocity and depth maps derived from image sequences or cinema. As an illustration of the proposed procedures, edge information from image sequences of rigid objects is used in the processing of the velocity maps by selecting from a series of models for directional adaptive filtering. Adaptive filtering then allows for noise reduction while preserving sharpness in the velocity maps. Results from several synthetic and real image sequences are given.

  12. Research of detection depth for graphene-based optical sensor

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Sun, Jialve; Liu, Lu; Zhu, Siwei; Yuan, Xiaocong

    2018-03-01

    Graphene-based optical sensors have been developed for research into the biological intercellular refractive index (RI) because they offer greater detection depths than those provided by the surface plasmon resonance technique. In this Letter, we propose an experimental approach for measurement of the detection depth in a graphene-based optical sensor system that uses transparent polydimethylsiloxane layers with different thicknesses. The experimental results show that detection depths of 2.5 μm and 3 μm can be achieved at wavelengths of 532 nm and 633 nm, respectively. These results prove that graphene-based optical sensors can realize long-range RI detection and are thus promising for use as tools in the biological cell detection field. Additionally, we analyze the factors that influence the detection depth and provide a feasible approach for detection depth control based on adjustment of the wavelength and the angle of incidence. We believe that this approach will be useful in RI tomography applications.

  13. Experimental assessment of a 3-D plenoptic endoscopic imaging system.

    PubMed

    Le, Hanh N D; Decker, Ryan; Krieger, Axel; Kang, Jin U

    2017-01-01

    An endoscopic imaging system using a plenoptic technique to reconstruct 3-D information is demonstrated and analyzed in this Letter. The proposed setup integrates a clinical surgical endoscope with a plenoptic camera to achieve a depth accuracy error of about 1 mm and a precision error of about 2 mm, within a 25 mm × 25 mm field of view, operating at 11 frames per second.

  14. Segmentation of malignant lesions in 3D breast ultrasound using a depth-dependent model.

    PubMed

    Tan, Tao; Gubern-Mérida, Albert; Borelli, Cristina; Manniesing, Rashindra; van Zelst, Jan; Wang, Lei; Zhang, Wei; Platel, Bram; Mann, Ritse M; Karssemeijer, Nico

    2016-07-01

    Automated 3D breast ultrasound (ABUS) has been proposed as a complementary screening modality to mammography for early detection of breast cancers. To facilitate the interpretation of ABUS images, automated diagnosis and detection techniques are being developed, in which malignant lesion segmentation plays an important role. However, automated segmentation of cancer in ABUS is challenging since lesion edges might not be well defined. In this study, the authors aim at developing an automated segmentation method for malignant lesions in ABUS that is robust to ill-defined cancer edges and posterior shadowing. A segmentation method using depth-guided dynamic programming based on spiral scanning is proposed. The method automatically adjusts the aggressiveness of the segmentation according to the position of the voxels relative to the lesion center. Segmentation is more aggressive in the upper part of the lesion (close to the transducer) than at the bottom (far away from the transducer), where posterior shadowing is usually visible. The authors used the Dice similarity coefficient (Dice) for evaluation. The proposed method is compared to existing state-of-the-art approaches such as graph cut, level set, and smart opening, and to an existing dynamic programming method without depth dependence. In a dataset of 78 cancers, the proposed segmentation method achieved a mean Dice of 0.73 ± 0.14. The method outperforms an existing dynamic programming method (0.70 ± 0.16) on this task (p = 0.03), and it is also significantly (p < 0.001) better than graph cut (0.66 ± 0.18), a level-set-based approach (0.63 ± 0.20) and smart opening (0.65 ± 0.12). The proposed depth-guided dynamic programming method achieves accurate malignant breast lesion segmentation results in automated breast ultrasound.
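
    The evaluation metric used above, the Dice similarity coefficient, is simple to compute from two binary masks; a minimal sketch (the mask shapes here are illustrative, not ABUS data):

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient 2|A∩B| / (|A|+|B|) for binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Two overlapping 6x6 squares on a 10x10 grid: 36 voxels each, 16 shared.
a = np.zeros((10, 10), dtype=bool)
a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool)
b[4:10, 4:10] = True
score = dice(a, b)  # 2*16 / (36+36)
```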

  15. Effects of cortical damage on binocular depth perception.

    PubMed

    Bridge, Holly

    2016-06-19

    Stereoscopic depth perception requires considerable neural computation, including the initial correspondence of the two retinal images, comparison across the local regions of the visual field and integration with other cues to depth. The most common cause for loss of stereoscopic vision is amblyopia, in which one eye has failed to form an adequate input to the visual cortex, usually due to strabismus (deviating eye) or anisometropia. However, the significant cortical processing required to produce the percept of depth means that, even when the retinal input is intact from both eyes, brain damage or dysfunction can interfere with stereoscopic vision. In this review, I examine the evidence for impairment of binocular vision and depth perception that can result from insults to the brain, including both discrete damage, temporal lobectomy and more systemic diseases such as posterior cortical atrophy. This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Authors.

  16. Effects of cortical damage on binocular depth perception

    PubMed Central

    2016-01-01

    Stereoscopic depth perception requires considerable neural computation, including the initial correspondence of the two retinal images, comparison across the local regions of the visual field and integration with other cues to depth. The most common cause for loss of stereoscopic vision is amblyopia, in which one eye has failed to form an adequate input to the visual cortex, usually due to strabismus (deviating eye) or anisometropia. However, the significant cortical processing required to produce the percept of depth means that, even when the retinal input is intact from both eyes, brain damage or dysfunction can interfere with stereoscopic vision. In this review, I examine the evidence for impairment of binocular vision and depth perception that can result from insults to the brain, including both discrete damage, temporal lobectomy and more systemic diseases such as posterior cortical atrophy. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269597

  17. Estimation of object motion parameters from noisy images.

    PubMed

    Broida, T J; Chellappa, R

    1986-01-01

    An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one-dimensional images of a two-dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramér-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.
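
    As a toy stand-in for the iterated extended Kalman filter used in the paper, a minimal linear Kalman filter for a constant-velocity state observed through noisy positions shows the same recursive predict/update structure (the noise parameters and trajectory below are assumptions, not taken from the paper):

```python
import numpy as np

def kalman_constant_velocity(z, dt=1.0, q=1e-4, r=0.05**2):
    """Minimal linear Kalman filter for a [position, velocity] state with
    constant-velocity dynamics, observed through noisy positions z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # observe position only
    Q = q * np.eye(2)                       # process noise covariance
    x = np.array([z[0], 0.0])               # initial state guess
    P = np.eye(2)                           # initial state covariance
    for zk in z[1:]:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = (H @ P @ H.T)[0, 0] + r         # innovation variance
        K = (P @ H.T) / S                   # (2, 1) Kalman gain
        innov = zk - (H @ x)[0]
        x = x + K[:, 0] * innov
        P = (np.eye(2) - K @ H) @ P
    return x  # final [position, velocity] estimate

# Noisy observations of uniform motion at 0.5 units/frame
rng = np.random.default_rng(3)
t = np.arange(60.0)
z = 0.5 * t + 0.05 * rng.standard_normal(60)
state = kalman_constant_velocity(z)
```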

  18. Next Generation Nuclear Plant Defense-in-Depth Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edward G. Wallace; Karl N. Fleming; Edward M. Burns

    2009-12-01

    The purpose of this paper is to (1) document the definition of defense-in-depth and the approach that will be used to assure that its principles are satisfied for the NGNP project and (2) identify the specific questions proposed for preapplication discussions with the NRC. Defense-in-depth is a safety philosophy in which multiple lines of defense and conservative design and evaluation methods are applied to assure the safety of the public. The philosophy is also intended to deliver a design that is tolerant to uncertainties in knowledge of plant behavior, component reliability or operator performance that might compromise safety. This paper includes a review of the regulatory foundation for defense-in-depth, a definition of defense-in-depth that is appropriate for advanced reactor designs based on High Temperature Gas-cooled Reactor (HTGR) technology, and an explanation of how this safety philosophy is achieved in the NGNP.

  19. Nonextensive statistics and skin depth of transverse wave in collisional plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hashemzadeh, M., E-mail: hashemzade@gmail.com

    The skin depth of transverse waves in a collisional plasma is studied taking into account the nonextensive electron distribution function. Considering the kinetic theory for charged particles and using the Bhatnagar-Gross-Krook collision model, a generalized transverse dielectric permittivity is obtained. The transverse dispersion relation in different frequency ranges is investigated. Obtaining the imaginary part of the wave vector from the dispersion relation, the skin depth for these frequency ranges is also obtained. Profiles of the skin depth show that by increasing the q parameter, the penetration depth decreases. In addition, the skin depth increases by increasing the electron temperature. Finally, it is found that in the high frequency range and at high electron temperature, the penetration depth decreases by increasing the collision frequency. In contrast, by increasing the collision frequency in a highly collisional frequency range, the skin depth of the transverse wave increases.
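
    For orientation, the classical collisionless (inertial) skin depth δ = c/ω_pe, the baseline against which the nonextensive and collisional corrections above are compared, is easy to compute; this sketch uses standard SI constants and an assumed electron density:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E_CH = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31    # electron mass, kg
C = 2.99792458e8          # speed of light, m/s

def plasma_frequency(n_e):
    """Electron plasma frequency omega_pe = sqrt(n_e e^2 / (eps0 m_e)), rad/s."""
    return math.sqrt(n_e * E_CH**2 / (EPS0 * M_E))

def collisionless_skin_depth(n_e):
    """Classical inertial skin depth delta = c / omega_pe (omega << omega_pe)."""
    return C / plasma_frequency(n_e)

delta = collisionless_skin_depth(1e18)  # metres, for n_e = 1e18 m^-3
```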

  20. Robust stereo matching with trinary cross color census and triple image-based refinements

    NASA Astrophysics Data System (ADS)

    Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr

    2017-12-01

    For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching which can precisely estimate the depth map from two separated cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which can help to achieve an accurate disparity raw matching cost with low computational cost. The two-pass cost aggregation (TPCA) is formed to compute the aggregation cost; then the disparity map can be obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further enhance the accuracy performance, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
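
    A conventional census transform with a ternary (three-level) comparison conveys the flavor of the trinary census used here for the raw matching cost; this is a generic single-channel sketch, not the authors' TCC transform (the tolerance `eps` and the cross neighborhood are assumptions):

```python
import numpy as np

def ternary_census(img, eps=2):
    """Census-style transform with a ternary code on a 3x3 cross: each cross
    neighbor is coded 0/1/2 if it is darker than, similar to (within eps), or
    brighter than the center pixel. Returns an (H, W, 4) code array."""
    img = img.astype(int)
    h, w = img.shape
    codes = np.zeros((h, w, 4), dtype=np.uint8)
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    for k, (dy, dx) in enumerate(offsets):
        nb = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        codes[..., k] = np.where(nb < img - eps, 0,
                                 np.where(nb > img + eps, 2, 1))
    return codes

def census_cost(c1, c2):
    """Raw matching cost between two pixels: the number of differing digits
    (a Hamming-style distance on the ternary codes)."""
    return int((c1 != c2).sum())

# On a flat image every neighbor is "similar", so every digit is 1.
flat = np.full((5, 5), 100)
codes = ternary_census(flat)
```

    Unlike plain intensity differences, such census costs depend only on local intensity order, which is what makes them robust to radiometric differences between the two cameras.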

  1. Nanoscopy—imaging life at the nanoscale: a Nobel Prize achievement with a bright future

    NASA Astrophysics Data System (ADS)

    Blom, Hans; Bates, Mark

    2015-10-01

    A grand scientific prize was awarded last year to three pioneering scientists, for their discovery and development of molecular ‘ON-OFF’ switching which, when combined with optical imaging, can be used to see the previously invisible with light microscopy. The Royal Swedish Academy of Science announced on October 8th their decision and explained that this achievement—rooted in physics and applied in biology and medicine—was awarded with the Nobel Prize in Chemistry for controlling fluorescent molecules to create images of specimens smaller than anything previously observed with light. The story of how this noble switch in optical microscopy was achieved and how it was engineered to visualize life at the nanoscale is highlighted in this invited comment.

  2. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are the fundamental clues to many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods. But these methods are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision fields with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation uses the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image, even a noisy one. The imaging depth information is considered as an aided input to help our model make better decisions.

  3. Investigation of the depth and diameter relationship of subkilometer-diameter lunar craters

    NASA Astrophysics Data System (ADS)

    Sun, Shujuan; Yue, Zongyu; Di, Kaichang

    2018-07-01

    The depth and diameter relationship is one of the most important characteristics of craters; however, previous studies have focused mostly on large-diameter craters because of the limitations of image resolution. Recently, very high resolution images have been obtained that make it possible to expand this field of study to craters with diameters of < 1 km. Using images with resolution of up to 0.5 m, acquired by the Lunar Reconnaissance Orbiter, we investigated the depth and diameter relationship of fresh craters with subkilometer diameters. We selected craters from lunar maria and highlands, and we made precise measurements of their diameters and depths. The results show that the d/D ratio of small craters in the lunar maria and highlands varies from ∼0.2 to ∼0.1; relative to their diameters, these craters are generally shallower than larger craters. We propose that the reason for the difference is the low strength of the lunar surface material. The fitted power-law parameters of lunar mare and highland craters were found to differ, which might be explained by terrain-related differences.
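
    The power-law fit implied above (d = a·D^b) reduces to linear least squares in log-log space; a sketch with synthetic crater data (the coefficients below are illustrative, not the paper's fitted values):

```python
import numpy as np

def fit_power_law(D, d):
    """Fit d = a * D**b by linear least squares in log-log space;
    returns (a, b)."""
    b, log_a = np.polyfit(np.log(D), np.log(d), 1)
    return np.exp(log_a), b

# Synthetic craters obeying d = 0.15 * D exactly (illustrative values only)
D = np.array([100.0, 200.0, 400.0, 800.0])  # diameters, m
d = 0.15 * D
a, b = fit_power_law(D, d)
```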

  4. Depth-tunable three-dimensional display with interactive light field control

    NASA Astrophysics Data System (ADS)

    Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chenyu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    A software-defined depth-tunable three-dimensional (3D) display with interactive 3D depth control is presented. With the proposed post-processing system, the disparity of multi-view media can be freely adjusted. Benefiting from the wealth of information inherent in dense multi-view images captured with a parallel-arrangement camera array, the 3D light field is built, and the light field structure is controlled to adjust the disparity without additionally acquired depth information, since the light field structure itself contains depth information. A statistical analysis based on least squares is carried out to extract the depth information inherent in the light field structure, and the accurate depth information can be used to re-parameterize light fields for the autostereoscopic display, so that a smooth motion parallax can be guaranteed. Experimental results show that the system is convenient and effective for adjusting the 3D scene performance in the 3D display.

  5. Mapping snow depth from stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Gascoin, S.; Marti, R.; Berthier, E.; Houet, T.; de Pinel, M.; Laffly, D.

    2016-12-01

    To date, there is no definitive approach to map snow depth in mountainous areas from spaceborne sensors. Here, we examine the potential of very-high-resolution (VHR) optical stereo satellites to this purpose. Two triplets of 0.70 m resolution images were acquired by the Pléiades satellite over an open alpine catchment (14.5 km²) under snow-free and snow-covered conditions. The open-source software Ames Stereo Pipeline (ASP) was used to match the stereo pairs without ground control points to generate raw photogrammetric clouds and to convert them into high-resolution digital elevation models (DEMs) at 1, 2, and 4 m resolutions. The DEM differences (dDEMs) were computed after 3-D coregistration, including a correction of a -0.48 m vertical bias. The bias-corrected dDEM maps were compared to 451 snow-probe measurements. The results show a decimetric accuracy and precision in the Pléiades-derived snow depths. The median of the residuals is -0.16 m, with a standard deviation (SD) of 0.58 m at a pixel size of 2 m. We compared the 2 m Pléiades dDEM to a 2 m dDEM that was based on a winged unmanned aerial vehicle (UAV) photogrammetric survey that was performed on the same winter date over a portion of the catchment (3.1 km²). The UAV-derived snow depth map exhibits the same patterns as the Pléiades-derived snow map, with a median of -0.11 m and a SD of 0.62 m when compared to the snow-probe measurements. The Pléiades images benefit from a very broad radiometric range (12 bits), allowing a high correlation success rate over the snow-covered areas. This study demonstrates the value of VHR stereo satellite imagery to map snow depth in remote mountainous areas even when no field data are available. Based on this method we have initiated a multi-year survey of the peak snow depth in the Bassiès catchment.
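
    The DEM-differencing workflow (snow-on minus snow-free elevation, constant vertical-bias removal, then residual statistics against probe measurements) can be sketched as follows; only the -0.48 m bias value comes from the text, the toy depths are invented:

```python
import numpy as np

def snow_depth_from_ddem(dem_snow, dem_bare, vertical_bias=0.0):
    """Snow depth as the difference of co-registered DEMs (snow-covered minus
    snow-free elevation) after removing a constant vertical bias."""
    return (dem_snow - dem_bare) - vertical_bias

def residual_stats(estimated, probed):
    """Median and standard deviation of estimate-minus-probe residuals."""
    r = np.asarray(estimated) - np.asarray(probed)
    return float(np.median(r)), float(np.std(r))

# Toy example: error-free DEMs offset by a -0.48 m vertical bias
true_depth = np.array([1.0, 1.5, 0.8, 2.1])
dem_bare = np.zeros(4)
dem_snow = true_depth + dem_bare - 0.48
est = snow_depth_from_ddem(dem_snow, dem_bare, vertical_bias=-0.48)
med, sd = residual_stats(est, true_depth)
```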

  6. 3D endoscopic imaging using structured illumination technique (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Le, Hanh N. D.; Nguyen, Hieu; Wang, Zhaoyang; Kang, Jin U.

    2017-02-01

    Surgeons have been increasingly relying on minimally invasive surgical guidance techniques not only to reduce surgical trauma but also to achieve accurate and objective surgical risk evaluations. A typical minimally invasive surgical guidance system provides visual assistance in two-dimensional anatomy and pathology of an internal organ within a limited field of view. In this work, we propose and implement a structured illumination endoscope to provide simple, inexpensive 3D endoscopic imaging and to conduct high-resolution 3D imagery for use in surgical guidance systems. The system is calibrated and validated for quantitative depth measurement on both a calibrated target and a human subject. The system exhibits a depth of field of 20 mm, a depth resolution of 0.2 mm and a relative accuracy of 0.1%. The demonstrated setup affirms the feasibility of using the structured illumination endoscope for depth quantization and assisting medical diagnostic assessments.

  7. How to reinforce perception of depth in single two-dimensional pictures

    NASA Technical Reports Server (NTRS)

    Nagata, S.

    1989-01-01

    The physical conditions of the display of single 2-D pictures, which produce images realistically, were studied by using the characteristics of the intake of the information for visual depth perception. Depth sensitivity, which is defined as the ratio of viewing distance to depth discrimination threshold, was introduced in order to evaluate the availability of various cues for depth perception: binocular parallax, motion parallax, accommodation, convergence, size, texture, brightness, and air-perspective contrast. The effects of binocular parallax in different conditions, the depth sensitivity of which is greatest at a distance of up to about 10 m, were studied with the new versatile stereoscopic display. From these results, four conditions to reinforce the perception of depth in single pictures were proposed, and these conditions are met by the old viewing devices and the new high-definition and wide television displays.
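
    The depth-sensitivity measure defined above is just a ratio, but making it explicit clarifies the comparison between cues; a one-line sketch (the numbers are illustrative):

```python
def depth_sensitivity(viewing_distance, threshold):
    """Depth sensitivity as defined in the text: viewing distance divided by
    the depth discrimination threshold (dimensionless)."""
    return viewing_distance / threshold

# Illustrative numbers: a 1 cm discrimination threshold at a 10 m distance
s = depth_sensitivity(10.0, 0.01)
```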

  8. Identification of the critical depth-of-cut through a 2D image of the cutting region resulting from taper cutting of brittle materials

    NASA Astrophysics Data System (ADS)

    Gu, Wen; Zhu, Zhiwei; Zhu, Wu-Le; Lu, Leyao; To, Suet; Xiao, Gaobo

    2018-05-01

    An automatic identification method for obtaining the critical depth-of-cut (DoC) of brittle materials with nanometric accuracy and sub-nanometric uncertainty is proposed in this paper. With this method, a two-dimensional (2D) microscopic image of the taper cutting region is captured and further processed by image analysis to extract the margin of generated micro-cracks in the imaging plane. Meanwhile, an analytical model is formulated to describe the theoretical curve of the projected cutting points on the imaging plane with respect to a specified DoC during the whole cutting process. By adopting differential evolution algorithm-based minimization, the critical DoC can be identified by minimizing the deviation between the extracted margin and the theoretical curve. The proposed method is demonstrated through both numerical simulation and experimental analysis. Compared with conventional 2D- and 3D-microscopic-image-based methods, determination of the critical DoC in this study uses the envelope profile rather than the onset point of the generated cracks, providing a more objective approach with smaller uncertainty.
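
    The identification step, minimizing the deviation between the extracted crack margin and a theoretical projection curve with a differential evolution algorithm, can be sketched with a minimal 1-D rand/1 DE loop; the `theoretical_margin` model below is a hypothetical stand-in, not the paper's analytical model, and all parameter values are assumptions:

```python
import numpy as np

def theoretical_margin(x, doc, r=50.0):
    """Toy projected crack-margin curve for a candidate critical depth-of-cut
    `doc` (hypothetical circular-profile model): depth at lateral position x."""
    return np.clip(doc - x**2 / (2.0 * r), 0.0, None)

def deviation(doc, x_pts, margin_pts):
    """Objective: squared deviation between the extracted crack margin and
    the theoretical curve for a candidate critical DoC."""
    return float(((theoretical_margin(x_pts, doc) - margin_pts) ** 2).sum())

def differential_evolution_1d(obj, lo, hi, pop=20, gens=60, f=0.7, seed=1):
    """Minimal rand/1 differential evolution for a single parameter."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, pop)
    fx = np.array([obj(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice(pop, 3, replace=False)
            trial = float(np.clip(x[a] + f * (x[b] - x[c]), lo, hi))
            ft = obj(trial)
            if ft < fx[i]:          # greedy selection
                x[i], fx[i] = trial, ft
    return x[np.argmin(fx)]

# Synthetic "extracted margin" generated with a known critical DoC of 1.2
x_pts = np.linspace(0.0, 10.0, 50)
margin = theoretical_margin(x_pts, 1.2)
est = differential_evolution_1d(lambda v: deviation(v, x_pts, margin), 0.0, 5.0)
```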

  9. Imaging the 2017 MW 8.2 Tehuantepec intermediate-depth earthquake using Teleseismic P Waves

    NASA Astrophysics Data System (ADS)

    Brudzinski, M.; Zhang, H.; Koper, K. D.; Pankow, K. L.

    2017-12-01

    The September 8, 2017 MW 8.1 Tehuantepec, Mexico earthquakes in the middle American subduction zone is one of the largest intermediate-depth earthquake ever recorded and could provide an unprecedented opportunity for understanding the mechanism of intermediate-depth earthquakes. While the hypocenter and centroid depths for this earthquake are shallower than typically considered for intermediate depth earthquakes, the normal faulting mechanism consistent with down-dip extension and location within the subducting plate align with properties of intermediate depth earthquakes. Back-projection of high-frequency teleseismic P-waves from two regional arrays for this earthquake shows unilateral rupture on a southeast-northwest striking fault that extends north of the Tehuantepec fracture zone (TFZ), with an average horizontal rupture speed of 3.0 km/s and total duration of 60 s. Guided by these back-projection results, 47 globally distributed low-frequency P-waves were inverted for a finite-fault model (FFM) of slip for both nodal planes. The FFM shows a slip deficit in proximity to the extension of the TFZ, as well as the minor rupture beyond the TFZ (confirmed by the synthetic tests), which indicates that the TFZ acted as a barrier for this earthquake. Analysis of waveform misfit leads to the preference of a subvertical plane as the causative fault. The FFM shows that the majority of the rupture is above the focal depth and consists of two large slip patches: the first one is near the hypocenter ( 55 km depth) and the second larger one near 30 km depth. The distribution of the two patches spatially agrees with seismicity that defines the upper and lower zones of a double Benioff zone (DBZ). It appears there was single fault rupture across the two depth zones of the DBZ. This is uncommon because a stark aseismic zone is typically observed between the upper and lower zones of the DBZ. This finding indicates that the mechanism for intraslab earthquakes must allow for

  10. A 30-MHz piezo-composite ultrasound array for medical imaging applications.

    PubMed

    Ritter, Timothy A; Shrout, Thomas R; Tutwiler, Rick; Shung, K Kirk

    2002-02-01

    Ultrasound imaging at frequencies above 20 MHz is capable of achieving improved resolution in clinical applications requiring limited penetration depth. High-frequency arrays that allow real-time imaging are desired for these applications but are not yet currently available. In this work, a method for fabricating fine-scale 2-2 composites suitable for 30-MHz linear array transducers was successfully demonstrated. High thickness coupling, low mechanical loss, and moderate electrical loss were achieved. This piezo-composite was incorporated into a 30-MHz array that included acoustic matching, an elevation focusing lens, electrical matching, and an air-filled kerf between elements. Bandwidths near 60%, 15-dB insertion loss, and crosstalk less than -30 dB were measured. Images of both a phantom and an ex vivo human eye were acquired using a synthetic aperture reconstruction method, resulting in measured lateral and axial resolutions of approximately 100 μm.
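
    The ~100 μm resolution figures are consistent with the acoustic wavelength at 30 MHz in soft tissue (assuming the conventional sound speed of 1540 m/s); a one-line check:

```python
def wavelength_um(freq_hz, c=1540.0):
    """Acoustic wavelength in soft tissue (assuming c = 1540 m/s),
    in micrometres."""
    return c / freq_hz * 1e6

lam = wavelength_um(30e6)  # about 51 um at 30 MHz
```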

  11. In-vivo gingival sulcus imaging using full-range, complex-conjugate-free, endoscopic spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.

    2012-01-01

    Frequent monitoring of the gingival sulcus will provide valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a 3D high-resolution, high-speed imaging modality, is able to provide information on pocket depth, gum contour, gum texture and gum recession simultaneously. A handheld forward-viewing miniature resonant fiber-scanning probe was developed for in-vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating the data processing using a graphics processing unit. Preliminary results showed real-time in-vivo imaging at 33 fps with an imaging range of 2 mm (lateral) by 3 mm (depth). The gap between the tooth and gum was clearly visualized. Further quantitative analysis of the gingival sulcus will be performed on the acquired images.

  12. Three-photon tissue imaging using moxifloxacin.

    PubMed

    Lee, Seunghun; Lee, Jun Ho; Wang, Taejun; Jang, Won Hyuk; Yoon, Yeoreum; Kim, Bumju; Jun, Yong Woong; Kim, Myoung Joon; Kim, Ki Hean

    2018-06-20

    Moxifloxacin is an antibiotic used in clinics and has recently been used as a clinically compatible cell-labeling agent for two-photon (2P) imaging. Although 2P imaging with moxifloxacin labeling visualized cells inside tissues using enhanced fluorescence, the imaging depth was quite limited because of the relatively short excitation wavelength (<800 nm) used. In this study, the feasibility of three-photon (3P) excitation of moxifloxacin using a longer excitation wavelength and moxifloxacin-based 3P imaging were tested to increase the imaging depth. Moxifloxacin fluorescence via 3P excitation was detected at a >1000 nm excitation wavelength. After obtaining the excitation and emission spectra of moxifloxacin, moxifloxacin-based 3P imaging was applied to ex vivo mouse bladder and ex vivo mouse small intestine tissues and compared with moxifloxacin-based 2P imaging by switching the excitation wavelength of a Ti:sapphire oscillator between near 1030 and 780 nm. Both moxifloxacin-based 2P and 3P imaging visualized cellular structures in the tissues via moxifloxacin labeling, but the image contrast was better with 3P imaging than with 2P imaging at the same imaging depths. The imaging speed and imaging depth of moxifloxacin-based 3P imaging using a Ti:sapphire oscillator were limited by insufficient excitation power. Therefore, we constructed a new system for moxifloxacin-based 3P imaging using a high-energy Yb fiber laser at 1030 nm and used it for in vivo deep tissue imaging of a mouse small intestine. Moxifloxacin-based 3P imaging could be useful for clinical applications with enhanced imaging depth.

  13. Cardiac image modelling: Breadth and depth in heart disease.

    PubMed

    Suinesiaputra, Avan; McCulloch, Andrew D; Nash, Martyn P; Pontre, Beau; Young, Alistair A

    2016-10-01

    With the advent of large-scale imaging studies and big health data, and the corresponding growth in analytics, machine learning and computational image analysis methods, there are now exciting opportunities for deepening our understanding of the mechanisms and characteristics of heart disease. Two emerging fields are computational analysis of cardiac remodelling (shape and motion changes due to disease) and computational analysis of physiology and mechanics to estimate biophysical properties from non-invasive imaging. Many large cohort studies now underway around the world have been specifically designed based on non-invasive imaging technologies in order to gain new information about the development of heart disease from asymptomatic to clinical manifestations. These give an unprecedented breadth to the quantification of population variation and disease development. Also, for the individual patient, it is now possible to determine biophysical properties of myocardial tissue in health and disease by interpreting detailed imaging data using computational modelling. For these population and patient-specific computational modelling methods to develop further, we need open benchmarks for algorithm comparison and validation, open sharing of data and algorithms, and demonstration of clinical efficacy in patient management and care. The combination of population and patient-specific modelling will give new insights into the mechanisms of cardiac disease, in particular the development of heart failure, congenital heart disease, myocardial infarction, contractile dysfunction and diastolic dysfunction. Copyright © 2016. Published by Elsevier B.V.

  14. A Cadaveric Analysis of the Optimal Radiographic Angle for Evaluating Trochlear Depth.

    PubMed

    Weinberg, Douglas Stanley; Gilmore, Allison; Guraya, Sahejmeet S; Wang, David M; Liu, Raymond W

    2017-02-01

    Disorders of the patellofemoral joint are common. Diagnosis and management often involve the use of tangential imaging of the patella and trochlear groove, with the sunrise projection being the most common. However, imaging protocols vary between institutions, and limited data exist to determine which radiographic projections provide optimal visualization of the trochlear groove at its deepest point. Plain radiographs of 48 cadaveric femora were taken at various beam-femur angles and the maximum trochlear depth was measured; a tilt-board apparatus was used to elevate the femur in 5-degree increments between 40 and 75 degrees. A corollary experiment was undertaken to investigate beam-femur angles osteologically: digital representations of each bone were created with a MicroScribe digitizer, and trochlear depth was measured on all specimens at beam-femur angles from 0 to 75 degrees. The results of the radiographic and digitizer experiments showed that the maximum trochlear groove depth occurred at a beam-femur angle of 50 degrees. These results suggest that the optimal beam-femur angle for visualizing maximum trochlear depth is 50 degrees. This is significantly lower than the beam-femur angle of 90 degrees typically used in the sunrise projection. Clinicians evaluating trochlear depth on sunrise projections may be underestimating maximal depth and evaluating a nonarticulating portion of the femur. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  15. Achieving high-value cardiac imaging: challenges and opportunities.

    PubMed

    Wiener, David H

    2014-01-01

    Cardiac imaging is under intense scrutiny as a contributor to health care costs, with multiple initiatives under way to reduce and eliminate inappropriate testing. Appropriate use criteria are valuable guides to selecting imaging studies but until recently have focused on the test rather than the patient. Patient-centered means are needed to define the true value of imaging for patients in specific clinical situations. This article provides a definition of high-value cardiac imaging. A paradigm to judge the efficacy of echocardiography in the absence of randomized controlled trials is presented. Candidate clinical scenarios are proposed in which echocardiography constitutes high-value imaging, as well as stratagems to increase the likelihood that high-value cardiac imaging takes place in those circumstances. Copyright © 2014 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.

  16. Rapid prototyping of biomimetic vascular phantoms for hyperspectral reflectance imaging.

    PubMed

    Ghassemi, Pejhman; Wang, Jianting; Melchiorri, Anthony J; Ramella-Roman, Jessica C; Mathews, Scott A; Coburn, James C; Sorg, Brian S; Chen, Yu; Pfefer, T Joshua

    2015-01-01

    The emerging technique of rapid prototyping with three-dimensional (3-D) printers provides a simple yet revolutionary method for fabricating objects with arbitrary geometry. The use of 3-D printing for generating morphologically biomimetic tissue phantoms based on medical images represents a potentially major advance over existing phantom approaches. Toward the goal of image-defined phantoms, we converted a segmented fundus image of the human retina into a matrix format and edited it to achieve a geometry suitable for printing. Phantoms with vessel-simulating channels were then printed using a photoreactive resin providing biologically relevant turbidity, as determined by spectrophotometry. The morphology of printed vessels was validated by x-ray microcomputed tomography. Channels were filled with hemoglobin (Hb) solutions undergoing desaturation, and phantoms were imaged with a near-infrared hyperspectral reflectance imaging system. Additionally, a phantom was printed incorporating two disjoint vascular networks at different depths, each filled with Hb solutions at different saturation levels. Light propagation effects noted during these measurements—including the influence of vessel density and depth on Hb concentration and saturation estimates, and the effect of wavelength on vessel visualization depth—were evaluated. Overall, our findings indicated that 3-D-printed biomimetic phantoms hold significant potential as realistic and practical tools for elucidating light–tissue interactions and characterizing biophotonic system performance.

  17. Photoacoustic imaging with planoconcave optical microresonator sensors: feasibility studies based on phantom imaging

    NASA Astrophysics Data System (ADS)

    Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.

    2017-03-01

    The planar Fabry-Pérot (FP) sensor provides high quality photoacoustic (PA) images, but beam walk-off limits sensitivity and thus penetration depth to ≈1 cm. Planoconcave microresonator sensors eliminate beam walk-off, enabling sensitivity to be increased by an order of magnitude whilst retaining the highly favourable frequency response and directional characteristics of the FP sensor. The first tomographic PA images obtained in a tissue-realistic phantom using the new sensors are described. These show that the microresonator sensors provide near-identical image quality to the planar FP sensor but with significantly greater penetration depth (e.g. 2-3 cm) due to their higher sensitivity. This offers the prospect of whole-body small animal imaging and clinical imaging at depths previously unattainable using the planar FP sensor.

  18. Development of collision avoidance system for useful UAV applications using image sensors with laser transmitter

    NASA Astrophysics Data System (ADS)

    Cheong, M. K.; Bahiki, M. R.; Azrad, S.

    2016-10-01

    The main goal of this study is to demonstrate an approach to collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was used to project a high-contrast tracking spot for depth calculation by standard triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and a control algorithm was designed to adjust the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an Optitrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithms. In the results, the QUAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance was better with obstacles having dull surfaces than with shiny surfaces. The minimum achievable collision avoidance distance was 0.4 m. The approach is suitable for short-range collision avoidance applications.
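
    The depth measurement described above follows from standard stereo triangulation: for a rectified pinhole pair, depth is focal length times baseline divided by disparity. A minimal sketch with illustrative camera parameters (not values from the study):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    focal_px     -- focal length in pixels (illustrative value below)
    baseline_m   -- distance between the two camera centres in metres
    disparity_px -- horizontal shift of the tracked laser spot between images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# A laser spot shifted by 40 px between cameras 0.1 m apart,
# imaged with an 800 px focal length, lies 2.0 m away:
z = depth_from_disparity(focal_px=800.0, baseline_m=0.1, disparity_px=40.0)
print(z)  # 2.0
```

    Because depth is inversely proportional to disparity, range resolution degrades with distance, which is consistent with the short-range regime reported above.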

  19. A New Paradigm for Matching UAV- and Aerial Images

    NASA Astrophysics Data System (ADS)

    Koch, T.; Zhuo, X.; Reinartz, P.; Fraundorfer, F.

    2016-06-01

    This paper investigates the performance of SIFT-based image matching under large differences in image scale and rotation, as is usually the case when trying to match images captured from UAVs and airplanes. This task represents an essential step for image registration and 3D reconstruction applications. Various real-world examples presented in this paper show that SIFT, as well as A-SIFT, performs poorly or even fails in this matching scenario. Even if the scale difference in the images is known and eliminated beforehand, the matching performance suffers from too few feature point detections, ambiguous feature point orientations and the rejection of many correct matches when the ratio-test is applied afterwards. Therefore, a new feature matching method is provided that overcomes these problems and offers thousands of matches through a novel feature point detection strategy, a one-to-many matching scheme, and the substitution of the ratio-test by geometric constraints that yield geometrically correct matches at repetitive image regions. This method is designed for matching almost nadir-directed images with low scene depth, as is typical in UAV and aerial image matching scenarios. We tested the proposed method on different real-world image pairs. While standard SIFT failed for most of the datasets, plenty of geometrically correct matches could be found using our approach. Comparing the estimated fundamental matrices and homographies with ground-truth solutions, mean errors of a few pixels can be achieved.
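
    The contrast between the classic ratio-test and a one-to-many scheme can be sketched as follows. This is an illustrative reimplementation with brute-force nearest-neighbour search, not the authors' code; the geometric verification step that later resolves the one-to-many candidates is omitted:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Classic one-to-one SIFT matching: accept a match only if the nearest
    neighbour is clearly better than the second nearest (Lowe's ratio test).
    At repetitive structures the two distances are similar, so matches die."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

def one_to_many_matches(desc_a, desc_b, k=4):
    """One-to-many scheme: keep the k nearest candidates per feature and
    leave disambiguation to later geometric constraints (e.g. epipolar checks)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        for j in np.argsort(dists)[:k]:
            matches.append((i, int(j)))
    return matches
```

    The one-to-many variant trades precision for recall up front, which is why the geometric filtering stage described above becomes essential.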

  20. Diffuse optical microscopy for quantification of depth-dependent epithelial backscattering in the cervix

    NASA Astrophysics Data System (ADS)

    Bodenschatz, Nico; Lam, Sylvia; Carraro, Anita; Korbelik, Jagoda; Miller, Dianne M.; McAlpine, Jessica N.; Lee, Marette; Kienle, Alwin; MacAulay, Calum

    2016-06-01

    A fiber optic imaging approach using structured illumination is presented for quantification of almost pure epithelial backscattering. We employ multiple spatially modulated projection patterns and camera-based reflectance capture to image depth-dependent epithelial scattering. The potential diagnostic value of our approach is investigated on cervical ex vivo tissue specimens. Our study indicates a strong backscattering increase in the upper part of the cervical epithelium caused by dysplastic microstructural changes. Quantification of relative depth-dependent backscattering is confirmed as a potentially useful diagnostic feature for detection of precancerous lesions in cervical squamous epithelium.

  1. Experimental assessment of a 3-D plenoptic endoscopic imaging system

    PubMed Central

    Le, Hanh N. D.; Decker, Ryan; Krieger, Axel; Kang, Jin U.

    2017-01-01

    An endoscopic imaging system using a plenoptic technique to reconstruct 3-D information is demonstrated and analyzed in this Letter. The proposed setup integrates a clinical surgical endoscope with a plenoptic camera to achieve a depth accuracy error of about 1 mm and a precision error of about 2 mm, within a 25 mm × 25 mm field of view, operating at 11 frames per second. PMID:29449863

  2. On the effect of velocity gradients on the depth of correlation in μPIV

    NASA Astrophysics Data System (ADS)

    Mustin, B.; Stoeber, B.

    2016-03-01

    The present work revisits the effect of velocity gradients on the depth of the measurement volume (depth of correlation) in microscopic particle image velocimetry (μPIV). General relations between the μPIV weighting functions and the local correlation function are derived from the original definition of the weighting functions. These relations are used to investigate under which circumstances the weighting functions are related to the curvature of the local correlation function. Furthermore, this work proposes a modified definition of the depth of correlation that leads to more realistic results than previous definitions for the case when flow gradients are taken into account. Dimensionless parameters suitable to describe the effect of velocity gradients on μPIV cross correlation are derived and visual interpretations of these parameters are proposed. We then investigate the effect of the dimensionless parameters on the weighting functions and the depth of correlation for different flow fields with spatially constant flow gradients and with spatially varying gradients. Finally this work demonstrates that the results and dimensionless parameters are not strictly bound to a certain model for particle image intensity distributions but are also meaningful when other models for particle images are used.

  3. Seismic imaging of the Waltham Canyon fault, California: comparison of ray‐theoretical and Fresnel volume prestack depth migration

    USGS Publications Warehouse

    Bauer, Klaus; Ryberg, Trond; Fuis, Gary S.; Lüth, Stefan

    2013-01-01

    Near-vertical faults can be imaged using reflected refractions identified in controlled-source seismic data. Often these phases are observed on a few neighboring shot or receiver gathers, resulting in a low-fold data set. Imaging can be carried out with Kirchhoff prestack depth migration, in which migration noise is suppressed by constructive stacking of large amounts of multifold data. Fresnel volume migration can be used for low-fold data without severe migration noise, as the smearing along isochrones is limited to the first Fresnel zone around the reflection point. We developed a modified Fresnel volume migration technique to enhance imaging of steep faults and to suppress noise and undesired coherent phases. The modifications include target-oriented filters to separate reflected refractions from steep-dipping faults and reflections with hyperbolic moveout. Undesired phases like multiple reflections, mode conversions, direct P and S waves, and surface waves are suppressed by these filters. As an alternative approach, we developed a new prestack line-drawing migration method, which can be considered a proxy for an infinite-frequency approximation of the Fresnel volume migration. The line-drawing migration does not consider waveform information but requires significantly shorter computational time. Target-oriented filters were extended by dip filters in the line-drawing migration method. The migration methods were tested with synthetic data and applied to real data from the Waltham Canyon fault, California. The two techniques are best applied in combination, to design filters and to generate complementary images of steep faults.

  4. Polarization Lidar for Shallow Water Supraglacial Lake Depth Measurement

    NASA Astrophysics Data System (ADS)

    Mitchell, S.; Adler, J.; Thayer, J. P.; Hayman, M.

    2010-12-01

    A bathymetric, polarization lidar system transmitting at 532 nanometers and using a single photomultiplier tube is developed for applications of shallow water depth measurement, in particular those often found in supraglacial lakes of the ablation zone on the Greenland Ice Sheet. The technique exploits polarization attributes of the probed water body to isolate surface and floor returns, enabling constant fraction detection schemes to determine depth. The minimum resolvable water depth is no longer dictated by the system's laser or detector pulse width, and the technique achieves better than an order-of-magnitude improvement over current water depth determination techniques. In laboratory tests, a Nd:YAG microchip laser coupled with polarization optics, a photomultiplier tube, a constant fraction discriminator and a time-to-digital converter are used to target various water depths, using ice as the floor to simulate a supraglacial lake. Measurements of 1 centimeter water depths with an uncertainty of ±3 millimeters are demonstrated using the technique. This novel technique enables new approaches to designing laser bathymetry systems for shallow depth determination from remote platforms while not compromising deep water depth measurement, and will support comprehensive hydrodynamic studies of supraglacial lakes. Additionally, the compact size and low weight (<15 kg) of the field system currently in development presents opportunities for use in small unmanned aircraft systems (UAS) for large areal surveys of the ablation zone.
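
    Once the polarization scheme has isolated the surface and floor returns, the depth computation itself is just two-way travel time in water. A sketch under the usual assumption of a refractive index of 1.33 for water (values illustrative, not from the paper):

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33      # refractive index of water (typical value, not from the paper)

def water_depth(delta_t_s: float) -> float:
    """Depth from the delay between surface and floor returns:
    the pulse crosses the water column twice at speed c / n."""
    return C * delta_t_s / (2.0 * N_WATER)

# A 1 cm water column delays the floor return by roughly 89 picoseconds,
# which is why picosecond-scale timing (constant fraction discrimination)
# resolves depths well below the laser pulse width:
dt = 2.0 * N_WATER * 0.01 / C
depth = water_depth(dt)   # 0.01 m
```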

  5. Depth.

    PubMed

    Koenderink, Jan J; van Doorn, Andrea J; Wagemans, Johan

    2011-01-01

    Depth is the feeling of remoteness, or separateness, that accompanies awareness in human modalities like vision and audition. In specific cases depths can be graded on an ordinal scale, or even measured quantitatively on an interval scale. In the case of pictorial vision this is complicated by the fact that human observers often appear to apply mental transformations that involve depths in distinct visual directions. This implies that a comparison of empirically determined depths between observers involves pictorial space as an integral entity, whereas comparing pictorial depths as such is meaningless. We describe the formal structure of pictorial space purely in the phenomenological domain, without taking recourse to the theories of optics which properly apply to physical space, a distinct ontological domain. We introduce a number of general ways to design and implement methods of geodesy in pictorial space, and discuss some basic problems associated with such measurements. We deal mainly with conceptual issues.

  6. Plumbing Coastal Depths in Titan Kraken Mare

    NASA Image and Video Library

    2014-11-10

    Radar data from NASA's Cassini spacecraft reveal the depth of liquid methane/ethane seas on Saturn's moon Titan. Cassini's Titan flyby on August 21, 2014, included a segment designed to collect altimetry (or height) data, using the spacecraft's radar instrument, along a 120-mile (200-kilometer) shore-to-shore track on Kraken Mare, Titan's largest hydrocarbon sea. For a 25-mile (40-kilometer) stretch of this data, along the sea's eastern shoreline, Cassini's radar beam bounced off the sea bottom and back to the spacecraft, revealing the sea's depth in that area. Observations in this region, near the mouth of a large, flooded river valley, showed depths ranging from 66 to 115 feet (20 to 35 meters). Plots of three radar echoes are shown at left, indicating depths of 89 feet (27 meters), 108 feet (33 meters) and 98 feet (30 meters), respectively. The altimetry echoes show the characteristic double-peaked returns of a bottom-reflection. The tallest peak represents the sea surface; the shorter of the pair represents the sea bottom. The distance between the two peaks is a measure of the liquid's depth. The Synthetic Aperture Radar (SAR) image at right shows successive altimetry observations as black circles. The three blue circles indicate the locations of the three altimetry echoes shown in the plots at left. http://photojournal.jpl.nasa.gov/catalog/PIA19046

  7. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from 3D seismic reflection survey over area of Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline of 165 feet using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters on images. Stratigraphic information and nearby well tracks added to images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth restricted for proprietary reasons. Data collection and processing funded by Agua Caliente. Original data remains property of Agua Caliente.

  8. Fluorescence Imaging Topography Scanning System for intraoperative multimodal imaging

    PubMed Central

    Quang, Tri T.; Kim, Hye-Yeong; Bao, Forrest Sheng; Papay, Francis A.; Edwards, W. Barry; Liu, Yang

    2017-01-01

    Fluorescence imaging is a powerful technique with diverse applications in intraoperative settings. Visualization of three dimensional (3D) structures and depth assessment of lesions, however, are oftentimes limited in planar fluorescence imaging systems. In this study, a novel Fluorescence Imaging Topography Scanning (FITS) system has been developed, which offers color reflectance imaging, fluorescence imaging and surface topography scanning capabilities. The system is compact and portable, and thus suitable for deployment in the operating room without disturbing the surgical flow. For system performance, parameters including near infrared fluorescence detection limit, contrast transfer functions and topography depth resolution were characterized. The developed system was tested in chicken tissues ex vivo with simulated tumors for intraoperative imaging. We subsequently conducted in vivo multimodal imaging of sentinel lymph nodes in mice using FITS and PET/CT. The PET/CT/optical multimodal images were co-registered and conveniently presented to users to guide surgeries. Our results show that the developed system can facilitate multimodal intraoperative imaging. PMID:28437441

  9. Potential of coded excitation in medical ultrasound imaging.

    PubMed

    Misaridis, T X; Gammelmark, K; Jørgensen, C H; Lindberg, N; Thomsen, A H; Pedersen, M H; Jensen, J A

    2000-03-01

    Improvement in signal-to-noise ratio (SNR) and/or penetration depth can be achieved in medical ultrasound by using long coded waveforms, in a similar manner to radar and sonar. However, the time-bandwidth product (TB) improvement, and thereby the SNR improvement, is considerably lower in medical ultrasound, due to the lower available bandwidth. There is still room for about 20 dB of improvement in the SNR, which would yield a penetration depth of up to 20 cm at 5 MHz [M. O'Donnell, IEEE Trans. Ultrason. Ferroelectr. Freq. Contr., 39(3) (1992) 341]. The limited TB additionally yields unacceptably high range sidelobes. However, the frequency weighting from the ultrasonic transducer's bandwidth, although suboptimal, can be beneficial in sidelobe reduction. The purpose of this study is an experimental evaluation of the above considerations in a coded excitation ultrasound system. A coded excitation system based on a modified commercial scanner is presented. A predistorted FM signal is proposed in order to keep the resulting range sidelobes at acceptably low levels. The effect of the transducer is taken into account in the design of the compression filter. Intensity levels have been considered and simulations on the expected improvement in SNR are also presented. Images of a wire phantom and clinical images have been taken with the coded system. The images show a significant improvement in penetration depth while preserving both axial resolution and contrast.
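
    The core mechanism, compression of a long FM code with a matched filter, can be sketched as follows. This uses a plain linear chirp for illustration; the paper's actual excitation is a predistorted FM signal shaped to the transducer band, and all parameters below are illustrative:

```python
import numpy as np

fs = 100e6                       # sample rate, Hz (illustrative)
T = 20e-6                        # code duration: far longer than a conventional pulse
f0, f1 = 3e6, 7e6                # linear sweep around a 5 MHz centre frequency
t = np.arange(int(T * fs)) / fs
code = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))

# Matched filter: convolve the received echo with the time-reversed code.
# Compression concentrates the energy of the long code into a short peak;
# the SNR gain scales with the time-bandwidth product TB = T * (f1 - f0) = 80.
compressed = np.convolve(code, code[::-1], mode="same")
peak_index = int(np.argmax(np.abs(compressed)))
```

    The range sidelobes visible around the compression peak are what the paper's predistortion and transducer-aware filter design are meant to suppress.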

  10. Depth-resolved monitoring of analytes diffusion in ocular tissues

    NASA Astrophysics Data System (ADS)

    Larin, Kirill V.; Ghosn, Mohamad G.; Tuchin, Valery V.

    2007-02-01

    Optical coherence tomography (OCT) is a noninvasive imaging technique with high in-depth resolution. We employed the OCT technique for monitoring and quantification of analyte and drug diffusion in the cornea and sclera of rabbit eyes in vitro. Different analytes and drugs, such as metronidazole, dexamethasone, ciprofloxacin, mannitol, and glucose solution, were studied and their permeability coefficients were calculated. Drug diffusion monitoring was performed both as a function of time and as a function of depth. The obtained results suggest that the OCT technique might be used for analyte diffusion studies in connective and epithelial tissues.

  11. Ultrahigh speed 1050nm swept source / Fourier domain OCT retinal and anterior segment imaging at 100,000 to 400,000 axial scans per second

    PubMed Central

    Potsaid, Benjamin; Baumann, Bernhard; Huang, David; Barry, Scott; Cable, Alex E.; Schuman, Joel S.; Duker, Jay S.; Fujimoto, James G.

    2011-01-01

    We demonstrate ultrahigh speed swept source/Fourier domain ophthalmic OCT imaging using a short cavity swept laser at 100,000–400,000 axial scan rates. Several design configurations illustrate tradeoffs in imaging speed, sensitivity, axial resolution, and imaging depth. Variable rate A/D optical clocking is used to acquire linear-in-k OCT fringe data at a 100 kHz axial scan rate with 5.3 µm axial resolution in tissue. Fixed rate sampling at 1 GSPS achieves a 7.5 mm imaging range in tissue with 6.0 µm axial resolution at a 100 kHz axial scan rate. A 200 kHz axial scan rate with 5.3 µm axial resolution over a 4 mm imaging range is achieved by buffering the laser sweep. Dual spot OCT using two parallel interferometers achieves a 400 kHz axial scan rate, almost 2X faster than previous 1050 nm ophthalmic results and 20X faster than current commercial instruments. Superior sensitivity roll-off performance is shown. Imaging is demonstrated in the human retina and anterior segment. Wide field 12×12 mm data sets include the macula and optic nerve head. Small area, high density imaging shows individual cone photoreceptors. The 7.5 mm imaging range configuration can show the cornea, iris, and anterior lens in a single image. These improvements in imaging speed and depth range provide important advantages for ophthalmic imaging. The ability to rapidly acquire 3D-OCT data over a wide field of view promises to simplify examination protocols. The ability to image fine structures can provide detailed information on focal pathologies. The large imaging range and improved image penetration at 1050 nm wavelengths promises to improve performance for instrumentation which images both the retina and anterior eye. These advantages suggest that swept source OCT at 1050 nm wavelengths will play an important role in future ophthalmic instrumentation. PMID:20940894

  12. Alchemical hermeneutics of the Vesica Piscis: Symbol of depth psychology

    NASA Astrophysics Data System (ADS)

    O'Dell, Linda Kay

    The purpose of this study was to develop an understanding of the Vesica Piscis as the symbolic frame for depth psychology and the therapeutic relationship. The method of inquiry was hermeneutics and alchemical hermeneutics, informed theoretically by depth psychology. A theoretical description of the nature of the Vesica Piscis as a dynamic template and symbol for depth psychology and the therapeutic relationship resulted. Gathering the components of the therapeutic relationship into the shape of the Vesica Piscis, gave opportunity to explore what might be happening while treatment is taking place: somatically, psychologically, and emotionally. An investigation into the study of Soul placed the work of psychology within the central, innermost sacred space between—known symbolically as the Vesica Piscis. Imbued with a connectedness and relational welcoming, this symbol images the Greek goddess Hekate (Soul), as mediatrix between mind and matter. Psyche (soul), namesake of "psychology," continues her journey of finding meaning making, restitution, and solace in the therapeutic space as imaged by the Vesica Piscis. Her journey, moving through the generations, becomes the journey of the therapeutic process—one that finds resolution in relationship. Psyche is sought out in the macrocosmic archetypal realm of pure energy, the prima material that forms and coalesces both in response and likewise, creates a response through symbols, images, and imagination. The field was explored from the depth psychological perspective as: the unconscious, consciousness, and archetypal, and in physics as: the quantum field, morphic resonance, and the holographic field. Gaining an understanding of the underlying qualities of the field placed the symbol in its embedded context, allowing for further definition as to how the symbol potentially was either an extension of the field, or served as a constellating factor. Depth psychology, as a scientific discipline, is in need of a symbol that

  13. Scene Semantic Segmentation from Indoor Rgb-D Images Using Encode-Decoder Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Li, T.; Pan, L.; Kang, Z.

    2017-09-01

    With increasing attention to the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open area, which restricts indoor applications. The depth information can help to distinguish regions that are difficult to segment out of the RGB images because of similar color or texture in indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an Encode-Decoder Fully Convolutional Network for RGB-D image classification. We use Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and modality-specific features of the RGB and depth images in the network, enhancing classification performance automatically. To explore better ways of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the other calculates MMD over the whole batch of features. Based on the classification result, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method achieves good performance on indoor RGB-D image semantic segmentation.
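
    The distance measure used above can be illustrated with the plain single-kernel squared MMD; MK-MMD extends this by combining several kernel bandwidths. A minimal numpy sketch with an illustrative Gaussian kernel (biased V-statistic estimate, shapes and bandwidth chosen for the example, not taken from the paper):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs of rows."""
    sq = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD:
    E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

# Feature batches drawn from the same distribution give a small MMD;
# a mean-shifted batch gives a clearly larger value.
rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(64, 8)), rng.normal(size=(64, 8)))
shifted = mmd2(rng.normal(size=(64, 8)), rng.normal(3.0, size=(64, 8)))
```

    Minimizing such a term between RGB-branch and depth-branch features pulls the shared structure of the two modalities together while leaving room for modality-specific channels.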

  14. Test Image by Mars Descent Imager

    NASA Image and Video Library

    2010-07-19

    Ken Edgett, deputy principal investigator for NASA Mars Descent Imager, holds a ruler used as a depth-of-field test target. The instrument took this image inside the Malin Space Science Systems clean room in San Diego, CA, during calibration testing.

  15. Temporal consistent depth map upscaling for 3DTV

    NASA Astrophysics Data System (ADS)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibilities of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time­ of-Flight camera (ToF), can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.

  16. Forty-five degree backscattering-mode nonlinear absorption imaging in turbid media.

    PubMed

    Cui, Liping; Knox, Wayne H

    2010-01-01

    Two-color nonlinear absorption imaging has been previously demonstrated with endogenous contrast of hemoglobin and melanin in turbid media using transmission-mode detection and a dual-laser technology approach. For clinical applications, it would be generally preferable to use backscattering mode detection and a simpler single-laser technology. We demonstrate that imaging in backscattering mode in turbid media using nonlinear absorption can be obtained with as little as 1-mW average power per beam with a single laser source. Images have been achieved with a detector receiving backscattered light at a 45-deg angle relative to the incoming beams' direction. We obtain images of capillary tube phantoms with resolution as high as 20 microm and penetration depth up to 0.9 mm for a 300-microm tube at SNR approximately 1 in calibrated scattering solutions. Simulation results of the backscattering and detection process using nonimaging optics are demonstrated. A Monte Carlo-based method shows that the nonlinear signal drops exponentially as the depth increases, which agrees well with our experimental results. Simulation also shows that with our current detection method, only 2% of the signal is typically collected with a 5-mm-radius detector.

  17. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced high-performance, feature-rich image server system that enables online access to full-resolution floating-point (as well as other bit-depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real-time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.

  18. Imaging Spectrometry of Inland and Coastal Waters: State of the Art, Achievements and Perspectives

    NASA Astrophysics Data System (ADS)

    Giardino, C.; Brando, V. E.; Gege, P.; Pinnel, N.; Hochberg, E.; Knaeps, E.; Reusen, I.; Doerffer, R.; Bresciani, M.; Braga, F.; Foerster, S.; Champollion, N.; Dekker, A.

    2018-06-01

    Imaging spectrometry of non-oceanic aquatic ecosystems has been in development since the late 1980s when the first airborne hyperspectral sensors were deployed over lakes. Most water quality management applications were, however, developed using multispectral mid-spatial resolution satellites or coarse spatial resolution ocean colour satellites till now. This situation is about to change with a suite of upcoming imaging spectrometers being deployed from experimental satellites or from the International Space Station. We review the science of developing applications for inland and coastal aquatic ecosystems that often are a mixture of optically shallow and optically deep waters, with gradients of clear to turbid and oligotrophic to hypertrophic productive waters and with varying bottom visibility with and without macrophytes, macro-algae, benthic micro-algae or corals. As the spaceborne, airborne and in situ optical sensors become increasingly available and appropriate for aquatic ecosystem detection, monitoring and assessment, the science-based applications will need to be further developed to an operational level. The Earth Observation-derived information products will range from more accurate estimates of turbidity and transparency measures, chlorophyll, suspended matter and coloured dissolved organic matter concentration, to more sophisticated products such as particle size distributions, phytoplankton functional types or distinguishing sources of suspended and coloured dissolved matter, estimating water depth and mapping types of heterogeneous substrates. We provide an overview of past science, current state of the art and future directions so that early career scientists as well as aquatic ecosystem managers and associated industry groups may be prepared for the imminent deluge of imaging spectrometry data.

  19. Ultrasound assessed thickness of burn scars in association with laser Doppler imaging determined depth of burns in paediatric patients.

    PubMed

    Wang, Xue-Qing; Mill, Julie; Kravchuk, Olena; Kimble, Roy M

    2010-12-01

This study describes the ultrasound assessment of burn scars in paediatric patients and the association of scar thickness with laser Doppler imaging (LDI) determined burn depth. A total of 60 ultrasound scar assessments were conducted on 33 scars from 21 paediatric burn patients at 3, 6 and 9 months after burn. The mean peak scar thickness was 0.39±0.032 cm, with the thickest at 6 months (0.40±0.036 cm). There were 17 scald burn scars (0.34±0.045 cm), 4 contact burn scars (0.61±0.092 cm), and 10 flame burn scars (0.42±0.058 cm). Each group of scars followed a normal distribution. Twenty-three scars had their original burns successfully scanned by LDI, with burn depths represented by different colours according to blood perfusion units (PU): dark blue <125, light blue 125-250, and green 250-440 PU. Scar thickness differed significantly between the predominant colours of the burns, with the thinnest scars for green coloured burns and the thickest for dark blue coloured burns. Within light blue burns, grafted burns healed with significantly thinner scars than non-grafted burns. This study indicates that LDI can be used for predicting the risk of hypertrophic scarring and for guiding burn care. To our knowledge, this is the first study to correlate the thickness of burn scars measured by ultrasound scan with burn depth determined by LDI. Copyright © 2010 Elsevier Ltd and ISBI. All rights reserved.

  20. Modeling depth from motion parallax with the motion/pursuit ratio

    PubMed Central

    Nawrot, Mark; Ratzlaff, Michael; Leonard, Zachary; Stroyan, Keith

    2014-01-01

    The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed. PMID:25339926
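The approximate motion/pursuit law underlying this record can be sketched in a few lines; this is an illustrative small-angle form (depth ≈ viewing distance × retinal motion rate / pursuit rate), not the authors' empirical model, and the function name is an assumption:

```python
def motion_pursuit_depth(retinal_deg_s, pursuit_deg_s, viewing_distance_m):
    """Relative depth from the motion/pursuit ratio.

    Small-angle approximation d ~ f * (dtheta/dt) / (dalpha/dt), where
    dtheta/dt is retinal image motion and dalpha/dt is pursuit velocity.
    """
    return viewing_distance_m * (retinal_deg_s / pursuit_deg_s)
```

For example, a retinal motion of 1 deg/s against a 4 deg/s pursuit at a 2 m viewing distance gives a relative depth of 0.5 m; the paper's "empirical motion/pursuit ratio" then rescales this geometric prediction to fit the foreshortened perceived depth.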

  1. First cosmic-ray images of bone and soft tissue

    NASA Astrophysics Data System (ADS)

    Mrdja, Dusan; Bikit, Istvan; Bikit, Kristina; Slivka, Jaroslav; Hansman, Jan; Oláh, László; Varga, Dezső

    2016-11-01

More than 120 years after Roentgen's first X-ray image, the first cosmic-ray muon images of bone and soft tissue have been created. The pictures shown in the present paper represent the first radiographies of structures of organic origin ever recorded by cosmic rays. This result is achieved by a uniquely designed, simple and versatile cosmic-ray muon-imaging system, which consists of four plastic scintillation detectors and a muon tracker. This system does not use scattering or absorption of muons to deduce image information, but instead takes advantage of the production rate of secondaries in the target materials, detected in coincidence with muons. The 2D image slices of a cow femur bone are obtained at several depths along the bone axis, together with the corresponding 3D image. Real organic soft tissue, polymethyl methacrylate and water, never before imaged by any other muon imaging technique, are also registered in the images. Thus, similar imaging systems, placed around structures of organic or inorganic origin, can be used for tomographic imaging using only the omnipresent cosmic radiation.

  2. Design and testing of an annular array for very-high-frequency imaging

    NASA Astrophysics Data System (ADS)

    Ketterling, Jeffrey A.; Ramachandran, Sarayu; Lizzi, Frederic L.; Aristizábal, Orlando; Turnbull, Daniel H.

    2004-05-01

Very-high-frequency ultrasound (VHFU) transducer technology is currently experiencing a great deal of interest. Traditionally, researchers have used single-element transducers, which achieve exceptional lateral image resolution but only over a very limited depth of field. A 5-ring focused annular array, a transducer geometry that permits an increased depth of field via electronic focusing, has been constructed. The transducer is fabricated with a PVDF membrane and a copper-clad Kapton film bearing the annular array pattern. The PVDF is bonded to the Kapton film and pressed into a spherically curved shape. The back side of the transducer is then filled with epoxy. One side of the PVDF is metallized with gold, forming the ground plane of the transducer. The array elements are accessed electrically via copper traces formed on the Kapton film. The annular array consists of 5 equal-area rings with an outer diameter of 1 cm and a radius of curvature of 9 mm. A wire reflector target was used to test the imaging capability of the transducer by acquiring B-scan data for each transmit/receive pair. A synthetic aperture approach was then used to reconstruct the image and demonstrate the enhanced depth of field capabilities of the transducer.
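The synthetic-aperture refocusing idea can be sketched as a delay-and-sum over per-ring A-lines. This is a simplified stand-in (flat-annulus ring radii, on-axis focal point, integer-sample shifts) rather than the authors' reconstruction code, and all names are assumptions:

```python
import numpy as np

def annular_delays(ring_radii_m, depth_m, c=1540.0):
    # Two-way geometric delay of each ring relative to an on-axis point at
    # the chosen depth; outer rings have longer paths, hence larger delays
    path = np.sqrt(np.asarray(ring_radii_m) ** 2 + depth_m ** 2)
    return 2.0 * (path - depth_m) / c

def synthetic_aperture_line(rf, fs, delays):
    # Delay-and-sum of per-ring A-lines (rf: n_rings x n_samples) to refocus
    # at one depth; repeating for many depths extends the depth of field
    n = rf.shape[1]
    out = np.zeros(n)
    for line, d in zip(rf, np.round(delays * fs).astype(int)):
        out[: n - d] += line[d:]
    return out
```

In practice the delays would be recomputed per image depth and applied with sub-sample interpolation; the integer shift here keeps the sketch short.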

  3. Dual-element transducer with phase-inversion for wide depth of field in high-frequency ultrasound imaging.

    PubMed

    Jeong, Jong Seob

    2014-08-05

In high frequency ultrasound imaging (HFUI), the quality of focusing is closely related to the length of the depth of field (DOF). In this paper, a phase-inversion technique implemented by a dual-element transducer is proposed to enlarge the DOF. The performance of the proposed method was numerically demonstrated using the ultrasound simulation program Field-II. The simulated dual-element transducer was composed of a disc-type element and an annular-type element, and its aperture was concavely shaped to have a confocal point at 6 mm. The area of each element was identical in order to provide the same intensity at the focal point. The outer diameters of the inner and the outer elements were 2.1 mm and 3 mm, respectively. The center frequency of each element was 40 MHz and the f-number (focal depth/aperture size) was two. When two input signals with 0° and 180° phases were applied to the inner and outer elements simultaneously, a multi-focal zone was generated in the axial direction. The total -6 dB DOF, i.e., the sum of the two -6 dB DOFs in the near and far field lobes, was 40% longer than that of a conventional single element transducer. The signal to noise ratio (SNR) was increased by about two times, especially in the far field. Point and cyst phantom simulations were conducted, and their results were consistent with those of the beam pattern simulation. Thus, the proposed scheme may be a potential method to improve the DOF and SNR in HFUI.

  4. Outer Retinal and Choroidal Evaluation in Multiple Evanescent White Dot Syndrome (MEWDS): An Enhanced Depth Imaging Optical Coherence Tomography Study.

    PubMed

    Fiore, Tito; Iaccheri, Barbara; Cerquaglia, Alessio; Lupidi, Marco; Torroni, Giovanni; Fruttini, Daniela; Cagini, Carlo

    2018-01-01

    To perform an analysis of optical coherence tomography (OCT) abnormalities in patients with MEWDS, during the acute and recovery stages, using enhanced depth imaging-OCT (EDI-OCT). A retrospective case series of five patients with MEWDS was included. EDI-OCT imaging was evaluated to detect retinal and choroidal features. In the acute phase, focal impairment of the ellipsoid zone and external limiting membrane, hyperreflective dots in the inner choroid, and full-thickness increase of the choroidal profile were observed in the affected eye; disappearance of these findings and restoration of the choroidal thickness (p = 0.046) was appreciated in the recovery phase. No OCT abnormalities were assessed in the unaffected eye. EDI-OCT revealed transient outer retinal layer changes and inner choroidal hyperreflective dots. A transient increased thickness of the whole choroid was also identified. This might confirm a short-lasting inflammatory involvement of the whole choroidal tissue in the active phase of MEWDS.

  5. Angle-domain common imaging gather extraction via Kirchhoff prestack depth migration based on a traveltime table in transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Shaoyong; Gu, Hanming; Tang, Yongjie; Bingkai, Han; Wang, Huazhong; Liu, Dingjin

    2018-04-01

Angle-domain common image-point gathers (ADCIGs) can alleviate the limitations of offset-domain common image-point gathers, and have been widely used for velocity inversion and amplitude variation with angle (AVA) analysis. We propose an effective algorithm for generating ADCIGs in transversely isotropic (TI) media based on the gradient of traveltime in Kirchhoff pre-stack depth migration (KPSDM), since the dynamic programming method for computing traveltime in TI media does not suffer from shadow zones or traveltime interpolation errors. Meanwhile, we present a specific implementation strategy for ADCIG extraction via KPSDM. Three major steps are included in the presented strategy: (1) traveltime computation using a dynamic programming approach in TI media; (2) slowness vector calculation from the gradient of the previously computed traveltime table; (3) construction of illumination vectors and subsurface angles in the migration process. Numerical examples demonstrate the effectiveness of our approach, which shows its potential for subsequent tomographic velocity inversion and AVA analysis.
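Steps (2) and (3) above can be illustrated with a minimal NumPy sketch: the slowness vector is the spatial gradient of the traveltime table, and the subsurface angle comes from the source- and receiver-side slowness directions. The function names and the 2D `tt[z, x]` grid layout are assumptions for illustration:

```python
import numpy as np

def slowness_from_traveltime(tt, dz, dx):
    # Step (2): slowness vector = spatial gradient of the traveltime table;
    # tt is a 2D array indexed as tt[z, x] with grid spacings dz, dx
    pz, px = np.gradient(tt, dz, dx)
    return px, pz

def opening_angle_deg(p_src, p_rec):
    # Step (3): the opening angle between the source-side and receiver-side
    # slowness vectors at an image point defines the ADCIG angle bin
    p_src = p_src / np.linalg.norm(p_src)
    p_rec = p_rec / np.linalg.norm(p_rec)
    return float(np.degrees(np.arccos(np.clip(np.dot(p_src, p_rec), -1.0, 1.0))))
```

For a horizontally propagating plane wave in a constant-velocity medium, the recovered slowness magnitude equals the reciprocal of the velocity, which is a quick sanity check on the gradient computation.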

  6. Integral imaging with multiple image planes using a uniaxial crystal plate.

    PubMed

    Park, Jae-Hyeung; Jung, Sungyong; Choi, Heejin; Lee, Byoungho

    2003-08-11

Integral imaging has been attracting much attention recently for its several advantages such as full parallax, continuous view-points, and real-time full-color operation. However, the thickness of the displayed three-dimensional image is limited to a relatively small value due to the degradation of the image resolution. In this paper, we propose a method to provide observers with enhanced perception of depth without severe resolution degradation by using the birefringence of a uniaxial crystal plate. The proposed integral imaging system can display images integrated around three central depth planes by dynamically altering the polarization and controlling both the elemental images and the dynamic slit array mask accordingly. We explain the principle of the proposed method and verify it experimentally.

  7. Real-time calibration-free C-scan images of the eye fundus using Master Slave swept source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Fred; Garway-Heath, David F.; Rajendram, Ranjan; Keane, Pearce; Podoleanu, Adrian G.

    2015-03-01

Recently, we introduced a novel Optical Coherence Tomography (OCT) method, termed Master Slave OCT (MS-OCT), specialized for delivering en-face images. This method uses principles of spectral domain interferometry in two stages. MS-OCT operates like a time domain OCT, selecting only signals from a chosen depth while scanning the laser beam across the eye. Time domain OCT allows real time production of an en-face image, although relatively slowly. As a major advance, the Master Slave method allows collection of signals from any number of depths, as required by the user. However, the data processing required to generate images at multiple depths simultaneously is not achievable with commodity multicore processors alone. We compare here the major improvement in processing and display brought about by using graphics cards. We demonstrate images obtained with a swept source at 100 kHz (which determines an acquisition time for a frame of 200×200 pixels of Ta = 1.6 s). By the end of the acquired frame being scanned, using our computing capacity, 4 simultaneous en-face images could be created in T = 0.8 s. We demonstrate that by using graphics cards, 32 en-face images can be displayed in Td = 0.3 s. Other faster swept source engines can be used with no difference in terms of Td. With 32 images (or more), volumes can be created for 3D display using en-face images, as opposed to the current technology where volumes are created using cross-section OCT images.
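The Master/Slave principle of comparing each measured channelled spectrum against a mask stored for a chosen depth can be sketched as below. The plain inner-product comparison is a deliberate simplification of the actual MS-OCT correlation, and all names are assumptions:

```python
import numpy as np

def ms_enface_value(spectrum, mask):
    # "Slave" stage: compare the measured channelled spectrum against a mask
    # recorded for the chosen depth during the "master" calibration stage.
    # A plain inner product stands in for the full MS-OCT comparison.
    return float(np.abs(np.vdot(mask, spectrum)))

def enface_image(spectra, mask):
    # spectra: (ny, nx, n_k) raster of channelled spectra, one per beam
    # position; one mask per depth yields one en-face image per depth
    ny, nx, _ = spectra.shape
    img = np.zeros((ny, nx))
    for y in range(ny):
        for x in range(nx):
            img[y, x] = ms_enface_value(spectra[y, x], mask)
    return img
```

Because each depth uses its own stored mask and the comparisons are independent, the per-depth loops map naturally onto GPU threads, which is the parallelism the record attributes to graphics cards.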

  8. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on the real object is proposed. The general procedure of the display system is similar with conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays the 3D image generated from the real object in 360-degree viewing zone. In order to display real object in 360-degree viewing zone, multiple depth camera have been utilized to acquire the depth information around the object. Then, the 3D point cloud representations of the real object are reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined as single synthetic 3D point cloud model, and the elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and it shows that the proposed 360-degree integral-floating display can be an excellent way to display real object in the 360-degree viewing zone.
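The merging of per-camera clouds into one model can be sketched as rigid transforms into a common frame followed by concatenation. This is a simplified stand-in for the paper's special registration method, and it assumes the camera poses are already known:

```python
import numpy as np

def apply_rigid(points, R, t):
    # points: (N, 3) array; R: (3, 3) rotation; t: (3,) translation
    return points @ R.T + t

def merge_clouds(clouds, poses):
    # Transform each depth camera's partial cloud into a common world frame
    # and concatenate into a single synthetic point cloud model
    return np.vstack([apply_rigid(c, R, t) for c, (R, t) in zip(clouds, poses)])
```

A real pipeline would estimate the poses (e.g. by iterative alignment of overlapping regions) and de-duplicate overlapping points before generating the elemental image arrays.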

  9. Evaluation of using a depth sensor to estimate the weight of finishing pigs

    USDA-ARS?s Scientific Manuscript database

    A method of continuously monitoring weight would aid producers by ensuring all pigs are healthy (gaining weight) and increasing precision of marketing. Therefore, the objective was to develop an electronic method of obtaining pig weights through depth images. Seven hundred and seventy-two images and...

  10. End-to-end deep neural network for optical inversion in quantitative photoacoustic imaging.

    PubMed

    Cai, Chuangjian; Deng, Kexin; Ma, Cheng; Luo, Jianwen

    2018-06-15

An end-to-end deep neural network, ResU-net, is developed for quantitative photoacoustic imaging. A residual learning framework is used to facilitate optimization and to gain better accuracy from considerably increased network depth. The contracting and expanding paths enable ResU-net to extract comprehensive context information from multispectral initial pressure images and, subsequently, to infer a quantitative image of chromophore concentration or oxygen saturation (sO2). According to our numerical experiments, the estimations of sO2 and indocyanine green concentration are accurate and robust against variations in both optical property and object geometry. An extremely short reconstruction time of 22 ms is achieved.
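The residual learning idea that lets such a network train at large depth can be illustrated with a toy block: the layers learn only the residual F(x), which is added back to the input via a skip connection. This is a generic NumPy sketch, not the paper's ResU-net architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # y = relu(x + F(x)) with F(x) = w2 @ relu(w1 @ x): the skip connection
    # means the weight layers only have to learn the residual F, which eases
    # optimization as network depth grows
    return relu(x + w2 @ relu(w1 @ x))
```

A useful property follows directly: with all-zero weights the block reduces to the identity (for non-negative inputs), so stacking many blocks cannot degrade the representation the way plain deep stacks can.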

  11. Second harmonic generation imaging of skeletal muscle tissue and myofibrils

    NASA Astrophysics Data System (ADS)

    Campagnola, Paul J.; Mohler, William H.; Plotnikov, Sergey; Millard, Andrew C.

    2006-02-01

Second Harmonic Generation (SHG) imaging microscopy is used to examine the morphology and structural properties of intact muscle tissue. Using biochemical and optical analysis, we characterize the molecular structure underlying SHG from the complex muscle sarcomere. We find that SHG from isolated myofibrils is abolished by extraction of myosin, but is unaffected by removal or addition of actin filaments. We thus determined that the SHG emission arises from domains of the sarcomere containing thick filaments. By fitting the SHG polarization anisotropy to theoretical response curves, we find an orientation for the harmonophore that corresponds well to the pitch angle of the myosin rod α-helix with respect to the thick filament axis. Taken together, these data indicate that myosin rod domains are the key structures giving rise to SHG from striated muscle. Using SHG imaging microscopy, we have also examined the effect of optical clearing with glycerol to achieve greater penetration into specimens of skeletal muscle tissue. We find that treatment with 50% glycerol results in a 2.5-fold increase in achievable SHG imaging depth. Fast Fourier Transform (FFT) analysis shows quantitatively that the periodicity of the sarcomere structure is unaltered by the clearing process. Also, comparison of the SHG angular polarization dependence shows no change in the supramolecular organization of acto-myosin complexes. We suggest that the primary mechanism of optical clearing in muscle with glycerol treatment results from the reduction of cytoplasmic protein concentration and concomitant decrease in the secondary inner filter effect on the SHG signal. The pronounced lack of dependence of imaging depth on glycerol concentration indicates that refractive index matching plays only a minor role in the optical clearing of muscle.
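The FFT periodicity check mentioned above can be sketched as follows: take an intensity profile along the fibre axis and read the sarcomere repeat off the strongest non-DC spectral peak. The helper name and profile-extraction details are assumptions, not the authors' analysis code:

```python
import numpy as np

def dominant_period(profile, pixel_size_um):
    # FFT of a mean-subtracted intensity profile along the fibre axis;
    # the strongest non-DC peak gives the dominant repeat distance (um),
    # e.g. the sarcomere period, which should be unchanged by clearing
    spec = np.abs(np.fft.rfft(profile - np.mean(profile)))
    freqs = np.fft.rfftfreq(len(profile), d=pixel_size_um)
    k = np.argmax(spec[1:]) + 1  # skip the DC bin
    return 1.0 / freqs[k]
```

Comparing this value before and after glycerol treatment is the quantitative version of the claim that clearing leaves the periodicity unaltered.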

  12. Multimodal adaptive optics for depth-enhanced high-resolution ophthalmic imaging

    NASA Astrophysics Data System (ADS)

    Hammer, Daniel X.; Mujat, Mircea; Iftimia, Nicusor V.; Lue, Niyom; Ferguson, R. Daniel

    2010-02-01

    We developed a multimodal adaptive optics (AO) retinal imager for diagnosis of retinal diseases, including glaucoma, diabetic retinopathy (DR), age-related macular degeneration (AMD), and retinitis pigmentosa (RP). The development represents the first ever high performance AO system constructed that combines AO-corrected scanning laser ophthalmoscopy (SLO) and swept source Fourier domain optical coherence tomography (SSOCT) imaging modes in a single compact clinical prototype platform. The SSOCT channel operates at a wavelength of 1 μm for increased penetration and visualization of the choriocapillaris and choroid, sites of major disease activity for DR and wet AMD. The system is designed to operate on a broad clinical population with a dual deformable mirror (DM) configuration that allows simultaneous low- and high-order aberration correction. The system also includes a wide field line scanning ophthalmoscope (LSO) for initial screening, target identification, and global orientation; an integrated retinal tracker (RT) to stabilize the SLO, OCT, and LSO imaging fields in the presence of rotational eye motion; and a high-resolution LCD-based fixation target for presentation to the subject of stimuli and other visual cues. The system was tested in a limited number of human subjects without retinal disease for performance optimization and validation. The system was able to resolve and quantify cone photoreceptors across the macula to within ~0.5 deg (~100-150 μm) of the fovea, image and delineate ten retinal layers, and penetrate to resolve targets deep into the choroid. In addition to instrument hardware development, analysis algorithms were developed for efficient information extraction from clinical imaging sessions, with functionality including automated image registration, photoreceptor counting, strip and montage stitching, and segmentation. The system provides clinicians and researchers with high-resolution, high performance adaptive optics imaging to help

  13. Visual saliency detection based on in-depth analysis of sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Shen, Siqiu; Ning, Chen

    2018-03-01

Visual saliency detection has been receiving great attention in recent years since it can facilitate a wide range of applications in computer vision. A variety of saliency models have been proposed based on different assumptions, among which saliency detection via sparse representation is one of the more recently arisen approaches. However, most existing sparse representation-based saliency detection methods utilize only partial characteristics of sparse representation and lack in-depth analysis. Thus, they may have limited detection performance. Motivated by this, this paper proposes an algorithm for detecting visual saliency based on in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned with randomly sampled image patches by means of inner product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on in-depth analysis of sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating the sparse reconstruction errors, the most salient atoms, which tend to come from the most salient region, are screened out and removed from the discriminative dictionaries. Finally, an effective method is exploited for saliency map generation with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with some state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.
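The sparse reconstruction error at the heart of such methods can be illustrated with a minimal orthogonal matching pursuit. This generic sketch is not the authors' dictionary learning pipeline; it only shows how a patch's residual norm under a sparse code is computed:

```python
import numpy as np

def omp(D, x, n_nonzero):
    # Minimal orthogonal matching pursuit: greedily pick the dictionary atom
    # most correlated with the residual, then refit coefficients by least squares
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return support, coef

def sparse_error(D, x, n_nonzero):
    # Saliency cue: patches the dictionary reconstructs poorly (large
    # residual norm) are candidates for the salient patch set
    support, coef = omp(D, x, n_nonzero)
    return float(np.linalg.norm(x - D[:, support] @ coef))
```

A patch lying in the span of a few atoms yields near-zero error, while a patch unlike anything in the dictionary yields a large error; thresholding or ranking these errors is one way to separate salient from nonsalient patches.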

  14. MO-G-17A-01: Innovative High-Performance PET Imaging System for Preclinical Imaging and Translational Researches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, X; Lou, K; Rice University, Houston, TX

Purpose: To develop a practical and compact preclinical PET with innovative technologies for the substantially improved imaging performance required for advanced imaging applications. Methods: Several key components of the detector, readout electronics and data acquisition have been developed and evaluated to achieve leapfrogged imaging performance over a prototype animal PET we had developed. The new detector module consists of an 8×8 array of 1.5×1.5×30 mm³ LYSO scintillators with each end coupled to a latest 4×4 array of 3×3 mm² Silicon Photomultipliers (with ∼0.2 mm insensitive gap between pixels) through a 2.0 mm thick transparent light spreader. The scintillator surface and reflector/coupling were designed and fabricated to reserve an air-gap to achieve higher depth-of-interaction (DOI) resolution and other detector performance. Front-end readout electronics with an upgraded 16-ch ASIC were newly developed and tested, as was the compact and high density FPGA based data acquisition and transfer system targeting a 10M/s coincidence counting rate with low power consumption. The new detector module's energy, timing and DOI resolutions with the data acquisition system were evaluated. An initial Na-22 point source image was acquired with 2 rotating detectors to assess the system imaging capability. Results: There are no insensitive gaps at the detector edge, and thus the module is capable of tiling into a large-scale detector panel. All 64 crystals inside the detector were clearly separated in a flood-source image. Measured energy, timing, and DOI resolutions are around 17%, 2.7 ns and 1.96 mm (mean value). A point source image was acquired successfully without detector/electronics calibration and data correction. Conclusion: The newly developed advanced detector and readout electronics will enable achieving the targeted scalable and compact PET system in stationary configuration with >15% sensitivity, ∼1.3 mm uniform imaging resolution, and fast acquisition counting

  15. Estimating terrestrial snow depth with the Topex-Poseidon altimeter and radiometer

    USGS Publications Warehouse

    Papa, F.; Legresy, B.; Mognard, N.M.; Josberger, E.G.; Remy, F.

    2002-01-01

Active and passive microwave measurements obtained by the dual-frequency Topex-Poseidon radar altimeter from the Northern Great Plains of the United States are used to develop a snow pack radar backscatter model. The model results are compared with daily time series of surface snow observations made by the U.S. National Weather Service. The model results show that Ku-band provides more accurate snow depth determinations than does C-band. Comparing the snow depth determinations derived from the Topex-Poseidon nadir-looking passive microwave radiometers with the oblique-looking Special Sensor Microwave/Imager (SSM/I) passive microwave observations and surface observations shows that both instruments accurately portray the temporal characteristics of the snow depth time series. While both retrievals consistently underestimate the actual snow depths, the Topex-Poseidon results are more accurate.

  16. Rapid prototyping of biomimetic vascular phantoms for hyperspectral reflectance imaging

    PubMed Central

    Ghassemi, Pejhman; Wang, Jianting; Melchiorri, Anthony J.; Ramella-Roman, Jessica C.; Mathews, Scott A.; Coburn, James C.; Sorg, Brian S.; Chen, Yu; Joshua Pfefer, T.

    2015-01-01

The emerging technique of rapid prototyping with three-dimensional (3-D) printers provides a simple yet revolutionary method for fabricating objects with arbitrary geometry. The use of 3-D printing for generating morphologically biomimetic tissue phantoms based on medical images represents a potentially major advance over existing phantom approaches. Toward the goal of image-defined phantoms, we converted a segmented fundus image of the human retina into a matrix format and edited it to achieve a geometry suitable for printing. Phantoms with vessel-simulating channels were then printed using a photoreactive resin providing biologically relevant turbidity, as determined by spectrophotometry. The morphology of printed vessels was validated by x-ray microcomputed tomography. Channels were filled with hemoglobin (Hb) solutions undergoing desaturation, and phantoms were imaged with a near-infrared hyperspectral reflectance imaging system. Additionally, a phantom was printed incorporating two disjoint vascular networks at different depths, each filled with Hb solutions at different saturation levels. Light propagation effects noted during these measurements—including the influence of vessel density and depth on Hb concentration and saturation estimates, and the effect of wavelength on vessel visualization depth—were evaluated. Overall, our findings indicated that 3-D-printed biomimetic phantoms hold significant potential as realistic and practical tools for elucidating light–tissue interactions and characterizing biophotonic system performance. PMID:26662064

  17. Temporal presentation protocols in stereoscopic displays: Flicker visibility, perceived motion, and perceived depth

    PubMed Central

    Hoffman, David M.; Karasev, Vasiliy I.; Banks, Martin S.

    2011-01-01

Most stereoscopic displays rely on field-sequential presentation to present different images to the left and right eyes. With sequential presentation, images are delivered to each eye in alternation with dark intervals, and each eye receives its images in counterphase with the other eye. This type of presentation can exacerbate image artifacts, including flicker and the appearance of unsmooth motion. To address the flicker problem, some methods repeat images multiple times before updating to new ones. This greatly reduces flicker visibility, but makes motion appear less smooth. This paper describes an investigation of how different presentation methods affect the visibility of flicker, motion artifacts, and distortions in perceived depth. It begins with an examination of these methods in the spatio-temporal frequency domain. From this examination, it derives a series of predictions for how presentation rate, object speed, simultaneity of image delivery to the two eyes, and other properties ought to affect flicker, motion artifacts, and depth distortions, and reports a series of experiments that tested these predictions. The results confirmed essentially all of the predictions. The paper concludes with a summary and a series of recommendations for the best approach to minimize these undesirable effects. PMID:21572544

  18. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, aiming to produce S3D content based on color-plus-depth signals, a general framework for depth mapping to optimize visual comfort for S3D display is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
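The global depth-range remapping step can be sketched as a simple linear map; the paper's actual solution adds zero-disparity-plane adjustment and a two-stage global/local optimization, so this is only the first building block, with assumed names:

```python
def remap_depth(depth, src_range, comfort_range):
    # Linearly remap a depth value from the source range of the
    # color-plus-depth signal into a range judged comfortable for S3D viewing
    s0, s1 = src_range
    c0, c1 = comfort_range
    return c0 + (depth - s0) * (c1 - c0) / (s1 - s0)
```

Applying this per pixel compresses (or shifts) the scene's disparity budget into the display's comfort zone before the view synthesis step generates the S3D output.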

  19. Gain Modulation as a Mechanism for Coding Depth from Motion Parallax in Macaque Area MT

    PubMed Central

    Kim, HyungGoo R.; Angelaki, Dora E.

    2017-01-01

    Observer translation produces differential image motion between objects that are located at different distances from the observer's point of fixation [motion parallax (MP)]. However, MP can be ambiguous with respect to depth sign (near vs far), and this ambiguity can be resolved by combining retinal image motion with signals regarding eye movement relative to the scene. We have previously demonstrated that both extra-retinal and visual signals related to smooth eye movements can modulate the responses of neurons in area MT of macaque monkeys, and that these modulations generate neural selectivity for depth sign. However, the neural mechanisms that govern this selectivity have remained unclear. In this study, we analyze responses of MT neurons as a function of both retinal velocity and direction of eye movement, and we show that smooth eye movements modulate MT responses in a systematic, temporally precise, and directionally specific manner to generate depth-sign selectivity. We demonstrate that depth-sign selectivity is primarily generated by multiplicative modulations of the response gain of MT neurons. Through simulations, we further demonstrate that depth can be estimated reasonably well by a linear decoding of a population of MT neurons with response gains that depend on eye velocity. Together, our findings provide the first mechanistic description of how visual cortical neurons signal depth from MP. SIGNIFICANCE STATEMENT Motion parallax is a monocular cue to depth that commonly arises during observer translation. To compute from motion parallax whether an object appears nearer or farther than the point of fixation requires combining retinal image motion with signals related to eye rotation, but the neurobiological mechanisms have remained unclear. This study provides the first mechanistic account of how this interaction takes place in the responses of cortical neurons. 
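    The paper's linear population decoding can be illustrated with a toy model; the Gaussian tuning widths, gain slopes, and the proportionality between depth and retinal velocity below are hypothetical stand-ins, not fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def mt_response(retinal_vel, eye_vel, pref_vel, gain_slope):
    """Toy MT unit: Gaussian tuning to retinal velocity whose response GAIN
    is scaled multiplicatively by eye velocity (illustrative parameters,
    not fitted to the recorded neurons)."""
    tuning = np.exp(-0.5 * ((retinal_vel - pref_vel) / 2.0) ** 2)
    return (1.0 + gain_slope * eye_vel) * tuning

# Population with assorted velocity preferences and gain slopes
pref = rng.uniform(-8.0, 8.0, 50)
slope = rng.uniform(-0.5, 0.5, 50)

# Toy geometry: during translation with eye velocity 1, retinal velocity is
# proportional to depth, with near/far objects giving opposite signs
eye_vel = 1.0
depths = rng.uniform(-3.0, 3.0, 200)
X = np.array([[mt_response(d * eye_vel, eye_vel, p, s)
               for p, s in zip(pref, slope)] for d in depths])

# Linear decoding of depth from the gain-modulated population responses
w, *_ = np.linalg.lstsq(X, depths, rcond=None)
corr = np.corrcoef(X @ w, depths)[0, 1]
print(corr)  # high correlation: depth is linearly decodable
```

    The gain term (1 + slope × eye velocity) is the multiplicative modulation the paper identifies; with it in place, a simple least-squares readout recovers depth from the population.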

  20. New generation of human machine interfaces for controlling UAV through depth-based gesture recognition

    NASA Astrophysics Data System (ADS)

    Mantecón, Tomás; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso

    2014-06-01

    New forms of natural interaction between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than alternatives that rely on complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique adapted to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control: a tradeoff has been established between the use of natural hand signs and the minimization of inter-sign interference. Promising results have been achieved on a depth-based database of hand gestures developed specifically to validate the proposed system.
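    The descriptor stage can be sketched as follows; this single-frame 8-neighbour LBP and the 256-bin histogram are illustrative simplifications of the paper's spatio-temporal depth descriptor, and the resulting feature vector would then be fed to an SVM classifier (e.g. scikit-learn's SVC):

```python
import numpy as np

def depth_lbp(img):
    """Basic 8-neighbour Local Binary Pattern over one depth frame.
    The paper uses a spatio-temporal variant over depth video; this
    single-frame version only illustrates the encoding idea."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_descriptor(img):
    """Normalized LBP histogram: the kind of fixed-length feature vector
    that an SVM classifier consumes."""
    h, _ = np.histogram(depth_lbp(img), bins=256, range=(0, 256))
    return h / h.sum()

frame = np.random.default_rng(0).uniform(0.5, 2.0, (64, 64))  # synthetic depth frame (metres)
desc = lbp_descriptor(frame)
print(desc.shape)
```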

  1. Nanoscale β-nuclear magnetic resonance depth imaging of topological insulators

    PubMed Central

    Koumoulis, Dimitrios; Morris, Gerald D.; He, Liang; Kou, Xufeng; King, Danny; Wang, Dong; Hossain, Masrur D.; Wang, Kang L.; Fiete, Gregory A.; Kanatzidis, Mercouri G.; Bouchard, Louis-S.

    2015-01-01

    Considerable evidence suggests that variations in the properties of topological insulators (TIs) at the nanoscale and at interfaces can strongly affect the physics of topological materials. Therefore, a detailed understanding of surface states and interface coupling is crucial to the search for and applications of new topological phases of matter. Currently, no methods can provide depth profiling near surfaces or at interfaces of topologically inequivalent materials. Such a method could advance the study of interactions. Herein, we present a noninvasive depth-profiling technique based on β-detected NMR (β-NMR) spectroscopy of radioactive 8Li+ ions that can provide “one-dimensional imaging” in films of fixed thickness and generates nanoscale views of the electronic wavefunctions and magnetic order at topological surfaces and interfaces. By mapping the 8Li nuclear resonance near the surface and 10-nm deep into the bulk of pure and Cr-doped bismuth antimony telluride films, we provide signatures related to the TI properties and their topological nontrivial characteristics that affect the electron–nuclear hyperfine field, the metallic shift, and magnetic order. These nanoscale variations in β-NMR parameters reflect the unconventional properties of the topological materials under study, and understanding the role of heterogeneities is expected to lead to the discovery of novel phenomena involving quantum materials. PMID:26124141

  2. Ion penetration depth in the plant cell wall

    NASA Astrophysics Data System (ADS)

    Yu, L. D.; Vilaithong, T.; Phanchaisri, B.; Apavatjrut, P.; Anuntalabhochai, S.; Evans, P.; Brown, I. G.

    2003-05-01

    This study investigates the depth of ion penetration in plant cell wall material. Based on the biological structure of the plant cell wall, a physical model is proposed which assumes that the wall is composed of randomly orientated layers of cylindrical microfibrils made from cellulose molecules of C6H12O6. With this model, we have determined numerical factors for ion implantation in the plant cell wall to correct values calculated from conventional ion implantation programs. Using these correction factors, it is possible to apply common ion implantation programs to estimate the ion penetration depth in the cell for bioengineering purposes. These estimates are compared with measured data from experiments and good agreement is achieved.

  3. Convolutional Sparse Coding for RGB+NIR Imaging.

    PubMed

    Hu, Xuemei; Heide, Felix; Dai, Qionghai; Wetzstein, Gordon

    2018-04-01

    Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising; in computer vision, such as facial recognition and tracking; and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures and on simulated benchmark results, which demonstrate that this work achieves unprecedented reconstruction quality.

  4. Depth perception based 3D holograms enabled with polarization-independent metasurfaces.

    PubMed

    Deng, Juan; Li, Zile; Zheng, Guoxing; Tao, Jin; Dai, Qi; Deng, Liangui; He, Ping'an; Deng, Qiling; Mao, Qingzhou

    2018-04-30

    Metasurfaces consisting of dielectric nanobrick arrays with different dimensions along the long and short axes can be used to generate different phase delays, pointing to a new way to manipulate an incident beam in the two orthogonal directions separately. Here we demonstrate the concept of depth perception based three-dimensional (3D) holograms with polarization-independent metasurfaces. Four-step dielectric metasurface-based fan-out optical elements and holograms operating at 658 nm were designed and simulated. Two different holographic images with high fidelity were generated at the same plane in the far field for different polarization states, and one can observe the 3D effect of the target objects with polarized glasses. With the advantages of ultracompactness, flexibility and replicability, polarization-independent metasurfaces open up depth perception based stereoscopic imaging in a holographic way.

  5. Uncertainty in cloud optical depth estimates made from satellite radiance measurements

    NASA Technical Reports Server (NTRS)

    Pincus, Robert; Szczodrak, Malgorzata; Gu, Jiujing; Austin, Philip

    1995-01-01

    The uncertainty in optical depths retrieved from satellite measurements of visible wavelength radiance at the top of the atmosphere is quantified. Techniques are briefly reviewed for the estimation of optical depth from measurements of radiance, and it is noted that these estimates are always more uncertain at greater optical depths and larger solar zenith angles. The lack of radiometric calibration for visible wavelength imagers on operational satellites dominates the uncertainty in retrievals of optical depth. This is true both for single-pixel retrievals and for statistics calculated from a population of individual retrievals. For individual estimates or small samples, sensor discretization can also be significant, but the sensitivity of the retrieval to the specification of the model atmosphere is less important. The relative uncertainty in calibration affects the accuracy with which optical depth distributions measured by different sensors may be quantitatively compared, while the absolute calibration uncertainty, acting through the nonlinear mapping of radiance to optical depth, limits the degree to which distributions measured by the same sensor may be distinguished.
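    The growth of retrieval uncertainty with optical depth follows from the saturating radiance-to-optical-depth mapping; a sketch with a toy reflectance model (the functional form and the 1% bias are assumptions for illustration, not the paper's radiative transfer):

```python
import numpy as np

def reflectance(tau, g=0.85):
    """Toy two-stream-style mapping from cloud optical depth to visible
    reflectance, R = tau' / (tau' + 2) with tau' = (1 - g) * tau.
    This saturating form is an illustration, not the paper's model."""
    tp = (1.0 - g) * tau
    return tp / (tp + 2.0)

# How a fixed calibration error in reflectance maps into optical-depth error:
grid = np.linspace(0.1, 200.0, 20000)
errs = []
for tau in (2.0, 8.0, 32.0):
    r_biased = reflectance(tau) + 0.01          # +1% absolute reflectance bias
    tau_retrieved = grid[np.argmin(np.abs(reflectance(grid) - r_biased))]
    errs.append(tau_retrieved - tau)
print(errs)  # the same radiance bias produces a larger error at larger optical depth
```

    Because the curve flattens at large optical depth, inverting a slightly biased reflectance lands progressively farther from the true value, which is the behaviour the abstract describes.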

  6. SIMS of organics—Advances in 2D and 3D imaging and future outlook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilmore, Ian S.

    Secondary ion mass spectrometry (SIMS) has become a powerful technique for the label-free analysis of organics from cells to electronic devices. The development of cluster ion sources has revolutionized the field, increasing the sensitivity for organics by two or three orders of magnitude and, for large clusters such as C60 and argon clusters, allowing depth profiling of organics. The latter has provided the capability to generate stunning three dimensional images with depth resolutions of around 5 nm, simply unavailable by other techniques. Current state-of-the-art allows molecular images with a spatial resolution of around 500 nm to be achieved, and future developments are likely to progress into the sub-100 nm regime. This review is intended to bring those with some familiarity with SIMS up-to-date with the latest developments for organics, the fundamental principles that underpin this and define the future progress. State-of-the-art examples are showcased and signposts to more in-depth reviews about specific topics are given for the specialist.

  7. Measuring stress variation with depth using Barkhausen signals

    NASA Astrophysics Data System (ADS)

    Kypris, O.; Nlebedim, I. C.; Jiles, D. C.

    2016-06-01

    Magnetic Barkhausen noise analysis (BNA) is an established technique for the characterization of stress in ferromagnetic materials. An important application is the evaluation of residual stress in aerospace components, where shot-peening is used to strengthen the part by inducing compressive residual stresses on its surface. However, the evaluation of the resulting stress-depth gradients cannot be achieved by conventional BNA methods, where signals are interpreted in the time domain. The immediate alternative of using x-ray diffraction stress analysis is less than ideal, as the use of electropolishing to remove surface layers renders the part useless after inspection. Thus, a need for advancing the current BNA techniques prevails. In this work, it is shown how a parametric model for the frequency spectrum of Barkhausen emissions can be used to detect variations of stress along depth in ferromagnetic materials. Proof of concept is demonstrated by inducing linear stress-depth gradients using four-point bending, and fitting the model to the frequency spectra of measured Barkhausen signals, using a simulated annealing algorithm to extract the model parameters. Validation of our model suggests that in bulk samples the Barkhausen frequency spectrum can be expressed by a multi-exponential function with a dependence on stress and depth. One practical application of this spectroscopy method is the non-destructive evaluation of residual stress-depth profiles in aerospace components, thus helping to prevent catastrophic failures.
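    A minimal sketch of the fitting idea, assuming a two-layer multi-exponential spectrum and a bare-bones simulated-annealing loop (the paper's actual model parameters and annealing schedule are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def spectrum(f, amps, alphas):
    """Multi-exponential spectrum model: each depth layer i contributes
    A_i * exp(-alpha_i * f); deeper layers attenuate faster with frequency.
    The functional form and all parameters here are illustrative."""
    return sum(A * np.exp(-a * f) for A, a in zip(amps, alphas))

f = np.linspace(0.01, 1.0, 200)
measured = spectrum(f, [2.0, 0.5], [3.0, 10.0])   # synthetic "measured" spectrum

def cost(p):
    return float(np.sum((spectrum(f, p[:2], p[2:]) - measured) ** 2))

# Minimal simulated-annealing loop standing in for the paper's algorithm
p = np.array([1.0, 1.0, 5.0, 5.0])
c = c0 = cost(p)
best_c = c
T = 1.0
for _ in range(5000):
    q = p + rng.normal(0.0, 0.05, 4)
    cq = cost(q)
    if cq < c or rng.random() < np.exp((c - cq) / T):   # accept downhill, sometimes uphill
        p, c = q, cq
        best_c = min(best_c, c)
    T *= 0.999                                          # cooling schedule
print(best_c)  # fit residual, reduced well below the starting cost
```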

  8. Oscillating fluid lens in coherent retinal projection displays for extending depth of focus

    NASA Astrophysics Data System (ADS)

    von Waldkirch, M.; Lukowicz, P.; Troster, G.

    2005-09-01

    See-through head-mounted displays, which allow virtual information to be overlaid on the user's real view, normally suffer from a limited depth of focus (DOF). To overcome this problem we discuss in this paper the use of a fast oscillating, variable-focus lens in a retinal projection display. The evaluation is based on a schematic eye model and on the partial coherence simulation tool SPLAT, which allows us to calculate the projected retinal images of a text target. Objective image quality criteria demonstrate that the use of an oscillating lens is promising provided that partially coherent illumination light is used. In this case, psychometric measurements reveal that the depth of focus for reading text can be extended by a factor of up to 2.2. For fully coherent and incoherent illumination, however, the retinal images suffer from structural and contrast degradation effects, respectively.

  9. Fast and automatic depth control of iterative bone ablation based on optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-07-01

    Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation addresses a feedback system and enables safe ablation towards anatomical structures that would usually be at high risk of damage. This study is based on the combined working volumes of optical coherence tomography (OCT) and an Er:YAG cutting laser. A high level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems, a cross-like incision is ablated with the Er:YAG laser and segmented with OCT at three distances. The resulting Er:YAG coordinate system is reconstructed. A parameter list defines multiple sets of laser parameters, including discrete and specific ablation rates, as the ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle consisting of image processing, path planning, ablation, and moistening of tissue, the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone have been conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline without visual signs of thermal damage verify the feasibility of automated, OCT-controlled laser bone ablation with minimal process time.
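    The iterative control cycle can be sketched as follows; the parameter sets, ablation rates, and tolerance are hypothetical, and the OCT measurement and laser are reduced to callbacks:

```python
# Hypothetical ablation model: ordered from most to least aggressive,
# each parameter set removes a fixed depth per corrective laser path (mm)
PARAM_SETS = [("coarse", 0.20), ("medium", 0.05), ("fine", 0.01)]

def control_loop(target_depth, measure, ablate, tol=0.01):
    """Iterative depth control as described in the abstract: after every
    ablation pass, the OCT measurement ('measure') updates the remaining
    depth, and the most aggressive parameter set that still fits within
    tolerance is applied ('ablate') until no corrective path can be set."""
    while True:
        remaining = target_depth - measure()
        usable = [rate for _, rate in PARAM_SETS if rate <= remaining + tol]
        if remaining <= tol or not usable:
            return measure()
        ablate(usable[0])

# Toy specimen: ablated depth tracked in a mutable cell (no real OCT or laser here)
state = {"depth": 0.0}
final = control_loop(
    target_depth=1.0,
    measure=lambda: state["depth"],
    ablate=lambda rate: state.__setitem__("depth", state["depth"] + rate),
)
print(final)
```

    The achieved depth converges to the target within the tolerance of the smallest ablation rate, which mirrors the stopping condition reported in the abstract.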

  10. Frequency Based Design Partitioning to Achieve Higher Throughput in Digital Cross Correlator for Aperture Synthesis Passive MMW Imager.

    PubMed

    Asif, Muhammad; Guo, Xiangzhou; Zhang, Jing; Miao, Jungang

    2018-04-17

    Digital cross-correlation is central to many applications, including but not limited to digital image processing, satellite navigation and remote sensing. With recent advancements in digital technology, the computational demands of such applications have increased enormously. In this paper we present a high-throughput digital cross correlator capable of processing a 1-bit digitized stream at rates of up to 2 GHz simultaneously on 64 channels, i.e., approximately 4 trillion correlation and accumulation operations per second. In order to achieve higher throughput, we have focused on frequency-based partitioning of our design and tried to minimize and localize high-frequency operations. This correlator is designed for a Passive Millimeter Wave Imager intended for the detection of contraband items concealed on the human body. The goals are to increase the system bandwidth, achieve video-rate imaging, improve sensitivity and reduce the size. The design methodology is detailed in subsequent sections, elaborating the techniques enabling high throughput. The design is verified for a Xilinx Kintex UltraScale device in simulation, and the implementation results are given in terms of device utilization and power consumption estimates. Our results show considerable improvements in throughput as compared to our baseline design, while the correlator successfully meets the functional requirements.
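    The core 1-bit correlate-and-accumulate operation can be sketched in a few lines; this is a software illustration of what the hardware performs with XNOR gates and popcounts over packed bit streams (the test signal is arbitrary):

```python
import numpy as np

def one_bit_correlate(a, b):
    """1-bit cross-correlation: both signals are quantized to their sign bit,
    so the per-sample product reduces to XNOR (agree -> +1, disagree -> -1),
    which hardware implements as popcounts over packed bit streams."""
    qa = a >= 0
    qb = b >= 0
    agree = np.count_nonzero(qa == qb)
    return (2 * agree - a.size) / a.size      # normalized correlation in [-1, 1]

t = np.linspace(0.0, 1.0, 2048, endpoint=False)
s = np.sin(2 * np.pi * 50 * t + 0.1)
print(one_bit_correlate(s, s))                # identical streams correlate fully
print(one_bit_correlate(s, -s))               # inverted streams anti-correlate
```

    Because each sample costs only a comparison and a bit count, the operation maps naturally onto FPGA fabric, which is what makes the quoted multi-trillion operations per second feasible.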

  11. An x-ray fluorescence imaging system for gold nanoparticle detection.

    PubMed

    Ricketts, K; Guazzoni, C; Castoldi, A; Gibson, A P; Royle, G J

    2013-11-07

    Gold nanoparticles (GNPs) may be used as a contrast agent to identify tumour location and can be modified to target and image specific tumour biological parameters. There are currently no imaging systems in the literature that have sufficient sensitivity to GNP concentration and distribution measurement at sufficient tissue depth for use in in vivo and in vitro studies. We have demonstrated that high detecting sensitivity of GNPs can be achieved using x-ray fluorescence; furthermore this technique enables greater depth imaging in comparison to optical modalities. Two x-ray fluorescence systems were developed and used to image a range of GNP imaging phantoms. The first system consisted of a 10 mm(2) silicon drift detector coupled to a slightly focusing polycapillary optic which allowed 2D energy resolved imaging in step and scan mode. The system has sensitivity to GNP concentrations as low as 1 ppm. GNP concentrations different by a factor of 5 could be resolved, offering potential to distinguish tumour from non-tumour. The second system was designed to avoid slow step and scan image acquisition; the feasibility of excitation of the whole specimen with a wide beam and detection of the fluorescent x-rays with a pixellated controlled drift energy resolving detector without scanning was investigated. A parallel polycapillary optic coupled to the detector was successfully used to ascertain the position where fluorescence was emitted. The tissue penetration of the technique was demonstrated to be sufficient for near-surface small-animal studies, and for imaging 3D in vitro cellular constructs. Previous work demonstrates strong potential for both imaging systems to form quantitative images of GNP concentration.

  12. Depth enhancement of S3D content and the psychological effects

    NASA Astrophysics Data System (ADS)

    Hirahara, Masahiro; Shiraishi, Saki; Kawai, Takashi

    2012-03-01

    Stereoscopic 3D (S3D) imaging technologies are now widely used to create content for movies, TV programs, games, etc. Although S3D content differs from 2D content in its use of binocular parallax to induce depth sensation, the relationship between depth control and the user experience remains unclear. In this study, the user experience was subjectively and objectively evaluated in order to determine the effectiveness of depth control, such as an expansion or reduction, or a forward or backward shift, of the range of maximum parallactic angles in the cross and uncross directions (the depth bracket). Four types of S3D content were used in the subjective and objective evaluations. The depth brackets of the comparison stimuli were modified in order to enhance the depth sensation corresponding to the content. Interpretation Based Quality (IBQ) methodology was used for the subjective evaluation, and heart rate was measured to evaluate the physiological effect. The results of the evaluations suggest the following two points. (1) Expansion/reduction of the depth bracket affects preference and enhances positive emotions toward the S3D content. (2) Expansion/reduction of the depth bracket produces the above-mentioned effects more notably than shifting in the cross/uncross directions.
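    A depth-bracket adjustment of the kind evaluated can be sketched as a linear operation on parallactic angles; the linear form and the sign convention are assumptions for illustration:

```python
def adjust_depth_bracket(parallaxes, scale=1.0, shift=0.0):
    """Depth-bracket manipulation: 'scale' expands (>1) or reduces (<1) the
    range of maximum parallactic angles, while 'shift' moves the whole
    bracket in the cross/uncross direction. Illustrative sketch only."""
    return [scale * p + shift for p in parallaxes]

# Parallactic angles (deg) of key scene elements; expand the bracket by 1.5x
original = [-0.6, -0.1, 0.3, 0.8]
expanded = adjust_depth_bracket(original, scale=1.5)
print(expanded)
```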

  13. Human machine interface by using stereo-based depth extraction

    NASA Astrophysics Data System (ADS)

    Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibility of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight camera (ToF), can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.

  14. Plasmonics and metamaterials based super-resolution imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liu, Zhaowei

    2017-05-01

    In recent years, surface imaging of various biological dynamics and biomechanical phenomena has seen a surge of interest. Imaging of processes such as exocytosis and kinesin motion is most effective when depth is limited to a very thin region of interest at the edge of the cell or specimen. However, many objects and processes of interest are of size scales below the diffraction limit for safe, visible wavelength illumination. Super-resolution imaging methods such as structured illumination microscopy and others have offered various compromises between resolution, imaging speed, and bio-compatibility. In this talk, I will present our most recent progress in plasmonic structured illumination microscopy (PSIM) and localized plasmonic structured illumination microscopy (LPSIM), and their applications in bio-imaging. We have achieved wide-field surface imaging with resolution down to 75 nm while maintaining reasonable speed and compatibility with biological specimens. These plasmonic enhanced super-resolution techniques offer unique solutions to obtain 50 nm spatial resolution and wide-field imaging speeds of 50 frames per second at the same time.

  15. Comprehensive Detection of Gas Plumes from Multibeam Water Column Images with Minimisation of Noise Interferences

    PubMed Central

    Zhao, Jianhu; Zhang, Hongmei; Wang, Shiqi

    2017-01-01

    Multibeam echosounder systems (MBES) can record the backscatter strengths of gas plumes in water column (WC) images, which may indicate the possible occurrence of gas at certain depths. Manual or automatic detection is generally adopted for finding gas plumes, but frequently results in low efficiency and high false detection rates because the WC images are polluted by noise. To improve the efficiency and reliability of detection, a comprehensive detection method is proposed in this paper. In the proposed method, the characteristics of WC background noise are first analyzed and given. Then, mean and standard deviation threshold segmentations are applied respectively to denoise the time-angle and depth-angle images, an intersection operation is performed on the two segmented images to further weaken noise in the WC data, and the gas plumes in the WC data are detected from the intersection image by a morphological constraint. The proposed method was tested in shallow-water and deep-water experiments. In these experiments, detection was conducted automatically, and higher correct detection rates than the traditional methods were achieved. The performance of the proposed method is analyzed and discussed. PMID:29186014
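    The threshold-and-intersect steps can be sketched as follows; the noise model, threshold multiplier k, and the co-located plume block are illustrative (in the real method the two images are different projections of the same WC data, so plume samples co-occur), and the final morphological constraint is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def segment(img, k=2.0):
    """Mean/standard-deviation threshold segmentation: keep samples brighter
    than mean + k*std of the image (per-image denoising step; k is a guess)."""
    return img > img.mean() + k * img.std()

# Synthetic time-angle and depth-angle water-column images: Gaussian noise
# plus a bright block standing in for gas-plume backscatter in both views
time_angle = rng.normal(0.0, 1.0, (100, 100))
depth_angle = rng.normal(0.0, 1.0, (100, 100))
for img in (time_angle, depth_angle):
    img[40:60, 45:55] += 8.0

# Intersection of the two segmented images suppresses noise that appears
# in only one of them, which is the key denoising idea of the paper
mask = segment(time_angle) & segment(depth_angle)
print(mask[40:60, 45:55].mean(), mask[:30, :].mean())
```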

  16. Active-passive data fusion algorithms for seafloor imaging and classification from CZMIL data

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Ramnath, Vinod; Feygels, Viktor; Kim, Minsu; Mathur, Abhinav; Aitken, Jennifer; Tuell, Grady

    2010-04-01

    CZMIL will simultaneously acquire lidar and passive spectral data. These data will be fused to produce enhanced seafloor reflectance images from each sensor, and combined at a higher level to achieve seafloor classification. In the DPS software, the lidar data will first be processed to solve for depth, attenuation, and reflectance. The depth measurements will then be used to constrain the spectral optimization of the passive spectral data, and the resulting water column estimates will be used recursively to improve the estimates of seafloor reflectance from the lidar. Finally, the resulting seafloor reflectance cube will be combined with texture metrics estimated from the seafloor topography to produce classifications of the seafloor.

  17. Split image optical display

    DOEpatents

    Veligdan, James T.

    2005-05-31

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  18. Split image optical display

    DOEpatents

    Veligdan, James T [Manorville, NY

    2007-05-29

    A video image is displayed from an optical panel by splitting the image into a plurality of image components, and then projecting the image components through corresponding portions of the panel to collectively form the image. Depth of the display is correspondingly reduced.

  19. Using "residual depths" to monitor pool depths independently of discharge

    Treesearch

    Thomas E. Lisle

    1987-01-01

    As vital components of habitat for stream fishes, pools are often monitored to follow the effects of enhancement projects and natural stream processes. Variations of water depth with discharge, however, can complicate monitoring changes in the depth and volume of pools. To subtract the effect of discharge on depth in pools, residual depths can be measured. Residual...
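    The residual depth computation reduces to a running-maximum scan of the longitudinal bed profile from the downstream end; the profile values below are made up for illustration:

```python
def residual_depths(bed_profile):
    """Residual depth (Lisle 1987): the water depth that would remain over
    each point of the bed at zero discharge. Scanning the longitudinal bed
    profile from the downstream end, the residual water surface at a point
    is the highest bed elevation (riffle crest) found downstream of it."""
    level = float("-inf")
    out = []
    for elev in reversed(bed_profile):        # downstream -> upstream
        level = max(level, elev)
        out.append(level - elev)              # 0 on riffles, > 0 in pools
    return out[::-1]

# Bed elevations (m) in downstream order: a pool at index 2 is controlled
# by the riffle crest at index 3
profile = [2.0, 1.5, 0.8, 1.4, 1.0, 0.5]
print(residual_depths(profile))
```

    Because the controlling riffle crest, not the water surface, sets the datum, the result does not change with discharge, which is the point of the method.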

  20. Determination of linear defect depths from eddy currents disturbances

    NASA Astrophysics Data System (ADS)

    Ramos, Helena Geirinhas; Rocha, Tiago; Pasadas, Dário; Ribeiro, Artur Lopes

    2014-02-01

    One of the still open problems in inspection research concerns the determination of the maximum depth to which a surface defect extends. Eddy current testing, being one of the most sensitive and well-established inspection methods, able to detect and characterize different types of defects in conductive materials, is an adequate technique to solve this problem. This paper reports a study of the disturbances in the magnetic field and in the lines of current due to a machined linear defect of varying depth, in order to extract relevant information that allows the determination of the defect characteristics. The image of the eddy currents (EC) is paramount to understanding the physical phenomena involved. The EC images for this study are generated using a commercial finite element model (FLUX). The excitation used produces a uniform magnetic field on the plate under test in the absence of defects, and the disturbances due to the defects are compared with those obtained from experimental measurements. In order to increase the limited penetration depth of the method, giant magnetoresistors (GMR) are used to lower the working frequency. The geometry of the planar excitation coil produces a uniform magnetic field over an area around the GMR sensor, inducing a uniform eddy current distribution in the plate. In the presence of defects in the material surface, the lines of current inside the material are deviated from their uniform direction, and the magnetic field produced by these currents is sensed by the GMR sensor. Besides the theoretical study of the electromagnetic system, the paper describes the experiments that have been carried out to support the theory, and conclusions are drawn for cracks of different depths.

  1. Validation of snow depth reconstruction from lapse-rate webcam images against terrestrial laser scanner measurements in the Central Pyrenees

    NASA Astrophysics Data System (ADS)

    Revuelto, Jesús; Jonas, Tobias; López-Moreno, Juan Ignacio

    2015-04-01

    Snow distribution in mountain areas plays a key role in many processes, such as runoff dynamics, ecological cycles and erosion rates. Nevertheless, the acquisition of high resolution snow depth (SD) data in space and time is a complex task that requires the application of remote sensing techniques such as Terrestrial Laser Scanning (TLS). Such techniques require intense field work to obtain high quality snowpack evolution during a specific time period. Combining TLS data with other remote sensing techniques (satellite images, photogrammetry…) and in-situ measurements could improve the available information on a variable with rapid topographic changes. The aim of this study is to reconstruct daily SD distribution from lapse-rate images from a webcam and data from two to three TLS acquisitions during the snow melting periods of 2012, 2013 and 2014. This information is obtained at the Izas Experimental catchment in the Central Spanish Pyrenees; a catchment of 33 ha, with an elevation ranging from 2050 to 2350 m a.s.l. The lapse-rate images provide the Snow Covered Area (SCA) evolution at the study site, while TLS provides high resolution information on SD distribution. With ground control points, lapse-rate images are georectified and their information is rasterized into a 1-meter resolution Digital Elevation Model. Subsequently, for each snow season, the Melt-Out Date (MOD) of each pixel is obtained. The reconstruction increases the estimated SD loss for each time step (day) in a distributed manner, starting the reconstruction for each grid cell at the MOD (note the reverse time evolution). To do so, the reconstruction has been previously adjusted in time and space as follows. Firstly, the degree-day factor (SD loss / positive average temperatures) is calculated from the information measured at an automatic weather station (AWS) located in the catchment. Afterwards, comparing the SD loss at the AWS during a specific time period (i.e. between two TLS
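    The reverse-time degree-day reconstruction can be sketched per pixel as follows; the degree-day factor and temperatures are invented, and in the real workflow the factor would be calibrated at the AWS between TLS acquisitions:

```python
def reconstruct_sd(melt_out_day, mean_temps, ddf):
    """Backward degree-day reconstruction: a pixel has zero snow depth at its
    melt-out date (MOD); stepping back one day at a time, depth grows by
    ddf * (positive mean temperature), mirroring the reverse-time scheme in
    the abstract. 'ddf' is the degree-day factor (SD loss per positive
    degree-day); all numbers here are made up."""
    sd = {melt_out_day: 0.0}
    depth = 0.0
    for day in range(melt_out_day - 1, -1, -1):
        depth += ddf * max(0.0, mean_temps[day])   # melt that occurred on 'day'
        sd[day] = depth
    return sd

daily_mean_temp = [3.0, 5.0, -1.0, 4.0]            # deg C for days 0..3
sd = reconstruct_sd(melt_out_day=3, mean_temps=daily_mean_temp, ddf=0.004)
print(sd)
```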

  2. A comparison of observed and analytically derived remote sensing penetration depths for turbid water

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; Usry, J. W.; Witte, W. G.; Whitlock, C. H.; Guraus, E. A.

    1981-01-01

    The depth to which sunlight will penetrate in turbid waters was investigated. The tests were conducted in water over a range of single scattering albedos and over a range of solar elevation angles. Two different techniques were used to determine the depth of light penetration. The results showed little change in the depth of sunlight penetration with changing solar elevation angle. A comparison of the penetration depths indicates that the best agreement between the two methods was achieved when the quasi-single scattering relationship was not corrected for solar angle. It is concluded that sunlight penetration is dependent on inherent water properties only.

  3. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    , commonly implemented as chrome-on-quartz photomasks. These apertures block or permit to pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures by patterned optical filter arrays, referred as ``color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn, allow the input image to be modulated not only spatially but spectrally as well, entailing more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital-micromirror-device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures will be presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in the quality of the reconstructions compared with conventional block-unblock coded aperture-based CSI architectures. On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and also, it proposes, for the first time, a new imaging device that captures simultaneously 4D data cubes (2D spatial+1D spectral+depth imaging) with as few as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. It aims to create super-human sensing that will enable the perception of our world in new and exciting ways. 
With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately
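
In the discrete model, the spatio-spectral modulation performed by a color-coded aperture reduces to a per-pixel, per-band transmittance multiplication; the following is a minimal sketch of that idea (the array layout and all names are assumptions made here, not the thesis' forward model):

```python
def code_scene(scene, aperture):
    """Apply a color-coded aperture to a spectral data cube.

    scene[x][y][band]    -- input spectral intensities
    aperture[x][y][band] -- per-band transmittance in [0, 1]
    Returns the coded cube: element-wise product over space and spectrum.
    """
    return [[[s * a for s, a in zip(spec, apert)]
             for spec, apert in zip(srow, arow)]
            for srow, arow in zip(scene, aperture)]
```

A traditional block-unblock aperture is the special case in which every band at a given pixel shares the same 0/1 value; the colored aperture lifts that restriction.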

  4. Correlation plenoptic imaging

    NASA Astrophysics Data System (ADS)

    Pepe, Francesco V.; Di Lena, Francesco; Garuccio, Augusto; D'Angelo, Milena

    2017-06-01

    Plenoptic Imaging (PI) is a novel optical technique for achieving tridimensional imaging in a single shot. In conventional PI, a microlens array is inserted in the native image plane and the sensor array is moved behind the microlenses. On the one hand, the microlenses act as imaging pixels to reproduce the image of the scene; on the other hand, each microlens reproduces on the sensor array an image of the camera lens, thus providing the angular information associated with each imaging pixel. The recorded propagation direction is exploited, in post-processing, to computationally retrace the geometrical light path, thus enabling the refocusing of different planes within the scene, the extension of the depth of field of the acquired image, as well as the 3D reconstruction of the scene. However, a trade-off between spatial and angular resolution is built in the standard plenoptic imaging process. We demonstrate that the second-order spatio-temporal correlation properties of light can be exploited to overcome this fundamental limitation. Using two correlated beams, from either a chaotic or an entangled photon source, we can perform imaging in one arm and simultaneously obtain the angular information in the other arm. In fact, we show that the second order correlation function possesses plenoptic imaging properties (i.e., it encodes both spatial and angular information), and is thus characterized by a key re-focusing and 3D imaging capability. From a fundamental standpoint, the plenoptic application is the first situation where the counterintuitive properties of correlated systems are effectively used to beat intrinsic limits of standard imaging systems. From a practical standpoint, our protocol can dramatically enhance the potentials of PI, paving the way towards its promising applications.
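
The computational "retracing of the geometrical light path" mentioned above is, in standard PI, a shift-and-sum over angular samples. A minimal 1D illustration follows; this sketches conventional plenoptic refocusing, not the authors' correlation-based protocol, and the function name and the `alpha` parameterization are choices made here for illustration:

```python
def refocus_1d(lightfield, alpha):
    """Shift-and-sum refocusing of a 1D light field.

    lightfield[u][x] -- intensity at angular index u, spatial index x
    alpha            -- ratio of the refocus plane to the native focal plane
    Each angular view is shifted in proportion to (1 - 1/alpha), then views
    are averaged; alpha = 1 reproduces the native focal plane.
    """
    n_u = len(lightfield)
    n_x = len(lightfield[0])
    center = (n_u - 1) / 2
    out = [0.0] * n_x
    for u, row in enumerate(lightfield):
        shift = int(round((u - center) * (1.0 - 1.0 / alpha)))
        for x in range(n_x):
            src = x + shift
            if 0 <= src < n_x:
                out[x] += row[src]
    return [v / n_u for v in out]
```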

  5. A new segmentation strategy for processing magnetic anomaly detection data of shallow depth ferromagnetic pipeline

    NASA Astrophysics Data System (ADS)

    Feng, Shuo; Liu, Dejun; Cheng, Xing; Fang, Huafeng; Li, Caifang

    2017-04-01

    Magnetic anomalies produced by underground ferromagnetic pipelines polarized by the Earth's magnetic field are used to obtain information on the location, burial depth and other parameters of the pipelines. To achieve fast inversion and interpretation of measured data, a fast and stable forward method is necessary. Magnetic dipole reconstruction (MDR), a kind of integration numerical method, is well suited for simulating a thin pipeline anomaly. In MDR the pipeline model must be cut into small magnetic dipoles through different segmentation methods. The segmentation method has an impact on the stability and speed of the forward calculation. Rapid and accurate simulation of deep-buried pipelines has been achieved by the existing segmentation method. However, in practical measurement the depth of an underground pipe is uncertain, and for shallow-buried pipelines the existing segmentation may generate significant errors. This paper aims at solving this problem in three stages. First, the cause of the inaccuracy is analyzed by simulation experiment. Secondly, a new variable-interval section segmentation is proposed based on the existing segmentation; it helps the MDR method obtain simulation results quickly while ensuring accuracy for models at different depths. Finally, the measured data are inverted based on the new segmentation method. The result proves that inversion based on the new segmentation can achieve fast and accurate recovery of the depth parameters of underground pipes without being limited by pipeline depth.
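
The core of MDR, summing the fields of many small dipoles along the discretized pipe, can be sketched as follows. This toy uses a 2D geometry, vertical magnetization, and a fixed-interval segmentation (the simple scheme the paper improves upon); all names and the geometry are illustrative assumptions:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

def dipole_bz(mx, mz, dx, dz):
    """Vertical field component of a point dipole with moment (mx, mz),
    observed at horizontal/vertical offsets (dx, dz) from the dipole."""
    r = math.hypot(dx, dz)
    rx, rz = dx / r, dz / r
    m_dot_r = mx * rx + mz * rz
    return MU0 / (4 * math.pi * r**3) * (3 * m_dot_r * rz - mz)

def pipeline_anomaly(x_obs, depth, x0, x1, n_seg, mz_per_seg):
    """Anomaly at surface point x_obs from a horizontal pipe at `depth`,
    cut into n_seg equal segments, each replaced by one dipole."""
    total = 0.0
    for i in range(n_seg):
        xs = x0 + (i + 0.5) * (x1 - x0) / n_seg
        total += dipole_bz(0.0, mz_per_seg, x_obs - xs, depth)
    return total
```

The anomaly peaks above the pipe and is symmetric about its center, which is the signature exploited in inversion; the paper's contribution is choosing segment intervals adaptively so this sum stays accurate when `depth` is small relative to the segment length.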

  6. Design and technical evaluation of fibre-coupled Raman probes for the image-guided discrimination of cancerous skin

    NASA Astrophysics Data System (ADS)

    Schleusener, J.; Reble, C.; Helfmann, J.; Gersonde, I.; Cappius, H.-J.; Glanert, M.; Fluhr, J. W.; Meinke, M. C.

    2014-03-01

    Two different designs for fibre-coupled Raman probes are presented that are optimized for discriminating cancerous and normal skin by achieving high epithelial sensitivity, i.e. detecting a major component of the Raman signal from the depth range of the epithelium. This is achieved by optimizing Raman spot diameters to the range of ≈200 µm, which distinguishes this approach from the common applications of either Raman microspectroscopy (1-5 µm) or measurements on larger sampling volumes using spot sizes of a few mm. Video imaging with a depicted area on the order of a few cm, to allow comparing Raman measurements to the location of the histo-pathologic findings, is integrated in both designs. This is important due to the inhomogeneity of cancerous lesions. Video image acquisition is achieved using white-light LED illumination, which avoids ambient light artefacts. The design requirements focus either on a compact light-weight configuration, for pen-like handling, or on a video-visible measurement spot to enable increased positioning accuracy. Both probes are evaluated with regard to spot size, Rayleigh suppression, background fluorescence, depth sensitivity, clinical handling and ambient light suppression. Ex vivo measurements on porcine ear skin correlate well with the findings of other groups.

  7. Design of efficient, broadband single-element (20-80 MHz) ultrasonic transducers for medical imaging applications.

    PubMed

    Cannata, Jonathan M; Ritter, Timothy A; Chen, Wo-Hsing; Silverman, Ronald H; Shung, K Kirk

    2003-11-01

    This paper discusses the design, fabrication, and testing of sensitive broadband lithium niobate (LiNbO3) single-element ultrasonic transducers in the 20-80 MHz frequency range. Transducers of varying dimensions were built for an f# range of 2.0-3.1. The desired focal depths were achieved by either casting an acoustic lens on the transducer face or press-focusing the piezoelectric into a spherical curvature. For designs that required electrical impedance matching, a low impedance transmission line coaxial cable was used. All transducers were tested in a pulse-echo arrangement, whereby the center frequency, bandwidth, insertion loss, and focal depth were measured. Several transducers were fabricated with center frequencies in the 20-80 MHz range with the measured -6 dB bandwidths and two-way insertion loss values ranging from 57 to 74% and 9.6 to 21.3 dB, respectively. Both transducer focusing techniques proved successful in producing highly sensitive, high-frequency, single-element, ultrasonic-imaging transducers. In vivo and in vitro ultrasonic backscatter microscope (UBM) images of human eyes were obtained with the 50 MHz transducers. The high sensitivity of these devices could possibly allow for an increase in depth of penetration, higher image signal-to-noise ratio (SNR), and improved image contrast at high frequencies when compared to previously reported results.

  8. Enhancing depth of focus in tilted microfluidics channels by digital holography.

    PubMed

    Matrecano, Marcella; Paturzo, Melania; Finizio, Andrea; Ferraro, Pietro

    2013-03-15

    In this Letter we propose a method to enhance the limited depth of field (DOF) in optical imaging systems, through digital holography. The proposed approach is based on the introduction of a cubic phase plate into the diffraction integral, analogous to what occurs in white-light imaging systems. By this approach we show that it is possible to improve the DOF and to recover the extended focus image of a tilted object in a single reconstruction step. Moreover, we demonstrate the possibility of obtaining well-focused biological cells flowing into a tilted microfluidic channel.

  9. Contrast-enhanced optical coherence tomography with picomolar sensitivity for functional in vivo imaging

    NASA Astrophysics Data System (ADS)

    Liba, Orly; Sorelle, Elliott D.; Sen, Debasish; de La Zerda, Adam

    2016-03-01

    Optical Coherence Tomography (OCT) enables real-time imaging of living tissues at cell-scale resolution over millimeters in three dimensions. Despite these advantages, functional biological studies with OCT have been limited by a lack of exogenous contrast agents that can be distinguished from tissue. Here we report an approach to functional OCT imaging that implements custom algorithms to spectrally identify unique contrast agents: large gold nanorods (LGNRs). LGNRs exhibit 110-fold greater spectral signal per particle than conventional GNRs, which enables detection of individual LGNRs in water and concentrations as low as 250 pM in the circulation of living mice. This translates to ~40 particles per imaging voxel in vivo. Unlike previous implementations of OCT spectral detection, the methods described herein adaptively compensate for depth and processing artifacts on a per-sample basis. Collectively, these methods enable high-quality noninvasive contrast-enhanced OCT imaging in living subjects, including detection of tumor microvasculature at twice the depth achievable with conventional OCT. Additionally, multiplexed detection of spectrally-distinct LGNRs was demonstrated to observe discrete patterns of lymphatic drainage and identify individual lymphangions and lymphatic valve functional states. These capabilities provide a powerful platform, termed MOZART, for molecular imaging and characterization of tissue noninvasively at cellular resolution.

  10. Deep Tissue Fluorescent Imaging in Scattering Specimens Using Confocal Microscopy

    PubMed Central

    Clendenon, Sherry G.; Young, Pamela A.; Ferkowicz, Michael; Phillips, Carrie; Dunn, Kenneth W.

    2015-01-01

    In scattering specimens, multiphoton excitation and nondescanned detection improve imaging depth by a factor of 2 or more over confocal microscopy; however, imaging depth is still limited by scattering. We applied the concept of clearing to deep tissue imaging of highly scattering specimens. Clearing is a remarkably effective approach to improving image quality at depth using either confocal or multiphoton microscopy. Tissue clearing appears to eliminate the need for multiphoton excitation for deep tissue imaging. PMID:21729357

  11. Ultrasound strain imaging using Barker code

    NASA Astrophysics Data System (ADS)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, Barker code is applied to strain imaging to improve its quality. Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging because a high sidelobe level causes high decorrelation noise. Instead of using the conventional matched filter, we use the Wiener filter to decode the Barker-coded echo signal to suppress the range sidelobes. We also compare the performance of the Barker code and the conventional short pulse in simulation. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse under low-eSNR or large-depth conditions owing to the increased eSNR.
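
The -22 dB sidelobe figure quoted above can be checked directly from the length-13 Barker code's autocorrelation, which is what a matched filter produces for the bare code (names below are our own; no claim is made about the paper's simulation setup):

```python
import math

# Length-13 Barker code: peak autocorrelation 13, peak sidelobe magnitude 1.
BARKER13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorr(code):
    """Aperiodic autocorrelation: the matched-filter response to the code."""
    n = len(code)
    return [sum(code[i] * code[i + lag]
                for i in range(n) if 0 <= i + lag < n)
            for lag in range(-(n - 1), n)]

acf = autocorr(BARKER13)
zero_lag = len(BARKER13) - 1            # index of the main peak in acf
peak = acf[zero_lag]                    # 13
sidelobe = max(abs(v) for i, v in enumerate(acf) if i != zero_lag)  # 1
sidelobe_db = 20 * math.log10(sidelobe / peak)  # about -22.3 dB
```

A Wiener decoder, as used in the paper, trades the matched filter's SNR optimality for stronger sidelobe suppression by dividing by the code spectrum regularized with a noise term.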

  12. Depth Structure from Asymmetric Shading Supports Face Discrimination

    PubMed Central

    Chen, Chien-Chung; Chen, Chin-Mei; Tyler, Christopher W.

    2013-01-01

    To examine the effect of illumination direction on the ability of observers to discriminate between faces, we manipulated the direction of illumination on scanned 3D face models. In order to dissociate the surface reflectance and illumination components of front-view face images, we introduce a symmetry algorithm that can separate the symmetric and asymmetric components of the face in both low and high spatial frequency bands. Based on this approach, hybrid face stimuli were constructed with different combinations of symmetric and asymmetric spatial content. Discrimination results with these images showed that asymmetric illumination information biased face perception toward the structure of the shading component, while the symmetric illumination information had little, if any, effect. Measures of perceived depth showed that this property increased systematically with the asymmetric but not the symmetric low spatial frequency component. Together, these results suggest that (1) the asymmetric 3D shading information dramatically affects both the perceived facial information and the perceived depth of the facial structure; and (2) these effects both increase as the illumination direction is shifted to the side. Thus, our results support the hypothesis that face processing has a strong 3D component. PMID:23457484
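
The symmetric/asymmetric separation described above amounts, in its simplest form, to averaging and differencing an image with its left-right mirror; a minimal sketch under that assumption (the paper additionally band-splits by spatial frequency, which is omitted here):

```python
def symmetry_split(image):
    """Split an image (list of pixel rows) into left-right symmetric and
    asymmetric parts about the vertical midline.

    Guarantees: image == sym + asym, and mirroring sym leaves it unchanged.
    """
    sym, asym = [], []
    for row in image:
        mirrored = row[::-1]
        sym.append([(a + b) / 2 for a, b in zip(row, mirrored)])
        asym.append([(a - b) / 2 for a, b in zip(row, mirrored)])
    return sym, asym
```

Hybrid stimuli are then built by recombining a symmetric component from one condition with an asymmetric component from another.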

  13. Image translation for single-shot focal tomography

    DOE PAGES

    Llull, Patrick; Yuan, Xin; Carin, Lawrence; ...

    2015-01-01

    Focus and depth of field are conventionally addressed by adjusting longitudinal lens position. More recently, combinations of deliberate blur and computational processing have been used to extend depth of field. Here we show that dynamic control of transverse and longitudinal lens position can be used to decode focus and extend depth of field without degrading static resolution. Our results suggest that optical image stabilization systems may be used for autofocus, extended depth of field, and 3D imaging.

  14. An alternative approach to depth of field which avoids the blur circle and uses the pixel pitch

    NASA Astrophysics Data System (ADS)

    Schuster, Norbert

    2015-09-01

    Modern thermal imaging systems increasingly use uncooled detectors. High-volume applications work with detectors that have a reduced pixel count (typically between 200x150 and 640x480). This limits the applicability of modern image treatment procedures such as wavefront coding. On the other hand, uncooled detectors demand lenses with fast F-numbers near 1.0. What are the limits on resolution if the target to analyze changes its distance to the camera system? The aim of implementing lens arrangements without any focusing mechanism demands a deeper quantification of the depth of field problem. The proposed depth of field approach avoids the classic "accepted image blur circle". It is based on a camera-specific depth of focus, which is transformed into object space by paraxial relations. The traditional Rayleigh criterion is based on the unaberrated point spread function and delivers a first-order relation for the depth of focus; hence neither the actual lens resolution nor the detector impact is considered. The camera-specific depth of focus, in contrast, respects several camera properties: lens aberrations at the actual F-number, detector size, and pixel pitch. Its basis is the through-focus MTF, considered at the detector's Nyquist frequency, which has a nearly symmetric course around the position of sharp imaging. The camera-specific depth of focus is thus the axial distance in front of and behind the sharp image plane over which the through-focus MTF remains above 0.25. This camera-specific depth of focus is transferred into object space by paraxial relations. A generally applicable depth of field diagram follows, which can be applied to lenses realizing a lateral magnification range of -0.05…0. Easy-to-handle formulas relate the hyperfocal distance to the borders of the depth of field as a function of the sharp distance. These relations are in line with the classical depth of field theory. Thermal pictures, taken by different IR
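
For comparison with the classical theory the abstract says its relations agree with, here is a sketch of the conventional paraxial hyperfocal-distance and depth-of-field formulas with the pixel pitch standing in for the "accepted image blur circle". This is the textbook form, not the paper's MTF-based camera-specific depth of focus; the example values (a 10 mm F/1.0 LWIR lens, 17 µm pitch) are assumptions for illustration:

```python
def hyperfocal(f_mm, f_number, pitch_mm):
    """Hyperfocal distance H = f^2 / (N * c) + f, with the pixel pitch
    substituted for the classical blur-circle diameter c."""
    return f_mm**2 / (f_number * pitch_mm) + f_mm

def dof_limits(s_mm, f_mm, f_number, pitch_mm):
    """Near and far borders of the depth of field for sharp distance s_mm.
    Beyond the hyperfocal distance the far border extends to infinity."""
    H = hyperfocal(f_mm, f_number, pitch_mm)
    near = H * s_mm / (H + (s_mm - f_mm))
    far = H * s_mm / (H - (s_mm - f_mm)) if s_mm < H else float('inf')
    return near, far
```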

  15. Miniature all-optical probe for photoacoustic and ultrasound dual-modality imaging

    NASA Astrophysics Data System (ADS)

    Li, Guangyao; Guo, Zhendong; Chen, Sung-Liang

    2018-02-01

    Photoacoustic (PA) imaging forms an image based on optical absorption contrasts with ultrasound (US) resolution. In contrast, US imaging is based on acoustic backscattering to provide structural information. In this study, we develop a miniature all-optical probe for high-resolution PA-US dual-modality imaging over a large imaging depth range. The probe employs three individual optical fibers (F1-F3) to achieve optical generation and detection of acoustic waves for both PA and US modalities. To offer wide-angle laser illumination, fiber F1 with a large numerical aperture (NA) is used for PA excitation. On the other hand, wide-angle US waves are generated by laser illumination on an optically absorbing composite film which is coated on the end face of fiber F2. Both the excited PA and backscattered US waves are detected by a Fabry-Pérot cavity on the tip of fiber F3 for wide-angle acoustic detection. The wide angular features of the three optical fibers make large-NA synthetic aperture focusing technique possible and thus high-resolution PA and US imaging. The probe diameter is less than 2 mm. Over a depth range of 4 mm, lateral resolutions of PA and US imaging are 104-154 μm and 64-112 μm, respectively, and axial resolutions of PA and US imaging are 72-117 μm and 31-67 μm, respectively. To show the imaging capability of the probe, phantom imaging with both PA and US contrasts is demonstrated. The results show that the probe has potential for endoscopic and intravascular imaging applications that require PA and US contrast with high resolution.

  16. Magnetic Resonance Imaging (MRI) Analysis of Fibroid Location in Women Achieving Pregnancy After Uterine Artery Embolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Woodruff J.; Bratby, Mark John

    The purpose of this study was to evaluate the fibroid morphology in a cohort of women achieving pregnancy following treatment with uterine artery embolization (UAE) for symptomatic uterine fibroids. A retrospective review of magnetic resonance imaging (MRI) of the uterus was performed to assess pre-embolization fibroid morphology. Data were collected on fibroid size, type, and number and included analysis of follow-up imaging to assess response. There have been 67 pregnancies in 51 women, with 40 live births. Intramural fibroids were seen in 62.7% of the women (32/48). Of these the fibroids were multiple in 16. A further 12 women had submucosal fibroids, with equal numbers of types 1 and 2. Two of these women had coexistent intramural fibroids. In six women the fibroids could not be individually delineated and formed a complex mass. All subtypes of fibroid were represented in those subgroups of women achieving a live birth versus those who did not. These results demonstrate that the location of uterine fibroids did not adversely affect subsequent pregnancy in the patient population investigated. Although this is only a small qualitative study, it does suggest that all types of fibroids treated with UAE have the potential for future fertility.

  17. Characterization of punctate inner choroidopathy using enhanced depth imaging optical coherence tomography.

    PubMed

    Zarranz-Ventura, Javier; Sim, Dawn A; Keane, Pearse A; Patel, Praveen J; Westcott, Mark C; Lee, Richard W; Tufail, Adnan; Pavesio, Carlos E

    2014-09-01

    To perform qualitative and quantitative analyses of retinal and choroidal morphology in patients with punctate inner choroidopathy (PIC) using enhanced depth imaging optical coherence tomography (EDI-OCT). Cross-sectional, consecutive series. A total of 2242 patients attending 2 tertiary referral uveitis clinics at Moorfields Eye Hospital were screened; 46 patients with a PIC diagnosis were identified, and 35 eyes (35 patients) with clinically inactive PIC had EDI-OCT images that met the inclusion criteria. Punctate inner choroidopathy lesions were qualitatively assessed for retinal features, such as (1) focal elevation of the retinal pigment epithelium (RPE), (2) focal atrophy of the outer retina/RPE, and (3) presence of sub-RPE hyperreflective deposits and choroidal features: (a) presence of focal hyperreflective dots in the inner choroid and (b) focal thinning of the choroid adjacent to PIC lesions. Quantitative analyses of the retina, choroid, and choroidal sublayers were performed, and associations with clinical and demographic data were examined. Prevalence of each lesion pattern and thickness of retinal and choroidal layers. A total of 90 discrete PIC lesions were captured; 46.6% of PIC lesions consisted of focal atrophy of the outer retina and RPE; 34.4% consisted of sub-RPE hyperreflective deposits; and 18.8% consisted of localized RPE elevation with underlying hyporeflective space. Focal hyperreflective dots were seen in the inner choroid of 68.5% of patients, with 17.1% of eyes presenting focal choroidal thinning underlying PIC lesions. After excluding high myopes, patients with "atypical" PIC had reduced retinal thickness compared with patients with "typical" PIC (246.65±30.2 vs. 270.05±24.6 μm; P = 0.04), and greater disease duration was associated with decreases in retinal thickness (r = -0.53; P = 0.01). A significant correlation was observed between best-corrected visual acuity and foveal retinal thickness (r = -0.40; P = 0.03). In a large series of

  18. Diurnal variations in optical depth at Mars

    NASA Technical Reports Server (NTRS)

    Colburn, D. S.; Pollack, J. B.; Haberle, R. M.

    1989-01-01

    Viking lander camera images of the Sun were used to compute atmospheric optical depth at two sites over a period of 1 1/3 martian years. The complete set of 1044 optical depth determinations is presented in graphical and tabular form. Error estimates are presented in detail. Optical depths in the morning (AM) are generally larger than in the afternoon (PM). The AM-PM differences are ascribed to condensation of water vapor into atmospheric ice aerosols at night and their evaporation at midday. A smoothed time series of these differences shows several seasonal peaks. These are simulated using a one-dimensional radiative-convective model which predicts martian atmospheric temperature profiles. A calculation combining these profiles with water vapor measurements from the Mars Atmospheric Water Detector is used to predict when the diurnal variations of water condensation should occur. The model reproduces a majority of the observed peaks and shows the factors influencing the process. Diurnal variation of condensation is shown to peak when latitude and season combine to warm the atmosphere to the optimum temperature: cool enough to condense vapor at night and warm enough to cause evaporation at midday.
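
Retrieving optical depth from Sun images conventionally uses Beer's law under a plane-parallel atmosphere: the measured solar signal falls off as exp(-tau * m), where the airmass m ≈ 1/cos(zenith angle). A minimal sketch under those assumptions (the secant airmass approximation breaks down near the horizon, and the function names here are our own):

```python
import math

def optical_depth(sun_signal, top_signal, solar_zenith_deg):
    """Column optical depth tau from a Sun image.

    sun_signal  -- measured solar intensity at the surface
    top_signal  -- calibrated top-of-atmosphere intensity
    Inverts I = I0 * exp(-tau * m) with airmass m = 1 / cos(zenith).
    """
    airmass = 1.0 / math.cos(math.radians(solar_zenith_deg))
    return math.log(top_signal / sun_signal) / airmass
```

Differencing retrievals from morning and afternoon images at similar airmass isolates the diurnal (AM-PM) component the abstract discusses.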

  19. SU-E-T-296: Dosimetric Analysis of Small Animal Image-Guided Irradiator Using High Resolution Optical CT Imaging of 3D Dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Y; Qian, X; Wuu, C

    Purpose: To verify the dosimetric characteristics of a small animal image-guided irradiator using high-resolution optical CT imaging of 3D dosimeters. Methods: PRESAGE 3D dosimeters were used to determine dosimetric characteristics of a small animal image-guided irradiator and compared with EBT2 films. Cylindrical PRESAGE dosimeters with 7 cm height and 6 cm diameter were placed along the central axis of the beam. The films were positioned between 6×6 cm² cubed plastic water phantoms perpendicular to the beam direction at multiple depths. PRESAGE dosimeters and EBT2 films were then irradiated with the irradiator beams at 220 kVp and 13 mA. Each of the irradiated PRESAGE dosimeters, named PA1, PA2, PB1, and PB2, was independently scanned using a high-resolution single laser beam optical CT scanner. The transverse images were reconstructed at a high-resolution pixel size of 0.1 mm. A commercial Epson Expression 10000XL flatbed scanner was used for readout of the irradiated EBT2 films at a 0.4 mm pixel resolution. PDD curves and beam profiles were measured for the irradiated PRESAGE dosimeters and EBT2 films. Results: The PDD agreements between the irradiated PRESAGE dosimeters PA1, PA2, PB1, PB2 and the EBT2 films were 1.7, 2.3, 1.9, and 1.9% for the multiple depths at 1, 5, 10, 15, 20, 30, 40 and 50 mm, respectively. The FWHM measurements for each PRESAGE dosimeter and film agreed within 0.5, 1.1, 0.4, and 1.7%, respectively, at 30 mm depth. Both PDD and FWHM measurements for the PRESAGE dosimeters and the films agreed overall within 2%. The 20%-80% penumbral widths of each PRESAGE dosimeter and the film at a given depth were respectively found to be 0.97, 0.91, 0.79, 0.88, and 0.37 mm. Conclusion: Dosimetric characteristics of a small animal image-guided irradiator have been demonstrated with the measurements of PRESAGE dosimeters and EBT2 film. With the high resolution and accuracy obtained from this 3D dosimetry system, precise targeting small animal irradiation can

  20. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
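
In the special case of pure lateral camera translation, the flow magnitude relates to scene depth as u = f·T/Z, so depth follows by inversion; a toy sketch of that underlying geometry (this is not the authors' flow-field alignment method, and all names are illustrative):

```python
def depth_from_flow(flow_px, focal_px, translation_m):
    """Per-pixel depth (meters) from optical-flow magnitudes under pure
    lateral camera translation.

    flow_px       -- flow magnitudes in pixels between the two frames
    focal_px      -- focal length in pixels
    translation_m -- lateral camera translation in meters
    Uses Z = f * T / u; zero flow corresponds to infinite depth.
    """
    return [focal_px * translation_m / u if u > 0 else float('inf')
            for u in flow_px]
```

Real scenes mix translation and rotation, which is why methods like the one above must first compensate the rotational flow component before depth can be read off.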