Extended depth of field imaging for high speed object analysis
NASA Technical Reports Server (NTRS)
Frost, Keith (Inventor); Ortyn, William (Inventor); Basiji, David (Inventor); Bauer, Richard (Inventor); Liang, Luchuan (Inventor); Hall, Brian (Inventor); Perry, David (Inventor)
2011-01-01
A high-speed, high-resolution flow imaging system is modified to achieve extended depth of field imaging. An optical distortion element is introduced into the flow imaging system. Light from an object, such as a cell, is distorted by the distortion element, such that a point spread function (PSF) of the imaging system is invariant across an extended depth of field. The distorted light is spectrally dispersed, and the dispersed light is used to simultaneously generate a plurality of images. The images are detected, and image processing is used to enhance the detected images by compensating for the distortion, to achieve extended depth of field images of the object. The post-detection image processing preferably involves deconvolution, and requires knowledge of the PSF of the imaging system as modified by the optical distortion element.
Enhanced optical clearing of skin in vivo and optical coherence tomography in-depth imaging
NASA Astrophysics Data System (ADS)
Wen, Xiang; Jacques, Steven L.; Tuchin, Valery V.; Zhu, Dan
2012-06-01
The strong optical scattering of skin tissue makes it very difficult for optical coherence tomography (OCT) to achieve deep imaging in skin. Significant optical clearing of in vivo rat skin sites was achieved within 15 min by topical application of the optical clearing agent PEG-400, a chemical enhancer (thiazone or propanediol), and physical massage. Only when all three components were applied together could a 15 min treatment achieve a threefold increase in the OCT reflectance from a 300 μm depth and a 31% enhancement in the imaging depth Z_threshold.
Noise removal in extended depth of field microscope images through nonlinear signal processing.
Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J
2013-04-01
Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.
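The conventional linear Wiener filter that this nonlinear scheme improves upon can be sketched in a few lines. This is a generic frequency-domain implementation, not the authors' code; the noise-to-signal ratio `nsr` is an assumed tuning parameter:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution.

    nsr is the assumed noise-to-signal power ratio: larger values
    suppress high-frequency noise amplification at the cost of detail,
    which is exactly the trade-off described above.
    """
    H = np.fft.fft2(psf, s=blurred.shape)        # optical transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

As nsr approaches zero this reduces to naive inverse filtering, which is the source of the high-frequency noise amplification the nonlinear training-based filter is designed to avoid.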
High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.
Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S
2018-03-05
A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
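The "standard cross-correlation approach" for high-SNR data can be sketched as follows. This is a schematic reconstruction, not the authors' pipeline; the gate response kernel and variable names are illustrative assumptions:

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def depth_from_gated_counts(counts, delays, kernel):
    """Estimate target range from photon counts acquired over a
    sequence of temporal gate delay steps.

    counts:  photons detected at each delay step
    delays:  gate delay (seconds) for each step
    kernel:  expected temporal response (gate shape convolved with
             the laser pulse), assumed known and symmetric
    """
    # cross-correlate the count histogram with the expected response
    xc = np.correlate(counts - counts.mean(),
                      kernel - kernel.mean(), mode="same")
    t = delays[np.argmax(xc)]    # delay of best match
    return C * t / 2.0           # two-way travel time -> range
```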
Layered compression for high-precision depth data.
Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen
2015-12-01
With the development of depth data acquisition technologies, access to high-precision depth data with more than 8 bits per sample has become much easier, and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-b image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel-domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the LSBs layer remains in 8-b format after absorbing the quantization error from the MSBs layer. For the LSBs layer, a standard 8-b image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme achieves real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated by this scheme achieves better performance in view synthesis and gesture recognition applications than conventional coding schemes, because of the error control algorithm.
NASA Astrophysics Data System (ADS)
Balu, Mihaela; Saytashev, Ilyas; Hou, Jue; Dantus, Marcos; Tromberg, Bruce J.
2016-02-01
We report on a direct comparison between Ti:Sapphire and Yb fiber lasers for depth-resolved label-free multimodal imaging of human skin. We found that the penetration depth achieved with the Yb laser was 80% greater than for the Ti:Sapphire. Third harmonic generation (THG) imaging with Yb laser excitation provides additional information about skin structure. Our results indicate the potential of fiber-based laser systems for moving into clinical use.
Image processing operations achievable with the Microchannel Spatial Light Modulator
NASA Astrophysics Data System (ADS)
Warde, C.; Fisher, A. D.; Thackara, J. I.; Weiss, A. M.
1980-01-01
The Microchannel Spatial Light Modulator (MSLM) is a versatile, optically-addressed, highly-sensitive device that is well suited for low-light-level, real-time, optical information processing. It consists of a photocathode, a microchannel plate (MCP), a planar acceleration grid, and an electro-optic plate in proximity focus. A framing rate of 20 Hz with full modulation depth, and 100 Hz with 20% modulation depth has been achieved in a vacuum-demountable LiTaO3 device. A halfwave exposure sensitivity of 2.2 mJ/sq cm and an optical information storage time of more than 2 months have been achieved in a similar gridless LiTaO3 device employing a visible photocathode. Image processing operations such as analog and digital thresholding, real-time image hard clipping, contrast reversal, contrast enhancement, image addition and subtraction, and binary-level logic operations such as AND, OR, XOR, and NOR can be achieved with this device. This collection of achievable image processing characteristics makes the MSLM potentially useful for a number of smart sensor applications.
Depth-aware image seam carving.
Shen, Jianbing; Wang, Dapeng; Li, Xuelong
2013-10-01
An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, removing seams primarily from secondary regions of the scene. However, it remains difficult to identify the important and salient objects so that they are not distorted after resizing. In this paper, we develop a novel depth-aware single-image seam carving approach that takes advantage of modern depth cameras such as the Kinect sensor, which captures an RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph-cut-based energy optimization. Our method achieves better seam carving performance by cutting fewer seams through near objects and more seams through distant objects. To the best of our knowledge, ours is the first work to use the true depth map captured by a Kinect depth camera for single-image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods.
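The core of any seam carving method, depth-aware variants included, is a dynamic-programming search for the minimum-energy seam. The sketch below combines gradient energy with an inverted-depth term so that near objects cost more to cut; the weighting `alpha` and the simple energy form are illustrative assumptions, not the paper's JND-based formulation:

```python
import numpy as np

def depth_aware_energy(gray, depth, alpha=0.5):
    """Gradient energy plus a nearness term: closer pixels (smaller
    depth) receive higher energy, so seams avoid near objects."""
    gy, gx = np.gradient(gray.astype(float))
    nearness = depth.max() - depth.astype(float)
    return np.abs(gx) + np.abs(gy) + alpha * nearness

def find_vertical_seam(energy):
    """Minimum-energy 8-connected vertical seam via dynamic programming."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):
        left = np.r_[np.inf, cost[y - 1, :-1]]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, right), cost[y - 1])
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):          # backtrack along cheapest path
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam
```

Repeatedly finding and removing such seams resizes the image while the depth term steers removals toward distant regions.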
NASA Astrophysics Data System (ADS)
Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing
2018-03-01
In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which decreases the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within one pitch to get N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain slice images of the 3D scene, and the sum modulus difference (SMD) blur metric is applied to these slice images to recover the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
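The blur-metric step can be illustrated with a minimal sum-modulus-difference sharpness measure: the reconstructed slice whose SMD is largest is taken as in focus, which is how the depth of a scene point is recovered. This is a generic version of the metric under its usual definition, not the authors' exact formulation:

```python
import numpy as np

def smd(img):
    """Sum-modulus-difference sharpness: total absolute difference
    between horizontally and vertically adjacent pixels. In-focus
    slices have strong local contrast and therefore a large SMD."""
    img = img.astype(float)
    return (np.abs(np.diff(img, axis=1)).sum()
            + np.abs(np.diff(img, axis=0)).sum())

def best_focus_index(slices):
    """Index of the reconstructed slice with maximum sharpness,
    i.e. the estimated depth plane of the object."""
    return max(range(len(slices)), key=lambda i: smd(slices[i]))
```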
No scanning depth imaging system based on TOF
NASA Astrophysics Data System (ADS)
Sun, Rongchun; Piao, Yan; Wang, Yu; Liu, Shuo
2016-03-01
Quickly obtaining a 3D model of real-world objects requires multi-point ranging, but traditional measurement methods operate point by point or line by line, which is slow and inefficient. In this paper, a scannerless depth imaging system based on TOF (time of flight) is proposed. The system comprises a light source circuit, a dedicated infrared image sensor module, an image data processor and controller, a data cache circuit, and a communication circuit. Following the TOF measurement principle, an image sequence is collected by a high-speed CMOS sensor, distance information is obtained by identifying the phase difference, and an amplitude image is also calculated. Experimental results show that the system achieves scannerless depth imaging with good performance.
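The phase-difference step can be illustrated with the common continuous-wave "four-bucket" estimator, in which four samples of the correlation waveform taken 90° apart yield both the distance and the amplitude image. This is a textbook TOF formulation under an assumed modulation frequency, not necessarily the exact scheme of this system:

```python
import math

C = 299792458.0  # speed of light, m/s

def tof_distance(a0, a1, a2, a3, f_mod):
    """Distance from four phase-stepped samples (0, 90, 180, 270 deg)
    of the correlation waveform; valid within one ambiguity interval
    C / (2 * f_mod)."""
    phase = math.atan2(a3 - a1, a0 - a2)      # phase difference, radians
    if phase < 0.0:
        phase += 2.0 * math.pi
    return C * phase / (4.0 * math.pi * f_mod)

def tof_amplitude(a0, a1, a2, a3):
    """Modulation amplitude from the same four samples, giving the
    amplitude image alongside the depth image."""
    return math.sqrt((a3 - a1) ** 2 + (a0 - a2) ** 2) / 2.0
```

Because every pixel of the sensor computes this independently, the whole depth image is obtained in one shot, with no scanning.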
NASA Astrophysics Data System (ADS)
King, Sharon V.; Yuan, Shuai; Preza, Chrysanthe
2018-03-01
Effectiveness of extended depth of field microscopy (EDFM) implementation with wavefront encoding methods is reduced by depth-induced spherical aberration (SA) due to reliance of this approach on a defined point spread function (PSF). Evaluation of the engineered PSF's robustness to SA, when a specific phase mask design is used, is presented in terms of the final restored image quality. Synthetic intermediate images were generated using selected generalized cubic and cubic phase mask designs. Experimental intermediate images were acquired using the same phase mask designs projected from a liquid crystal spatial light modulator. Intermediate images were restored using the penalized space-invariant expectation maximization and the regularized linear least squares algorithms. In the presence of depth-induced SA, systems characterized by radially symmetric PSFs, coupled with model-based computational methods, achieve microscope imaging performance with fewer deviations in structural fidelity (e.g., artifacts) in simulation and experiment and 50% more accurate positioning of 1-μm beads at 10-μm depth in simulation than those with radially asymmetric PSFs. Despite a drop in the signal-to-noise ratio after processing, EDFM is shown to achieve the conventional resolution limit when a model-based reconstruction algorithm with appropriate regularization is used. These trends are also found in images of fixed fluorescently labeled brine shrimp, not adjacent to the coverslip, and fluorescently labeled mitochondria in live cells.
Introducing the depth transfer curve for 3D capture system characterization
NASA Astrophysics Data System (ADS)
Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas
2011-03-01
3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacements on the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
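The geometry behind retinal disparity is the standard stereo relation: a point at depth Z appears displaced by d = f·B/Z pixels between the two cameras, where f is the focal length in pixels and B the camera baseline. A minimal sketch, with illustrative parameter values:

```python
def disparity_px(depth_m, focal_px, baseline_m):
    """Horizontal displacement between left and right images of a
    point at depth_m, for focal length focal_px (pixels) and camera
    separation baseline_m (meters)."""
    return focal_px * baseline_m / depth_m

def depth_from_disparity(disp_px, focal_px, baseline_m):
    """Inverse relation: recover depth from a measured disparity."""
    return focal_px * baseline_m / disp_px
```

Nearer objects produce larger disparities, so a "black box" depth characterization can compare measured disparities against ground-truth depths through this relation.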
High bit depth infrared image compression via low bit depth codecs
NASA Astrophysics Data System (ADS)
Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren
2017-08-01
Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites or infrastructure inspection by unmanned airborne vehicles, will require 16-bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16-bit depth images via 8-bit depth codecs in the following way. First, an input 16-bit depth image is mapped into two 8-bit depth images: the first contains only the most significant bytes (MSB image) and the second contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both the MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16-bit depth format. A preliminary result shows that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
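The byte-split mapping is simple to state precisely; a generic sketch (not the authors' code) of the forward and inverse mappings:

```python
import numpy as np

def split_16bit(img16):
    """Map a 16-bit image to two 8-bit images: most significant
    bytes (MSB image) and least significant bytes (LSB image)."""
    msb = (img16 >> 8).astype(np.uint8)
    lsb = (img16 & 0xFF).astype(np.uint8)
    return msb, lsb

def merge_16bit(msb, lsb):
    """Recombine the two 8-bit images into the 16-bit original."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
```

The split itself is lossless; after recombination an MSB coding error is worth 256 LSB units, so the MSB image must be coded at much higher quality than the LSB image, which is the parameter-allocation trade-off the paper analyzes.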
Single grating x-ray imaging for dynamic biological systems
NASA Astrophysics Data System (ADS)
Morgan, Kaye S.; Paganin, David M.; Parsons, David W.; Donnelley, Martin; Yagi, Naoto; Uesugi, Kentaro; Suzuki, Yoshio; Takeuchi, Akihisa; Siu, Karen K. W.
2012-07-01
Biomedical studies are already benefiting from the excellent contrast offered by phase contrast x-ray imaging, but live imaging work presents several challenges. Living samples make it particularly difficult to achieve high resolution, sensitive phase contrast images, as exposures must be short and cannot be repeated. We therefore present a single-exposure, high-flux method of differential phase contrast imaging [1, 2, 3] in the context of imaging live airways for Cystic Fibrosis (CF) treatment assessment [4]. The CF study seeks to non-invasively observe the liquid lining the airways, which should increase in depth in response to effective treatments. Both high spatial resolution and sensitivity are required in order to track micron size changes in a liquid that is not easily differentiated from the tissue on which it lies. Our imaging method achieves these goals by using a single attenuation grating or grid as a reference pattern, and analyzing how the sample deforms the pattern to quantitatively retrieve the phase depth of the sample. The deformations are mapped at each pixel in the image using local cross-correlations comparing each 'sample and pattern' image with a reference 'pattern only' image taken before the sample is introduced. This produces a differential phase image, which may be integrated to give the sample phase depth.
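The per-pixel deformation measurement can be sketched in one dimension: cross-correlate a window of the 'sample and pattern' signal against the corresponding window of the 'pattern only' reference, and take the correlation peak as the local displacement. The window sizes are arbitrary choices for illustration:

```python
import numpy as np

def local_shift(reference, distorted, center, half=8):
    """Local displacement (pixels) of the reference pattern in the
    distorted signal around `center`, from the peak of a windowed
    cross-correlation. The displacement field is proportional to the
    transverse gradient of the sample's phase depth."""
    win = reference[center - half: center + half]
    search = distorted[center - 2 * half: center + 2 * half]
    xc = np.correlate(search - search.mean(),
                      win - win.mean(), mode="valid")
    return int(np.argmax(xc)) - half
```

Mapping this shift at every pixel yields the differential phase image, which is then integrated to give the sample's phase depth.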
Depth image enhancement using perceptual texture priors
NASA Astrophysics Data System (ADS)
Bang, Duhyeon; Shim, Hyunjung
2015-03-01
A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, depth cameras suffer from severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress the depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perception-based depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect our method to be effective for enhancing the details of depth images from 3D sensors and for providing a high-fidelity virtual reality experience.
Joint optic disc and cup boundary extraction from monocular fundus images.
Chakravarty, Arunava; Sivaswamy, Jayanthi
2017-08-01
Accurate segmentation of optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable.
Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)
Legleiter, Carl
2016-01-01
Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R² = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.
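The core of IDQT is a quantile-to-quantile mapping: a pixel at the p-th percentile of the image's brightness distribution is assigned the depth at the p-th percentile of the depth distribution. A minimal sketch under assumed variable names (and ignoring the sign of the brightness-depth relation, which may need to be reversed when deeper water is darker):

```python
import numpy as np

def idqt(calib_pixels, calib_depths, image_pixels):
    """Image-to-Depth Quantile Transformation (sketch).

    calib_pixels: pixel values defining the image frequency distribution
    calib_depths: depths defining the depth frequency distribution
                  (need not be co-located with the pixels: the
                  calibration is aspatial)
    image_pixels: pixel values to map to depth estimates
    """
    # nonexceedance probability of each image pixel
    ranks = np.searchsorted(np.sort(calib_pixels), image_pixels,
                            side="right")
    quantiles = ranks / float(len(calib_pixels))
    # depth sharing that quantile in the depth distribution
    return np.quantile(calib_depths, np.clip(quantiles, 0.0, 1.0))
```

Because only the two frequency distributions enter the calibration, the mapping is insensitive to spatial misalignment between the field depths and the image.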
Wide field video-rate two-photon imaging by using spinning disk beam scanner
NASA Astrophysics Data System (ADS)
Maeda, Yasuhiro; Kurokawa, Kazuo; Ito, Yoko; Wada, Satoshi; Nakano, Akihiko
2018-02-01
Microscopy with a wider field of view, deeper penetration depth, higher spatial resolution, and higher imaging speed is required to investigate intercellular dynamics and the interactions of molecules and organelles in cells or tissue in more detail. Two-photon microscopy with a near-infrared (NIR) femtosecond laser improves penetration depth and spatial resolution, but video-rate or high-speed imaging over a wide field of view is difficult with a conventional two-photon microscope because of its point-by-point scanning. In this study, we developed a two-photon microscope combining a spinning-disk beam scanner with a femtosecond NIR fiber laser delivering around 10 W of average power to meet the above requirements. The laser consists of an oscillator based on a mode-locked Yb fiber laser, a two-stage pre-amplifier, a main amplifier based on a Yb-doped photonic crystal fiber (PCF), and a pulse compressor with a pair of gratings. It generates a beam with up to 10 W average power, 300 fs pulse width, and 72 MHz repetition rate, which is directed into a spinning-disk beam scanner (Yokogawa Electric) optimized for two-photon imaging. With this system, we obtained 3D images with over 1 mm penetration depth and video-rate images with a 350 × 350 µm field of view from the root of Arabidopsis thaliana.
A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion
NASA Astrophysics Data System (ADS)
Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen
2017-09-01
In recent years, 3D movies have attracted more and more attention because of their immersive stereoscopic experience. However, 3D content is still scarce, so estimating depth information for 2D to 3D conversion of video is increasingly important. In this paper, we present a novel algorithm to estimate depth information from a video via scene classification. In order to obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape type, close-up type, and linear perspective type. A specific algorithm then divides a landscape-type image into many blocks and assigns depth values using the relative height cue of the image. For a close-up-type image, a saliency-based method is adopted to enhance the foreground, and the result is combined with a global depth gradient to generate the final depth map. For the linear perspective type, vanishing line detection yields the vanishing point, which is regarded as the farthest point from the viewer and assigned the deepest depth value; the rest of the image is assigned depth values according to the distance between each point and the vanishing point. Finally, depth image-based rendering is employed to generate stereoscopic virtual views after bilateral filtering. Experiments show that the proposed algorithm can achieve realistic 3D effects and yield satisfactory results, with perception scores of the anaglyph images between 6.8 and 7.8.
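The linear perspective branch reduces to assigning depth by distance from the vanishing point. A sketch under the convention that larger values mean farther from the viewer (deepest at the vanishing point); the linear falloff is an illustrative choice, not the paper's exact assignment:

```python
import numpy as np

def vanishing_point_depth(h, w, vp_row, vp_col, far=255):
    """Gradient depth map for the linear perspective case: the
    vanishing point gets the deepest value `far`, and the assigned
    depth decreases linearly with distance from it."""
    rows, cols = np.mgrid[0:h, 0:w]
    dist = np.hypot(rows - vp_row, cols - vp_col)
    return np.rint(far * (1.0 - dist / dist.max())).astype(np.uint8)
```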
Nanometric depth resolution from multi-focal images in microscopy.
Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H
2011-07-06
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.
Kothapalli, Sri-Rajasekhar; Ma, Te-Jen; Vaithilingam, Srikant; Oralkan, Ömer
2014-01-01
In this paper, we demonstrate 3-D photoacoustic imaging (PAI) of light absorbing objects embedded as deep as 5 cm inside strong optically scattering phantoms using a miniaturized (4 mm × 4 mm × 500 µm), 2-D capacitive micromachined ultrasonic transducer (CMUT) array of 16 × 16 elements with a center frequency of 5.5 MHz. Two-dimensional tomographic images and 3-D volumetric images of the objects placed at different depths are presented. In addition, we studied the sensitivity of CMUT-based PAI to the concentration of indocyanine green dye at 5 cm depth inside the phantom. Under optimized experimental conditions, the objects at 5 cm depth can be imaged with SNR of about 35 dB and a spatial resolution of approximately 500 µm. Results demonstrate that CMUTs with integrated front-end amplifier circuits are an attractive choice for achieving relatively high depth sensitivity for PAI. PMID:22249594
Time multiplexing based extended depth of focus imaging.
Ilovitsh, Asaf; Zalevsky, Zeev
2016-01-01
We propose to utilize the time multiplexing super resolution method to extend the depth of focus of an imaging system. In standard time multiplexing, super resolution is achieved by generating duplications of the optical transfer function in the spectral domain using moving gratings. While this improves the spatial resolution, it does not increase the depth of focus. By changing the grating frequency, and thereby the positions of the duplications, it is possible to obtain an extended depth of focus. The proposed method is presented analytically, demonstrated via numerical simulations and validated by a laboratory experiment.
Computational-optical microscopy for 3D biological imaging beyond the diffraction limit
NASA Astrophysics Data System (ADS)
Grover, Ginni
In recent years, super-resolution imaging has become an important fluorescent microscopy tool. It has enabled imaging of structures smaller than the optical diffraction limit with resolution less than 50 nm. Extension to high-resolution volume imaging has been achieved by integration with various optical techniques. In this thesis, development of a fluorescent microscope to enable high resolution, extended depth, three dimensional (3D) imaging is discussed; which is achieved by integration of computational methods with optical systems. In the first part of the thesis, point spread function (PSF) engineering for volume imaging is discussed. A class of PSFs, referred to as double-helix (DH) PSFs, is generated. The PSFs exhibit two focused spots in the image plane which rotate about the optical axis, encoding depth in rotation of the image. These PSFs extend the depth-of-field up to a factor of ˜5. Precision performance of the DH-PSFs, based on an information theoretical analysis, is compared with other 3D methods with conclusion that the DH-PSFs provide the best precision and the longest depth-of-field. Out of various possible DH-PSFs, a suitable PSF is obtained for super-resolution microscopy. The DH-PSFs are implemented in imaging systems, such as a microscope, with a special phase modulation at the pupil plane. Surface-relief elements which are polarization-insensitive and ˜90% light efficient are developed for phase modulation. The photon-efficient DH-PSF microscopes thus developed are used, along with optimal position estimation algorithms, for tracking and super-resolution imaging in 3D. Imaging at depths-of-field of up to 2.5 µm is achieved without focus scanning. Microtubules were imaged with 3D resolution of (6, 9, 39) nm, which is in close agreement with the theoretical limit. A quantitative study of co-localization of two proteins in volume was conducted in live bacteria.
In the last part of the thesis, practical aspects of the DH-PSF microscope are discussed. A method is developed to stabilize it over extended periods of time with 3-4 nm precision in 3D, and drift-free 3D super-resolution is demonstrated. A PSF correction algorithm is demonstrated to improve the characteristics of the DH-PSF in an experiment where it is implemented with a polarization-insensitive liquid crystal spatial light modulator.
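The depth-from-rotation readout described above can be sketched in a few lines. This is an illustrative reduction, assuming a linear angle-to-depth calibration (real DH-PSF calibrations are measured from bead stacks and are only locally linear); the function names and the slope value are hypothetical.

```python
import numpy as np

def lobe_rotation_angle(x1, y1, x2, y2):
    """Angle (radians) of the line joining the two DH-PSF lobes."""
    return np.arctan2(y2 - y1, x2 - x1)

def depth_from_rotation(angle, slope_um_per_rad, z0_um=0.0):
    """Map lobe rotation to axial position via a linear calibration.
    slope and z0 would come from a bead calibration stack (values here are
    hypothetical)."""
    return z0_um + slope_um_per_rad * angle

# Example: lobes localized at (0,0) and (1,1) -> 45-degree rotation
theta = lobe_rotation_angle(0.0, 0.0, 1.0, 1.0)
z = depth_from_rotation(theta, slope_um_per_rad=1000.0)
```

In practice the two lobe centroids are themselves estimated by fitting the image of each emitter, and the calibration curve is inverted per field position.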
A Flexible Annular-Array Imaging Platform for Micro-Ultrasound
Qiu, Weibao; Yu, Yanyan; Chabok, Hamid Reza; Liu, Cheng; Tsang, Fu Keung; Zhou, Qifa; Shung, K. Kirk; Zheng, Hairong; Sun, Lei
2013-01-01
Micro-ultrasound is an invaluable imaging tool for many clinical and preclinical applications requiring high resolution (approximately several tens of micrometers). Imaging systems for micro-ultrasound, including single-element imaging systems and linear-array imaging systems, have been developed extensively in recent years. Single-element systems are cheaper, but linear-array systems give much better image quality at a higher expense. Annular-array-based systems provide a third alternative, striking a balance between image quality and expense. This paper presents the development of a novel programmable and real-time annular-array imaging platform for micro-ultrasound. It supports multi-channel dynamic beamforming techniques for large-depth-of-field imaging. The major image processing algorithms were implemented with a novel field-programmable gate array technology for high speed and flexibility. Real-time imaging was achieved by fast processing algorithms and a high-speed data transfer interface. The platform utilizes a printed circuit board scheme incorporating state-of-the-art electronics for compactness and cost effectiveness. Extensive tests, including hardware, algorithm, wire phantom, and tissue-mimicking phantom measurements, were conducted to demonstrate the good performance of the platform. The calculated contrast-to-noise ratio (CNR) of the tissue phantom measurements was higher than 1.2 over the 3.8 to 8.7 mm imaging depth range. The platform supported more than 25 images per second for real-time image acquisition. The depth-of-field showed about a 2.5-fold improvement compared to single-element transducer imaging. PMID:23287923
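The contrast-to-noise ratio quoted above can be computed from region statistics. Below is a minimal sketch using one common CNR definition (difference of region means over pooled standard deviation); the paper's exact formula is not given in the abstract, so treat this as an assumption.

```python
import numpy as np

def cnr(target, background):
    """Contrast-to-noise ratio: difference of region means over the pooled
    standard deviation of the two regions. One common definition; the paper
    may use a different normalization."""
    t = np.asarray(target, dtype=float)
    b = np.asarray(background, dtype=float)
    return abs(t.mean() - b.mean()) / np.sqrt(t.var() + b.var())

# Tiny worked example: region means 5 and 2, each with unit variance
value = cnr(np.array([4.0, 6.0]), np.array([1.0, 3.0]))
```

The regions would normally be pixel samples drawn from a lesion and from adjacent background in the envelope-detected image.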
Wang, Ruikang K.; An, Lin; Francis, Peter; Wilson, David J.
2010-01-01
We demonstrate that depth-resolved, detailed ocular perfusion maps within the retina and choroid can be obtained with ultrahigh-sensitive optical microangiography (OMAG). In contrast to conventional OMAG, we apply the OMAG algorithm along the slow scanning axis to achieve ultrahigh-sensitive imaging of the slow flows within capillaries. We use an 840 nm system operating at an imaging rate of 400 frames/s that requires 3 s to complete one 3D scan of a ~3×3 mm² area on the retina. We show the superior imaging performance of OMAG in providing functional images of capillary-level microcirculation at different landmarked depths within the retina and choroid that correlate well with standard retinal pathology. PMID:20436605
Development of HiLo Microscope and its use in In-Vivo Applications
NASA Astrophysics Data System (ADS)
Patel, Shreyas J.
Optical sectioning is an invaluable capability in biomedical research, as it allows visualization of a biological sample at different depths free of background scattering. Unfortunately, most current microscopy techniques that offer optical sectioning require complex instrumentation and thus are generally costly. HiLo microscopy, on the other hand, offers the same functionality and advantage at a relatively low cost. Hence, the work described in this thesis involves the design, construction, and application of a HiLo microscope. More specifically, a standalone HiLo microscope was built, in addition to implementing HiLo microscopy on a standard fluorescence microscope. In HiLo microscopy, optical sectioning is achieved by acquiring two different types of images per focal plane. One image is acquired under uniform illumination and the other under speckle illumination. These images are processed using an algorithm that extracts in-focus information and removes features and glare that occur as a result of background fluorescence. To show the benefits of HiLo microscopy, several imaging experiments on various samples were performed under a HiLo microscope and compared against a traditional fluorescence microscope and a confocal microscope, which is considered the gold standard in optical imaging. In-vitro and ex-vivo imaging was performed on a set of pollen grains and on optically cleared mouse brain and heart slices. Each of these experiments showed a marked reduction in background scattering at different depths under HiLo microscopy. More importantly, HiLo imaging of an optically cleared heart slice demonstrated the emergence of different vasculature at different depths. Reduction of out-of-focus light increased the spatial resolution and allowed better visualization of capillary vessels. Furthermore, HiLo imaging was tested in vivo in a rodent dorsal window chamber model.
When imaging the same sample under a confocal microscope, the results were comparable between the two modalities. Additionally, a method of obtaining blood flow maps at different depths using a combination of HiLo and LSI imaging is also discussed. This combined technique could help assign blood flow to particular depths, which can help improve the outcomes of medical treatments such as pulsed dye laser and photodynamic therapy.
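The HiLo fusion step described above (in-focus low frequencies recovered from local speckle contrast, high frequencies taken from the uniform image) can be sketched as follows. This is a simplified illustration, with a box blur standing in for the proper low-pass filter; the kernel size, the `eta` balance factor, and the exact contrast weighting are assumptions, not the thesis's algorithm.

```python
import numpy as np

def _blur(img, k=5):
    """Separable box blur, used here as a stand-in low-pass filter."""
    ker = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, ker, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode='same'), 0, out)

def hilo(uniform, speckle, eta=1.0, k=5):
    """Fuse a uniform- and a speckle-illuminated image (simplified HiLo sketch).
    Lo band: uniform image weighted by local speckle contrast, then low-passed
    (speckle contrast is high only where the plane is in focus).
    Hi band: high-pass of the uniform image, which is inherently sectioned.
    eta balances the two bands."""
    u = uniform.astype(float)
    s = speckle.astype(float)
    mean_s = _blur(s, k)
    var_s = np.maximum(_blur(s * s, k) - mean_s ** 2, 0.0)
    contrast = np.sqrt(var_s) / (mean_s + 1e-9)   # local speckle contrast
    lo = _blur(contrast * u, k)
    hi = u - _blur(u, k)
    return eta * lo + hi

rng = np.random.default_rng(0)
uniform = np.ones((16, 16))
speckle = uniform * (1.0 + 0.5 * rng.standard_normal((16, 16)))
fused = hilo(uniform, speckle)
```

A real implementation would blend the bands at a matched crossover frequency so the transfer function is seamless.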
NASA Astrophysics Data System (ADS)
Yang, Jiamiao; Gong, Lei; Xu, Xiao; Hai, Pengfei; Suzuki, Yuta; Wang, Lihong V.
2017-03-01
Photoacoustic microscopy (PAM) has been extensively applied in biomedical studies because of its ability to visualize tissue morphology and physiology in vivo in three dimensions (3D). However, conventional PAM suffers from a rapidly decreasing resolution away from the focal plane because of the limited depth of focus of an objective lens, which inevitably degrades volumetric imaging quality. Here, we propose a novel method to synthesize an ultra-long light needle that extends a microscope's depth of focus beyond its physical limitations using wavefront engineering. Furthermore, it enables an improved lateral resolution that exceeds the diffraction limit of the objective lens. The virtual light needle can be flexibly synthesized anywhere throughout the imaging volume without mechanical scanning. Benefiting from these advantages, we developed synthetic light needle photoacoustic microscopy (SLN-PAM) to achieve extended depth of field (DOF), sub-diffraction, and motionless volumetric imaging. The DOF of our SLN-PAM system is up to 1800 µm, a more than 30-fold improvement over that of conventional PAM. Our system also achieves a lateral resolution of 1.8 µm (characterized at 532 nm with a 0.1 NA objective), about 50% better than the Rayleigh diffraction limit. Its superior imaging performance was demonstrated by 3D imaging of both non-biological and biological samples. This extended-DOF, sub-diffraction, motionless 3D PAM will open up new opportunities for potential biomedical applications.
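The reported figures can be sanity-checked against the classical diffraction formulas. This is a back-of-envelope consistency check, not a computation from the paper; the 0.61 λ/NA Rayleigh criterion and the inferred conventional DOF are assumptions.

```python
# Back-of-envelope check of the reported SLN-PAM figures (532 nm, 0.1 NA).
wavelength_um = 0.532
na = 0.1

rayleigh_um = 0.61 * wavelength_um / na          # classical lateral resolution limit (~3.2 um)
reported_um = 1.8                                # reported SLN-PAM lateral resolution
improvement = 1 - reported_um / rayleigh_um      # fraction finer than the Rayleigh limit

dof_um = 1800.0                                  # reported SLN-PAM depth of field
conventional_dof_um = dof_um / 30                # ">30-fold improvement" implies ~60 um
```

The ~45% figure is consistent with the abstract's "about 50%" claim, and the implied conventional DOF of ~60 µm is plausible for a 0.1 NA focus.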
Automatic laser welding and milling with in situ inline coherent imaging.
Webster, P J L; Wright, L G; Ji, Y; Galbraith, C M; Kinross, A W; Van Vlack, C; Fraser, J M
2014-11-01
Although new affordable high-power laser technologies enable many processing applications in science and industry, depth control remains a serious technical challenge. In this Letter we show that inline coherent imaging (ICI), with line rates up to 312 kHz and microsecond-duration capture times, is capable of directly measuring laser penetration depth, in a process as violent as kW-class keyhole welding. We exploit ICI's high speed, high dynamic range, and robustness to interference from other optical sources to achieve automatic, adaptive control of laser welding, as well as ablation, achieving 3D micron-scale sculpting in vastly different heterogeneous biological materials.
A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences
Zhu, Youding; Fujimura, Kikuo
2010-01-01
This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body poses can be estimated through model fitting using dense correspondences between depth data and an articulated human model (the local optimization method). Although this usually achieves high accuracy due to dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this key-point based method is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by the key-point based and local optimization methods. Experimental results and a performance comparison are presented to demonstrate the effectiveness of the proposed approach. PMID:22399933
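The simplest instance of the Bayesian integration described above is precision-weighted fusion of two Gaussian estimates, sketched below. The paper's framework is richer (it must also handle tracking failure and temporal dynamics); the numeric values are hypothetical.

```python
def fuse(mu_a, var_a, mu_b, var_b):
    """Precision-weighted fusion of two independent Gaussian estimates of the
    same quantity: the Bayesian posterior mean and variance. A minimal sketch
    of combining an accurate-but-fragile local-optimization estimate with a
    robust-but-coarse key-point estimate."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    var = 1.0 / (w_a + w_b)
    mu = var * (w_a * mu_a + w_b * mu_b)
    return mu, var

# Local optimization: joint angle 1.0 rad, variance 0.01 (confident)
# Key-point detection: joint angle 1.4 rad, variance 0.09 (coarse)
mu, var = fuse(1.0, 0.01, 1.4, 0.09)
```

The fused estimate stays close to the more confident source while its variance drops below either input, which is the property that makes the combination attractive.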
Planarity constrained multi-view depth map reconstruction for urban scenes
NASA Astrophysics Data System (ADS)
Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie
2018-05-01
Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, challenges arise when this technique is applied to urban scenes, where apparent man-made regular shapes are present. To address this challenge, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes is first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms popular multi-view depth map reconstruction with twice the accuracy on the aerial datasets, and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity of piecewise flat structures in urban scenes and restore the edges in depth-discontinuous areas.
Chen, Yuling; Lou, Yang; Yen, Jesse
2017-07-01
During conventional ultrasound imaging, the need for multiple transmissions per image and the time of flight for a desired imaging depth limit the frame rate of the system. Using a single plane wave pulse during each transmission, followed by parallel receive processing, allows for high-frame-rate imaging. However, image quality is degraded because of the lack of transmit focusing. Beamforming by spatial matched filtering (SMF) is a promising method which focuses ultrasonic energy using spatial filters constructed from the transmit-receive impulse response of the system. Studies by other researchers have shown that SMF beamforming can provide dynamic transmit-receive focusing throughout the field of view. In this paper, we apply SMF beamforming to plane wave transmissions (PWTs) to achieve both dynamic transmit-receive focusing at all imaging depths and a high imaging frame rate (>5000 frames per second). We demonstrated mathematically, through analysis based on the narrowband Rayleigh-Sommerfeld diffraction theory, that the combined method (PWT + SMF) achieves two-way focusing. Moreover, the broadband performance of PWT + SMF was quantified in terms of lateral resolution and contrast from both computer simulations and experimental data. Results were compared between SMF beamforming and conventional delay-and-sum (DAS) beamforming in both simulations and experiments. At an imaging depth of 40 mm, simulation results showed a 29% lateral resolution improvement and a 160% contrast improvement with PWT + SMF. These improvements were 17% and 48% for experimental data with noise.
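The core of SMF beamforming, correlating received data with the known transmit-receive impulse response, can be illustrated in 1D. This sketch uses a synthetic impulse response; the paper applies depth-dependent 2D spatial filters to full RF channel data.

```python
import numpy as np

def matched_filter(rf, h):
    """Filter one RF line by correlating with the system impulse response h.
    A 1D sketch of the spatial matched filtering idea; in the paper the
    filters are 2D and vary with depth."""
    return np.correlate(rf, h, mode='same')

# Synthetic example: the impulse response embedded at sample 50 of a quiet line
h = np.array([0.5, -1.0, 2.0, -1.0, 0.5])
rf = np.zeros(101)
rf[48:53] = h                     # echo from a point scatterer centered at 50
out = matched_filter(rf, h)
peak = int(np.argmax(out))        # matched-filter output peaks at the scatterer
```

The peak value equals the energy of the impulse response, which is why matched filtering maximizes signal-to-noise ratio at the scatterer location.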
Satheesha, T. Y.; Prasad, M. N. Giri; Dhruve, Kashyap D.
2017-01-01
Melanoma mortality rates are the highest among skin cancer patients. Melanoma is life-threatening when it grows beyond the dermis of the skin. Hence, depth is an important factor in diagnosing melanoma. This paper introduces a non-invasive computerized dermoscopy system that considers the estimated depth of skin lesions for diagnosis. A 3-D skin lesion reconstruction technique using the estimated depth obtained from regular dermoscopic images is presented. On the basis of the 3-D reconstruction, depth and 3-D shape features are extracted. In addition to 3-D features, regular color, texture, and 2-D shape features are also extracted. Feature extraction is critical to achieving accurate results. Apart from melanoma and in-situ melanoma, the proposed system is designed to diagnose basal cell carcinoma, blue nevus, dermatofibroma, haemangioma, seborrhoeic keratosis, and normal mole lesions. For experimental evaluation, the PH2, ISIC: Melanoma Project, and ATLAS dermoscopy data sets are considered. Different feature set combinations are considered and their performance is evaluated. Significant performance improvement is reported after the inclusion of the estimated depth and 3-D features. Good classification scores of sensitivity = 96% and specificity = 97% on the PH2 data set, and sensitivity = 98% and specificity = 99% on the ATLAS data set, are achieved. Experiments conducted to estimate tumor depth from the 3-D lesion reconstruction are presented. The experimental results prove that the proposed computerized dermoscopy system is efficient and can be used to diagnose varied skin lesion dermoscopy images. PMID:28512610
NASA Astrophysics Data System (ADS)
Serrels, K. A.; Ramsay, E.; Reid, D. T.
2009-02-01
We present experimental evidence for the resolution-enhancing effect of an annular pupil-plane aperture when performing nonlinear imaging in the vectorial-focusing regime through manipulation of the focal spot geometry. By acquiring two-photon optical beam-induced current images of a silicon integrated circuit using solid-immersion-lens microscopy at 1550 nm, we achieved 70 nm resolution. This result demonstrates a 36% reduction in the minimum effective focal spot diameter. In addition, the annular-aperture-induced extension of the depth of focus causes an observable decrease in the depth contrast of the resulting image, and we explain its origins using a simulation of the imaging process.
NASA Astrophysics Data System (ADS)
Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Choi, Wonshik
2016-03-01
With the advancement of 3D display technology, 3D imaging of macroscopic objects has drawn much attention, as it provides the content to display. The most widely used imaging methods include depth cameras, which measure time of flight for depth discrimination, and various structured illumination techniques. However, these existing methods have poor depth resolution, which makes imaging complicated structures difficult. In order to resolve this issue, we propose an imaging system based upon low-coherence interferometry and off-axis digital holographic imaging. By using a light source with a coherence length of 200 μm, we achieved a depth resolution of 100 μm. In order to map macroscopic objects with this high axial resolution, we installed a pair of prisms in the reference beam path for long-range scanning of the optical path length. Specifically, one prism was fixed in position, and the other was mounted on a translation stage and translated parallel to the first. Due to the multiple internal reflections between the two prisms, the overall path length was elongated by a factor of 50. In this way, we could cover a depth range of more than 1 meter. In addition, we employed multiple speckle illuminations and incoherent averaging of the acquired holographic images to reduce specular reflections from the target surface. Using this newly developed system, we performed imaging of targets with multiple different layers and demonstrated imaging of targets hidden behind scattering layers. The method was also applied to imaging of targets located around a corner.
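The scanning arithmetic above is easy to verify. The stage travel value below is hypothetical (the abstract gives only the elongation factor and the >1 m range), and the round-trip factor of 2 relating optical path to depth is a standard interferometry assumption.

```python
# Back-of-envelope numbers for the prism-based path scanner.
coherence_length_um = 200.0
depth_resolution_um = coherence_length_um / 2      # round-trip halves the coherence gate
elongation_factor = 50                             # multiple reflections between prisms
stage_travel_mm = 40.0                             # hypothetical translation-stage travel
scan_range_m = stage_travel_mm * 1e-3 * elongation_factor / 2  # depth range covered
```

With these assumed numbers, a 40 mm stage travel multiplied by the 50-fold elongation yields a 1 m depth range, consistent with the abstract's claim.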
NASA Astrophysics Data System (ADS)
Dhalla, Al-Hafeez Zahir
Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides micron-scale resolution of tissue micro-structure over depth ranges of several millimeters. This imaging technique has had a profound effect on the field of ophthalmology, wherein it has become the standard of care for the diagnosis of many retinal pathologies. Applications of OCT in the anterior eye, as well as for imaging of coronary arteries and the gastro-intestinal tract, have also shown promise, but have not yet achieved widespread clinical use. The usable imaging depth of OCT systems is most often limited by one of three factors: optical attenuation, inherent imaging range, or depth-of-focus. The first of these, optical attenuation, stems from the limitation that OCT only detects singly-scattered light. Thus, beyond a certain penetration depth into turbid media, essentially all of the incident light will have been multiply scattered, and can no longer be used for OCT imaging. For many applications (especially retinal imaging), optical attenuation is the most restrictive of the three imaging depth limitations. However, for some applications, especially anterior segment, cardiovascular (catheter-based) and GI (endoscopic) imaging, the usable imaging depth is often not limited by optical attenuation, but rather by the inherent imaging depth of the OCT systems. This inherent imaging depth, which is specific to only Fourier Domain OCT, arises due to two factors: sensitivity fall-off and the complex conjugate ambiguity. Finally, due to the trade-off between lateral resolution and axial depth-of-focus inherent in diffractive optical systems, additional depth limitations sometimes arise in either high lateral resolution or extended depth OCT imaging systems. The depth-of-focus limitation is most apparent in applications such as adaptive optics (AO-) OCT imaging of the retina, and extended depth imaging of the ocular anterior segment.
In this dissertation, techniques for extending the imaging range of OCT systems are developed. These techniques include the use of a high spectral purity swept source laser in a full-field OCT system, as well as the use of a peculiar phenomenon known as coherence revival to resolve the complex conjugate ambiguity in swept source OCT. In addition, a technique for extending the depth of focus of OCT systems by using a polarization-encoded, dual-focus sample arm is demonstrated. Along the way, other related advances are also presented, including the development of techniques to reduce crosstalk and speckle artifacts in full-field OCT, and the use of fast optical switches to increase the imaging speed of certain low-duty cycle swept source OCT systems. Finally, the clinical utility of these techniques is demonstrated by combining them to demonstrate high-speed, high resolution, extended-depth imaging of both the anterior and posterior eye simultaneously and in vivo.
Sarder, Pinaki; Yazdanfar, Siavash; Akers, Walter J.; Tang, Rui; Sudlow, Gail P.; Egbulefu, Christopher
2013-01-01
The era of molecular medicine has ushered in the development of microscopic methods that can report molecular processes in thick tissues with high spatial resolution. A commonality in deep-tissue microscopy is the use of near-infrared (NIR) lasers with single- or multiphoton excitations. However, the relationship between different NIR excitation microscopic techniques and the imaging depths in tissue has not been established. We compared such depth limits for three NIR excitation techniques: NIR single-photon confocal microscopy (NIR SPCM), NIR multiphoton excitation with visible detection (NIR/VIS MPM), and all-NIR multiphoton excitation with NIR detection (NIR/NIR MPM). Homologous cyanine dyes provided the fluorescence. Intact kidneys were harvested after administration of kidney-clearing cyanine dyes in mice. NIR SPCM and NIR/VIS MPM achieved similar maximum imaging depth of ∼100 μm. The NIR/NIR MPM enabled greater than fivefold imaging depth (>500 μm) using the harvested kidneys. Although the NIR/NIR MPM used 1550-nm excitation where water absorption is relatively high, cell viability and histology studies demonstrate that the laser did not induce photothermal damage at the low laser powers used for the kidney imaging. This study provides guidance on the imaging depth capabilities of NIR excitation-based microscopic techniques and reveals the potential to multiplex information using these platforms. PMID:24150231
Fang, Simin; Zhou, Sheng; Wang, Xiaochun; Ye, Qingsheng; Tian, Ling; Ji, Jianjun; Wang, Yanqun
2015-01-01
We designed and improved the signal processing algorithms of an ophthalmic ultrasonography system based on an FPGA. Three signal processing modules were implemented in the Verilog HDL hardware description language in Quartus II: a fully parallel distributed dynamic filter, digital quadrature demodulation, and logarithmic compression. Compared to the original system, the hardware cost is reduced, the whole image is clearer and contains more information about the deep eyeball, and the detection depth increases from 5 cm to 6 cm. The new algorithms meet the design requirements and optimize the system, effectively improving the image quality of existing equipment.
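The logarithmic compression module mentioned above maps the echo envelope to display values over a fixed dynamic range. Below is a software sketch of the standard formulation; an FPGA implementation would typically realize it with a lookup table, and the 60 dB range is an assumption.

```python
import numpy as np

def log_compress(envelope, dynamic_range_db=60.0):
    """Map an echo envelope to 8-bit display values over a fixed dynamic range.
    Standard log-compression stage: convert to dB relative to the line maximum,
    clip to the dynamic range, and rescale to 0-255."""
    env = np.asarray(envelope, dtype=float)
    db = 20.0 * np.log10(np.maximum(env, 1e-12) / env.max())
    db = np.clip(db, -dynamic_range_db, 0.0)
    return np.rint((db + dynamic_range_db) / dynamic_range_db * 255).astype(np.uint8)

# Envelope samples spanning 120 dB; everything below -60 dB clips to black
pixels = log_compress(np.array([1.0, 0.1, 0.01, 1e-6]))
```

The compression is what lets weak echoes from deep structures remain visible next to strong near-field reflections.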
Zhou, Renjie; Jin, Di; Hosseini, Poorya; Singh, Vijay Raj; Kim, Yang-hyo; Kuang, Cuifang; Dasari, Ramachandra R.; Yaqoob, Zahid; So, Peter T. C.
2017-01-01
Unlike most optical coherence microscopy (OCM) systems, dynamic speckle-field interferometric microscopy (DSIM) achieves depth sectioning through the spatial-coherence gating effect. Under high numerical aperture (NA) speckle-field illumination, our previous experiments demonstrated less than 1 μm depth resolution in reflection-mode DSIM, while doubling the diffraction-limited resolution as under structured illumination. However, there has not been a physical model to rigorously describe the speckle imaging process, in particular the sectioning effect under high illumination and imaging NA settings in DSIM. In this paper, we develop such a model based on diffraction tomography theory and speckle statistics. Using this model, we calculate the system response function, which is used to further obtain the depth resolution limit in reflection-mode DSIM. The theoretically calculated depth resolution limit is in excellent agreement with experimental results. We envision that our physical model will not only help in understanding the imaging process in DSIM, but also enable better design of such systems for depth-resolved measurements in biological cells and tissues. PMID:28085800
IP Subsurface Imaging in the Presence of Buried Steel Infrastructure
NASA Astrophysics Data System (ADS)
Smart, N. H.; Everett, M. E.
2017-12-01
The purpose of this research is to explore the use of induced polarization to image closely-spaced steel columns at a controlled test site. Texas A&M University's Riverside Campus (RELLIS) was used as a control test site to examine the difference between actual and remotely-sensed observed depths. Known borehole depths and soil composition made this site ideal. The subsurface metal structures were assessed using a combination of ER (Electrical Resistivity) and IP (Induced Polarization), and the data were later processed using inversion. The survey was set up in reference to the known locations and depths of the steel structures in order to maximize control data quality. Comparing the known and remotely-sensed foundation depths, together with the known depth and width of the metal beams, raises a series of questions regarding how the percent error between imaged and actual depths can be lowered. Because RELLIS offers a controlled setting for this research, ideal survey geometry and inversion parameters can be determined to achieve optimal results and resolution.
Micromachined array tip for multifocus fiber-based optical coherence tomography.
Yang, Victor X D; Munce, Nigel; Pekar, Julius; Gordon, Maggie L; Lo, Stewart; Marcon, Norman E; Wilson, Brian C; Vitkin, I Alex
2004-08-01
High-resolution optical coherence tomography demands a large detector bandwidth and a high numerical aperture for real-time imaging, which is difficult to achieve over a large imaging depth. To resolve these conflicting requirements we propose a novel multifocus fiber-based optical coherence tomography system with a micromachined array tip. We demonstrate the fabrication of a prototype four-channel tip that maintains a 9-14 μm spot diameter over more than 500 μm of imaging depth. Images of a resolution target and a human tooth were obtained with this tip by use of a four-channel cascaded Michelson fiber-optic interferometer, scanned simultaneously at 8 kHz with geometric power distribution across the four channels.
A commercialized photoacoustic microscopy system with switchable optical and acoustic resolutions
NASA Astrophysics Data System (ADS)
Pu, Yang; Bi, Renzhe; Olivo, Malini; Zhao, Xiaojie
2018-02-01
A focused-scanning photoacoustic microscopy (PAM) system is available to help advance life science research in neuroscience, cell biology, and in vivo imaging. At this early stage, the only manufacturer of PAM systems, MicroPhotoAcoustics (MPA; Ronkonkoma, NY), has developed a commercial PAM system with switchable optical and acoustic resolution (OR- and AR-PAM), using multiple patents licensed from the lab of Lihong Wang, who pioneered photoacoustics. The system includes different excitation sources. Two kilohertz-tunable, Q-switched, diode-pumped solid-state (DPSS) lasers, offering up to a 30 kHz pulse repetition rate and 9 ns pulse duration at 532 and 559 nm, enable functional photoacoustic tomography for sO2 (oxygen saturation of hemoglobin) imaging in OR-PAM. A Ti:sapphire laser tunable from 700 to 900 nm enables deep-tissue imaging. OR-PAM provides up to 1 mm penetration depth and 5 μm lateral resolution, while AR-PAM offers up to 3 mm imaging depth and 45 μm lateral resolution. The scanning step sizes for OR- and AR-PAM are 0.625 and 6.25 μm, respectively. Researchers have used the system for a range of applications, including preclinical neural imaging; imaging of cell nuclei in the intestine, ear, and leg; and preclinical human imaging of the finger cuticle. With continuing technological advancements and discoveries, MPA plans to further advance PAM to achieve faster imaging speed and higher spatial resolution at deeper tissue layers, and to address a broader range of biomedical applications.
Predefined Redundant Dictionary for Effective Depth Maps Representation
NASA Astrophysics Data System (ADS)
Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi
2016-01-01
The multi-view video plus depth (MVD) video format consists of two components, texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires compression for storage and especially for transmission. Conventional codecs are perfectly efficient for texture image compression but not for the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at the positions of object boundaries. Preserving these characteristics is important to enable high quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm the effectiveness of the method at producing sparse representations, and its competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and to represent an advantage over the ongoing 3D high efficiency video coding compression standard, particularly at medium and high bitrates.
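The greedy selection strategy mentioned above can be sketched as plain matching pursuit over a column dictionary. This is the generic algorithm, not the paper's exact variant; orthogonal matching pursuit or the paper's mixed dictionaries would follow the same selection loop.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy sparse coding: repeatedly pick the dictionary atom (column)
    most correlated with the current residual and subtract its contribution.
    Plain matching pursuit; a sketch of the greedy selection idea."""
    d = dictionary / np.linalg.norm(dictionary, axis=0)   # unit-norm atoms
    residual = signal.astype(float).copy()
    coeffs = np.zeros(d.shape[1])
    for _ in range(n_atoms):
        scores = d.T @ residual
        k = int(np.argmax(np.abs(scores)))
        coeffs[k] += scores[k]
        residual -= scores[k] * d[:, k]
    return coeffs, residual

# Toy example: orthonormal dictionary recovers a 2-sparse signal exactly
dictionary = np.eye(4)
signal = np.array([0.0, 3.0, 0.0, -1.0])
coeffs, residual = matching_pursuit(signal, dictionary, n_atoms=2)
```

For a depth-map patch, the dictionary columns would be vectorized atoms (e.g. smooth polynomial atoms mixed with edge-like atoms), and sparsity is what makes the subsequent entropy coding cheap.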
Optical Drug Monitoring: Photoacoustic Imaging of Nanosensors to Monitor Therapeutic Lithium In Vivo
Cash, Kevin J.; Li, Chiye; Xia, Jun; Wang, Lihong V.; Clark, Heather A.
2015-01-01
Personalized medicine could revolutionize how primary care physicians treat chronic disease and how researchers study fundamental biological questions. To realize this goal we need to develop more robust, modular tools and imaging approaches for in vivo monitoring of analytes. In this report, we demonstrate that synthetic nanosensors can measure physiologic parameters with photoacoustic contrast, and we apply that platform to continuously track lithium levels in vivo. Photoacoustic imaging achieves imaging depths that are unattainable with fluorescence or multiphoton microscopy. We validated the photoacoustic results that illustrate the superior imaging depth and quality of photoacoustic imaging with optical measurements. This powerful combination of techniques will unlock the ability to measure analyte changes in deep tissue and will open up photoacoustic imaging as a diagnostic tool for continuous physiological tracking of a wide range of analytes. PMID:25588028
Confocal Imaging of the Embryonic Heart: How Deep?
NASA Astrophysics Data System (ADS)
Miller, Christine E.; Thompson, Robert P.; Bigelow, Michael R.; Gittinger, George; Trusk, Thomas C.; Sedmera, David
2005-06-01
Confocal microscopy allows for optical sectioning of tissues, thus obviating the need for physical sectioning and subsequent registration to obtain a three-dimensional representation of tissue architecture. However, practicalities such as tissue opacity, light penetration, and detector sensitivity have usually limited the available depth of imaging to 200 μm. With the emergence of newer, more powerful systems, we attempted to push these limits to those dictated by the working distance of the objective. We used whole-mount immunohistochemical staining followed by clearing with benzyl alcohol-benzyl benzoate (BABB) to visualize three-dimensional myocardial architecture. Confocal imaging of entire chick embryonic hearts up to a depth of 1.5 mm with voxel dimensions of 3 μm was achieved with a 10× dry objective. For the purpose of screening for congenital heart defects, we used endocardial painting with fluorescently labeled poly-L-lysine and imaged BABB-cleared hearts with a 5× objective up to a depth of 2 mm. Two-photon imaging of whole-mount specimens stained with Hoechst nuclear dye produced clear images all the way through stage 29 hearts without significant signal attenuation. Thus, currently available systems allow confocal imaging of fixed samples to previously unattainable depths, the current limiting factors being objective working distance, antibody penetration, specimen autofluorescence, and incomplete clearing.
NASA Astrophysics Data System (ADS)
Ruf, B.; Erdnuess, B.; Weinmann, M.
2017-08-01
With the emergence of small consumer unmanned aerial vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric community. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. To this end, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key frames of input sequences. One important aspect for reaching good performance is the way the scene space is sampled to create plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas close to the camera and lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus less runtime.
Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
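As a toy illustration of why non-equidistant plane sampling helps (a hedged sketch; the paper derives the plane locations from the image-space cross-ratio, whereas this version simply samples uniformly in inverse depth):

```python
import numpy as np

def inverse_depth_planes(z_near, z_far, n):
    """Sample n plane-hypothesis depths uniformly in inverse depth (1/z),
    yielding dense hypotheses near the camera and sparse ones far away."""
    inv = np.linspace(1.0 / z_near, 1.0 / z_far, n)
    return 1.0 / inv

planes = inverse_depth_planes(2.0, 100.0, 8)
steps = np.diff(planes)  # spacing grows with distance from the camera
```

Uniform inverse-depth spacing roughly matches the resolution of the perspective projection, so fewer planes cover the same disparity range than an equidistant sampling would need.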
Colour helps to solve the binocular matching problem
den Ouden, HEM; van Ee, R; de Haan, EHF
2005-01-01
The spatial differences between the two retinal images, called binocular disparities, can be used to recover the three-dimensional (3D) aspects of a scene. The computation of disparity depends upon the correct identification of corresponding features in the two images. Understanding what image features are used by the brain to solve this binocular matching problem is an important issue in research on stereoscopic vision. The role of colour in binocular vision is controversial and it has been argued that colour is ineffective in achieving binocular vision. In the current experiment subjects were required to indicate the amount of perceived depth. The stimulus consisted of an array of fronto-parallel bars uniformly distributed in a constant sized volume. We studied the perceived depth in those 3D stimuli by manipulating both colour (monochrome, trichrome) and luminance (congruent, incongruent). Our results demonstrate that the amount of perceived depth was influenced by colour, indicating that the visual system uses colour to achieve binocular matching. Physiological data have revealed cortical cells in macaque V2 that are tuned both to binocular disparity and to colour. We suggest that one of the functional roles of these cells may be to help solve the binocular matching problem. PMID:15975983
Optoacoustic imaging of tissue blanching during photodynamic therapy of esophageal cancer
NASA Astrophysics Data System (ADS)
Jacques, Steven L.; Viator, John A.; Paltauf, Guenther
2000-05-01
Esophageal cancer patients often present a highly inflamed esophagus at the time of treatment by photodynamic therapy. Immediately after treatment, the inflamed vessels have been shut down and the esophagus presents a white surface. Optoacoustic imaging via an optical fiber device can provide a depth profile of the blanching of inflammation. Such a profile may be an indicator of the depth of treatment achieved by the PDT. Our progress toward developing this diagnostic for use in our clinical PDT treatments of esophageal cancer patients is presented.
Study on super-resolution three-dimensional range-gated imaging technology
NASA Astrophysics Data System (ADS)
Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao
2018-04-01
Range-gated three-dimensional imaging technology has been a research hotspot in recent years because of its high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on a study of the principle of the intensity-related method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of the imaging depth and distance to achieve different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. This paper analyzes the calculation of 3D point clouds based on the triangle method; a 15 m depth slice of the target 3D point cloud was obtained from two frame images, with a distance precision better than 0.5 m. The influence of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy is analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.
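To make the intensity-based ranging idea concrete, here is a hedged per-pixel sketch (not the authors' exact algorithm): with triangular range-intensity profiles, the ratio of two successive gated exposures varies approximately linearly with distance inside the shared depth slice.

```python
import numpy as np

def range_from_gated_pair(i1, i2, z_near, z_span):
    """Estimate per-pixel range from two overlapping gated images,
    assuming triangular range-intensity profiles so that the ratio
    i2 / (i1 + i2) is linear in depth across the slice (illustrative)."""
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    total = i1 + i2
    ratio = np.divide(i2, total, out=np.zeros_like(total), where=total > 0)
    return z_near + ratio * z_span

# two pixels: one near the start of the slice, one near the end
z = range_from_gated_pair([3.0, 1.0], [1.0, 3.0], z_near=500.0, z_span=15.0)
```

Using the ratio rather than raw intensity cancels the per-pixel reflectivity, which is why two frames suffice for a depth slice.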
Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering
NASA Astrophysics Data System (ADS)
Jiang, Lu; Piao, Yan
2018-04-01
The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, with SAD chosen as the similarity measure. The reference image is then layered and the parallax is calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as high-precision requirements on the depth map and complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very impressive. On average, the results show satisfactory image quality: relative to real viewpoint images, the average SSIM value reaches 0.9525, the PSNR reaches 38.353, and the image histogram similarity reaches 93.77%.
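The SAD similarity measure used in the depth step can be sketched as follows (window size, search range, and function names are illustrative assumptions):

```python
import numpy as np

def sad_cost(left, right, x, y, d, w=2):
    """Sum of absolute differences between a (2w+1)^2 patch in the
    left image and the patch shifted by disparity d in the right."""
    pl = left[y - w:y + w + 1, x - w:x + w + 1].astype(np.int64)
    pr = right[y - w:y + w + 1, x - d - w:x - d + w + 1].astype(np.int64)
    return int(np.abs(pl - pr).sum())

def best_disparity(left, right, x, y, d_max, w=2):
    """Winner-take-all over the disparity search range."""
    costs = [sad_cost(left, right, x, y, d, w) for d in range(d_max + 1)]
    return int(np.argmin(costs))
```

A usage check: shifting an image by a known disparity and asking `best_disparity` to recover it.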
NASA Astrophysics Data System (ADS)
Moothanchery, Mohesh; Sharma, Arunima; Periyasamy, Vijitha; Pramanik, Manojit
2018-02-01
It is always a great challenge for pure optical techniques to maintain good resolution and imaging depth at the same time. Photoacoustic imaging is an emerging technique that can overcome this limitation through pulsed light illumination and acoustic detection. Here, we report a near-infrared acoustic-resolution photoacoustic microscopy (NIR-AR-PAM) system with a 30 MHz transducer and 1064 nm illumination, which can achieve a lateral resolution of around 88 μm and an imaging depth of 9.2 mm. Compared to visible light, an NIR beam can penetrate deeper into biological tissue due to weaker optical attenuation. In this work, we also demonstrated the in vivo imaging capability of NIR-AR-PAM by near-infrared detection of the sentinel lymph node (SLN) with black ink as an exogenous photoacoustic contrast agent in a rodent model.
Multidepth imaging by chromatic dispersion confocal microscopy
NASA Astrophysics Data System (ADS)
Olsovsky, Cory A.; Shelton, Ryan L.; Saldua, Meagan A.; Carrasco-Zevallos, Oscar; Applegate, Brian E.; Maitland, Kristen C.
2012-03-01
Confocal microscopy has shown potential as an imaging technique to detect precancer. Imaging cellular features throughout the depth of epithelial tissue may provide useful information for diagnosis. However, the current in vivo axial scanning techniques for confocal microscopy are cumbersome, time-consuming, and restrictive when attempting to reconstruct volumetric images acquired in breathing patients. Chromatic dispersion confocal microscopy (CDCM) exploits severe longitudinal chromatic aberration in the system to axially disperse light from a broadband source and, ultimately, spectrally encode high resolution images along the depth of the object. Hyperchromat lenses are designed to have severe and linear longitudinal chromatic aberration, but have not yet been used in confocal microscopy. We use a hyperchromat lens in a stage scanning confocal microscope to demonstrate the capability to simultaneously capture information at multiple depths without mechanical scanning. A photonic crystal fiber pumped with an 830 nm wavelength Ti:Sapphire laser was used as a supercontinuum source, and a spectrometer was used as the detector. The chromatic aberration and magnification in the system give a focal shift of 140 μm after the objective lens and an axial resolution of 5.2–7.6 μm over the wavelength range from 585 nm to 830 nm. A 400 × 400 × 140 μm³ volume of pig cheek epithelium was imaged in a single X-Y scan. Nuclei can be seen at several depths within the epithelium. The capability of this technique to achieve simultaneous high resolution confocal imaging at multiple depths may reduce imaging time and motion artifacts and enable volumetric reconstruction of in vivo confocal images of the epithelium.
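Assuming the chromatic focal shift is approximately linear in wavelength (an idealization; hyperchromat lenses are designed toward, but do not perfectly achieve, linearity), the spectral depth encoding reported above maps wavelength to focal depth as:

```python
def wavelength_to_depth(lam_nm, lam_min=585.0, lam_max=830.0, shift_um=140.0):
    """Map a detected wavelength (nm) to its encoded focal depth (um)
    under a linear chromatic focal-shift model over the reported range."""
    return (lam_nm - lam_min) / (lam_max - lam_min) * shift_um

mid_depth = wavelength_to_depth(707.5)  # midpoint of the spectral range
```

Under this model each spectrometer channel corresponds to one depth, which is what lets a single X-Y scan capture a full volume.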
Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.
Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun
2017-07-01
In recent years, taking photos and capturing videos with mobile devices have become increasingly popular. Emerging applications based on the depth reconstruction technique have been developed, such as Google lens blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult due to the unstable image quality and uncontrolled scene condition in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. Particularly, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues, and to constrain the stereo matching on a finer scale. The proposed framework has been evaluated with challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction.
Experimental study on the sensitive depth of backwards detected light in turbid media.
Zhang, Yunyao; Huang, Liqing; Zhang, Ning; Tian, Heng; Zhu, Jingping
2018-05-28
In recent years, optical spectroscopy and imaging methods for biomedical diagnosis and target enhancement have been widely researched. The challenge in improving the performance of these methods is to know the sensitive depth of the backwards-detected light well. Former research mainly employed Monte Carlo simulations to statistically describe the light-sensitive depth. An experimental method for investigating the sensitive depth was developed and is presented here. An absorption plate was employed to remove all the light that may have travelled deeper than the plate, leaving only the light which could not reach the plate. By measuring the received backwards light intensity and the depth between the probe and the plate, the light intensity distribution along the depth dimension can be obtained. The depth with the maximum light intensity was recorded as the sensitive depth. The experimental results showed that the maximum light intensity was nearly the same over a short depth range. It could be deduced that the sensitive depth is a range, rather than a single depth. This sensitive depth range, as well as its central depth, increased consistently with increasing source-detection distance. Relationships between sensitive depth and optical properties were also investigated. The reduced scattering coefficient affects the central sensitive depth and the range of the sensitive depth more than the absorption coefficient does, so the two cannot simply be combined into a single coefficient to describe the sensitive depth. This study provides an efficient method for the investigation of sensitive depth. It may facilitate the development of spectroscopy and imaging techniques for biomedical diagnosis and underwater imaging.
Robust stereo matching with trinary cross color census and triple image-based refinements
NASA Astrophysics Data System (ADS)
Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr
2017-12-01
For future 3D TV broadcasting systems and navigation applications, it is necessary to have accurate stereo matching that can precisely estimate depth maps from two separated cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which can help to achieve an accurate disparity raw matching cost with low computational cost. The two-pass cost aggregation (TPCA) is formed to compute the aggregation cost, then the disparity map can be obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further enhance the accuracy performance, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
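The trinary cross color census builds on the classic census transform; a minimal single-channel, binary version (a simplification, not the authors' trinary variant) looks like:

```python
import numpy as np

def census(img, w=1):
    """Classic census transform: encode, for each pixel, whether each
    neighbour in a (2w+1)x(2w+1) window is darker than the centre."""
    code = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            if dy == 0 and dx == 0:
                continue
            # wrap-around at the borders keeps the sketch short
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            code = (code << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return code

def census_cost(a, b):
    """Raw matching cost: Hamming distance between two census codes."""
    return bin(int(a) ^ int(b)).count("1")
```

Because the code depends only on intensity ordering, the Hamming-distance cost is robust to radiometric differences between the two cameras, which is the property the TCC variant extends to color.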
A small-molecule dye for NIR-II imaging
NASA Astrophysics Data System (ADS)
Antaris, Alexander L.; Chen, Hao; Cheng, Kai; Sun, Yao; Hong, Guosong; Qu, Chunrong; Diao, Shuo; Deng, Zixin; Hu, Xianming; Zhang, Bo; Zhang, Xiaodong; Yaghi, Omar K.; Alamparambil, Zita R.; Hong, Xuechuan; Cheng, Zhen; Dai, Hongjie
2016-02-01
Fluorescent imaging of biological systems in the second near-infrared window (NIR-II) can probe tissue at centimetre depths and achieve micrometre-scale resolution at depths of millimetres. Unfortunately, all current NIR-II fluorophores are excreted slowly and are largely retained within the reticuloendothelial system, making clinical translation nearly impossible. Here, we report a rapidly excreted NIR-II fluorophore (~90% excreted through the kidneys within 24 h) based on a synthetic 970-Da organic molecule (CH1055). The fluorophore outperformed indocyanine green (ICG), a clinically approved NIR-I dye, in resolving mouse lymphatic vasculature and sentinel lymphatic mapping near a tumour. High levels of uptake of PEGylated-CH1055 dye were observed in brain tumours in mice, suggesting that the dye was detected at a depth of ~4 mm. The CH1055 dye also allowed targeted molecular imaging of tumours in vivo when conjugated with anti-EGFR Affibody. Moreover, a superior tumour-to-background signal ratio allowed precise image-guided tumour-removal surgery.
Halimi, Abdelghafour; Batatia, Hadj; Le Digabel, Jimmy; Josse, Gwendal; Tourneret, Jean Yves
2017-01-01
Detecting skin lentigo in reflectance confocal microscopy images is an important and challenging problem. This imaging modality has not yet been widely investigated for this problem and there are few automatic processing techniques. They are mostly based on machine learning approaches and rely on numerous classical image features that lead to high computational costs given the very large resolution of these images. This paper presents a detection method with very low computational complexity that is able to identify the skin depth at which the lentigo can be detected. The proposed method performs multiresolution decomposition of the image obtained at each skin depth. The distribution of image pixels at a given depth can be approximated accurately by a generalized Gaussian distribution whose parameters depend on the decomposition scale, resulting in a very-low-dimension parameter space. SVM classifiers are then investigated to classify the scale parameter of this distribution allowing real-time detection of lentigo. The method is applied to 45 healthy and lentigo patients from a clinical study, where sensitivity of 81.4% and specificity of 83.3% are achieved. Our results show that lentigo is identifiable at depths between 50 μm and 60 μm, corresponding to the average location of the dermoepidermal junction. This result is in agreement with the clinical practices that characterize the lentigo by assessing the disorganization of the dermoepidermal junction. PMID:29296480
Depth-encoded all-fiber swept source polarization sensitive OCT
Wang, Zhao; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Lee, ByungKun; Choi, WooJhon; Potsaid, Benjamin; Liu, Jonathan; Jayaraman, Vijaysekhar; Cable, Alex; Kraus, Martin F.; Liang, Kaicheng; Hornegger, Joachim; Fujimoto, James G.
2014-01-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of conventional OCT and can assess depth-resolved tissue birefringence in addition to intensity. Most existing PS-OCT systems are relatively complex and their clinical translation remains difficult. We present a simple and robust all-fiber PS-OCT system based on swept source technology and polarization depth-encoding. Polarization multiplexing was achieved using a polarization maintaining fiber. Polarization sensitive signals were detected using fiber based polarization beam splitters, and polarization controllers were used to remove the polarization ambiguity. A simplified post-processing algorithm was proposed for speckle noise reduction, relaxing the demand for phase stability. We demonstrated system designs for both ophthalmic and catheter-based PS-OCT. For ophthalmic imaging, we used an optical clock frequency doubling method to extend the imaging range of a commercially available short cavity light source to improve polarization depth-encoding. For catheter based imaging, we demonstrated 200 kHz PS-OCT imaging using a MEMS-tunable vertical cavity surface emitting laser (VCSEL) and a high speed micromotor imaging catheter. The system was demonstrated in human retina, finger and lip imaging, as well as ex vivo swine esophagus and cardiovascular imaging. The all-fiber PS-OCT is easier to implement and maintain compared to previous PS-OCT systems and can be more easily translated to clinical applications due to its robust design. PMID:25401008
Potsaid, Benjamin; Baumann, Bernhard; Huang, David; Barry, Scott; Cable, Alex E.; Schuman, Joel S.; Duker, Jay S.; Fujimoto, James G.
2011-01-01
We demonstrate ultrahigh speed swept source/Fourier domain ophthalmic OCT imaging using a short cavity swept laser at 100,000–400,000 axial scans per second. Several design configurations illustrate tradeoffs in imaging speed, sensitivity, axial resolution, and imaging depth. Variable rate A/D optical clocking is used to acquire linear-in-k OCT fringe data at a 100 kHz axial scan rate with 5.3 μm axial resolution in tissue. Fixed rate sampling at 1 GSPS achieves a 7.5 mm imaging range in tissue with 6.0 μm axial resolution at a 100 kHz axial scan rate. A 200 kHz axial scan rate with 5.3 μm axial resolution over a 4 mm imaging range is achieved by buffering the laser sweep. Dual spot OCT using two parallel interferometers achieves a 400 kHz axial scan rate, almost 2X faster than previous 1050 nm ophthalmic results and 20X faster than current commercial instruments. Superior sensitivity roll-off performance is shown. Imaging is demonstrated in the human retina and anterior segment. Wide field 12×12 mm data sets include the macula and optic nerve head. Small area, high density imaging shows individual cone photoreceptors. The 7.5 mm imaging range configuration can show the cornea, iris, and anterior lens in a single image. These improvements in imaging speed and depth range provide important advantages for ophthalmic imaging. The ability to rapidly acquire 3D-OCT data over a wide field of view promises to simplify examination protocols. The ability to image fine structures can provide detailed information on focal pathologies. The large imaging range and improved image penetration at 1050 nm wavelengths promises to improve performance for instrumentation which images both the retina and anterior eye. These advantages suggest that swept source OCT at 1050 nm wavelengths will play an important role in future ophthalmic instrumentation. PMID:20940894
The multifocus plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Lumsdaine, Andrew
2012-01-01
The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (one of the important applications), the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a really wide range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.
VISIDEP™: visual image depth enhancement by parallax induction
NASA Astrophysics Data System (ADS)
Jones, Edwin R.; McLaurin, A. P.; Cathey, LeConte
1984-05-01
The usual descriptions of depth perception have traditionally required the simultaneous presentation of disparate views presented to separate eyes with the concomitant demand that the resulting binocular parallax be horizontally aligned. Our work suggests that the visual input information is compared in a short-term memory buffer which permits the brain to compute depth as it is normally perceived. However, the mechanism utilized is also capable of receiving and processing the stereographic information even when it is received monocularly or when identical inputs are simultaneously fed to both eyes. We have also found that the restriction to horizontally displaced images is not a necessary requirement and that improvement in image acceptability is achieved by the use of vertical parallax. Use of these ideas permits the presentation of three-dimensional scenes on flat screens in full color without the encumbrance of glasses or other viewing aids.
Real-time handling of existing content sources on a multi-layer display
NASA Astrophysics Data System (ADS)
Singh, Darryl S. K.; Shin, Jung
2013-03-01
A Multi-Layer Display (MLD) consists of two or more imaging planes separated by physical depth, where the depth is a key component in creating a glasses-free 3D effect. Its core benefits include being viewable from multiple angles and having full panel resolution for 3D effects, with no side effects of nausea or eye strain. However, content typically must be designed for its optical configuration as foreground and background image pairs. A process was designed to produce a consistent 3D effect on a two-layer MLD from existing stereo video content in real time. Optimizations to stereo matching algorithms that generate depth maps in real time were specifically tailored to the optical characteristics and image processing algorithms of an MLD. The end-to-end process included improvements to the hierarchical belief propagation (HBP) stereo matching algorithm, optical flow, and temporal consistency. Imaging algorithms designed for the optical characteristics of an MLD provided some visual compensation for depth map inaccuracies. The result can be demonstrated in a PC environment, displayed on a 22" MLD with 8 mm of panel separation, as used in the casino slot market. Prior to this development, stereo content had not been used to achieve a depth-based 3D effect on an MLD in real time.
Long-wavelength optical coherence tomography at 1.7 µm for enhanced imaging depth
Sharma, Utkarsh; Chang, Ernest W.; Yun, Seok H.
2009-01-01
Multiple scattering in a sample presents a significant limitation to achieving meaningful structural information at deeper penetration depths in optical coherence tomography (OCT). Previous studies suggest that the spectral region around 1.7 µm may exhibit reduced scattering coefficients in biological tissues compared to the widely used wavelengths around 1.3 µm. To investigate this long-wavelength region, we developed a wavelength-swept laser at 1.7 µm wavelength and conducted OCT or optical frequency domain imaging (OFDI) for the first time in this spectral range. The constructed laser is capable of providing a wide tuning range of 160 nm, from 1.59 to 1.75 µm. When the laser was operated with a reduced tuning range over 95 nm at a repetition rate of 10.9 kHz and an average output power of 12.3 mW, the OFDI imaging system exhibited a sensitivity of about 100 dB and axial and lateral resolution of 24 µm and 14 µm, respectively. We imaged several phantom and biological samples using 1.3 µm and 1.7 µm OFDI systems and found that the depth-dependent signal decay rate is substantially lower at 1.7 µm wavelength in most, if not all, samples. Our results suggest that this imaging window may offer an advantage over shorter wavelengths by increasing the penetration depths as well as enhancing image contrast at deeper penetration depths where otherwise multiple scattered photons dominate over ballistic photons. PMID:19030057
In vivo deep tissue fluorescence imaging of the murine small intestine and colon
NASA Astrophysics Data System (ADS)
Crosignani, Viera; Dvornikov, Alexander; Aguilar, Jose S.; Stringari, Chiara; Edwards, Roberts; Mantulin, Williams; Gratton, Enrico
2012-03-01
Recently we described a novel technical approach with enhanced fluorescence detection capabilities in two-photon microscopy that achieves deep tissue imaging, while maintaining micron resolution. This technique was applied to in vivo imaging of murine small intestine and colon. Individuals with Inflammatory Bowel Disease (IBD), commonly presenting as Crohn's disease or Ulcerative Colitis, are at increased risk for developing colorectal cancer. We have developed a Giα2 gene knock out mouse IBD model that develops colitis and colon cancer. The challenge is to study the disease in the whole animal, while maintaining high resolution imaging at millimeter depth. In the Giα2-/- mice, we have been successful in imaging Lgr5-GFP positive stem cell reporters that are found in crypts of niche structures, as well as deeper structures, in the small intestine and colon at depths greater than 1 mm. In parallel with these in vivo deep tissue imaging experiments, we have also pursued autofluorescence FLIM imaging of the colon and small intestine, at more shallow depths (roughly 160 μm), on commercial two photon microscopes with excellent structural correlation (in overlapping tissue regions) between the different technologies.
Prestack depth migration for complex 2D structure using phase-screen propagators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, P.; Huang, Lian-Jie; Burch, C.
1997-11-01
We present results for the phase-screen propagator method applied to prestack depth migration of the Marmousi synthetic data set. The data were migrated as individual common-shot records, and the resulting partial images were superposed to obtain the final complete image. Tests were performed to determine the minimum number of frequency components required to achieve the best quality image, and this in turn provided estimates of the minimum computing time. Running on a single-processor SUN SPARC Ultra I, high quality images were obtained in as little as 8.7 CPU hours and adequate images in as little as 4.4 CPU hours. Different methods were tested for choosing the reference velocity used for the background phase-shift operation and for defining the slowness perturbation screens. Although the depths of some of the steeply dipping, high-contrast features were shifted slightly, the overall image quality was fairly insensitive to the choice of the reference velocity. Our tests show the phase-screen method to be a reliable and fast algorithm for imaging complex geologic structures, at least for complex 2D synthetic data where the velocity model is known.
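A one-dimensional split-step sketch of a single phase-screen propagation step (variable names and discretization are assumptions, not the authors' implementation): the background phase shift is applied exactly in the wavenumber domain, and the slowness perturbation is applied as a thin screen in space.

```python
import numpy as np

def phase_screen_step(u, dz, dx, omega, c0, c):
    """Advance a monochromatic wavefield u(x) one depth step dz:
    exact phase shift in the background velocity c0 (wavenumber domain),
    then a thin screen for the slowness perturbation 1/c(x) - 1/c0."""
    kx = 2.0 * np.pi * np.fft.fftfreq(u.size, dx)
    kz = np.sqrt((omega / c0) ** 2 - kx ** 2 + 0j)  # vertical wavenumber
    u_bg = np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))
    screen = np.exp(1j * omega * (1.0 / c - 1.0 / c0) * dz)
    return u_bg * screen
```

In a homogeneous medium (c = c0) the screen is unity and the step reduces to exact phase-shift migration, which makes a convenient sanity check.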
Oriented modulation for watermarking in direct binary search halftone images.
Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der
2012-09-01
In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding, while maintaining high image quality. This technique is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel oriented high-efficient direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, the least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and the naïve Bayes classifier is used to analyze the extracted features and ultimately to decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and realizes the processing efficiency and robustness to be adapted in printing applications.
Full range line-field parallel swept source imaging utilizing digital refocusing
NASA Astrophysics Data System (ADS)
Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.
2015-12-01
We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.
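The defocus-correcting phase term applied to the Fourier representation of the complex interferometric data can be sketched as below. This is a hedged illustration of the general Fourier-domain refocusing idea under a paraxial quadratic-phase assumption; the function name, parameters, and sign convention are assumptions, not the LPSI system's actual algorithm.

```python
import numpy as np

def digital_refocus(complex_img, wavelength, defocus, pixel_pitch):
    """Apply a quadratic defocus-correcting phase to the 2-D Fourier
    representation of a complex-valued interferometric image.

    complex_img : 2-D complex array (amplitude and phase of one en-face plane)
    wavelength  : center wavelength of the source [m]
    defocus     : estimated defocus distance to correct [m]
    pixel_pitch : lateral sampling interval [m]
    """
    ny, nx = complex_img.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel-type quadratic phase; chosen to cancel the system defocus
    phase = np.exp(1j * np.pi * wavelength * defocus * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(complex_img) * phase)
```

Because the correction is a unit-magnitude phase filter, it preserves the total signal energy; only the phase curvature (and hence the focus) of the image changes.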
High-speed image processing system and its micro-optics application
NASA Astrophysics Data System (ADS)
Ohba, Kohtaro; Ortega, Jesus C. P.; Tanikawa, Tamio; Tanie, Kazuo; Tajima, Kenji; Nagai, Hiroshi; Tsuji, Masataka; Yamada, Shigeru
2003-07-01
In this paper, a new application of high-speed photography, i.e., an observational system for tele-micro-operation, is proposed, combining a dynamic focusing system and a high-speed image processing system using the "Depth From Focus (DFF)" criterion. In micro-operations such as microsurgery and DNA manipulation, the small depth of focus of the microscope hampers observation. For example, if the focus is on the object, the actuator cannot be seen with the microscope; on the other hand, if the focus is on the actuator, the object cannot be observed. In this sense, the "all-in-focus image," which holds in-focus texture over the entire image, is useful for observing microenvironments under the microscope. It is also important to obtain the "depth map," which can show the 3D micro virtual environment in real time so that micro objects can be actuated intuitively. To realize real-time micro-operation with the DFF criterion, which must integrate several images to obtain the "all-in-focus image" and "depth map," an image capture and processing system of at least 240 frames per second is required. This paper first briefly reviews the "depth from focus" criterion used to achieve the all-in-focus image and the 3D microenvironment reconstruction simultaneously. After discussing the problems of our past system, a new frame-rate system is constructed with a high-speed video camera and FPGA hardware running at 240 frames per second. To apply this system to a real microscope, a new "ghost filtering" technique for reconstructing the all-in-focus image is proposed. Finally, micro observations demonstrate the validity of the system.
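The core of the depth-from-focus criterion is a per-pixel sharpness measure evaluated across the focal stack: each pixel takes its value from the frame in which it is sharpest, which simultaneously yields the all-in-focus image and the depth map. A minimal sketch (the squared discrete Laplacian as sharpness measure is an assumption; the paper's system, including its "ghost filtering," is not reproduced here):

```python
import numpy as np

def all_in_focus(stack):
    """Fuse a focal stack into an all-in-focus image and a depth map.

    stack : array (n_focus, H, W), one grayscale frame per focus setting.
    Returns (all_in_focus_image, depth_index_map).
    """
    sharp = []
    for frame in stack.astype(float):
        # discrete Laplacian (with wraparound) as a simple sharpness measure
        lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
               + np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
        sharp.append(lap ** 2)
    depth = np.argmax(np.stack(sharp), axis=0)    # best-focused frame per pixel
    aif = np.take_along_axis(stack, depth[None], axis=0)[0]
    return aif, depth
```

At 240 frames per second, a stack of a few frames can be fused at interactive rates, which is why the paper pushes the capture and processing hardware to that frame rate.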
Ding, Qiuning; Tao, Chao; Liu, Xiaojun
2017-03-20
Speed of sound and optical absorption reflect the structure and function of tissues from different aspects. A dual-mode microscopy system based on a concentric annular ultrasound array is proposed to simultaneously acquire long depth-of-field images of the speed of sound and optical absorption of inhomogeneous samples. First, the speed of sound is decoded from the signal delay between the elements of the annular array. The measured speed of sound can not only be used as an image contrast, but also improve the resolution and spatial-location accuracy of the photoacoustic image in inhomogeneous acoustic media. Second, benefitting from the dynamic focusing of the annular array and the measured speed of sound, an advanced acoustic-resolution photoacoustic microscopy with precise positioning and a long depth of field is achieved. The performance of the dual-mode imaging system has been experimentally examined using a custom-made annular array. The proposed dual-mode microscopy may be significant for monitoring biological physiological and pathological processes.
In vivo microwave-based thermoacoustic tomography of rats (Conference Presentation)
NASA Astrophysics Data System (ADS)
Lin, Li; Zhou, Yong; Wang, Lihong V.
2016-03-01
Microwave-based thermoacoustic tomography (TAT), based on the measurement of ultrasonic waves induced by microwave pulses, can reveal tissue dielectric properties that may be closely related to the physiological and pathological status of the tissues. Using microwaves as the excitation source improved imaging depth because of their deep penetration into biological tissues. We demonstrate, for the first time, in vivo microwave-based thermoacoustic imaging in rats. The transducer is rotated around the rat in a full circle, providing a full two-dimensional view. Instead of a flat ultrasonic transducer, we used a virtual line detector based on a cylindrically focused transducer. A 3 GHz microwave source with 0.6 µs pulse width and an electromagnetically shielded transducer with 2.25 MHz central frequency provided clear cross-sectional images of the rat's body. The high imaging contrast, based on the tissue's rate of absorption, and the ultrasonically defined spatial resolution combine to reveal the spine, kidney, muscle, and other deeply seated anatomical features in the rat's abdominal cavity. This non-invasive and non-ionizing imaging modality achieved an imaging depth beyond 6 cm in the rat's tissue. Cancer diagnosis based on information about tissue properties from microwave band TAT can potentially be more accurate than has previously been achievable.
NASA Astrophysics Data System (ADS)
Cheong, M. K.; Bahiki, M. R.; Azrad, S.
2016-10-01
The main goal of this study is to demonstrate an approach to achieving collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high-definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and a control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithms. In the results, the UAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance was better with obstacles of dull surfaces than with shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach is suitable for short-range collision avoidance.
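The "common triangulation" step above reduces to the standard rectified-stereo relation Z = f·B/d: depth is focal length times baseline divided by the disparity of the tracked spot between the two views. A minimal sketch (function name and parameter names are illustrative, not from the paper):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of the laser-illuminated tracking spot by common
    triangulation on a rectified stereo pair: Z = f * B / d.

    disparity_px : horizontal pixel offset of the spot between cameras
    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two camera centers [m]
    """
    if disparity_px <= 0:
        raise ValueError("spot must have positive disparity in both views")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relation: resolution degrades quadratically with distance, which is one reason such a setup suits short-range avoidance like the 0.4 m minimum distance reported here.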
Saleh, Khaled; Hossny, Mohammed; Nahavandi, Saeid
2018-06-12
Traffic collisions between kangaroos and motorists are on the rise on Australian roads. According to a recent report, it was estimated that more than 20,000 kangaroo-vehicle collisions occurred during the year 2015 alone in Australia. In this work, we propose a vehicle-based framework for kangaroo detection in urban and highway traffic environments that could be used for collision warning systems. Our proposed framework is based on region-based convolutional neural networks (RCNN). Given the scarcity of labeled data of kangaroos in traffic environments, we utilized our state-of-the-art data generation pipeline to generate 17,000 synthetic depth images of traffic scenes with kangaroo instances annotated in them. We trained our proposed RCNN-based framework on a subset of the generated synthetic depth image dataset. The proposed framework achieved an average precision (AP) score of 92% over all the testing synthetic depth image datasets. We compared our proposed framework against other baseline approaches and outperformed them by more than 37% in AP score over all the testing datasets. Additionally, we evaluated the generalization performance of the proposed framework on real live data and achieved resilient detection accuracy without any further fine-tuning of our proposed RCNN-based framework.
Modeling the convergence accommodation of stereo vision for binocular endoscopy.
Gao, Yuanqian; Li, Jinhua; Li, Jianmin; Wang, Shuxin
2018-02-01
The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS. Copyright © 2017 John Wiley & Sons, Ltd.
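The convergence step above amounts to rotating each virtual optical axis inward so the two axes intersect at the fixation target, as in human vergence. A minimal geometric sketch for a symmetric, midline target (the function and its parameters are illustrative assumptions; the paper's full positional kinematic model and rectification are not reproduced):

```python
import math

def vergence_angle(baseline_m, fixation_distance_m):
    """Symmetric inward rotation for each virtual optical axis so that
    the two axes intersect at a fixation target on the midline,
    mimicking human convergence accommodation. Each camera rotates by
    atan((B/2) / D) toward the midline."""
    if fixation_distance_m <= 0:
        raise ValueError("fixation target must lie in front of the cameras")
    return math.atan2(baseline_m / 2.0, fixation_distance_m)
```

As the fixation target recedes, the required vergence angle tends to zero and the configuration approaches parallel axes, which matches the intuition that distant targets need no convergence.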
Imaging Mass Spectrometry on the Nanoscale with Cluster Ion Beams
2015-01-01
Imaging with cluster secondary ion mass spectrometry (SIMS) is reaching a mature level of development. Using a variety of molecular ion projectiles to stimulate desorption, 3-dimensional imaging with the selectivity of mass spectrometry can now be achieved with submicrometer spatial resolution and <10 nm depth resolution. In this Perspective, stock is taken regarding what it will require to routinely achieve these remarkable properties. Issues include the chemical nature of the projectile, topography formation, differential erosion rates, and perhaps most importantly, ionization efficiency. Shortcomings of existing instrumentation are also noted. Speculation about how to successfully resolve these issues is a key part of the discussion. PMID:25458665
3D endoscopic imaging using structured illumination technique (Conference Presentation)
NASA Astrophysics Data System (ADS)
Le, Hanh N. D.; Nguyen, Hieu; Wang, Zhaoyang; Kang, Jin U.
2017-02-01
Surgeons have been increasingly relying on minimally invasive surgical guidance techniques not only to reduce surgical trauma but also to achieve accurate and objective surgical risk evaluations. A typical minimally invasive surgical guidance system provides visual assistance for the two-dimensional anatomy and pathology of internal organs within a limited field of view. In this work, we propose and implement a structured illumination endoscope to provide simple, inexpensive 3D endoscopic imaging, producing high-resolution 3D imagery for use in a surgical guidance system. The system is calibrated and validated for quantitative depth measurement on both a calibrated target and a human subject. It exhibits a depth of field of 20 mm, a depth resolution of 0.2 mm, and a relative accuracy of 0.1%. The demonstrated setup affirms the feasibility of using the structured illumination endoscope for depth quantization and for assisting medical diagnostic assessments.
NASA Astrophysics Data System (ADS)
Siegel, Mel; Tobinaga, Yoshikazu; Akiya, Takeo
1999-05-01
Not only binocular perspective disparity, but also many secondary binocular and monocular sensory phenomena, contribute to the human sensation of depth. Binocular perspective disparity is notable as the strongest depth perception factor. However, means for creating it artificially from flat image pairs are notorious for inducing physical and mental stresses, e.g., 'virtual reality sickness'. Aiming to deliver a less stressful 'kinder gentler stereo (KGS)', we systematically examine the secondary phenomena and their synergistic combination with each other and with binocular perspective disparity. By KGS we mean a stereo capture, rendering, and display paradigm without cue conflicts, without eyewear, without viewing zones, with negligible 'lock-in' time to perceive the image in depth, and with a normal appearance for stereo-deficient viewers. To achieve KGS we employ optical and digital image processing steps that introduce distortions contrary to strict 'geometrical correctness' of binocular perspective but which nevertheless result in increased stereoscopic viewing comfort. We particularly exploit the lower limits of interocular separation, showing that unexpectedly small disparities stimulate accurate and pleasant depth sensations. Under these circumstances crosstalk is perceived as depth-of-focus rather than as ghosting. This suggests the possibility of radically new approaches to stereoview multiplexing that enable zoneless autostereoscopic display.
NASA Astrophysics Data System (ADS)
Enfield, Joey; McGrath, James; Daly, Susan M.; Leahy, Martin
2016-08-01
Changes within the microcirculation can provide an early indication of the onset of a plethora of ailments. Various techniques have thus been developed that enable the study of microcirculatory irregularities. Correlation mapping optical coherence tomography (cmOCT) is a recently proposed technique, which enables mapping of vasculature networks at the capillary level in a noninvasive and noncontact manner. This technique is an extension of conventional optical coherence tomography (OCT) and is therefore likewise limited in the penetration depth of ballistic photons in biological media. Optical clearing has previously been demonstrated to enhance the penetration depth and the imaging capabilities of OCT. In order to enhance the achievable maximum imaging depth, we propose the use of optical clearing in conjunction with the cmOCT technique. We demonstrate in vivo a 13% increase in OCT penetration depth by topical application of a high-concentration fructose solution, thereby enabling the visualization of vessel features at deeper depths within the tissue.
The depth estimation of 3D face from single 2D picture based on manifold learning constraints
NASA Astrophysics Data System (ADS)
Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia
2018-04-01
The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; selecting the optimal subset to reconstruct the 3D face depth information greatly reduces the computational complexity. First, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Second, the K-means method is applied to divide the training 3D database into several subsets. Third, the Euclidean distance between the 83 feature points of the image to be estimated and the feature-point information (before dimension reduction) of each cluster center is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only in the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, achieving the final depth estimation while greatly reducing the computational complexity. Compared with the traditional traversal search estimation method, the proposed method reduces the error rate by 0.49, and the number of searches decreases with the change of the category. To validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
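The subset-selection idea above (cluster the training database with K-means, then judge a query's category by minimum Euclidean distance to the cluster centers and search only that subset) can be sketched as follows. This is a generic illustration in NumPy under stated assumptions (evenly spaced initialization, plain Lloyd iterations), not the paper's pipeline, and it omits the t-SNE reduction and the depth-estimation step itself:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means used to split the (dimension-reduced) training
    database into k subsets; initialized from evenly spaced samples."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def search_nearest_subset(query, X, centers, labels):
    """Judge the query's category by minimum Euclidean distance to the
    cluster centers, then search only inside that subset."""
    c = np.argmin(((centers - query) ** 2).sum(-1))
    idx = np.flatnonzero(labels == c)
    best = idx[np.argmin(((X[idx] - query) ** 2).sum(-1))]
    return best, idx.size            # match index, comparisons performed
```

Restricting the search to one cluster is what yields the reported reduction in the average number of searches relative to full traversal.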
NASA Astrophysics Data System (ADS)
Poddar, Raju; Zawadzki, Robert J.; Cortés, Dennis E.; Mannis, Mark J.; Werner, John S.
2015-06-01
We present in vivo volumetric depth-resolved vasculature images of the anterior segment of the human eye acquired with phase-variance based motion contrast using a high-speed (100 kHz, i.e. 10^5 A-scans/s) swept source optical coherence tomography system (SSOCT). High phase stability SSOCT imaging was achieved by using a computationally efficient phase stabilization approach. The human corneo-scleral junction and sclera were imaged with swept source phase-variance optical coherence angiography and compared with slit lamp images from the same eyes of normal subjects. Different features of the rich vascular system in the conjunctiva and episclera were visualized and described. This system can be used as a potential tool for ophthalmological research to determine changes in the outflow system, which may be helpful for identification of abnormalities that lead to glaucoma.
Scanning fiber angle-resolved low coherence interferometry
Zhu, Yizheng; Terry, Neil G.; Wax, Adam
2010-01-01
We present a fiber-optic probe for Fourier-domain angle-resolved low coherence interferometry for the determination of depth-resolved scatterer size. The probe employs a scanning single-mode fiber to collect the angular scattering distribution of the sample, which is analyzed using the Mie theory to obtain the average size of the scatterers. Depth sectioning is achieved with low coherence Mach–Zehnder interferometry. In the sample arm of the interferometer, a fixed fiber illuminates the sample through an imaging lens and a collection fiber samples the backscattered angular distribution by scanning across the Fourier plane image of the sample. We characterize the optical performance of the probe and demonstrate the ability to execute depth-resolved sizing with subwavelength accuracy by using a double-layer phantom containing two sizes of polystyrene microspheres. PMID:19838271
NASA Astrophysics Data System (ADS)
Griesbaum, Luisa; Marx, Sabrina; Höfle, Bernhard
2017-07-01
In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Particularly pluvial urban floods, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3-D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest as well as an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth is achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracies.
In vivo rat deep brain imaging using photoacoustic computed tomography (Conference Presentation)
NASA Astrophysics Data System (ADS)
Lin, Li; Li, Lei; Zhu, Liren; Hu, Peng; Wang, Lihong V.
2017-03-01
The brain has been likened to a great stretch of unknown territory consisting of a number of unexplored continents. Small animal brain imaging plays an important role in charting that territory. By using 1064 nm illumination from the side, we imaged the full coronal depth of rat brains in vivo. The experiment was performed using a real-time full-ring-array photoacoustic computed tomography (PACT) imaging system, which achieved an imaging depth of 11 mm and a 100 μm radial resolution. Because of the fast imaging speed of the full-ring-array PACT system, no animal motion artifacts were induced. The frame rate of the system was limited by the laser repetition rate (50 Hz). In addition to anatomical imaging of the blood vessels in the brain, we continuously monitored correlations between the two brain hemispheres in one of the coronal planes. Resting states in the coronal plane were measured before and after stroke-inducing ligation surgery on a neck artery.
Poddar, Raju; Cortés, Dennis E.; Werner, John S.; Mannis, Mark J.
2013-01-01
A high-speed (100 kHz A-scan rate) complex conjugate resolved 1 μm swept source optical coherence tomography (SS-OCT) system using coherence revival of the light source is suitable for dense three-dimensional (3-D) imaging of the anterior segment. The short acquisition time helps to minimize the influence of motion artifacts. The extended depth range of the SS-OCT system allows topographic analysis of clinically relevant images of the entire depth of the anterior segment of the eye. Patients with the type 1 Boston Keratoprosthesis (KPro) require evaluation of the full anterior segment depth. Current commercially available OCT systems are not suitable for this application due to limited acquisition speed, resolution, and axial imaging range. Moreover, most commonly used research grade and some clinical OCT systems implement a commercially available SS (Axsun) that offers only 3.7 mm imaging range (in air) in its standard configuration. We describe implementation of a common swept laser with built-in k-clock to allow phase stable imaging in both low range and high range, 3.7 and 11.5 mm in air, respectively, without the need to build an external MZI k-clock. As a result, 3-D morphology of the KPro position with respect to the surrounding tissue could be investigated in vivo both at high resolution and with large depth range to achieve noninvasive and precise evaluation of the success of the surgical procedure. PMID:23912759
Super-resolution for asymmetric resolution of FIB-SEM 3D imaging using AI with deep learning.
Hagita, Katsumi; Higuchi, Takeshi; Jinnai, Hiroshi
2018-04-12
Scanning electron microscopy equipped with a focused ion beam (FIB-SEM) is a promising three-dimensional (3D) imaging technique for nano- and meso-scale morphologies. In FIB-SEM, the specimen surface is stripped by an ion beam and imaged by an SEM installed orthogonally to the FIB. The lateral resolution is governed by the SEM, while the depth resolution, i.e., the FIB milling direction, is determined by the thickness of the stripped thin layer. In most cases, the lateral resolution is superior to the depth resolution; hence, asymmetric resolution is generated in the 3D image. Here, we propose a new approach based on an image-processing or deep-learning-based method for super-resolution of 3D images with such asymmetric resolution, so as to restore the depth resolution to achieve symmetric resolution. The deep-learning-based method learns from high-resolution sub-images obtained via SEM and recovers low-resolution sub-images parallel to the FIB milling direction. The 3D morphologies of polymeric nano-composites are used as test images, which are subjected to the deep-learning-based method as well as conventional methods. We find that the former yields superior restoration, particularly as the asymmetric resolution is increased. Our super-resolution approach for images having asymmetric resolution enables observation time reduction.
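As a point of reference for what the learned restoration must beat, the naive way to resymmetrize such a volume is plain interpolation along the milling axis. The sketch below is that trivial baseline only (linear interpolation in NumPy); it is not the paper's deep-learning method, and the function name and arguments are illustrative:

```python
import numpy as np

def upsample_depth_axis(vol, factor):
    """Linearly interpolate a 3-D volume along the FIB milling (depth)
    axis so the voxel spacing matches the finer SEM lateral spacing.
    This is the trivial baseline a learned restoration is compared to."""
    n, h, w = vol.shape
    old = np.arange(n, dtype=float)
    new = np.linspace(0.0, n - 1.0, (n - 1) * factor + 1)
    out = np.empty((new.size, h, w))
    for i in range(h):
        for j in range(w):
            out[:, i, j] = np.interp(new, old, vol[:, i, j].astype(float))
    return out
```

Interpolation can only smooth between measured slices; it cannot recover morphological detail lost to the coarse milling step, which is the gap the deep-learning approach targets.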
Feasibility of spatial frequency-domain imaging for monitoring palpable breast lesions
NASA Astrophysics Data System (ADS)
Robbins, Constance M.; Raghavan, Guruprasad; Antaki, James F.; Kainerstorfer, Jana M.
2017-12-01
In breast cancer diagnosis and therapy monitoring, there is a need for frequent, noninvasive disease progression evaluation. Breast tumors differ from healthy tissue in mechanical stiffness as well as optical properties, which allows optical methods to detect and monitor breast lesions noninvasively. Spatial frequency-domain imaging (SFDI) is a reflectance-based diffuse optical method that can yield two-dimensional images of absolute optical properties of tissue with an inexpensive and portable system, although depth penetration is limited. Since the absorption coefficient of breast tissue is relatively low and the tissue is quite flexible, there is an opportunity for compression of tissue to bring stiff, palpable breast lesions within the detection range of SFDI. Sixteen breast tissue-mimicking phantoms were fabricated containing stiffer, more highly absorbing tumor-mimicking inclusions of varying absorption contrast and depth. These phantoms were imaged with an SFDI system at five levels of compression. An increase in absorption contrast was observed with compression, and reliable detection of each inclusion was achieved when compression was sufficient to bring the inclusion center within ˜12 mm of the phantom surface. At highest compression level, contrasts achieved with this system were comparable to those measured with single source-detector near-infrared spectroscopy.
Human iris three-dimensional imaging at micron resolution by a micro-plenoptic camera
Chen, Hao; Woodward, Maria A.; Burke, David T.; Jeganathan, V. Swetha E.; Demirci, Hakan; Sick, Volker
2017-01-01
A micro-plenoptic system was designed to capture the three-dimensional (3D) topography of the anterior iris surface by simple single-shot imaging. Within a depth-of-field of 2.4 mm, depth resolution of 10 µm can be achieved with accuracy (systematic errors) and precision (random errors) below 20%. We demonstrated the application of our micro-plenoptic imaging system on two healthy irides, an iris with naevi, and an iris with melanoma. The ridges and folds, with height differences of 10~80 µm, on the healthy irides can be effectively captured. The front surface on the iris naevi was flat, and the iris melanoma was 50 ± 10 µm higher than the surrounding iris. The micro-plenoptic imaging system has great potential to be utilized for iris disease diagnosis and continuing, simple monitoring. PMID:29082081
Monocular depth perception using image processing and machine learning
NASA Astrophysics Data System (ADS)
Hombali, Apoorv; Gorde, Vaibhav; Deshpande, Abhishek
2011-10-01
This paper primarily exploits some of the more obscure, but inherent, properties of the camera and image to propose a simpler and more efficient way of perceiving depth. The proposed method involves the use of a single stationary camera at an unknown perspective and an unknown height to determine the depth of an object on unknown terrain. In doing so, a direct correlation between a pixel in an image and the corresponding location in real space has to be formulated. First, a calibration step is undertaken whereby the equation of the plane visible in the field of view is calculated, along with the relative distance between camera and plane, using a set of derived spatial geometrical relations coupled with a few intrinsic properties of the system. The depth of an unknown object is then perceived by first extracting the object under observation using a series of image processing steps, and then exploiting the aforementioned mapping between pixel and real-space coordinates. The performance of the algorithm is greatly enhanced by the introduction of reinforcement learning, making the system independent of hardware and environment. Furthermore, the depth calculation function is modified with a supervised learning algorithm, giving consistent improvement in results. Thus, the system uses past experience to successively optimize the current run. Using the above procedure, a series of experiments and trials was carried out to prove the concept and its efficacy.
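The pixel-to-real-space mapping described above, in its simplest form, intersects each pixel's viewing ray with the calibrated ground plane. A minimal sketch for a camera of known height and pitch (the function, its parameters, and the single-row simplification are illustrative assumptions, not the paper's calibration procedure):

```python
import math

def ground_distance(v, cam_height, pitch_rad, focal_px, cy):
    """Map an image row v to distance along the ground plane for a
    stationary camera at known height: the pixel's ray dips below the
    horizon by pitch + arctan((v - cy) / f), and intersecting that ray
    with the plane gives D = h / tan(angle)."""
    angle = pitch_rad + math.atan2(v - cy, focal_px)
    if angle <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return cam_height / math.tan(angle)
```

Rows higher in the image (smaller v) map to larger ground distances, and rays at or above the horizon never meet the plane, which is why the calibration of the plane equation matters so much here.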
Optical coherence microscope for invariant high resolution in vivo skin imaging
NASA Astrophysics Data System (ADS)
Murali, S.; Lee, K. S.; Meemon, P.; Rolland, J. P.
2008-02-01
A non-invasive, reliable and affordable imaging system with the capability of detecting skin pathologies such as skin cancer would be a valuable tool for pre-screening and diagnostic applications. Optical Coherence Microscopy (OCM) is emerging as a building block for in vivo optical diagnosis, where high numerical aperture optics is introduced in the sample arm to achieve high lateral resolution. While high numerical aperture optics enables high lateral resolution at the focus point, dynamic focusing is required to maintain the target lateral resolution throughout the depth of the sample being imaged. In this paper, we demonstrate the ability to dynamically focus in real time with no moving parts to a depth of up to 2 mm in skin-equivalent tissue, in order to achieve 3.5 μm lateral resolution throughout an 8 cubic millimeter sample. The built-in dynamic focusing ability is provided by an addressable liquid lens embedded in custom-designed optics, designed for a broadband laser source of 120 nm bandwidth centered at around 800 nm. The imaging probe was designed to be low-cost and portable. Design evaluation and tolerance analysis results show that the probe is robust to manufacturing errors and produces consistently high performance throughout the imaging volume.
NASA Astrophysics Data System (ADS)
Schleusener, J.; Reble, C.; Helfmann, J.; Gersonde, I.; Cappius, H.-J.; Glanert, M.; Fluhr, J. W.; Meinke, M. C.
2014-03-01
Two different designs for fibre-coupled Raman probes are presented that are optimized for discriminating cancerous and normal skin by achieving high epithelial sensitivity, i.e., detecting a major component of the Raman signal from the depth range of the epithelium. This is achieved by optimizing Raman spot diameters to the range of ≈200 µm, which distinguishes this approach from the common applications of either Raman microspectroscopy (1-5 µm) or measurements on larger sampling volumes using spot sizes of a few mm. Video imaging with a depicted area on the order of a few cm, to allow comparison of the Raman measurements with the location of the histo-pathologic findings, is integrated in both designs. This is important due to the inhomogeneity of cancerous lesions. Video image acquisition is achieved using white-light LED illumination, which avoids ambient light artefacts. The design requirements focus either on a compact, light-weight configuration for pen-like handling, or on a video-visible measurement spot to enable increased positioning accuracy. Both probes are evaluated with regard to spot size, Rayleigh suppression, background fluorescence, depth sensitivity, clinical handling and ambient light suppression. Ex vivo measurements on porcine ear skin correlate well with the findings of other groups.
NASA Astrophysics Data System (ADS)
Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador
2012-01-01
A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits the coding strength to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, so the whole system acts as a dynamically self-adaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper.
Program summary
Program title: DynWFC (Dynamic WaveFront Coding)
Catalogue identifier: AEKC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 10 483
No. of bytes in distributed program, including test data, etc.: 2 437 713
Distribution format: tar.gz
Programming language: Labview 8.5 with NI Vision, and MinGW C Compiler
Computer: Tested on PC Intel® Pentium®
Operating system: Tested on Windows XP
Classification: 18
Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene.
The program controls the camera acquisition, the spatial light modulator display and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask flexibly, given the trade-off between depth-of-field extension and achieved image quality. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control, in real time, the coding established by the spatial light modulator.
Chan, K L Andrew; Kazarian, Sergei G
2008-10-01
Attenuated total reflection-Fourier transform infrared (ATR-FT-IR) imaging is a very useful tool for capturing chemical images of various materials due to the simple sample preparation and the ability to measure wet samples or samples in an aqueous environment. However, the size of the array detector used for image acquisition is often limited and there is usually a trade-off between spatial resolution and the field of view (FOV). The combination of mapping and imaging can be used to acquire images with a larger FOV without sacrificing spatial resolution. Previous attempts have demonstrated this using an infrared microscope and a germanium hemispherical ATR crystal to achieve images of up to 2.5 mm x 2.5 mm, but with varying spatial resolution and depth of penetration across the imaged area. In this paper, we demonstrate a combination of mapping and imaging with a different approach, using an external optics housing for large ATR accessories and inverted ATR prisms to achieve ATR-FT-IR images with a large FOV and reasonable spatial resolution. The results have shown that a FOV of 10 mm x 14 mm can be obtained with a spatial resolution of approximately 40-60 µm when using an accessory that gives no magnification. A FOV of 1.3 mm x 1.3 mm can be obtained with a spatial resolution of approximately 15-20 µm when using a diamond ATR imaging accessory with 4x magnification. No significant change in image quality, such as spatial resolution or depth of penetration, has been observed across the whole FOV with this method, and the measurement time was approximately 15 minutes for an image consisting of 16 image tiles.
Imaging Mass Spectrometry on the Nanoscale with Cluster Ion Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winograd, Nicholas
2014-12-02
Imaging with cluster secondary ion mass spectrometry (SIMS) is reaching a mature level of development. Using a variety of molecular ion projectiles to stimulate desorption, 3-dimensional imaging with the selectivity of mass spectrometry can now be achieved with submicrometer spatial resolution and <10 nm depth resolution. In this Perspective, stock is taken regarding what it will require to routinely achieve these remarkable properties. Some issues include the chemical nature of the projectile, topography formation, differential erosion rates, and perhaps most importantly, ionization efficiency. Shortcomings of existing instrumentation are also noted. One key part of this discussion involves speculation on how best to resolve these issues.
Computational adaptive optics for broadband optical interferometric tomography of biological tissue.
Adie, Steven G; Graf, Benedikt W; Ahmad, Adeel; Carney, P Scott; Boppart, Stephen A
2012-05-08
Aberrations in optical microscopy reduce image resolution and contrast, and can limit imaging depth when focusing into biological samples. Static correction of aberrations may be achieved through appropriate lens design, but this approach offers neither the flexibility of simultaneously correcting aberrations for all imaging depths, nor the adaptability to correct for sample-specific aberrations for high-quality tomographic optical imaging. Incorporation of adaptive optics (AO) methods has demonstrated considerable improvement in optical image contrast and resolution in noninterferometric microscopy techniques, as well as in optical coherence tomography. Here we present a method to correct aberrations in a tomogram rather than in the beam of a broadband optical interferometry system. Based on Fourier optics principles, we correct aberrations of a virtual pupil using Zernike polynomials. When used in conjunction with the computed imaging method interferometric synthetic aperture microscopy, this computational AO enables object reconstruction (within the single scattering limit) with ideal focal-plane resolution at all depths. Tomographic reconstructions of tissue phantoms containing subresolution titanium-dioxide particles and of ex vivo rat lung tissue demonstrate aberration correction in datasets acquired with a highly astigmatic illumination beam. These results also demonstrate that imaging with an aberrated astigmatic beam provides the advantage of a more uniform depth-dependent signal compared to imaging with a standard Gaussian beam. With further work, computational AO could enable the replacement of complicated and expensive optical hardware components with algorithms implemented on a standard desktop computer, making high-resolution 3D interferometric tomography accessible to a wider group of users and nonspecialists.
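The virtual-pupil correction described above can be sketched in a few lines; the single defocus Zernike mode, the grid size, and the simple FFT pupil model below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def zernike_defocus(n):
    """Defocus mode Z2^0 = sqrt(3)*(2*rho^2 - 1) on an n x n pupil grid."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    rho2 = x ** 2 + y ** 2
    z = np.sqrt(3.0) * (2.0 * rho2 - 1.0)
    z[rho2 > 1.0] = 0.0          # zero outside the unit pupil
    return z

def correct_aberration(field, coeff):
    """Apply the conjugate of an assumed defocus phase to a complex en-face field."""
    n = field.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(field))        # virtual pupil plane
    spectrum *= np.exp(-1j * coeff * zernike_defocus(n))  # conjugate Zernike phase
    return np.fft.ifft2(np.fft.ifftshift(spectrum))
```

Aberrating a field with the phase `+coeff*Z` and correcting with the same coefficient restores it exactly in this noiseless model; in practice the Zernike coefficients would be estimated from the data.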
NASA Astrophysics Data System (ADS)
Bradu, Adrian; Kapinchev, Konstantin; Barnes, Fred; Garway-Heath, David F.; Rajendram, Ranjan; Keane, Pearce; Podoleanu, Adrian G.
2015-03-01
Recently, we introduced a novel Optical Coherence Tomography (OCT) method, termed Master Slave OCT (MS-OCT), specialized for delivering en-face images. This method uses principles of spectral domain interferometry in two stages. MS-OCT operates like a time domain OCT, selecting signals from a chosen depth only while scanning the laser beam across the eye. Time domain OCT allows real-time production of an en-face image, although relatively slowly. As a major advance, the Master Slave method allows collection of signals from any number of depths, as required by the user. However, the data processing required to generate images at multiple depths simultaneously is not achievable with commodity multicore processors alone. We compare here the major improvement in processing and display brought about by using graphics cards. We demonstrate images obtained with a swept source at 100 kHz (which determines an acquisition time Ta = 1.6 s for a frame of 200×200 pixels). By the end of the acquired frame being scanned, using our computing capacity, 4 simultaneous en-face images could be created in T = 0.8 s. We demonstrate that by using graphics cards, 32 en-face images can be displayed in Td = 0.3 s. Faster swept source engines can be used with no difference in terms of Td. With 32 images (or more), volumes can be created for 3D display using en-face images, as opposed to the current technology where volumes are created using cross-section OCT images.
Lan, Gongpu; Mauger, Thomas F.; Li, Guoqiang
2015-01-01
We report on the theory and design of an adaptive objective lens for ultra-broadband near-infrared light imaging with a large dynamic optical depth scanning range, using an embedded tunable lens; it can find wide application in deep tissue biomedical imaging systems, such as confocal microscopy, optical coherence tomography (OCT), and two-photon microscopy, both in vivo and ex vivo. This design is based on, but not limited to, a home-made prototype of a liquid-filled membrane lens with a clear aperture of 8 mm and a thickness of 2.55 mm - 3.18 mm. It is beneficial to have an adaptive objective lens that allows an extended depth scanning range larger than the focal length zoom range, since this keeps the magnification of the whole system, numerical aperture (NA), field of view (FOV), and resolution more consistent. To achieve this goal, a systematic theory is presented, for the first time to our knowledge, by inserting the varifocal lens between a front and a back solid lens group. The designed objective has a compact size (10 mm diameter and 15 mm length), an ultra-broad working bandwidth (760 nm - 920 nm), a large depth scanning range (7.36 mm in air), which is 1.533 times the focal length zoom range (4.8 mm in air), and a FOV of around 1 mm × 1 mm. Diffraction-limited performance can be achieved within this ultra-broad bandwidth through all the scanning depth (the resolution is 2.22 μm - 2.81 μm, calculated at a wavelength of 800 nm with an NA of 0.214 - 0.171). The chromatic focal shift is within the depth of focus (field). The chromatic difference in distortion is nearly zero and the maximum distortion is less than 0.05%.
Improved image processing of road pavement defect by infrared thermography
NASA Astrophysics Data System (ADS)
Sim, Jun-Gi
2018-03-01
This paper aims to achieve improved image processing for the clear identification of defects in damaged road pavement structures using infrared thermography non-destructive testing (NDT). To that end, four types of pavement specimens containing internal defects were fabricated, and results were obtained by heating the specimens with natural light. The results showed that defects located down to a depth of 3 cm could be detected by infrared thermography NDT using the improved image processing method.
Three-dimensional digital mapping of the optic nerve head cupping in glaucoma
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Ramirez, Manuel; Morales, Jose
1992-08-01
Visualization of the optic nerve head cupping is clinically achieved by stereoscopic viewing of a fundus image pair of the suspected eye. A novel algorithm for three-dimensional digital surface representation of the optic nerve head, using fusion of a stereo depth map with a linearly stretched intensity image of a stereo fundus image pair, is presented. Prior to depth map acquisition, a number of preprocessing tasks, including feature extraction, registration by cepstral analysis, and correction for intensity variations, are performed. The depth map is obtained by using a coarse-to-fine strategy for obtaining disparities between corresponding areas. The matching techniques used to obtain the translational differences at every step rely on cepstral analysis, with a correlation-like scanning technique in the spatial domain for the finest details. The quantitative and precise representation of the optic nerve head surface topography following this algorithm is not computationally intensive and should provide more useful information than qualitative stereoscopic viewing of the fundus alone as one of the criteria for diagnosis of glaucoma.
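The shift-matching step can be illustrated with phase correlation, a close spectral relative of the cepstral matching described above (a hedged sketch; the array sizes and whitening constant are arbitrary choices, not from the paper):

```python
import numpy as np

def estimate_shift(a, b):
    """Integer translation of image b relative to a, from the phase of the
    cross-power spectrum: its inverse FFT is an impulse at the shift."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    r = np.conj(A) * B
    r /= np.abs(r) + 1e-12                # whiten: keep phase only
    corr = np.fft.ifft2(r).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks past the half-size around to negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Sub-pixel accuracy, as needed for fine disparity estimation, would require interpolating around the correlation peak.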
Optical coherence microscopy for deep tissue imaging of the cerebral cortex with intrinsic contrast
Srinivasan, Vivek J.; Radhakrishnan, Harsha; Jiang, James Y.; Barry, Scott; Cable, Alex E.
2012-01-01
In vivo optical microscopic imaging techniques have recently emerged as important tools for the study of neurobiological development and pathophysiology. In particular, two-photon microscopy has proved to be a robust and highly flexible method for in vivo imaging in highly scattering tissue. However, two-photon imaging typically requires extrinsic dyes or contrast agents, and imaging depths are limited to a few hundred microns. Here we demonstrate Optical Coherence Microscopy (OCM) for in vivo imaging of neuronal cell bodies and cortical myelination up to depths of ~1.3 mm in the rat neocortex. Imaging does not require the administration of exogenous dyes or contrast agents, and is achieved through intrinsic scattering contrast and image processing alone. Furthermore, using OCM we demonstrate in vivo, quantitative measurements of optical properties (index of refraction and attenuation coefficient) in the cortex, and correlate these properties with laminar cellular architecture determined from the images. Lastly, we show that OCM enables direct visualization of cellular changes during cell depolarization and may therefore provide novel optical markers of cell viability.
Correlation Plenoptic Imaging.
D'Angelo, Milena; Pepe, Francesco V; Garuccio, Augusto; Scarcelli, Giuliano
2016-06-03
Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thus, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.
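For context, the refocusing that standard plenoptic imaging performs (and that correlation plenoptic imaging improves on) is a shift-and-sum over sub-aperture views; the sketch below is a generic illustration with assumed array conventions, not the correlation-based method of this Letter:

```python
import numpy as np

def refocus(subaperture, alpha):
    """Shift-and-sum refocusing of a (U, V, Y, X) stack of sub-aperture images.
    Each view is translated in proportion to its angular offset and averaged;
    alpha selects the depth plane brought into focus."""
    U, V, Y, X = subaperture.shape
    out = np.zeros((Y, X))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(subaperture[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 0` the views are averaged without shifting; varying `alpha` sweeps the synthetic focal plane through depth, which is exactly the trade-off against spatial resolution that the abstract refers to.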
NASA Astrophysics Data System (ADS)
Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.
2012-01-01
Frequent monitoring of the gingival sulcus would provide valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a 3D high-resolution, high-speed imaging modality, is able to provide information on pocket depth, gum contour, gum texture, and gum recession simultaneously. A handheld forward-viewing miniature resonant fiber-scanning probe was developed for in vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating the data processing using a graphics processing unit. Preliminary results showed real-time in vivo imaging at 33 fps with an imaging range of 2 mm (lateral) by 3 mm (depth). The gap between the tooth and gum area was clearly visualized. Further quantitative analysis of the gingival sulcus will be performed on the acquired images.
NASA Astrophysics Data System (ADS)
An, Lin; Shen, Tueng T.; Wang, Ruikang K.
2011-10-01
This paper presents comprehensive, depth-resolved retinal microvasculature images of the human retina achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Due to its high flow sensitivity, UHS-OMAG is much more sensitive than the traditional OMAG system to tissue motion caused by involuntary movement of the human eye and head. To mitigate these motion artifacts in the final imaging results, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerance, critical for the UHS-OMAG system to achieve retinal microvasculature images of high quality. Furthermore, the new UHS-OMAG system employs a high speed line scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines per B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first has low lateral resolution (16 μm) and a wide field of view (4 × 3 mm² with a single scan and 7 × 8 mm² for multiple scans), while the second has high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm² with a single scan). The imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.
Three dimensional live-cell STED microscopy at increased depth using a water immersion objective
NASA Astrophysics Data System (ADS)
Heine, Jörn; Wurm, Christian A.; Keller-Findeisen, Jan; Schönle, Andreas; Harke, Benjamin; Reuss, Matthias; Winter, Franziska R.; Donnert, Gerald
2018-05-01
Modern fluorescence superresolution microscopes are capable of imaging living cells on the nanometer scale. One such technique is stimulated emission depletion (STED), which increases the microscope's resolution many times in the lateral and axial directions. To achieve these high resolutions not only close to the coverslip but also at greater depths, the choice of objective becomes crucial. Oil immersion objectives have frequently been used for STED imaging since their high numerical aperture (NA) leads to high spatial resolutions. But during live-cell imaging, especially at great penetration depths, these objectives have a distinct disadvantage. The refractive index mismatch between the immersion oil and the usually aqueous embedding media of living specimens results in unwanted spherical aberrations. These aberrations distort the point spread functions (PSFs). Notably, during z- and 3D-STED imaging, the resolution increase along the optical axis is severely hampered, if achievable at all. To overcome this limitation, we here use a water immersion objective in combination with a spatial light modulator for z-STED measurements of living samples at great depths. This compact design allows switching between objectives without having to adapt the STED beam path and enables on-the-fly alterations of the STED PSF to correct for aberrations. Furthermore, we derive the influence of the NA on the axial STED resolution theoretically and experimentally. We show under live-cell imaging conditions that a water immersion objective yields far better results than an oil immersion objective at penetration depths of 5-180 μm.
NASA Astrophysics Data System (ADS)
Zhou, Yi; Tang, Yan; Deng, Qinyuan; Liu, Junbo; Wang, Jian; Zhao, Lixin
2017-08-01
Dimensional metrology of microstructures plays an important role in addressing quality issues and assessing the performance of micro-fabricated products. In white light interferometry, the proposed method measures three-dimensional topography through modulation depth in the spatial frequency domain. A normalized modulation depth is first obtained in the xy plane (image plane) for each CCD image individually. After that, the modulation depth of each pixel is analyzed along the scanning direction (z-axis) to recover the topography of micro samples. Owing to the characteristics of modulation depth in broadband light interferometry, the method effectively suppresses the negative influences of light fluctuations and external irradiance disturbance. Both theory and experiments are elaborated in detail to verify that the modulation depth-based method greatly improves the stability and sensitivity of the measurement system with satisfactory precision. The technique achieves improved robustness in complex measurement environments, with the potential to be applied to online topography measurement in fields such as chemistry and medicine.
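The per-pixel search for the scan position of maximum modulation can be sketched with an envelope-peak approach; the Hilbert-transform envelope and the synthetic fringe model below are illustrative stand-ins for the paper's spatial-frequency-domain processing:

```python
import numpy as np

def surface_height(stack, z):
    """Per-pixel height from a white-light interferogram stack of shape (Z, Y, X):
    strip the background, build the analytic-signal envelope along z, and take
    the scan position of maximum modulation depth."""
    Z = stack.shape[0]
    ac = stack - stack.mean(axis=0)          # remove the DC background
    spec = np.fft.fft(ac, axis=0)
    h = np.zeros(Z)                          # analytic-signal (Hilbert) filter
    h[0] = 1.0
    if Z % 2 == 0:
        h[1:Z // 2] = 2.0
        h[Z // 2] = 1.0
    else:
        h[1:(Z + 1) // 2] = 2.0
    envelope = np.abs(np.fft.ifft(spec * h[:, None, None], axis=0))
    return z[np.argmax(envelope, axis=0)]
```

For a stack of real fringes, `z[argmax]` gives the scan position of peak fringe contrast at each pixel, i.e., a surface height map at the resolution of the z-scan step.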
Extending the depth of field with chromatic aberration for dual-wavelength iris imaging.
Fitzgerald, Niamh M; Dainty, Christopher; Goncharov, Alexander V
2017-12-11
We propose a method of extending the depth of field to twice that achievable by conventional lenses, for the purpose of a low-cost iris recognition front-facing camera in mobile phones. By introducing intrinsic primary chromatic aberration into the lens, the depth of field is doubled by means of dual wavelength illumination. The lens parameters (radius of curvature, optical power) can be found analytically by paraxial raytracing. The effective range of distances covered increases with the dispersion of the chosen glass and with a larger distance to the near object point.
Mobile robots exploration through cnn-based reinforcement learning.
Tai, Lei; Liu, Ming
2016-01-01
Exploration of an unknown environment is an elemental application for mobile robots. In this paper, we outline a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model takes the depth image from an RGB-D sensor as its only input. The feature representation of the depth image is extracted through a pre-trained convolutional neural network model. Building on the recent success of the deep Q-network in artificial intelligence, the robot controller achieved exploration and obstacle avoidance capabilities in several different simulated environments. This is the first time that reinforcement learning has been used to build an exploration strategy for mobile robots from raw sensor information.
Active-passive data fusion algorithms for seafloor imaging and classification from CZMIL data
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Ramnath, Vinod; Feygels, Viktor; Kim, Minsu; Mathur, Abhinav; Aitken, Jennifer; Tuell, Grady
2010-04-01
CZMIL will simultaneously acquire lidar and passive spectral data. These data will be fused to produce enhanced seafloor reflectance images from each sensor, and combined at a higher level to achieve seafloor classification. In the DPS software, the lidar data will first be processed to solve for depth, attenuation, and reflectance. The depth measurements will then be used to constrain the spectral optimization of the passive spectral data, and the resulting water column estimates will be used recursively to improve the estimates of seafloor reflectance from the lidar. Finally, the resulting seafloor reflectance cube will be combined with texture metrics estimated from the seafloor topography to produce classifications of the seafloor.
Miniature objective lens with variable focus for confocal endomicroscopy
Kim, Minkyu; Kang, DongKyun; Wu, Tao; Tabatabaei, Nima; Carruth, Robert W.; Martinez, Ramses V; Whitesides, George M.; Nakajima, Yoshikazu; Tearney, Guillermo J.
2014-01-01
Spectrally encoded confocal microscopy (SECM) is a reflectance confocal microscopy technology that can rapidly image large areas of luminal organs at microscopic resolution. One of the main challenges for large-area SECM imaging in vivo is maintaining the same imaging depth within the tissue when patient motion and tissue surface irregularity are present. In this paper, we report the development of a miniature vari-focal objective lens that can be used in an SECM endoscopic probe to conduct adaptive focusing and to maintain the same imaging depth during in vivo imaging. The vari-focal objective lens is composed of an aspheric singlet with an NA of 0.5, a miniature water chamber, and a thin elastic membrane. The water volume within the chamber was changed to control curvature of the elastic membrane, which subsequently altered the position of the SECM focus. The vari-focal objective lens has a diameter of 5 mm and thickness of 4 mm. A vari-focal range of 240 μm was achieved while maintaining lateral resolution better than 2.6 μm and axial resolution better than 26 μm. Volumetric SECM images of swine esophageal tissues were obtained over the vari-focal range of 260 μm. SECM images clearly visualized cellular features of the swine esophagus at all focal depths, including basal cell nuclei, papillae, and lamina propria.
A new data processing technique for Rayleigh-Taylor instability growth experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Yongteng; Tu, Shaoyong; Miao, Wenyong
Typical face-on experiments for Rayleigh-Taylor instability studies involve time-resolved radiography of an accelerated foil, with the line of sight of the radiography along the direction of motion. The usual method, which derives perturbation amplitudes from the face-on images, reverses the actual image transmission procedure, so the obtained results have a large error in the case of large optical depth. To improve the accuracy of data processing, a new technique has been developed to process the face-on images. This technique is based on the convolution theorem; refined solutions of optical depth can be achieved by solving equations. Furthermore, we discuss both techniques for image processing, including the influence of the modulation transfer function of the imaging system and the backlighter spatial profile. We use the two methods to process the experimental results from the Shenguang-II laser facility, and the comparison shows that the new method effectively improves the accuracy of data processing.
A 5mm catheter for constant resolution probing in Fourier domain optical coherence endoscopy
NASA Astrophysics Data System (ADS)
Lee, Kye-Sung; Wu, Lei; Xie, Huikai; Ilegbusi, Olusegun; Costa, Marco; Rolland, Jannick P.
2007-02-01
A 5 mm biophotonic catheter was conceived for optical coherence tomography (OCT) with collimation optics, an axicon lens, and custom-designed imaging optics, yielding a 360-degree scan aimed at imaging within concave structures such as lung lobes. In OCT, a large depth of focus is necessary to image a thick sample with constant, high transverse resolution. There are two approaches to achieving constant lateral resolution in OCT: dynamic focusing or Bessel beam forming. This paper focuses on imaging with Bessel beams. A Bessel beam can be generated in the sample arm of the OCT interferometer when axicon optics is employed instead of a conventional focusing lens. We present a design for a 5 mm catheter that combines an axicon lens with imaging optics and couples a MEMS mirror attached to a micromotor, allowing 360-degree scanning with a resolution of about 5 microns across a depth of focus of about 1.2 mm.
Dynamic-Receive Focusing with High-Frequency Annular Arrays
NASA Astrophysics Data System (ADS)
Ketterling, J. A.; Mamou, J.; Silverman, R. H.
High-frequency ultrasound is commonly employed for ophthalmic and small-animal imaging because of the fine-resolution images it affords. Annular arrays offer improved depth of field and lateral resolution compared to the commonly used single-element focused transducers. The best image quality from an annular array is achieved by using synthetic transmit-to-receive focusing while utilizing data from all transmit-to-receive element combinations. However, annular arrays must be laterally scanned to form an image, and this requires one pass for each of the array elements when implementing full synthetic transmit-to-receive focusing. A dynamic-receive focusing approach permits a single pass, although at some cost in depth of field and lateral resolution. A five-element, 20-MHz annular array is examined to determine the acoustic beam properties for synthetic and dynamic-receive focusing. A spatial impulse response model is used to simulate the acoustic beam properties for each focusing case, and data acquired from a human eye-bank eye are then processed to demonstrate the effect of each approach on image quality.
Experimental assessment of a 3-D plenoptic endoscopic imaging system.
Le, Hanh N D; Decker, Ryan; Krieger, Axel; Kang, Jin U
2017-01-01
An endoscopic imaging system using a plenoptic technique to reconstruct 3-D information is demonstrated and analyzed in this Letter. The proposed setup integrates a clinical surgical endoscope with a plenoptic camera to achieve a depth accuracy error of about 1 mm and a precision error of about 2 mm, within a 25 mm × 25 mm field of view, operating at 11 frames per second.
Plenoptic layer-based modeling for image based rendering.
Pearson, James; Brookes, Mike; Dragotti, Pier Luigi
2013-09-01
Image based rendering is an attractive alternative to model based rendering for generating novel views because of its lower complexity and potential for photo-realistic results. To reduce the number of images necessary for alias-free rendering, some geometric information about the 3D scene is normally necessary. In this paper, we present a fast automatic layer-based method for synthesizing an arbitrary new view of a scene from a set of existing views. Our algorithm takes advantage of the knowledge of the typical structure of multiview data to perform occlusion-aware layer extraction. In addition, the number of depth layers used to approximate the geometry of the scene is chosen based on plenoptic sampling theory, with the layers placed non-uniformly to account for the scene distribution. The rendering is achieved using a probabilistic interpolation approach and by extracting the depth layer information on a small number of key images. Numerical results demonstrate that the algorithm is fast and yet only 0.25 dB away from the ideal performance achieved with ground-truth knowledge of the 3D geometry of the scene of interest. This indicates that there are measurable benefits from following the predictions of plenoptic theory and that they remain true when translated into a practical system for real world data.
Total variation based image deconvolution for extended depth-of-field microscopy images
NASA Astrophysics Data System (ADS)
Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.
2015-03-01
One approach for a detailed understanding of dynamical cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with a high lateral resolution in the range of a few hundred nanometres. In a previous study, extended depth-of-field microscopy (EDF-microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was proved in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of the light. Hence the focal ellipsoid is smeared out and images at first seem blurred. Image restoration by deconvolution, using the known point-spread function (PSF) of the optical system, is necessary to achieve sharp microscopic images over an extended depth of field. This work focuses on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. The inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and because the PSF used for deconvolution exactly fits only one plane within the object. We use nonlinear Total Variation based image restoration techniques, in which the different types of noise can be treated properly. Various algorithms are evaluated on artificially generated 3D images as well as on fluorescence measurements of BPAE cells.
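The restoration step described above, deconvolution with a known PSF, can be illustrated with a much simpler linear stand-in. The sketch below uses a frequency-domain Wiener filter in NumPy rather than the authors' nonlinear Total Variation method; the box PSF, image size and regularization constant `k` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener deconvolution with a known PSF.

    k is a regularization constant standing in for the noise-to-signal
    power ratio; larger k suppresses noise amplification.
    """
    # Pad the PSF to the image size and center it at the origin.
    psf_pad = np.zeros_like(blurred, dtype=float)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    psf_pad = np.roll(psf_pad,
                      (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                      axis=(0, 1))

    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    # Wiener filter: H* G / (|H|^2 + k)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Blur a simple test image with a 5x5 box PSF, then restore it.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
psf = np.ones((5, 5)) / 25.0

psf_pad = np.zeros_like(img)
psf_pad[:5, :5] = psf
psf_pad = np.roll(psf_pad, (-2, -2), axis=(0, 1))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf_pad)))

restored = wiener_deconvolve(blurred, psf)
```

On noiseless data the restored image is far closer to the original than the blurred input; with realistic Poisson/Gaussian noise, a linear filter like this amplifies noise, which is precisely the failure mode the paper's Total Variation approach addresses.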
Very high frame rate volumetric integration of depth images on mobile devices.
Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David
2015-11-01
Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, to give users freedom of movement and instantaneous reconstruction feedback, remains challenging, however. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations of the basic data structure and of its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates of up to 47 Hz on an Nvidia Shield Tablet and 910 Hz on an Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
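The core idea of voxel block hashing, allocating small dense voxel blocks sparsely and indexing them through a spatial hash, can be sketched in a few lines. The prime multipliers below are the hash commonly used for this purpose; the table size, block side and Python-dict bucket storage are simplifications for illustration, not the paper's optimised implementation.

```python
import numpy as np

# Prime multipliers commonly used for spatial hashing of block coordinates.
P1, P2, P3 = 73856093, 19349669, 83492791
TABLE_SIZE = 2 ** 20

def block_hash(bx, by, bz):
    """Hash integer voxel-block coordinates into a fixed-size table."""
    return ((bx * P1) ^ (by * P2) ^ (bz * P3)) % TABLE_SIZE

class VoxelBlockMap:
    """Sparse map from block coordinates to dense 8x8x8 voxel arrays.

    Only blocks near observed surfaces are ever allocated, which is what
    keeps memory use low enough for mobile devices.
    """
    def __init__(self, block_side=8):
        self.block_side = block_side
        self.blocks = {}  # exact-key buckets resolve hash collisions

    def allocate(self, bx, by, bz):
        key = (bx, by, bz)
        if key not in self.blocks:
            s = self.block_side
            self.blocks[key] = np.zeros((s, s, s), dtype=np.float32)
        return self.blocks[key]

vmap = VoxelBlockMap()
vmap.allocate(1, 2, 3)[0, 0, 0] = 0.5   # touch one voxel in one block
h = block_hash(1, 2, 3)
```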
110 °C range athermalization of wavefront coding infrared imaging systems
NASA Astrophysics Data System (ADS)
Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong
2017-09-01
Athermalization over a 110 °C range is significant but difficult in the design of infrared imaging systems. Our wavefront-coding athermalized infrared imaging system adopts an optical phase mask with smaller manufacturing errors and a decoding method based on a shrinkage function. Qualitative experiments show that the system has three prominent merits: (1) it works well over a temperature range of 110 °C; (2) it extends the focal depth by up to 15.2 times; and (3) it achieves decoded images close to the corresponding in-focus infrared images, with a mean structural similarity index (MSSIM) value greater than 0.85.
Real-time depth processing for embedded platforms
NASA Astrophysics Data System (ADS)
Rahnama, Oscar; Makarov, Aleksej; Torr, Philip
2017-05-01
Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (i.e. LiDAR, infrared): they are power efficient, cheap, robust to lighting conditions and inherently synchronized with the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications, which are often constrained by power consumption, obtaining accurate results in real time is a challenge. We demonstrate a computationally and memory-efficient implementation of a stereo block-matching algorithm on an FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 W of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster-scan readout of modern digital image sensors.
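The stereo block matching that this work accelerates in hardware can be sketched in software. The NumPy version below computes a dense disparity map by sum-of-absolute-differences (SAD) search over a rectified pair; the block size, search range and synthetic test images are illustrative assumptions, not the paper's FPGA design.

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Dense disparity by SAD block matching on a rectified stereo pair.

    For each block in the left image, search leftwards in the right image
    for the offset (disparity) with the lowest sum of absolute differences.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: the left view sees everything shifted right
# by a constant 4-pixel disparity relative to the right view.
rng = np.random.default_rng(1)
right = rng.random((32, 48))
left = np.roll(right, 4, axis=1)
disp = block_match_disparity(left, right)
```

The triple loop is exactly the data-parallel structure that maps well to an in-stream FPGA pipeline: each output pixel depends only on a small, fixed window of the two input rasters.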
Ultra-high-speed variable focus optics for novel applications in advanced imaging
NASA Astrophysics Data System (ADS)
Kang, S.; Dotsenko, E.; Amrhein, D.; Theriault, C.; Arnold, C. B.
2018-02-01
With the advancement of ultra-fast manufacturing technologies, high speed imaging with high 3D resolution has become increasingly important. Here we show the use of an ultra-high-speed variable focus optical element, the TAG Lens, to enable new ways to acquire 3D information from an object. The TAG Lens uses sound to adjust the index of refraction profile in a liquid and thereby can achieve focal scanning rates greater than 100 kHz. When combined with a high-speed pulsed LED and a high-speed camera, we can exploit this phenomenon to achieve high-resolution imaging through large depths. By combining the image acquisition with digital image processing, we can extract relevant parameters such as tilt and angle information from objects in the image. Due to the high speeds at which images can be collected and processed, we believe this technique can be used as an efficient method of industrial inspection and metrology for high throughput applications.
Design and testing of an annular array for very-high-frequency imaging
NASA Astrophysics Data System (ADS)
Ketterling, Jeffrey A.; Ramachandran, Sarayu; Lizzi, Frederic L.; Aristizábal, Orlando; Turnbull, Daniel H.
2004-05-01
Very-high-frequency ultrasound (VHFU) transducer technology is currently experiencing a great deal of interest. Traditionally, researchers have used single-element transducers which achieve exceptional lateral image resolution although at a very limited depth of field. A 5-ring focused annular array, a transducer geometry that permits an increased depth of field via electronic focusing, has been constructed. The transducer is fabricated with a PVDF membrane and a copper-clad Kapton film with an annular array pattern. The PVDF is bonded to the Kapton film and pressed into a spherically curved shape. The back side of the transducer is then filled with epoxy. One side of the PVDF is metallized with gold, forming the ground plane of the transducer. The array elements are accessed electrically via copper traces formed on the Kapton film. The annular array consists of 5 equal-area rings with an outer diameter of 1 cm and a radius of curvature of 9 mm. A wire reflector target was used to test the imaging capability of the transducer by acquiring B-scan data for each transmit/receive pair. A synthetic aperture approach was then used to reconstruct the image and demonstrate the enhanced depth of field capabilities of the transducer.
Simulated disparity and peripheral blur interact during binocular fusion.
Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J
2014-07-17
We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual’s aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion. © 2014 ARVO.
Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves a color accuracy of ΔE<0.01.
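The ΔE figure quoted above is a Euclidean distance between colors in CIE Lab space. As a sketch of how such a value is computed from the XYZ tristimulus values the camera measures, here is the standard XYZ→Lab conversion followed by the CIE76 color difference; the D65 white point and the sample colors are illustrative.

```python
import numpy as np

# D65 reference white tristimulus values used for normalization.
WHITE = np.array([95.047, 100.0, 108.883])

def xyz_to_lab(xyz, white=WHITE):
    """Convert CIE XYZ tristimulus values to CIE Lab."""
    t = np.asarray(xyz, dtype=float) / white
    delta = 6 / 29
    # Piecewise cube-root mapping from the CIE Lab definition.
    f = np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e76(xyz1, xyz2):
    """CIE76 color difference: Euclidean distance in Lab space."""
    return float(np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2)))

lab_white = xyz_to_lab(WHITE)   # the white point maps to L=100, a=b=0
de = delta_e76([41.0, 35.0, 10.0], [41.2, 35.1, 10.0])
```

A ΔE below roughly 1 is generally taken as imperceptible, which puts the paper's ΔE<0.01 claim well below visual threshold.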
NASA Astrophysics Data System (ADS)
Lee, Min Sun; Kim, Kyeong Yun; Ko, Guen Bae; Lee, Jae Sung
2017-05-01
In this study, we developed a proof-of-concept prototype PET system using a pair of depth-of-interaction (DOI) PET detectors based on the proposed DOI-encoding method and digital silicon photomultipliers (dSiPMs). Our cost-effective DOI measurement method is based on a triangular-shaped reflector and requires only a single-layer pixelated crystal and single-ended signal readout. The DOI detector consisted of an 18 × 18 array of unpolished LYSO crystals (1.47 × 1.47 × 15 mm³) wrapped with triangular-shaped reflectors. The DOI information was encoded by the depth-dependent light distribution tailored by the reflector geometry, and DOI correction was performed using four-step depth calibration data and maximum-likelihood (ML) estimation. The detector pair and the object were placed on two motorized rotation stages to demonstrate a 12-block-ring PET geometry with a diameter of 11.15 cm. Spatial resolution was measured, and phantom and animal imaging studies were performed to investigate imaging performance. All images were reconstructed with and without DOI correction to examine the impact of the DOI measurement. The pair of dSiPM-based DOI PET detectors showed good physical performance: peak-to-valley ratios of 2.82 and 3.09, energy resolutions of 14.30% and 18.95%, and DOI resolutions of 4.28 and 4.24 mm averaged over all crystals and all depths. Sub-millimeter spatial resolution was achieved at the center of the field of view (FOV). After applying the ML-based DOI correction, an improvement of up to 36.92% was achieved in the radial spatial resolution, and a uniform resolution was observed within 5 cm of the transverse PET FOV. We successfully acquired phantom and animal images with improved spatial resolution and contrast by using the DOI measurement. The proposed DOI-encoding method was successfully demonstrated at the system level and exhibited good performance, showing its feasibility for animal PET applications with high spatial resolution and sensitivity.
Efficient dense blur map estimation for automatic 2D-to-3D conversion
NASA Astrophysics Data System (ADS)
Vosters, L. P. J.; de Haan, G.
2012-03-01
Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can be only reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques, are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low latency, line scanning based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution solves the most distracting errors. A subjective assessment by paired comparison on a set of challenging low-depth-of-field images shows that the proposed approach achieves equal 3D image quality as optimization based approaches, and that facial blur compensation results in a significant improvement.
X-ray imaging for security applications
NASA Astrophysics Data System (ADS)
Evans, J. Paul
2004-01-01
The X-ray screening of luggage by aviation security personnel may be badly hindered by the lack of visual cues to depth in an image that has been produced by transmitted radiation. Two-dimensional "shadowgraphs" with "organic" and "metallic" objects encoded using two different colors (usually orange and blue) are still in common use. In the context of luggage screening there are no reliable cues to depth present in individual shadowgraph X-ray images. Therefore, the screener is required to convert the 'zero depth resolution' shadowgraph into a three-dimensional mental picture to be able to interpret the relative spatial relationship of the objects under inspection. Consequently, additional cognitive processing is required e.g. integration, inference and memory. However, these processes can lead to serious misinterpretations of the actual physical structure being examined. This paper describes the development of a stereoscopic imaging technique enabling the screener to utilise binocular stereopsis and kinetic depth to enhance their interpretation of the actual nature of the objects under examination. Further work has led to the development of a technique to combine parallax data (to calculate the thickness of a target material) with the results of a basis material subtraction technique to approximate the target's effective atomic number and density. This has been achieved in preliminary experiments with a novel spatially interleaved dual-energy sensor which reduces the number of scintillation elements required by 50% in comparison to conventional sensor configurations.
Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei
2017-01-01
Depth image-based rendering (DIBR), which renders virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D-to-3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D-to-3D conversion process inevitably leads to holes in the resulting 3D image at newly exposed areas. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform for its low complexity and high efficiency. First, our framework integrates hybrid constraints, including scene structure, edge consistency and visual saliency information, in the transformed domain to improve depth map preprocessing in an implicit way. Then, adaptive smooth localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Unlike other similar methods, the proposed method simultaneously achieves hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it yields visually satisfactory results with less computational complexity for high-quality 2D-to-3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method. PMID:28407027
SIMS of organics—Advances in 2D and 3D imaging and future outlook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilmore, Ian S.
Secondary ion mass spectrometry (SIMS) has become a powerful technique for the label-free analysis of organics from cells to electronic devices. The development of cluster ion sources has revolutionized the field, increasing the sensitivity for organics by two or three orders of magnitude and, for large clusters such as C₆₀ and argon clusters, allowing depth profiling of organics. The latter has provided the capability to generate stunning three-dimensional images with depth resolutions of around 5 nm, simply unavailable by other techniques. The current state of the art allows molecular images with a spatial resolution of around 500 nm to be achieved, and future developments are likely to progress into the sub-100 nm regime. This review is intended to bring those with some familiarity with SIMS up to date with the latest developments for organics, the fundamental principles that underpin them, and the directions that will define future progress. State-of-the-art examples are showcased, and signposts to more in-depth reviews about specific topics are given for the specialist.
Endoscopic Optical Coherence Tomography for Clinical Gastroenterology
Tsai, Tsung-Han; Fujimoto, James G.; Mashimo, Hiroshi
2014-01-01
Optical coherence tomography (OCT) is a real-time optical imaging technique that is similar in principle to ultrasonography, but employs light instead of sound waves and allows depth-resolved images with near-microscopic resolution. Endoscopic OCT allows the evaluation of broad-field and subsurface areas and can be used ancillary to standard endoscopy, narrow band imaging, chromoendoscopy, magnification endoscopy, and confocal endomicroscopy. This review article will provide an overview of the clinical utility of endoscopic OCT in the gastrointestinal tract and of recent achievements using state-of-the-art endoscopic 3D-OCT imaging systems. PMID:26852678
Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick
2015-01-01
UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently consumer grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratio of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R2-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently data of both high spatial and spectral resolution is not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. 
Band (combinations) providing the best performing models (R2-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications. PMID:26437410
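The OBRA idea used above, that the log ratio of two spectral bands varies roughly linearly with water depth because the bands attenuate at different rates, can be sketched with synthetic data. The attenuation coefficients, reflectances and band choices below are invented for illustration and are not the study's field measurements.

```python
import numpy as np

# Beer-Lambert style toy model: two bands attenuate with depth at
# different rates, so ln(band1 / band2) grows linearly with depth.
rng = np.random.default_rng(2)
depth = rng.uniform(0.1, 1.0, 200)        # submergence depth (m), synthetic
band_vis = 0.8 * np.exp(-0.5 * depth)     # visible band: weak attenuation
band_nir = 0.8 * np.exp(-3.0 * depth)     # NIR band: strong attenuation

x = np.log(band_vis / band_nir)           # OBRA-style band-ratio predictor
slope, intercept = np.polyfit(x, depth, 1)
pred = slope * x + intercept

# Coefficient of determination of the depth-retrieval regression.
r2 = 1 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
```

In this noiseless toy model the fit is essentially exact; in field data, variation in substrate and vegetation reflectance lowers the R² to values like the 0.67-0.90 range reported in the abstract.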
Three-dimensional wide-field pump-probe structured illumination microscopy
Kim, Yang-Hyo; So, Peter T.C.
2017-01-01
We propose a new structured illumination scheme for achieving depth resolved wide-field pump-probe microscopy with sub-diffraction limit resolution. By acquiring coherent pump-probe images using a set of 3D structured light illumination patterns, a 3D super-resolution pump-probe image can be reconstructed. We derive the theoretical framework to describe the coherent image formation and reconstruction scheme for this structured illumination pump-probe imaging system and carry out numerical simulations to investigate its imaging performance. The results demonstrate a lateral resolution improvement by a factor of three and providing 0.5 µm level axial optical sectioning. PMID:28380860
40 MHz high-frequency ultrafast ultrasound imaging.
Huang, Chih-Chung; Chen, Pei-Yu; Peng, Po-Hsun; Lee, Po-Yang
2017-06-01
Ultrafast high-frame-rate ultrasound imaging based on coherent plane-wave compounding has been developed for many biomedical applications. Most coherent plane-wave compounding systems operate at 3-15 MHz, and the image resolution in this frequency range is not sufficient for visualizing tissue microstructure. The purpose of this study was therefore to implement high-frequency ultrafast ultrasound imaging operating at 40 MHz. In the simulation study, plane-wave compounding imaging and conventional multifocus B-mode imaging were performed using the Field II toolbox for MATLAB. In experiments, plane-wave compounding images were obtained from a 256-channel ultrasound research platform with a 40 MHz array transducer. All images were produced from point-spread functions and cyst phantoms. The in vivo experiment was performed on zebrafish. Since high-frequency ultrasound exhibits lower penetration, chirp excitation was applied to increase the imaging depth in simulation. The simulation results showed that a lateral resolution of up to 66.93 μm and a contrast of up to 56.41 dB were achieved when using 75-angle plane-wave compounding. The experimental results showed a lateral resolution of up to 74.83 μm and a contrast of up to 44.62 dB with 75-angle compounding. The dead zone and compounding noise extend to depths of about 1.2 mm and 2.0 mm, respectively, in experimental compounding imaging. The structure of the zebrafish heart was observed clearly using plane-wave compounding imaging. Using fewer than 23 angles for compounding allowed a frame rate higher than 1000 frames per second, while the lateral resolution plateaus at about 72 μm once more than 10 plane-wave angles are used. This study shows the highest operational frequency for ultrafast high-frame-rate ultrasound imaging. © 2017 American Association of Physicists in Medicine.
The Athena Pancam and Color Microscopic Imager (CMI)
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Herkenhoff, K. E.; Schwochert, M.; Morris, R. V.; Sullivan, R.
2000-01-01
The Athena Mars rover payload includes two primary science-grade imagers: Pancam, a multispectral, stereo, panoramic camera system, and the Color Microscopic Imager (CMI), a multispectral and variable depth-of-field microscope. Both of these instruments will help to achieve the primary Athena science goals by providing information on the geology, mineralogy, and climate history of the landing site. In addition, Pancam provides important support for rover navigation and target selection for Athena in situ investigations. Here we describe the science goals, instrument designs, and instrument performance of the Pancam and CMI investigations.
Modelling of influence of spherical aberration coefficients on depth of focus of optical systems
NASA Astrophysics Data System (ADS)
Pokorný, Petr; Šmejkal, Filip; Kulmon, Pavel; Mikš, Antonín.; Novák, Jiří; Novák, Pavel
2017-06-01
This contribution describes how to model the influence of spherical aberration coefficients on the depth of focus of optical systems. Analytical formulas for the calculation of beam's caustics are presented. The conditions for aberration coefficients are derived for two cases when we require that either the Strehl definition or the gyration radius should be the identical in two symmetrically placed planes with respect to the paraxial image plane. One can calculate the maximum depth of focus and the minimum diameter of the circle of confusion of the optical system corresponding to chosen conditions. This contribution helps to understand how spherical aberration may affect the depth of focus and how to design such an optical system with the required depth of focus. One can perform computer modelling and design of the optical system and its spherical aberration in order to achieve the required depth of focus.
Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes
Boulos, Maged N Kamel; Robinson, Larry R
2009-01-01
Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837
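One of the cheapest stereoscopic solutions such reviews cover is the red-cyan anaglyph, which encodes the two eye views in complementary color channels of a single image so that filtered glasses route a different image to each eye. A minimal sketch follows; the channel assignment is the common convention, and the solid-color test views are illustrative.

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Combine a stereo pair into a red-cyan anaglyph.

    The left view supplies the red channel; the right view supplies the
    green and blue (cyan) channels, so red-cyan glasses deliver a
    different image to each eye.
    """
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]     # red from the left-eye view
    out[..., 1:] = right_rgb[..., 1:]  # green and blue from the right-eye view
    return out

# Tiny synthetic pair: a pure-red left view and a pure-blue right view.
left = np.zeros((4, 4, 3)); left[..., 0] = 1.0
right = np.zeros((4, 4, 3)); right[..., 2] = 1.0
ana = red_cyan_anaglyph(left, right)
```

Anaglyphs sacrifice color fidelity, which is why the higher-cost options in such reviews (shutter glasses, polarized or autostereoscopic displays) deliver each eye a full-color image instead.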
A 30-MHz piezo-composite ultrasound array for medical imaging applications.
Ritter, Timothy A; Shrout, Thomas R; Tutwiler, Rick; Shung, K Kirk
2002-02-01
Ultrasound imaging at frequencies above 20 MHz is capable of achieving improved resolution in clinical applications requiring limited penetration depth. High frequency arrays that allow real-time imaging are desired for these applications but are not currently available. In this work, a method for fabricating fine-scale 2-2 composites suitable for 30-MHz linear array transducers was successfully demonstrated. High thickness coupling, low mechanical loss, and moderate electrical loss were achieved. This piezo-composite was incorporated into a 30-MHz array that included acoustic matching, an elevation focusing lens, electrical matching, and an air-filled kerf between elements. Bandwidths near 60%, 15-dB insertion loss, and crosstalk less than -30 dB were measured. Images of both a phantom and an ex vivo human eye were acquired using a synthetic aperture reconstruction method, resulting in measured lateral and axial resolutions of approximately 100 μm.
Color image guided depth image super resolution using fusion filter
NASA Astrophysics Data System (ADS)
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images. Color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide image is an efficient way to get an HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm which uses an HR color image as a guide image and an LR depth image as input. We use a fusion filter combining a guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method can provide better quality in HR depth images both numerically and visually.
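As an editorial illustration of the color-guided depth upsampling idea in the abstract above, the following is a minimal joint bilateral upsampling sketch in NumPy. It is not the paper's fusion filter: the function name, the parameters (`sigma_s`, `sigma_r`, `radius`), and the simple Gaussian spatial/range weighting are all assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale, sigma_s=2.0, sigma_r=0.1, radius=2):
    """Upsample an LR depth map using an HR color image as guide.

    Each HR pixel's depth is a weighted average of nearby LR depth samples,
    weighted by spatial distance (in LR grid units) and by color similarity
    measured in the HR guide image. Hypothetical minimal sketch.
    """
    H, W = color_hr.shape[:2]
    h_lr, w_lr = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale          # position in LR grid
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ys, xs = int(round(yl)) + dy, int(round(xl)) + dx
                    if 0 <= ys < h_lr and 0 <= xs < w_lr:
                        # spatial weight in LR grid coordinates
                        ws = np.exp(-((ys - yl) ** 2 + (xs - xl) ** 2) / (2 * sigma_s ** 2))
                        # range weight from color difference in the HR guide
                        yh = min(int(ys * scale), H - 1)
                        xh = min(int(xs * scale), W - 1)
                        dc = color_hr[y, x] - color_hr[yh, xh]
                        wr = np.exp(-np.dot(dc, dc) / (2 * sigma_r ** 2))
                        acc += ws * wr * depth_lr[ys, xs]
                        wsum += ws * wr
            out[y, x] = acc / wsum if wsum > 0 else 0.0
    return out
```

The range weight is what keeps depth edges aligned with color edges in the guide; with a constant guide the filter degenerates to plain spatial interpolation.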
NASA Astrophysics Data System (ADS)
Fang, Qi; Curatolo, Andrea; Wijesinghe, Philip; Hamzah, Juliana; Ganss, Ruth; Noble, Peter B.; Karnowski, Karol; Sampson, David D.; Kim, Jun Ki; Lee, Wei M.; Kennedy, Brendan F.
2017-02-01
The mechanical forces that living cells experience represent an important framework in the determination of a range of intricate cellular functions and processes. Current insight into cell mechanics is typically provided by in vitro measurement systems; for example, atomic force microscopy (AFM) measurements are performed on cells in culture or, at best, on freshly excised tissue. Optical techniques, such as Brillouin microscopy and optical elastography, have been used for ex vivo and in situ imaging, recently achieving cellular-scale resolution. The utility of these techniques in cell mechanics lies in quick, three-dimensional and label-free mechanical imaging. Translation of these techniques toward minimally invasive in vivo imaging would provide unprecedented capabilities in tissue characterization. Here, we take the first steps along this path by incorporating a gradient-index micro-endoscope into an ultrahigh resolution optical elastography system. Using this endoscope, a lateral resolution of 2 µm is preserved over an extended depth-of-field of 80 µm, achieved by Bessel beam illumination. We demonstrate this combined system by imaging stiffness of a silicone phantom containing stiff inclusions and a freshly excised murine liver tissue. Additionally, we test this system on murine ribs in situ. We show that our approach can provide high quality extended depth-of-field images through an endoscope and has the potential to measure cell mechanics deep in tissue. Eventually, we believe this tool will be capable of studying biological processes and disease progression in vivo.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreozzi, J; Bruza, P; Saunders, S
Purpose: To investigate the viability of using Cherenkov imaging as a fast and robust method for quality assurance tests in the presence of a magnetic field, where other instruments can be limited. Methods: Water tank measurements were acquired from a clinically utilized adaptive magnetic resonance image guided radiation therapy (MR-IGRT) machine with three multileaf-collimator equipped 60Co sources. Cherenkov imaging used an intensified charge coupled device (ICCD) camera placed 3.5 m from the treatment isocenter, looking down the bore of the 0.35T MRI into a water tank. Images were post-processed to make quantitative comparisons of Cherenkov light intensity with both film and treatment planning system predictions, in terms of percent depth dose curves as well as lateral beam profile measurements. A TG-119 commissioning test plan (C4: C-Shape) was imaged in real-time at 6.33 frames per second to investigate the temporal and spatial resolution of the Cherenkov imaging technique. Results: A 0.33 mm/pixel Cherenkov image resolution was achieved across 1024×1024 pixels in this setup. Analysis of the Cherenkov image of a 10.5×10.5 cm treatment beam in the water tank successfully measured the beam width at the depth of maximum dose within 1.2% of the film measurement at the same point. The percent depth dose curve for the same beam was on average within 2% of ionization chamber measurements for corresponding depths between 3–100 mm. Cherenkov video of the TG-119 test plan provided qualitative agreement with the treatment planning system dose predictions, and a novel temporal verification of the treatment. Conclusions: Cherenkov imaging was successfully used to make QA measurements of percent depth dose curves and cross beam profiles of MR-IGRT radiotherapy machines after only several seconds of beam-on time and data capture; both curves were extracted from the same data set. Video-rate imaging of a dynamic treatment plan provided new information regarding temporal dose deposition. This study has been funded by NIH grants R21EB17559 and R01CA109558, as well as Norris Cotton Cancer Center Pilot funding.
NASA Astrophysics Data System (ADS)
Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei
2017-02-01
Endoscopic DOT has the potential to be applied to cancer-related imaging in tubular organs. Although DOT has a relatively large tissue penetration depth, endoscopic DOT is limited by the narrow space of the internal tubular tissue, which results in a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under the limited measurement conditions. To improve the resolution, a new FOCUSS algorithm along with an image reconstruction algorithm based on the effective detection range (EDR) is developed. This algorithm is based on the region of interest (ROI) to reduce the dimensions of the matrix. The shrinking method cuts down the computation burden. To reduce the computational complexity, a double conjugate gradient method is used in the matrix inversion. For a typical inner size and optical properties of cervix-like tubular tissue, reconstructed images from simulation data demonstrate that the proposed method achieves image quality equivalent to that obtained from the method based on EDR when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary of the model. The quantitative ratios of the reconstructed absorption and reduced scattering coefficients can be up to 70% and 80%, respectively, at depths under 5 mm. Furthermore, two close targets with different depths can be separated from each other. The proposed method will be useful for the development of endoscopic DOT technologies in tubular organs.
NASA Astrophysics Data System (ADS)
Wang, Z.; Li, T.; Pan, L.; Kang, Z.
2017-09-01
With increasing attention to the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open area, which restricts indoor applications. Depth information can help to distinguish regions that are difficult to segment out of RGB images because of similar color or texture in indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an encoder-decoder fully convolutional network for RGB-D image classification. We use Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and special features of RGB and D images in the network to enhance classification performance automatically. To explore better ways of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the other calculates MMD for the whole batch of features. Based on the classification result, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method can achieve a good performance on indoor RGB-D image semantic segmentation.
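The MK-MMD distance used as a feature-alignment measure above can be illustrated with a minimal multi-kernel MMD estimate in NumPy. This is an assumed sketch (fixed RBF bandwidths and a biased V-statistic estimator), not the paper's learned-kernel-weight formulation.

```python
import numpy as np

def mk_mmd2(X, Y, sigmas=(1.0, 2.0, 4.0)):
    """Biased (V-statistic) estimate of squared MMD between sample sets
    X and Y under a uniform mixture of RBF kernels. Rows are samples."""
    def kernel(A, B):
        # pairwise squared Euclidean distances between rows of A and B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        # average over the kernel mixture
        return sum(np.exp(-d2 / (2 * s ** 2)) for s in sigmas) / len(sigmas)
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()
```

The estimate is near zero when the two feature batches come from the same distribution and grows as they diverge, which is what makes it usable as a training-time alignment penalty.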
Bas-Relief Modeling from Normal Images with Intuitive Styles.
Ji, Zhongping; Ma, Weiyin; Sun, Xianfang
2014-05-01
Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques as described in this paper. The bas-relief style is controlled by choosing a parameter and setting a target height for each layer. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal image space instead of in object space, which makes it possible to use any popular image editing tools for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.
Analysis of Rapid Multi-Focal Zone ARFI Imaging
Rosenzweig, Stephen; Palmeri, Mark; Nightingale, Kathryn
2015-01-01
Acoustic radiation force impulse (ARFI) imaging has shown promise for visualizing structure and pathology within multiple organs; however, because the contrast depends on the push beam excitation width, image quality suffers outside of the region of excitation. Multi-focal zone ARFI imaging has previously been used to extend the region of excitation (ROE), but the increased acquisition duration and acoustic exposure have limited its utility. Supersonic shear wave imaging has previously demonstrated that through technological improvements in ultrasound scanners and power supplies, it is possible to rapidly push at multiple locations prior to tracking displacements, facilitating extended depth of field shear wave sources. Similarly, ARFI imaging can utilize these same radiation force excitations to achieve tight pushing beams with a large depth of field. Finite element method simulations and experimental data are presented demonstrating that single- and rapid multi-focal zone ARFI have comparable image quality (less than 20% loss in contrast), but the multi-focal zone approach has an extended axial region of excitation. Additionally, as compared to single push sequences, the rapid multi-focal zone acquisitions improve the contrast to noise ratio by up to 40% in an example 4 mm diameter lesion. PMID:25643078
Adaptive sound speed correction for abdominal ultrasonography: preliminary results
NASA Astrophysics Data System (ADS)
Jin, Sungmin; Kang, Jeeun; Song, Tai-Kyung; Yoo, Yangmo
2013-03-01
Ultrasonography has played a critical role in assessing abdominal disorders due to its noninvasive, real-time, low-cost, and deep-penetrating capabilities. However, for imaging obese patients with a thick fat layer, it is challenging to achieve appropriate image quality with a conventional beamforming (CON) method due to phase aberration caused by the difference between sound speeds (e.g., 1580 and 1450 m/s for liver and fat, respectively). For this, various sound speed correction (SSC) methods that estimate the accumulated sound speed for a region-of-interest (ROI) have been previously proposed. However, with the SSC methods, the improvement in image quality was limited to a specific ROI depth. In this paper, we present the adaptive sound speed correction (ASSC) method, which can enhance the image quality at all depths by using sound speeds estimated from two different depths in the lower layer. Since these accumulated sound speeds contain the respective contributions of the layers, an optimal sound speed for each depth can be estimated by solving contribution equations. To evaluate the proposed method, a phantom study was conducted with pre-beamformed radio-frequency (RF) data acquired with a SonixTouch research package (Ultrasonix Corp., Canada) with linear and convex probes from a gel pad-stacked tissue mimicking phantom (Parker Lab. Inc., USA and Model 539, ATS, USA) whose sound speeds are 1610 and 1450 m/s, respectively. From the study, compared to the CON and SSC methods, the ASSC method showed improved spatial resolution and information entropy contrast (IEC) for convex and linear array transducers, respectively. These results indicate that the ASSC method can be applied for enhancing image quality when imaging obese patients in abdominal ultrasonography.
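The "contribution equations" idea above, recovering per-layer sound speeds from accumulated speeds estimated at two depths, can be sketched as a two-equation linear solve. The travel-time model and function signature here are illustrative assumptions, not taken from the paper: the one-way travel time to depth z through a superficial layer of thickness d is t(z) = d/c_top + (z - d)/c_bottom, and the accumulated speed is z/t(z).

```python
def layer_speeds(z1, c1, z2, c2, d):
    """Recover (c_top, c_bottom) from accumulated speeds c1 at depth z1
    and c2 at depth z2, with z2 > z1 >= d (superficial layer thickness).

    Subtracting the two travel-time equations eliminates the top layer:
        z2/c2 - z1/c1 = (z2 - z1)/c_bottom
    and back-substitution gives c_top.
    """
    c_bottom = (z2 - z1) / (z2 / c2 - z1 / c1)
    c_top = d / (z1 / c1 - (z1 - d) / c_bottom)
    return c_top, c_bottom
```

With the phantom values from the abstract (1450 m/s gel pad over 1610 m/s tissue-mimicking material), feeding synthetic accumulated speeds through this solve returns the layer speeds exactly.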
Latest advances in molecular imaging instrumentation.
Pichler, Bernd J; Wehrl, Hans F; Judenhofer, Martin S
2008-06-01
This review concentrates on the latest advances in molecular imaging technology, including PET, MRI, and optical imaging. In PET, significant improvements in tumor detection and image resolution have been achieved by introducing new scintillation materials, iterative image reconstruction, and correction methods. These advances enabled the first clinical scanners capable of time-of-flight detection and incorporating point-spread-function reconstruction to compensate for depth-of-interaction effects. In the field of MRI, the most important developments in recent years have mainly been MRI systems with higher field strengths and improved radiofrequency coil technology. Hyperpolarized imaging, functional MRI, and MR spectroscopy provide molecular information in vivo. A special focus of this review article is multimodality imaging and, in particular, the emerging field of combined PET/MRI.
Extended depth of field in an intrinsically wavefront-encoded biometric iris camera
NASA Astrophysics Data System (ADS)
Bergkoetter, Matthew D.; Bentley, Julie L.
2014-12-01
This work describes a design process which greatly increases the depth of field of a simple three-element lens system intended for biometric iris recognition. The system is optimized to produce a point spread function which is insensitive to defocus, so that recorded images may be deconvolved without knowledge of the exact object distance. This is essentially a variation on the technique of wavefront encoding, however the desired encoding effect is achieved by aberrations intrinsic to the lens system itself, without the need for a pupil phase mask.
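The deconvolution step that wavefront-encoded systems like the one above rely on can be illustrated with a standard Wiener filter applied with a known, defocus-invariant PSF. This is a generic sketch (scalar noise-to-signal ratio, circular boundary handling via the FFT), not the authors' reconstruction pipeline.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-3):
    """Restore an encoded image by Wiener deconvolution with a known PSF.

    Frequency-domain inverse filter regularized by a scalar
    noise-to-signal ratio `nsr`: F = conj(H) / (|H|^2 + nsr) * G.
    """
    # pad the PSF to image size and center it at the origin for the FFT
    P = np.zeros_like(image, dtype=float)
    P[:psf.shape[0], :psf.shape[1]] = psf
    P = np.roll(P, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(P)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

Because the encoded PSF is designed to be insensitive to defocus, a single such filter can restore objects at a range of distances, which is the point of the design process described in the abstract.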
NASA Astrophysics Data System (ADS)
Pires, Layla; Demidov, Valentin; Vitkin, I. Alex; Bagnato, Vanderlei; Kurachi, Cristina; Wilson, Brian C.
2016-08-01
Melanoma is the most aggressive type of skin cancer, with significant risk of fatality. Due to its pigmentation, light-based imaging and treatment techniques are limited to near the tumor surface, which is inadequate, for example, to evaluate the microvascular density that is associated with prognosis. White-light diffuse reflectance spectroscopy (DRS) and near-infrared optical coherence tomography (OCT) were used to evaluate the effect of a topically applied optical clearing agent (OCA) in melanoma in vivo and to image the microvascular network. DRS was performed using a contact fiber optic probe in the range from 450 to 650 nm. OCT imaging was performed using a swept-source system at 1310 nm. The OCT image data were processed using speckle variance and depth-encoded algorithms. Diffuse reflectance signals decreased with clearing, dropping by ~90% after 45 min. OCT was able to image the microvasculature in the pigmented melanoma tissue with good spatial resolution up to a depth of ~300 μm without the use of OCA; improved contrast resolution was achieved with optical clearing to a depth of ~750 μm in tumor. These findings are relevant to potential clinical applications in melanoma, such as assessing prognosis and treatment responses. Optical clearing may also facilitate the use of light-based treatments such as photodynamic therapy.
Cannata, Jonathan M; Ritter, Timothy A; Chen, Wo-Hsing; Silverman, Ronald H; Shung, K Kirk
2003-11-01
This paper discusses the design, fabrication, and testing of sensitive broadband lithium niobate (LiNbO3) single-element ultrasonic transducers in the 20-80 MHz frequency range. Transducers of varying dimensions were built for an f# range of 2.0-3.1. The desired focal depths were achieved by either casting an acoustic lens on the transducer face or press-focusing the piezoelectric into a spherical curvature. For designs that required electrical impedance matching, a low impedance transmission line coaxial cable was used. All transducers were tested in a pulse-echo arrangement, whereby the center frequency, bandwidth, insertion loss, and focal depth were measured. Several transducers were fabricated with center frequencies in the 20-80 MHz range with the measured -6 dB bandwidths and two-way insertion loss values ranging from 57 to 74% and 9.6 to 21.3 dB, respectively. Both transducer focusing techniques proved successful in producing highly sensitive, high-frequency, single-element, ultrasonic-imaging transducers. In vivo and in vitro ultrasonic backscatter microscope (UBM) images of human eyes were obtained with the 50 MHz transducers. The high sensitivity of these devices could possibly allow for an increase in depth of penetration, higher image signal-to-noise ratio (SNR), and improved image contrast at high frequencies when compared to previously reported results.
Salient object detection based on multi-scale contrast.
Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long
2018-05-01
Due to the development of deep learning networks, salient object detection based on deep networks used for feature extraction has made a great breakthrough compared to traditional methods. At present, salient object detection mainly relies on very deep convolutional networks to extract features. In deep learning networks, a dramatic increase in network depth may instead cause more training errors. In this paper, we use a residual network to increase network depth while mitigating the errors caused by the depth increase. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of superpixels, in order to reduce the complexity of images and to improve the accuracy of salient target detection. We refine the features at the pixel level with a multi-scale feature correction method to avoid feature errors when the image is simplified at the above-mentioned region level. The final fully connected layer not only integrates multi-scale and multi-level features but also works as a classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on the original deep learning networks.
NASA Astrophysics Data System (ADS)
Bouchal, Petr; Bouchal, Zdeněk
2017-10-01
In the past decade, probe-based super-resolution using temporally resolved localization of emitters became a groundbreaking imaging strategy in fluorescence microscopy. Here we demonstrate a non-diffractive vortex microscope (NVM), enabling three-dimensional super-resolution fluorescence imaging and localization and tracking of metal and dielectric nanoparticles. The NVM benefits from vortex non-diffractive beams (NBs) creating a double-helix point spread function that rotates under defocusing while maintaining its size and shape unchanged. Using intrinsic properties of the NBs, the dark-field localization of weakly scattering objects is achieved in a large axial range exceeding the depth of field of the microscope objective up to 23 times. The NVM was developed using an upright microscope Nikon Eclipse E600 operating with a spiral lithographic mask optimized using Fisher information and built into an add-on imaging module or microscope objective. In evaluating the axial localization accuracy, root mean square errors below 18 nm and 280 nm were verified over depth ranges of 3.5 μm and 13.6 μm, respectively. Subwavelength gold and polystyrene beads were localized with isotropic precision below 10 nm in the axial range of 3.5 μm, with the axial precision reduced to 30 nm in the extended range of 13.6 μm. In the fluorescence imaging, localization with isotropic precision below 15 nm was demonstrated in the range of 2.5 μm, whereas in the range of 8.3 μm, a precision of 15 nm laterally and 30-50 nm axially was achieved. The tracking of nanoparticles undergoing Brownian motion was demonstrated in a volume of 14 × 10 × 16 μm3. Applicability of the NVM was tested by fluorescence imaging of LW13K2 cells and localization of cellular proteins.
Volumetric, dashboard-mounted augmented display
NASA Astrophysics Data System (ADS)
Kessler, David; Grabowski, Christopher
2017-11-01
The optical design of a compact volumetric display for drivers is presented. The system displays a true volume image with realistic physical depth cues, such as focal accommodation, parallax and convergence. A large eyebox is achieved with a pupil expander. The windshield is used as the augmented reality combiner. A freeform windshield corrector is placed at the dashboard.
The Fulfillment of Others' Needs Elevates Children's Body Posture
ERIC Educational Resources Information Center
Hepach, Robert; Vaish, Amrisha; Tomasello, Michael
2017-01-01
Much is known about young children's helping behavior, but little is known about the underlying motivations and emotions involved. In 2 studies we found that 2-year-old children showed positive emotions of similar magnitude--as measured by changes in their postural elevation using depth sensor imaging technology--after they achieved a goal for…
Wide-field three-photon excitation in biological samples
Rowlands, Christopher J; Park, Demian; Bruns, Oliver T; Piatkevich, Kiryl D; Fukumura, Dai; Jain, Rakesh K; Bawendi, Moungi G; Boyden, Edward S; So, Peter TC
2017-01-01
Three-photon wide-field depth-resolved excitation is used to overcome some of the limitations in conventional point-scanning two- and three-photon microscopy. Excitation of chromophores as diverse as channelrhodopsins and quantum dots is shown, and a penetration depth of more than 700 μm into fixed scattering brain tissue is achieved, approximately twice as deep as that achieved using two-photon wide-field excitation. Compatibility with live animal experiments is confirmed by imaging the cerebral vasculature of an anesthetized mouse; a complete focal stack was obtained without any evidence of photodamage. As an additional validation of the utility of wide-field three-photon excitation, functional excitation is demonstrated by performing three-photon optogenetic stimulation of cultured mouse hippocampal neurons expressing a channelrhodopsin; action potentials could reliably be excited without causing photodamage. PMID:29152380
Lidar measurements of boundary layers, aerosol scattering and clouds during project FIFE
NASA Technical Reports Server (NTRS)
Eloranta, Edwin W. (Principal Investigator)
1995-01-01
A detailed account of progress achieved under this grant funding is contained in five journal papers. The titles of these papers are: The calculation of area-averaged vertical profiles of the horizontal wind velocity using volume imaging lidar data; Volume imaging lidar observation of the convective structure surrounding the flight path of an instrumented aircraft; Convective boundary layer mean depths, cloud base altitudes, cloud top altitudes, cloud coverages, and cloud shadows obtained from Volume Imaging Lidar data; An accuracy analysis of the wind profiles calculated from Volume Imaging Lidar data; and Calculation of divergence and vertical motion from volume-imaging lidar data. Copies of these papers form the body of this report.
A digital gigapixel large-format tile-scan camera.
Ben-Ezra, M
2011-01-01
Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
Optical mesoscopy without the scatter: broadband multispectral optoacoustic mesoscopy
Chekkoury, Andrei; Gateau, Jérôme; Driessen, Wouter; Symvoulidis, Panagiotis; Bézière, Nicolas; Feuchtinger, Annette; Walch, Axel; Ntziachristos, Vasilis
2015-01-01
Optical mesoscopy extends the capabilities of biological visualization beyond the limited penetration depth achieved by microscopy. However, imaging of opaque organisms or tissues larger than a few hundred micrometers requires invasive tissue sectioning or chemical treatment of the specimen to clear photon scattering, a process that is in any case depth-limited. We developed a previously unreported broadband optoacoustic mesoscopy as a tomographic modality to enable imaging of optical contrast through several millimeters of tissue, without the need for chemical treatment of tissues. We show that the unique combination of three-dimensional projections over a broad 500 kHz–40 MHz frequency range combined with multi-wavelength illumination is necessary to render broadband multispectral optoacoustic mesoscopy (2B-MSOM) superior to previous optical or optoacoustic mesoscopy implementations. PMID:26417486
Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis
Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin
2016-01-01
This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687
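The spectral estimation of breathing frequency from ROI time series described above can be sketched as an FFT peak search over a plausible breathing band. The band limits (0.1-0.7 Hz) and the ROI-averaged depth signal are illustrative assumptions, not the paper's exact processing chain.

```python
import numpy as np

def breathing_rate_bpm(roi_depth_series, fs):
    """Estimate breathing rate (breaths per minute) from the mean depth of
    a thorax ROI sampled over time at rate `fs` Hz.

    Detrend the signal, take the magnitude spectrum, and report the
    dominant frequency within an assumed breathing band of 0.1-0.7 Hz.
    """
    x = np.asarray(roi_depth_series, dtype=float)
    x = x - x.mean()                       # remove DC offset
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7) # plausible breathing frequencies
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

A heart-rate estimate from the facial image data would follow the same pattern with a band shifted to roughly 0.8-3 Hz.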
Effects of pupil filter patterns in line-scan focal modulation microscopy
NASA Astrophysics Data System (ADS)
Shen, Shuhao; Pant, Shilpa; Chen, Rui; Chen, Nanguang
2018-03-01
Line-scan focal modulation microscopy (LSFMM) is an emerging imaging technique that affords high imaging speed and good optical sectioning at the same time. We present a systematic investigation into the optimal design of the pupil filter for LSFMM, aiming to achieve the best performance in terms of spatial resolution, optical sectioning, and modulation depth. Scalar diffraction theory was used to compute the light propagation and distribution in the system and to generate theoretical predictions of system performance, which were then compared with experimental results.
O'Connor, Michael K; Morrow, Melissa M; Tran, Thuy; Hruska, Carrie B; Conners, Amy L; Hunt, Katie N
2017-02-01
The purpose of this study was to perform a pilot evaluation of an integrated molecular breast imaging/ultrasound (MBI/US) system designed to enable, in real time, the registration of US to MBI and the diagnostic evaluation of breast lesions detected on MBI. The MBI/US system was constructed by modifying an existing dual-head cadmium zinc telluride (CZT)-based MBI gamma camera. The upper MBI detector head was replaced with a mesh panel, which allowed an ultrasound probe to access the breast. An optical tracking system was used to monitor the location of the ultrasound transducer, referenced to the MBI detector. The lesion depth at which ultrasound was targeted was estimated from analysis of previously acquired dual-head MBI datasets. A software tool was developed to project the US field of view onto the current MBI image. Correlation of lesion location between the two modalities with real-time MBI/US scanning was confirmed in a breast phantom model and assessed in 12 patients with a breast lesion detected on MBI. Combined MBI/US scanning allowed for registration of lesions detected on US and MBI, as validated in phantom experiments. In patient studies, successful registration was achieved in 8 of 12 (67%) patients, with complete registration achieved in seven patients and partial registration in one. In 4 of 12 (33%) patients, lesion registration was not achieved, attributed in part to uncertainty in lesion depth estimates from MBI. The MBI/US system enabled successful registration of US to MBI in over half of the patients studied in this pilot evaluation. Future studies are needed to determine if real-time, registered US imaging of MBI-detected lesions may obviate the need to proceed to more expensive procedures such as contrast-enhanced breast MRI for diagnostic workup or biopsy of MBI findings. © 2016 American Association of Physicists in Medicine.
Multicontrast photoacoustic in vivo imaging using near-infrared fluorescent proteins
NASA Astrophysics Data System (ADS)
Krumholz, Arie; Shcherbakova, Daria M.; Xia, Jun; Wang, Lihong V.; Verkhusha, Vladislav V.
2014-02-01
Non-invasive imaging of biological processes in vivo is invaluable in advancing biology. Photoacoustic tomography is a scalable imaging technique that provides higher resolution at greater depths in tissue than achievable by purely optical methods. Here we report the application of two spectrally distinct near-infrared fluorescent proteins, iRFP670 and iRFP720, engineered from bacterial phytochromes, as photoacoustic contrast agents. iRFPs provide tissue-specific contrast without the need for delivery of any additional substances. Compared to conventional GFP-like red-shifted fluorescent proteins, iRFP670 and iRFP720 demonstrate stronger photoacoustic signals at longer wavelengths, and can be spectrally resolved from each other and hemoglobin. We simultaneously visualized two differently labeled tumors, one with iRFP670 and the other with iRFP720, as well as blood vessels. We acquired images of a mouse as 2D sections of a whole animal, and as localized 3D volumetric images with high contrast and sub-millimeter resolution at depths up to 8 mm. Our results suggest iRFPs are genetically-encoded probes of choice for simultaneous photoacoustic imaging of several tissues or processes in vivo.
Zhao, Jianhu; Zhang, Hongmei; Wang, Shiqi
2017-01-01
Multibeam echosounder systems (MBES) can record backscatter strengths of gas plumes in water column (WC) images, which may indicate the possible occurrence of gas at certain depths. Manual or automatic detection is generally adopted to find gas plumes, but it frequently results in low efficiency and high false detection rates because the WC images are polluted by noise. To improve the efficiency and reliability of detection, a comprehensive detection method is proposed in this paper. In the proposed method, the characteristics of the WC background noise are first analyzed. Then, mean and standard deviation threshold segmentations are applied to denoise the time-angle and depth-angle images, respectively; an intersection operation is performed on the two segmented images to further suppress noise in the WC data; and the gas plumes are detected from the intersection image by a morphological constraint. The proposed method was tested in shallow-water and deep-water experiments. In these experiments, detection was performed automatically and higher correct detection rates than those of traditional methods were achieved. The performance of the proposed method is analyzed and discussed. PMID:29186014
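The denoising step above can be sketched as a mean-plus-k-standard-deviations threshold on each water-column view, followed by an intersection of the two masks. A minimal NumPy illustration on synthetic data (the array sizes, the +8 "plume" amplitude, and k = 2 are invented for the demo):

```python
import numpy as np

def threshold_mask(img, k=2.0):
    """Keep pixels brighter than mean + k * std of the whole image."""
    return img > img.mean() + k * img.std()

rng = np.random.default_rng(1)
time_angle = rng.normal(0.0, 1.0, (64, 64))    # time-angle view, noise only
depth_angle = rng.normal(0.0, 1.0, (64, 64))   # depth-angle view, noise only
time_angle[20:30, 30:34] += 8.0                # same plume echo in both views
depth_angle[20:30, 30:34] += 8.0

# Intersection keeps only returns present in BOTH segmented views,
# suppressing noise spikes that appear in just one of them:
plume = threshold_mask(time_angle) & threshold_mask(depth_angle)
print(bool(plume[25, 31]), int(plume.sum()) >= 40)   # → True True
```

A real pipeline would follow this with the paper's morphological constraint (e.g. discarding connected components that are too small or the wrong shape).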
Fu, Yong; Ji, Zhong; Ding, Wenzheng; Ye, Fanghao; Lou, Cunguang
2014-11-01
Previous studies demonstrated that thermoacoustic imaging (TAI) has great potential for breast tumor detection. However, large field of view (FOV) imaging remains a long-standing challenge for three-dimensional (3D) breast tumor localization. Here, the authors propose a practical TAI system for noninvasive 3D localization of breast tumors with a large FOV through the use of an ultrashort microwave pulse (USMP). A USMP generator was employed for TAI. The energy density required for quality imaging and the corresponding microwave-to-acoustic conversion efficiency were compared with those of conventional TAI. The microwave energy distribution, imaging depth, resolution, and 3D imaging capabilities were then investigated. Finally, a breast phantom embedded with a laboratory-grown tumor was imaged to evaluate the FOV performance of the USMP TAI system under a simulated clinical situation. A radiation energy density equivalent to just 1.6%-2.2% of that for conventional submicrosecond microwave TAI was sufficient to obtain a thermoacoustic signal with the required signal-to-noise ratio. This result clearly demonstrated a significantly higher microwave-to-acoustic conversion efficiency of USMP TAI compared to that of conventional TAI. The USMP TAI system achieved 61 mm imaging depth and a 12 × 12 cm(2) microwave radiation area. The volumetric image of a copper target measured at a depth of 4-6 cm matched the actual shape well, and the resolution reached 230 μm. The TAI of the breast phantom was precisely localized to an accuracy of 0.1 cm over an 8 × 8 cm(2) FOV. The experimental results demonstrated that the USMP TAI system offered significant potential for noninvasive clinical detection and 3D localization of deep breast tumors, with a low microwave radiation dose and high spatial resolution over a sufficiently large FOV.
NASA Astrophysics Data System (ADS)
Chen, Liang-Chia; Chen, Yi-Shiuan; Chang, Yi-Wei; Lin, Shyh-Tsong; Yeh, Sheng Lih
2013-01-01
In this research, a new nano-scale measurement methodology based on spectrally resolved chromatic confocal interferometry (SRCCI) was successfully developed by integrating chromatic confocal sectioning with spectrally resolved white light interferometry (SRWLI) for microscopic three-dimensional surface profilometry. The proposed chromatic confocal method (CCM), which uses broadband white light in combination with a specially designed chromatic dispersion objective, can simultaneously acquire images over a large range of object depths, performing 3-D surface reconstruction from a single image shot without vertical scanning and thereby achieving a measurement depth range of up to hundreds of micrometers. A Linnik-type interferometric configuration based on SRWLI was developed and integrated with the CCM to simultaneously achieve nanoscale axial resolution at the detection point. The white-light interferograms acquired at the exit plane of the spectrometer possess a continuous variation of wavelength along the chromaticity axis, in which the light intensity reaches its peak where the optical path difference between the two optical arms equals zero. To examine the measurement accuracy of the developed system, a pre-calibrated step-height target with a total step height of 10.10 μm was measured. The experimental result shows that the maximum measurement error was less than 0.3% of the overall measured height.
NASA Astrophysics Data System (ADS)
Wang, Jinhai; Liu, Dongyuan; Sun, Jinggong; Zhang, Yanjun; Sun, Qiuming; Ma, Jun; Zheng, Yu; Wang, Huiquan
2016-10-01
Near-infrared (NIR) brain imaging has been one of the most promising techniques for brain research in recent years. As a significant supplement to clinical imaging techniques such as CT and MRI, the NIR technique can achieve fast, non-invasive, and low-cost imaging of the brain, and it is widely used for brain functional imaging and hematoma detection. Owing to the reduced optical attenuation in the NIR window, imaging depths of up to several centimeters can be achieved. The structure of the human brain is particularly complex from the perspective of optical detection: the measurement light must pass through the skin, skull, cerebrospinal fluid (CSF), grey matter, and white matter, and then traverse these layers in reverse order before being collected by the detector. The more photons from the Depth of Interest (DOI) in the brain the detector captures, the better the detection accuracy and stability that can be obtained. In this study, the Equivalent Signal to Noise Ratio (ESNR), defined as the proportion of photons from the DOI among the total photons reaching the detector, was used to evaluate the best source-detector (SD) separation. A Monte-Carlo (MC) simulation of a multi-layer brain model was used to analyze the distribution of the ESNR along the radial direction for different DOIs and for several basic optical and structural parameters of the brain. A map between the best SD separation, at which the ESNR is highest, and the brain parameters was established for choosing the best detection point in NIR brain imaging applications. The results showed that the ESNR is very sensitive to the SD separation, so choosing the best SD separation based on the ESNR is significant for NIR brain imaging. This work provides an important reference and new thinking for near-infrared brain imaging.
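The ESNR criterion itself reduces to a ratio of photon counts per source-detector separation. A toy sketch with invented counts (a real evaluation would take both tallies from the Monte-Carlo simulation):

```python
import numpy as np

separations = np.array([10, 20, 30, 40])      # SD separations (mm), illustrative
photons_doi = np.array([5, 40, 55, 20])       # detected photons that visited the DOI
photons_all = np.array([900, 500, 300, 90])   # all detected photons

# ESNR = fraction of detected photons that carried information from the DOI;
# the best SD separation is the one that maximizes it:
esnr = photons_doi / photons_all
best_sd = separations[int(np.argmax(esnr))]
print(best_sd)   # → 40
```

Note the trade-off the numbers illustrate: larger separations detect fewer photons overall but a higher fraction of depth-sensitive ones.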
Analysis of flood inundation in ungauged basins based on multi-source remote sensing data.
Gao, Wei; Shen, Qiu; Zhou, Yuehua; Li, Xin
2018-02-09
Floods are among the most expensive natural hazards experienced in many places of the world and can result in heavy losses of life and economic damage. The objective of this study is to analyze flood inundation in ungauged basins by performing near-real-time detection of flood extent and depth based on multi-source remote sensing data. Through spatial distribution analysis of flood extent and depth in a time series, the inundation conditions and the characteristics of the flood disaster can be revealed. The results show that multi-source remote sensing data can make up for the lack of hydrological data in ungauged basins, which helps reconstruct the hydrological sequence; that the combination of MODIS (moderate-resolution imaging spectroradiometer) surface reflectance products and the DFO (Dartmouth Flood Observatory) flood database can achieve macro-dynamic monitoring of flood inundation in ungauged basins, after which differencing of high-resolution optical and microwave images taken before and after floods can be used to calculate the flood extent and reflect spatial changes in inundation; and that the monitoring algorithm for flood depth, combining RS and GIS, is simple and can quickly calculate depth from a known flood extent obtained from remote sensing images in ungauged basins. These results can provide effective help for the disaster relief work performed by government departments.
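The RS-and-GIS depth step amounts to subtracting the ground elevation from the water-surface elevation inside the remotely sensed flood extent. A minimal sketch with a hypothetical 2 × 2 DEM (all elevations invented):

```python
import numpy as np

def flood_depth(dem, extent_mask, water_level):
    """Depth = water-surface elevation minus ground elevation inside the
    remotely sensed flood extent; cells outside the extent stay dry."""
    depth = np.where(extent_mask, water_level - dem, 0.0)
    return np.clip(depth, 0.0, None)          # no negative depths

dem = np.array([[101.0, 100.5],
                [100.0,  99.0]])              # ground elevation (m)
extent = np.array([[False, True],
                   [True,  True]])            # flooded cells from imagery
print(flood_depth(dem, extent, water_level=100.8))
```

In practice the water level would vary spatially (e.g. interpolated along the flood boundary) rather than being a single constant.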
A Guide to Structured Illumination TIRF Microscopy at High Speed with Multiple Colors
Young, Laurence J.; Ströhl, Florian; Kaminski, Clemens F.
2016-01-01
Optical super-resolution imaging with structured illumination microscopy (SIM) is a key technology for the visualization of processes at the molecular level in the chemical and biomedical sciences. Although commercial SIM systems are available, systems that are custom designed in the laboratory can outperform commercial systems, the latter typically designed for ease of use and general purpose applications, both in terms of imaging fidelity and speed. This article presents an in-depth guide to building a SIM system that uses total internal reflection (TIR) illumination and is capable of imaging at up to 10 Hz in three colors at a resolution reaching 100 nm. Due to the combination of SIM and TIRF, the system provides better image contrast than rival technologies. To achieve these specifications, several optical elements are used to enable automated control over the polarization state and spatial structure of the illumination light for all available excitation wavelengths. Full details on hardware implementation and control are given to achieve synchronization between excitation light pattern generation, wavelength, polarization state, and camera control with an emphasis on achieving maximum acquisition frame rate. A step-by-step protocol for system alignment and calibration is presented and the achievable resolution improvement is validated on ideal test samples. The capability for video-rate super-resolution imaging is demonstrated with living cells. PMID:27285848
Color difference threshold of chromostereopsis induced by flat display emission.
Ozolinsh, Maris; Muizniece, Kristine
2015-01-01
The study of chromostereopsis has gained attention against the backdrop of the everyday use of computer displays. In this context, we analyze the illusory depth sense using planar color images presented on a computer screen. We psychometrically determine the color difference threshold required to induce an illusory sense of depth, using a constant-stimuli paradigm. Isoluminant stimuli, aligned along the blue-red line of the computer display's CIE xyY color space, are presented on the screen. Stereo disparity is generated by increasing the color difference between the central and surrounding areas of the stimuli, both of which consist of random dots on a black background. The observed alteration of the illusory depth sense, and thus of the stereo disparity, is validated using the "center-of-gravity" model. The induced illusory depth effect undergoes color reversal upon varying the binocular lateral eye-pupil covering conditions (lateral or medial). Analysis of the retinal-image point spread function for the display's red and blue pixel radiation confirms both the change in chromostereoptic retinal disparity achieved by increasing the color difference and the chromostereopsis color reversal caused by varying the eye-pupil covering conditions.
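A constant-stimuli threshold of this kind is commonly read off the psychometric function at its 50% point. A sketch with invented proportions of "depth seen" responses (a real analysis would fit a sigmoid rather than interpolate linearly):

```python
import numpy as np

# Hypothetical constant-stimuli data: proportion of trials on which an
# illusory depth difference was reported, per blue-red color difference.
color_diff = np.array([0.00, 0.02, 0.04, 0.06, 0.08])   # CIE xyY units
p_depth = np.array([0.05, 0.20, 0.50, 0.80, 0.95])

# Threshold = color difference at the 50% point of the psychometric
# function, here by linear interpolation (p_depth must be increasing):
threshold = np.interp(0.5, p_depth, color_diff)
print(threshold)   # → 0.04
```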
Expansion-based passive ranging
NASA Technical Reports Server (NTRS)
Barniv, Yair
1993-01-01
A new technique of passive ranging which is based on utilizing the image-plane expansion experienced by every object as its distance from the sensor decreases is described. This technique belongs in the feature/object-based family. The motion and shape of a small window, assumed to be fully contained inside the boundaries of some object, is approximated by an affine transformation. The parameters of the transformation matrix are derived by initially comparing successive images, and progressively increasing the image time separation so as to achieve much larger triangulation baseline than currently possible. Depth is directly derived from the expansion part of the transformation. To a first approximation, image-plane expansion is independent of image-plane location with respect to the focus of expansion (FOE) and of platform maneuvers. Thus, an expansion-based method has the potential of providing a reliable range in the difficult image area around the FOE. In areas far from the FOE the shift parameters of the affine transformation can provide more accurate depth information than the expansion alone, and can thus be used similarly to the way they were used in conjunction with the Inertial Navigation Unit (INU) and Kalman filtering. However, the performance of a shift-based algorithm, when the shifts are derived from the affine transformation, would be much improved compared to current algorithms because the shifts - as well as the other parameters - can be obtained between widely separated images. Thus, the main advantage of this new approach is that, allowing the tracked window to expand and rotate, in addition to moving laterally, enables one to correlate images over a very long time span which, in turn, translates into a large spatial baseline - resulting in a proportionately higher depth accuracy.
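Since image size scales inversely with range, the expansion factor s = w2/w1 measured between two frames taken dt apart while closing at speed v yields range directly: Z1 = s·v·dt/(s − 1). A minimal sketch (the speed, time separation, and expansion factor are illustrative):

```python
def depth_from_expansion(scale, closing_speed, dt):
    """Range at the earlier frame from the measured window expansion.

    Image size ~ 1/range, so for expansion s = w2/w1 over time dt while
    closing at speed v:  Z1 = s * v * dt / (s - 1).
    """
    return scale * closing_speed * dt / (scale - 1.0)

# Tracked window grew by 10% over a 2 s separation while closing at 50 m/s:
print(depth_from_expansion(1.10, 50.0, 2.0))   # ≈ 1100 m
```

The formula makes the paper's point concrete: a longer time separation dt produces a larger expansion s for the same range, which is why correlating widely separated images improves depth accuracy.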
Motionless active depth from defocus system using smart optics for camera autofocus applications
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2016-04-01
This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene-illuminating coherent conditioned optical radiation pattern that maintains its sharpness over multiple axial distances, allowing an increased DFD working distance range. The imager module of the system, responsible for the actual DFD operation, deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory that compares the effectiveness of the coherent conditioned radiation module with a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings of smartphones and tablets. Applications for the proposed system include autofocus in modern digital cameras.
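DFD rests on the fact that the geometric blur-circle diameter is zero in focus and grows monotonically with defocus on either side of the focal plane, so measured blur encodes depth. A thin-lens sketch (the focal length, aperture, and focus distance are invented for the demo, not taken from the paper):

```python
def blur_diameter(u, f, D, s):
    """Geometric blur-circle diameter on a sensor at distance s behind a
    thin lens (focal length f, aperture diameter D) for an object at range u."""
    return D * s * abs(1.0 / f - 1.0 / u - 1.0 / s)

# Hypothetical lens: f = 50 mm, D = 20 mm, focused at u0 = 2 m,
# which places the sensor at s = 1 / (1/f - 1/u0).
f, D, u0 = 0.05, 0.02, 2.0
s = 1.0 / (1.0 / f - 1.0 / u0)

# Zero blur in focus; nonzero blur away from focus is the depth cue:
print(round(blur_diameter(2.0, f, D, s), 9), blur_diameter(4.0, f, D, s) > 0)
```

Inverting this relation from one or more images of the projected pattern gives the range estimate; the ECVFL in the paper changes s electronically instead of moving the lens.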
NASA Astrophysics Data System (ADS)
Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P.
2017-02-01
Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. By integrating a custom liquid lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus and enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that achieved a 12-fold reduction in volume and a 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging - noninvasive real-time imaging with histologic resolution - GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 × 1 × 0.6 mm(3), acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.
NASA Astrophysics Data System (ADS)
Wu, L.; San Segundo Bello, D.; Coppejans, P.; Craninckx, J.; Wambacq, P.; Borremans, J.
2017-02-01
This paper presents a 20 Mfps, 32 × 84 pixel CMOS burst-mode imager featuring high frame depth with a passive in-pixel amplifier. Compared to CCD alternatives, CMOS burst-mode imagers are attractive for their low power consumption and their integration of circuitry such as ADCs. Owing to storage capacitor size and noise limitations, CMOS burst-mode imagers usually suffer from a lower frame depth than CCD implementations. In order to capture fast transitions over a longer time span, an in-pixel CDS technique has been adopted to halve the memory cells required for each frame. Moreover, integrated with the in-pixel CDS, an in-pixel NMOS-only passive amplifier relaxes the kTC noise requirements of the memory bank, allowing the use of smaller capacitors. Specifically, a dense 108-cell MOS memory bank (10 fF/cell) has been implemented inside a 30 μm pitch pixel, with an area of 25 × 30 μm2 occupied by the memory bank. Applying in-pixel CDS and amplification improves frame depth per pixel area by about 4x. With the amplifier's gain of 3.3, an FD input-referred RMS noise of 1 mV is achieved at 20 Mfps operation. The amplification is done without burning DC current; including the pixel source-follower biasing, the full pixel consumes 10 μA from a 3.3 V supply at full speed. The chip has been fabricated in imec's 130 nm CMOS CIS technology.
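The in-pixel CDS idea can be sketched numerically: storing only the difference between the signal and reset samples halves the stored cells per frame and cancels the common reset (kTC) offset. A toy NumPy illustration (signal levels and noise magnitude are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames = 8
signal = np.linspace(0.1, 0.8, n_frames)              # true per-frame signal (V)
reset = 0.3 + 0.02 * rng.standard_normal(n_frames)    # reset level with kTC noise

# Storing signal and reset samples separately needs 2N memory cells per
# pixel; in-pixel CDS stores only their difference (N cells), and the
# frame-by-frame reset offset cancels exactly:
cds = (reset + signal) - reset
print(np.allclose(cds, signal))   # → True
```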
In-plane ultrasonic needle tracking using a fiber-optic hydrophone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Wenfeng, E-mail: wenfeng.xia@ucl.ac.uk; Desjardins, Adrien E.; Mari, Jean Martial
Purpose: Accurate and efficient guidance of needles to procedural targets is critically important during percutaneous interventional procedures. Ultrasound imaging is widely used for real-time image guidance in a variety of clinical contexts, but with this modality, uncertainties about the location of the needle tip within the image plane lead to significant complications. Whilst several methods have been proposed to improve the visibility of the needle, achieving accuracy and compatibility with current clinical practice is an ongoing challenge. In this paper, the authors present a method for directly visualizing the needle tip using an integrated fiber-optic ultrasound receiver in conjunction with the imaging probe used to acquire B-mode ultrasound images. Methods: Needle visualization and ultrasound imaging were performed with a clinical ultrasound imaging system. A miniature fiber-optic ultrasound hydrophone was integrated into a 20 gauge injection needle tip to receive transmissions from individual transducer elements of the ultrasound imaging probe. The received signals were reconstructed to create an image of the needle tip. Ultrasound B-mode imaging was interleaved with needle tip imaging. A first set of measurements was acquired in water and tissue ex vivo with a wide range of insertion angles (15°–68°) to study the accuracy and sensitivity of the tracking method. A second set was acquired in an in vivo swine model, with needle insertions to the brachial plexus. A third set was acquired in an in vivo ovine model for fetal interventions, with insertions to different locations within the uterine cavity. Two linear ultrasound imaging probes were used: a 14–5 MHz probe for the first and second sets, and a 9–4 MHz probe for the third. Results: During insertions in tissue ex vivo and in vivo, the imaged needle tip had submillimeter axial and lateral dimensions. The signal-to-noise ratio (SNR) of the needle tip was found to depend on the insertion angle. With the needle tip in water, the SNR of the needle tip varied with insertion angle, attaining values of 284 at 27° and 501 at 68°. In swine tissue ex vivo, the SNR decreased from 80 at 15° to 16 at 61°. In swine tissue in vivo, the SNR varied with depth, from 200 at 17.5 mm to 48 at 26 mm, with a constant insertion angle of 40°. In ovine tissue in vivo, within the uterine cavity, the SNR varied from 46.4 at 25 mm depth to 18.4 at 32 mm depth, with insertion angles in the range of 26°–65°. Conclusions: A fiber-optic ultrasound receiver integrated into the needle cannula in combination with single-element transmissions from the imaging probe allows for direct visualization of the needle tip within the ultrasound imaging plane. Visualization of the needle tip was achieved at depths and insertion angles that are encountered during nerve blocks and fetal interventions. The method presented in this paper has strong potential to improve the safety and efficiency of ultrasound-guided needle insertions.
X-ray mask and method for providing same
Morales, Alfredo M [Pleasanton, CA; Skala, Dawn M [Fremont, CA
2004-09-28
The present invention describes a method for fabricating an x-ray mask tool which can achieve pattern features having lateral dimensions of less than 1 micron. The process uses a thin photoresist and a standard lithographic mask to transfer a trace image pattern onto the surface of a silicon wafer by exposing and developing the resist. The exposed portion of the silicon substrate is then anisotropically etched to provide an etched image of the trace image pattern consisting of a series of channels in the silicon having a high depth-to-width aspect ratio. These channels are then filled by depositing a metal such as gold to provide an inverse image of the trace image, thereby providing a robust x-ray mask tool.
End-to-end deep neural network for optical inversion in quantitative photoacoustic imaging.
Cai, Chuangjian; Deng, Kexin; Ma, Cheng; Luo, Jianwen
2018-06-15
An end-to-end deep neural network, ResU-net, is developed for quantitative photoacoustic imaging. A residual learning framework is used to facilitate optimization and to gain better accuracy from considerably increased network depth. The contracting and expanding paths enable ResU-net to extract comprehensive context information from multispectral initial pressure images and, subsequently, to infer a quantitative image of chromophore concentration or oxygen saturation (sO 2 ). According to our numerical experiments, the estimations of sO 2 and indocyanine green concentration are accurate and robust against variations in both optical property and object geometry. An extremely short reconstruction time of 22 ms is achieved.
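The residual learning framework mentioned above has each block output its input plus a learned correction, y = x + F(x), which is what keeps very deep networks like ResU-net trainable. A minimal sketch (the `transform` callable stands in for the block's learned convolutions, and is hypothetical):

```python
import numpy as np

def residual_block(x, transform):
    """Residual learning: output the input plus a learned correction,
    y = x + F(x); an all-zero F makes the block an identity mapping."""
    return x + transform(x)

x = np.array([1.0, 2.0, 3.0])
# Identity behaviour under a zero transform is why adding residual depth
# cannot make a fitted network worse, easing optimization:
print(residual_block(x, lambda v: np.zeros_like(v)))   # → [1. 2. 3.]
```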
In vivo correlation mapping microscopy
NASA Astrophysics Data System (ADS)
McGrath, James; Alexandrov, Sergey; Owens, Peter; Subhash, Hrebesh; Leahy, Martin
2016-04-01
To facilitate regular assessment of the microcirculation in vivo, noninvasive imaging techniques such as nailfold capillaroscopy are required in clinics. Recently, a correlation mapping technique has been applied to optical coherence tomography (OCT), which extends the capabilities of OCT to microcirculation morphology imaging. This technique, known as correlation mapping optical coherence tomography, has been shown to extract parameters, such as capillary density and vessel diameter, and key clinical markers associated with early changes in microvascular diseases. However, OCT has limited spatial resolution in both the transverse and depth directions. Here, we extend this correlation mapping technique to other microscopy modalities, including confocal microscopy, and take advantage of the higher spatial resolution offered by these modalities. The technique is achieved as a processing step on microscopy images and does not require any modification to the microscope hardware. Results are presented which show that this correlation mapping microscopy technique can extend the capabilities of conventional microscopy to enable mapping of vascular networks in vivo with high spatial resolution in both the transverse and depth directions.
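Correlation mapping flags flow by computing a windowed, zero-normalized correlation between consecutive frames: static tissue stays near 1 while moving scatterers decorrelate. A small NumPy sketch on synthetic frames (the frame size, window size, and decorrelated patch are invented for the demo):

```python
import numpy as np

def correlation_map(frame_a, frame_b, w=5):
    """Zero-normalized correlation of w x w windows between two frames;
    values near 1 indicate static tissue, low values indicate flow."""
    h, wd = frame_a.shape
    out = np.ones((h, wd))
    r = w // 2
    for i in range(r, h - r):
        for j in range(r, wd - r):
            a = frame_a[i - r:i + r + 1, j - r:j + r + 1].ravel()
            b = frame_b[i - r:i + r + 1, j - r:j + r + 1].ravel()
            a, b = a - a.mean(), b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            out[i, j] = (a * b).sum() / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(3)
frame1 = rng.random((16, 16))
frame2 = frame1.copy()
frame2[5:12, 5:12] = rng.random((7, 7))   # decorrelated "flow" patch
cmap = correlation_map(frame1, frame2)
print(cmap[2, 2] > 0.999, cmap[8, 8] < 0.9)
```

As the abstract notes, this is purely a post-processing step on the image stack, so it ports between OCT, confocal, and other microscopy modalities without hardware changes.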
Three-dimensional displays and stereo vision
Westheimer, Gerald
2011-01-01
Procedures for three-dimensional image reconstruction that are based on the optical and neural apparatus of human stereoscopic vision have to be designed to work in conjunction with it. The principal methods of implementing stereo displays are described. Properties of the human visual system are outlined as they relate to depth discrimination capabilities and achieving optimal performance in stereo tasks. The concept of depth rendition is introduced to define the change in the parameters of three-dimensional configurations for cases in which the physical disposition of the stereo camera with respect to the viewed object differs from that of the observer's eyes. PMID:21490023
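The geometry underlying depth rendition is plain triangulation, Z = f·b/d: changing the stereo camera baseline b relative to the observer's interocular distance rescales every disparity, and with it the rendered depth. A minimal sketch with hypothetical numbers:

```python
def depth_from_disparity(f, baseline, disparity):
    """Stereo triangulation: range Z = f * b / d for focal length f,
    camera separation b, and image-plane disparity d."""
    return f * baseline / disparity

# Hypothetical rig: f = 50 mm, baseline 0.13 m (twice a typical 65 mm
# interocular distance). The doubled baseline doubles every disparity,
# exaggerating rendered depth relative to natural viewing, which is the
# kind of "depth rendition" change the abstract describes.
print(depth_from_disparity(0.05, 0.13, 0.002))
```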
Exploiting chromatic aberration to spectrally encode depth in reflectance confocal microscopy
NASA Astrophysics Data System (ADS)
Carrasco-Zevallos, Oscar; Shelton, Ryan L.; Olsovsky, Cory; Saldua, Meagan; Applegate, Brian E.; Maitland, Kristen C.
2011-06-01
We present chromatic confocal microscopy as a technique to axially scan the sample by spectrally encoding depth information to avoid mechanical scanning of the lens or sample. We have achieved an 800 μm focal shift over a range of 680-1080 nm using a hyperchromat lens as the imaging lens. A more complex system that incorporates a water immersion objective to improve axial resolution was built and tested. We determined that increasing objective magnification decreases chromatic shift while improving axial resolution. Furthermore, collimating after the hyperchromat at longer wavelengths yields an increase in focal shift.
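The spectral encoding of depth amounts to a calibration curve from detected wavelength to focal depth. A minimal sketch, assuming (hypothetically) a linear chromatic focal shift of 800 μm across the 680-1080 nm range quoted above; real hyperchromat dispersion is not exactly linear, so a measured calibration would replace this:

```python
def wavelength_to_depth(wavelength_nm, lam_min=680.0, lam_max=1080.0, shift_um=800.0):
    """Map a detected wavelength to a focal depth, assuming a linear
    chromatic focal shift of `shift_um` across [lam_min, lam_max] nm.
    The linearity is an illustrative assumption, not from the paper."""
    frac = (wavelength_nm - lam_min) / (lam_max - lam_min)
    return frac * shift_um
```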
Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system
NASA Astrophysics Data System (ADS)
Zheng, Yipeng; Tan, Wenjiang; Si, Jinhai; Ren, YuHu; Xu, Shichao; Tong, Junyi; Hou, Xun
2016-09-01
We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.
Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Yipeng; Tan, Wenjiang, E-mail: tanwenjiang@mail.xjtu.edu.cn; Si, Jinhai
2016-09-07
We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.
Potential of coded excitation in medical ultrasound imaging.
Misaridis, T X; Gammelmark, K; Jørgensen, C H; Lindberg, N; Thomsen, A H; Pedersen, M H; Jensen, J A
2000-03-01
Improvement in signal-to-noise ratio (SNR) and/or penetration depth can be achieved in medical ultrasound by using long coded waveforms, in a similar manner as in radar or sonar systems. However, the time-bandwidth product (TB) improvement, and thereby the SNR improvement, is considerably lower in medical ultrasound, due to the lower available bandwidth. There is still room for about 20 dB of improvement in the SNR, which will yield a penetration depth of up to 20 cm at 5 MHz [M. O'Donnell, IEEE Trans. Ultrason. Ferroelectr. Freq. Contr., 39(3) (1992) 341]. The limited TB additionally yields unacceptably high range sidelobes. However, the frequency weighting from the ultrasonic transducer's bandwidth, although suboptimal, can be beneficial in sidelobe reduction. The purpose of this study is an experimental evaluation of the above considerations in a coded excitation ultrasound system. A coded excitation system based on a modified commercial scanner is presented. A predistorted FM signal is proposed in order to keep the resulting range sidelobes at acceptably low levels. The effect of the transducer is taken into account in the design of the compression filter. Intensity levels have been considered, and simulations of the expected improvement in SNR are also presented. Images of a wire phantom and clinical images have been taken with the coded system. The images show a significant improvement in penetration depth while preserving both axial resolution and contrast.
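The core of coded excitation is pulse compression: transmit a long linear FM (chirp) code, then matched-filter the echo so the energy of the long pulse collapses into a short peak. A minimal sketch (the sampling rate, sweep, and Hann taper standing in for the transducer's band-limiting are all illustrative, not the paper's predistorted design):

```python
import numpy as np

def chirp(fs, duration, f0, f1):
    """Linear FM (chirp) excitation sweeping f0 -> f1 over `duration` seconds."""
    t = np.arange(0, duration, 1.0 / fs)
    k = (f1 - f0) / duration
    return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

def compress(echo, code):
    """Matched-filter pulse compression. The Hann taper on the filter
    stands in for the transducer's frequency weighting, which (as the
    abstract notes) helps keep range sidelobes down."""
    filt = code[::-1] * np.hanning(len(code))
    return np.convolve(echo, filt, mode="same")

fs = 100e6                                 # 100 MHz sampling rate (illustrative)
code = chirp(fs, 20e-6, 3e6, 7e6)          # 20 us sweep centred on 5 MHz
echo = np.zeros(4096)
echo[1000:1000 + code.size] = code         # echo from a single point target
out = np.abs(compress(echo, code))
peak = int(out.argmax())                   # compressed peak at the target centre
```

The SNR gain comes from the time-bandwidth product of the code; the limited bandwidth of medical transducers is exactly what caps that product, as the abstract explains.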
Forty-five degree backscattering-mode nonlinear absorption imaging in turbid media.
Cui, Liping; Knox, Wayne H
2010-01-01
Two-color nonlinear absorption imaging has been previously demonstrated with endogenous contrast of hemoglobin and melanin in turbid media using transmission-mode detection and a dual-laser technology approach. For clinical applications, it would be generally preferable to use backscattering mode detection and a simpler single-laser technology. We demonstrate that imaging in backscattering mode in turbid media using nonlinear absorption can be obtained with as little as 1-mW average power per beam with a single laser source. Images have been achieved with a detector receiving backscattered light at a 45-deg angle relative to the incoming beams' direction. We obtain images of capillary tube phantoms with resolution as high as 20 microm and penetration depth up to 0.9 mm for a 300-microm tube at SNR approximately 1 in calibrated scattering solutions. Simulation results of the backscattering and detection process using nonimaging optics are demonstrated. A Monte Carlo-based method shows that the nonlinear signal drops exponentially as the depth increases, which agrees well with our experimental results. Simulation also shows that with our current detection method, only 2% of the signal is typically collected with a 5-mm-radius detector.
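The Monte Carlo finding quoted above, that the nonlinear signal drops exponentially with depth, can be checked in miniature by fitting log-signal against depth and recovering the decay rate. The decay constant below is illustrative, not the paper's value:

```python
import numpy as np

# Synthetic depth profile obeying exponential decay, as the abstract's
# Monte Carlo simulation reports. mu is a hypothetical decay rate (1/mm).
depth_mm = np.linspace(0.1, 0.9, 9)
mu = 5.0
signal = np.exp(-mu * depth_mm)

# A straight-line fit in log space recovers the decay constant.
slope, intercept = np.polyfit(depth_mm, np.log(signal), 1)
```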
4D Light Field Imaging System Using Programmable Aperture
NASA Technical Reports Server (NTRS)
Bae, Youngsam
2012-01-01
Complete depth information can be extracted from analyzing all angles of light rays emanated from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal panel [LC (liquid crystal) panel, similar to ones commonly used in the display industry] to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture. A digital micromirror device (DMD) will replace the liquid crystal. This will be qualified for harsh environments for the 4D light field imaging. This will enable an imager to record near-complete stereo information. The approach to building a proof-ofconcept is using existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified. The lens system will be arranged so that DMD can be integrated. The shape of aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of a LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any light transmission lost (which would be expected from the LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuation than a LC panel can. 
Robots need near-complete stereo images for autonomous navigation, manipulation, and depth estimation. The imaging system can provide visual feedback.
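The depth information captured by a moving aperture can be exploited by shift-and-sum refocusing: shift each sub-aperture view back by its aperture offset times a candidate disparity and average; objects at that disparity add coherently. This is a generic light-field sketch under integer-shift assumptions, not the DMD system's processing chain:

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Synthetic refocusing from images taken at different aperture
    positions: shift each view back by alpha times its aperture offset
    and average. Integer shifts only, for brevity."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, (dy, dx) in zip(views, offsets):
        sy = int(round(alpha * dy))
        sx = int(round(alpha * dx))
        acc += np.roll(np.roll(img, -sy, axis=0), -sx, axis=1)
    return acc / len(views)
```

Sweeping `alpha` and finding where each pixel is sharpest is one standard way such multi-viewpoint data yields a depth map.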
NASA Astrophysics Data System (ADS)
Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Mubarok, Syahrul; Deny, Agus; Widowati, Sri; Kurniadi, Rizal
2012-06-01
Migration is an important issue for seismic imaging of complex structures. In the past decade, depth imaging has become an important tool for producing accurate images, replacing time-domain imaging. The challenge for depth migration methods, however, lies in revealing the complex structure of the subsurface. Many depth migration methods exist, each with its own advantages and weaknesses. In this paper, we present our proposed method of pre-stack depth migration based on a time-domain inverse scattering wave equation. We expect this method to serve as a solution for imaging complex structures in Indonesia, especially in zones rich in thrust faults. In this research, we develop an advanced wave equation migration based on time-domain inverse scattering, which models wave propagation more naturally through scattered waves. This wave equation pre-stack depth migration uses a time-domain inverse scattering wave equation derived from the Helmholtz equation. To provide true-amplitude recovery, an inverse-divergence procedure and recovery of transmission losses are incorporated into the pre-stack migration. Benchmarks of the proposed inverse scattering pre-stack depth migration against other migration methods are also presented, i.e., wave equation pre-stack depth migration, wave equation depth migration, and pre-stack time migration. The inverse scattering pre-stack depth migration successfully imaged a fault-rich zone containing extremely steep dips, producing a superior-quality seismic image. The image quality of the inverse scattering migration is much better than that of the other migration methods.
Optical coherence tomography of lymphatic vessel endothelial hyaluronan receptors in vivo
NASA Astrophysics Data System (ADS)
Si, Peng; Sen, Debasish; Dutta, Rebecca; Yousefi, Siavash; Dalal, Roopa; Winetraub, Yonatan; Liba, Orly; de la Zerda, Adam
2018-02-01
Optical Coherence Tomography (OCT) imaging of living subjects offers millimeters depth of penetration into tissue while maintaining high spatial resolution. However, because most molecular biomarkers do not produce inherent OCT contrast signals, exogenous contrast agents must be employed to achieve molecular imaging. Here we demonstrate that microbeads (μBs) can be used as effective contrast agents to target cellular biomarkers in lymphatic vessels and can be detected by OCT using a phase variance algorithm. We applied this technique to image the molecular dynamics of lymphatic vessel endothelial hyaluronan receptor 1 (LYVE-1) in vivo, which showed significant down-regulation during tissue inflammation.
Real-time free-viewpoint DIBR for large-size 3DLED
NASA Astrophysics Data System (ADS)
Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru
2017-10-01
Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology that is among the most likely to be commercialized. In naked-eye 3D display, the screen size is one of the most important factors affecting the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of LED screens, which degrades the rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth (T+D) format is chosen for the display content, and a modified Depth Image Based Rendering (DIBR) method is proposed to synthesize new views. In order to achieve real-time performance, the whole algorithm is implemented on a GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video at 4K × 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.
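The DIBR step at the heart of the T+D pipeline warps each texture pixel horizontally by its disparity, d = baseline · focal / Z. A naive sketch of that forward warp (the camera parameters are illustrative; the paper's modified DIBR additionally handles the LED pixel misalignment, which this does not):

```python
import numpy as np

def dibr_synthesize(texture, depth, baseline_focal=200.0):
    """Naive DIBR: shift each pixel horizontally by its disparity
    d = baseline*focal/Z. Disocclusion holes stay at zero; a real
    system would inpaint them. `baseline_focal` is illustrative."""
    h, w = depth.shape
    out = np.zeros_like(texture, dtype=float)
    # Warp far-to-near so nearer pixels overwrite occluded ones.
    order = np.argsort(-depth, axis=None)
    for idx in order:
        y, x = divmod(idx, w)
        nx = x + int(round(baseline_focal / depth[y, x]))
        if 0 <= nx < w:
            out[y, nx] = texture[y, x]
    return out
```

Each of the 43 views is one such warp with a different virtual baseline, which is why a GPU implementation (one thread per pixel, views in parallel) makes real time reachable.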
Optical coherence tomography - principles and applications
NASA Astrophysics Data System (ADS)
Fercher, A. F.; Drexler, W.; Hitzenberger, C. K.; Lasser, T.
2003-02-01
There have been three basic approaches to optical tomography since the early 1980s: diffraction tomography, diffuse optical tomography and optical coherence tomography (OCT). Optical techniques are of particular importance in the medical field, because these techniques promise to be safe and cheap and, in addition, offer a therapeutic potential. Advances in OCT technology have made it possible to apply OCT in a wide variety of applications, but medical applications are still dominating. Specific advantages of OCT are its high depth and transverse resolution, the fact that its depth resolution is decoupled from transverse resolution, high probing depth in scattering media, contact-free and non-invasive operation, and the possibility to create various function-dependent image contrasting methods. This report presents the principles of OCT and the state of important OCT applications. OCT synthesises cross-sectional images from a series of laterally adjacent depth-scans. At present, OCT is used in three different fields of optical imaging: macroscopic imaging of structures that can be seen by the naked eye or under weak magnification; microscopic imaging at magnifications up to the classical limit of microscopic resolution; and endoscopic imaging at low and medium magnification. The first OCT techniques, such as the reflectometry technique and the dual beam technique, were based on time-domain low-coherence interferometry depth-scans. Later, Fourier-domain techniques were developed and led to new imaging schemes. Recently developed parallel OCT schemes eliminate the need for lateral scanning and, therefore, dramatically increase the imaging rate. These schemes use CCD cameras and CMOS detector arrays as photodetectors. Video-rate three-dimensional OCT pictures have been obtained. Modifying interference microscopy techniques has led to high-resolution optical coherence microscopy that achieved sub-micrometre resolution.
This report is concluded with a short presentation of important OCT applications. Ophthalmology is, due to the transparent ocular structures, still the main field of OCT application. The first commercial instrument too has been introduced for ophthalmic diagnostics (Carl Zeiss Meditec AG). Advances in using near-infrared light, however, opened the path for OCT imaging in strongly scattering tissues. Today, optical in vivo biopsy is one of the most challenging fields of OCT application. High resolution, high penetration depth, and its potential for functional imaging attribute to OCT an optical biopsy quality, which can be used to assess tissue and cell function and morphology in situ. OCT can already clarify the relevant architectural tissue morphology. For many diseases, however, including cancer in its early stages, higher resolution is necessary. New broad-bandwidth light sources, like photonic crystal fibres and superfluorescent fibre sources, and new contrasting techniques, give access to new sample properties and unmatched sensitivity and resolution.
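The Fourier-domain technique mentioned above can be shown in miniature: a reflector at depth z imprints a fringe cos(2kz) on the spectral interferogram, and an FFT over wavenumber k maps each fringe frequency to its depth. All numbers below are illustrative:

```python
import numpy as np

# Spectral interferogram of two equal reflectors (Fourier-domain OCT toy model).
k = np.linspace(6.0, 8.0, 2048)            # wavenumber samples (1/um)
depths = [50.0, 120.0]                     # reflector depths (um), illustrative
spectrum = sum(np.cos(2 * k * z) for z in depths)

# FFT over k turns each fringe into a peak in the depth profile (A-scan).
a_scan = np.abs(np.fft.rfft(spectrum * np.hanning(k.size)))
dz = np.pi / (k[-1] - k[0])                # depth per FFT bin (um)
found = np.sort(a_scan.argsort()[-2:]) * dz
```

The relation dz = π/Δk is also why depth resolution in OCT is set by source bandwidth, decoupled from the transverse resolution set by the focusing optics, as the abstract states.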
Estimation of object motion parameters from noisy images.
Broida, T J; Chellappa, R
1986-01-01
An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one dimensional images of a two dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.
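The recursive estimator described above is, at its linear core, a Kalman filter over a dynamic motion model. A scalar constant-velocity sketch (the paper's full method is an iterated extended Kalman filter over the nonlinear central-projection model; the noise constants here are illustrative):

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-4, r=0.01):
    """Constant-velocity Kalman filter on noisy 1-D position
    measurements. State is [position, velocity]; q and r are
    illustrative process/measurement noise variances."""
    x = np.zeros(2)
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    Q = q * np.eye(2)
    H = np.array([[1.0, 0.0]])              # we observe position only
    for z in zs:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                 # innovation covariance
        K = (P @ H.T) / S                   # Kalman gain
        x = x + (K * (z - H @ x)).ravel()   # update
        P = (np.eye(2) - K @ H) @ P
    return x
```

Because the filter is recursive, lengthening the image sequence (the 10-20+ frames the abstract mentions) smooths the estimate without reprocessing old frames.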
Validation of MODIS Aerosol Optical Depth Retrieval Over Land
NASA Technical Reports Server (NTRS)
Chu, D. A.; Kaufman, Y. J.; Ichoku, C.; Remer, L. A.; Tanre, D.; Holben, B. N.; Einaudi, Franco (Technical Monitor)
2001-01-01
Aerosol optical depths are derived operationally for the first time over land in the visible wavelengths by MODIS (Moderate Resolution Imaging Spectroradiometer) onboard the EOSTerra spacecraft. More than 300 Sun photometer data points from more than 30 AERONET (Aerosol Robotic Network) sites globally were used in validating the aerosol optical depths obtained during July - September 2000. Excellent agreement is found with retrieval errors within (Delta)tau=+/- 0.05 +/- 0.20 tau, as predicted, over (partially) vegetated surfaces, consistent with pre-launch theoretical analysis and aircraft field experiments. In coastal and semi-arid regions larger errors are caused predominantly by the uncertainty in evaluating the surface reflectance. The excellent fit was achieved despite the ongoing improvements in instrument characterization and calibration. This results show that MODIS-derived aerosol optical depths can be used quantitatively in many applications with cautions for residual clouds, snow/ice, and water contamination.
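The validation criterion quoted above, Δτ = ±0.05 ± 0.20τ, is an error envelope around the Sun-photometer value. A one-function sketch of that check (the envelope interpretation is the standard one; function and argument names are ours):

```python
def within_expected_error(tau_retrieved, tau_sun, a=0.05, b=0.20):
    """MODIS over-land validation check: the retrieval agrees if it
    falls inside the envelope tau_sun +/- (0.05 + 0.20 * tau_sun)."""
    bound = a + b * tau_sun
    return abs(tau_retrieved - tau_sun) <= bound
```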
NASA Astrophysics Data System (ADS)
Yamagiwa, Masatomo; Ogawa, Takayuki; Minamikawa, Takeo; Abdelsalam, Dahi Ghareab; Okabe, Kyosuke; Tsurumachi, Noriaki; Mizutani, Yasuhiro; Iwata, Testuo; Yamamoto, Hirotsugu; Yasui, Takeshi
2018-06-01
Terahertz digital holography (THz-DH) has the potential to be used for non-destructive inspection of visibly opaque soft materials due to its good immunity to optical scattering and absorption. Although previous research on full-field off-axis THz-DH has usually been performed using Fresnel diffraction reconstruction, its minimum reconstruction distance occasionally prevents a sample from being placed near a THz imager to increase the signal-to-noise ratio in the hologram. In this article, we apply the angular spectrum method (ASM) for wavefront reconstruction in full-field off-axis THz-DH because ASM is more accurate at short reconstruction distances. We demonstrate real-time phase imaging of a visibly opaque plastic sample with a phase resolution power of λ/49 at a frame rate of 3.5 Hz in addition to real-time amplitude imaging. We also perform digital focusing of the amplitude image for the same object with a depth selectivity of 447 μm. Furthermore, 3D imaging of visibly opaque silicon objects was achieved with a depth precision of 1.7 μm. The demonstrated results indicate the high potential of the proposed method for in-line or in-process non-destructive inspection of soft materials.
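The angular spectrum method itself is compact: FFT the field, multiply by the free-space transfer function exp(i·kz·d), and inverse FFT. A minimal sketch with illustrative units (it cuts evanescent components rather than modelling them, a common simplification):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex 2-D field by `distance` using the angular
    spectrum method, which stays accurate at the short reconstruction
    distances where Fresnel diffraction breaks down. Evanescent
    components are simply zero-phased here (a common simplification)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance)          # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function is exact (no paraxial expansion), propagating forward then backward recovers the input field, which is the digital-focusing knob used in the abstract.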
Rapid prototyping of biomimetic vascular phantoms for hyperspectral reflectance imaging
Ghassemi, Pejhman; Wang, Jianting; Melchiorri, Anthony J.; Ramella-Roman, Jessica C.; Mathews, Scott A.; Coburn, James C.; Sorg, Brian S.; Chen, Yu; Joshua Pfefer, T.
2015-01-01
Abstract. The emerging technique of rapid prototyping with three-dimensional (3-D) printers provides a simple yet revolutionary method for fabricating objects with arbitrary geometry. The use of 3-D printing for generating morphologically biomimetic tissue phantoms based on medical images represents a potentially major advance over existing phantom approaches. Toward the goal of image-defined phantoms, we converted a segmented fundus image of the human retina into a matrix format and edited it to achieve a geometry suitable for printing. Phantoms with vessel-simulating channels were then printed using a photoreactive resin providing biologically relevant turbidity, as determined by spectrophotometry. The morphology of printed vessels was validated by x-ray microcomputed tomography. Channels were filled with hemoglobin (Hb) solutions undergoing desaturation, and phantoms were imaged with a near-infrared hyperspectral reflectance imaging system. Additionally, a phantom was printed incorporating two disjoint vascular networks at different depths, each filled with Hb solutions at different saturation levels. Light propagation effects noted during these measurements—including the influence of vessel density and depth on Hb concentration and saturation estimates, and the effect of wavelength on vessel visualization depth—were evaluated. Overall, our findings indicated that 3-D-printed biomimetic phantoms hold significant potential as realistic and practical tools for elucidating light–tissue interactions and characterizing biophotonic system performance. PMID:26662064
NASA Astrophysics Data System (ADS)
Liba, Orly; Sorelle, Elliott D.; Sen, Debasish; de La Zerda, Adam
2016-03-01
Optical Coherence Tomography (OCT) enables real-time imaging of living tissues at cell-scale resolution over millimeters in three dimensions. Despite these advantages, functional biological studies with OCT have been limited by a lack of exogenous contrast agents that can be distinguished from tissue. Here we report an approach to functional OCT imaging that implements custom algorithms to spectrally identify unique contrast agents: large gold nanorods (LGNRs). LGNRs exhibit 110-fold greater spectral signal per particle than conventional GNRs, which enables detection of individual LGNRs in water and concentrations as low as 250 pM in the circulation of living mice. This translates to ~40 particles per imaging voxel in vivo. Unlike previous implementations of OCT spectral detection, the methods described herein adaptively compensate for depth and processing artifacts on a per sample basis. Collectively, these methods enable high-quality noninvasive contrast-enhanced imaging of OCT in living subjects, including detection of tumor microvasculature at twice the depth achievable with conventional OCT. Additionally, multiplexed detection of spectrally-distinct LGNRs was demonstrated to observe discrete patterns of lymphatic drainage and identify individual lymphangions and lymphatic valve functional states. These capabilities provide a powerful platform for molecular imaging and characterization of tissue noninvasively at cellular resolution, called MOZART.
Raphael, David T.; Li, Xiang; Park, Jinhyoung; Chen, Ruimin; Chabok, Hamid; Barukh, Arthur; Zhou, Qifa; Elgazery, Mahmoud; Shung, K. Kirk
2012-01-01
Feasibility is demonstrated for a forward-imaging beam steering system involving a single-element 20 MHz angled-face acoustic transducer combined with an internal rotating variable-angle reflecting surface (VARS). Rotation of the VARS structure, for a fixed position of the transducer, generates a 2-D angular sector scan. If these VARS revolutions were to be accompanied by successive rotations of the single-element transducer, 3-D imaging would be achieved. In the design of this device, a single-element 20 MHz PMN-PT press-focused angled-face transducer is focused on the circle of midpoints of a micro-machined VARS within the distal end of an endoscope. The 2-D imaging system was tested in water bath experiments with phantom wire structures at a depth of 10 mm, and exhibited an axial resolution of 66 μm and a lateral resolution of 520 μm. Chirp coded excitation was used to enhance the signal-to-noise ratio, and to increase the depth of penetration. Images of an ex vivo cow eye were obtained. This VARS-based approach offers a novel forward-looking beam-steering method, which could be useful in intra-cavity imaging. PMID:23122968
Raphael, David T; Li, Xiang; Park, Jinhyoung; Chen, Ruimin; Chabok, Hamid; Barukh, Arthur; Zhou, Qifa; Elgazery, Mahmoud; Shung, K Kirk
2013-02-01
Feasibility is demonstrated for a forward-imaging beam steering system involving a single-element 20MHz angled-face acoustic transducer combined with an internal rotating variable-angle reflecting surface (VARS). Rotation of the VARS structure, for a fixed position of the transducer, generates a 2-D angular sector scan. If these VARS revolutions were to be accompanied by successive rotations of the single-element transducer, 3-D imaging would be achieved. In the design of this device, a single-element 20MHz PMN-PT press-focused angled-face transducer is focused on the circle of midpoints of a micro-machined VARS within the distal end of an endoscope. The 2-D imaging system was tested in water bath experiments with phantom wire structures at a depth of 10mm, and exhibited an axial resolution of 66μm and a lateral resolution of 520μm. Chirp coded excitation was used to enhance the signal-to-noise ratio, and to increase the depth of penetration. Images of an ex vivo cow eye were obtained. This VARS-based approach offers a novel forward-looking beam-steering method, which could be useful in intra-cavity imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Yamagiwa, Masatomo; Ogawa, Takayuki; Minamikawa, Takeo; Abdelsalam, Dahi Ghareab; Okabe, Kyosuke; Tsurumachi, Noriaki; Mizutani, Yasuhiro; Iwata, Testuo; Yamamoto, Hirotsugu; Yasui, Takeshi
2018-04-01
Terahertz digital holography (THz-DH) has the potential to be used for non-destructive inspection of visibly opaque soft materials due to its good immunity to optical scattering and absorption. Although previous research on full-field off-axis THz-DH has usually been performed using Fresnel diffraction reconstruction, its minimum reconstruction distance occasionally prevents a sample from being placed near a THz imager to increase the signal-to-noise ratio in the hologram. In this article, we apply the angular spectrum method (ASM) for wavefront reconstruction in full-field off-axis THz-DH because ASM is more accurate at short reconstruction distances. We demonstrate real-time phase imaging of a visibly opaque plastic sample with a phase resolution power of λ/49 at a frame rate of 3.5 Hz in addition to real-time amplitude imaging. We also perform digital focusing of the amplitude image for the same object with a depth selectivity of 447 μm. Furthermore, 3D imaging of visibly opaque silicon objects was achieved with a depth precision of 1.7 μm. The demonstrated results indicate the high potential of the proposed method for in-line or in-process non-destructive inspection of soft materials.
Rapid prototyping of biomimetic vascular phantoms for hyperspectral reflectance imaging.
Ghassemi, Pejhman; Wang, Jianting; Melchiorri, Anthony J; Ramella-Roman, Jessica C; Mathews, Scott A; Coburn, James C; Sorg, Brian S; Chen, Yu; Pfefer, T Joshua
2015-01-01
The emerging technique of rapid prototyping with three-dimensional (3-D) printers provides a simple yet revolutionary method for fabricating objects with arbitrary geometry. The use of 3-D printing for generating morphologically biomimetic tissue phantoms based on medical images represents a potentially major advance over existing phantom approaches. Toward the goal of image-defined phantoms, we converted a segmented fundus image of the human retina into a matrix format and edited it to achieve a geometry suitable for printing. Phantoms with vessel-simulating channels were then printed using a photoreactive resin providing biologically relevant turbidity, as determined by spectrophotometry. The morphology of printed vessels was validated by x-ray microcomputed tomography. Channels were filled with hemoglobin (Hb) solutions undergoing desaturation, and phantoms were imaged with a near-infrared hyperspectral reflectance imaging system. Additionally, a phantom was printed incorporating two disjoint vascular networks at different depths, each filled with Hb solutions at different saturation levels. Light propagation effects noted during these measurements—including the influence of vessel density and depth on Hb concentration and saturation estimates, and the effect of wavelength on vessel visualization depth—were evaluated. Overall, our findings indicated that 3-D-printed biomimetic phantoms hold significant potential as realistic and practical tools for elucidating light–tissue interactions and characterizing biophotonic system performance.
High-frequency ultrasound annular array imaging. Part II: digital beamformer design and imaging.
Hu, Chang-Hong; Snook, Kevin A; Cao, Pei-Jie; Shung, K Kirk
2006-02-01
This is the second part of a two-paper series reporting a recent effort in the development of a high-frequency annular array ultrasound imaging system. In this paper an imaging system composed of a six-element, 43 MHz annular array transducer, a six-channel analog front-end, a field programmable gate array (FPGA)-based beamformer, and a digital signal processor (DSP) microprocessor-based scan converter will be described. A computer is used as the interface for image display. The beamformer that applies delays to the echoes for each channel is implemented with the strategy of combining the coarse and fine delays. The coarse delays that are integer multiples of the clock periods are achieved by using a first-in-first-out (FIFO) structure, and the fine delays are obtained with a fractional delay (FD) filter. Using this principle, dynamic receiving focusing is achieved. The image from a wire phantom obtained with the imaging system was compared to that from a prototype ultrasonic backscatter microscope with a 45 MHz single-element transducer. The improved lateral resolution and depth of field from the wire phantom image were observed. Images from an excised rabbit eye sample also were obtained, and fine anatomical structures were discerned.
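The coarse/fine delay split described above can be sketched in software: the integer part of the focusing delay is a sample shift (the FPGA's FIFO), and the fractional remainder is realized by a windowed-sinc fractional-delay FIR filter. Tap count and window choice below are illustrative, not the paper's FD filter design:

```python
import numpy as np

def apply_delay(signal, delay_samples, frac_taps=8):
    """Delay a sampled signal by a possibly fractional number of
    samples: an integer (coarse) part, done as a sample shift like the
    FPGA FIFO, plus a windowed-sinc fractional-delay (FD) filter for
    the fine part, mirroring the coarse/fine strategy."""
    coarse = int(np.floor(delay_samples))
    frac = delay_samples - coarse
    n = np.arange(-frac_taps, frac_taps + 1)
    h = np.sinc(n - frac) * np.hamming(n.size)   # FD filter taps
    out = np.convolve(signal, h, mode="same")
    return np.roll(out, coarse)
```

Applying per-channel delays of this form to the six element signals and summing is the essence of the dynamic receive focusing the system implements.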
NASA Astrophysics Data System (ADS)
Subashini, L.; Vasudevan, M.
2012-02-01
Type 316 LN stainless steel is the major structural material used in the construction of nuclear reactors. Activated flux tungsten inert gas (A-TIG) welding has been developed to increase the depth of penetration, because the depth of penetration achievable in single-pass TIG welding is limited. Real-time monitoring and control of weld processes are gaining importance because of the requirement for remote welding process technologies. Hence, it is essential to develop computational methodologies based on an adaptive neuro-fuzzy inference system (ANFIS) or artificial neural network (ANN) for predicting and controlling the depth of penetration and weld bead width during A-TIG welding of type 316 LN stainless steel. In the current work, A-TIG welding experiments have been carried out on 6-mm-thick plates of 316 LN stainless steel by varying the welding current. During welding, infrared (IR) thermal images of the weld pool have been acquired in real time, and features have been extracted from the IR thermal images of the weld pool. The welding current values, along with the extracted features such as length, width of the hot spot, thermal area determined from the Gaussian fit, and thermal bead width computed from the first derivative curve, were used as inputs, whereas the measured depth of penetration and weld bead width were used as outputs of the respective models. Accurate ANFIS models have been developed for predicting the depth of penetration and the weld bead width during TIG welding of 6-mm-thick 316 LN stainless steel plates. A good correlation between the measured and predicted values of weld bead width and depth of penetration was observed in the developed models. The performance of the ANFIS models is compared with that of the ANN models.
Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging
Manley, Suliana
2015-01-01
Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term "wobble", results in warped 3D SR images, and provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically oriented ring-like structure. We also include this correction method in a registration procedure for dual-color 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample. PMID:26600467
Second harmonic generation imaging of skeletal muscle tissue and myofibrils
NASA Astrophysics Data System (ADS)
Campagnola, Paul J.; Mohler, William H.; Plotnikov, Sergey; Millard, Andrew C.
2006-02-01
Second Harmonic Generation (SHG) imaging microscopy is used to examine the morphology and structural properties of intact muscle tissue. Using biochemical and optical analysis, we characterize the molecular structure underlying SHG from the complex muscle sarcomere. We find that SHG from isolated myofibrils is abolished by extraction of myosin, but is unaffected by removal or addition of actin filaments. We thus determined that the SHG emission arises from domains of the sarcomere containing thick filaments. By fitting the SHG polarization anisotropy to theoretical response curves, we find an orientation for the harmonophore that corresponds well to the pitch angle of the myosin rod α-helix with respect to the thick filament axis. Taken together, these data indicate that myosin rod domains are the key structures giving rise to SHG from striated muscle. Using SHG imaging microscopy, we have also examined the effect of optical clearing with glycerol to achieve greater penetration into specimens of skeletal muscle tissue. We find that treatment with 50% glycerol results in a 2.5-fold increase in achievable SHG imaging depth. Fast Fourier Transform (FFT) analysis shows quantitatively that the periodicity of the sarcomere structure is unaltered by the clearing process. Also, comparison of the SHG angular polarization dependence shows no change in the supramolecular organization of acto-myosin complexes. We suggest that the primary mechanism of optical clearing in muscle with glycerol treatment is the reduction of cytoplasmic protein concentration and the concomitant decrease in the secondary inner filter effect on the SHG signal. The pronounced lack of dependence of the imaging depth on glycerol concentration indicates that refractive index matching plays only a minor role in the optical clearing of muscle.
Automated motion artifact removal for intravital microscopy, without a priori information.
Lee, Sungon; Vinegoni, Claudio; Sebas, Matthew; Weissleder, Ralph
2014-03-28
Intravital fluorescence microscopy, through extended penetration depth and imaging resolution, provides the ability to image at cellular and subcellular resolution in live animals, presenting an opportunity for new insights into in vivo biology. Unfortunately, physiologically induced motion due to respiration and cardiac activity is a major source of image artifacts and imposes severe limitations on the effective imaging resolution that can ultimately be achieved in vivo. Here we present a novel imaging methodology capable of automatically removing motion artifacts during intravital microscopy imaging of organs and orthotopic tumors. The method is universally applicable to different laser scanning modalities, including confocal and multiphoton microscopy, and offers artifact-free reconstructions independent of the physiological motion source and imaged organ. The methodology, which is based on raw data acquisition followed by image processing, is demonstrated here for both cardiac and respiratory motion compensation in the mouse heart, kidney, liver, pancreas, and dorsal window chamber.
A depth enhancement strategy for kinect depth image
NASA Astrophysics Data System (ADS)
Quan, Wei; Li, Hua; Han, Cheng; Xue, Yaohong; Zhang, Chao; Hu, Hanping; Jiang, Zhengang
2018-03-01
Kinect is a motion-sensing input device widely used in computer vision and related fields. However, Kinect depth images contain many inaccurate depth values, even with Kinect v2. In this paper, an algorithm is proposed to enhance Kinect v2 depth images. Following the principle of its depth measurement, the foreground and the background are treated separately. For the background, holes are filled according to the depth data in the neighborhood. For the foreground, a filling algorithm based on the color image, which considers both spatial and color information, is proposed. An adaptive joint bilateral filter is used to reduce noise. Experimental results show that the processed depth images have a clean background and clear edges, and are better than those of traditional strategies. The method can be applied in 3D reconstruction to preprocess depth images in real time and obtain accurate results.
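The joint bilateral filtering step can be illustrated with a minimal, non-adaptive sketch in which the range weights come from a grayscale guide (color) image rather than from the noisy depth itself; the window radius and the two sigmas below are illustrative, not the paper's adaptive settings:

```python
import numpy as np

def joint_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=20.0):
    # Joint (cross) bilateral filter: spatial Gaussian weights times range
    # weights computed from the guide image, so depth is smoothed without
    # blurring across guide-image (color) edges.
    H, W = depth.shape
    out = np.zeros((H, W))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth.astype(float), radius, mode="edge")
    pad_g = np.pad(guide.astype(float), radius, mode="edge")
    for i in range(H):
        for j in range(W):
            win_d = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(win_g - guide[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng_w
            out[i, j] = (w * win_d).sum() / w.sum()
    return out
```

The nested loops keep the sketch readable; a practical implementation would vectorize over the window offsets.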
Photon-efficient super-resolution laser radar
NASA Astrophysics Data System (ADS)
Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.
2017-08-01
The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extreme scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm inspired by sparse pursuit algorithms, and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and in through-the-skin biomedical imaging applications.
Polychromatic wave-optics models for image-plane speckle. 2. Unresolved objects.
Van Zandt, Noah R; Spencer, Mark F; Steinbock, Michael J; Anderson, Brian M; Hyde, Milo W; Fiorino, Steven T
2018-05-20
Polychromatic laser light can reduce speckle noise in many wavefront-sensing and imaging applications. To help quantify the achievable reduction in speckle noise, this study investigates the accuracy of three polychromatic wave-optics models under the specific conditions of an unresolved object. Because existing theory assumes a well-resolved object, laboratory experiments are used to evaluate model accuracy. The three models use Monte-Carlo averaging, depth slicing, and spectral slicing, respectively, to simulate the laser-object interaction. The experiments involve spoiling the temporal coherence of laser light via a fiber-based, electro-optic modulator. After the light scatters off the rough object, speckle statistics are measured. The Monte-Carlo method is found to be highly inaccurate, while the depth-slicing error peaks at 7.8% but is generally much lower. The spectral-slicing method is the most accurate, always producing results within the error bounds of the experiment.
Eddy current imaging for electrical characterization of silicon solar cells and TCO layers
NASA Astrophysics Data System (ADS)
Hwang, Byungguk; Hillmann, Susanne; Schulze, Martin; Klein, Marcus; Heuer, Henning
2015-03-01
Eddy Current Testing has mainly been used to detect defects in conductive materials and to measure wall thicknesses in heavy industries such as construction or aerospace. Recently, high-frequency Eddy Current imaging technology was developed. It enables information to be acquired from different depth levels in conductive thin-film structures by realizing an appropriate standard penetration depth. In this paper, we summarize state-of-the-art applications in the PV industry and extend the analysis with spatially resolved Eddy Current Testing. Specific choices of frequency and complex phase-angle rotation reveal diverse defects from the front to the back side of silicon solar cells and characterize the homogeneity of sheet resistance in Transparent Conductive Oxide (TCO) layers. In order to verify technical feasibility, measurement results from the Multi Parameter Eddy Current Scanner (MPECS) are compared to results from electroluminescence.
GPU-based real-time trinocular stereo vision
NASA Astrophysics Data System (ADS)
Yao, Yuanbin; Linton, R. J.; Padir, Taskin
2013-01-01
Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
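The winner-take-all idea itself is simple: every pixel keeps the disparity with the lowest matching cost. A minimal binocular NumPy sketch (absolute-difference cost, no aggregation window; a trinocular system would add costs from the third view before the argmin):

```python
import numpy as np

def wta_disparity(left, right, max_disp=8):
    # Cost volume: cost[d, i, j] = |left(i, j) - right(i, j - d)|.
    # Columns where a disparity is out of range stay at +inf.
    H, W = left.shape
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :W - d])
    # Winner-take-all: per pixel, pick the disparity of minimum cost.
    return np.argmin(cost, axis=0)
```

Real implementations aggregate the per-pixel cost over a support window (e.g., a box or adaptive filter) before the argmin to make the match robust to noise.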
Ju, Bing-Feng; Chen, Yuan-Liu; Zhang, Wei; Zhu, Wule; Jin, Chao; Fang, F Z
2012-05-01
A compact but practical scanning tunneling microscope (STM) with high-aspect-ratio and high-depth capability has been developed. A long-range scanning mechanism with a tilt-adjustment stage is adopted to adjust the probe-sample relative angle and compensate for non-parallel effects. A periodic trench microstructure with a pitch of 10 μm has been successfully imaged over a long scanning range of up to 2.0 mm. Moreover, a deep trench with a depth and step height of 23.0 μm has also been successfully measured, with a sidewall slope angle of approximately 67°. The probe can continuously climb the high step and explore the trench bottom without tip crashes. The new STM can perform long-range measurements of deep-trench and high-step surfaces without image distortion, enabling accurate measurement and quality control of periodic trench microstructures.
X-ray Radiation-Controlled NO-Release for On-Demand Depth-Independent Hypoxic Radiosensitization.
Fan, Wenpei; Bu, Wenbo; Zhang, Zhen; Shen, Bo; Zhang, Hui; He, Qianjun; Ni, Dalong; Cui, Zhaowen; Zhao, Kuaile; Bu, Jiwen; Du, Jiulin; Liu, Jianan; Shi, Jianlin
2015-11-16
Multifunctional stimuli-responsive nanotheranostic systems are highly desirable for realizing simultaneous biomedical imaging and on-demand therapy with minimized adverse effects. Herein, we present the construction of an intelligent X-ray-controlled NO-releasing upconversion nanotheranostic system (termed as PEG-USMSs-SNO) by engineering UCNPs with S-nitrosothiol (R-SNO)-grafted mesoporous silica. The PEG-USMSs-SNO is designed to respond sensitively to X-ray radiation for breaking down the S-N bond of SNO to release NO, which leads to X-ray dose-controlled NO release for on-demand hypoxic radiosensitization besides upconversion luminescent imaging through UCNPs in vitro and in vivo. Thanks to the high live-body permeability of X-ray, our developed PEG-USMSs-SNO may provide a new technique for achieving depth-independent controlled NO release and positioned radiotherapy enhancement against deep-seated solid tumors.
NASA Astrophysics Data System (ADS)
Shi, Lingyan; Rodríguez-Contreras, Adrián; Budansky, Yury; Pu, Yang; An Nguyen, Thien; Alfano, Robert R.
2014-06-01
Two-photon (2P) excitation of the second singlet (S₂) state was studied to achieve deep optical microscopic imaging in brain tissue when both the excitation (800 nm) and emission (685 nm) wavelengths lie in the "tissue optical window" (650 to 950 nm). The S₂ state technique was used to investigate chlorophyll α (Chl α) fluorescence inside a spinach leaf under a thick layer of freshly sliced rat brain tissue in combination with 2P microscopic imaging. Strong emission at the peak wavelength of 685 nm under the 2P S₂ state of Chl α enabled an imaging depth of up to 450 μm through rat brain tissue.
Huang, Hongxin; Inoue, Takashi; Tanaka, Hiroshi
2011-08-01
We studied the long-term optical performance of an adaptive optics scanning laser ophthalmoscope that uses a liquid-crystal-on-silicon spatial light modulator to correct ocular aberrations. The system achieved good compensation of aberrations while acquiring images of fine retinal structures, except during sudden eye movements. The residual wavefront aberrations collected over several minutes in several situations were statistically analyzed. The mean values of the root-mean-square residual wavefront errors were 23-30 nm, and for around 91-94% of the effective time the errors were below the Maréchal criterion for diffraction-limited imaging. The ability to axially shift the imaging plane to different retinal depths was also demonstrated.
NASA Astrophysics Data System (ADS)
Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried
2017-09-01
Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
NASA Astrophysics Data System (ADS)
Chen, Yu; Fingler, Jeff; Trinh, Le A.; Fraser, Scott E.
2016-03-01
A phase variance optical coherence microscope (pvOCM) has been created to visualize blood flow in the vasculature of zebrafish embryos, without using exogenous labels. The pvOCM imaging system has axial and lateral resolutions of 2 μm in tissue, and imaging depth of more than 100 μm. Imaging of 2-5 days post-fertilization zebrafish embryos identified the detailed structures of somites, spinal cord, gut and notochord based on intensity contrast. Visualization of the blood flow in the aorta, veins and intersegmental vessels was achieved with phase variance contrast. The pvOCM vasculature images were confirmed with corresponding fluorescence microscopy of a zebrafish transgene that labels the vasculature with green fluorescent protein. The pvOCM images also revealed functional information of the blood flow activities that is crucial for the study of vascular development.
Bhatnagar, Sumit; Verma, Kirti Dhingra; Hu, Yongjun; Khera, Eshita; Priluck, Aaron; Smith, David E; Thurber, Greg M
2018-05-07
Molecular imaging is advantageous for screening diseases such as breast cancer by providing precise spatial information on disease-associated biomarkers, something neither blood tests nor anatomical imaging can achieve. However, the high cost and risks of ionizing radiation for several molecular imaging modalities have prevented a feasible and scalable approach for screening. Clinical studies have demonstrated the ability to detect breast tumors using nonspecific probes such as indocyanine green, but the lack of molecular information and required intravenous contrast agent does not provide a significant benefit over current noninvasive imaging techniques. Here we demonstrate that negatively charged sulfate groups, commonly used to improve solubility of near-infrared fluorophores, enable sufficient oral absorption and targeting of fluorescent molecular imaging agents for completely noninvasive detection of diseased tissue such as breast cancer. These functional groups improve the pharmacokinetic properties of affinity ligands to achieve targeting efficiencies compatible with clinical imaging devices using safe, nonionizing radiation (near-infrared light). Together, this enables development of a "disease screening pill" capable of oral absorption and systemic availability, target binding, background clearance, and imaging at clinically relevant depths for breast cancer screening. This approach should be adaptable to other molecular targets and diseases for use as a new class of screening agents.
NASA Astrophysics Data System (ADS)
Fan, Tiantian; Yu, Hongbin
2018-03-01
A novel shape-from-focus method combining a 3D steerable filter is proposed in this paper to improve performance on textureless regions. Unlike conventional spatial methods, which estimate the depth map by searching for the maximum edge response, the proposed method takes both the edge response and the axial imaging blur into consideration. As a result, more robust and accurate identification of the focused location can be achieved, especially for textureless objects. Improved depth-measurement performance has been demonstrated in both simulation and experimental results.
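The shape-from-focus pipeline, measure focus per slice, then pick the best slice per pixel, can be sketched with a simple modified-Laplacian focus measure standing in for the paper's 3D steerable filter (the synthetic focal stack in the usage below is an illustration):

```python
import numpy as np

def focus_measure(img):
    # Modified-Laplacian-style measure: absolute second differences along
    # x and y, summed; computed on interior pixels only.
    lx = np.abs(2 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1])
    ly = np.abs(2 * img[1:-1, 1:-1] - img[1:-1, :-2] - img[1:-1, 2:])
    return lx + ly

def depth_from_focus(stack):
    # Per pixel, pick the index of the slice where the focus response peaks.
    fm = np.stack([focus_measure(s) for s in stack])
    return np.argmax(fm, axis=0)
```

On a textureless region the measure is near zero in every slice, so the argmax is unreliable there, which is exactly the failure mode the paper's method is designed to mitigate.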
Thallium Bromide as an Alternative Material for Room-Temperature Gamma-Ray Spectroscopy and Imaging
NASA Astrophysics Data System (ADS)
Koehler, William
Thallium bromide is an attractive material for room-temperature gamma-ray spectroscopy and imaging because of its high atomic numbers (Tl: 81, Br: 35), high density (7.56 g/cm3), and wide bandgap (2.68 eV). In this work, 5 mm thick TlBr detectors achieved 0.94% FWHM at 662 keV for all single-pixel events and 0.72% FWHM at 662 keV from the best pixel and depth using three-dimensional position-sensing technology. However, these results were limited to stable operation at -20°C. After days to months of room-temperature operation, ionic conduction caused these devices to fail. Depth-dependent signal analysis was used to isolate room-temperature degradation effects to within 0.5 mm of the anode surface. This was verified by refabricating the detectors after complete failure at room temperature; after refabrication, similar performance and functionality were recovered. As part of this work, the improvement in electron drift velocity and energy resolution during conditioning at -20°C was quantified. A new method was developed to measure the impurity concentration without changing the gamma-ray measurement setup. It was used to show that detector conditioning was likely the result of charged impurities drifting out of the active volume; this space-charge reduction then produced a more stable and uniform electric field. Additionally, new algorithms were developed to remove hole contributions in high-hole-mobility detectors, improving the depth reconstruction (accuracy) without degrading the depth uncertainty (precision). Finally, the spectroscopic and imaging performance of new 11 x 11 pixelated-anode TlBr detectors was characterized. The larger detectors were used to show that energy resolution can be improved by identifying photopeak events from their Tl characteristic x-rays.
Detective quantum efficiency of photon-counting x-ray detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanguay, Jesse, E-mail: jessetan@mail.ubc.ca; Yun, Seungman; Kim, Ho Kyung
Purpose: Single-photon-counting (SPC) x-ray imaging has the potential to improve image quality and enable novel energy-dependent imaging methods. Similar to conventional detectors, optimizing SPC image quality will require systems that produce the highest possible detective quantum efficiency (DQE). This paper builds on the cascaded-systems analysis (CSA) framework to develop a comprehensive description of the DQE of SPC detectors that implement adaptive binning. Methods: The DQE of SPC systems can be described using the CSA approach by propagating the probability density function (PDF) of the number of image-forming quanta through simple quantum processes. New relationships are developed to describe PDF transfer through serial and parallel cascades to accommodate scatter reabsorption. Results are applied to hypothetical silicon and selenium-based flat-panel SPC detectors including the effects of reabsorption of characteristic/scatter photons from photoelectric and Compton interactions, stochastic conversion of x-ray energy to secondary quanta, depth-dependent charge collection, and electronic noise. Results are compared with a Monte Carlo study. Results: Depth-dependent collection efficiency can result in substantial broadening of photopeaks that in turn may result in reduced DQE at lower x-ray energies (20-45 keV). Double-counting interaction events caused by reabsorption of characteristic/scatter photons may result in falsely inflated image signal-to-noise ratio and potential overestimation of the DQE. Conclusions: The CSA approach is extended to describe signal and noise propagation through photoelectric and Compton interactions in SPC detectors, including the effects of escape and reabsorption of emission/scatter photons. High-performance SPC systems can be achieved but only for certain combinations of secondary conversion gain, depth-dependent collection efficiency, electronic noise, and reabsorption characteristics.
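At zero spatial frequency, the flavor of this cascaded-systems bookkeeping can be illustrated with a toy two-stage cascade: binomial selection (quantum efficiency) followed by Poisson conversion gain. The efficiency and gain values below are illustrative, not taken from the paper:

```python
import numpy as np

def cascade_dqe(eta, gbar):
    # Zero-frequency DQE of: Poisson x rays -> binomial selection (eta)
    # -> Poisson conversion gain (mean gbar). For a Poisson gain stage the
    # gain variance equals its mean, giving DQE = eta / (1 + 1/gbar).
    return eta / (1.0 + 1.0 / gbar)

# Monte-Carlo check of the same cascade
rng = np.random.default_rng(0)
q = 1000.0                               # mean incident quanta per element
n = rng.poisson(q, 200_000)              # incident x-ray quanta
m = rng.binomial(n, 0.6)                 # interaction (selection) stage
s = rng.poisson(5.0 * m)                 # secondary-quanta gain stage
dqe_mc = (s.mean() ** 2 / s.var()) / q   # SNR_out^2 / SNR_in^2
```

The analytic value and the simulation agree, which is the kind of consistency check the paper performs (with far richer cascades) against its Monte Carlo study.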
Depth profile measurement with lenslet images of the plenoptic camera
NASA Astrophysics Data System (ADS)
Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei
2018-03-01
An approach to depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. First, these images are processed directly with a refocusing technique to obtain the depth map, without the need to align and decode the plenoptic image. Then, a linear depth calibration based on the optical structure of the plenoptic camera is applied for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map: unlike the traditional method, the resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement; the results indicate that the proposed approach is both efficient and accurate.
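A linear depth calibration of this kind can be as simple as a least-squares line fit mapping the refocus parameter to known target depths; the calibration data below are hypothetical:

```python
import numpy as np

# Hypothetical calibration targets: refocus parameter alpha measured at
# a set of known target depths (mm).
alpha = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
z_mm = np.array([40.0, 45.0, 50.0, 55.0, 60.0])

# Linear calibration: z = slope * alpha + intercept (least squares).
slope, intercept = np.polyfit(alpha, z_mm, 1)

# Convert a newly measured refocus value into a metric depth.
z_est = slope * 1.05 + intercept
```

Once fitted, the same (slope, intercept) pair converts every pixel of the refocus-based depth map into metric depth.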
Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu
2018-09-01
The low spatial resolution of light-field image poses significant difficulties in exploiting its advantage. To mitigate the dependency of accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, which also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light field applications such as depth estimation.
Decoding representations of face identity that are tolerant to rotation.
Anzellotti, Stefano; Fairhall, Scott L; Caramazza, Alfonso
2014-08-01
In order to recognize the identity of a face we need to distinguish very similar images (specificity) while also generalizing identity information across image transformations such as changes in orientation (tolerance). Recent studies investigated the representation of individual faces in the brain, but it remains unclear whether the human brain regions that were found encode representations of individual images (specificity) or face identity (specificity plus tolerance). In the present article, we use multivoxel pattern analysis in the human ventral stream to investigate the representation of face identity across rotations in depth, a kind of transformation in which no point in the face image remains unchanged. The results reveal representations of face identity that are tolerant to rotations in depth in occipitotemporal cortex and in anterior temporal cortex, even when the similarity between mirror symmetrical views cannot be used to achieve tolerance. Converging evidence from different analysis techniques shows that the right anterior temporal lobe encodes a comparable amount of identity information to occipitotemporal regions, but this information is encoded over a smaller extent of cortex.
Extended depth of field system for long distance iris acquisition
NASA Astrophysics Data System (ADS)
Chen, Yuan-Lin; Hsieh, Sheng-Hsun; Hung, Kuo-En; Yang, Shi-Wen; Li, Yung-Hui; Tien, Chung-Hao
2012-10-01
Using biometric signatures for identity recognition has been practiced for centuries. Recently, iris recognition systems have attracted much attention due to their high accuracy and stability. The texture of the iris provides a signature that is unique to each subject. Most current commercial iris recognition systems acquire images at distances of less than 50 cm, a serious constraint that must be overcome if iris recognition is to be used for airport access or other entrances requiring a high turnover rate. In order to capture iris patterns from a distance, in this study we developed a telephoto imaging system with image processing techniques. By using a cubic phase mask positioned in front of the camera, the point spread function was kept constant over a wide range of defocus. With an adequate decoding filter, the blurred image was restored, so that a working distance between the subject and the camera of over 3 m could be achieved with a 500 mm focal length at aperture F/6.3. The simulation and experimental results validated the proposed scheme: the depth of focus of the iris camera was extended threefold over traditional optics while keeping sufficient recognition accuracy.
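The restoration step described here, recovering a sharp image from the constant-PSF blur, is typically a Wiener-style deconvolution using the known PSF. Below is a minimal 1-D NumPy sketch; the box PSF and SNR constant are illustrative stand-ins, not the authors' cubic-phase kernel:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Restore a blurred 1-D signal given the known PSF (frequency-domain Wiener filter)."""
    n = len(blurred)
    H = np.fft.fft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # regularized inverse filter
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

# Toy example: blur two impulses with a box PSF (circular convolution), then restore.
x = np.zeros(64)
x[20] = 1.0
x[40] = 0.5
psf = np.ones(5) / 5.0
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf, 64)))
x_hat = wiener_deconvolve(y, psf, snr=1e6)
```

With a high assumed SNR the filter approaches a pure inverse and the impulses are recovered almost exactly; lowering `snr` trades sharpness for noise suppression, which is the practical operating point for wavefront-coded systems.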
Time-of-Flight Microwave Camera.
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-10-05
Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions caused by the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
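The quoted depth figures follow directly from the standard FMCW and time-of-flight relations; a quick sanity check (the function names are ours, not from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def fmcw_range_resolution(bandwidth_hz: float) -> float:
    """Theoretical FMCW range resolution: dR = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

def path_length_from_time(delta_t_s: float) -> float:
    """One-way optical path length traversed in a time interval."""
    return C * delta_t_s

# X-band sweep, 8-12 GHz -> 4 GHz bandwidth
dr = fmcw_range_resolution(4e9)        # ~3.7 cm range resolution
path = path_length_from_time(200e-12)  # ~6 cm, matching the quoted 200 ps
```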
Shadow analysis via the C+K Visioline: A technical note.
Houser, T; Zerweck, C; Grove, G; Wickett, R
2017-11-01
This research investigated the ability of shadow analysis (via the Courage + Khazaka Visioline and Image Pro Premiere 9.0 software) to accurately assess the differences in skin topography associated with photoaging. Analyses were performed on impressions collected from a microfinish comparator scale (GAR Electroforming) as well as a series of impressions collected from the crow's feet region of 9 women representing each point on the Zerweck crow's feet classification scale. Analyses were performed using a Courage + Khazaka Visioline VL 650 as well as Image Pro Premiere 9.0 software. Shadow analysis was able to accurately measure groove depth when measuring impressions collected from grooves of known depth. Several shadow analysis parameters correlated with the expert grader's ratings of crow's feet when measurements taken from the North and South directions were averaged. The Max Depth parameter in particular showed a strong correlation with the expert grader's ratings, which improved when a more sophisticated analysis was performed using Image Pro Premiere. When used properly, shadow analysis is effective at accurately measuring skin surface impressions for differences in skin topography. Shadow analysis is shown to accurately assess differences across a range of crow's feet severity correlating with a 0-8 grading scale. The Visioline VL 650 is a good tool for this measurement, with room for improvement in analysis, which can be achieved through third-party image analysis software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qiang; Niu, Sijie; Yuan, Songtao
Purpose: In clinical research, it is important to measure choroidal thickness when eyes are affected by various diseases. The main purpose is to automatically segment the choroid in enhanced depth imaging optical coherence tomography (EDI-OCT) images with five-B-scan averaging. Methods: The authors present an automated choroid segmentation method based on choroidal vasculature characteristics for EDI-OCT images with five-B-scan averaging. Considering that the large vessels of Haller's layer neighbor the choroid-sclera junction (CSJ), the authors measured the intensity ascending distance and a maximum intensity image in the axial direction from a smoothed and normalized EDI-OCT image. Then, based on the generated choroidal vessel image, the authors constructed the CSJ cost and constrained the CSJ search neighborhood. Finally, graph search with smoothness constraints was utilized to obtain the CSJ boundary. Results: Experimental results with 49 images from 10 eyes of 8 normal subjects and 270 images from 57 eyes of 44 patients with several stages of diabetic retinopathy and age-related macular degeneration demonstrate that the proposed method can accurately segment the choroid in EDI-OCT images with five-B-scan averaging. The mean choroid thickness difference and overlap ratio between the proposed method and manual segmentation drawn by experts were −11.43 μm and 86.29%, respectively. Conclusions: Good performance was achieved for both normal and pathologic eyes, which shows that the method is effective for automated choroid segmentation of EDI-OCT images with five-B-scan averaging.
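A graph search with smoothness constraints of the kind described can be approximated by a column-wise dynamic program over a cost image, where the boundary row may shift by at most a few rows between neighboring columns. This is a generic sketch of such a boundary search, not the authors' exact formulation:

```python
def find_boundary(cost, max_jump=2):
    """Minimum-cost path with one row per column and a smoothness constraint.

    cost[r][c] is the cost of placing the boundary at row r in column c;
    between adjacent columns the row may change by at most max_jump.
    """
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    acc = [cost[r][0] for r in range(rows)]       # accumulated cost, column 0
    back = [[0] * rows for _ in range(cols)]      # backpointers
    for c in range(1, cols):
        new = [INF] * rows
        for r in range(rows):
            for d in range(-max_jump, max_jump + 1):
                p = r + d
                if 0 <= p < rows and acc[p] + cost[r][c] < new[r]:
                    new[r] = acc[p] + cost[r][c]
                    back[c][r] = p
        acc = new
    # backtrack from the cheapest end row
    r = min(range(rows), key=lambda i: acc[i])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[c][r]
        path.append(r)
    return path[::-1]

# toy cost image: the cheap boundary sits on row 2 in every column
cost = [[1.0] * 5 for _ in range(5)]
for c in range(5):
    cost[2][c] = 0.0
boundary = find_boundary(cost)
```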
Time-of-flight depth image enhancement using variable integration time
NASA Astrophysics Data System (ADS)
Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong
2013-03-01
Time-of-flight (ToF) cameras are used in a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate distance from a large amount of infrared light, which must be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times by exploiting an image fusion scheme originally proposed for color imaging. To compensate for depth differences due to the change of integration time, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images with different integration times. The depth images are then transformed into the wavelet domain and fused into a depth image with suppressed noise and few motion artifacts. To evaluate the proposed method, we captured the moving bar of a metronome with different integration times. The experiment shows that the proposed method can effectively remove motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
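A depth transfer function of the kind described can be estimated from the joint histogram by mapping each short-integration depth bin to its most frequent co-occurring long-integration depth. This simplified sketch invents a bin width and omits the wavelet fusion step:

```python
from collections import Counter, defaultdict

def estimate_depth_transfer(short_depths, long_depths, bin_mm=10):
    """Map each short-integration depth bin to the modal long-integration depth.

    Builds a joint histogram over (short, long) depth bins, then takes the
    mode along the long-integration axis for each short-integration bin.
    """
    joint = defaultdict(Counter)
    for s, l in zip(short_depths, long_depths):
        joint[int(s // bin_mm)][int(l // bin_mm)] += 1
    # transfer: short bin index -> representative long depth (bin center)
    return {b: (hist.most_common(1)[0][0] + 0.5) * bin_mm
            for b, hist in joint.items()}

# toy data: long-integration depths read ~20 mm nearer than the short ones
short = [100, 102, 105, 300, 303, 301]
long_ = [ 80,  81,  84, 280, 282, 283]
transfer = estimate_depth_transfer(short, long_)
```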
A New Paradigm for Matching UAV and Aerial Images
NASA Astrophysics Data System (ADS)
Koch, T.; Zhuo, X.; Reinartz, P.; Fraundorfer, F.
2016-06-01
This paper investigates the performance of SIFT-based image matching under large differences in image scaling and rotation, as is usually the case when trying to match images captured from UAVs and airplanes. This task represents an essential step for image registration and 3D reconstruction applications. Various real-world examples presented in this paper show that SIFT, as well as A-SIFT, performs poorly or even fails in this matching scenario. Even if the scale difference in the images is known and eliminated beforehand, the matching performance suffers from too few feature point detections, ambiguous feature point orientations, and the rejection of many correct matches by the subsequent ratio test. Therefore, a new feature matching method is presented that overcomes these problems and yields thousands of matches through a novel feature point detection strategy, a one-to-many matching scheme, and the substitution of the ratio test by geometric constraints, so that geometrically correct matches are obtained at repetitive image regions. This method is designed for matching almost nadir-directed images with low scene depth, as is typical in UAV and aerial image matching scenarios. We tested the proposed method on different real-world image pairs. While standard SIFT failed for most of the datasets, plenty of geometrically correct matches could be found using our approach. Comparing the estimated fundamental matrices and homographies with ground-truth solutions, mean errors of only a few pixels are achieved.
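The one-to-many scheme described here replaces the usual best-vs-second-best ratio test with an absolute distance threshold plus geometric filtering. The toy sketch below uses a crude dominant-translation vote in place of the paper's fundamental-matrix and homography constraints, so it only illustrates the structure of the idea:

```python
from collections import Counter

def one_to_many_matches(desc_a, desc_b, thresh=0.4):
    """Keep every candidate pair within an absolute descriptor-distance threshold
    (one feature in A may match several features in B)."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    return [(i, j)
            for i, da in enumerate(desc_a)
            for j, db in enumerate(desc_b)
            if dist(da, db) < thresh]

def filter_by_translation(matches, pts_a, pts_b):
    """Vote for the dominant integer translation and keep consistent matches."""
    shift = lambda m: (round(pts_b[m[1]][0] - pts_a[m[0]][0]),
                       round(pts_b[m[1]][1] - pts_a[m[0]][1]))
    dominant = Counter(map(shift, matches)).most_common(1)[0][0]
    return [m for m in matches if shift(m) == dominant]

# toy scene: B is A shifted by (5, 5), plus one repetitive-texture outlier
desc_a = [[0.0], [1.0], [2.0]]
desc_b = [[0.0], [1.0], [2.0], [0.1]]
pts_a = [(0, 0), (10, 0), (0, 10)]
pts_b = [(5, 5), (15, 5), (5, 15), (50, 50)]
m = one_to_many_matches(desc_a, desc_b, thresh=0.5)
good = filter_by_translation(m, pts_a, pts_b)
```

Note how the ambiguous match (feature 0 matching both b0 and b3) survives the distance test but is pruned by the geometric vote, which is exactly the failure mode of the ratio test at repetitive regions that the authors target.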
Thériault, Gabrielle; Cottet, Martin; Castonguay, Annie; McCarthy, Nathalie; De Koninck, Yves
2014-01-01
Two-photon microscopy has revolutionized functional cellular imaging in tissue, but although the highly confined depth of field (DOF) of standard setups yields excellent optical sectioning, it also limits imaging speed in volume samples and ease of use. For this reason, we recently presented a simple and retrofittable modification to the two-photon laser-scanning microscope that extends the DOF through the use of an axicon (conical lens). Here we demonstrate three significant benefits of this technique using biological samples commonly employed in the field of neuroscience. First, we use a sample of neurons grown in culture and move it along the z-axis, showing that a more stable focus is achieved without compromising transverse resolution. Second, we monitor 3D population dynamics in an acute slice of live mouse cortex, demonstrating that faster volumetric scans can be conducted. Third, we acquire a stereoscopic image of neurons and their dendrites in a fixed sample of mouse cortex, using only two scans instead of the complete stack and calculations required by standard systems. Taken together, these advantages, combined with the ease of integration into pre-existing systems, make extended depth-of-field imaging based on Bessel beams a strong asset for the field of microscopy and the life sciences in general.
Transurethral illumination probe design for deep photoacoustic imaging of prostate
NASA Astrophysics Data System (ADS)
Ai, Min; Salcudean, Tim; Rohling, Robert; Abolmaesumi, Purang; Tang, Shuo
2018-02-01
Photoacoustic (PA) imaging with internal light illumination through an optical fiber could enable imaging of internal organs at deep penetration. We have developed a transurethral probe with a multimode fiber inserted in a rigid cystoscope sheath for illuminating the prostate. At the distal end, the fiber tip is processed to diffuse light circumferentially over a 2 cm length. A parabolic cylindrical mirror then reflects the light to form a rectangular parallel beam of at least 1 cm² at the probe surface. The relatively large rectangular beam reduces the laser fluence rate on the urethral wall and thus the potential for tissue damage. A 3 cm optical penetration in chicken tissue is achieved at a fluence rate of around 7 mJ/cm². For further validation, a prostate phantom was built with optical properties similar to those of the human prostate. A 1.5 cm penetration depth is achieved in the prostate-mimicking phantom at a 10 mJ/cm² fluence rate. PA imaging of the prostate could potentially be carried out in the future by combining a transrectal ultrasound transducer with the transurethral illumination.
NASA Astrophysics Data System (ADS)
Lowell, A.; Boggs, S.; Chiu, J. L.; Kierans, C.; McBride, S.; Tseng, C. H.; Zoglauer, A.; Amman, M.; Chang, H. K.; Jean, P.; Lin, C. H.; Sleator, C.; Tomsick, J.; von Ballmoos, P.; Yang, C. Y.
2016-08-01
The Compton Spectrometer and Imager (COSI) is a medium-energy gamma-ray (0.2-10 MeV) imager designed to observe high-energy processes in the universe from a high-altitude balloon platform. At its core, COSI comprises twelve high-purity germanium double-sided strip detectors, which measure particle interaction energies and locations with high precision. This manuscript focuses on the positional calibrations of the COSI detectors. The interaction depth in a detector is inferred from the charge collection time difference between the two sides of the detector. We outline our previous approach to this depth calibration and also describe a new approach we have recently developed. Two-dimensional localization of interactions along the faces of the detector (x and y) is straightforward, as the location of the triggering strips is simply used. However, we describe a possible technique to improve the x/y position resolution beyond the detector strip pitch of 2 mm. With the current positional calibrations, COSI achieves an angular resolution of 5.6 +/- 0.1 degrees at 662 keV, close to our expectations from simulations.
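To first order, the interaction depth is a linear function of the collection-time difference across the detector thickness. The toy calibration below uses made-up coefficients purely for illustration; the real calibration described in the abstract is derived per detector:

```python
def depth_from_ctd(ctd_ns, thickness_mm=15.0, max_ctd_ns=40.0):
    """Map a charge-collection time difference to depth below one face.

    Linear model: ctd = -max_ctd near one face, +max_ctd near the other.
    The result is clamped to the physical detector thickness.
    """
    frac = (ctd_ns + max_ctd_ns) / (2.0 * max_ctd_ns)
    return max(0.0, min(thickness_mm, frac * thickness_mm))
```

In practice the time-difference-to-depth relation is not perfectly linear, which is why a measured calibration (as the authors describe) is preferred over this analytic form.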
Depth estimation and camera calibration of a focused plenoptic camera for visual odometry
NASA Astrophysics Data System (ADS)
Zeller, Niclas; Quint, Franz; Stilla, Uwe
2016-08-01
This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation, we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles, which leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient, we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated using a traditional calibration method. For calibrating the depth map, we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach, and both show significant advantages over it. For visual odometry, we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which enhances the search for stereo correspondences. In contrast to monocular visual odometry approaches, the calibration of the individual depth maps makes the scale of the scene observable. Furthermore, the light-field information promises better tracking capabilities than the monocular case. As a result, the depth information gained by the proposed plenoptic-camera-based visual odometry algorithm has superior accuracy and reliability compared to the depth estimated from a single light-field image.
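The "Kalman-like" update of a virtual-depth hypothesis with an estimate from another micro-image is, in essence, an inverse-variance weighted average. A generic sketch of one such fusion step (not the authors' exact update equations):

```python
def fuse_depth(d1, var1, d2, var2):
    """Fuse two depth estimates: precision-weighted mean, combined variance.

    Equivalent to a scalar Kalman update with no process noise.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    d = (w1 * d1 + w2 * d2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return d, var

# two equally confident observations -> midpoint, halved variance
d, var = fuse_depth(2.0, 0.5, 2.4, 0.5)
```

Note that the fused variance is always smaller than either input variance, which is why accumulating estimates from many micro-images steadily sharpens the probabilistic depth map.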
Micro-optical system based 3D imaging for full HD depth image capturing
NASA Astrophysics Data System (ADS)
Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan
2012-03-01
A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the time-of-flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The novel optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the largest depth image resolution among state-of-the-art systems, which have been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.
Fischmeister, Florian Ph S; Bauer, Herbert
2006-10-01
Previous functional imaging studies investigating the perception of depth have relied solely on one type of depth cue based on non-natural stimulus material. To overcome these limitations and provide a more realistic and complete set of depth cues, natural stereoscopic images were used in this study. Using slow cortical potentials and source localization, we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends previous functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility of separating the processing of different depth cues.
Wang, Fang; Sun, Ying; Cao, Meng; Nishi, Ryuji
2016-04-01
This study investigates the influence of structure depth on image blurring in micrometres-thick films by experiment and simulation with a conventional transmission electron microscope (TEM). First, ultra-high-voltage electron microscope (ultra-HVEM) images of nanometre gold particles embedded in thick epoxy-resin films were acquired experimentally and compared with simulated images. Then, variations in the image blurring of gold particles at different depths were evaluated by calculating the particle diameter. The results showed that image blurring increased as depth decreased, and this depth dependence was more apparent for thicker specimens. Fortunately, larger particle depth entails less image blurring, even for a 10-μm-thick epoxy-resin film. The dependence of 3D reconstruction quality on the depth of particle structures in thick specimens was revealed by electron tomography. The evolution of image blurring with structure depth is determined mainly by multiple elastic scattering effects. Thick specimens of heavier materials produced more blurring due to a larger lateral spread of electrons after scattering from the structure. Nevertheless, increasing the electron energy to 2 MeV can reduce blurring and produce acceptable image quality for thick specimens in the TEM.
In vivo photoacoustic imaging of mouse embryos
NASA Astrophysics Data System (ADS)
Laufer, Jan; Norris, Francesca; Cleary, Jon; Zhang, Edward; Treeby, Bradley; Cox, Ben; Johnson, Peter; Scambler, Pete; Lythgoe, Mark; Beard, Paul
2012-06-01
The ability to noninvasively image embryonic vascular anatomy in mouse models is an important requirement for characterizing the development of the normal cardiovascular system and malformations in the heart and vascular supply. Photoacoustic imaging, which can provide high-resolution, noninvasive images of the vasculature based upon optical absorption by endogenous hemoglobin, is well suited to this application. In this study, photoacoustic images of mouse embryos were obtained ex vivo and in vivo. The images show intricate details of the embryonic vascular system to depths of up to 10 mm, which allowed whole embryos to be imaged in situ. To achieve this, an all-optical photoacoustic scanner and a novel time-reversal image reconstruction algorithm, which provide deep-tissue imaging capability while maintaining high spatial resolution and contrast, were employed. This technology may find application as an imaging tool for preclinical embryo studies in developmental biology, as well as more generally in preclinical and clinical medicine for studying pathologies characterized by changes in the vasculature.
Dense real-time stereo matching using memory efficient semi-global-matching variant based on FPGAs
NASA Astrophysics Data System (ADS)
Buder, Maximilian
2012-06-01
This paper presents a stereo image matching system that takes advantage of a global image matching method. The system is designed to provide depth information for mobile robotic applications. Typical tasks of the proposed system are to assist in obstacle avoidance, SLAM, and path planning. Mobile robots pose strong requirements on the size, energy consumption, reliability, and output quality of the image matching subsystem. Currently available systems rely either on active sensors or on local stereo image matching algorithms. The former are only suitable in controlled environments, while the latter suffer from low-quality depth maps. Top-ranking quality results are only achieved by iterative approaches using global image matching and color segmentation techniques, which are computationally demanding and therefore difficult to execute in real time. Attempts have been made to reach real-time performance with global methods by simplifying the routines, but the resulting depth maps are then barely better than those of local methods. The identically named Semi-Global Matching algorithm, proposed earlier, offers both very good image matching results and relatively simple operations. A memory-efficient variant of the Semi-Global Matching algorithm is reviewed and adapted for an implementation based on reconfigurable hardware, suitable for real-time execution in the field of robotics. It is shown that the modified version of the efficient Semi-Global Matching method delivers results equivalent to the original algorithm on the Middlebury dataset. The system has proven capable of processing VGA-sized images with a disparity resolution of 64 pixels at 33 frames per second on low-cost to mid-range hardware. If the focus is shifted to a higher image resolution, 1024×1024 stereo frames can be processed with the same hardware at 10 fps, with the disparity resolution settings unchanged.
A mobile system that covers preprocessing, matching and interfacing operations is also presented.
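The core of Semi-Global Matching is the per-path cost recursion with a small penalty P1 for unit disparity changes and a larger penalty P2 for bigger jumps. A single-scanline (left-to-right) sketch in plain Python; the full algorithm aggregates several such paths and sums them before the winner-takes-all disparity selection:

```python
def aggregate_path(cost, p1=1.0, p2=8.0):
    """SGM cost aggregation along one scanline.

    cost[x][d]: matching cost at pixel x, disparity d.
    L(x,d) = C(x,d) + min( L(x-1,d),
                           L(x-1,d-1)+P1, L(x-1,d+1)+P1,
                           min_d' L(x-1,d') + P2 ) - min_d' L(x-1,d')
    """
    ndisp = len(cost[0])
    L = [list(cost[0])]
    for x in range(1, len(cost)):
        prev = L[-1]
        m = min(prev)  # subtracted to keep the values bounded
        row = []
        for d in range(ndisp):
            best = min(prev[d],
                       (prev[d - 1] + p1) if d > 0 else float("inf"),
                       (prev[d + 1] + p1) if d + 1 < ndisp else float("inf"),
                       m + p2)
            row.append(cost[x][d] + best - m)
        L.append(row)
    return L

# toy scanline: disparity 0 is cheap at pixel 0, disparity 1 at pixel 1
cost = [[0.0, 10.0], [10.0, 0.0]]
L = aggregate_path(cost)
winners = [min(range(2), key=row.__getitem__) for row in L]
```

The P1/P2 distinction is what makes the method memory- and hardware-friendly: the recursion needs only the previous pixel's cost vector, which is exactly the property the FPGA variant above exploits.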
NASA Astrophysics Data System (ADS)
Mo, Weirong; Rohrbach, Daniel; Sunar, Ulas
2012-07-01
We report the tomographic imaging of a photodynamic therapy (PDT) photosensitizer, 2-(1-hexyloxyethyl)-2-devinyl pyropheophorbide-a (HPPH) in vivo with time-domain fluorescence diffuse optical tomography (TD-FDOT). Simultaneous reconstruction of fluorescence yield and lifetime of HPPH was performed before and after PDT. The methodology was validated in phantom experiments, and depth-resolved in vivo imaging was achieved through simultaneous three-dimensional (3-D) mappings of fluorescence yield and lifetime contrasts. The tomographic images of a human head-and-neck xenograft in a mouse confirmed the preferential uptake and retention of HPPH by the tumor 24-h post-injection. HPPH-mediated PDT induced significant changes in fluorescence yield and lifetime. This pilot study demonstrates that TD-FDOT may be a good imaging modality for assessing photosensitizer distributions in deep tissue during PDT monitoring.
Singh, Manmohan; Raghunathan, Raksha; Piazza, Victor; Davis-Loiacono, Anjul M.; Cable, Alex; Vedakkan, Tegy J.; Janecek, Trevor; Frazier, Michael V.; Nair, Achuth; Wu, Chen; Larina, Irina V.; Dickinson, Mary E.; Larin, Kirill V.
2016-01-01
We present an analysis of imaging murine embryos at various embryonic developmental stages (embryonic day 9.5, 11.5, and 13.5) by optical coherence tomography (OCT) and optical projection tomography (OPT). We demonstrate that while OCT was capable of rapid high-resolution live 3D imaging, its limited penetration depth prevented visualization of deeper structures, particularly in later stage embryos. In contrast, OPT was able to image the whole embryos, but could not be used in vivo because the embryos must be fixed and cleared. Moreover, the fixation process significantly altered the embryo morphology, which was quantified by the volume of the eye-globes before and after fixation. All of these factors should be weighed when determining which imaging modality one should use to achieve particular goals of a study. PMID:27375945
Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee
2012-05-01
Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to synchronize cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks, and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrates the potential of the system for 3-D fluorescence visualization of the oral cavity in real time. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.
Nagy-Simon, Timea; Tatar, Andra-Sorina; Craciun, Ana-Maria; Vulpoi, Adriana; Jurj, Maria-Ancuta; Florea, Adrian; Tomuleasa, Ciprian; Berindan-Neagoe, Ioana; Astilean, Simion; Boca, Sanda
2017-06-28
We propose a new class of contrast agents for the detection and multimodal imaging of CD19(+) cancer lymphoblasts. The agents are based on NIR-responsive hollow gold-silver nanospheres conjugated with antiCD19 monoclonal antibodies and marked with Nile Blue (NB) SERS-active molecules (HNS-NB-PEG-antiCD19). Proof-of-concept experiments on the specificity of the complex for the investigated cells were performed by transmission electron microscopy (TEM). The microspectroscopic investigations via dark field (DF) microscopy, surface-enhanced Raman spectroscopy (SERS), and two-photon excited fluorescence lifetime imaging microscopy (TPE-FLIM) corroborate the TEM results and demonstrate successful and preferential internalization of the antibody-nanocomplex. The combination of microspectroscopic techniques enables contrast and sensitivity that compete with more invasive and time-demanding cell imaging modalities, while depth-sectioning images provide real-time localization of the nanoparticles throughout the cytoplasm at the entire depth of the cells. Our findings prove that HNS-NB-PEG-antiCD19 represents a promising new type of contrast agent that can be detected by multiple noninvasive, rapid, and accessible microspectroscopic techniques, with real applicability for specific targeting of CD19(+) cancer cells. Such versatile nanocomplexes combine in a single platform the detection and imaging of cancer lymphoblasts by DF, SERS, and TPE-FLIM microspectroscopy.
Tian, Peifang; Devor, Anna; Sakadžić, Sava; Dale, Anders M.; Boas, David A.
2011-01-01
Absorption- or fluorescence-based two-dimensional (2-D) optical imaging is widely employed in functional brain imaging. The image is a weighted sum of the real signal from the tissue at different depths; this weighting function is defined as "depth sensitivity." Characterizing depth sensitivity and spatial resolution is important for better interpreting functional imaging data. However, due to light scattering and absorption in biological tissues, our knowledge of these is incomplete. We use Monte Carlo simulations to carry out a systematic study of spatial resolution and depth sensitivity for 2-D optical imaging methods with configurations typically encountered in functional brain imaging. We found the following: (i) the spatial resolution is <200 μm for NA ≤ 0.2 or focal plane depth ≤ 300 μm. (ii) More than 97% of the signal comes from the top 500 μm of the tissue. (iii) For activated columns with lateral size larger than the spatial resolution, changing the numerical aperture (NA) and focal plane depth does not affect depth sensitivity. (iv) For either smaller columns or large columns covered by surface vessels, increasing NA and/or focal plane depth may improve depth sensitivity at deeper layers. Our results provide valuable guidance for the optimization of optical imaging systems and data interpretation. PMID:21280912
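The depth-sensitivity result (most of the signal arising in the top few hundred micrometres) can be illustrated with a toy Monte Carlo in which each detected photon's contribution depth follows an exponential fall-off. This is far simpler than the authors' full simulation, and the attenuation length below is chosen purely for illustration:

```python
import random

def fraction_from_top(depth_um=500.0, mu_eff_per_um=1 / 150.0,
                      n=50_000, seed=1):
    """Fraction of detected signal originating above `depth_um`, assuming the
    per-photon contribution depth is exponentially distributed with effective
    attenuation mu_eff (round-trip attenuation folded into one coefficient)."""
    rng = random.Random(seed)
    top = 0
    for _ in range(n):
        z = rng.expovariate(mu_eff_per_um)  # depth of the contributing event
        if z <= depth_um:
            top += 1
    return top / n

frac = fraction_from_top()
```

With a 150 μm attenuation length the analytic value is 1 - exp(-500/150) ≈ 0.96, so the sampled fraction lands in the mid-0.9s, qualitatively matching the shallow-weighted sensitivity the abstract reports.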
X-ray microlaminography with polycapillary optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dabrowski, K. M.; Dul, D. T.; Wrobel, A.
2013-06-03
We demonstrate layer-by-layer x-ray microimaging using polycapillary optics. Depth resolution is achieved without sample or source rotation, in a way similar to classical tomography or laminography. The method takes advantage of the large angular apertures of polycapillary optics and of their specific microstructure, which is treated as a coded aperture. The imaging geometry is compatible with polychromatic x-ray sources and with scanning and confocal x-ray fluorescence setups.
NASA Astrophysics Data System (ADS)
Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook
2016-09-01
Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various psychological factors, such as accommodation, binocular parallax, convergence, and motion parallax, are used to recognize a 3D image. Glasses-type 3D displays use only binocular disparity among the 3D depth cues. However, this approach causes visual fatigue and headaches due to accommodation conflict and distorted depth perception. Thus, holographic and volumetric displays are expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all the factors of depth perception, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays can represent images using voxels, which occupy physical volume; however, a large amount of data is required to represent the depth information on voxels. To encode 3D information simply, a compact type of depth-fused 3D (DFD) display is introduced, which creates a polarization-distributed depth map (PDDM) image containing both a 2D color image and a depth image. In this paper, a new volumetric 3D display system is shown using PDDM images controlled by a polarization controller. To obtain the PDDM image, the polarization state of light passing through a spatial light modulator (SLM) was analyzed using Stokes parameters as a function of gray level. Based on this analysis, a polarization controller was designed to convert PDDM images into sectioned depth images. After synchronizing the PDDM images with active screens, a reconstructed 3D image can be realized. Acknowledgment: This work was supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea.
Lynch, S K; Liu, C; Morgan, N Y; Xiao, X; Gomella, A A; Mazilu, D; Bennett, E E; Assoufid, L; de Carlo, F; Wen, H
2012-01-01
We describe the design and fabrication trials of x-ray absorption gratings of 200 nm period and up to 100:1 depth-to-period ratio for full-field hard x-ray imaging applications. Hard x-ray phase-contrast imaging relies on gratings of ultra-small period and sufficient depth to achieve high sensitivity. Current grating designs use lithographic processes to produce periodic vertical structures, for which grating periods below 2.0 μm are difficult to achieve owing to the extreme aspect ratios of the structures. In our design, multiple bilayers of x-ray-transparent and x-ray-opaque materials are deposited on a staircase substrate, predominantly on the floor surfaces of the steps. When illuminated horizontally by an x-ray beam, the multilayer stack on each step functions as a micro-grating whose period is the thickness of a bilayer. The array of micro-gratings over the length of the staircase works as a single grating over a large area when continuity conditions are met. Since the layers can be nanometers thick and many microns wide, this design allows sub-micron grating periods and sufficient grating depth to modulate hard x-rays. We present the details of the fabrication process, together with diffraction profiles and contact radiography images showing successful intensity modulation of a 25 keV x-ray beam. PMID:23066175
Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C.
2014-01-01
A depth-resolved hyperspectral imaging spectrometer can provide depth-resolved imaging in both the spatial and the spectral domain. Images acquired through a standard imaging Fourier transform spectrometer lack depth resolution. By post-processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured-light illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens including fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated by quantifying spectra from 3D-resolved features in biological specimens. The system demonstrated a depth resolution of 1.8 μm and a spectral resolution of 7 nm. PMID:25360367
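The HiLo fusion step described above can be sketched in a few lines: the local contrast of the structured-illumination image marks in-focus content and gates the low spatial frequencies, while high frequencies come from the uniform image. This is a minimal illustration, not the authors' implementation; a simple box filter stands in for the Gaussian low-pass normally used.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box low-pass filter (stand-in for a Gaussian)."""
    kernel = np.ones(k) / k
    pad = k // 2
    rows = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode='edge'), kernel, 'valid'), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode='edge'), kernel, 'valid'), 0, rows)

def hilo(uniform, structured, eta=1.0):
    """Fuse a uniform- and a structured-illumination image (HiLo sketch)."""
    weight = box_blur(np.abs(structured - uniform))        # in-focus contrast map
    lo = box_blur(uniform) * weight / (weight.max() + 1e-12)  # gated low frequencies
    hi = uniform - box_blur(uniform)                       # high-pass of uniform image
    return eta * lo + hi                                   # depth-resolved composite
```

In the spectrometer, this fusion would be applied per spectral channel of the (x, y, λ) cube.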
Lu, Hangwen; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei
2016-01-01
Differential phase contrast (DPC) is a non-interferometric quantitative phase imaging method achieved by using an asymmetric imaging procedure. We report a pupil-modulation differential phase contrast (PMDPC) imaging method that filters a sample's Fourier domain with half-circle pupils. A phase-gradient image is captured with each half-circle pupil, and a quantitative high-resolution phase image is obtained after a deconvolution process using a minimum of two phase-gradient images. Here, we introduce the PMDPC quantitative phase image reconstruction algorithm and realize it experimentally in a 4f system with an SLM placed at the pupil plane. In our current experimental setup with a numerical aperture of 0.36, we obtain a quantitative phase image with a resolution of 1.73 μm after computationally removing system aberrations and refocusing. We also extend the depth of field digitally by 20 times, to ±50 μm, with a resolution of 1.76 μm. PMID:27828473
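The two-image deconvolution step can be illustrated with a Tikhonov-regularized inverse filter. This is a hedged sketch, not the paper's algorithm: the transfer function `H` below is the idealized i·sign(fx) of a half-circle pupil pair, whereas a real PMDPC system would use the measured or computed pupil transfer function.

```python
import numpy as np

def pmdpc_phase(i_left, i_right, reg=1e-2):
    """Recover a phase map from two half-pupil images (illustrative only)."""
    # Normalized phase-gradient (DPC) image from the asymmetric pair
    dpc = (i_left - i_right) / (i_left + i_right + 1e-12)
    fx = np.fft.fftfreq(dpc.shape[1])[None, :]
    H = 1j * np.sign(fx) * np.ones((dpc.shape[0], 1))   # idealized transfer function
    D = np.fft.fft2(dpc)
    # Tikhonov-regularized deconvolution: conj(H) / (|H|^2 + reg)
    phase_f = np.conj(H) * D / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(phase_f))
```

With more than two half-pupil orientations, the same inverse filter generalizes by summing conj(H_k)·D_k over orientations k.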
Saliency detection algorithm based on LSC-RC
NASA Astrophysics Data System (ADS)
Wu, Wei; Tian, Weiye; Wang, Ding; Luo, Xin; Wu, Yingfei; Zhang, Yu
2018-02-01
The salient region is the most important region of an image: it attracts human visual attention and response. Preferentially allocating computational resources to this region during image analysis and synthesis is of great significance for improving image-region detection. As a preprocessing step for other tasks in the image processing field, saliency detection has wide applications in image retrieval and image segmentation. Among existing approaches, the super-pixel segmentation saliency detection algorithm based on linear spectral clustering (LSC) has achieved good results. The saliency detection algorithm proposed in this paper improves on the region contrast (RC) method by replacing its region-formation step with LSC super-pixel blocks. After combining it with recent deep learning methods, the accuracy of salient-region detection is greatly improved. Finally, the superiority and feasibility of the super-pixel segmentation detection algorithm based on linear spectral clustering are demonstrated by comparative tests.
Integrated interpretation of overlapping AEM datasets achieved through standardisation
NASA Astrophysics Data System (ADS)
Sørensen, Camilla C.; Munday, Tim; Heinson, Graham
2015-12-01
Numerous airborne electromagnetic surveys have been acquired in Australia using a variety of systems. It is not uncommon to find two or more surveys covering the same ground, but acquired using different systems and at different times. Being able to combine overlapping datasets and get a spatially coherent resistivity-depth image of the ground can assist geological interpretation, particularly when more subtle geophysical responses are important. Combining resistivity-depth models obtained from the inversion of airborne electromagnetic (AEM) data can be challenging, given differences in system configuration, geometry, flying height and preservation or monitoring of system acquisition parameters such as waveform. In this study, we define and apply an approach to overlapping AEM surveys, acquired by fixed wing and helicopter time domain electromagnetic (EM) systems flown in the vicinity of the Goulds Dam uranium deposit in the Frome Embayment, South Australia, with the aim of mapping the basement geometry and the extent of the Billeroo palaeovalley. Ground EM soundings were used to standardise the AEM data, although results indicated that only data from the REPTEM system needed to be corrected to bring the two surveys into agreement and to achieve coherent spatial resistivity-depth intervals.
Bayesian depth estimation from monocular natural images.
Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C
2017-05-01
Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
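The pipeline above (canonical depth patterns → likelihood model → Bayesian predictor) can be sketched in miniature. All parameters below are hypothetical stand-ins for the dictionary learned offline, and a single Gaussian per cluster replaces the full multivariate Gaussian mixture; the posterior-weighted average of the clusters' canonical depths forms the estimate.

```python
import numpy as np

def predict_depth(feat, means_img, means_depth, covs, weights):
    """Posterior-mean depth from image features (illustrative sketch).

    means_img/means_depth/covs/weights: hypothetical learned dictionary that
    pairs a canonical image-feature mean with a canonical depth per cluster.
    """
    resp = np.empty(len(weights))
    for i, (m, c, w) in enumerate(zip(means_img, covs, weights)):
        d = feat - m
        norm = np.sqrt(np.linalg.det(2.0 * np.pi * c))
        # Gaussian likelihood of the features under cluster i, times its prior
        resp[i] = w * np.exp(-0.5 * d @ np.linalg.solve(c, d)) / norm
    resp /= resp.sum()                     # posterior responsibilities
    return float(resp @ np.asarray(means_depth))  # posterior-mean depth
```

A dense map would apply this predictor patch-by-patch over the image.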
Liu, Jingfei; Foiret, Josquin; Stephens, Douglas N.; Le Baron, Olivier; Ferrara, Katherine W.
2016-01-01
A 1.5 MHz prolate spheroidal therapeutic array with 128 circular elements was designed to accommodate standard imaging arrays for ultrasonic image-guided hyperthermia. The implementation of this dual-array system integrates real-time therapeutic and imaging functions with a single ultrasound system (Vantage 256, Verasonics). To facilitate applications involving small animal imaging and therapy the array was designed to have a beam depth of field smaller than 3.5 mm and to electronically steer over distances greater than 1 cm in both the axial and lateral directions. In order to achieve the required f number of 0.69, 1-3 piezocomposite modules were mated within the transducer housing. The performance of the prototype array was experimentally evaluated with excellent agreement with numerical simulation. A focal volume (2.70 mm (axial) × 0.65 mm (transverse) × 0.35 mm (transverse)) defined by the −6 dB focal intensity was obtained to address the dimensions needed for small animal therapy. An electronic beam steering range defined by the −3 dB focal peak intensity (17 mm (axial) × 14 mm (transverse) × 12 mm (transverse)) and −8 dB lateral grating lobes (24 mm (axial) × 18 mm (transverse) × 16 mm (transverse)) was achieved. The combined testing of imaging and therapeutic functions confirmed well-controlled local heating generation and imaging in a tissue mimicking phantom. This dual-array implementation offers a practical means to achieve hyperthermia and ablation in small animal models and can be incorporated within protocols for ultrasound-mediated drug delivery. PMID:27353347
Nikitin, Sergey M.; Chigarev, Nikolay; Tournat, Vincent; Bulou, Alain; Gasteau, Damien; Castagnede, Bernard; Zerr, Andreas; Gusev, Vitalyi E.
2015-01-01
The time-domain Brillouin scattering technique, also known as picosecond ultrasonic interferometry, allows monitoring of the propagation of coherent acoustic pulses, having lengths ranging from nanometres to fractions of a micrometre, in samples with dimension of less than a micrometre to tens of micrometres. In this study, we applied this technique to depth-profiling of a polycrystalline aggregate of ice compressed in a diamond anvil cell to megabar pressures. The method allowed examination of the characteristic dimensions of ice texturing in the direction normal to the diamond anvil surfaces with sub-micrometre spatial resolution via time-resolved measurements of the propagation velocity of the acoustic pulses travelling in the compressed sample. The achieved imaging of ice in depth and in one of the lateral directions indicates the feasibility of three-dimensional imaging and quantitative characterisation of the acoustical, optical and acousto-optical properties of transparent polycrystalline aggregates in a diamond anvil cell with tens of nanometres in-depth resolution and a lateral spatial resolution controlled by pump laser pulses focusing, which could approach hundreds of nanometres. PMID:25790808
Johnson, Jared M; Im, Soohyun; Windl, Wolfgang; Hwang, Jinwoo
2017-01-01
We propose a new scanning transmission electron microscopy (STEM) technique that can realize three-dimensional (3D) characterization of vacancies and of lighter and heavier dopants with high precision. Using multislice STEM imaging and diffraction simulations of β-Ga 2 O 3 and SrTiO 3 , we show that selecting a small range of low scattering angles can make the contrast of the defect-containing atomic columns substantially more depth-dependent. The origin of the depth-dependence is the de-channeling of electrons due to the existence of a point defect in the atomic column, which creates extra "ripples" at low scattering angles. The highest contrast of the point defect can be achieved when the de-channeling signal is captured using the 20-40 mrad detection angle range. The effects of sample thickness, crystal orientation, local strain, probe convergence angle, and experimental uncertainty on the depth-dependent contrast of the point defect are also discussed. The proposed technique therefore opens new possibilities for highly precise 3D structural characterization of individual point defects in functional materials. Copyright © 2016 Elsevier B.V. All rights reserved.
A depth-of-interaction PET detector using mutual gain-equalized silicon photomultiplier
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. Xi, A.G. Weisenberger, H. Dong, Brian Kross, S. Lee, J. McKisson, Carl Zorn
We developed a prototype high-resolution, high-efficiency depth-encoding detector for PET applications based on dual-ended readout of a LYSO array with two silicon photomultipliers (SiPMs). Flood images, energy resolution, and depth-of-interaction (DOI) resolution were measured for a LYSO array - 0.7 mm in crystal pitch and 10 mm in thickness - with four unpolished parallel sides. Flood images were obtained in which each individual crystal element in the array is resolved. The energy resolution of the entire array was measured to be 33%, while that of individual crystal elements, utilizing the signals from both sides, ranged from 23.3% to 27%. By applying a mutual-gain equalization method, a DOI resolution of 2 mm for the crystal array was obtained in the experiments, while simulations indicate that ~1 mm DOI resolution could be achieved. The experimental DOI resolution can be further improved with revised detector supporting electronics offering better energy resolution. This study provides a detailed detector calibration and DOI response characterization of dual-ended readout SiPM-based PET detectors, which will be important in the design and calibration of a future PET scanner.
Yi, Faliu; Lee, Jieun; Moon, Inkyu
2014-05-01
The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
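The variance-based focus/off-focus classification described above can be sketched as follows. This is a hedged illustration under the paper's no-occlusion assumption, written with NumPy; the actual work runs the same per-pixel computation in parallel on a GPU (e.g. by swapping in CuPy for NumPy).

```python
import numpy as np  # swap in cupy for a GPU-parallel version

def remove_off_focus(depth_img, samples, threshold):
    """Zero out off-focus points of a reconstructed depth image.

    samples[k] holds, for every pixel of the depth image, the k-th
    elemental-image value that back-projects to it. Surface (focus) points
    agree across elemental images -> low variance; free-space (off-focus)
    points do not -> high variance.
    """
    focus_mask = samples.var(axis=0) <= threshold   # per-pixel variance test
    return np.where(focus_mask, depth_img, 0.0), focus_mask
```

The threshold would be tuned per scene; the paper's GPU version also builds the shifted-value lookup table for all depth planes up front.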
SPAD array based TOF SoC design for unmanned vehicle
NASA Astrophysics Data System (ADS)
Pan, An; Xu, Yuan; Xie, Gang; Huang, Zhiyu; Zheng, Yanghao; Shi, Weiwei
2018-03-01
To meet the requirements of unmanned-vehicle mobile lidar systems, this paper presents an SoC design based on a pulsed TOF depth image sensor. The SoC has a detection range of 300 m and a depth resolution of 1.5 cm. The pixels are implemented as SPADs. The SoC adopts a structure in which multiple pixels share a TDC, which significantly reduces chip area and improves the fill factor of the light-sensing surface. The SoC integrates a TCSPC module to achieve the functionality of receiving individual photons, measuring photon flight time, and processing depth information on a single chip. The SoC is designed in the SMIC 0.13 μm CIS CMOS technology.
Application of preconditioned alternating direction method of multipliers in depth from focal stack
NASA Astrophysics Data System (ADS)
Javidnia, Hossein; Corcoran, Peter
2018-03-01
The post-capture refocusing effect in smartphone cameras is achievable using focal stacks. However, the accuracy of this effect depends entirely on how the depth layers in the stack are combined. The accuracy of the extended depth of field effect in this application can be improved significantly by computing an accurate depth map, which has been an open problem for decades. To tackle this issue, a framework is proposed based on a preconditioned alternating direction method of multipliers (ADMM) for depth from the focal stack and synthetic defocus application. In addition to providing high structural accuracy, the optimization function of the proposed framework converges faster and better than state-of-the-art methods. The qualitative evaluation was performed on 21 sets of focal stacks, and the optimization function was compared against five other methods. In addition, 10 light field image sets were transformed into focal stacks for quantitative evaluation. Preliminary results indicate that the proposed framework outperforms current state-of-the-art methods in terms of structural accuracy and optimization.
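The alternating-direction template underlying the framework can be illustrated on a toy problem. The sketch below is plain (unpreconditioned) ADMM for 1-D total-variation denoising, min_x 0.5||x − y||² + λ||Dx||₁, which shows the three-step x/z/u iteration the paper builds on; it is not the paper's preconditioned, focal-stack-specific formulation.

```python
import numpy as np

def admm_tv_1d(y, lam=0.5, rho=1.0, iters=200):
    """Plain ADMM for 1-D total-variation denoising (illustrative)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                # finite-difference operator
    A = np.eye(n) + rho * D.T @ D                 # system matrix for the x-step
    x, z, u = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))           # quadratic step
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)   # soft-threshold
        u += D @ x - z                                            # dual update
    return x
```

Preconditioning, as in the paper, replaces the exact linear solve with a cheaper proximal step to speed convergence on large depth maps.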
A fusion network for semantic segmentation using RGB-D data
NASA Astrophysics Data System (ADS)
Yuan, Jiahui; Zhang, Kun; Xia, Yifan; Qi, Lin; Dong, Junyu
2018-04-01
Semantic scene parsing is important in many intelligent fields, including perceptual robotics. For the past few years, pixel-wise prediction tasks such as semantic segmentation with RGB images have been extensively studied and have reached remarkable parsing levels, thanks to convolutional neural networks (CNNs) and large scene datasets. With the development of stereo cameras and RGB-D sensors, additional depth information is expected to help improve accuracy. In this paper, we propose a semantic segmentation framework incorporating RGB and complementary depth information. Motivated by the success of fully convolutional networks (FCNs) in the semantic segmentation field, we design a fully convolutional network consisting of two branches that extract features from RGB and depth data simultaneously and fuse them as the network goes deeper. Instead of aggregating multiple models, our goal is to utilize RGB and depth data more effectively in a single model. We evaluate our approach on the NYU-Depth V2 dataset, which consists of 1449 cluttered indoor scenes, and achieve results competitive with state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Tsunoi, Yasuyuki; Sato, Shunichi; Ashida, Hiroshi; Terakawa, Mitsuhiro
2012-02-01
For efficient photodynamic treatment of wound infection, a photosensitizer must be distributed throughout the infected tissue region. To ensure this, depth profiling of the photosensitizer is necessary in vivo. In this study, we applied photoacoustic (PA) imaging to visualize the depth profile of an intravenously injected photosensitizer in rat burn models. In burned tissue, pharmacokinetics is complicated: vascular occlusion takes place in the injured tissue, while vascular permeability increases due to thermal injury. We first used Evans Blue (EB) as a test drug to examine the feasibility of photosensitizer dosimetry based on PA imaging; on the basis of the results, an actual photosensitizer, talaporfin sodium, was then used. An EB solution was intravenously injected into a rat deep dermal burn model. PA imaging was performed on the wound with 532 nm and 610 nm nanosecond light pulses for visualizing vasculature (blood) and EB, respectively. Two hours after injection, the distribution of the EB-originated signal spatially coincided well with that of the blood-originated signal measured after injury, indicating that EB molecules leaked out of the blood vessels due to increased permeability. Afterwards, the distribution of the EB signal broadened in the depth direction due to diffusion. At 12 hours after injection, clear EB signals were observed even in the zone of stasis, demonstrating that the leaked EB molecules were delivered to the injured tissue layer. The level and time course of the talaporfin sodium-originated signals differed from those of the EB-originated signals, indicating animal-dependent and/or drug-dependent permeabilization and diffusion in the tissue. Thus, photosensitizer dosimetry is needed before every treatment to achieve a desirable outcome of photodynamic treatment, and PA imaging is a valid and useful tool for this purpose.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1989-01-01
A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
Miniature all-optical probe for photoacoustic and ultrasound dual-modality imaging
NASA Astrophysics Data System (ADS)
Li, Guangyao; Guo, Zhendong; Chen, Sung-Liang
2018-02-01
Photoacoustic (PA) imaging forms an image based on optical absorption contrasts with ultrasound (US) resolution. In contrast, US imaging is based on acoustic backscattering to provide structural information. In this study, we develop a miniature all-optical probe for high-resolution PA-US dual-modality imaging over a large imaging depth range. The probe employs three individual optical fibers (F1-F3) to achieve optical generation and detection of acoustic waves for both PA and US modalities. To offer wide-angle laser illumination, fiber F1 with a large numerical aperture (NA) is used for PA excitation. On the other hand, wide-angle US waves are generated by laser illumination on an optically absorbing composite film which is coated on the end face of fiber F2. Both the excited PA and backscattered US waves are detected by a Fabry-Pérot cavity on the tip of fiber F3 for wide-angle acoustic detection. The wide angular features of the three optical fibers make large-NA synthetic aperture focusing technique possible and thus high-resolution PA and US imaging. The probe diameter is less than 2 mm. Over a depth range of 4 mm, lateral resolutions of PA and US imaging are 104-154 μm and 64-112 μm, respectively, and axial resolutions of PA and US imaging are 72-117 μm and 31-67 μm, respectively. To show the imaging capability of the probe, phantom imaging with both PA and US contrasts is demonstrated. The results show that the probe has potential for endoscopic and intravascular imaging applications that require PA and US contrast with high resolution.
Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.
Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso
2018-07-01
There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not experienced an increase of such magnitude. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity, and then creates a representative of each color-depth image cluster that will be used as a prior depth map. The selection of the appropriate prior depth map for a given color query image is accomplished by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster, and in turn the associated prior depth map. Finally, this prior estimation is enhanced through a segmentation-guided filtering that obtains the final depth map estimation. This approach has been tested using two publicly available databases, and compared with several state-of-the-art algorithms in order to prove its efficiency.
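The cluster-selection step can be sketched in a few lines. This is an illustrative stand-in: plain Euclidean distance on hypothetical feature vectors replaces the paper's learned adaptive combination of descriptors, and the segmentation-guided filtering stage is omitted.

```python
import numpy as np

def prior_depth(query_feat, cluster_feats, cluster_depth_maps, k=1):
    """Pick the prior depth map(s) of the most similar cluster(s).

    cluster_feats:      (C, F) representative feature vector per cluster
    cluster_depth_maps: (C, H, W) representative depth map per cluster
    """
    dist = np.linalg.norm(cluster_feats - query_feat, axis=1)  # similarity in color domain
    nearest = np.argsort(dist)[:k]                             # K-Nearest Neighbor step
    return cluster_depth_maps[nearest].mean(axis=0)            # averaged prior depth map
```

In the full pipeline this prior would then be refined with segmentation-guided filtering of the query image.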
Shen, Xin; Javidi, Bahram
2018-03-01
We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with enhanced depth range of the 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images at various positions to the micro lens array. Based on resolution-priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system can significantly enhance the depth range of a 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and mitigates the vergence-accommodation conflict of a traditional augmented reality display.
Plasmonics and metamaterials based super-resolution imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Liu, Zhaowei
2017-05-01
In recent years, surface imaging of various biological dynamics and biomechanical phenomena has seen a surge of interest. Imaging of processes such as exocytosis and kinesin motion are most effective when depth is limited to a very thin region of interest at the edge of the cell or specimen. However, many objects and processes of interest are of size scales below the diffraction limit for safe, visible wavelength illumination. Super-resolution imaging methods such as structured illumination microscopy and others have offered various compromises between resolution, imaging speed, and bio-compatibility. In this talk, I will present our most recent progress in plasmonic structured illumination microscopy (PSIM) and localized plasmonic structured illumination microscopy (LPSIM), and their applications in bio-imaging. We have achieved wide-field surface imaging with resolution down to 75 nm while maintaining reasonable speed and compatibility with biological specimens. These plasmonic enhanced super resolution techniques offer unique solutions to obtain 50nm spatial resolution and 50 frames per second wide imaging speed at the same time.
Isobe, Keisuke; Kawano, Hiroyuki; Kumagai, Akiko; Miyawaki, Atsushi; Midorikawa, Katsumi
2013-01-01
A spatial overlap modulation (SPOM) technique is a nonlinear optical microscopy technique which enhances the three-dimensional spatial resolution and rejects the out-of-focus background limiting the imaging depth inside a highly scattering sample. Here, we report on the implementation of SPOM in which beam pointing modulation is achieved by an electro-optic deflector. The modulation and demodulation frequencies are enhanced to 200 kHz and 400 kHz, respectively, resulting in a 200-fold enhancement compared with the previously reported system. The resolution enhancement and suppression of the out-of-focus background are demonstrated by sum-frequency-generation imaging of pounded granulated sugar and deep imaging of fluorescent beads in a tissue-like phantom, respectively. PMID:24156055
NASA Astrophysics Data System (ADS)
Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.
2018-02-01
Most photoacoustic ocular imaging work reported to date uses small animals, such as mice and rats, whose eyes are less than one-third the size of a human eye, which poses a challenge for clinical translation. Here we achieved chorioretinal imaging of larger animals, i.e. rabbits, using a dual-modality photoacoustic microscopy (PAM) and optical coherence tomography (OCT) system. Preliminary experimental results in living rabbits demonstrate that the PAM can noninvasively visualize depth-resolved retinal and choroidal vessels using a safe laser exposure dose, and that the OCT can finely distinguish different retinal layers, the choroid, and the sclera. This work may be a major step toward clinical translation of photoacoustic microscopy.
Smartphone-Based Android app for Determining UVA Aerosol Optical Depth and Direct Solar Irradiances.
Igoe, Damien P; Parisi, Alfio; Carter, Brad
2014-01-01
This research describes the development and evaluation of the accuracy and precision of an Android app specifically designed, written and installed on a smartphone for detecting and quantifying incident solar UVA radiation and, subsequently, aerosol optical depth at 340 and 380 nm. Earlier studies demonstrated that a smartphone image sensor can detect UVA radiation and that its responsivity can be calibrated against measured direct solar irradiance. The current research performs the data collection, calibration, processing, calculations and display entirely on a smartphone. A very strong coefficient of determination of 0.98 was achieved when the digital response was recalibrated and compared to the Microtops sun photometer direct UVA irradiance observations. The mean percentage discrepancy for derived direct solar irradiance was only 4% and 6% for observations at 380 and 340 nm, respectively, lessening with decreasing solar zenith angle. An 8% mean percentage discrepancy was observed when comparing aerosol optical depth, also decreasing as the solar zenith angle decreases. The results indicate that a specifically designed Android app using a smartphone image sensor, calendar and clock, with additional external narrow bandpass and neutral density filters, can be used as a field sensor to evaluate both direct solar UVA irradiance and low aerosol optical depths for areas with low aerosol loads. © 2013 The American Society of Photobiology.
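The irradiance-to-optical-depth step rests on the Beer-Lambert law: total optical depth follows from the ratio of extraterrestrial to measured direct irradiance divided by the air mass, and the aerosol component is what remains after subtracting the molecular (Rayleigh) and other known terms. The sketch below uses the simple plane-parallel air mass m = 1/cos(SZA); operational sun photometers apply refined air-mass formulas at large zenith angles, and the calibration constant i0 is instrument-specific.

```python
import math

def aerosol_optical_depth(direct_irradiance, i0, sza_deg, tau_rayleigh, tau_other=0.0):
    """Beer-Lambert inversion for AOD at one wavelength (illustrative).

    direct_irradiance: calibrated direct solar irradiance measured at ground
    i0:                extraterrestrial (top-of-atmosphere) irradiance constant
    sza_deg:           solar zenith angle in degrees
    """
    m = 1.0 / math.cos(math.radians(sza_deg))          # plane-parallel air mass
    tau_total = math.log(i0 / direct_irradiance) / m   # total optical depth
    return tau_total - tau_rayleigh - tau_other        # aerosol component
```

The app's role is supplying the calibrated irradiance, time (for SZA) and filters that isolate the 340 and 380 nm bands.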
Synthetic-Focusing Strategies for Real-Time Annular-Array Imaging
Ketterling, Jeffrey A.; Filoux, Erwan
2012-01-01
Annular arrays provide a means to achieve enhanced image quality with a limited number of elements. Synthetic-focusing (SF) strategies that rely on beamforming data from individual transmit-to-receive (TR) element pairs provide a means to improve image quality without specialized TR delay electronics. Here, SF strategies are examined in the context of high-frequency ultrasound (>15 MHz) annular arrays composed of five elements, operating at 18 and 38 MHz. Acoustic field simulations are compared with experimental data acquired from wire and anechoic-sphere phantoms, and the values of lateral beamwidth, SNR, contrast-to-noise ratio (CNR), and depth of field (DOF) are compared as a function of depth. In each case, data were acquired for all TR combinations (25 in total) and processed with SF using all 25 TR pairs and SF with the outer receive channels removed one by one. The results show that removing the outer receive channels led to an overall degradation of lateral resolution, an overall decrease in SNR, and did not reduce the DOF, although the DOF profile decreased in amplitude. The CNR was >1 and remained fairly constant as a function of depth, with a slight decrease in CNR for the case with just the central element receiving. The relative changes between the calculated and measured quantities were nearly identical for the 18- and 38-MHz arrays. B-mode images of the anechoic phantom and an in vivo mouse embryo using full SF with 25 TR pairs or reduced TR-pair approaches showed minimal qualitative difference. PMID:22899130
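The per-pair synthetic-focusing idea can be sketched as delay-and-sum over all transmit/receive (TR) combinations. This is a simplified illustration, not the authors' beamformer: nearest-sample shifts stand in for interpolated delays, and the per-element delays to the focal point are assumed given.

```python
import numpy as np

def synthetic_focus(rf, delays_tx, delays_rx, fs):
    """Delay-and-sum over all TR element pairs of an annular array.

    rf:        (n_tx, n_rx, n_samples) A-lines, one per TR pair
    delays_*:  per-element geometric delays to the focal point, in seconds
    fs:        sampling rate in Hz
    """
    n_tx, n_rx, _ = rf.shape
    out = np.zeros(rf.shape[2])
    for t in range(n_tx):
        for r in range(n_rx):
            shift = int(round((delays_tx[t] + delays_rx[r]) * fs))
            out += np.roll(rf[t, r], -shift)   # align this TR pair to the focus
    return out / (n_tx * n_rx)
```

Dropping outer receive channels, as studied in the paper, amounts to restricting the `r` loop to the retained elements.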
The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images.
Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J
2015-01-01
We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer's own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.
Kim, Yunhee; Choi, Heejin; Kim, Joohwan; Cho, Seong-Woo; Kim, Youngmin; Park, Gilbae; Lee, Byoungho
2007-06-20
A depth-enhanced three-dimensional integral imaging system with electrically variable image planes is proposed. For implementing the variable image planes, polymer-dispersed liquid-crystal (PDLC) films and a projector are adopted as a new display system in the integral imaging. Since the transparencies of PDLC films are electrically controllable, we can make each film diffuse the projected light successively with a different depth from the lens array. As a result, the proposed method enables control of the location of image planes electrically and enhances the depth. The principle of the proposed method is described, and experimental results are also presented.
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative depth analysis of a subsurface anomaly with enhanced depth resolution is a challenging task in thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of an object by stimulating it with a suitable band of frequencies and then analyzing the thermal response with a suitable post-processing approach to resolve subsurface details. Conventional Fourier-transform-based post-processing methods, however, unscramble the frequencies with limited frequency resolution and therefore yield only finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which further improves the depth resolution and allows the finest subsurface features to be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide the closest possible estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.
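The spectral zooming that the chirp z-transform provides amounts to sampling the spectrum on a fine grid confined to a narrow band. A direct-evaluation sketch of that idea (the CZT computes the same samples more efficiently; function and variable names are illustrative):

```python
import numpy as np

def zoom_spectrum(x, fs, f_lo, f_hi, n_bins):
    """Evaluate the DTFT of x on a finely spaced grid inside [f_lo, f_hi],
    which is the spectral-zooming effect the chirp z-transform provides.
    Direct evaluation is O(N * n_bins); the CZT computes the same samples
    in O(N log N) via Bluestein's algorithm."""
    n = np.arange(len(x))
    freqs = np.linspace(f_lo, f_hi, n_bins)
    # One DTFT sample per zoomed frequency bin
    spec = np.array([np.sum(x * np.exp(-2j * np.pi * f * n / fs)) for f in freqs])
    return freqs, spec
```

With an FFT of the same record, the bin spacing would be fs/N; zooming the band lets two closely spaced thermal-response frequencies be separated without lengthening the record.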
NASA Technical Reports Server (NTRS)
Frey, B. J.; Barry, R. K.; Danchi, W. C.; Hyde, T. T.; Lee, K. Y.; Martino, A. J.; Zuray, M. S.
2006-01-01
The Fourier-Kelvin Stellar Interferometer (FKSI) is a mission concept for an imaging and nulling interferometer in the near to mid-infrared spectral region (3-8 microns), and will be a scientific and technological pathfinder for upcoming missions including TPF-I/DARWIN, SPECS, and SPIRIT. At NASA's Goddard Space Flight Center, we have constructed a symmetric Mach-Zehnder nulling testbed to demonstrate techniques and algorithms that can be used to establish and maintain the 10⁴ null depth that will be required for such a mission. Among the challenges inherent in such a system is the ability to acquire and track the null fringe to the desired depth for timescales on the order of hours in a laboratory environment. In addition, it is desirable to achieve this stability without using conventional dithering techniques. We describe recent testbed metrology and control system developments necessary to achieve these goals and present our preliminary results.
Bessel beam fluorescence lifetime tomography of live embryos (Conference Presentation)
NASA Astrophysics Data System (ADS)
Xu, Dongli; Peng, Leilei
2016-03-01
Optical tomography allows isotropic 3D imaging of embryos. Scanning-laser optical tomography (SLOT) has superior light-collecting efficiency to wide-field optical tomography, making it ideal for fluorescence imaging of live embryos. We previously reported an imaging system, FmFLIM-SLOT, that combines SLOT with a novel Fourier-multiplexed fluorescence lifetime imaging (FmFLIM) technique and performs multiplexed FLIM-FRET readout of multiple FRET sensors in live embryos. Here we report a recent effort to improve the spatial resolution of the FmFLIM-SLOT system in order to image complex biochemical processes in live embryos at the cellular level. Optical tomography has to compromise between resolution and depth of view. In SLOT, the commonly used focused Gaussian beam diverges quickly from the focal plane, making it impossible to achieve high resolution imaging in a large-volume specimen. We thus introduce Bessel beam laser-scanning tomography, which illuminates the sample with a spatial-light-modulator-generated Bessel beam that has an extended focal depth. The Bessel beam is scanned across the whole specimen, and fluorescence projection images are acquired at equal angular intervals as the sample rotates. Reconstruction artifacts due to the annular rings of the Bessel beam are removed by a modified 3D filtered back-projection algorithm. Furthermore, in combination with the FmFLIM method, the Bessel FmFLIM-SLOT system is capable of performing 3D lifetime imaging of live embryos at cellular resolution. The system is applied to in vivo imaging of transgenic zebrafish embryos. The results show that Bessel FmFLIM-SLOT is a promising imaging method for developmental biology research.
An x-ray fluorescence imaging system for gold nanoparticle detection.
Ricketts, K; Guazzoni, C; Castoldi, A; Gibson, A P; Royle, G J
2013-11-07
Gold nanoparticles (GNPs) may be used as a contrast agent to identify tumour location and can be modified to target and image specific tumour biological parameters. No imaging system in the literature currently offers sufficient sensitivity to GNP concentration and distribution at sufficient tissue depth for use in in vivo and in vitro studies. We have demonstrated that high detection sensitivity to GNPs can be achieved using x-ray fluorescence; furthermore, this technique enables imaging at greater depth than optical modalities. Two x-ray fluorescence systems were developed and used to image a range of GNP imaging phantoms. The first system consisted of a 10 mm(2) silicon drift detector coupled to a slightly focusing polycapillary optic, which allowed 2D energy-resolved imaging in step-and-scan mode. The system has sensitivity to GNP concentrations as low as 1 ppm. GNP concentrations differing by a factor of 5 could be resolved, offering the potential to distinguish tumour from non-tumour. The second system was designed to avoid slow step-and-scan image acquisition; the feasibility of exciting the whole specimen with a wide beam and detecting the fluorescent x-rays with a pixellated controlled-drift energy-resolving detector without scanning was investigated. A parallel polycapillary optic coupled to the detector was successfully used to ascertain the position where fluorescence was emitted. The tissue penetration of the technique was demonstrated to be sufficient for near-surface small-animal studies and for imaging 3D in vitro cellular constructs. Previous work demonstrates strong potential for both imaging systems to form quantitative images of GNP concentration.
Superresolved digital in-line holographic microscopy for high-resolution lensless biological imaging
NASA Astrophysics Data System (ADS)
Micó, Vicente; Zalevsky, Zeev
2010-07-01
Digital in-line holographic microscopy (DIHM) is a modern approach capable of achieving micron-range lateral and depth resolutions in three-dimensional imaging. DIHM in combination with numerical image reconstruction uses an extremely simplified setup while retaining the advantages provided by holography, with enhanced capabilities derived from algorithmic digital processing. We introduce superresolved DIHM based on time and angular multiplexing of the sample's spatial frequency information, yielding a synthetic aperture (SA). The SA expands the cutoff frequency of the imaging system, allowing submicron resolutions in both the transversal and axial directions. The proposed approach can be applied when imaging essentially transparent (low-concentration dilutions) and static (slow dynamics) samples. Validation of the method for both a synthetic object (U.S. Air Force resolution test) to quantify the resolution improvement and a biological specimen (sperm cell biosample) is reported, showing the generation of high synthetic numerical aperture values while working without lenses.
Fast, Deep-Record-Length, Fiber-Coupled Photodiode Imaging Array for Plasma Diagnostics
NASA Astrophysics Data System (ADS)
Brockington, Samuel; Case, Andrew; Witherspoon, F. Douglas
2014-10-01
HyperV Technologies has been developing an imaging diagnostic comprised of an array of fast, low-cost, long-record-length, fiber-optically-coupled photodiode channels to investigate plasma dynamics and other fast, bright events. By coupling an imaging fiber bundle to a bank of amplified photodiode channels, imagers and streak imagers of 100 to 1000 pixels can be constructed. By interfacing analog photodiode systems directly to commercial analog-to-digital converters and modern memory chips, a prototype 100 pixel array with an extremely deep record length (128 k points at 20 Msamples/s) and 10 bit pixel resolution has already been achieved. HyperV now seeks to extend these techniques to construct a prototype 1000 Pixel framing camera with up to 100 Msamples/sec rate and 10 to 12 bit depth. Preliminary experimental results as well as Phase 2 plans will be discussed. Work supported by USDOE Phase 2 SBIR Grant DE-SC0009492.
Ovanesyan, Zaven; Mimun, L. Christopher; Kumar, Gangadharan Ajith; Yust, Brian G.; Dannangoda, Chamath; Martirosyan, Karen S.; Sardar, Dhiraj K.
2015-01-01
Molecular imaging is a very promising technique for surgical guidance, but it requires advances in the properties of imaging agents and in the methods used to retrieve data from measured multispectral images. In this article, an upconversion material is introduced for subsurface near-infrared imaging and for the depth recovery of the material embedded below biological tissue. The results confirm a significant correlation between the analytical depth estimate of the material under the tissue and the measured ratio of light emitted from the material at two different wavelengths. Experiments with biological tissue samples demonstrate depth-resolved imaging using the rare-earth-doped multifunctional phosphors. In vitro tests reveal no significant toxicity, whereas magnetic measurements of the phosphors show that the particles are suitable as magnetic resonance imaging agents. Confocal imaging of fibroblast cells with these phosphors reveals their potential for in vivo imaging. The depth-resolved imaging technique with such phosphors has broad implications for real-time intraoperative surgical guidance. PMID:26322519
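The two-wavelength ratio-to-depth relation described above can be sketched under a simple exponential-attenuation assumption (the attenuation model and names are illustrative, not the authors' calibration):

```python
import math

def depth_from_ratio(ratio, ratio_surface, mu1, mu2):
    """Depth estimate from a two-wavelength emission ratio (sketch).
    If the emission in band i is attenuated by tissue as exp(-mu_i * d),
    the measured ratio decays as ratio_surface * exp(-(mu1 - mu2) * d), so
    d = ln(ratio_surface / ratio) / (mu1 - mu2)."""
    return math.log(ratio_surface / ratio) / (mu1 - mu2)
```

The key point is that the ratio cancels unknown phosphor concentration and excitation intensity, leaving depth as the only free variable once the two effective attenuation coefficients are calibrated.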
Contact detection for nanomanipulation in a scanning electron microscope.
Ru, Changhai; To, Steve
2012-07-01
Nanomanipulation systems require accurate knowledge of the end-effector position in all three spatial coordinates, XYZ, for reliable manipulation of nanostructures. Although the images acquired by a scanning electron microscope (SEM) provide high resolution XY information, the lack of depth information in the Z-direction makes 3D nanomanipulation time-consuming. Existing approaches for contact detection of end-effectors inside SEM typically utilize fragile touch sensors that are difficult to integrate into a nanomanipulation system. This paper presents a method for determining the contact between an end-effector and a target surface during nanomanipulation inside SEM, purely based on the processing of SEM images. A depth-from-focus method is used in the fast approach of the end-effector to the substrate, followed by fine contact detection. Experimental results demonstrate that the contact detection approach is capable of achieving an accuracy of 21.5 nm at 50,000× magnification while inducing little end-effector damage. Copyright © 2012 Elsevier B.V. All rights reserved.
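The depth-from-focus step used for the coarse approach can be sketched with a standard focus metric evaluated over a focal stack; the Laplacian-variance metric here is a common choice assumed for illustration, not necessarily the authors' metric:

```python
import numpy as np

def focus_measure(img):
    """Variance of a 4-neighbour Laplacian: large when the image is sharp,
    near zero when it is defocused (high frequencies suppressed)."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def depth_from_focus(stack, z_positions):
    """Return the z position at which the focus measure peaks, i.e. where
    the end-effector appears sharpest in the image stack."""
    scores = [focus_measure(img) for img in stack]
    return z_positions[int(np.argmax(scores))]
```

In an SEM the "focal stack" is obtained by stepping the working distance; the peak gives the coarse end-effector height before the fine contact-detection stage.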
Laser-based volumetric flow visualization by digital color imaging of a spectrally coded volume.
McGregor, T J; Spence, D J; Coutts, D W
2008-01-01
We present the framework for volumetric laser-based flow visualization instrumentation using a spectrally coded volume to achieve three-component three-dimensional particle velocimetry. By delivering light from a frequency doubled Nd:YAG laser with an optical fiber, we exploit stimulated Raman scattering within the fiber to generate a continuum spanning the visible spectrum from 500 to 850 nm. We shape and disperse the continuum light to illuminate a measurement volume of 20 x 10 x 4 mm(3), in which light sheets of differing spectral properties overlap to form an unambiguous color variation along the depth direction. Using a digital color camera we obtain images of particle fields in this volume. We extract the full spatial distribution of particles with depth inferred from particle color. This paper provides a proof of principle of this instrument, examining the spatial distribution of a static field and a spray field of water droplets ejected by the nozzle of an airbrush.
Long-range depth profiling of camouflaged targets using single-photon detection
NASA Astrophysics Data System (ADS)
Tobin, Rachael; Halimi, Abderrahim; McCarthy, Aongus; Ren, Ximing; McEwan, Kenneth J.; McLaughlin, Stephen; Buller, Gerald S.
2018-03-01
We investigate the reconstruction of depth and intensity profiles from data acquired using a custom-designed time-of-flight scanning transceiver based on the time-correlated single-photon counting technique. The system had an operational wavelength of 1550 nm and used a Peltier-cooled InGaAs/InP single-photon avalanche diode detector. Measurements were made of human figures, in plain view and obscured by camouflage netting, from a stand-off distance of 230 m in daylight using only submilliwatt average optical powers. These measurements were analyzed using a pixelwise cross correlation approach and compared to analysis using a bespoke algorithm designed for the restoration of multilayered three-dimensional light detection and ranging images. This algorithm is based on the optimization of a convex cost function composed of a data fidelity term and regularization terms, and the results obtained show that it achieves significant improvements in image quality for multidepth scenarios and for reduced acquisition times.
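The pixelwise cross-correlation analysis amounts to correlating each pixel's photon-count histogram with the instrumental response and converting the peak lag to range. A minimal sketch under that assumption (names and the single-surface model are illustrative):

```python
import numpy as np

def estimate_depth(histogram, irf, bin_width_s, c=3.0e8):
    """Pixelwise cross-correlation depth estimate (sketch): correlate the
    timing histogram with the instrumental response function (IRF) and
    convert the peak lag to one-way range via d = c * t / 2."""
    xc = np.correlate(histogram, irf, mode="full")
    # In 'full' mode, zero lag sits at index len(irf) - 1
    lag = np.argmax(xc) - (len(irf) - 1)
    return 0.5 * c * lag * bin_width_s
```

This single-peak estimator is what the multilayer restoration algorithm in the paper improves upon: it cannot represent two returns in one pixel (e.g. netting plus the figure behind it).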
The fulfillment of others' needs elevates children's body posture.
Hepach, Robert; Vaish, Amrisha; Tomasello, Michael
2017-01-01
Much is known about young children's helping behavior, but little is known about the underlying motivations and emotions involved. In 2 studies we found that 2-year-old children showed positive emotions of similar magnitude-as measured by changes in their postural elevation using depth sensor imaging technology-after they achieved a goal for themselves and after they helped another person achieve her goal. Conversely, children's posture decreased in elevation when their actions did not result in a positive outcome. These results suggest that for young children, working for themselves and helping others are similarly rewarding. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Focusing-schlieren visualization in a dual-mode scramjet
NASA Astrophysics Data System (ADS)
Kouchi, Toshinori; Goyne, Christopher P.; Rockwell, Robert D.; McDaniel, James C.
2015-12-01
Schlieren imaging is particularly suited to measuring density gradients in compressible flowfields and can be used to capture shock waves and expansion fans, as well as the turbulent structures of mixing and wake flows. Conventional schlieren imaging, however, has difficulty clearly capturing such structures in long-duration supersonic combustion test facilities. This is because the severe flow temperatures locally change the refractive index of the window glass that is being used to provide optical access. On the other hand, focusing-schlieren imaging presents the potential of reduced sensitivity to thermal distortion of the windows and can clearly capture the flow structures even during a combustion test. This reduced sensitivity is due to the technique's ability to achieve a narrow depth of focus. As part of this study, a focusing-schlieren system was developed with a depth of focus near ±5 mm and was applied to a direct-connect, continuous-flow type, supersonic combustion test facility with a stagnation temperature near 1200 K. The present system was used to successfully visualize the flowfield inside a dual-mode scramjet. The imaging system captured combustion-induced volumetric expansion of the fuel jet and an anchored bifurcated shock wave at the trailing edge of the ramp fuel injector. This is the first time successful focusing-schlieren measurements have been reported for a dual-mode scramjet.
Compression and accelerated rendering of volume data using DWT
NASA Astrophysics Data System (ADS)
Kamath, Preyas; Akleman, Ergun; Chan, Andrew K.
1998-09-01
2D images cannot convey information on object depth and location relative to surfaces. The medical community is increasingly using 3D visualization techniques to view data from CT scans, MRI, etc. 3D images provide more information on depth and location in the spatial domain, helping surgeons make better diagnoses. 3D images can be constructed from 2D images using 3D scalar algorithms. With recent advances in communication techniques, it is possible for doctors to diagnose and plan treatment of a patient at a remote location by transmitting the relevant patient data via telephone lines. If this information is to be reconstructed in 3D, then the 2D images must be transmitted; however, 2D datasets occupy a large amount of storage, and visualization algorithms are slow. We describe in this paper a scheme that reduces the data transfer time by transmitting only the information that the doctor wants. Compression is achieved by reducing the amount of data transferred, which is possible using the 3D wavelet transform applied to 3D datasets. Since the wavelet transform is localized in the frequency and spatial domains, we transmit detail only in the region where the doctor needs it. Since only the ROI (region of interest) is reconstructed in detail, we need to render only the ROI in detail, and thus we can reduce the rendering time.
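The ROI-selective transmission can be sketched with a one-level 2D Haar transform in which detail subbands are kept only inside the region of interest (a simplified stand-in for the 3D multi-level scheme described above; names are illustrative):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands,
    each at half the resolution of the input (shape must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0   # row-wise average
    d = (img[0::2] - img[1::2]) / 2.0   # row-wise difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def roi_compress(img, roi_mask):
    """Transmit the coarse LL approximation everywhere, but keep the
    detail subbands only inside the region of interest."""
    ll, lh, hl, hh = haar2d(img)
    m = roi_mask[0::2, 0::2]  # ROI mask downsampled to subband resolution
    return ll, lh * m, hl * m, hh * m
```

Because the Haar basis is spatially localized, zeroed detail coefficients outside the ROI affect only that region of the reconstruction; the doctor still sees a coarse version of the rest of the volume.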
Thériault, Gabrielle; Cottet, Martin; Castonguay, Annie; McCarthy, Nathalie; De Koninck, Yves
2014-01-01
Two-photon microscopy has revolutionized functional cellular imaging in tissue, but although the highly confined depth of field (DOF) of standard set-ups yields great optical sectioning, it also limits imaging speed in volume samples and ease of use. For this reason, we recently presented a simple and retrofittable modification to the two-photon laser-scanning microscope which extends the DOF through the use of an axicon (conical lens). Here we demonstrate three significant benefits of this technique using biological samples commonly employed in the field of neuroscience. First, we use a sample of neurons grown in culture and move it along the z-axis, showing that a more stable focus is achieved without compromise on transverse resolution. Second, we monitor 3D population dynamics in an acute slice of live mouse cortex, demonstrating that faster volumetric scans can be conducted. Third, we acquire a stereoscopic image of neurons and their dendrites in a fixed sample of mouse cortex, using only two scans instead of the complete stack and calculations required by standard systems. Taken together, these advantages, combined with the ease of integration into pre-existing systems, make the extended depth-of-field imaging based on Bessel beams a strong asset for the field of microscopy and life sciences in general. PMID:24904284
Spectroscopy as a tool for geochemical modeling
NASA Astrophysics Data System (ADS)
Kopacková, Veronika; Chevrel, Stephane; Bourguignon, Anna
2011-11-01
This study focused on testing the feasibility of up-scaling ground-spectra-derived parameters to HyMap spectral and spatial resolution and whether they could be further used for quantitative determination of the following geochemical parameters: As, pH and lignite carbon (C_lignite) content. The study was carried out on the Sokolov lignite mine, as it represents a site with extreme material heterogeneity and high heavy-metal gradients. A new segmentation method based on the unique spectral properties of acid materials was developed and applied to the multi-line HyMap image data corrected for BRDF and atmospheric effects. Quantitative parameters were calculated for multiple absorption features identified within the VIS/VNIR/SWIR regions: simple band ratios, absorption band depth, and quantitative spectral feature parameters calculated dynamically for each spectral measurement (centre of the absorption band (λ), depth of the absorption band (D), width of the absorption band (Width), and asymmetry of the absorption band (S)). The degree of spectral similarity between the ground and image spectra was assessed. The linear models for pH, As and C_lignite content of the whole and segmented images were cross-validated on selected homogeneous areas defined in the HS images using ground truth. For the segmented images, reliable results were achieved as follows: As: R2 = 0.84, C_lignite: R2 = 0.88, and pH: R2 = 0.57.
Extended depth of focus adaptive optics spectral domain optical coherence tomography
Sasaki, Kazuhiro; Kurokawa, Kazuhiro; Makita, Shuichi; Yasuno, Yoshiaki
2012-01-01
We present an adaptive optics spectral domain optical coherence tomography (AO-SDOCT) with a long focal range by active phase modulation of the pupil. A long focal range is achieved by introducing AO-controlled third-order spherical aberration (SA). The property of SA and its effects on focal range are investigated in detail using the Huygens-Fresnel principle, beam profile measurement and OCT imaging of a phantom. The results indicate that the focal range is extended by applying SA, and the direction of extension can be controlled by the sign of applied SA. Finally, we demonstrated in vivo human retinal imaging by altering the applied SA. PMID:23082278
Wide-field two-photon microscopy with temporal focusing and HiLo background rejection
NASA Astrophysics Data System (ADS)
Yew, Elijah Y. S.; Choi, Heejin; Kim, Daekeun; So, Peter T. C.
2011-03-01
Scanningless depth-resolved microscopy is achieved through spatial-temporal focusing and has been demonstrated previously. The advantage of this method is that a large area may be imaged without scanning resulting in higher throughput of the imaging system. Because it is a widefield technique, the optical sectioning effect is considerably poorer than with conventional spatial focusing two-photon microscopy. Here we propose wide-field two-photon microscopy based on spatio-temporal focusing and employing background rejection based on the HiLo microscope principle. We demonstrate the effects of applying HiLo microscopy to widefield temporally focused two-photon microscopy.
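The HiLo fusion principle combines high spatial frequencies from the uniform wide-field image with low spatial frequencies from an optically sectioned estimate. A minimal numpy sketch (the Gaussian cutoff and fusion weight eta are illustrative assumptions, not the authors' parameters):

```python
import numpy as np

def _gaussian_lowpass(img, sigma_px):
    """Low-pass filter via the FFT with a Gaussian transfer function."""
    fy = np.fft.fftfreq(img.shape[0])
    fx = np.fft.fftfreq(img.shape[1])
    f2 = fy[:, None] ** 2 + fx[None, :] ** 2
    H = np.exp(-2 * (np.pi * sigma_px) ** 2 * f2)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def hilo_fuse(uniform_img, sectioned_lo, sigma_px=4.0, eta=1.0):
    """HiLo principle (sketch): final image = high-pass of the uniform
    wide-field image + eta * low-pass of the optically sectioned estimate.
    High frequencies are intrinsically sectioned in two-photon wide-field
    imaging; the sectioned estimate supplies the missing low frequencies."""
    hi = uniform_img - _gaussian_lowpass(uniform_img, sigma_px)
    lo = _gaussian_lowpass(sectioned_lo, sigma_px)
    return hi + eta * lo
```

In practice eta is chosen so the two bands match in amplitude at the crossover frequency, giving a seamless fused spectrum.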
Optical registration of spaceborne low light remote sensing camera
NASA Astrophysics Data System (ADS)
Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long
2018-02-01
To meet the high-precision requirement for optical registration of a spaceborne low-light remote sensing camera, dual-channel optical registration of the CCD and EMCCD is achieved with a high-magnification optical registration system. A scheme for system-integration optical registration, and the achievable registration accuracy, is proposed in this paper for a spaceborne low-light remote sensing camera with short focal depth and wide field of view. The paper also analyzes the parallel misalignment of the CCD and the accuracy of optical registration. Actual registration results show that imaging is clear and that the MTF and registration accuracy meet requirements, providing an important guarantee for obtaining high-quality image data in orbit.
Increasing the information acquisition volume in iris recognition systems.
Barwick, D Shane
2008-09-10
A significant hurdle for the widespread adoption of iris recognition in security applications is that the typically small imaging volume for eye placement results in systems that are not user friendly. Separable cubic phase plates at the lens pupil have been shown to ameliorate this disadvantage by increasing the depth of field. However, these phase masks have limitations on how efficiently they can capture the information-bearing spatial frequencies in iris images. The performance gains in information acquisition that can be achieved by more general, nonseparable phase masks are demonstrated. A detailed design method is presented, and simulations using representative designs allow for performance comparisons.
Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter
2018-01-01
Background: Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and the quality of reaching movements. Methods: We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those cues. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which was realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second or third screen group, respectively. Additionally, they could rely on stereopsis and on motion parallax due to head movements. Results and conclusion: All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance; only the screen group with rendered handhelds outperformed the other screen groups. Thus, if reaching performance in virtual environments is the main scope of a study, we suggest applying a head-mounted display.
Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by subjects’ motor skills. PMID:29293512
Time-of-Flight Microwave Camera
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-01-01
Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598
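The FMCW receiver achieves depth resolution because a target at range R returns a beat frequency proportional to R, while the achievable range resolution depends only on the swept bandwidth. A sketch of both relations (the sweep time below is an illustrative value; the 4 GHz bandwidth matches the 8-12 GHz band quoted in the abstract):

```python
def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_time_s, c=3.0e8):
    """FMCW ranging (sketch): a target at range R produces a beat frequency
    f_b = 2 * R * B / (c * T), so R = c * f_b * T / (2 * B)."""
    return c * beat_freq_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

def fmcw_range_resolution(sweep_bandwidth_hz, c=3.0e8):
    """Two targets are resolvable when separated by dR = c / (2 * B)."""
    return c / (2.0 * sweep_bandwidth_hz)
```

For the 4 GHz sweep this gives a range resolution of 3.75 cm, the same order as the 6 cm free-space optical path corresponding to the 200 ps time resolution quoted above.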
High speed multiphoton imaging
NASA Astrophysics Data System (ADS)
Li, Yongxiao; Brustle, Anne; Gautam, Vini; Cockburn, Ian; Gillespie, Cathy; Gaus, Katharina; Lee, Woei Ming
2016-12-01
Intravital multiphoton microscopy has emerged as a powerful technique to visualize cellular processes in vivo. Real-time processes revealed through live imaging provide many opportunities to capture cellular activities in living animals. The typical parameters that determine the performance of multiphoton microscopy are speed, field of view, 3D imaging and imaging depth; many of these are important for acquiring data in vivo. Here, we provide a full exposition of a flexible polygon-mirror-based high-speed laser-scanning multiphoton imaging system, built around a PCI-6110 card (National Instruments) and a high-speed analog frame grabber card (Matrox Solios eA/XA), which allows rapid adjustment of frame rates, i.e., 5 Hz to 50 Hz at 512 × 512 pixels. Furthermore, a motion correction algorithm is used to mitigate motion artifacts, and customized control software, Pscan 1.0, was developed for the system. This is followed by calibration of the imaging performance of the system and a series of quantitative in vitro and in vivo imaging experiments in neuronal tissues and mice.
Time-of-Flight Microwave Camera
NASA Astrophysics Data System (ADS)
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-10-01
Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
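The quoted figures can be sanity-checked with two standard radar relations (a back-of-envelope sketch, not code from the paper; the helper names are hypothetical):

```python
# Back-of-envelope check: FMCW range resolution follows Delta_R = c / (2 * B),
# and a receiver time resolution maps to a free-space path length of c * Delta_t.
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_resolution(bandwidth_hz: float) -> float:
    """Two-way range resolution of an FMCW radar with the given sweep bandwidth."""
    return C / (2.0 * bandwidth_hz)

def path_length(delta_t_s: float) -> float:
    """Free-space optical path corresponding to a time resolution."""
    return C * delta_t_s

band = 12e9 - 8e9                   # X-band sweep, 8-12 GHz
print(fmcw_range_resolution(band))  # ~0.037 m depth resolution
print(path_length(200e-12))         # ~0.06 m, matching the quoted 6 cm
```

The 4 GHz sweep thus yields centimeter-scale depth bins, consistent with the 200 ps (6 cm) time resolution stated in the abstract.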
First cosmic-ray images of bone and soft tissue
NASA Astrophysics Data System (ADS)
Mrdja, Dusan; Bikit, Istvan; Bikit, Kristina; Slivka, Jaroslav; Hansman, Jan; Oláh, László; Varga, Dezső
2016-11-01
More than 120 years after Roentgen's first X-ray image, the first cosmic-ray muon images of bone and soft tissue have been created. The pictures shown in the present paper represent the first radiographies of structures of organic origin ever recorded by cosmic rays. This result is achieved by a uniquely designed, simple and versatile cosmic-ray muon-imaging system, which consists of four plastic scintillation detectors and a muon tracker. This system does not use scattering or absorption of muons to derive image information, but instead takes advantage of the production rate of secondaries in the target materials, detected in coincidence with muons. The 2D image slices of cow femur bone are obtained at several depths along the bone axis, together with the corresponding 3D image. Real organic soft tissue, polymethyl methacrylate and water, never before seen by any other muon imaging technique, are also registered in the images. Thus, similar imaging systems, placed around structures of organic or inorganic origin, can be used for tomographic imaging using only the omnipresent cosmic radiation.
A surgical confocal microlaparoscope for real-time optical biopsies
NASA Astrophysics Data System (ADS)
Tanbakuchi, Anthony Amir
The first real-time fluorescence confocal microlaparoscope has been developed that provides instant in vivo cellular images, comparable to those provided by histology, through a nondestructive procedure. The device includes an integrated contrast agent delivery mechanism and a computerized depth scan system. The instrument uses a fiber bundle to relay the image plane of a slit-scan confocal microlaparoscope into tissue. The confocal laparoscope was used to image the ovaries of twenty-one patients in vivo using fluorescein sodium and acridine orange as the fluorescent contrast agents. The results indicate that the device is safe and functions as designed. A Monte Carlo model was developed to characterize the system performance in scattering media representative of human tissue. The results indicate that a slit aperture has limited ability to image below the surface of tissue. In contrast, the results show that multi-pinhole apertures such as a Nipkow disk or a linear pinhole array can achieve nearly the same depth performance as a single pinhole aperture. The model was used to determine the optimal aperture spacing for the multi-pinhole apertures. The confocal microlaparoscope represents a new type of in vivo imaging device. With its ability to image cellular details in real time, it has the potential to aid in the early diagnosis of cancer. Initially, the device may be used to locate unusual regions for guided biopsies. In the long term, the device may be able to supplant traditional biopsies and allow the surgeon to identify early stage cancer in vivo.
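A heavily simplified Monte Carlo of the kind used for such aperture studies can be sketched as follows (a didactic toy with assumed parameters and isotropic scattering; not the dissertation's model):

```python
import numpy as np

# Toy Monte Carlo: launch photons into a scattering half-space (z > 0) with an
# exponential free path, and count those that return to the surface close enough
# to the focus to pass a pinhole of a given radius. A tighter pinhole rejects
# more multiply scattered light, improving optical sectioning.
def surviving_fraction(pinhole_radius, n_photons=20000, mfp=0.1, seed=2):
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_photons, 3))
    direction = np.tile([0.0, 0.0, 1.0], (n_photons, 1))  # launched into tissue
    returned = np.zeros(n_photons, dtype=bool)
    radial = np.full(n_photons, np.inf)
    for _ in range(50):                                   # up to 50 scattering events
        step = rng.exponential(mfp, n_photons)
        pos = pos + direction * step[:, None]
        crossed = (pos[:, 2] <= 0.0) & ~returned          # first surface crossing
        radial[crossed] = np.hypot(pos[crossed, 0], pos[crossed, 1])
        returned |= crossed
        # draw a new isotropic scattering direction
        phi = rng.uniform(0, 2 * np.pi, n_photons)
        cos_t = rng.uniform(-1, 1, n_photons)
        sin_t = np.sqrt(1 - cos_t**2)
        direction = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
    return np.mean(returned & (radial < pinhole_radius))

# A smaller pinhole passes a smaller fraction of the diffuse returning light.
print(surviving_fraction(0.01) < surviving_fraction(0.1))  # True
```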
Intracranial dual-mode IVUS and hyperthermia using circular arrays: preliminary experiments.
Patel, Vivek; Light, Edward; Herickhoff, Carl; Grant, Gerald; Britz, Gavin; Wilson, Christy; Palmeri, Mark; Smith, Stephen
2013-01-01
In this study, we investigated the feasibility of using 3.5-Fr (3 Fr = 1 mm) circular phased-array intravascular ultrasound (IVUS) catheters for minimally invasive, image-guided hyperthermia treatment of tumors in the brain. Feasibility was demonstrated in two ways: (1) by inserting a 3.5-Fr IVUS catheter through skull burr holes, for 20 MHz brain imaging in the pig model, and (2) by testing a modified circular array for therapy potential with 18.5-MHz and 9-MHz continuous wave (CW) excitation. The imaging transducer's performance was superior to our previous 9-MHz mechanical IVUS prototype. The therapy catheter transducer was driven by CW electrical power at 18.5 MHz, achieving temperature changes reaching +8°C at a depth of 2 mm in a human glioblastoma grown on the flank of a mouse with minimal transducer resistive heating of +2°C. Further hyperthermia trials showed that 9-MHz CW excitation produced temperature changes of +4.5°C at a depth of 12 mm, a sufficient temperature rise for our long-term goal of targeted, controlled drug release via thermosensitive liposomes for therapeutic treatment of 1-cm-diameter glioblastomas. PMID:23287504
NASA Astrophysics Data System (ADS)
Bradu, Adrian; Marques, Manuel J.; Bouchal, Petr; Podoleanu, Adrian Gh.
2013-03-01
The purpose of this study was to show how to favorably combine two effects to improve the sensitivity with depth in Fourier domain optical coherence tomography (OCT): Talbot bands (TB) and the Gabor-based fusion (GF) technique. TB operation is achieved by directing the two beams, from the object arm and from the reference arm in the OCT interferometer, along parallel separate paths towards the spectrometer. By changing the lateral gap between the two beams on their path towards the spectrometer, the position of maximum sensitivity versus the optical path difference in the interferometer is adjusted. For five values of the focus position, the gap between the two beams is readjusted to reach maximum sensitivity. Then, similar to the procedure employed in the GF technique, a composite image is formed by stitching together the parts of the five images that exhibited maximum brightness. The combined procedure, TB/GF, is examined for four different values of the beam diameters of the two beams. We also demonstrate volumetric FD-OCT images with mirror-term attenuation and the sensitivity profile shifted towards higher OPD values by applying a Talbot bands configuration.
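The fusion step described above can be sketched in a few lines (illustrative only; `gabor_fuse`, the array shapes, and the band layout are assumptions, not the authors' code):

```python
import numpy as np

# Minimal sketch of Gabor-style fusion: given N B-scans of the same cross-section
# acquired with N focus settings, keep, for each depth band, the acquisition with
# the highest mean brightness, then stitch the bands into one composite image.
def gabor_fuse(scans: np.ndarray, n_bands: int) -> np.ndarray:
    """scans: (N, depth, width) array; returns a (depth, width) composite."""
    n, depth, width = scans.shape
    composite = np.empty((depth, width), dtype=scans.dtype)
    edges = np.linspace(0, depth, n_bands + 1, dtype=int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = scans[:, lo:hi, :]                  # (N, band_depth, width)
        best = band.mean(axis=(1, 2)).argmax()     # brightest acquisition here
        composite[lo:hi, :] = band[best]
    return composite

# Toy usage: 5 focus settings, each brightest in its own depth band.
rng = np.random.default_rng(0)
scans = rng.random((5, 100, 64)) * 0.1
for i in range(5):
    scans[i, i * 20:(i + 1) * 20, :] += 1.0        # in-focus band is brighter
fused = gabor_fuse(scans, n_bands=5)
print(fused.mean() > scans.mean(axis=(1, 2)).max())  # True: composite keeps the bright bands
```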
NASA Astrophysics Data System (ADS)
Liu, Jingfei; Foiret, Josquin; Stephens, Douglas N.; Le Baron, Olivier; Ferrara, Katherine W.
2016-07-01
A 1.5 MHz prolate spheroidal therapeutic array with 128 circular elements was designed to accommodate standard imaging arrays for ultrasonic image-guided hyperthermia. The implementation of this dual-array system integrates real-time therapeutic and imaging functions with a single ultrasound system (Vantage 256, Verasonics). To facilitate applications involving small animal imaging and therapy the array was designed to have a beam depth of field smaller than 3.5 mm and to electronically steer over distances greater than 1 cm in both the axial and lateral directions. In order to achieve the required f number of 0.69, 1-3 piezocomposite modules were mated within the transducer housing. The performance of the prototype array was experimentally evaluated with excellent agreement with numerical simulation. A focal volume (2.70 mm (axial) × 0.65 mm (transverse) × 0.35 mm (transverse)) defined by the -6 dB focal intensity was obtained to address the dimensions needed for small animal therapy. An electronic beam steering range defined by the -3 dB focal peak intensity (17 mm (axial) × 14 mm (transverse) × 12 mm (transverse)) and -8 dB lateral grating lobes (24 mm (axial) × 18 mm (transverse) × 16 mm (transverse)) was achieved. The combined testing of imaging and therapeutic functions confirmed well-controlled local heating generation and imaging in a tissue mimicking phantom. This dual-array implementation offers a practical means to achieve hyperthermia and ablation in small animal models and can be incorporated within protocols for ultrasound-mediated drug delivery.
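The f-number and focal-spot figures above are loosely related by standard focused-transducer rules of thumb; the sketch below uses assumed definitions and illustrative geometry, not the array's actual dimensions:

```python
# Assumed relations: f-number = focal depth / aperture diameter, and the -6 dB
# lateral beam width scales roughly as wavelength * f-number (rule of thumb).
def f_number(focal_mm: float, aperture_mm: float) -> float:
    return focal_mm / aperture_mm

def lateral_width_mm(freq_hz: float, f_num: float, c_mps: float = 1540.0) -> float:
    """Approximate -6 dB lateral beam width in soft tissue."""
    wavelength_mm = c_mps / freq_hz * 1e3
    return wavelength_mm * f_num

# Illustrative geometry only: e.g. a 20 mm aperture focused at 13.8 mm gives f/0.69.
print(f_number(13.8, 20.0))                     # 0.69
print(round(lateral_width_mm(1.5e6, 0.69), 2))  # ~0.71 mm at 1.5 MHz, f/0.69
```

A sub-millimeter lateral spot at f/0.69 is broadly consistent with the sub-millimeter transverse focal dimensions reported in the abstract.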
DOE Office of Scientific and Technical Information (OSTI.GOV)
Na, Y; Qian, X; Wuu, C
Purpose: To verify the dosimetric characteristics of a small animal image-guided irradiator using high-resolution optical CT imaging of 3D dosimeters. Methods: PRESAGE 3D dosimeters were used to determine dosimetric characteristics of a small animal image-guided irradiator and were compared with EBT2 films. Cylindrical PRESAGE dosimeters with 7 cm height and 6 cm diameter were placed along the central axis of the beam. The films were positioned between 6×6 cm² cubed plastic water phantoms perpendicular to the beam direction at multiple depths. PRESAGE dosimeters and EBT2 films were then irradiated with the irradiator beams at 220 kVp and 13 mA. Each of the irradiated PRESAGE dosimeters, named PA1, PA2, PB1, and PB2, was independently scanned using a high-resolution single laser beam optical CT scanner. The transverse images were reconstructed with a high-resolution 0.1 mm pixel. A commercial Epson Expression 10000XL flatbed scanner was used for readout of irradiated EBT2 films at a 0.4 mm pixel resolution. PDD curves and beam profiles were measured for the irradiated PRESAGE dosimeters and EBT2 films. Results: The PDD agreements between the irradiated PRESAGE dosimeters PA1, PA2, PB1, PB2 and the EBT2 films were 1.7, 2.3, 1.9, and 1.9% for the multiple depths at 1, 5, 10, 15, 20, 30, 40 and 50 mm, respectively. The FWHM measurements for each PRESAGE dosimeter and film agreed within 0.5, 1.1, 0.4, and 1.7%, respectively, at 30 mm depth. Both PDD and FWHM measurements for the PRESAGE dosimeters and the films agreed overall within 2%. The 20%–80% penumbral widths of each PRESAGE dosimeter and the film at a given depth were found to be 0.97, 0.91, 0.79, 0.88, and 0.37 mm, respectively. Conclusion: Dosimetric characteristics of a small animal image-guided irradiator have been demonstrated with measurements from PRESAGE dosimeters and EBT2 film. With the high resolution and accuracy obtained from this 3D dosimetry system, precise targeting in small animal irradiation can be achieved.
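FWHM extraction from a sampled beam profile, as used for the comparisons above, can be sketched generically (synthetic flat-topped profile; not the study's analysis code):

```python
import numpy as np

# Illustrative analysis: compute the full width at half maximum of a sampled
# beam profile by linearly interpolating the half-maximum crossings on each edge.
def fwhm(x: np.ndarray, y: np.ndarray) -> float:
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    # interpolate the exact half-maximum crossing positions
    left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return right - left

# Synthetic flat-topped profile with sigmoidal edges at x = -5 and x = +5 mm.
x = np.linspace(-10, 10, 2001)
y = 1.0 / (1.0 + np.exp(-(x + 5) * 4)) * (1.0 / (1.0 + np.exp((x - 5) * 4)))
print(round(fwhm(x, y), 2))  # 10.0 mm for this synthetic profile
```

The 20%–80% penumbral width is computed the same way, interpolating the 0.2 and 0.8 crossings of a single edge instead of the two half-maximum crossings.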
Liu, Dan; Liu, Xuejun; Wu, Yiguang
2018-04-24
This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local detail, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.
Non-Cartesian Parallel Imaging Reconstruction
Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole
2014-01-01
Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499
Liang, Yicheng; Peng, Hao
2015-02-07
Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model the system matrix for resolution recovery, which was then incorporated into PET image reconstruction on a graphics processing unit (GPU) platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width at half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performance, while reducing detector/system design cost and complexity.
Evaluating methods for controlling depth perception in stereoscopic cinematography
NASA Astrophysics Data System (ADS)
Sun, Geng; Holliman, Nick
2009-02-01
Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control functions to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes differ across individual stereoscopic displays and that viewers can cope with a much larger perceived depth range when viewing stereoscopic cinematography than when viewing static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography. We anticipate the results will be of particular interest to 3D filmmaking and real-time computer games.
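Approach (2), the mapping algorithm, reduces to remapping scene depth into a display-safe perceived-depth budget; a minimal sketch under assumed notation (the function name and comfort limits are illustrative):

```python
# Linearly map scene depth to a display comfort budget, so the nearest scene
# point lands at the near comfort limit and the farthest at the far limit.
def map_depth(z, z_near, z_far, d_near, d_far):
    """Map scene depth z in [z_near, z_far] to display depth in [d_near, d_far]."""
    t = (z - z_near) / (z_far - z_near)
    return d_near + t * (d_far - d_near)

# A dynamic variant simply recomputes z_near/z_far per shot from the current
# scene bounds before applying the same mapping.
print(map_depth(5.0, 1.0, 9.0, -0.05, 0.05))  # 0.0: mid-scene maps to the screen plane
```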
NASA Astrophysics Data System (ADS)
Liu, Yehe; Gu, Shi; Watanabe, Michiko; Rollins, Andrew M.; Jenkins, Michael W.
2017-02-01
Abnormal coronary development causes various health problems. However, coronary development remains one of the highly neglected areas in developmental cardiology due to limited technology. Currently, there is not a robust method available to map the microvasculature throughout the entire embryonic heart in 3D. This is a challenging task because it requires both micron level resolution over a large field of view and sufficient imaging depth. Speckle-variance optical coherence tomography (OCT) has reasonable resolution for coronary vessel mapping, but limited penetration depth and sensitivity to bulk motion made it impossible to apply this method to late-stage beating hearts. Some success has been achieved with coronary dye perfusion, but smaller vessels are not efficiently stained and penetration depth is still an issue. To address this problem, we present an OCT imaging procedure using optical clearing and a contrast agent (titanium dioxide) that enables 3D mapping of the coronary microvasculature in developing embryonic hearts. In brief, the hearts of stage 36 quail embryos were perfused with a low viscosity mixture of polyvinyl alcohol (PVA) and titanium dioxide through the aorta using micropipette injection. After perfusion, the viscosity of the solution was increased by crosslinking the PVA polymer chains with borate ions. The tissue was then optically cleared. The titanium dioxide particles remaining in the coronaries provided a strong OCT signal, while the rest of the cardiac structures became relatively transparent. Using this technique, we are able to investigate coronary morphologies in different disease models.
Computational adaptive optics for broadband optical interferometric tomography of biological tissue
NASA Astrophysics Data System (ADS)
Boppart, Stephen A.
2015-03-01
High-resolution real-time tomography of biological tissues is important for many areas of biological investigations and medical applications. Cellular level optical tomography, however, has been challenging because of the compromise between transverse imaging resolution and depth-of-field, the system and sample aberrations that may be present, and the low imaging sensitivity deep in scattering tissues. The use of computed optical imaging techniques has the potential to address several of these long-standing limitations and challenges. Two related techniques are interferometric synthetic aperture microscopy (ISAM) and computational adaptive optics (CAO). Through three-dimensional Fourier-domain resampling, in combination with high-speed OCT, ISAM can be used to achieve high-resolution in vivo tomography with enhanced depth sensitivity over a depth-of-field extended by more than an order-of-magnitude, in real time. Subsequently, aberration correction with CAO can be performed in a tomogram, rather than to the optical beam of a broadband optical interferometry system. Based on principles of Fourier optics, aberration correction with CAO is performed on a virtual pupil using Zernike polynomials, offering the potential to augment or even replace the more complicated and expensive adaptive optics hardware with algorithms implemented on a standard desktop computer. Interferometric tomographic reconstructions are characterized with tissue phantoms containing sub-resolution scattering particles, and in both ex vivo and in vivo biological tissue. This review will collectively establish the foundation for high-speed volumetric cellular-level optical interferometric tomography in living tissues.
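The CAO idea of applying a Zernike phase on a virtual pupil can be illustrated with a toy round trip (a conceptual sketch assuming a single defocus term and an arbitrary coefficient; not the published implementation):

```python
import numpy as np

# Conceptual CAO sketch: aberration correction applied to a 2D complex en face
# OCT plane by multiplying its spectrum (the "virtual pupil") with the conjugate
# of a Zernike defocus phase. Coefficients here are arbitrary, not fitted.
def cao_correct(field: np.ndarray, defocus_coeff: float) -> np.ndarray:
    n = field.shape[0]
    fx = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(fx, fx)
    rho2 = (FX**2 + FY**2) / fx.max()**2           # normalized pupil radius^2
    phase = defocus_coeff * (2.0 * rho2 - 1.0)      # Zernike defocus Z_2^0
    pupil = np.exp(-1j * phase)                     # conjugate phase cancels aberration
    return np.fft.ifft2(np.fft.fft2(field) * pupil)

# Round trip: simulating an aberration and correcting with the same coefficient
# restores the original field (up to numerical error).
rng = np.random.default_rng(1)
f = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
aberrated = cao_correct(f, -1.5)                    # simulate a defocus aberration
restored = cao_correct(aberrated, +1.5)
print(np.allclose(restored, f))                     # True
```

Because the correction is a pure phase in the pupil plane, it can be applied after acquisition, which is what allows hardware adaptive optics to be replaced by post-processing.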
Riverine Bathymetry Imaging with Indirect Observations
NASA Astrophysics Data System (ADS)
Farthing, M.; Lee, J. H.; Ghorbanidehno, H.; Hesser, T.; Darve, E. F.; Kitanidis, P. K.
2017-12-01
Bathymetry, i.e., depth, imaging in a river is of crucial importance for shipping operations and flood management. With advancements in sensor technology and computational resources, various types of indirect measurements can be used to estimate high-resolution riverbed topography. In particular, the use of surface velocity measurements has been actively investigated recently, since they are easy to acquire at low cost in all river conditions and surface velocities are sensitive to the river depth. In this work, we image riverbed topography using depth-averaged quasi-steady velocity observations related to the topography through the 2D shallow water equations (SWE). The principal component geostatistical approach (PCGA), a fast and scalable variational inverse modeling method powered by a low-rank representation of the covariance matrix structure, is presented and applied to two "twin" riverine bathymetry identification problems. To compare efficiency and effectiveness, an ensemble-based approach is also applied to the test problems. Results demonstrate that PCGA is superior to the ensemble-based approach in terms of computational effort and accuracy. In particular, the results obtained from PCGA capture small-scale bathymetry features irrespective of the initial guess through the successive linearization of the forward model. Analysis of the direct survey data of the riverine bathymetry used in one of the test problems shows an efficient, parsimonious choice of the solution basis in PCGA, so that the number of numerical model runs used to achieve the inversion results is close to the minimum number that reconstructs the underlying bathymetry.
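The low-rank covariance representation at the heart of PCGA can be illustrated generically (assumed exponential covariance, grid, and rank; not the authors' code):

```python
import numpy as np

# Illustrative low-rank step: approximate a dense covariance matrix by its
# leading eigenpairs, the "principal components" that make the geostatistical
# inversion scale to high-resolution bathymetry grids.
n = 200
x = np.linspace(0.0, 1.0, n)
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential covariance
vals, vecs = np.linalg.eigh(Q)                       # eigenvalues in ascending order
k = 20                                               # retained rank
Zk = vecs[:, -k:] * np.sqrt(vals[-k:])               # Q is approximately Zk @ Zk.T
err = np.linalg.norm(Q - Zk @ Zk.T) / np.linalg.norm(Q)
print(err < 0.05)  # True: 20 modes capture this smooth covariance well
```

Because only `k` basis vectors are carried through the inversion, the number of forward-model runs scales with the rank rather than the grid size, which is the parsimony the abstract refers to.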
Deep Tissue Fluorescent Imaging in Scattering Specimens Using Confocal Microscopy
Clendenon, Sherry G.; Young, Pamela A.; Ferkowicz, Michael; Phillips, Carrie; Dunn, Kenneth W.
2015-01-01
In scattering specimens, multiphoton excitation and nondescanned detection improve imaging depth by a factor of 2 or more over confocal microscopy; however, imaging depth is still limited by scattering. We applied the concept of clearing to deep tissue imaging of highly scattering specimens. Clearing is a remarkably effective approach to improving image quality at depth using either confocal or multiphoton microscopy. Tissue clearing appears to eliminate the need for multiphoton excitation for deep tissue imaging. PMID:21729357
Super-nonlinear fluorescence microscopy for high-contrast deep tissue imaging
NASA Astrophysics Data System (ADS)
Wei, Lu; Zhu, Xinxin; Chen, Zhixing; Min, Wei
2014-02-01
Two-photon excited fluorescence microscopy (TPFM) offers the highest penetration depth with subcellular resolution in light microscopy, due to its unique advantage of nonlinear excitation. However, a fundamental imaging-depth limit, accompanied by a vanishing signal-to-background contrast, still exists for TPFM when imaging deep into scattering samples. Formally, the focusing depth at which the in-focus signal and the out-of-focus background are equal to each other is defined as the fundamental imaging-depth limit. To go beyond this imaging-depth limit of TPFM, we report a new class of super-nonlinear fluorescence microscopy for high-contrast deep tissue imaging, including multiphoton activation and imaging (MPAI), harnessing novel photo-activatable fluorophores; stimulated emission reduced fluorescence (SERF) microscopy, adding a weak laser beam for stimulated emission; and two-photon induced focal saturation imaging, with preferential depletion of ground-state fluorophores at focus. The resulting image contrasts all exhibit a higher-order (third- or fourth-order) nonlinear signal dependence on laser intensity than that in standard TPFM. Both the physical principles and the imaging demonstrations will be provided for each super-nonlinear microscopy. In all these techniques, the created super-nonlinearity significantly enhances the imaging contrast and concurrently extends the imaging-depth limit of TPFM. Conceptually different from conventional multiphoton processes mediated by virtual states, our strategy constitutes a new class of fluorescence microscopy where high-order nonlinearity is mediated by real population transfer.
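The contrast argument can be made concrete with a toy calculation (illustrative numbers only, not values from the paper):

```python
# Toy model: at the two-photon depth limit, the in-focus signal ~ I_f^n equals
# the integrated out-of-focus background ~ V_b * I_b^n for n = 2. Raising the
# effective nonlinear order n restores signal-to-background contrast.
def contrast(n: float, i_focus: float, i_bg: float, v_bg: float) -> float:
    """In-focus signal over out-of-focus background for nonlinear order n."""
    return i_focus**n / (v_bg * i_bg**n)

# Numbers chosen so that n = 2 sits exactly at the depth limit (contrast 1).
i_f, i_b, v_b = 10.0, 1.0, 100.0
print(contrast(2, i_f, i_b, v_b))  # 1.0: signal equals background at the limit
print(contrast(3, i_f, i_b, v_b))  # 10.0: third order gives 10x contrast
print(contrast(4, i_f, i_b, v_b))  # 100.0: fourth order gives 100x contrast
```

Each additional order of nonlinearity multiplies the contrast by the focal-to-background intensity ratio, which is why third- and fourth-order dependence pushes the depth limit beyond that of standard TPFM.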
Wang, Jinyu; Léger, Jean-François; Binding, Jonas; Boccara, A. Claude; Gigan, Sylvain; Bourdieu, Laurent
2012-01-01
Aberrations limit the resolution, signal intensity and achievable imaging depth in microscopy. Coherence-gated wavefront sensing (CGWS) allows the fast measurement of aberrations in scattering samples and therefore the implementation of adaptive corrections. However, CGWS has been demonstrated so far only in weakly scattering samples. We designed a new CGWS scheme based on a Linnik interferometer and a SLED light source, which is able to compensate dispersion automatically and can be implemented on any microscope. In the highly scattering rat brain tissue, where multiply scattered photons falling within the temporal gate of the CGWS can no longer be neglected, we have measured known defocus and spherical aberrations up to a depth of 400 µm. PMID:23082292
Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin
2014-07-02
Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital. PMID:24991942
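Per-activity HMM scoring of the kind described above can be sketched with a toy discrete HMM (made-up parameters and feature symbols, not the paper's trained models):

```python
import numpy as np

# Each activity gets its own HMM; recognition picks the model with the highest
# likelihood for an observed feature sequence (forward algorithm, probability space).
def log_likelihood(obs, start, trans, emit):
    """obs: list of symbol indices; start: (S,), trans: (S,S), emit: (S,V)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return np.log(alpha.sum())

start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
walk_emit = np.array([[0.9, 0.1], [0.2, 0.8]])   # made-up model for "walking"
sit_emit = np.array([[0.1, 0.9], [0.8, 0.2]])    # made-up model for "sitting"

obs = [0, 0, 1, 0]                               # quantized silhouette features
scores = {"walking": log_likelihood(obs, start, trans, walk_emit),
          "sitting": log_likelihood(obs, start, trans, sit_emit)}
print(max(scores, key=scores.get))               # walking
```

In the real system the observation symbols would come from quantized skeleton-joint features rather than this two-symbol toy alphabet.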
Robust pedestrian detection and tracking from a moving vehicle
NASA Astrophysics Data System (ADS)
Tuong, Nguyen Xuan; Müller, Thomas; Knoll, Alois
2011-01-01
In this paper, we address the problem of multi-person detection, tracking and distance estimation in a complex scenario using multiple cameras. Specifically, we are interested in a vision system for supporting the driver in avoiding any unwanted collision with a pedestrian. We propose an approach using Histograms of Oriented Gradients (HOG) to detect pedestrians in static images and a particle filter as a robust tracking technique to follow targets from frame to frame. Because a full depth map requires expensive computation, we extract depth information of targets using the Direct Linear Transformation (DLT) to reconstruct the 3D coordinates of corresponding points found by running Speeded Up Robust Features (SURF) on two input images. Using the particle filter, the proposed tracker can efficiently handle target occlusions in a simple background environment. However, to achieve reliable performance in complex scenarios with frequent target occlusions and cluttered backgrounds, results from the detection module are integrated as feedback to recover the tracker from failures caused by the complexity of the environment and the variability of the target appearance model. The proposed approach is evaluated on different data sets, both in a simple background scenario and in a cluttered background environment. The results show that, by integrating detector and tracker, reliable and stable performance is possible even if occlusion occurs frequently in a highly complex environment. A vision-based collision avoidance system for an intelligent car can therefore be achieved.
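A bootstrap particle filter of the kind used for frame-to-frame tracking can be sketched in a few lines. This is a generic predict-update-resample cycle, not the authors' implementation; the motion and measurement noise levels, and the simulated detections standing in for HOG detector output, are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, measurement, motion_std=2.0, meas_std=5.0):
    """One predict-update-resample cycle of a bootstrap particle filter."""
    # Predict: random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian likelihood of the detector measurement
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # Resample (multinomial; systematic resampling would have lower variance)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.uniform(0.0, 100.0, size=(500, 2))    # (x, y) hypotheses in image coords
for z in [(50.0, 40.0), (52.0, 41.0), (55.0, 43.0)]:  # simulated detections
    particles = pf_step(particles, np.array(z))
estimate = particles.mean(axis=0)                     # tracked target position
```

In the paper's setting the measurement would come from the HOG detector, and detector feedback would re-seed the particle set after tracking failures.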
NASA Astrophysics Data System (ADS)
Ahi, Kiarash; Anwar, Mehdi
2016-04-01
This paper introduces a novel reconstruction approach for enhancing the resolution of terahertz (THz) images. For this purpose the THz imaging equation is derived; to the best of our knowledge, this is the first report of a THz imaging equation. This imaging equation is universal for THz far-field imaging systems and can be used for analyzing, describing and modeling such systems. The geometry and behavior of Gaussian beams in the far-field region imply that the FWHM of a THz beam diverges as its frequency decreases, so the resolution of the measurement decreases at lower frequencies. On the other hand, the depth of penetration of THz beams decreases as frequency increases. Roughly speaking, beams below 1.5 THz are transmitted into integrated circuit (IC) packages and similar packaged objects; higher-frequency THz pulses therefore cannot be used for higher-resolution inspection of packaged items. In this paper, after developing the 3D THz point spread function (PSF) of the scanning THz beam and then the THz imaging equation, THz images are enhanced through deconvolution of the THz images with the THz PSF. As a result, the resolution has been improved several times beyond the physical limitations of the THz measurement setup in the far-field region, and sub-Nyquist images have been achieved. In particular, MSE and SSIM improved by 27% and 50%, respectively. Details as small as 0.2 mm were made visible in THz images which originally revealed no details smaller than 2.2 mm; in other words, the resolution of the images was increased by a factor of 10. The accuracy of the reconstructed images was verified against high-resolution X-ray images.
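Deconvolving images with a known PSF, as in the resolution enhancement described above, can be illustrated with a minimal Wiener-filter sketch. The 2D Gaussian standing in for the THz PSF, the toy two-point scene, and the regularization constant are all assumptions; the paper's actual 3D PSF and deconvolution details differ.

```python
import numpy as np

def wiener_deconv(blurred, psf_centered, k=1e-3):
    """Frequency-domain Wiener deconvolution with a known PSF and
    a scalar regularizer k (a crude noise-to-signal ratio stand-in)."""
    H = np.fft.fft2(np.fft.ifftshift(psf_centered))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

# Toy scene: two point reflectors 6 px apart, blurred by a Gaussian PSF
x = np.zeros((64, 64))
x[20, 20] = x[20, 26] = 1.0
yy, xx = np.mgrid[-32:32, -32:32]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * H))
restored = wiener_deconv(blurred, psf)
# In `blurred` the two points merge into a single lobe; in `restored`
# two separate peaks reappear, i.e. resolution beyond the blur limit.
```

The regularizer k limits noise amplification at frequencies where the PSF's transfer function is weak; in practice it is tuned to the measured noise level.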
Laser speckle imaging based on photothermally driven convection.
Regan, Caitlin; Choi, Bernard
2016-02-01
Laser speckle imaging (LSI) is an interferometric technique that provides information about the relative speed of moving scatterers in a sample. Photothermal LSI overcomes limitations in depth resolution faced by conventional LSI by incorporating an excitation pulse to target absorption by hemoglobin within the vascular network. Here we present results from experiments designed to determine the mechanism by which photothermal LSI decreases speckle contrast. We measured the impact of mechanical properties on speckle contrast, as well as the spatiotemporal temperature dynamics and bulk convective motion occurring during photothermal LSI. Our collective data strongly support the hypothesis that photothermal LSI achieves a transient reduction in speckle contrast due to bulk motion associated with thermally driven convection. The ability of photothermal LSI to image structures below a scattering medium may have important preclinical and clinical applications.
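Speckle contrast, the quantity LSI reduces when scatterers (or, here, thermally driven convection) move, is conventionally computed as the local ratio of standard deviation to mean intensity. The sketch below demonstrates this on synthetic data: a single fully developed speckle pattern versus an average of several independent patterns, a crude stand-in for motion blur during the exposure.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, w=7):
    """Local speckle contrast K = sigma / mean over w x w windows."""
    win = sliding_window_view(img, (w, w))
    return win.std(axis=(-1, -2)) / win.mean(axis=(-1, -2))

rng = np.random.default_rng(1)
# Fully developed static speckle: exponentially distributed intensity, K ~ 1
static = rng.exponential(1.0, (64, 64))
# Moving scatterers blur speckle during the exposure; modeled crudely here
# as the average of 8 independent patterns, giving K ~ 1/sqrt(8)
moving = rng.exponential(1.0, (8, 64, 64)).mean(axis=0)
K_static = speckle_contrast(static).mean()
K_moving = speckle_contrast(moving).mean()
# K_moving is markedly lower than K_static, the signature of motion in LSI
```

The photothermal excitation in the paper induces exactly this kind of contrast drop, via convective bulk motion rather than scatterer flow.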
NASA Astrophysics Data System (ADS)
Corucci, Linda; Masini, Andrea; Cococcioni, Marco
2011-01-01
This paper addresses bathymetry estimation from high-resolution multispectral satellite images by proposing an accurate supervised method based on a neuro-fuzzy approach. The method is applied to two Quickbird images of the same area, acquired in different years and meteorological conditions, and is validated using ground-truth data. Performance is studied in different realistic situations of in situ data availability. The method achieves a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean standard deviation of 45 cm is obtained. The effect of both meteorological conditions and training set size reduction on the overall performance is also investigated.
Xiao, Jingjing; Stolkin, Rustam; Gao, Yuqing; Leonardis, Ales
2017-09-06
This paper presents a novel robust method for single target tracking in RGB-D images, and also contributes a substantial new benchmark dataset for evaluating RGB-D trackers. While a target object's color distribution is reasonably motion-invariant, this is not true for the target's depth distribution, which continually varies as the target moves relative to the camera. It is therefore nontrivial to design target models which can fully exploit (potentially very rich) depth information for target tracking. For this reason, much of the previous RGB-D literature relies on color information for tracking, while exploiting depth information only for occlusion reasoning. In contrast, we propose an adaptive range-invariant target depth model, and show how both depth and color information can be fully and adaptively fused during the search for the target in each new RGB-D image. We introduce a new, hierarchical, two-layered target model (comprising local and global models) which uses spatio-temporal consistency constraints to achieve stable and robust on-the-fly target relearning. In the global layer, multiple features, derived from both color and depth data, are adaptively fused to find a candidate target region. In ambiguous frames, where one or more features disagree, this global candidate region is further decomposed into smaller local candidate regions for matching to local-layer models of small target parts. We also note that conventional use of depth data, for occlusion reasoning, can easily trigger false occlusion detections when the target moves rapidly toward the camera. To overcome this problem, we show how combining target information with contextual information enables the target's depth constraint to be relaxed. Our adaptively relaxed depth constraints can robustly accommodate large and rapid target motion in the depth direction, while still enabling the use of depth data for highly accurate reasoning about occlusions. 
For evaluation, we introduce a new RGB-D benchmark dataset with per-frame annotated attributes and extensive bias analysis. Our tracker is evaluated using two different state-of-the-art methodologies, VOT and object tracking benchmark, and in both cases it significantly outperforms four other state-of-the-art RGB-D trackers from the literature.
High resolution multiplexed functional imaging in live embryos (Conference Presentation)
NASA Astrophysics Data System (ADS)
Xu, Dongli; Zhou, Weibin; Peng, Leilei
2017-02-01
Fourier multiplexed fluorescence lifetime imaging (FmFLIM) scanning laser optical tomography (FmFLIM-SLOT) combines FmFLIM and scanning laser optical tomography (SLOT) to perform multiplexed 3D FLIM imaging of live embryos. The system has demonstrated multiplexed functional imaging of zebrafish embryos genetically expressing Förster resonance energy transfer (FRET) sensors. However, the previous system had a 20 micron resolution because the focused Gaussian beam diverges quickly away from the focal plane, making it difficult to achieve high resolution imaging over a long projection depth. Here, we present a high-resolution FmFLIM-SLOT system with an achromatic Bessel beam, which achieves 3 micron resolution in 3D deep tissue imaging. In Bessel-FmFLIM-SLOT, multiple laser excitation lines are first intensity-modulated by a Michelson interferometer with a spinning-polygon-mirror optical delay line, which enables Fourier multiplexed multi-channel lifetime measurements. Then, a spatial light modulator and a prism are used to transform the modulated Gaussian laser beam into an achromatic Bessel beam. The achromatic Bessel beam scans across the whole specimen at equal angular intervals as the sample rotates. After tomographic reconstruction and frequency-domain lifetime analysis, both 3D intensity and lifetime images of multiple excitation-emission channels can be obtained. Using the Bessel-FmFLIM-SLOT system, we performed cellular-resolution FLIM tomography imaging of live zebrafish embryos. Genetically expressed FRET sensors in these embryos will allow non-invasive observation of multiple biochemical processes in vivo.
Nonlinear spectral imaging of biological tissues
NASA Astrophysics Data System (ADS)
Palero, J. A.
2007-07-01
The work presented in this thesis demonstrates live high resolution 3D imaging of tissue in its native state and environment. The nonlinear interaction between focussed femtosecond light pulses and the biological tissue results in the emission of natural autofluorescence and second-harmonic signal. Because biological intrinsic emission is generally very weak and extends from the ultraviolet to the visible spectral range, a broad-spectral range and high sensitivity 3D spectral imaging system is developed. Imaging the spectral characteristics of the biological intrinsic emission reveals the structure and biochemistry of the cells and extra-cellular components. By using different methods in visualizing the spectral images, discrimination between different tissue structures is achieved without the use of any stain or fluorescent label. For instance, RGB real color spectral images of the intrinsic emission of mouse skin tissues show blue cells, green hair follicles, and purple collagen fibers. The color signature of each tissue component is directly related to its characteristic emission spectrum. The results of this study show that skin tissue nonlinear intrinsic emission is mainly due to the autofluorescence of reduced nicotinamide adenine dinucleotide (phosphate), flavins, keratin, melanin, phospholipids, elastin and collagen and nonlinear Raman scattering and second-harmonic generation in Type I collagen. In vivo time-lapse spectral imaging is implemented to study metabolic changes in epidermal cells in tissues. Optical scattering in tissues, a key factor in determining the maximum achievable imaging depth, is also investigated in this work.
Aberration-free superresolution imaging via binary speckle pattern encoding and processing
NASA Astrophysics Data System (ADS)
Ben-Eliezer, Eyal; Marom, Emanuel
2007-04-01
We present an approach that provides superresolution beyond the classical limit as well as image restoration in the presence of aberrations; in particular, the ability to obtain superresolution while extending the depth of field (DOF) simultaneously is tested experimentally. It is based on an approach, recently proposed, shown to increase the resolution significantly for in-focus images by speckle encoding and decoding. In our approach, an object multiplied by a fine binary speckle pattern may be located anywhere along an extended DOF region. Since the exact magnification is not known in the presence of defocus aberration, the acquired low-resolution image is electronically processed via a parallel-branch decoding scheme, where in each branch the image is multiplied by the same high-resolution synchronized time-varying binary speckle but with different magnification. Finally, a hard-decision algorithm chooses the branch that provides the highest-resolution output image, thus achieving insensitivity to aberrations as well as DOF variations. Simulation as well as experimental results are presented, exhibiting significant resolution improvement factors.
Use of OCTA, FA, and Ultra-Widefield Imaging in Quantifying Retinal Ischemia: A Review.
Or, Chris; Sabrosa, Almyr S; Sorour, Osama; Arya, Malvika; Waheed, Nadia
2018-01-01
As ischemia remains a key prognostic factor in the management of various diseases including diabetic retinopathy, an increasing amount of research has been dedicated to its quantification as a potential biomarker. Advancements in the quantification of retinal ischemia have been made with the imaging modalities of fluorescein angiography (FA), ultra-widefield imaging (UWF), and optical coherence tomography angiography (OCTA), with each imaging modality offering certain benefits over the others. FA remains the gold standard in assessing the extent of ischemia. UWF imaging has allowed for the assessment of peripheral ischemia via FA. It is, however, OCTA that offers the best visualization of retinal vasculature with its noninvasive depth-resolved imaging and therefore has the potential to become a mainstay in the assessment of retinal ischemia. The primary purpose of this article is to review the use of FA, UWF, and OCTA to quantify retinal ischemia and the various methods described in the literature by which this is achieved. Copyright 2018 Asia-Pacific Academy of Ophthalmology.
X-RAY IMAGING Achieving the third dimension using coherence
Robinson, Ian; Huang, Xiaojing
2017-01-25
X-ray imaging is extensively used in medical and materials science. Traditionally, the depth dimension is obtained by turning the sample to gain different views. The famous penetrating properties of X-rays mean that projection views of the subject sample can be readily obtained in the linear absorption regime. 180 degrees of projections can then be combined using computed tomography (CT) methods to obtain a full 3D image, a technique extensively used in medical imaging. In the work now presented in Nature Materials, Stephan Hruszkewycz and colleagues have demonstrated genuine 3D imaging by a new method called 3D Bragg projection ptychography. Their approach combines the 'side view' capability of using Bragg diffraction from a crystalline sample with the coherence capabilities of ptychography. Thus, it results in a 3D image from a 2D raster scan of a coherent beam across a sample that does not have to be rotated.
20 MHz/40 MHz dual element transducers for high frequency harmonic imaging.
Kim, Hyung Ham; Cannata, Jonathan M; Liu, Ruibin; Chang, Jin Ho; Silverman, Ronald H; Shung, K Kirk
2008-12-01
Concentric annular-type dual element transducers for second harmonic imaging at 20 MHz/40 MHz were designed and fabricated to improve spatial resolution and depth of penetration for ophthalmic imaging applications. The outer ring element was designed to transmit the 20 MHz signal and the inner circular element was designed to receive the 40 MHz second harmonic signal. Lithium niobate (LiNbO3), with its low dielectric constant, was used as the piezoelectric material to achieve good electrical impedance matching. Double matching layers and conductive backing were used and optimized by KLM modeling to achieve high sensitivity, wide bandwidth for harmonic imaging, and superior time-domain characteristics. Prototype transducers were fabricated and evaluated quantitatively and clinically. The average measured center frequency for the transmit ring element was 21 MHz and the one-way −3 dB bandwidth was greater than 50%. The 40 MHz receive element functioned at a 31 MHz center frequency with acceptable bandwidth to receive the attenuated and frequency-downshifted harmonic signal. The lateral beam profile for the 20 MHz ring element at the focus matched the Field II simulation results well, and the effect of outer ring diameter was also examined. Images of the posterior segment of an excised pig eye and a choroidal nevus of a human eye were obtained with both single element and dual element transducers and compared to demonstrate the advantages of dual element harmonic imaging.
WFIRST: Science from Deep Field Surveys
NASA Astrophysics Data System (ADS)
Koekemoer, Anton M.; Foley, Ryan; WFIRST Deep Field Working Group
2018-06-01
WFIRST will enable deep field imaging across much larger areas than those previously obtained with Hubble, opening up completely new areas of parameter space for extragalactic deep fields including cosmology, supernova and galaxy evolution science. The instantaneous field of view of the Wide Field Instrument (WFI) is about 0.3 square degrees, which would for example yield an Ultra Deep Field (UDF) reaching similar depths at visible and near-infrared wavelengths to that obtained with Hubble, over an area about 100-200 times larger, for a comparable investment in time. Moreover, wider fields on scales of 10-20 square degrees could achieve depths comparable to large HST surveys at medium depths such as GOODS and CANDELS, and would enable multi-epoch supernova science that could be matched in area to LSST Deep Drilling fields or other large survey areas. Such fields may benefit from being placed on locations in the sky that have ancillary multi-band imaging or spectroscopy from other facilities, from the ground or in space. The WFIRST Deep Fields Working Group has been examining the science considerations for various types of deep fields that may be obtained with WFIRST, and present here a summary of the various properties of different locations in the sky that may be considered for future deep fields with WFIRST.
A deep learning approach for pose estimation from volumetric OCT data.
Gessert, Nils; Schlüter, Matthias; Schlaefer, Alexander
2018-05-01
Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images' 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label's resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
A Prototype High-Resolution Small-Animal PET Scanner Dedicated to Mouse Brain Imaging.
Yang, Yongfeng; Bec, Julien; Zhou, Jian; Zhang, Mengxi; Judenhofer, Martin S; Bai, Xiaowei; Di, Kun; Wu, Yibao; Rodriguez, Mercedes; Dokhale, Purushottam; Shah, Kanai S; Farrell, Richard; Qi, Jinyi; Cherry, Simon R
2016-07-01
We developed a prototype small-animal PET scanner based on depth-encoding detectors using dual-ended readout of small scintillator elements to produce high and uniform spatial resolution suitable for imaging the mouse brain. The scanner consists of 16 tapered dual-ended-readout detectors arranged in a 61-mm-diameter ring. The axial field of view (FOV) is 7 mm, and the transaxial FOV is 30 mm. The scintillator arrays consist of 14 × 14 lutetium oxyorthosilicate elements, with a crystal size of 0.43 × 0.43 mm at the front end and 0.80 × 0.43 mm at the back end, and the crystal elements are 13 mm long. The arrays are read out by 8 × 8 mm and 13 × 8 mm position-sensitive avalanche photodiodes (PSAPDs) placed at opposite ends of the array. Standard nuclear-instrumentation-module electronics and a custom-designed multiplexer are used for signal processing. The detector performance was measured, and all but the crystals at the very edge could be clearly resolved. The average intrinsic spatial resolution in the axial direction was 0.61 mm. A depth-of-interaction resolution of 1.7 mm was achieved. The sensitivity of the scanner at the center of the FOV was 1.02% for a lower energy threshold of 150 keV and 0.68% for a lower energy threshold of 250 keV. The spatial resolution within a FOV that can accommodate the entire mouse brain was approximately 0.6 mm using a 3-dimensional maximum-likelihood expectation maximization reconstruction. Images of a hot-rod microphantom showed that rods with a diameter of as low as 0.5 mm could be resolved. The first in vivo studies were performed using (18)F-fluoride and confirmed that a 0.6-mm resolution can be achieved in the mouse head in vivo. Brain imaging studies with (18)F-FDG were also performed. We developed a prototype PET scanner that can achieve a spatial resolution approaching the physical limits of a small-bore PET scanner set by positron range and detector interaction. 
We plan to add more detector rings to extend the axial FOV of the scanner and increase sensitivity. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Lee, Hyun Jeong; Min, Ji Young; Kim, Hyun Il; Byon, Hyo-Jin
2017-05-01
Caudal blocks are performed through the sacral hiatus in order to provide pain control in children undergoing lower abdominal surgery. During the block, it is important not to advance the needle too far beyond the sacrococcygeal ligament, to prevent unintended dural puncture. This study used demographic data to establish simple guidelines for predicting a safe needle depth in the caudal epidural space in children. A total of 141 children under 12 years old who had undergone lumbar-sacral magnetic resonance imaging were included. The T2 sagittal image that provided the best view of the sacrococcygeal membrane and the dural sac was chosen. We used a Picture Archiving and Communication System (Centricity® PACS, GE Healthcare Co.) to measure the distance between the sacrococcygeal ligament and the dural sac, the length of the sacrococcygeal ligament, and the maximum depth of the caudal space. There were strong correlations between age, weight, height, and BSA and the distance between the sacrococcygeal ligament and dural sac, as well as the length of the sacrococcygeal ligament. Based on these findings, a simple formula to calculate the distance between the sacrococcygeal ligament and dural sac was developed: 25 × BSA (mm). This simple formula can accurately estimate the safe depth of the caudal epidural space to prevent unintended dural puncture during caudal block in children. However, further clinical studies based on this formula are needed to substantiate its utility. © 2017 John Wiley & Sons Ltd.
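The study's formula, distance = 25 × BSA (mm), is easy to apply once BSA is known. The sketch below uses the Mosteller formula for BSA as an assumption — the abstract does not state which BSA formula was used — and the example height and weight are purely illustrative.

```python
import math

def mosteller_bsa(height_cm, weight_kg):
    """Body surface area in m^2 by the Mosteller formula (an assumption here;
    the study does not specify which BSA formula it used)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def safe_caudal_depth_mm(height_cm, weight_kg):
    """Predicted sacrococcygeal-ligament-to-dural-sac distance: 25 x BSA (mm)."""
    return 25.0 * mosteller_bsa(height_cm, weight_kg)

# Illustrative child: 100 cm, 16 kg -> BSA = sqrt(1600/3600) ~ 0.667 m^2
depth = safe_caudal_depth_mm(100.0, 16.0)  # ~16.7 mm
```

As the abstract notes, this is a prediction aid, not a substitute for clinical judgment or imaging confirmation.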
Su, Rong; Kirillin, Mikhail; Chang, Ernest W.; Sergeeva, Ekaterina; Yun, Seok H.; Mattsson, Lars
2014-01-01
Optical coherence tomography (OCT) is a promising tool for detecting micro-channels, metal prints, defects and delaminations embedded in alumina and zirconia ceramic layers at hundreds of micrometers beneath the surface. The effect of surface roughness and of scattering of the probing radiation within the sample on OCT inspection is analyzed from experimental and simulated OCT images of ceramic samples with varying surface roughness and operating wavelengths. By Monte Carlo simulation of OCT images in the mid-IR, the optimal operating wavelength is found to be 4 µm for the alumina samples and 2 µm for the zirconia samples, achieving a sufficient probing depth of about 1 mm. The effects of rough surfaces and dispersion on the detection of the embedded boundaries are discussed. Two types of image artefacts are found in OCT images, due to multiple reflections between neighboring boundaries and to inhomogeneity of the refractive index. PMID:24977838
Li, Guang; Luo, Shouhua; Yan, Yuling; Gu, Ning
2015-01-01
The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. Based on this advantage, it can effectively image tissues, cells and many other small samples, especially calcification in the vasculature or in the glomerulus. In general, the scintillator should be only a few micrometers thick, or even thinner, because its thickness strongly affects the resolution. However, it is difficult to make the scintillator so thin, and a thin scintillator also greatly reduces the efficiency of collecting photons. In this paper, we propose an approach that extends the depth of focus (DOF) to solve these problems. We develop equation sets by first deducing the relationship between the high-resolution image generated by the scintillator and the blurred image degraded by defect of focus, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. By using a 20 μm thick mismatched scintillator to replace the 1 μm thick matched one, we simulated a high-resolution X-ray imaging system and obtained a degraded blurred image. Using the proposed algorithm, we recovered the blurred image, and the experimental results showed that the algorithm performs well in recovering image blur caused by a mismatched scintillator thickness. The proposed method is thus shown to efficiently recover images degraded by defect of focus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded blurred image, so there is room for improvement, and the corresponding denoising algorithm is worth further study and discussion.
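The POCS idea — alternating a data-consistency step with projection onto a constraint set — can be illustrated with a minimal sketch that interleaves Landweber iterations with a non-negativity projection. This is a simplified stand-in for the paper's POCS plus total-variation method; the Gaussian defocus PSF, iteration count, and step size are assumptions.

```python
import numpy as np

def landweber_pocs(blurred, psf_centered, n_iter=200, tau=1.0):
    """Landweber data-consistency steps interleaved with projection onto
    the convex set of non-negative images (a minimal POCS-style sketch)."""
    H = np.fft.fft2(np.fft.ifftshift(psf_centered))
    G = np.fft.fft2(blurred)
    F = np.zeros_like(G)
    f = np.zeros_like(blurred)
    for _ in range(n_iter):
        F = F + tau * np.conj(H) * (G - H * F)            # gradient step on ||G - HF||^2
        f = np.clip(np.real(np.fft.ifft2(F)), 0.0, None)  # project: pixel values >= 0
        F = np.fft.fft2(f)
    return f

# Toy test: a single point source blurred by a Gaussian "defect of focus"
x = np.zeros((64, 64))
x[30, 30] = 1.0
yy, xx = np.mgrid[-32:32, -32:32]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * H))
restored = landweber_pocs(blurred, psf)  # the point's peak sharpens back up
```

With a unit-sum PSF the transfer function satisfies |H| ≤ 1, so the step size tau = 1 keeps the iteration stable; the non-negativity projection suppresses the ringing a plain inverse filter would produce.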
Distance-based over-segmentation for single-frame RGB-D images
NASA Astrophysics Data System (ADS)
Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao
2017-11-01
Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm partitions an image into regions of perceptually similar pixels, but performs poorly on indoor scenes when based on color images alone. Fortunately, RGB-D images can improve performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which achieves full coverage of the image by super-pixels. DBOS fills the holes in depth images to fully utilize the depth information, and applies a SLIC-like framework for fast running. Additionally, depth features such as plane projection distance are extracted to compute the distance measure at the core of SLIC-like frameworks. Experiments on RGB-D images from the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining comparable speed.
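The distance measure at the core of SLIC-like frameworks combines color and spatial proximity; extending it with a depth term, in the spirit of DBOS, might be sketched as below. The specific weights and the use of a plain depth difference (rather than the paper's plane projection distance) are assumptions for illustration only.

```python
import numpy as np

def rgbd_distance(p, c, m=10.0, depth_w=5.0, S=20.0):
    """SLIC-style pixel-to-center distance with an added depth term.
    m (compactness), depth_w (depth weight) and S (grid step) are assumed values."""
    dc = np.linalg.norm(p["lab"] - c["lab"])  # color proximity (CIELAB)
    ds = np.linalg.norm(p["xy"] - c["xy"])    # spatial proximity (pixels)
    dd = abs(p["depth"] - c["depth"])         # depth proximity (meters)
    return np.sqrt(dc ** 2 + (ds / S) ** 2 * m ** 2 + (depth_w * dd) ** 2)

pixel = {"lab": np.array([50.0, 10.0, 10.0]), "xy": np.array([14.0, 14.0]), "depth": 1.20}
near  = {"lab": np.array([52.0, 11.0, 9.0]),  "xy": np.array([16.0, 15.0]), "depth": 1.21}
far   = {"lab": np.array([51.0, 10.0, 10.0]), "xy": np.array([15.0, 14.0]), "depth": 2.50}
# `far` is similar in color and position but lies on a different depth plane,
# so the depth term keeps it out of the same super-pixel.
```

In a full SLIC-like loop, each pixel is assigned to the cluster center minimizing this distance within a local 2S × 2S search window, and centers are then updated iteratively.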
Depth image super-resolution via semi self-taught learning framework
NASA Astrophysics Data System (ADS)
Zhao, Furong; Cao, Zhiguo; Xiao, Yang; Zhang, Xiaodi; Xian, Ke; Li, Ruibo
2017-06-01
Depth images have recently attracted much attention in computer vision and in high-quality 3D content production for 3DTV and 3D movies. In this paper, we present a new semi self-taught learning framework for enhancing the resolution of depth maps without making use of ancillary color image data at the target resolution, or of multiple aligned depth maps. Our framework consists of cascaded random forests progressing from coarse to fine results. We learn surface information and structure transformations both from a small set of high-quality depth exemplars and from the input depth map itself across different scales. Considering that edges play an important role in depth map quality, we optimize an effective regularized objective computed on the output image space and the input edge space in the random forests. Experiments show the effectiveness and superiority of our method against other techniques with or without aligned RGB information.
Lew, Matthew D.; Lee, Steven F.; Badieirostami, Majid; Moerner, W. E.
2011-01-01
We describe the corkscrew point spread function (PSF), which can localize objects in three dimensions throughout a 3.2 µm depth of field with nanometer precision. The corkscrew PSF rotates as a function of the axial (z) position of an emitter. Fisher information calculations show that the corkscrew PSF can achieve nanometer localization precision with limited numbers of photons. We demonstrate three-dimensional super-resolution microscopy with the corkscrew PSF by imaging beads on the surface of a triangular polydimethylsiloxane (PDMS) grating. With 99,000 photons detected, the corkscrew PSF achieves a localization precision of 2.7 nm in x, 2.1 nm in y, and 5.7 nm in z. PMID:21263500
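As rough intuition for why more photons yield better precision, shot-noise-limited localization scales as the PSF width divided by the square root of the detected photon count. This toy formula is only a first-order approximation; the paper's reported values come from full Fisher-information calculations on the corkscrew PSF:

```python
import math

def localization_precision(psf_sigma_nm, n_photons):
    """Shot-noise-limited localization precision (Thompson-style
    approximation): sigma_loc ~ psf_sigma / sqrt(N).

    Both arguments are illustrative placeholders; real estimates must
    account for background, pixelation, and the actual PSF shape.
    """
    return psf_sigma_nm / math.sqrt(n_photons)
```

With a ~100 nm PSF width and 10^4 photons, this predicts roughly nanometer-scale precision, consistent in order of magnitude with the values quoted above.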
The selection of the optimal baseline in the front-view monocular vision system
NASA Astrophysics Data System (ADS)
Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen
2018-03-01
In the front-view monocular vision system, the accuracy of solving the depth field is related to the length of the inter-frame baseline and to the accuracy of the image matching result. In general, a longer baseline leads to higher precision in solving the depth field. However, at the same time, the difference between the inter-frame images increases, which makes image matching harder, decreases matching accuracy, and may ultimately cause the depth field solution to fail. One common practice is to use a tracking-and-matching method to improve matching accuracy between images, but this approach easily causes matching drift between images with a large interval, producing cumulative error in image matching, so the accuracy of the solved depth field remains very low. In this paper, we propose a depth field fusion algorithm based on the optimal baseline length. First, we analyze the quantitative relationship between the accuracy of the depth field calculation and the inter-frame baseline length, and find the optimal baseline length through extensive experiments; second, we introduce the inverse depth filtering technique from sparse SLAM and solve the depth field under the constraint of the optimal baseline length. Extensive experiments show that our algorithm can effectively eliminate the mismatches caused by image changes and can still solve the depth field correctly in large-baseline scenes. Our algorithm is superior to the traditional SFM algorithm in time and space complexity. The optimal baseline obtained from these experiments serves as a guide for depth field calculation in front-view monocular vision.
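The baseline trade-off described above can be made concrete with the standard triangulation relation. This sketch assumes a rectified pinhole model and first-order error propagation, not the paper's exact analysis:

```python
def depth_and_uncertainty(f_px, baseline_m, disparity_px, sigma_d_px):
    """Triangulated depth and its first-order uncertainty.

    Z = f*B/d, and propagating a matching error sigma_d gives
    sigma_Z = Z**2 * sigma_d / (f*B): a longer baseline B reduces the
    depth error for a fixed matching accuracy, but (as the abstract
    notes) in practice also degrades the matching accuracy itself.
    """
    z = f_px * baseline_m / disparity_px
    sigma_z = z**2 * sigma_d_px / (f_px * baseline_m)
    return z, sigma_z
```

The optimal baseline is the point where the gain from the 1/(f*B) factor is no longer offset by the growth of sigma_d due to harder matching.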
Depth map generation using a single image sensor with phase masks.
Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki
2016-06-13
Conventional stereo matching systems generate a depth map using two or more digital imaging sensors. However, such systems are difficult to use in small cameras because of their high cost and bulky size. In order to solve this problem, this paper presents a stereo matching system that uses a single image sensor with phase masks for phase-difference auto-focusing. A novel phase mask array pattern is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw-format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
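Step (iii), block matching for disparity, can be sketched with a toy sum-of-absolute-differences (SAD) search. The block size and search range here are illustrative, and this flat search omits the hierarchical refinement the paper uses:

```python
import numpy as np

def block_match_row(left, right, y, x, block=3, max_disp=8):
    """Disparity of pixel (y, x) by minimizing the sum of absolute
    differences (SAD) between a block in the left image and shifted
    blocks in the right image along the same row."""
    h = block // 2
    ref = left[y-h:y+h+1, x-h:x+h+1].astype(float)
    best_d, best_cost = 0, float('inf')
    for d in range(0, max_disp + 1):
        if x - h - d < 0:
            break  # candidate block would leave the image
        cand = right[y-h:y+h+1, x-h-d:x+h+1-d].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Repeating this for every pixel gives the sparse disparity map that segmentation and matting then densify.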
NASA Technical Reports Server (NTRS)
Rignot, Eric J.; Zimmermann, Reiner; Oren, Ram
1995-01-01
In the tropical rain forests of Manu, in Peru, where forest biomass ranges from 4 kg/sq m in young forest succession up to 100 kg/sq m in old, undisturbed floodplain stands, the P-band polarimetric radar data gathered in June of 1993 by the AIRSAR (Airborne Synthetic Aperture Radar) instrument separate most major vegetation formations and also perform better than expected in estimating woody biomass. The worldwide need for large-scale, updated biomass estimates obtained with a uniformly applied method, as well as for reliable maps of land cover, justifies a more in-depth exploration of long-wavelength imaging radar applications for tropical forest inventories.
A novel line segment detection algorithm based on graph search
NASA Astrophysics Data System (ADS)
Zhao, Hong-dan; Liu, Guo-ying; Song, Xu
2018-02-01
To address the problem of extracting line segments from an image, a line segment detection method based on a graph search algorithm is proposed. After the edge detection result of the image is obtained, candidate straight line segments are extracted in four directions. The adjacency relationships among the candidate segments are represented by a graph model, on which a depth-first search determines which adjacent segments should be merged. Finally, the least squares method is used to fit the detected straight lines. Comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
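The graph-based merging and fitting steps can be sketched as follows. The adjacency graph built from the four directional detections is assumed given, and the fit is a plain least-squares line, as in the abstract:

```python
import numpy as np

def connected_groups(adjacency):
    """Depth-first search over a segment adjacency graph; each group
    of mutually adjacent candidate segments is merged into one line.

    adjacency maps a segment id to a list of adjacent segment ids.
    """
    seen, groups = set(), []
    for start in adjacency:
        if start in seen:
            continue
        stack, group = [start], []
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            group.append(node)
            stack.extend(adjacency[node])
        groups.append(sorted(group))
    return groups

def fit_line(points):
    """Least-squares fit y = a*x + b to the merged segment's pixels."""
    x, y = np.asarray(points, float).T
    a, b = np.polyfit(x, y, 1)
    return a, b
```

Each group returned by `connected_groups` pools the pixels of its member segments, and `fit_line` produces the final detected line.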
Real time ray tracing based on shader
NASA Astrophysics Data System (ADS)
Gui, JiangHeng; Li, Min
2017-07-01
Ray tracing is a rendering algorithm that generates an image by tracing light rays through an image plane; it can simulate complicated optical phenomena such as refraction, depth of field, and motion blur. Compared with rasterization, ray tracing achieves more realistic rendering results, but at much greater computational cost: even rendering a simple scene can be very time-consuming. With the improvement of GPU performance and the advent of the programmable rendering pipeline, complicated algorithms can now be implemented directly in shaders. This paper therefore proposes a method that implements ray tracing directly in the fragment shader, mainly comprising surface intersection, importance sampling, and progressive rendering. With the help of the GPU's high throughput, it achieves real-time rendering of simple scenes.
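The surface-intersection step is, per pixel, a ray-primitive test. A minimal ray-sphere intersection (written in Python for clarity, though the paper implements the equivalent in a fragment shader) looks like:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance of a ray with a sphere, or None.

    Solves ||o + t*d - c||^2 = r^2 for t, assuming d is normalized
    (so the quadratic coefficient a = 1).
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None
```

A shader evaluates this per fragment against every primitive, then shades the nearest hit; progressive rendering accumulates many such samples per pixel over successive frames.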
Advances in combined endoscopic fluorescence confocal microscopy and optical coherence tomography
NASA Astrophysics Data System (ADS)
Risi, Matthew D.
Confocal microendoscopy provides real-time, high-resolution, cellular-level images via a minimally invasive procedure. Results from an ongoing clinical study to detect ovarian cancer with a novel confocal fluorescence microendoscope are presented. As an imaging modality, confocal fluorescence microendoscopy typically requires exogenous fluorophores, has a relatively limited penetration depth (100 μm), and often employs specialized aperture configurations to achieve real-time imaging in vivo. Two primary research directions designed to overcome these limitations and improve diagnostic capability are presented. Ideal confocal imaging performance is obtained with scanning point illumination and a confocal aperture, but this approach is often unsuitable for real-time, in vivo biomedical imaging. By scanning a slit aperture in one direction, image acquisition speeds are greatly increased, but at the cost of reduced image quality. The design, implementation, and experimental verification of a custom multi-point-scanning modification to a slit-scanning multi-spectral confocal microendoscope are presented. This new design improves the axial resolution while maintaining real-time imaging rates. In addition, the multi-point aperture geometry greatly reduces the effects of tissue scatter on imaging performance. Optical coherence tomography (OCT) has seen wide acceptance and FDA approval as a technique for ophthalmic retinal imaging, and has been adapted for endoscopic use. As a minimally invasive imaging technique, it provides morphological characteristics of tissues at a cellular level without requiring exogenous fluorophores. OCT is capable of imaging deeper into biological tissue (~1-2 mm) than confocal fluorescence microscopy. A theoretical analysis of the use of a fiber-bundle in spectral-domain OCT systems is presented.
The fiber-bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the optical coherence tomography data. However, the multi-mode characteristic of the fibers in the fiber-bundle affects the depth sensitivity of the imaging system. A description of light interference in a multi-mode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis.
NASA Astrophysics Data System (ADS)
Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.
2016-06-01
We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
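One simplified way to see how a stochastic, sorting-free renderer controls opacity: if each projected point covers a given pixel with some probability, the chance that the pixel is covered at least once sets the apparent opacity. This is an illustrative model, not the paper's exact formulation:

```python
import math

def points_for_opacity(alpha, coverage_per_point):
    """Number of independently placed points needed so that a pixel is
    covered (hence appears opaque) with probability alpha, when each
    point covers the pixel with probability coverage_per_point.

    From P(covered) = 1 - (1 - p)**n, solve for n and round up.
    Both parameters are illustrative assumptions.
    """
    return math.ceil(math.log(1.0 - alpha) / math.log(1.0 - coverage_per_point))
```

Raising the point density (or averaging more stochastic renderings) thus raises opacity smoothly, with no per-primitive depth sort required.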
Absorption characterization of immersion medium for multiphoton microscopy at the 1700nm window
NASA Astrophysics Data System (ADS)
Wen, Wenhui; Qiu, Ping
2017-02-01
Larger imaging depth is the goal of almost all imaging modalities, including multiphoton microscopy (MPM). Recently, it has been demonstrated that excitation in the 1700 nm window helps extend imaging depth in MPM, optical coherence tomography, and photoacoustic imaging compared with excitation at other wavelengths. In MPM, immersion objective lenses with high numerical aperture (NA) are typically used to achieve better resolution, higher signal collection efficiency, and stronger signal generation. Although physically short (millimeter scale), the extra optical path traversed by the excitation light through the immersion medium inevitably introduces absorption of the excitation light, and as a result leads to a decrease in signal generation. Here we demonstrate experimental characterization of the absorption spectra of various immersion media at the 1700 nm window, including water (H2O), deuterium oxide (D2O), and several brands of immersion oil. Our results identify either the best immersion medium for a specific wavelength, or the best wavelength for a specific immersion medium, at the 1700 nm window. Furthermore, through quantitative MPM experiments comparing different immersion media, we show that MPM signal levels can be enhanced more than tenfold simply by selecting the proper immersion medium, in good agreement with theoretical expectation based on the absorption measurements. Our results offer guidelines for signal optimization in MPM at the 1700 nm window.
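The effect described above follows Beer-Lambert attenuation, with an n-photon signal scaling as the n-th power of the transmitted excitation intensity. The numbers below are placeholders, not measured values from the paper:

```python
import math

def mpm_signal_factor(mu_a_per_mm, path_mm, order=2):
    """Relative multiphoton signal after the excitation passes through
    an absorbing immersion medium.

    Transmission T = exp(-mu_a * L) (Beer-Lambert), and an n-photon
    signal scales as T**n (n = 2 for two-photon excitation). mu_a is
    medium- and wavelength-dependent (e.g. H2O vs D2O at 1700 nm);
    the values used here are illustrative.
    """
    t = math.exp(-mu_a_per_mm * path_mm)
    return t ** order
```

Comparing this factor for two media with different mu_a at the same wavelength gives the expected signal-enhancement ratio from switching immersion medium.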
LETTER TO THE EDITOR: Combined optical and single photon emission imaging: preliminary results
NASA Astrophysics Data System (ADS)
Boschi, Federico; Spinelli, Antonello E.; D'Ambrosio, Daniela; Calderan, Laura; Marengo, Mario; Sbarbati, Andrea
2009-12-01
In vivo optical imaging instruments are generally devoted to the acquisition of light coming from fluorescence or bioluminescence processes. Recently, an instrument was conceived with radioisotopic detection capabilities (Kodak in Vivo Multispectral System F) based on the conversion of x-rays from the phosphor screen. The goal of this work is to demonstrate that an optical imager (IVIS 200, Xenogen Corp., Alameda, USA), designed for in vivo acquisition of small animals in bioluminescent and fluorescent modalities, can also be employed to detect signals from radioactive tracers. Our system is based on scintillator crystals for the conversion of high-energy rays and a collimator. No hardware modifications are required. Crystals alone permit the acquisition of photons coming from an in vivo 20 g nude mouse injected with a solution of methyl diphosphonate technetium-99m (Tc99m-MDP). With scintillator crystals and collimators, a set of measurements aimed at fully characterizing the system resolution was carried out. More precisely, the system point spread function and modulation transfer function were measured at different source depths. Results show that the system resolution is always better than 1.3 mm when the source depth is less than 10 mm. The resolution of the images obtained with radioactive tracers is comparable with the resolution achievable with dedicated techniques. Moreover, it is possible to detect both optical and nuclear tracers, or bi-modal tracers, with a single instrument.
Comparison of seven optical clearing methods for mouse brain
NASA Astrophysics Data System (ADS)
Wan, Peng; Zhu, Jingtan; Yu, Tingting; Zhu, Dan
2018-02-01
Recently, a variety of tissue optical clearing techniques have been developed to reduce light scattering for deeper imaging and three-dimensional reconstruction of tissue structures. Combined with optical imaging techniques and diverse labeling methods, these clearing methods have significantly promoted the development of neuroscience. However, most of the protocols were proposed for specific tissue types. Though some comparison results exist, the clearing methods covered are limited and the evaluation indices lack uniformity, which makes it difficult to select a best-fit protocol for clearing in practical applications. Hence, it is necessary to systematically assess and compare these clearing methods. In this work, we evaluated the performance of seven typical clearing methods, including 3DISCO, uDISCO, SeeDB, ScaleS, ClearT2, CUBIC and PACT, on mouse brain samples. First, we compared the clearing capability on both brain slices and whole brains by observing brain transparency. Further, we evaluated fluorescence preservation and the increase in imaging depth. The results showed that 3DISCO, uDISCO and PACT exhibited excellent clearing capability on mouse brains, ScaleS and SeeDB rendered moderate transparency, while ClearT2 was the worst. Among these methods, ScaleS was the best at fluorescence preservation, and PACT achieved the highest increase in imaging depth. This study is expected to provide an important reference for users in choosing the most suitable brain optical clearing method.
NASA Astrophysics Data System (ADS)
Belfield, Kevin D.; Yao, Sheng; Kim, Bosung; Yue, Xiling
2016-03-01
Imaging biological samples with two-photon fluorescence (2PF) microscopy has the unique advantage of yielding high-contrast 3D subcellular images at depths of up to several millimeters. 2PF probes that absorb and emit in the near-IR region need to be developed. Two-photon excitation (2PE) wavelengths are less of a concern, since 2PE uses a wavelength double the one-photon absorption wavelength of the probe, so the 2PE wavelengths of probes absorbing even at visible wavelengths fall in the NIR region. Therefore, probes that fluoresce in the near-IR region with high quantum yields are needed. A series of dyes based on 5-thienyl-2,1,3-benzothiadiazole and 5-thienyl-2,1,3-benzoselenadiazole cores were synthesized as near-infrared two-photon fluorophores. Fluorescence maxima at wavelengths as long as 714 nm and fluorescence quantum yields as high as 0.67 were achieved. The fluorescence quantum yields of the dyes were nearly constant, regardless of solvent polarity. These diazoles exhibited large Stokes shifts (up to 114 nm), high two-photon absorption cross sections (up to 2,800 GM), and a high two-photon fluorescence figure of merit (FM, 1.04×10^-2 GM). Cells incubated on a 3D scaffold with one of the new probes (encapsulated in Pluronic micelles) exhibited bright fluorescence, enabling 3D two-photon fluorescence imaging to a depth of 100 µm.
3D on-chip microscopy of optically cleared tissue
NASA Astrophysics Data System (ADS)
Zhang, Yibo; Shin, Yoonjung; Sung, Kevin; Yang, Sam; Chen, Harrison; Wang, Hongda; Teng, Da; Rivenson, Yair; Kulkarni, Rajan P.; Ozcan, Aydogan
2018-02-01
Traditional pathology relies on tissue biopsy, micro-sectioning, immunohistochemistry and microscopic imaging, which are relatively expensive and labor-intensive, and therefore less accessible in resource-limited areas. Low-cost tissue clearing techniques, such as the simplified CLARITY method (SCM), promise to reduce the cost of disease diagnosis by providing 3D imaging and phenotyping of thicker tissue samples with simpler preparation steps. However, the mainstream imaging approach for cleared tissue, fluorescence microscopy, suffers from high cost, photobleaching and signal fading. As an alternative to fluorescence, here we demonstrate 3D imaging of SCM-cleared tissue using on-chip holography, which is based on pixel super-resolution and multi-height phase recovery algorithms to digitally compute the sample's amplitude and phase images at various z-slices/depths through the sample. The tissue clearing procedures and the lens-free imaging system were jointly optimized to find the best illumination wavelength, tissue thickness, staining solution pH, and number of hologram heights to maximize the imaged tissue volume and minimize the amount of acquired data, while maintaining a high contrast-to-noise ratio for the imaged cells. After this optimization, we achieved 3D imaging of a 200-μm thick cleared mouse brain tissue sample over a field of view of ~20 mm², and the resulting 3D z-stack agrees well with images acquired with a scanning lens-based microscope (20×, 0.75 NA). Moreover, the lens-free microscope achieves an order-of-magnitude better data efficiency than its lens-based counterparts for volumetric imaging of samples. The presented low-cost, high-throughput lens-free tissue imaging technique enabled by CLARITY can be used in various biomedical applications in low-resource settings.
Improving depth estimation from a plenoptic camera by patterned illumination
NASA Astrophysics Data System (ADS)
Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.
2015-05-01
Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the light-field from a scene. It requires a microlens array placed between the objective lens and the sensor of the imaging device, and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, manipulate the aperture synthetically, and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image have sufficient features for the registration; in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.
Chen, Zhixing; Wei, Lu; Zhu, Xinxin; Min, Wei
2012-08-13
It is highly desirable to be able to optically probe biological activities deep inside live organisms. By employing spatially confined excitation via a nonlinear transition, multiphoton fluorescence microscopy has become indispensable for imaging scattering samples. However, as the incident laser power drops exponentially with imaging depth due to scattering loss, the out-of-focus fluorescence eventually overwhelms the in-focus signal. The resulting loss of imaging contrast defines a fundamental imaging-depth limit, which cannot be overcome by increasing the excitation intensity. Herein we propose to significantly extend this depth limit by multiphoton activation and imaging (MPAI) of photo-activatable fluorophores. The imaging contrast is drastically improved due to the created disparity of bright-dark quantum states in space. We demonstrate this new principle by both analytical theory and experiments on tissue phantoms labeled with synthetic caged fluorescein dye or genetically encodable photoactivatable GFP.
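The depth limit can be seen from a toy model: the ballistic power reaching the focus decays exponentially with depth, and the two-photon signal goes as its square, while the out-of-focus background decays far more slowly (held constant here for simplicity). All numbers are illustrative, not from the paper:

```python
import math

def focal_contrast(depth, scattering_mfp, background=1e-4):
    """Toy model of the two-photon imaging-depth limit.

    Ballistic power reaching the focus decays as exp(-z/ls), so the
    in-focus two-photon signal falls as exp(-2*z/ls), while out-of-focus
    background (a constant placeholder here) does not. When the returned
    contrast approaches 1, the depth limit has been reached.
    """
    in_focus = math.exp(-2.0 * depth / scattering_mfp)
    return in_focus / background
```

MPAI extends the limit by confining the bright (activated) state to the focal region, which effectively suppresses the background term rather than boosting the numerator.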
Bessel light sheet structured illumination microscopy
NASA Astrophysics Data System (ADS)
Noshirvani Allahabadi, Golchehr
Biomedical researchers using animals to model disease and treatment need fast, deep, noninvasive, and inexpensive multi-channel imaging methods. Traditional fluorescence microscopy meets those criteria only to an extent. Specifically, two-photon and confocal microscopy, the two most commonly used methods, are limited in penetration depth, cost, resolution, and field of view; in addition, two-photon microscopy has limited multi-channel imaging ability. Light sheet microscopy, a fast-developing 3D fluorescence imaging method, offers attractive advantages over traditional two-photon and confocal microscopy. It is much better suited to in vivo 3D time-lapse imaging, owing to its selective illumination of a single tissue layer, superior speed, low light exposure, high penetration depth, and low levels of photobleaching. However, standard light sheet microscopy using Gaussian beam excitation has two main disadvantages: 1) the field of view (FOV) is limited by the depth of focus of the Gaussian beam, and 2) light-sheet images can be degraded by scattering, which limits the penetration of the excitation beam and blurs emission images in deep tissue layers. While two-sided sheet illumination, which doubles the field of view by illuminating the sample from opposite sides, offers a potential solution, it adds complexity and cost to the imaging system. We investigate a new technique to address these limitations: Bessel light sheet microscopy in combination with incoherent nonlinear structured illumination microscopy (SIM). Results demonstrate that, at visible wavelengths, Bessel excitation penetrates up to 250 microns into scattering media with single-sided illumination. The Bessel light sheet microscope achieves confocal-level resolution: 0.3 μm lateral and 1 μm axial.
Incoherent nonlinear SIM further reduces the diffused background in Bessel light sheet images, resulting in confocal-quality images in thick tissue. The technique was applied to live transgenic zebrafish tg(kdrl:GFP), and the sub-cellular structure of the fish vasculature, genetically labeled with GFP, was captured in 3D. The superior speed of the microscope enables the acquisition of signal from 200 layers of a thick sample in 4 minutes. The compact microscope uses exclusively off-the-shelf components and offers a low-cost imaging solution for studying small animal models or tissue samples.
NASA Astrophysics Data System (ADS)
Pande-Chhetri, Roshan
High resolution hyperspectral imagery (airborne or ground-based) is gaining momentum as a useful analytical tool in various fields including agriculture and aquatic systems. These images are often contaminated with stripes and noise, resulting in a lower signal-to-noise ratio, especially over aquatic regions where the signal is naturally low. This research investigates effective methods for filtering high spatial resolution hyperspectral imagery and for using the imagery in water quality parameter estimation and aquatic vegetation classification. The striping pattern of the hyperspectral imagery is non-parametric and difficult to filter. In this research, a de-striping algorithm based on wavelet analysis and adaptive Fourier domain normalization was examined. This algorithm proved superior to other available algorithms and yielded the highest Peak Signal to Noise Ratio improvement. The algorithm was implemented on individual image bands and on selected bands of the Maximum Noise Fraction (MNF) transformed images. The results showed that image filtering in the MNF domain was efficient and produced the best results. The study investigated methods of analyzing hyperspectral imagery to estimate water quality parameters and to map aquatic vegetation in case-2 waters. Ground-based hyperspectral imagery was analyzed to determine chlorophyll-a (Chl-a) concentrations in aquaculture ponds. Two-band and three-band indices were implemented, and the effect of using submerged reflectance targets was evaluated. Laboratory-measured values were found to be in strong correlation with two-band and three-band spectral indices computed from the hyperspectral image. Coefficient of determination (R²) values were 0.833 and 0.862 without submerged targets, and stronger values of 0.975 and 0.982 were obtained using submerged targets. Airborne hyperspectral images were used to detect and classify aquatic vegetation in a black river estuarine system.
Image normalization for water surface reflectance and water depth was conducted, and non-parametric classifiers such as ANN, SVM and SAM were tested and compared. Quality assessment indicated better classification and detection when non-parametric classifiers were applied to normalized or depth-invariant transform images. The best classification accuracy, 73%, was achieved when ANN was applied to the normalized image, and the best detection accuracy, around 92%, was obtained when SVM or SAM was applied to depth-invariant images.
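The two-band index and the coefficient of determination used to evaluate it can be sketched as follows. The band choice and data are illustrative, not the dissertation's actual bands or measurements:

```python
import numpy as np

def two_band_index(r_nir, r_red):
    """Simple two-band reflectance ratio index of the kind used to
    relate image spectra to chlorophyll-a; band choice is illustrative."""
    return r_nir / r_red

def r_squared(y_true, y_pred):
    """Coefficient of determination (R^2) between lab-measured values
    and index-predicted values: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

In practice the index values are regressed against laboratory Chl-a measurements, and R² of that regression is the figure reported above.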
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, X; Lou, K; Rice University, Houston, TX
Purpose: To develop a practical and compact preclinical PET system with innovative technologies for the substantially improved imaging performance required by advanced imaging applications. Methods: Several key components, comprising the detector, readout electronics and data acquisition, have been developed and evaluated to achieve a leap in imaging performance over a prototype animal PET we had developed previously. The new detector module consists of an 8×8 array of 1.5×1.5×30 mm³ LYSO scintillators with each end coupled to a latest-generation 4×4 array of 3×3 mm² silicon photomultipliers (with ~0.2 mm insensitive gap between pixels) through a 2.0 mm thick transparent light spreader. The scintillator surface and reflector/coupling were designed and fabricated to preserve an air gap, achieving higher depth-of-interaction (DOI) resolution and better overall detector performance. Front-end readout electronics with an upgraded 16-channel ASIC were newly developed and tested, as was the compact, high-density FPGA-based data acquisition and transfer system targeting a 10 M/s coincidence counting rate with low power consumption. The energy, timing and DOI resolutions of the new detector module with the data acquisition system were evaluated. An initial Na-22 point source image was acquired with 2 rotating detectors to assess the system's imaging capability. Results: There are no insensitive gaps at the detector edge, so the module can be tiled into a large-scale detector panel. All 64 crystals inside the detector were clearly separated in a flood-source image. Measured energy, timing, and DOI resolutions are around 17%, 2.7 ns and 1.96 mm (mean values). A point source image was acquired successfully without detector/electronics calibration or data correction.
Conclusion: The newly developed detector and readout electronics will enable the targeted scalable and compact PET system in a stationary configuration, with >15% sensitivity, ~1.3 mm uniform imaging resolution, and fast acquisition counting rate capability, for substantially improved imaging and quantification performance in small animal imaging and image-guided radiotherapy applications. This work was supported by a research award RP120326 from the Cancer Prevention and Research Institute of Texas.
Hyperspectral and multispectral bioluminescence optical tomography for small animal imaging.
Chaudhari, Abhijit J; Darvas, Felix; Bading, James R; Moats, Rex A; Conti, Peter S; Smith, Desmond J; Cherry, Simon R; Leahy, Richard M
2005-12-07
For bioluminescence imaging studies in small animals, it is important to be able to accurately localize the three-dimensional (3D) distribution of the underlying bioluminescent source. The spectrum of light produced by the source that escapes the subject varies with the depth of the emission source because of the wavelength-dependence of the optical properties of tissue. Consequently, multispectral or hyperspectral data acquisition should help in the 3D localization of deep sources. In this paper, we describe a framework for fully 3D bioluminescence tomographic image acquisition and reconstruction that exploits spectral information. We describe regularized tomographic reconstruction techniques that use semi-infinite slab or FEM-based diffusion approximations of photon transport through turbid media. Singular value decomposition analysis was used for data dimensionality reduction and to illustrate the advantage of using hyperspectral rather than achromatic data. Simulation studies in an atlas-mouse geometry indicated that sub-millimeter resolution may be attainable given accurate knowledge of the optical properties of the animal. A fixed arrangement of mirrors and a single CCD camera were used for simultaneous acquisition of multispectral imaging data over most of the surface of the animal. Phantom studies conducted using this system demonstrated our ability to accurately localize deep point-like sources and show that a resolution of 1.5 to 2.2 mm for depths up to 6 mm can be achieved. We also include an in vivo study of a mouse with a brain tumour expressing firefly luciferase. Co-registration of the reconstructed 3D bioluminescent image with magnetic resonance images indicated good anatomical localization of the tumour.
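The SVD-based data-dimensionality-reduction step mentioned above can be sketched with a truncated SVD. The matrix shape and its interpretation (detectors × wavelengths) are assumptions for illustration:

```python
import numpy as np

def reduce_spectra(measurements, k):
    """Truncated SVD of a (n_detectors x n_wavelengths) measurement
    matrix: keep the k leading singular components.

    Returns the rank-k approximation and the fraction of signal energy
    (sum of squared singular values) it retains -- a common way to
    judge how many spectral components carry useful information.
    """
    u, s, vt = np.linalg.svd(measurements, full_matrices=False)
    approx = (u[:, :k] * s[:k]) @ vt[:k]
    retained = (s[:k] ** 2).sum() / (s ** 2).sum()
    return approx, retained
```

A rapid fall-off of the retained-energy curve with k for achromatic data, versus a slower one for hyperspectral data, is the kind of comparison such an analysis supports.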
Initial experiments with gel-water: towards MRI-linac dosimetry and imaging.
Alnaghy, Sarah J; Gargett, Maegan; Liney, Gary; Petasecca, Marco; Begg, Jarrad; Espinoza, Anthony; Newall, Matthew K; Duncan, Mitchell; Holloway, Lois; Lerch, Michael L F; Lazea, Mircea; Rosenfeld, Anatoly B; Metcalfe, Peter
2016-12-01
Tracking the position of a moving radiation detector in time and space during data acquisition can replicate 4D image-guided radiotherapy (4DIGRT). Magnetic resonance imaging (MRI)-linacs need MRI-visible detectors to achieve this; however, imaging solid phantoms is an issue. Hence, gel-water, a material that provides signal for MRI visibility and which will, in future work, replace solid water for an MRI-linac 4DIGRT quality assurance tool, is discussed. MR and CT images of gel-water were acquired for visualisation and electron density verification. Characterisation of gel-water at 0 T was compared to Gammex-RMI solid water, using MagicPlate-512 (M512) and an RMI Attix chamber; this included percentage depth dose, tissue-phantom ratio (TPR 20/10 ), tissue-maximum ratio (TMR), profiles, output factors, and a gamma analysis to investigate field penumbral differences. MR images of a non-powered detector in gel-water demonstrated detector visualisation. The CT-determined gel-water electron density agreed with the calculated value of 1.01. Gel-water depth dose data demonstrated a maximum deviation from solid water of 0.7% for M512 and 2.4% for the Attix chamber; TPR 20/10 and TMR differed by 2.1% and 1.0%, respectively. FWHM and output factor differences between materials were ≤0.3 and ≤1.4%. M512 data passed gamma analysis with 100% of points within a 2%, 2 mm tolerance for multileaf collimator defined fields. Gel-water was shown to be tissue-equivalent for dosimetry and a feasible option to replace solid water.
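The 2%, 2 mm gamma analysis mentioned above can be sketched with a minimal 1-D global gamma index on synthetic profiles. The Gaussian dose profiles, the exhaustive point search, and the function name are illustrative simplifications, not the study's actual QA software:

```python
import numpy as np

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose, dose_tol=0.02, dist_tol=2.0):
    """Simplified 1-D global gamma analysis: dose_tol is a fraction of the
    reference maximum, dist_tol is in mm (illustrative only)."""
    d_norm = dose_tol * ref_dose.max()
    gammas = []
    for x_r, d_r in zip(ref_pos, ref_dose):
        # Generalized distance in the combined dose/space metric to every
        # evaluated point; gamma is the minimum over the evaluated profile.
        g2 = ((eval_pos - x_r) / dist_tol) ** 2 + ((eval_dose - d_r) / d_norm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.array(gammas)

# Hypothetical profiles: the evaluated profile is shifted by 0.5 mm,
# well within the 2 mm distance-to-agreement tolerance.
x = np.arange(0.0, 50.0, 0.5)
ref = np.exp(-((x - 25.0) / 8.0) ** 2)
ev = np.exp(-((x - 25.5) / 8.0) ** 2)
g = gamma_index(x, ref, x, ev)
pass_rate = 100.0 * np.mean(g <= 1.0)
print(round(pass_rate, 1))  # 100.0
```

A point passes when gamma ≤ 1, i.e. when some nearby evaluated point agrees within the combined dose/distance tolerance.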
Aydın, Zeliha Uğur; Özyürek, Taha; Keskin, Büşra; Baran, Talat
2018-04-12
The aim of the present study was to compare the effect of chitosan nanoparticle, QMix, and 17% EDTA on the penetrability of a calcium silicate-based sealer into dentinal tubules using a confocal laser scanning microscope (CLSM). Sixty mandibular premolar teeth were selected and randomly divided into three groups (n = 20) before root canal preparation according to the solution used in the final rinse protocol: chitosan, QMix, and EDTA groups. The teeth in each group were obturated with a single gutta-percha cone and TotalFill BC sealer labeled with 0.1% rhodamine B. The specimens were horizontally sectioned at 3 and 5 mm from the apex, and the slices were analyzed by CLSM (4×). The penetration depth, percentage, and area of sealer in the dentinal tubules were measured using ImageJ analysis software. The Kruskal-Wallis test was used for statistical analysis, with the level of significance set at 5%. The Kruskal-Wallis analysis showed a significant difference in the percentage and depth of sealer penetration among all groups at the 3 and 5 mm level sections (P < 0.05). Within the groups, the minimum sealer penetration depth was recorded for the chitosan nanoparticle group. Greater depth of sealer penetration was recorded at 5 mm as compared to 3 mm in all the groups. Within the limitations of the present study, it can be concluded that QMix and EDTA promoted sealer penetration superior to that achieved by chitosan nanoparticle.
Three-photon tissue imaging using moxifloxacin.
Lee, Seunghun; Lee, Jun Ho; Wang, Taejun; Jang, Won Hyuk; Yoon, Yeoreum; Kim, Bumju; Jun, Yong Woong; Kim, Myoung Joon; Kim, Ki Hean
2018-06-20
Moxifloxacin is an antibiotic used in clinics and has recently been used as a clinically compatible cell-labeling agent for two-photon (2P) imaging. Although 2P imaging with moxifloxacin labeling visualized cells inside tissues using enhanced fluorescence, the imaging depth was quite limited because of the relatively short excitation wavelength (<800 nm) used. In this study, the feasibility of three-photon (3P) excitation of moxifloxacin using a longer excitation wavelength and moxifloxacin-based 3P imaging were tested to increase the imaging depth. Moxifloxacin fluorescence via 3P excitation was detected at a >1000 nm excitation wavelength. After obtaining the excitation and emission spectra of moxifloxacin, moxifloxacin-based 3P imaging was applied to ex vivo mouse bladder and ex vivo mouse small intestine tissues and compared with moxifloxacin-based 2P imaging by switching the excitation wavelength of a Ti:sapphire oscillator between near 1030 and 780 nm. Both moxifloxacin-based 2P and 3P imaging visualized cellular structures in the tissues via moxifloxacin labeling, but the image contrast was better with 3P imaging than with 2P imaging at the same imaging depths. The imaging speed and imaging depth of moxifloxacin-based 3P imaging using a Ti:sapphire oscillator were limited by insufficient excitation power. Therefore, we constructed a new system for moxifloxacin-based 3P imaging using a high-energy Yb fiber laser at 1030 nm and used it for in vivo deep tissue imaging of a mouse small intestine. Moxifloxacin-based 3P imaging could be useful for clinical applications with enhanced imaging depth.
Image translation for single-shot focal tomography
Llull, Patrick; Yuan, Xin; Carin, Lawrence; ...
2015-01-01
Focus and depth of field are conventionally addressed by adjusting longitudinal lens position. More recently, combinations of deliberate blur and computational processing have been used to extend depth of field. Here we show that dynamic control of transverse and longitudinal lens position can be used to decode focus and extend depth of field without degrading static resolution. Our results suggest that optical image stabilization systems may be used for autofocus, extended depth of field, and 3D imaging.
NASA Astrophysics Data System (ADS)
Simon, Jacob C.; Curtis, Donald A.; Darling, Cynthia L.; Fried, Daniel
2018-02-01
In vivo and in vitro studies have demonstrated that near-infrared (NIR) light at λ=1300-1700-nm can be used to acquire high contrast images of enamel demineralization without interference of stains. The objective of this study was to determine if a relationship exists between the NIR image contrast of occlusal lesions and the depth of the lesion. Extracted teeth with varying amounts of natural occlusal decay were measured using a multispectral-multimodal NIR imaging system which captures λ=1300-nm occlusal transillumination and λ=1500-1700-nm cross-polarized reflectance images. Image analysis software was used to calculate the lesion contrast detected in both images from matched positions of each imaging modality. Samples were serially sectioned across the lesion with a precision saw, and polarized light microscopy was used to measure the respective lesion depth relative to the dentinoenamel junction. Lesion contrast measured from NIR cross-polarized reflectance images positively correlated (p<0.05) with increasing lesion depth, and a statistically significant difference between inner enamel and dentin lesions was observed. The lateral width of pit and fissure lesions measured in both NIR cross-polarized reflectance and NIR transillumination positively correlated with lesion depth.
NASA Astrophysics Data System (ADS)
Tavakolian, Pantea; Sivagurunathan, Koneswaran; Mandelis, Andreas
2017-07-01
Photothermal diffusion-wave imaging is a promising technique for non-destructive evaluation and medical applications. Several diffusion-wave techniques have been developed to produce depth-resolved planar images of solids and to overcome imaging depth and image blurring limitations imposed by the physics of parabolic diffusion waves. Truncated-Correlation Photothermal Coherence Tomography (TC-PCT) is the most successful class of these methodologies to date, providing 3-D subsurface visualization with maximum depth penetration and high axial and lateral resolution. To extend the depth range and the axial and lateral resolution, an in-depth analysis of TC-PCT, a novel imaging system with improved instrumentation, and an optimized reconstruction algorithm are developed, improving on the original TC-PCT technique. Thermal waves produced by a laser chirped pulsed heat source in a finite-thickness solid and the image reconstruction algorithm are investigated from the theoretical point of view. 3-D visualization of subsurface defects utilizing the new TC-PCT system is reported. The results demonstrate that this method is able to detect subsurface defects at a depth range of ˜4 mm in a steel sample, which exhibits a dynamic range improvement by a factor of 2.6 compared to the original TC-PCT. This depth does not represent the upper limit of the enhanced TC-PCT. Lateral resolution in the steel sample was measured to be ˜31 μm.
NASA Astrophysics Data System (ADS)
Dilbone, Elizabeth K.
Methods for spectrally based bathymetric mapping of rivers have mainly been developed and tested on clear-flowing, gravel-bedded channels, with limited application to turbid, sand-bedded rivers. Using hyperspectral images of the Niobrara River, Nebraska, and field-surveyed depth data, this study evaluated three methods of retrieving depth from remotely sensed data in a dynamic, sand-bedded channel. The first, regression-based approach paired in situ depth measurements and image pixel values to predict depth via Optimal Band Ratio Analysis (OBRA). The second approach used ground-based reflectance measurements to calibrate an OBRA relationship. For this approach, CASI images were atmospherically corrected to units of apparent surface reflectance using an empirical line calibration. For the final technique, we used Image-to-Depth Quantile Transformation (IDQT) to predict depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image-derived variable. OBRA yielded the lowest overall depth retrieval error (0.0047 m) and the highest observed versus predicted R2 (0.81). Although misalignment between field and image data was not problematic to OBRA's performance in this study, such issues present potential limitations to standard regression-based approaches like OBRA in dynamic, sand-bedded rivers. Field spectroscopy-based maps exhibited a slight shallow bias (0.0652 m) but provided reliable depth estimates for most of the study reach. IDQT had a strong deep bias, but still provided informative relative depth maps that portrayed general patterns of shallow and deep areas of the channel. The over-prediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the CDF of depth. While each of the techniques tested in this study demonstrated the potential to provide accurate depth estimates in sand-bedded rivers, each method was also subject to certain constraints and limitations.
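The OBRA calibration described above (regressing depth against a log band ratio and keeping the best-performing band pair) can be sketched as follows. The synthetic reflectance model, band count, and attenuation coefficients are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical calibration data: per-pixel reflectance in a few bands plus
# field-surveyed depths (synthetic values, for illustration only).
rng = np.random.default_rng(1)
n, n_bands = 300, 6
depth = rng.uniform(0.1, 2.0, n)
refl = np.empty((n, n_bands))
for b in range(n_bands):
    # Deeper water attenuates reflectance more strongly in some bands
    k = 0.2 + 0.5 * b / n_bands          # band-dependent attenuation (made up)
    refl[:, b] = 0.3 * np.exp(-k * depth) + rng.normal(0, 0.005, n)

# Optimal Band Ratio Analysis: try every band pair, regress depth on the
# log band ratio, and keep the pair with the highest R^2.
best = (None, -1.0)
for i in range(n_bands):
    for j in range(n_bands):
        if i == j:
            continue
        x = np.log(refl[:, i] / refl[:, j])   # OBRA predictor: ln band ratio
        slope, intercept = np.polyfit(x, depth, 1)
        pred = slope * x + intercept
        r2 = 1 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
        if r2 > best[1]:
            best = ((i, j), r2)
print(best)
```

The log ratio is used because, under exponential attenuation, it varies approximately linearly with depth while factors common to both bands cancel.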
NASA Astrophysics Data System (ADS)
Liu, Miaofeng
2017-07-01
In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most earlier methods, which require knowing in advance which pixels are corrupted, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because existing approaches perform poorly on images with large corruptions, or when inpainting low-resolution images, we also share parameters within local groups of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep network, skip connections are designed between symmetric convolutional layers. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and it works well when performing super-resolution and image inpainting simultaneously.
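The skip connections between symmetric layers can be illustrated with a toy 1-D forward pass. This numpy sketch stands in for real convolutional layers; the depth, weights, and signal are all made up, and the point is only the structural idea that an early layer's activations are added back at the symmetric later layer:

```python
import numpy as np

def conv1d(x, w):
    # 'same'-size 1-D convolution standing in for a conv layer (toy example)
    return np.convolve(x, w, mode="same")

def relu(v):
    return np.maximum(v, 0.0)

rng = np.random.default_rng(0)
w1, w2, w3, w4 = (rng.standard_normal(3) * 0.5 for _ in range(4))

x = rng.standard_normal(64)          # a 1-D "image" row
h1 = relu(conv1d(x, w1))             # layer 1
h2 = relu(conv1d(h1, w2))            # layer 2
h3 = relu(conv1d(h2, w3))            # layer 3
out = conv1d(h3 + h1, w4)            # skip connection from the symmetric layer
print(out.shape)
```

The skip path both carries fine detail forward past the bottleneck and gives gradients a short route back to early layers, which is what makes very deep encoder-decoder nets trainable.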
de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas
2018-01-23
High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.
On-Tree Mango Fruit Size Estimation Using RGB-D Images
Wang, Zhenglin; Verma, Brijesh
2017-01-01
In-field mango fruit sizing is useful for estimation of fruit maturation and size distribution, informing the decision to harvest, harvest resourcing (e.g., tray insert sizes), and marketing. In-field machine vision imaging has been used for fruit count, but assessment of fruit size from images also requires estimation of camera-to-fruit distance. Low cost examples of three technologies for assessment of camera to fruit distance were assessed: a RGB-D (depth) camera, a stereo vision camera and a Time of Flight (ToF) laser rangefinder. The RGB-D camera was recommended on cost and performance, although it functioned poorly in direct sunlight. The RGB-D camera was calibrated, and depth information matched to the RGB image. To detect fruit, a cascade detection with histogram of oriented gradients (HOG) feature was used, then Otsu’s method, followed by color thresholding was applied in the CIE L*a*b* color space to remove background objects (leaves, branches etc.). A one-dimensional (1D) filter was developed to remove the fruit pedicles, and an ellipse fitting method employed to identify well-separated fruit. Finally, fruit lineal dimensions were calculated using the RGB-D depth information, fruit image size and the thin lens formula. A Root Mean Square Error (RMSE) = 4.9 and 4.3 mm was achieved for estimated fruit length and width, respectively, relative to manual measurement, for which repeated human measures were characterized by a standard deviation of 1.2 mm. In conclusion, the RGB-D method for rapid in-field mango fruit size estimation is practical in terms of cost and ease of use, but cannot be used in direct intense sunshine. We believe this work represents the first practical implementation of machine vision fruit sizing in field, with practicality gauged in terms of cost and simplicity of operation. PMID:29182534
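The final sizing step above (fruit lineal dimension from pixel extent, RGB-D distance, and the thin lens formula) reduces to dividing the image size by the lens magnification. A minimal sketch, with all camera parameters and pixel counts chosen purely for illustration, not taken from the paper:

```python
def fruit_dimension_mm(pixel_extent, pixel_pitch_mm, focal_length_mm, distance_mm):
    """Estimate object size from its image extent via the thin-lens relation:
    magnification m = f / (d - f), object size = image size / m.
    All parameter values used below are illustrative assumptions."""
    image_size_mm = pixel_extent * pixel_pitch_mm
    magnification = focal_length_mm / (distance_mm - focal_length_mm)
    return image_size_mm / magnification

# Example: a fruit spanning 180 px on a sensor with 0.005 mm pixels,
# 4 mm focal length, 600 mm camera-to-fruit distance (hypothetical numbers)
size = fruit_dimension_mm(180, 0.005, 4.0, 600.0)
print(round(size, 1))  # 134.1 (mm)
```

This is why a per-fruit distance measurement (here from the RGB-D depth channel) is essential: the same pixel extent maps to very different physical sizes as the camera-to-fruit distance changes.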
VizieR Online Data Catalog: New planetary nebulae in LMC (Reid+, 2006)
NASA Astrophysics Data System (ADS)
Reid, W. A.; Parker, Q. A.
2006-05-01
Over the last few years, we have specially constructed additional deep, homogeneous, narrow-band Hα and matching broad-band 'SR' (Short Red) maps of the entire central 25deg2 of the LMC. These unique maps were obtained by co-adding 12 well-matched UKST 2-h Hα exposures and six 15-min equivalent SR-band exposures on the same field using high-resolution Tech-Pan film. The 'SuperCOSMOS' plate-measuring machine at the Royal Observatory Edinburgh (Hambly et al., 2001MNRAS.326.1279) has scanned, co-added and pixel-matched these exposures, creating 10-μm (0.67-arcsec) pixel data which go 1.35 and 1 mag deeper than the individual exposures, achieving the full canonical Poissonian depth gain, e.g. Bland-Hawthorn, Shopbell & Malin (1993AJ....106.2154B). This gives a depth of ~21.5 for the SR images and Requiv~22 for Hα (4.5x10-17erg/cm2/s/{AA}), which is at least 1 mag deeper than the best wide-field narrow-band LMC images currently available. (2 data files).
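The "full canonical Poissonian depth gain" quoted above follows directly from Poisson statistics: co-adding N equal exposures improves S/N by √N, i.e. by 2.5·log10(√N) = 1.25·log10(N) magnitudes. A quick check against the abstract's numbers:

```python
import math

def coadd_depth_gain_mag(n_exposures):
    """Canonical Poissonian depth gain from co-adding n equal exposures:
    S/N improves by sqrt(n), i.e. 1.25 * log10(n) magnitudes."""
    return 2.5 * math.log10(math.sqrt(n_exposures))

print(round(coadd_depth_gain_mag(12), 2))  # 1.35 mag for the 12 Halpha exposures
print(round(coadd_depth_gain_mag(6), 2))   # 0.97 mag (~1 mag) for the 6 SR exposures
```

Both values reproduce the "1.35 and 1 mag deeper" figures stated for the 12 Hα and 6 SR co-adds.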
Modeling of Composite Scenes Using Wires, Plates and Dielectric Parallelized (WIPL-DP)
2006-06-01
...transmitter platform for use in image formation and solves the data communications problem. The ability to perform subsurface imaging to depths of 200' has already been demonstrated by Brown in [3] and presented in Figure 3 above.
A joint encryption/watermarking system for verifying the reliability of medical images.
Bouslimi, Dalel; Coatrieux, Gouenou; Cozic, Michel; Roux, Christian
2012-09-01
In this paper, we propose a joint encryption/watermarking system for the purpose of protecting medical images. This system is based on an approach which combines a substitutive watermarking algorithm, the quantization index modulation, with an encryption algorithm: a stream cipher algorithm (e.g., the RC4) or a block cipher algorithm (e.g., the AES in cipher block chaining (CBC) mode of operation). Our objective is to give access to the outcomes of the image integrity and of its origin even though the image is stored encrypted. If watermarking and encryption are conducted jointly at the protection stage, watermark extraction and decryption can be applied independently. The security analysis of our scheme and experimental results achieved on 8-bit depth ultrasound images as well as on 16-bit encoded positron emission tomography images demonstrate the capability of our system to securely make available security attributes in both spatial and encrypted domains while minimizing image distortion. Furthermore, by making use of the AES block cipher in CBC mode, the proposed system is compliant with or transparent to the DICOM standard.
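Quantization index modulation, the substitutive watermarking algorithm named above, embeds each bit by re-quantizing a sample onto one of two interleaved lattices; extraction picks the lattice whose nearest point is closest. The following is a textbook scalar-QIM sketch with invented pixel values and step size, not the authors' exact embedding:

```python
import numpy as np

def qim_embed(samples, bits, delta=8.0):
    """Scalar QIM: quantize each sample onto the lattice selected by its bit
    (bit 0 -> multiples of delta, bit 1 -> multiples of delta shifted by delta/2)."""
    samples = np.asarray(samples, dtype=float)
    offsets = np.where(np.asarray(bits) == 0, 0.0, delta / 2.0)
    return np.round((samples - offsets) / delta) * delta + offsets

def qim_extract(watermarked, delta=8.0):
    # Decode by choosing the lattice (bit 0 or bit 1) with the nearest point.
    d0 = np.abs(watermarked - np.round(watermarked / delta) * delta)
    d1 = np.abs(watermarked - (np.round((watermarked - delta / 2) / delta) * delta + delta / 2))
    return (d1 < d0).astype(int)

pixels = np.array([123.0, 40.2, 77.9, 200.4])   # hypothetical pixel values
bits = [1, 0, 1, 0]
wm = qim_embed(pixels, bits)
print(list(qim_extract(wm)))  # [1, 0, 1, 0]
```

The embedding distortion is bounded by delta/2 per sample, which is why QIM can keep image distortion low while still allowing blind extraction.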
Laser speckle imaging based on photothermally driven convection
Regan, Caitlin; Choi, Bernard
2016-01-01
Laser speckle imaging (LSI) is an interferometric technique that provides information about the relative speed of moving scatterers in a sample. Photothermal LSI overcomes limitations in depth resolution faced by conventional LSI by incorporating an excitation pulse to target absorption by hemoglobin within the vascular network. Here we present results from experiments designed to determine the mechanism by which photothermal LSI decreases speckle contrast. We measured the impact of mechanical properties on speckle contrast, as well as the spatiotemporal temperature dynamics and bulk convective motion occurring during photothermal LSI. Our collective data strongly support the hypothesis that photothermal LSI achieves a transient reduction in speckle contrast due to bulk motion associated with thermally driven convection. The ability of photothermal LSI to image structures below a scattering medium may have important preclinical and clinical applications. PMID:26927221
Imaging of Fluoride Ion in Living Cells and Tissues with a Two-Photon Ratiometric Fluorescence Probe
Zhu, Xinyue; Wang, Jianxi; Zhang, Jianjian; Chen, Zhenjie; Zhang, Haixia; Zhang, Xiaoyu
2015-01-01
A reaction-based two-photon (TP) ratiometric fluorescence probe Z2 has been developed and successfully applied to detect and image fluoride ion in living cells and tissues. The Z2 probe was designed to utilize an ICT mechanism between n-butylnaphthalimide as a fluorophore and tert-butyldiphenylsilane (TBDPS) as a response group. Upon addition of fluoride ion, the Si-O bond in the Z2 would be cleaved, and then a stronger electron-donating group was released. The fluorescent changes at 450 and 540 nm, respectively, made it possible to achieve ratiometric fluorescence detection. The results indicated that the Z2 could ratiometrically detect and image fluoride ion in living cells and tissues at a depth of 250 μm by two-photon microscopy (TPM). PMID:25594597
3D resolved mapping of optical aberrations in thick tissues
Zeng, Jun; Mahou, Pierre; Schanne-Klein, Marie-Claire; Beaurepaire, Emmanuel; Débarre, Delphine
2012-01-01
We demonstrate a simple method for mapping optical aberrations with 3D resolution within thick samples. The method relies on the local measurement of the variation in image quality with externally applied aberrations. We discuss the accuracy of the method as a function of the signal strength and of the aberration amplitude and we derive the achievable resolution for the resulting measurements. We then report on measured 3D aberration maps in human skin biopsies and mouse brain slices. From these data, we analyse the consequences of tissue structure and refractive index distribution on aberrations and imaging depth in normal and cleared tissue samples. The aberration maps allow the estimation of the typical aplanetism region size over which aberrations can be uniformly corrected. This method and data pave the way towards efficient correction strategies for tissue imaging applications. PMID:22876353
Rotary-scanning optical resolution photoacoustic microscopy
NASA Astrophysics Data System (ADS)
Qi, Weizhi; Xi, Lei
2016-10-01
Optical resolution photoacoustic microscopy (ORPAM) is currently one of the fastest evolving photoacoustic imaging modalities. It has a spatial resolution comparable to pure optical microscopic techniques such as epifluorescence microscopy, confocal microscopy, and two-photon microscopy, but also offers a deeper penetration depth. In this paper, we report a rotary-scanning (RS)-ORPAM that utilizes a galvanometer scanner integrated with an objective to achieve rotary laser scanning. A 15 MHz cylindrically focused ultrasonic transducer is mounted onto a motorized rotation stage to follow the optical scanning traces synchronously. To minimize the loss of signal-to-noise ratio, the acoustic focus is precisely adjusted to be confocal with the optical focus. Black tapes and carbon fibers were first imaged to evaluate the performance of the system, and then in vivo imaging of vasculature networks inside the ears and brains of mice is demonstrated using this system.
Legleiter, C.J.; Kinzel, P.J.; Overstreet, B.T.
2011-01-01
This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes. © 2011 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Rössler, Erik; Mattea, Carlos; Stapf, Siegfried
2015-02-01
Low field Nuclear Magnetic Resonance increases the contrast of the longitudinal relaxation rate in many biological tissues; one prominent example is hyaline articular cartilage. In order to take advantage of this increased contrast and to profile the depth-dependent variations, high resolution parameter measurements are carried out which can be of critical importance in an early diagnosis of cartilage diseases such as osteoarthritis. However, the maximum achievable spatial resolution of parameter profiles is limited by factors such as sensor geometry, sample curvature, and diffusion limitation. In this work, we report on high-resolution single-sided NMR scanner measurements with a commercial device, and quantify these limitations. The highest achievable spatial resolution on the used profiler, and the lateral dimension of the sensitive volume were determined. Since articular cartilage samples are usually bent, we also focus on averaging effects inside the horizontally aligned sensitive volume and their impact on the relaxation profiles. Taking these critical parameters into consideration, depth-dependent relaxation time profiles with the maximum achievable vertical resolution of 20 μm are discussed, and are correlated with diffusion coefficient profiles in hyaline articular cartilage in order to reconstruct T2 maps from the diffusion-weighted CPMG decays of apparent relaxation rates.
CUTIE: Cubesat Ultraviolet Transient Imaging Experiment
NASA Astrophysics Data System (ADS)
Cenko, Stephen B.; Bellm, Eric Christopher; Gal-Yam, Avishay; Gezari, Suvi; Gorjian, Varoujan; Jewell, April; Kruk, Jeffrey W.; Kulkarni, Shrinivas R.; Mushotzky, Richard; Nikzad, Shouleh; Piro, Anthony; Waxman, Eli; Ofek, Eran Oded
2017-01-01
We describe a mission concept for the Cubesat Ultraviolet Transient Imaging Experiment (CUTIE). CUTIE will image an area on the sky of ~ 1700 square degrees every ~ 95 min at near-UV wavelengths (260-320 nm) to a depth of 19.0 mag (AB). These capabilities represent orders of magnitude improvement over past UV imagers, allowing CUTIE to conduct the first true synoptic survey of the transient and variable sky in the UV bandpass. CUTIE will uniquely address key Decadal Survey science questions such as how massive stars end their lives, how super-massive black holes accrete material and influence their surroundings, and how suitable habitable-zone planets around low-mass stars are for hosting life. By partnering with upcoming ground-based time-domain surveys, CUTIE will further leverage its low-Earth orbit to provide a multi-wavelength view of the dynamic universe that can only be achieved from space. The remarkable sensitivity for such a small payload is achieved via the use of large format delta-doped CCDs; space qualifying this technology will serve as a key milestone towards the development of future large missions (Explorers and Surveyors). Finally, our innovative design in a 6U cubesat form factor will enable significant cost savings, accelerating the timeline from conception to on-sky operation (5 years; well matched for graduate student participation).
The Frontier Fields: Survey Design and Initial Results
NASA Astrophysics Data System (ADS)
Lotz, J. M.; Koekemoer, A.; Coe, D.; Grogin, N.; Capak, P.; Mack, J.; Anderson, J.; Avila, R.; Barker, E. A.; Borncamp, D.; Brammer, G.; Durbin, M.; Gunning, H.; Hilbert, B.; Jenkner, H.; Khandrika, H.; Levay, Z.; Lucas, R. A.; MacKenty, J.; Ogaz, S.; Porterfield, B.; Reid, N.; Robberto, M.; Royle, P.; Smith, L. J.; Storrie-Lombardi, L. J.; Sunnquist, B.; Surace, J.; Taylor, D. C.; Williams, R.; Bullock, J.; Dickinson, M.; Finkelstein, S.; Natarajan, P.; Richard, J.; Robertson, B.; Tumlinson, J.; Zitrin, A.; Flanagan, K.; Sembach, K.; Soifer, B. T.; Mountain, M.
2017-03-01
What are the faintest distant galaxies we can see with the Hubble Space Telescope (HST) now, before the launch of the James Webb Space Telescope? This is the challenge taken up by the Frontier Fields, a Director’s discretionary time campaign with HST and the Spitzer Space Telescope to see deeper into the universe than ever before. The Frontier Fields combines the power of HST and Spitzer with the natural gravitational telescopes of massive high-magnification clusters of galaxies to produce the deepest observations of clusters and their lensed galaxies ever obtained. Six clusters—Abell 2744, MACSJ0416.1-2403, MACSJ0717.5+3745, MACSJ1149.5+2223, Abell S1063, and Abell 370—have been targeted by the HST ACS/WFC and WFC3/IR cameras with coordinated parallel fields for over 840 HST orbits. The parallel fields are the second-deepest observations thus far by HST with 5σ point-source depths of ˜29th ABmag. Galaxies behind the clusters experience typical magnification factors of a few, with small regions magnified by factors of 10-100. Therefore, the Frontier Field cluster HST images achieve intrinsic depths of ˜30-33 mag over very small volumes. Spitzer has obtained over 1000 hr of Director’s discretionary imaging of the Frontier Field cluster and parallels in IRAC 3.6 and 4.5 μm bands to 5σ point-source depths of ˜26.5, 26.0 ABmag. We demonstrate the exceptional sensitivity of the HST Frontier Field images to faint high-redshift galaxies, and review the initial results related to the primary science goals.
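The intrinsic depths quoted above follow from the lensing magnification: a magnification μ brightens background sources by 2.5·log10(μ) magnitudes beyond the nominal image depth. A quick arithmetic check (the helper function name and inputs are illustrative):

```python
import math

def lensed_depth(ab_limit, magnification):
    """Intrinsic depth probed behind a lensing cluster: magnification mu
    brightens sources by 2.5 * log10(mu) magnitudes."""
    return ab_limit + 2.5 * math.log10(magnification)

# Nominal HST depth ~29 ABmag; typical (mu ~ 3) and strong (mu ~ 40) magnification
print(round(lensed_depth(29.0, 3), 1), round(lensed_depth(29.0, 40), 1))  # 30.2 33.0
```

With magnifications ranging from a few up to 10-100, this reproduces the ~30-33 mag intrinsic depths stated for the cluster fields, albeit only over the very small highly magnified volumes.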
Resolution limits of ultrafast ultrasound localization microscopy
NASA Astrophysics Data System (ADS)
Desailly, Yann; Pierre, Juliette; Couture, Olivier; Tanter, Mickael
2015-11-01
As in other imaging methods based on waves, the resolution of ultrasound imaging is limited by the wavelength. However, the diffraction limit can be overcome by super-localizing single events from isolated sources. In recent years, we developed plane-wave ultrasound allowing frame rates up to 20 000 fps. Ultrafast processes such as rapid movement or disruption of ultrasound contrast agents (UCA) can thus be monitored, providing us with distinct point-like sources that can be localized beyond the diffraction limit. We previously showed experimentally that resolutions beyond λ/10 can be reached in ultrafast ultrasound localization microscopy (uULM) using a 128 transducer matrix in reception. Higher resolutions are theoretically achievable, and the aim of this study is to predict the maximum resolution in uULM with respect to acquisition parameters (frequency, transducer geometry, sampling electronics). The accuracy of uULM is the error on the localization of a bubble, considered a point source in a homogeneous medium. The proposed model consists of two steps: determining the timing accuracy of the microbubble echo in radiofrequency data, then transferring this time accuracy into spatial accuracy. The simplified model predicts a maximum resolution of 40 μm for a 1.75 MHz transducer matrix composed of two rows of 64 elements. Experimental confirmation of the model was performed by flowing microbubbles within a 60 μm microfluidic channel and localizing their blinking under ultrafast imaging (500 Hz frame rate). The experimental resolution, determined as the standard deviation in the positioning of the microbubbles, was predicted within 6 μm (13%) of the theoretical values and followed the analytical relationship with respect to the number of elements and depth. Understanding the underlying physical principles determining the resolution of superlocalization will allow the optimization of the imaging setup for each organ.
Ultimately, accuracies better than the size of capillaries are achievable at several centimeter depths.
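The two-step logic of the model (timing accuracy of the echo in RF data, then conversion to spatial accuracy) can be sketched with generic textbook approximations. The constants, formulas, and parameter values below are illustrative assumptions, not the authors' exact derivation:

```python
import math

C = 1540.0  # assumed speed of sound in tissue (m/s)

def timing_accuracy(bandwidth_hz, snr_linear):
    """Cramer-Rao-style estimate of echo arrival-time jitter (s):
    jitter shrinks with bandwidth and SNR."""
    return 1.0 / (2.0 * math.pi * bandwidth_hz * snr_linear)

def axial_accuracy(sigma_t):
    """Map time jitter to depth jitter; the factor 2 accounts for the
    pulse-echo round trip."""
    return C * sigma_t / 2.0

def lateral_accuracy(sigma_z, depth_m, aperture_m, n_elements):
    """Toy triangulation scaling: lateral error grows with the
    depth/aperture ratio and averages down as sqrt(N) over elements."""
    return sigma_z * (depth_m / aperture_m) / math.sqrt(n_elements)
```

This reproduces the qualitative dependencies the abstract reports: accuracy improves with the number of elements and degrades with depth.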
Carrasco-Zevallos, O. M.; Keller, B.; Viehland, C.; Shen, L.; Waterman, G.; Todorich, B.; Shieh, C.; Hahn, P.; Farsiu, S.; Kuo, A. N.; Toth, C. A.; Izatt, J. A.
2016-01-01
Minimally-invasive microsurgery has resulted in improved outcomes for patients. However, operating through a microscope limits depth perception and fixes the visual perspective, which result in a steep learning curve to achieve microsurgical proficiency. We introduce a surgical imaging system employing four-dimensional (live volumetric imaging through time) microscope-integrated optical coherence tomography (4D MIOCT) capable of imaging at up to 10 volumes per second to visualize human microsurgery. A custom stereoscopic heads-up display provides real-time interactive volumetric feedback to the surgeon. We report that 4D MIOCT enhanced suturing accuracy and control of instrument positioning in mock surgical trials involving 17 ophthalmic surgeons. Additionally, 4D MIOCT imaging was performed in 48 human eye surgeries and was demonstrated to successfully visualize the pathology of interest in concordance with preoperative diagnosis in 93% of retinal surgeries and the surgical site of interest in 100% of anterior segment surgeries. In vivo 4D MIOCT imaging revealed sub-surface pathologic structures and instrument-induced lesions that were invisible through the operating microscope during standard surgical maneuvers. In select cases, 4D MIOCT guidance was necessary to resolve such lesions and prevent post-operative complications. Our novel surgical visualization platform achieves surgeon-interactive 4D visualization of live surgery which could expand the surgeon’s capabilities. PMID:27538478
Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor
NASA Astrophysics Data System (ADS)
Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.
2018-04-01
An RGB-D camera allows the capture of depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera; then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, once the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, according to the registration parameters obtained from ICP, the 3D scene from the RGB images can be accurately registered to the 3D scene from the depth images, and the fused point cloud can be obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrated the feasibility of the proposed method.
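The projection relationship between depth pixels and 3D space rests on the standard pinhole camera model. A minimal back-projection sketch (the intrinsic parameters `fx, fy, cx, cy` are hypothetical placeholders, not the rover camera's calibration):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a 3-D point cloud using
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
```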
Approach for scene reconstruction from the analysis of a triplet of still images
NASA Astrophysics Data System (ADS)
Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle
1997-03-01
Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a major challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual visits, 3D teleconferencing, and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built by using a fusion criterion that takes into account depth coherency, visibility constraints, and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the graylevel image is applied to extract the object masks. First, edge detection segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the different depth class numbers by using a coherence test on depth values according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.
Walker Ranch 3D seismic images
Robert J. Mellors
2016-03-01
Amplitude images (both vertical and depth slices) extracted from a 3D seismic reflection survey over the Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline spacing of 165 feet, using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters are plotted on the images. Stratigraphic information and nearby well tracks are added to the images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth are restricted for proprietary reasons. Data collection and processing were funded by Agua Caliente. Original data remain the property of Agua Caliente.
Comparison of the depth of an optic nerve head obtained using stereo retinal images and HRT
NASA Astrophysics Data System (ADS)
Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Aoyama, Akira; Hara, Takeshi; Kakogawa, Masakatsu; Fujita, Hiroshi; Yamamoto, Tetsuya
2007-03-01
The analysis of the optic nerve head (ONH) in the retinal fundus is important for the early detection of glaucoma. In this study, we investigate an automatic reconstruction method for producing the 3-D structure of the ONH from a stereo retinal image pair; the depth value of the ONH measured by using this method was compared with the measurement results determined from the Heidelberg Retina Tomograph (HRT). We propose a technique to obtain the depth value from the stereo image pair, which mainly consists of four steps: (1) cutout of the ONH region from the retinal images, (2) registration of the stereo pair, (3) disparity detection, and (4) depth calculation. In order to evaluate the accuracy of this technique, an eyeball phantom with a circular dent was used to model the ONH; the shape of the depression generated from the stereo image pair was compared with physically measured values, and the two were approximately consistent. The depth of the ONH obtained using the stereo retinal images was in accordance with the results obtained using the HRT. These results indicate that stereo retinal images could be useful for assessing the depth of the ONH for the diagnosis of glaucoma.
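Step (4), depth calculation from the detected disparity, follows the standard stereo triangulation relation Z = f·B/d. A minimal sketch (the focal length and baseline values in the usage are illustrative, not the fundus camera's parameters):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Classic stereo triangulation: Z = f * B / d.
    Depth comes out in the units of the baseline; larger disparity
    means the point is closer to the cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

For example, a 10-pixel disparity with a 1000-pixel focal length and 5 mm baseline gives a depth of 500 mm.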
Jin, Xin; Liu, Li; Chen, Yanqin; Dai, Qionghai
2017-05-01
This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. The derivation of the PSF is based on the Fresnel diffraction equation and an image formation analysis of a self-built imaging system, which is divided into two sub-systems to reflect the relay imaging properties of plenoptic camera 2.0. The variations in the PSF caused by changes in object depth and sensor position are analyzed. A mathematical model of the FSPSF is further derived and verified to be depth-invariant. Experiments on real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
High-frequency Pulse-compression Ultrasound Imaging with an Annular Array
NASA Astrophysics Data System (ADS)
Mamou, J.; Ketterling, J. A.; Silverman, R. H.
High-frequency ultrasound (HFU) allows fine-resolution imaging at the expense of limited depth-of-field (DOF) and shallow acoustic penetration depth. Coded-excitation imaging permits a significant increase in the signal-to-noise ratio (SNR) and therefore, the acoustic penetration depth. A 17-MHz, five-element annular array with a focal length of 31 mm and a total aperture of 10 mm was fabricated using a 25-μm thick piezopolymer membrane. An optimized 8-μs linear chirp spanning 6.5-32 MHz was used to excite the transducer. After data acquisition, the received signals were linearly filtered by a compression filter and synthetically focused. To compare the chirp-array imaging method with conventional impulse imaging in terms of resolution, a 25-μm wire was scanned and the -6-dB axial and lateral resolutions were computed at depths ranging from 20.5 to 40.5 mm. A tissue-mimicking phantom containing 10-μm glass beads was scanned, and backscattered signals were analyzed to evaluate SNR and penetration depth. Finally, ex-vivo ophthalmic images were formed and chirp-coded images showed features that were not visible in conventional impulse images.
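The chirp-excitation-plus-compression idea can be sketched with a generic matched filter. Only the 8-μs, 6.5-32 MHz chirp comes from the abstract; the sampling rate, trace length, and echo delay below are illustrative assumptions:

```python
import numpy as np

# Assumed acquisition parameters (illustrative)
fs = 200e6                 # sampling rate, Hz
dur = 8e-6                 # 8-us chirp duration (from the abstract)
f0, f1 = 6.5e6, 32e6       # chirp band (from the abstract)

n = round(fs * dur)
t = np.arange(n) / fs
k = (f1 - f0) / dur        # linear sweep rate
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# Synthetic received trace: the chirp echo at a known delay
rx = np.zeros(4096)
delay = 1000
rx[delay:delay + chirp.size] = chirp

# Linear compression filter = matched filter (correlation with the
# transmitted chirp); the long transmitted pulse collapses to a short
# compressed peak at the echo delay, recovering axial resolution while
# the long pulse carries the extra energy that boosts SNR.
compressed = np.correlate(rx, chirp, mode="valid")
peak = int(np.argmax(compressed))
```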
Snow Depth Depicted on Mt. Lyell by NASA Airborne Snow Observatory
2013-05-02
A natural color image of Mt. Lyell, the highest point in the Tuolumne River Basin (top image), is compared with a three-dimensional color composite image of Mt. Lyell from the NASA Airborne Snow Observatory depicting snow depth (bottom image).
True 3D digital holographic tomography for virtual reality applications
NASA Astrophysics Data System (ADS)
Downham, A.; Abeywickrema, U.; Banerjee, P. P.
2017-09-01
Previously, a single CCD camera has been used to record holograms of an object while the object is rotated about a single axis to reconstruct a pseudo-3D image, which does not show detailed depth information from all perspectives. To generate a true 3D image, the object has to be rotated through multiple angles and along multiple axes. In this work, to reconstruct a true 3D image including depth information, a die is rotated along two orthogonal axes, and holograms are recorded using a Mach-Zehnder setup, which are subsequently numerically reconstructed. This allows for the generation of multiple images containing phase (i.e., depth) information. These images, when combined, create a true 3D image with depth information which can be exported to a Microsoft® HoloLens for true 3D virtual reality.
NASA Astrophysics Data System (ADS)
Ando, Yoriko; Sawahata, Hirohito; Kawano, Takeshi; Koida, Kowa; Numano, Rika
2018-02-01
Bundled fiber optics allow in vivo imaging at deep sites in a body. The intrinsic optical contrast reveals detailed structures in blood vessels and organs. We developed a bundled-fiber-coupled endomicroscope, enabling stereoscopic three-dimensional (3-D) reflectance imaging with a multipositional illumination scheme. Two illumination sites were attached to obtain reflectance images with left and right illumination. Depth was estimated from the horizontal disparity between the two images under alternating illumination and was calibrated with targets of known depth. This depth reconstruction was applied to an animal model to obtain the 3-D structure of blood vessels of the cerebral cortex and the preputial gland. The 3-D endomicroscope could be instrumental in microlevel reflectance imaging, improving the precision of subjective depth perception, spatial orientation, and identification of anatomical structures.
NASA Astrophysics Data System (ADS)
Boroomand, Ameneh; Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka
2015-03-01
The axial resolution of Spectral Domain Optical Coherence Tomography (SD-OCT) images degrades with scanning depth due to the limited number of pixels and the pixel size of the camera, any aberrations in the spectrometer optics, and wavelength-dependent scattering and absorption in the imaged object [1]. Here we propose a novel algorithm which compensates for the blurring effect of the depth-dependent axial point spread function (PSF) introduced by these factors in SD-OCT images. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model. The aim is to compensate for the depth-dependent axial blur in SD-OCT images and simultaneously suppress the speckle noise which is inherent to all OCT images. Applying the proposed depth-dependent axial resolution enhancement technique to an OCT image of a cucumber considerably improved the axial resolution of the image, especially at higher imaging depths, and allowed for better visualization of cellular membranes and nuclei. Comparing the result of our proposed method with the conventional Lucy-Richardson deconvolution algorithm clearly demonstrates the efficiency of our proposed technique in better visualization and preservation of fine details and structures in the imaged sample, as well as better speckle noise suppression. This illustrates the potential usefulness of our proposed technique as a suitable replacement for hardware approaches, which are often very costly and complicated.
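The Lucy-Richardson baseline the authors compare against is the classic iterative deconvolution. A minimal 1-D sketch (the signal and PSF below are synthetic examples, not OCT data):

```python
import numpy as np

def richardson_lucy(observed, psf, iters=50):
    """Classic Richardson-Lucy iteration (1-D): multiplicative updates
    that converge toward the maximum-likelihood estimate under Poisson
    noise. The PSF is normalized to sum to 1."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]  # correlation = convolution with the flipped PSF
    est = np.full_like(observed, observed.mean())
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)  # guard against /0
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est
```

Note this baseline assumes a single, depth-independent PSF, which is exactly the limitation the proposed depth-dependent method addresses.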
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols provided image quality similar to that of the current protocols. Ordinal logistic regression analysis provided an in-depth, criterion-by-criterion evaluation, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
20 MHz/40 MHz Dual Element Transducers for High Frequency Harmonic Imaging
Kim, Hyung Ham; Cannata, Jonathan M.; Liu, Ruibin; Chang, Jin Ho; Silverman, Ronald H.; Shung, K. Kirk
2009-01-01
Concentric annular type dual element transducers for second harmonic imaging at 20 MHz/40 MHz were designed and fabricated to improve spatial resolution and depth of penetration for ophthalmic imaging applications. The outer ring element was designed to transmit the 20 MHz signal and the inner circular element was designed to receive the 40 MHz second harmonic signal. Lithium niobate (LiNbO3), with its low dielectric constant, was used as the piezoelectric material to achieve good electrical impedance matching. Double matching layers and conductive backing were used and optimized by KLM modeling to achieve high sensitivity and wide bandwidth for harmonic imaging and superior time-domain characteristics. Prototype transducers were fabricated and evaluated quantitatively and clinically. The average measured center frequency for the transmit ring element was 21 MHz and the one-way –3 dB bandwidth was greater than 50%. The 40 MHz receive element functioned at a 31 MHz center frequency with acceptable bandwidth to receive the attenuated and frequency-downshifted harmonic signal. The lateral beam profile for the 20 MHz ring element at the focus matched the Field II simulation results well, and the effect of the outer ring diameter was also examined. Images of the posterior segment of an excised pig eye and a choroidal nevus of a human eye were obtained for both single element and dual element transducers and compared to demonstrate the advantages of dual element harmonic imaging. PMID:19126492
Enabling Technologies for High-accuracy Multiangle Spectropolarimetric Imaging from Space
NASA Technical Reports Server (NTRS)
Diner, David J.; Macenka, Steven A.; Seshadri, Suresh; Bruce, Carl E.; Jau, Bruno; Chipman, Russell A.; Cairns, Brian; Keller, Christoph; Foo, Leslie D.
2004-01-01
Satellite remote sensing plays a major role in measuring the optical and radiative properties, environmental impact, and spatial and temporal distribution of tropospheric aerosols. In this paper, we envision a new generation of spaceborne imager that integrates the unique strengths of multispectral, multiangle, and polarimetric approaches, thereby achieving better accuracies in aerosol optical depth and particle properties than can be achieved using any one method by itself. Design goals include spectral coverage from the near-UV to the shortwave infrared; global coverage within a few days; intensity and polarimetric imaging simultaneously at multiple view angles; kilometer to sub-kilometer spatial resolution; and measurement of the degree of linear polarization for a subset of the spectral complement with an uncertainty of 0.5% or less. The latter requirement is technically the most challenging. In particular, an approach for dealing with inter-detector gain variations is essential to avoid false polarization signals. We propose using rapid modulation of the input polarization state to overcome this problem, using a high-speed variable retarder in the camera design. Technologies for rapid retardance modulation include mechanically rotating retarders, liquid crystals, and photoelastic modulators (PEMs). We conclude that the latter are the most suitable.
Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Liang, Kaicheng; Wang, Zhao; Cleveland, Cody; Booth, Lucas; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Cable, Alex E; Mashimo, Hiroshi; Langer, Robert; Traverso, Giovanni; Fujimoto, James G
2016-08-01
We demonstrate a micromotor balloon imaging catheter for ultrahigh speed endoscopic optical coherence tomography (OCT) which provides wide area, circumferential structural and angiographic imaging of the esophagus without contrast agents. Using a 1310 nm MEMS tunable wavelength swept VCSEL light source, the system has a 1.2 MHz A-scan rate and ~8.5 µm axial resolution in tissue. The micromotor balloon catheter enables circumferential imaging of the esophagus at 240 frames per second (fps) with a ~30 µm (FWHM) spot size. Volumetric imaging is achieved by proximal pullback of the micromotor assembly within the balloon at 1.5 mm/sec. Volumetric data consisting of 4200 circumferential images of 5,000 A-scans each over a 2.6 cm length, covering a ~13 cm² area, is acquired in <18 seconds. A non-rigid image registration algorithm is used to suppress motion artifacts from non-uniform rotational distortion (NURD), cardiac motion or respiration. En face OCT images at various depths can be generated. OCT angiography (OCTA) is computed using intensity decorrelation between sequential pairs of circumferential scans and enables three-dimensional visualization of vasculature. Wide area volumetric OCT and OCTA imaging of the swine esophagus in vivo is demonstrated.
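Intensity decorrelation between sequential scans, as used here for OCTA, can be sketched as one minus a normalized cross-correlation. This is a generic formulation for illustration, not necessarily the exact estimator used in the paper:

```python
import numpy as np

def decorrelation(frame_a, frame_b, eps=1e-12):
    """Intensity decorrelation between two sequential scans: 1 minus the
    normalized (zero-mean) cross-correlation. Static tissue decorrelates
    slowly (values near 0); flowing blood decorrelates quickly (values
    toward 1), which is what lets OCTA map vasculature."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return 1.0 - float((a * b).sum() / denom)
```

In practice the statistic is computed per pixel over small spatial windows rather than over whole frames as shown here.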
Methods for reverberation suppression utilizing dual frequency band imaging.
Rau, Jochen M; Måsøy, Svein-Erik; Hansen, Rune; Angelsen, Bjørn; Tangen, Thor Andreas
2013-09-01
Reverberations impair the contrast resolution of diagnostic ultrasound images. Tissue harmonic imaging is a common method to reduce these artifacts, but does not remove all reverberations. Dual frequency band imaging (DBI), utilizing a low frequency pulse which manipulates propagation of the high frequency imaging pulse, has been proposed earlier for reverberation suppression. This article adds two different methods for reverberation suppression with DBI: the delay corrected subtraction (DCS) and the first order content weighting (FOCW) method. Both methods utilize the propagation delay of the imaging pulse of two transmissions with alternating manipulation pressure to extract information about its depth of first scattering. FOCW further utilizes this information to estimate the content of first order scattering in the received signal. Initial evaluation is presented where both methods are applied to simulated and in vivo data. Both methods yield visual and measurable substantial improvement in image contrast. Comparing DCS with FOCW, DCS produces sharper images and retains more details while FOCW achieves best suppression levels and, thus, highest image contrast. The measured improvement in contrast ranges from 8 to 27 dB for DCS and from 4 dB up to the dynamic range for FOCW.
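The delay-correction idea behind DCS (align two receptions by their measured propagation delay before subtracting) can be caricatured with a cross-correlation delay estimate. This toy sketch omits the dual-band manipulation pulse entirely and is only an illustration of aligned subtraction, not the published method:

```python
import numpy as np

def estimate_delay(a, b):
    """Delay of trace b relative to trace a, in samples, taken from the
    cross-correlation peak."""
    xc = np.correlate(b, a, mode="full")
    return int(np.argmax(xc)) - (len(a) - 1)

def delay_corrected_subtraction(a, b):
    """Shift b back by its estimated delay and subtract: components that
    obey the estimated delay cancel, leaving the residual signal."""
    d = estimate_delay(a, b)
    return a - np.roll(b, -d), d
```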
Feasibility study of proton-based quality assurance of proton range compensator
NASA Astrophysics Data System (ADS)
Park, S.; Jeong, C.; Min, B. J.; Kwak, J.; Lee, J.; Cho, S.; Shin, D.; Lim, Y. K.; Park, S. Y.; Lee, S. B.
2013-06-01
All patient-specific range compensators (RCs) are customized to achieve distal dose conformity to the target volume in passively scattered proton therapy. Compensators are milled precisely using a computerized machine. In proton therapy, the precision of the compensator is critical, and quality assurance (QA) is required to protect normal tissues and organs from radiation damage. This study aims to evaluate the precision of proton-based quality assurance of the range compensator. First, the geometry information of two compensators was extracted from the DICOM Radiotherapy (RT) plan. Next, the RCs were individually irradiated onto EBT film by a proton beam modulated to have a photon-like percent depth dose (PDD). Step phantoms were also irradiated onto EBT film to generate a calibration curve relating the optical density of the irradiated film to the perpendicular depth of the compensator. Comparisons were made using the mean absolute difference (MAD) between the coordinate information from the DICOM RT plan and the depth information converted from the EBT film. The MAD over the whole region was 1.7 and 2.0 mm for the two compensators, respectively. However, the MAD over the relatively flat regions selected for comparison on each compensator was within 1 mm. These results show that proton-based quality assurance of the range compensator is feasible, and a whole-region MAD of less than 1 mm is expected to be achievable with further correction for the scattering effect in proton imaging.
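The film-based measurement chain (a calibration curve from the step phantom, then MAD against the planned geometry) can be sketched as follows. The calibration points are invented for illustration; real film calibration curves are nonlinear and measured per batch:

```python
import numpy as np

# Hypothetical step-phantom calibration: film optical density (OD)
# versus compensator depth (mm). Higher OD here corresponds to less
# compensator material above the film.
cal_od = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
cal_depth_mm = np.array([50.0, 40.0, 30.0, 20.0, 10.0])

def od_to_depth(od):
    """Map a measured OD to depth via the calibration curve.
    np.interp requires ascending x values, so sort by OD first."""
    order = np.argsort(cal_od)
    return np.interp(od, cal_od[order], cal_depth_mm[order])

def mean_absolute_difference(planned_mm, measured_mm):
    """MAD between planned (DICOM RT) and film-derived depths."""
    p = np.asarray(planned_mm, float)
    m = np.asarray(measured_mm, float)
    return float(np.mean(np.abs(p - m)))
```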
On the acoustic wedge design and simulation of anechoic chamber
NASA Astrophysics Data System (ADS)
Jiang, Changyong; Zhang, Shangyu; Huang, Lixi
2016-10-01
This study proposes an alternative to the classic wedge design for anechoic chambers: the uniform-then-gradient, flat-wall (UGFW) structure. The working mechanisms of the proposed structure and the traditional wedge are analyzed, and their absorption patterns are found to differ. The parameters of both structures are optimized to achieve the minimum absorber depth, under the condition of absorbing 99% of normally incident sound energy. The UGFW structure achieves a smaller total depth for cut-off frequencies ranging from 100 Hz to 250 Hz. This paper also proposes a modification of the complex source image (CSI) model for the empirical simulation of anechoic chambers, originally proposed by Bonfiglio et al. [J. Acoust. Soc. Am. 134 (1), 285-291 (2013)]. The modified CSI model considers the non-locally reactive effect of absorbers at oblique incidence, and the improvement is verified by a full finite-element simulation of a small chamber. With the modified CSI model, the performance of both optimized treatments in a large chamber is simulated. The simulation results are analyzed and checked against the tolerance of 1.5 dB deviation from the inverse square law stipulated in ISO 3745 (2003). In terms of total treatment depth and anechoic chamber performance, the UGFW structure is better than the classic wedge design.
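The ISO 3745 acceptance criterion mentioned here checks the deviation of measured sound pressure level from the inverse square law (-6 dB per doubling of distance) against a tolerance. A minimal sketch of that check (the measurement values in the usage are synthetic):

```python
import numpy as np

def inverse_square_deviation_db(distances_m, spl_db, ref_index=0):
    """Deviation of measured SPL from the inverse square law, relative
    to a reference microphone position: expected SPL falls by
    20*log10(d/d_ref) dB as distance d grows."""
    d = np.asarray(distances_m, float)
    spl = np.asarray(spl_db, float)
    expected = spl[ref_index] - 20.0 * np.log10(d / d[ref_index])
    return spl - expected

def passes_iso3745(distances_m, spl_db, tol_db=1.5):
    """True if every measurement is within the tolerance band."""
    dev = inverse_square_deviation_db(distances_m, spl_db)
    return bool(np.all(np.abs(dev) <= tol_db))
```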
Designing Pulse Laser Surface Modification of H13 Steel Using Response Surface Method
NASA Astrophysics Data System (ADS)
Aqida, S. N.; Brabazon, D.; Naher, S.
2011-01-01
This paper presents a design of experiment (DOE) for the laser surface modification process of AISI H13 tool steel, aiming at maximum hardness and minimum surface roughness over a range of modified layer depths. A Rofin DC-015 diffusion-cooled CO2 slab laser was used to process AISI H13 tool steel samples. Samples of 10 mm diameter were sectioned to 100 mm length in order to process a predefined circumferential area. The parameters selected for examination were laser peak power, overlap percentage, and pulse repetition frequency (PRF). The response surface method with the Box-Behnken design approach in Design Expert 7 software was used to design the H13 laser surface modification process. Metallographic study and image analysis were done to measure the modified layer depth. The modified surface roughness was measured using a two-dimensional surface profilometer. The correlation between the three laser processing parameters and the modified surface properties was visualized by plotting three-dimensional graphs. The hardness was tested at 981 mN force. From the metallographic study, the laser-modified surface depth was between 37 μm and 150 μm. The average surface roughness recorded from the 2D profilometry was at a minimum value of 1.8 μm. The maximum hardness achieved was between 728 and 905 HV0.1. These findings are significant for the modern development of hard coatings for wear-resistant applications.
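Response surface methodology, as used with the Box-Behnken design here, fits a second-order polynomial to the measured responses. A generic least-squares sketch (the data points below are synthetic, not the paper's measurements):

```python
import numpy as np

def design_matrix(X):
    """Columns: intercept, linear terms, two-factor interactions, and
    pure quadratic terms - the standard RSM second-order model."""
    X = np.asarray(X, float)
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    """Least-squares estimate of the second-order model coefficients."""
    A = design_matrix(X)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return coeffs
```

With three factors (peak power, overlap, PRF) this model has ten coefficients, which is why a Box-Behnken design with its economical run count is attractive.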
High resolution axicon-based endoscopic FD OCT imaging with a large depth range
NASA Astrophysics Data System (ADS)
Lee, Kye-Sung; Hurley, William; Deegan, John; Dean, Scott; Rolland, Jannick P.
2010-02-01
Endoscopic imaging in tubular structures, such as the tracheobronchial tree, could benefit from imaging optics with an extended depth of focus (DOF). Such optics could accommodate varying sizes of tubular structures across patients and along the tree within a single patient. In this paper, we demonstrate an extended DOF without sacrificing resolution, showing rotational images in biological tubular samples with 2.5 μm axial resolution, 10 μm lateral resolution, and >4 mm depth range using a custom-designed probe.
iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization.
Blenkmann, Alejandro O; Phillips, Holly N; Princich, Juan P; Rowe, James B; Bekinschtein, Tristan A; Muravchik, Carlos H; Kochen, Silvia
2017-01-01
The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relationship to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2-3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to automatically obtain anatomical labels of the localized electrodes without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, utility across multiple electrode types (surface and depth electrodes) and all brain regions.
Development of Vertical Cable Seismic System
NASA Astrophysics Data System (ADS)
Asakawa, E.; Murakami, F.; Sekino, Y.; Okamoto, T.; Ishikawa, K.; Tsukahara, H.; Shimura, T.
2011-12-01
In 2009, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started a survey-system development program for hydrothermal deposits. We proposed the Vertical Cable Seismic (VCS) method, a reflection seismic survey with vertical cables moored above the seabottom. VCS has the following advantages for hydrothermal deposit surveys: (1) it is an efficient high-resolution 3D seismic survey within a limited area; (2) it achieves high-resolution images because the sensors are located close to the target; (3) it avoids the sensor-seabottom coupling problems that seriously degrade seismic data quality; and (4) because of the autonomous recording system on the sea floor, various types of marine source are applicable, such as a sea-surface source (GI gun etc.), a deep-towed source, or an ocean-bottom source. Our first 2D/3D VCS surveys were carried out in Lake Biwa, Japan, in November 2009. The 2D VCS data processing follows that of walk-away VSP, including wave-field separation and depth migration; seismic interferometry is also applied. The results give a much clearer image than the conventional surface seismic survey. Prestack depth migration is applied to the 3D data to obtain a good-quality 3D depth volume, and seismic interferometry is applied to obtain a high-resolution image of the very shallow zone. Based on this feasibility study, we developed the autonomous recording VCS system and carried out a trial experiment in the ocean at a water depth of about 400 m to establish the deployment/recovery procedures and to examine the vertical cable's position and fluctuation at the seabottom. The results show that the cable position can be estimated with sufficient accuracy and that very little fluctuation occurs. The Institute of Industrial Science, the University of Tokyo, conducted research cruise NT11-02 on the JAMSTEC R/V Natsushima in February 2011.
During cruise NT11-02, JGI carried out the second VCS survey using the autonomous VCS recording system together with the deep-towed source provided by the Institute of Industrial Science, the University of Tokyo, which generates high-frequency acoustic waves around 1 kHz. The acquired VCS data clearly show the reflections and are currently being processed to image the subsurface structure.
Quantifying how the combination of blur and disparity affects the perceived depth
NASA Astrophysics Data System (ADS)
Wang, Junle; Barkowsky, Marcus; Ricordel, Vincent; Le Callet, Patrick
2011-03-01
This paper studies the influence of a monocular depth cue, blur, on the apparent depth of stereoscopic scenes. When 3D images are shown on a planar stereoscopic display, binocular disparity becomes a pre-eminent depth cue, but it simultaneously induces a conflict between accommodation and vergence, which is often considered a main cause of visual discomfort. If we limit this visual discomfort by decreasing the disparity, the apparent depth also decreases. We propose to decrease the (binocular) disparity of 3D presentations and to reinforce (monocular) cues to compensate for the loss of perceived depth, keeping the apparent depth unaltered. We conducted a subjective experiment using a two-alternative forced-choice task in which observers identified the larger perceived depth in a pair of 3D images with/without blur. By fitting the results to a psychometric function, we obtained points of subjective equality in terms of disparity. We found that when blur is added to the background of the image, viewers perceive larger depth compared to images without any blur in the background. The increase in perceived depth can be considered a function of the relative distance between the foreground and background, while it is insensitive to the distance between the viewer and the depth plane at which the blur is added.
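The point-of-subjective-equality analysis described above can be sketched as follows. The response proportions, disparity offsets, and grid ranges here are invented for illustration; they are not the paper's data, and the paper's actual fitting procedure is not specified in the abstract.

```python
import numpy as np

# Toy 2AFC data: proportion of trials in which the blurred-background image
# was judged deeper, at several disparity offsets (arbitrary units).
disparity = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
p_deeper = np.array([0.10, 0.25, 0.55, 0.80, 0.95])

def logistic(x, pse, slope):
    # Psychometric function; the PSE is the 50% crossing point.
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Brute-force least-squares fit over a small parameter grid.
pses = np.linspace(-3.0, 3.0, 121)
slopes = np.linspace(0.5, 5.0, 46)
best_pse, best_err = None, np.inf
for pse in pses:
    for s in slopes:
        err = np.sum((logistic(disparity, pse, s) - p_deeper) ** 2)
        if err < best_err:
            best_pse, best_err = pse, err
print(best_pse)  # disparity at which both stimuli appear equally deep
```

A shift of the fitted PSE away from zero quantifies how much disparity the added blur can substitute for.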
Detection of rebars in concrete using advanced ultrasonic pulse compression techniques.
Laureti, S; Ricci, M; Mohamed, M N I B; Senni, L; Davis, L A J; Hutchins, D A
2018-04-01
A pulse compression technique has been developed for the non-destructive testing of concrete samples. Scattering of signals from aggregate has historically been a problem in such measurements. Here, it is shown that a combination of piezocomposite transducers, pulse compression and post processing can lead to good images of a reinforcement bar at a cover depth of 55 mm. This has been achieved using a combination of wide bandwidth operation over the 150-450 kHz range, and processing based on measuring the cumulative energy scattered back to the receiver. Results are presented in the form of images of a 20 mm rebar embedded within a sample containing 10 mm aggregate. Copyright © 2017 Elsevier B.V. All rights reserved.
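The core idea of pulse compression, correlating the received signal with the known wideband excitation so its energy concentrates into a sharp peak, can be sketched with a linear chirp over the same 150-450 kHz band. The sampling rate, echo delay, amplitude, and noise level below are invented for illustration and are not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2e6                              # 2 MHz sampling rate (illustrative)
t = np.arange(0, 1e-3, 1 / fs)        # 1 ms excitation
f0, f1 = 150e3, 450e3                 # sweep over the paper's 150-450 kHz band
chirp = np.sin(2 * np.pi * (f0 + 0.5 * (f1 - f0) * t / t[-1]) * t)

# Synthetic echo: attenuated chirp delayed by 500 samples, buried in noise.
delay = 500
echo = np.zeros(4000)
echo[delay:delay + t.size] = 0.2 * chirp
echo += rng.normal(0.0, 0.5, echo.size)

# Pulse compression = cross-correlation with the transmitted waveform.
compressed = np.correlate(echo, chirp, mode="valid")
peak = int(np.argmax(np.abs(compressed)))
print(peak)  # lands at (or within a sample or two of) the 500-sample delay
```

Even at a per-sample SNR well below 0 dB, the correlation peak recovers the echo arrival time, which is what makes the technique attractive against aggregate scattering.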
Multiple excitation nano-spot generation and confocal detection for far-field microscopy.
Mondal, Partha Pratim
2010-03-01
An imaging technique is developed for the controlled generation of multiple excitation nano-spots for far-field microscopy. The system point spread function (PSF) is obtained by interfering two counter-propagating extended depth-of-focus PSF (DoF-PSF), resulting in highly localized multiple excitation spots along the optical axis. The technique permits (1) simultaneous excitation of multiple planes in the specimen; (2) control of the number of spots by confocal detection; and (3) overcoming the point-by-point based excitation. Fluorescence detection from the excitation spots can be efficiently achieved by Z-scanning the detector/pinhole assembly. The technique complements most of the bioimaging techniques and may find potential application in high resolution fluorescence microscopy and nanoscale imaging.
Super-resolved terahertz microscopy by knife-edge scan
NASA Astrophysics Data System (ADS)
Giliberti, V.; Flammini, M.; Ciano, C.; Pontecorvo, E.; Del Re, E.; Ortolani, M.
2017-08-01
We present a compact, all solid-state THz confocal microscope operating at 0.30 THz that achieves super-resolution by using the knife-edge scan approach. In the final reconstructed image, a lateral resolution of 60 μm ≍ λ/17 is demonstrated when the knife-edge is deep in the near-field of the sample surface. When the knife-edge is lifted up to λ/4 from the sample surface, a certain degree of super-resolution is maintained, with a resolution of 0.4 mm, i.e., more than a factor of 2 better than the diffraction-limited scheme. The present results open an interesting path towards super-resolved imaging with in-depth information that would be peculiar to THz microscopy systems.
Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R
2018-05-21
Three-dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth-camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth-camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth-camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth-camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth-camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth-camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low-cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan
2018-01-01
Depth measurement is one of the most basic measurements in machine vision, with a wide range of uses in applications such as autonomous driving, unmanned aerial vehicles (UAVs), and robotics. With the development of image processing technology and improvements in hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 processor and an ordinary low-cost dual camera are used as the hardware platform. The algorithms for dual-camera calibration, image matching, and depth calculation are studied and implemented on this platform, and the hardware design and the system's algorithms are tested. The experimental results show that the system can acquire binocular images simultaneously, switch between left and right video sources, and display the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system reaches up to 25 fps. The optimal measurement range of the system is 0.5 to 1.5 m, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11, and DMCU hardware platforms, the embedded AM5728 hardware is well suited to meeting real-time depth measurement requirements while maintaining image resolution.
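The depth-calculation step on such a platform reduces to triangulation on the rectified stereo pair. The focal length and baseline below are placeholders for illustration, not the actual calibration of the AM5728 system.

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d,
# where f is the focal length in pixels, B the baseline, d the disparity.
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Placeholder calibration: f = 700 px, B = 6 cm; a 42 px disparity then maps
# to a depth of 1 m, inside the system's reported 0.5-1.5 m optimal range.
print(depth_from_disparity(42.0, 700.0, 0.06))  # → 1.0
```

The inverse relationship between depth and disparity is also why relative error grows with distance: beyond the optimal range, sub-pixel disparity errors translate into large depth errors.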
Convolutional Sparse Coding for RGB+NIR Imaging.
Hu, Xuemei; Heide, Felix; Dai, Qionghai; Wetzstein, Gordon
2018-04-01
Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising, it has applications in computer vision, such as facial recognition and tracking, and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures, and simulated benchmark results which demonstrate that this work achieves unprecedented reconstruction quality.
Lee, Changho; Kim, Kyungun; Han, Seunghoon; Kim, Sehui; Lee, Jun Hoon; Kim, Hong kyun; Kim, Chulhong; Jung, Woonggyu; Kim, Jeehyun
2014-01-01
An intraoperative surgical microscope is an essential tool in neuro- and ophthalmological surgical environments. Yet it has an inherent limitation in assessing subsurface information because it only provides surface images. To compensate for this problem, the surgical microscope has been combined with optical coherence tomography (OCT). We developed a real-time virtual intraoperative surgical OCT (VISOCT) system by adapting a spectral-domain OCT scanner to a commercial surgical microscope. Thanks to our custom-made beam-splitting and image-display subsystems, the OCT images and microscopic images are visualized simultaneously through the ocular lens (eyepiece) of the microscope. This improvement helps surgeons focus on the operation without the distraction of viewing OCT images on a separate display. Moreover, displaying live OCT images in the eyepiece aids the surgeon's depth perception during surgery. Finally, we successfully performed simulated penetrating keratoplasty in live rabbits. We believe these technical achievements are crucial to enhancing the usability of the VISOCT system in real surgical operating conditions. PMID:24604471
In vivo near-infrared dual-axis confocal microendoscopy in the human lower gastrointestinal tract
NASA Astrophysics Data System (ADS)
Piyawattanametha, Wibool; Ra, Hyejun; Qiu, Zhen; Friedland, Shai; Liu, Jonathan T. C.; Loewke, Kevin; Kino, Gordon S.; Solgaard, Olav; Wang, Thomas D.; Mandella, Michael J.; Contag, Christopher H.
2012-02-01
Near-infrared confocal microendoscopy is a promising technique for deep in vivo imaging of tissues that can generate high-resolution cross-sectional images at the micron scale. We demonstrate the use of a dual-axis confocal (DAC) near-infrared fluorescence microendoscope with a 5.5-mm outer diameter for obtaining clinical images of human colorectal mucosa. High-speed two-dimensional en face scanning was achieved with a microelectromechanical systems (MEMS) scanner, while a micromotor was used for adjusting the axial focus. In vivo images of human patients were collected at 5 frames/s with a field of view of 362×212 μm² and a maximum imaging depth of 140 μm. During routine endoscopy, indocyanine green (ICG) was topically applied as a nonspecific optical contrast agent to regions of the human colon. The DAC microendoscope was then used to obtain microanatomic images of the mucosa by detecting near-infrared fluorescence from ICG. These results suggest that DAC microendoscopy may have utility for visualizing the anatomical and, perhaps, functional changes associated with colorectal pathology for the early detection of colorectal cancer.
Ultrasound strain imaging using Barker code
NASA Astrophysics Data System (ADS)
Peng, Hui; Tie, Juhong; Guo, Dequan
2017-01-01
Ultrasound strain imaging is showing promise as a new way of imaging soft-tissue elasticity to help clinicians detect lesions or cancers in tissues. In this paper, Barker code is applied to strain imaging to improve its quality. Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched-filter output is -22 dB, which is unacceptable for ultrasound strain imaging because a high sidelobe level causes high decorrelation noise. Instead of the conventional matched filter, we use a Wiener filter to decode the Barker-coded echo signal and suppress the range sidelobes. We also compare the performance of the Barker code and a conventional short pulse in simulation. The simulation results demonstrate that the Wiener filter performs much better than the matched filter, and that the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse at low eSNR or great depth, owing to the increased eSNR it provides.
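The -22 dB figure quoted above follows directly from the autocorrelation of the length-13 Barker code, whose sidelobes all have magnitude 1 against a mainlobe of 13. A short check:

```python
import numpy as np

# Barker code of length 13 (+1/-1 sequence).
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Matched-filter output for the code itself = its autocorrelation.
acf = np.correlate(barker13, barker13, mode="full")

peak = acf.max()                                          # mainlobe: 13
sidelobe = np.abs(np.delete(acf, np.argmax(acf))).max()   # sidelobes: 1

# Peak-to-sidelobe level in dB: 20*log10(1/13)
psl_db = 20 * np.log10(sidelobe / peak)
print(round(psl_db, 1))  # → -22.3
```

This is the best sidelobe level any binary code of that length can offer with a matched filter, which is why the paper turns to a Wiener filter for further suppression.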
Imaging of mesoscopic-scale organisms using selective-plane optoacoustic tomography.
Razansky, Daniel; Vinegoni, Claudio; Ntziachristos, Vasilis
2009-05-07
Mesoscopic-scale living organisms (i.e. 1 mm to 1 cm in size) remain largely inaccessible to current optical imaging methods due to intensive light scattering in tissues. Therefore, imaging of many important model organisms, such as insects, fishes, worms and similarly sized biological specimens, is currently limited to embryonic or other transparent stages of development. This makes it difficult to relate embryonic cellular and molecular mechanisms to their consequences for organ function and animal behavior in more advanced stages and adults. Herein, we have developed a selective-plane illumination optoacoustic tomography technique for in vivo imaging of optically diffusive organisms and tissues. The method is capable of whole-body imaging at depths from the sub-millimeter up to the centimeter range with a scalable spatial resolution on the order of tens of microns. In contrast to purely optical methods, the spatial resolution here is neither determined nor limited by light diffusion; such performance cannot be achieved by any other optical imaging technology developed so far. The utility of the method is demonstrated on several whole-body models and small-animal extremities.
Legleiter, Carl; Kinzel, Paul J.; Nelson, Jonathan M.
2017-01-01
Although river discharge is a fundamental hydrologic quantity, conventional methods of streamgaging are impractical, expensive, and potentially dangerous in remote locations. This study evaluated the potential for measuring discharge via various forms of remote sensing, primarily thermal imaging of flow velocities but also spectrally-based depth retrieval from passive optical image data. We acquired thermal image time series from bridges spanning five streams in Alaska and observed strong agreement between velocities measured in situ and those inferred by Particle Image Velocimetry (PIV), which quantified advection of thermal features by the flow. The resulting surface velocities were converted to depth-averaged velocities by applying site-specific, calibrated velocity indices. Field spectra from three clear-flowing streams provided strong relationships between depth and reflectance, suggesting that, under favorable conditions, spectrally-based bathymetric mapping could complement thermal PIV in a hybrid approach to remote sensing of river discharge; this strategy would not be applicable to larger, more turbid rivers, however. A more flexible and efficient alternative might involve inferring depth from thermal data based on relationships between depth and integral length scales of turbulent fluctuations in temperature, captured as variations in image brightness. We observed moderately strong correlations for a site-aggregated data set that reduced station-to-station variability but encompassed a broad range of depths. Discharges calculated using thermal PIV-derived velocities were within 15% of in situ measurements when combined with depths measured directly in the field or estimated from field spectra and within 40% when the depth information also was derived from thermal images. The results of this initial, proof-of-concept investigation suggest that remote sensing techniques could facilitate measurement of river discharge.
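As a sketch of the discharge computation underlying these comparisons: once PIV-derived surface velocities are scaled to depth-averaged values by a velocity index, discharge follows from the standard midsection-style summation over stations. The 0.85 index and the station values below are generic placeholders, not the study's site-calibrated numbers.

```python
# Discharge Q = sum(v_i * d_i * w_i) over cross-section stations, with the
# depth-averaged velocity v_i obtained by scaling the surface (PIV) velocity
# by a velocity index (0.85 is a common textbook default, not site-specific).
def discharge(surface_v, depths, widths, velocity_index=0.85):
    return sum(velocity_index * v * d * w
               for v, d, w in zip(surface_v, depths, widths))

# Three illustrative stations: surface velocity (m/s), depth (m), width (m).
q = discharge([1.0, 1.2, 0.8], [0.5, 0.7, 0.4], [2.0, 2.0, 2.0])
print(q)  # discharge in m^3/s
```

The study's error budget maps cleanly onto this formula: velocity errors enter through the PIV and index terms, and depth errors through whichever retrieval (field, spectral, or thermal) supplies d_i.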
Depth-resolved incoherent and coherent wide-field high-content imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
So, Peter T.
2016-03-01
Recent advances in depth-resolved wide-field imaging techniques have enabled many high-throughput applications in biology and medicine. Depth-resolved imaging of incoherent signals can be readily accomplished with structured light illumination or nonlinear temporal focusing. The integration of these high-throughput systems with novel spectroscopic resolving elements further enables high-content information extraction. We will introduce a novel near-common-path interferometer and demonstrate its use in toxicology and cancer biology applications. The extension of incoherent depth-resolved wide-field imaging to coherent modalities is non-trivial. Here, we will cover recent advances in wide-field 3D-resolved mapping of refractive index, absorbance, and vibronic components in biological specimens.
NASA Astrophysics Data System (ADS)
Schulz-Hildebrandt, H.; Münter, Michael; Ahrens, M.; Spahr, H.; Hillmann, D.; König, P.; Hüttmann, G.
2018-03-01
Optical coherence tomography (OCT) images scattering tissues with 5 to 15 μm resolution, which is usually not sufficient to distinguish cellular and subcellular structures. Increasing axial and lateral resolution and compensating for artifacts caused by dispersion and aberrations are required to achieve cellular and subcellular resolution. These artifacts include defocus, which limits the usable depth of field at high lateral resolution. OCT gives access to the phase of the scattered light, and hence dispersion and aberrations can be corrected by numerical algorithms. Here we present a unified dispersion/aberration correction based on a polynomial parameterization of the phase error and an optimization of image quality using Shannon's entropy. For validation, a supercontinuum light source and a custom-made spectrometer with 400 nm bandwidth were combined with a high-NA microscope objective in a setup for tissue and small-animal imaging. Using this setup and computational corrections, volumetric imaging at 1.5 μm resolution is possible. Cellular and near-cellular resolution is demonstrated in porcine cornea and in Drosophila larvae when computational correction of dispersion and aberrations is used. Owing to the excellent correction of the microscope objective used, defocus was the main contribution to the aberrations; in addition, higher-order aberrations caused by the sample itself were successfully corrected. Dispersion and aberrations are closely related artifacts in microscopic OCT imaging, and hence they can be corrected in the same way, by optimization of image quality. In this way, microscopic resolution is readily achieved in OCT imaging of static biological tissues.
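A minimal sketch of the Shannon-entropy image-quality metric that drives such corrections: a well-focused image concentrates its energy into few pixels and therefore has lower entropy. The toy images are invented; the paper's phase polynomial and optimizer are not reproduced here.

```python
import numpy as np

def image_entropy(img):
    # Shannon entropy (bits) of the normalized intensity distribution.
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

sharp = np.zeros((8, 8)); sharp[4, 4] = 1.0   # all energy in one pixel
blurred = np.full((8, 8), 1.0 / 64)           # energy spread evenly
print(image_entropy(sharp) < image_entropy(blurred))  # → True
```

An aberration-correction loop would evaluate this metric on the image reconstructed under each candidate phase-error polynomial and keep the coefficients that minimize it.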
All-optical pulse-echo ultrasound probe for intravascular imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Colchester, Richard J.; Noimark, Sacha; Mosse, Charles A.; Zhang, Edward Z.; Beard, Paul C.; Parkin, Ivan P.; Papakonstantinou, Ioannis; Desjardins, Adrien E.
2016-02-01
High frequency ultrasound probes such as intravascular ultrasound (IVUS) and intracardiac echocardiography (ICE) catheters can be invaluable for guiding minimally invasive medical procedures in cardiology such as coronary stent placement and ablation. With current-generation ultrasound probes, ultrasound is generated and received electrically. The complexities involved with fabricating these electrical probes can result in high costs that limit their clinical applicability. Additionally, it can be challenging to achieve wide transmission bandwidths and adequate wideband reception sensitivity with small piezoelectric elements. Optical methods for transmitting and receiving ultrasound are emerging as alternatives to their electrical counterparts. They offer several distinguishing advantages, including the potential to generate and detect the broadband ultrasound fields (tens of MHz) required for high resolution imaging. In this study, we developed a miniature, side-looking, pulse-echo ultrasound probe for intravascular imaging, with fibre-optic transmission and reception. The axial resolution was better than 70 microns, and the imaging depth in tissue was greater than 1 cm. Ultrasound transmission was performed by photoacoustic excitation of a carbon nanotube/polydimethylsiloxane composite material; ultrasound reception, with a fibre-optic Fabry-Perot cavity. Ex vivo tissue studies, which included healthy swine tissue and diseased human tissue, demonstrated the strong potential of this technique. To our knowledge, this is the first study to achieve an all-optical pulse-echo ultrasound probe for intravascular imaging. The potential for performing all-optical B-mode imaging (2D and 3D) with virtual arrays of transmit/receive elements, and hybrid imaging with pulse-echo ultrasound and photoacoustic sensing are discussed.
Soft x-ray holographic tomography for biological specimens
NASA Astrophysics Data System (ADS)
Gao, Hongyi; Chen, Jianwen; Xie, Honglan; Li, Ruxin; Xu, Zhizhan; Jiang, Shiping; Zhang, Yuxuan
2003-10-01
In this paper, we present some experimental results on X -ray holography, holographic tomography, and a new holographic tomography method called pre-amplified holographic tomography is proposed. Due to the shorter wavelength and the larger penetration depths, X-rays provide the potential of higher resolution in imaging techniques, and have the ability to image intact, living, hydrated cells w ithout slicing, dehydration, chemical fixation or stain. Recently, using X-ray source in National Synchrotron Radiation Laboratory in Hefei, we have successfully performed some soft X-ray holography experiments on biological specimen. The specimens used in the experiments was the garlic clove epidermis, we got their X-ray hologram, and then reconstructed them by computer programs, the feature of the cell walls, the nuclei and some cytoplasm were clearly resolved. However, there still exist some problems in realization of practical 3D microscopic imaging due to the near-unity refractive index of the matter. There is no X-ray optics having a sufficient high numerical aperture to achieve a depth resolution that is comparable to the transverse resolution. On the other hand, computer tomography needs a record of hundreds of views of the test object at different angles for high resolution. This is because the number of views required for a densely packed object is equal to the object radius divided by the desired depth resolution. Clearly, it is impractical for a radiation-sensitive biological specimen. Moreover, the X-ray diffraction effect makes projection data blur, this badly degrades the resolution of the reconstructed image. In order to observe 3D structure of the biological specimens, McNulty proposed a new method for 3D imaging called "holographic tomography (HT)" in which several holograms of the specimen are recorded from various illumination directions and combined in the reconstruction step. 
This permits the specimens to be sampled over a wide range of spatial frequencies to improve the depth resolution. In NSRL, we performed soft X-ray holographic tomography experiments. The specimen was the spider filaments and PM M A as recording medium. By 3D CT reconstruction of the projection data, three dimensional density distribution of the specimen was obtained. Also, we developed a new X-ray holographic tomography m ethod called pre-amplified holographic tomography. The method permits a digital real-time 3D reconstruction with high-resolution and a simple and compact experimental setup as well.
Depth map occlusion filling and scene reconstruction using modified exemplar-based inpainting
NASA Astrophysics Data System (ADS)
Voronin, V. V.; Marchuk, V. I.; Fisunov, A. V.; Tokareva, S. V.; Egiazarian, K. O.
2015-03-01
RGB-D sensors are relatively inexpensive and are commercially available off-the-shelf. However, owing to their low complexity, one encounters several artifacts in the depth map, such as holes, misalignment between the depth and color images, and a lack of sharp object boundaries. Depth maps generated by Kinect cameras also contain a significant number of missing pixels and strong noise, limiting their usability in many computer vision applications. In this paper, we present an efficient hole-filling and damaged-region restoration method that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on modified exemplar-based inpainting and LPA-ICI filtering that exploit the correlation between color and depth values in local image neighborhoods. As a result, the edges of objects are sharpened and aligned with the objects in the color image. Several examples considered in this paper show the effectiveness of the proposed approach for removing large holes as well as recovering small regions on several test depth maps. We perform a comparative study and show that, statistically, the proposed algorithm delivers superior-quality results compared to existing algorithms.
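The exemplar-based inpainting and LPA-ICI filtering themselves are beyond a short sketch, but the baseline they improve on, filling holes from valid neighbours, looks roughly like this. This is a simple iterative 4-neighbour average, not the paper's method.

```python
import numpy as np

def fill_holes(depth, max_iter=100):
    # Iteratively fill zero-valued (missing) depth pixels with the mean of
    # their valid 4-neighbours; a common naive baseline for Kinect depth maps.
    d = depth.astype(float).copy()
    for _ in range(max_iter):
        holes = np.argwhere(d == 0)
        if holes.size == 0:
            break
        for y, x in holes:
            neigh = []
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                yy, xx = y + dy, x + dx
                if 0 <= yy < d.shape[0] and 0 <= xx < d.shape[1] and d[yy, xx] > 0:
                    neigh.append(d[yy, xx])
            if neigh:
                d[y, x] = np.mean(neigh)
    return d

depth = np.full((5, 5), 2.0)
depth[2, 2] = 0.0            # a single missing pixel
filled = fill_holes(depth)
print(filled[2, 2])  # → 2.0
```

Such purely depth-based filling smears across object boundaries, which is exactly the failure mode the color-guided, exemplar-based approach above is designed to avoid.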
Profiling defect depth in composite materials using thermal imaging NDE
NASA Astrophysics Data System (ADS)
Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan
2018-04-01
Sonic Infrared (IR) NDE is a relatively new NDE technology that has been demonstrated as a reliable and sensitive method for detecting defects. It uses ultrasonic excitation with IR imaging to detect defects and flaws in the structures being inspected. An IR camera captures infrared radiation from the target for a period covering the ultrasound pulse; this period may be much longer than the pulse, depending on the defect depth and the thermal properties of the materials. With the increasing deployment of composites in modern aerospace and automotive structures, fast, wide-area, and reliable NDE methods are necessary. Impact damage is one of the major concerns in modern composites: damage can occur at a certain depth without any visual indication on the surface, and defect depth information can influence maintenance decisions. Depth profiling relies on the time delays in the captured image sequence. We present our work on defect depth profiling using the temporal information of IR images. An analytical model is introduced to describe heat diffusion from subsurface defects in composite materials, and depth profiling using the peak time is introduced as well.
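Peak-time depth profiling rests on the diffusion scaling z ∝ sqrt(α·t): heat from a deeper defect takes quadratically longer to reach the surface. A first-order estimate can be sketched as below; the proportionality constant, diffusivity, and timing are illustrative placeholders, not values or formulas from the paper's analytical model.

```python
import math

def defect_depth(t_peak_s, alpha_m2_s, C=1.0):
    # First-order depth estimate from the peak-contrast time:
    # z ≈ C * sqrt(alpha * t_peak), with C calibrated empirically.
    return C * math.sqrt(alpha_m2_s * t_peak_s)

alpha = 4.0e-7   # illustrative through-thickness diffusivity, ~0.4 mm^2/s
t_peak = 2.5     # seconds after excitation at which contrast peaks
print(round(defect_depth(t_peak, alpha) * 1e3, 2))  # depth in mm → 1.0
```

In practice C would be fitted against defects of known depth in the same layup, since composite anisotropy shifts the effective diffusivity.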
A high resolution prototype small-animal PET scanner dedicated to mouse brain imaging
Yang, Yongfeng; Bec, Julien; Zhou, Jian; Zhang, Mengxi; Judenhofer, Martin S; Bai, Xiaowei; Di, Kun; Wu, Yibao; Rodriguez, Mercedes; Dokhale, Purushottam; Shah, Kanai S.; Farrell, Richard; Qi, Jinyi; Cherry, Simon R.
2017-01-01
A prototype small-animal PET scanner was developed based on depth-encoding detectors using dual-ended readout of very small scintillator elements to produce high and uniform spatial resolution suitable for imaging the mouse brain. Methods: The scanner consists of 16 tapered dual-ended-readout detectors arranged in a ring of 61 mm diameter. The axial field of view is 7 mm and the transaxial field of view is 30 mm. The scintillator arrays consist of 14×14 lutetium oxyorthosilicate (LSO) elements, with a crystal cross-section of 0.43×0.43 mm² at the front end and 0.80×0.43 mm² at the back end; the crystal elements are 13 mm long. The arrays are read out by an 8×8 mm² and a 13×8 mm² position-sensitive avalanche photodiode (PSAPD) placed at opposite ends of the array. Standard nuclear instrumentation module (NIM) electronics and a custom-designed multiplexer are used for signal processing. Results: The detector performance was measured, and all but the very edge crystals could be clearly resolved. The average detector intrinsic spatial resolution in the axial direction was 0.61 mm. A depth-of-interaction resolution of 1.7 mm was achieved. The sensitivity of the scanner at the center of the field of view was 1.02% for a lower energy threshold of 150 keV and 0.68% for a lower energy threshold of 250 keV. The spatial resolution within a field of view that can accommodate the entire mouse brain was ~0.6 mm using a 3D maximum-likelihood expectation-maximization (ML-EM) reconstruction algorithm. Images of a micro hot-rod phantom showed that rods with diameters down to 0.5 mm could be resolved. First in vivo studies were obtained using 18F-fluoride and confirmed that 0.6 mm resolution can be achieved in the mouse head in vivo. Brain imaging studies with 18F-fluorodeoxyglucose were also acquired. Conclusion: A prototype PET scanner was developed that achieves a spatial resolution approaching the physical limits for a small-bore PET scanner set by positron range and acolinearity.
Future plans are to add more detector rings to extend the axial field of view of the scanner and increase sensitivity. PMID:27013696
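The ML-EM reconstruction named in the abstract above follows a standard multiplicative update. A minimal sketch on a toy system matrix (the scanner's actual 3D ML-EM uses a measured or modelled detector response; names and sizes here are illustrative):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """ML-EM for emission tomography: x <- x / (A^T 1) * A^T (y / (A x)).

    A: nonnegative system matrix (n_bins, n_voxels); y: measured counts.
    Toy sketch of the update rule, not the scanner's full 3D algorithm.
    """
    x = np.ones(A.shape[1])          # uniform initial image
    sens = A.sum(axis=0)             # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                 # forward projection
        proj = np.where(proj > 0, proj, 1e-12)
        x *= (A.T @ (y / proj)) / sens   # multiplicative EM update
    return x
```

The multiplicative form keeps the image nonnegative at every iteration, which is why ML-EM is favored over unconstrained least squares for count data.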
Fiber-optic annular detector array for large depth of field photoacoustic macroscopy.
Bauer-Marschallinger, Johannes; Höllinger, Astrid; Jakoby, Bernhard; Burgholzer, Peter; Berer, Thomas
2017-03-01
We report on a novel imaging system for large depth of field photoacoustic scanning macroscopy. Instead of commonly used piezoelectric transducers, fiber-optic based ultrasound detection is applied. The optical fibers are shaped into rings and mainly receive ultrasonic signals stemming from the ring symmetry axes. Four concentric fiber-optic rings with varying diameters are used in order to increase the image quality. Imaging artifacts, originating from the off-axis sensitivity of the rings, are reduced by coherence weighting. We discuss the working principle of the system and present experimental results on tissue mimicking phantoms. The lateral resolution is estimated to be below 200 μm at a depth of 1.5 cm and below 230 μm at a depth of 4.5 cm. The minimum detectable pressure is on the order of 3 Pa. The introduced method has the potential to provide larger imaging depths than acoustic resolution photoacoustic microscopy and an imaging resolution similar to that of photoacoustic computed tomography.
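Coherence weighting of the kind mentioned above is commonly implemented as a coherence factor applied to the delay-and-sum output. A generic sketch (the paper's exact weighting scheme is not specified in the abstract; this is the standard CF = |Σs|² / (N·Σ|s|²) form):

```python
import numpy as np

def coherence_weighted_sum(channel_signals):
    """Delay-and-sum with coherence-factor weighting.

    channel_signals: (N, T) array of already delay-aligned signals.
    Coherent arrivals (CF near 1) are kept; incoherent off-axis
    contributions (CF near 0) are suppressed.
    """
    s = np.asarray(channel_signals, float)
    n = s.shape[0]
    das = s.sum(axis=0)                       # plain delay-and-sum
    denom = n * (s ** 2).sum(axis=0)
    cf = np.where(denom > 0, das ** 2 / denom, 0.0)   # coherence factor
    return cf * das
```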
NASA Astrophysics Data System (ADS)
Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin
2018-03-01
Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, traditional computer-generated holograms (CGHs) often take a long time to compute, even without complex, photorealistic rendering. The backward ray-tracing technique can render photorealistic, high-quality images and noticeably reduces the computation time owing to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method based on the ray-tracing technique is presented. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.
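The point-cloud CGH used as the baseline above superposes spherical waves from each 3D point at the hologram plane. A textbook sketch of that baseline (not the ray-traced method itself; function name and units are illustrative, metres throughout):

```python
import numpy as np

def point_cloud_hologram(points, amplitudes, wavelength, pitch, n_pix):
    """Phase-only CGH by superposing spherical waves from 3D points.

    points: iterable of (x, y, z) with z the distance to the hologram
    plane; pitch: pixel pitch of the SLM; n_pix: hologram side length.
    """
    k = 2 * np.pi / wavelength
    coords = (np.arange(n_pix) - n_pix / 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    field = np.zeros((n_pix, n_pix), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a / r * np.exp(1j * k * r)   # spherical wave contribution
    return np.angle(field)                     # keep phase only
```

The per-point loop is what makes this baseline slow for dense scenes, which is the cost the ray-traced method amortizes through parallelism.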
Numerical modeling of two-photon focal modulation microscopy with a sinusoidal phase filter.
Chen, Rui; Shen, Shuhao; Chen, Nanguang
2018-05-01
A spatiotemporal phase modulator (STPM) is theoretically investigated using the vectorial diffraction theory. The STPM is equivalent to a time-dependent phase-only pupil filter that alternates between a homogeneous filter and a stripe-shaped filter with a sinusoidal phase distribution. It is found that two-photon focal modulation microscopy (TPFMM) using this STPM can significantly suppress the background contribution from out-of-focus ballistic excitation and achieve almost the same resolution as two-photon microscopy. The modulation depth is also evaluated and a compromise exists between the signal-to-background ratio and signal-to-noise ratio. The theoretical investigations provide important insights into future implementations of TPFMM and its potential to further extend the penetration depth of nonlinear microscopy in imaging multiple-scattering biological tissues.
Virtual view image synthesis for eye-contact in TV conversation system
NASA Astrophysics Data System (ADS)
Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae
2010-02-01
Eye-contact plays an important role in human communication in the sense that it can convey unspoken information. However, it is highly difficult to realize eye-contact in teleconferencing systems because of camera configurations. Conventional methods to overcome this difficulty have mainly resorted to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach to achieve eye-contact through arbitrary view image synthesis. In our method, multiple images captured by real cameras are projected to the virtual viewpoint (the center of the display) by homography, and evaluation of matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler version of this method using a single camera to save computational cost, in which the single real image is transformed to the virtual viewpoint under the hypothesis that the subject is located at a predetermined distance. In this simple implementation, eye regions are generated separately by comparison with pre-captured frontal face images. Experimental results of both methods show that the synthesized virtual images achieve eye-contact favorably.
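The core warp in the method above is the projection of real-camera pixels through a plane-induced 3×3 homography to the virtual viewpoint. A minimal sketch of applying such a homography to pixel coordinates (the homography itself would come from the cameras' calibration and the assumed subject depth, which the abstract does not detail):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D pixel coordinates through a 3x3 homography.

    pts: (N, 2) array of pixel coordinates. Lifts to homogeneous
    coordinates, multiplies by H, and normalizes by the third row.
    """
    pts = np.asarray(pts, float)
    ones = np.ones((pts.shape[0], 1))
    ph = np.hstack([pts, ones]) @ H.T   # homogeneous transform
    return ph[:, :2] / ph[:, 2:3]       # perspective divide
```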
RGB-D depth-map restoration using smooth depth neighborhood supports
NASA Astrophysics Data System (ADS)
Liu, Wei; Xue, Haoyang; Yu, Zhongjie; Wu, Qiang; Yang, Jie
2015-05-01
A method to restore the depth map of an RGB-D image using smooth depth neighborhood (SDN) supports is presented. The SDN supports are computed based on the corresponding color image of the depth map. Compared with the most widely used square supports, the proposed SDN supports capture the local structure of the object well. Only pixels with similar depth values are allowed to be included in the support. We combine our SDN supports with the joint bilateral filter (JBF) to form the SDN-JBF and use it to restore depth maps. Experimental results show that our SDN-JBF can not only rectify the misaligned depth pixels but also preserve sharp depth discontinuities.
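The joint bilateral filter underlying SDN-JBF weights each neighbor by spatial distance and by similarity in the color guide image. A minimal sketch (illustrative simplification: the paper's SDN support selection is approximated here by the guide-similarity weight alone):

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Restore a depth map using an aligned guide (intensity) image.

    Each output pixel is a weighted average of nearby depths, with
    weights = spatial Gaussian * guide-similarity Gaussian, so depth
    edges that coincide with guide edges are preserved.
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        wr = np.exp(-((guide[ny, nx] - guide[y, x]) ** 2)
                                    / (2 * sigma_r ** 2))
                        acc += ws * wr * depth[ny, nx]
                        norm += ws * wr
            out[y, x] = acc / norm
    return out
```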
NASA Astrophysics Data System (ADS)
Tavakolian, Pantea; Sfarra, Stefano; Gargiulo, Gianfranco; Sivagurunathan, Koneshwaran; Mandelis, Andreas
2018-06-01
The aim of this research is to investigate the suitability of truncated correlation photothermal coherence tomography (TC-PCT) for the non-destructive imaging of a replica of a real inlay to identify subsurface features that often are invisible areas of vulnerability and damage. Defects of inlays involve glue-rich areas, glue-starved areas, termite attack, insect damage, and laminar splitting. These defects have the potential to result in extensive damage to the art design layers of inlays. Therefore, there is a need for an imaging technique to visualize and determine the location of defects within the sample. The recently introduced TC-PCT modality proved capable of providing 3-D images of specimens with high axial resolution, deep subsurface depth profiling capability, and high signal-to-noise ratio (SNR). Therefore, in this study the authors used TC-PCT to image a fabricated inlay sample with various natural and artificial defects in the middle and top layers. The inlay in question reproduces to scale a piece of art preserved in the "Mirror room" of the Castle Laffitte in France. It was built by a professional restorer following the ancient procedure named element by element. Planar TC-PCT images of the inlay were stacked coherently to provide 3-D visualization of areas with known defects in the sample. The experimental results demonstrated the identification of defects such as empty holes, a hole filled with stucco, subsurface delaminations and natural features such as a wood knot and wood grain in different layers of the sample. For this wooden sample that has a very low thermal diffusivity, a depth range of 2 mm was achieved.
In-line three-dimensional holography of nanocrystalline objects at atomic resolution
Chen, F.-R.; Van Dyck, D.; Kisielowski, C.
2016-01-01
Resolution and sensitivity of the latest generation aberration-corrected transmission electron microscopes allow the vast majority of single atoms to be imaged with sub-Ångstrom resolution and their locations determined in an image plane with a precision that exceeds the 1.9-pm wavelength of 300 kV electrons. Such unprecedented performance allows expansion of electron microscopic investigations with atomic resolution into the third dimension. Here we report a general tomographic method to recover the three-dimensional shape of a crystalline particle from high-resolution images of a single projection without the need for sample rotation. The method is compatible with low dose rate electron microscopy, which improves on signal quality, while minimizing electron beam-induced structure modifications even for small particles or surfaces. We apply it to germanium, gold and magnesium oxide particles, and achieve a depth resolution of 1–2 Å, which is smaller than inter-atomic distances. PMID:26887849
Visualization of the microcirculatory network in skin by high frequency optoacoustic mesoscopy
NASA Astrophysics Data System (ADS)
Schwarz, Mathias; Aguirre, Juan; Buehler, Andreas; Omar, Murad; Ntziachristos, Vasilis
2015-07-01
Optoacoustic (photoacoustic) imaging has a high potential for imaging melanin-rich structures in skin and the microvasculature of the dermis due to the natural chromophores (de)oxyhemoglobin, and melanin. The vascular network in human dermis comprises a large network of arterioles, capillaries, and venules, ranging from 5 μm to more than 100 μm in diameter. The frequency spectrum of the microcirculatory network in human skin is intrinsically broadband, due to the large variety in size of absorbers. In our group we have developed raster-scan optoacoustic mesoscopy (RSOM) that applies a 100 MHz transducer with ultra-wide bandwidth in raster-scan mode achieving lateral resolution of 18 μm. In this study, we applied high frequency RSOM to imaging human skin in a healthy volunteer. We analyzed the frequency spectrum of anatomical structures with respect to depth and show that frequencies >60 MHz contain valuable information of structures in the epidermis and the microvasculature of the papillary dermis. We illustrate that RSOM is capable of visualizing the fine vascular network at and beneath the epidermal-dermal junction, revealing the vascular fingerprint of glabrous skin, as well as the larger venules deeper inside the dermis. We evaluate the ability of the RSOM system in measuring epidermal thickness in both hairy and glabrous skin. Finally, we showcase the capability of RSOM in visualizing benign nevi that will potentially help in imaging the penetration depth of melanoma.
Reflectance confocal microscopy of oral epithelial tissue using an electrically tunable lens
NASA Astrophysics Data System (ADS)
Jabbour, Joey M.; Malik, Bilal H.; Cuenca, Rodrigo; Cheng, Shuna; Jo, Javier A.; Cheng, Yi-Shing L.; Wright, John M.; Maitland, Kristen C.
2014-02-01
We present the use of a commercially available electrically tunable lens to achieve axial scanning in a reflectance confocal microscope. Over a 255 μm axial scan range, the lateral and axial resolutions varied from 1-2 μm and 4-14 μm, respectively, dependent on the variable focal length of the tunable lens. Confocal imaging was performed on normal human biopsies from the oral cavity ex vivo. Sub-cellular morphologic features were seen throughout the depth of the epithelium while axially scanning using the focus tunable lens.
Compressive Coded-Aperture Multimodal Imaging Systems
NASA Astrophysics Data System (ADS)
Rueda-Chacon, Hoover F.
Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains, thus imaging our real world demands capturing information in at least six different domains simultaneously, giving rise to 3D spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to three spectral channels in real time by the use of color sensors. Capturing multiple spectral channels requires scanning methodologies, which demand a long time. In general, to date, multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene. Then, different fusion techniques are employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than three spectral channels of 3D time-varying spatial information, in a single or few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures, which follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS).
To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or pass the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass, and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in the quality of the reconstructions compared with conventional block-unblock coded-aperture-based CSI architectures. On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that captures 4D data cubes (2D spatial + 1D spectral + depth) with as few as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. It aims to create super-human sensing that will enable the perception of our world in new and exciting ways.
With this, we intend to advance in the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer/robotic vision because they would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
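The CSI acquisition described above reduces to a linear model y = Hx, recovered by a sparsity-promoting optimization. A minimal sketch of such a reconstruction using iterative shrinkage-thresholding (ISTA) on a toy 1D problem (illustrative only; real CSI solvers use structured sensing matrices and 3D sparsifying transforms):

```python
import numpy as np

def ista(H, y, lam=0.05, step=None, n_iter=500):
    """ISTA for min_x ||y - Hx||^2 / 2 + lam * ||x||_1.

    Alternates a gradient step on the data term with soft-thresholding,
    which drives small coefficients to zero (sparsity).
    """
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1/L, L = ||H||_2^2
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = x - step * H.T @ (H @ x - y)                          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x
```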
3D Imaging of Nanoparticle Distribution in Biological Tissue by Laser-Induced Breakdown Spectroscopy
NASA Astrophysics Data System (ADS)
Gimenez, Y.; Busser, B.; Trichard, F.; Kulesza, A.; Laurent, J. M.; Zaun, V.; Lux, F.; Benoit, J. M.; Panczer, G.; Dugourd, P.; Tillement, O.; Pelascini, F.; Sancey, L.; Motto-Ros, V.
2016-07-01
Nanomaterials represent a rapidly expanding area of research with huge potential for future medical applications. Nanotechnology indeed promises to revolutionize diagnostics, drug delivery, gene therapy, and many other areas of research. For any biological investigation involving nanomaterials, it is crucial to study the behavior of such nano-objects within tissues to evaluate both their efficacy and their toxicity. Here, we provide the first account of 3D label-free nanoparticle imaging at the entire-organ scale. The technology used is known as laser-induced breakdown spectroscopy (LIBS) and possesses several advantages such as speed of operation, ease of use and full compatibility with optical microscopy. We then used two different but complementary approaches to achieve 3D elemental imaging with LIBS: a volume reconstruction of a sliced organ and in-depth analysis. This proof-of-concept study demonstrates the quantitative imaging of both endogenous and exogenous elements within entire organs and paves the way for innumerable applications.
NASA Astrophysics Data System (ADS)
Ravanfar, Mohammadreza; Pfeiffer, Ferris M.; Bozynski, Chantelle C.; Wang, Yuanbo; Yao, Gang
2017-12-01
Collagen degeneration is an important pathological feature of osteoarthritis. The purpose of this study is to investigate whether the polarization-sensitive optical coherence tomography (PSOCT)-based optical polarization tractography (OPT) can be useful in imaging collagen structural changes in human osteoarthritic cartilage samples. OPT eliminated the banding artifacts in conventional PSOCT by calculating the depth-resolved local birefringence and fiber orientation. A close comparison between OPT and PSOCT showed that OPT provided improved visualization and characterization of the zonal structure in human cartilage. Experimental results obtained in this study also underlined the importance of knowing the collagen fiber orientation in conventional polarized light microscopy assessment. In addition, parametric OPT imaging was achieved by quantifying the surface roughness, birefringence, and fiber dispersion in the superficial zone of the cartilage. These quantitative parametric images provided complementary information on the structural changes in cartilage, which can be useful for a comprehensive evaluation of collagen damage in osteoarthritic cartilage.
Correlation between automatic detection of malaria on thin film and experts' parasitaemia scores
NASA Astrophysics Data System (ADS)
Sunarko, Budi; Williams, Simon; Prescott, William R.; Byker, Scott M.; Bottema, Murk J.
2017-03-01
An algorithm was developed to diagnose the presence of malaria and to estimate the depth of infection by automatically counting individual normal and infected erythrocytes in images of thin blood smears. During the training stage, the parameters of the algorithm were optimized to maximize correlation with estimates of parasitaemia from expert human observers. The correlation was tested on a set of 1590 images from seven thin film blood smears. The correlation between the results from the algorithm and expert human readers was r = 0.836. Results indicate that reliable estimates of parasitaemia may be achieved by computational image analysis methods applied to images of thin film smears. Compared with the biological experiments, the algorithm fitted the three high-parasitaemia slides and one mid-level parasitaemia slide well, but overestimated the three low-parasitaemia slides. To improve the parasitaemia estimation, the sources of the overestimation were identified. Emphasis is laid on the importance of further research to identify parasites independently of their erythrocyte hosts.
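The agreement figure reported above (r = 0.836) is a Pearson correlation between algorithmic counts and expert scores. For reference, the metric itself is:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two paired samples,
    e.g. algorithmic parasitaemia estimates vs expert scores."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```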
Design of an autofocus capsule endoscope system and the corresponding 3D reconstruction algorithm.
Zhang, Wei; Jin, Yi-Tao; Guo, Xin; Su, Jin-Hui; You, Su-Ping
2016-10-01
A traditional capsule endoscope can only take 2D images, and most of the images are not clear enough for diagnosis. A 3D capsule endoscope can help doctors make a quicker and more accurate diagnosis. However, blurred images negatively affect reconstruction accuracy. A compact, autofocus capsule endoscope system is designed in this study. Using a liquid lens, the system can be electronically controlled to autofocus without any moving elements. The depth of field of the system is in the 3-100 mm range and its field of view is about 110°. The images captured by this optical system are much clearer than those taken by a traditional capsule endoscope. A 3D reconstruction algorithm is presented to adapt to the zooming function of our proposed system. Simulations and experiments have shown that more feature points can be correctly matched and a higher reconstruction accuracy can be achieved by this strategy.
Computational and design methods for advanced imaging
NASA Astrophysics Data System (ADS)
Birch, Gabriel C.
This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage this method permits is the ability for illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.
Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera
NASA Astrophysics Data System (ADS)
Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li
2014-09-01
With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce discomfort and avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-fixation system was used to stimulate accommodation and fixate the imaging area. The illumination sources and imaging camera were moved in linkage for focusing on and imaging different layers. Four subjects with varying degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-fixation system reduced the defocus to less than the typical ocular depth of focus, so the illumination light could be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-fixation system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio for a subject with -8 diopters of myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels, and the nerve fiber layer were clearly imaged.
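The Strehl ratio cited above relates residual wavefront error after AO correction to image quality. A standard way to estimate it is the Maréchal approximation (a textbook relation, not a detail given in the abstract):

```python
import numpy as np

def strehl_marechal(rms_wavefront_error, wavelength):
    """Marechal approximation: S = exp(-(2*pi*sigma/lambda)^2),
    where sigma is the residual RMS wavefront error. Valid for
    small aberrations (S above roughly 0.1)."""
    phi = 2 * np.pi * rms_wavefront_error / wavelength
    return float(np.exp(-phi ** 2))
```

A residual error of about lambda/14 gives S near 0.8, the classical diffraction-limited criterion, which puts the reported 0.78 in context.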
Zhang, Qi; Yang, Xiong; Hu, Qinglei; Bai, Ke; Yin, Fangfang; Li, Ning; Gang, Yadong; Wang, Xiaojun; Zeng, Shaoqun
2017-01-01
To resolve fine structures of biological systems like neurons, it is required to realize microscopic imaging with sufficient spatial resolution in three dimensions. With regular optical imaging systems, high lateral resolution is accessible, while high axial resolution is hard to achieve in a large volume. We introduce an imaging system for high 3D resolution fluorescence imaging of large volume tissues. Selective plane illumination was adopted to provide high axial resolution. A scientific CMOS camera working in sub-array mode kept the imaging area on the sample surface, which restrained the adverse effects of aberrations caused by the inclined illumination. Plastic embedding and precise mechanical sectioning extended the axial range and eliminated distortion during the whole imaging process. The combination of these techniques enabled 3D high resolution imaging of large tissues. Fluorescent bead imaging showed resolutions of 0.59 μm, 0.47 μm, and 0.59 μm in the x, y, and z directions, respectively. Data acquired from a volume sample of brain tissue demonstrated the applicability of this imaging system. Imaging at different depths showed uniform performance, where details could be recognized in either the near-soma area or the terminal area, and fine structures of neurons could be seen in both the xy and xz sections. PMID:29296503
Development of Vertical Cable Seismic System for Hydrothermal Deposit Survey (2) - Feasibility Study
NASA Astrophysics Data System (ADS)
Asakawa, E.; Murakami, F.; Sekino, Y.; Okamoto, T.; Mikada, H.; Takekawa, J.; Shimura, T.
2010-12-01
In 2009, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started development of a survey system for hydrothermal deposits. We proposed Vertical Cable Seismic (VCS), a reflection seismic survey using vertical cables above the seabottom. VCS has the following advantages for hydrothermal deposit surveys: (1) VCS is an effective high-resolution 3D seismic survey within a limited area. (2) It achieves high-resolution images because the sensors are located close to the target. (3) It avoids the coupling problems between sensor and seabottom that cause serious degradation of seismic data quality. (4) Various types of marine sources are applicable with VCS, such as sea-surface sources (air gun, water gun, etc.) and deep-towed or ocean-bottom sources. (5) Autonomous recording system. Our first experiment with 2D/3D VCS surveys was carried out in Lake Biwa, Japan, in November 2009. The 2D VCS data processing follows the walk-away VSP, including wave field separation and depth migration. The result gives a clearer image than the conventional surface seismic. Prestack depth migration was applied to the 3D data to obtain a good-quality 3D depth volume. Uncertainty of the source/receiver positions in water causes serious problems for imaging. We used several transducers/transponders to estimate these positions. The VCS seismic records themselves can also provide sensor positions using the first break of each trace, and we calibrate the positions accordingly. We are currently developing the autonomous recording VCS system and planning a trial experiment in the actual ocean in November 2010 to establish deployment/recovery procedures and to examine positioning under ocean currents. The second VCS survey is planned over an actual hydrothermal deposit with a deep-towed source in February 2011.
The Frontier Fields: Survey Design and Initial Results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lotz, J. M.; Koekemoer, A.; Grogin, N.
What are the faintest distant galaxies we can see with the Hubble Space Telescope ( HST ) now, before the launch of the James Webb Space Telescope ? This is the challenge taken up by the Frontier Fields, a Director’s discretionary time campaign with HST and the Spitzer Space Telescope to see deeper into the universe than ever before. The Frontier Fields combines the power of HST and Spitzer with the natural gravitational telescopes of massive high-magnification clusters of galaxies to produce the deepest observations of clusters and their lensed galaxies ever obtained. Six clusters—Abell 2744, MACSJ0416.1-2403, MACSJ0717.5+3745, MACSJ1149.5+2223, Abell S1063, and Abell 370—have been targeted by the HST ACS/WFC and WFC3/IR cameras with coordinated parallel fields for over 840 HST orbits. The parallel fields are the second-deepest observations thus far by HST with 5 σ point-source depths of ∼29th ABmag. Galaxies behind the clusters experience typical magnification factors of a few, with small regions magnified by factors of 10–100. Therefore, the Frontier Field cluster HST images achieve intrinsic depths of ∼30–33 mag over very small volumes. Spitzer has obtained over 1000 hr of Director’s discretionary imaging of the Frontier Field cluster and parallels in IRAC 3.6 and 4.5 μ m bands to 5 σ point-source depths of ∼26.5, 26.0 ABmag. We demonstrate the exceptional sensitivity of the HST Frontier Field images to faint high-redshift galaxies, and review the initial results related to the primary science goals.
Cloud Optical Depth Measured with Ground-Based, Uncooled Infrared Imagers
NASA Technical Reports Server (NTRS)
Shaw, Joseph A.; Nugent, Paul W.; Pust, Nathan J.; Redman, Brian J.; Piazzolla, Sabino
2012-01-01
Recent advances in uncooled, low-cost, long-wave infrared imagers provide excellent opportunities for remotely deployed ground-based remote sensing systems. However, the use of these imagers in demanding atmospheric sensing applications requires that careful attention be paid to characterizing and calibrating the system. We have developed and are using several versions of the ground-based "Infrared Cloud Imager (ICI)" instrument to measure spatial and temporal statistics of clouds and cloud optical depth or attenuation for both climate research and Earth-space optical communications path characterization. In this paper we summarize the ICI instruments and calibration methodology, then show ICI-derived cloud optical depths that are validated using a dual-polarization cloud lidar system for thin clouds (optical depth of approximately 4 or less).
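One standard way to relate a cloud's effective infrared emissivity to its optical depth is the Beer-law relation ε = 1 − e^(−τ). This is a textbook thin-cloud approximation, not the ICI retrieval itself, but it illustrates why such retrievals saturate near optical depth 4:

```python
import math

def ir_optical_depth(emissivity):
    """Infrared optical depth from effective cloud emissivity via the
    Beer-law relation eps = 1 - exp(-tau).  A textbook approximation
    used here for illustration, not the ICI retrieval algorithm."""
    if not 0.0 <= emissivity < 1.0:
        raise ValueError("emissivity must lie in [0, 1)")
    return -math.log(1.0 - emissivity)

# The relation saturates quickly: an emissivity of 0.98 already implies
# tau ~ 3.9, which is why emission-based retrievals top out near tau = 4.
print(round(ir_optical_depth(0.98), 2))
```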
Model based estimation of image depth and displacement
NASA Technical Reports Server (NTRS)
Damour, Kevin T.
1992-01-01
Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. With the reliance of such systems on visual image characteristics, a need to overcome image degradations, such as random image-capture noise, motion, and quantization effects, is clearly necessary. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. 
The inclusion of past depth and displacement fields allows a means of incorporating the temporal information into the restoration process. A summary on the conditions that indicate which type of filtering should be applied to a field is provided.
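The idea of filtering a noisy depth field with a Kalman-type model can be sketched with a scalar random-walk filter. This is a toy stand-in for the paper's reduced order model Kalman filter, with assumed process and measurement variances q and r:

```python
import numpy as np

def kalman_smooth_1d(z, q=1e-4, r=1e-2):
    """Minimal scalar Kalman filter over a noisy 1D depth profile.
    An illustrative random-walk sketch, not the paper's ROMKF; q is an
    assumed process variance, r an assumed measurement variance."""
    x, p = float(z[0]), 1.0
    out = np.empty(len(z))
    out[0] = x
    for k in range(1, len(z)):
        p = p + q                  # predict (random-walk state model)
        g = p / (p + r)            # Kalman gain
        x = x + g * (z[k] - x)     # update with measurement z[k]
        p = (1.0 - g) * p
        out[k] = x
    return out

rng = np.random.default_rng(0)
true = np.linspace(1.0, 2.0, 200)                # smooth depth ramp
noisy = true + rng.normal(0.0, 0.1, true.shape)  # capture noise
filtered = kalman_smooth_1d(noisy)
# The filtered profile tracks the ramp with a lower RMS error than the
# raw measurements, at the cost of some lag at discontinuities -- which
# is exactly the smoothing-versus-edges trade-off the paper's adaptive
# parameter selection addresses.
```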
Chen, Dongmei; Zhu, Shouping; Cao, Xu; Zhao, Fengjun; Liang, Jimin
2015-01-01
X-ray luminescence computed tomography (XLCT) has become a promising imaging technology for biological applications based on phosphor nanoparticles. There are mainly three kinds of XLCT imaging systems: pencil beam XLCT, narrow beam XLCT, and cone beam XLCT. Narrow beam XLCT can be regarded as a balance between the pencil beam mode and the cone beam mode in terms of imaging efficiency and image quality. The collimated X-ray beams are assumed to be parallel in traditional narrow beam XLCT. However, we observe that in our prototype narrow beam XLCT the cone beam X-rays are collimated into beams with fan-shaped broadening rather than parallel ones. Hence we incorporate the actual distribution of the X-ray beams into the physical model and collect the optical data from only two perpendicular directions to further shorten the scanning time. Meanwhile, we propose a depth-related adaptive regularized split Bregman (DARSB) method for reconstruction. Simulation experiments show that the proposed physical model and method achieve better results in location error, Dice coefficient, mean square error, and intensity error than the traditional split Bregman method, validating the feasibility of the method. The phantom experiment obtains a location error of less than 1.1 mm and validates that incorporating fan-shaped X-ray beams into our model achieves better results than assuming parallel X-rays. PMID:26203388
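The depth-related adaptive regularization idea can be illustrated with a toy problem in which the penalty weight grows with depth, so that weakly illuminated deep voxels are regularized more strongly. The actual DARSB method uses split Bregman with an L1 term; the closed-form Tikhonov solve, the weighting law, and lam0 below are all assumptions for illustration:

```python
import numpy as np

def depth_adaptive_tikhonov(A, b, depth, lam0=0.1):
    """Solve min ||Ax - b||^2 + ||diag(w) x||^2 with a depth-dependent
    weight w = lam0 * (1 + depth).  A toy stand-in for depth-related
    adaptive regularization; not the paper's split Bregman solver."""
    W = np.diag(lam0 * (1.0 + depth))
    return np.linalg.solve(A.T @ A + W.T @ W, A.T @ b)

# Toy 1D problem: Gaussian blur matrix A, a point source, depth growing
# linearly with voxel index.
n = 20
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
x_true = np.zeros(n); x_true[5] = 1.0
b = A @ x_true
depth = np.linspace(0.0, 1.0, n)
x_rec = depth_adaptive_tikhonov(A, b, depth)
print(int(np.argmax(x_rec)))  # peak should lie near the true source at index 5
```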
Action recognition in depth video from RGB perspective: A knowledge transfer manner
NASA Astrophysics Data System (ADS)
Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen
2018-03-01
Using different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for transferring human action recognition from RGB video to depth video using domain adaptation, where features learned from RGB videos are used for action recognition in depth videos. Specifically, we take three steps to solve this problem. First, unlike a still image, a video is more complex because it carries both spatial and temporal information; to better encode this information, the dynamic image method is used to represent each RGB or depth video as a single image, after which most image feature extraction methods become applicable to video. Second, because each video is represented as an image, a standard CNN model can be used for training and testing, and also as a feature extractor owing to its powerful representational ability. Third, since RGB and depth videos belong to two different domains, domain adaptation is applied to bring the two feature domains closer together, after which the features learned from the RGB video model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), and domain adaptation from RGB to depth yields an accuracy improvement of more than 2%.
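The dynamic image step can be sketched with approximate rank pooling, whose closed-form frame weights come from Bilen et al.'s dynamic image work. This is a generic sketch of the representation, not the paper's implementation:

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a video of shape (T, H, W) into a single 'dynamic image'
    via approximate rank pooling.  The closed-form weights
    alpha_t = 2(T - t + 1) - (T + 1)(H_T - H_{t-1}), with H_k the k-th
    harmonic number, follow Bilen et al.; this is a generic sketch, not
    the paper's code."""
    T = frames.shape[0]
    # harmonic[k] = H_k, with H_0 = 0
    harmonic = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, T + 1))))
    t = np.arange(1, T + 1)
    alpha = 2.0 * (T - t + 1) - (T + 1) * (harmonic[T] - harmonic[t - 1])
    return np.tensordot(alpha, frames, axes=1)

video = np.random.default_rng(1).random((8, 4, 4))
di = dynamic_image(video)
print(di.shape)  # -> (4, 4): one image summarizing the whole clip
```

A useful property: the weights sum to zero, so a static (constant) video maps to a zero dynamic image, and only temporal change survives the pooling.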
Extended depth of field integral imaging using multi-focus fusion
NASA Astrophysics Data System (ADS)
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for depth of field extension in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to capture multi-focus elemental images by sweeping the focal plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. All-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion on the multi-focus elemental images.
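Block-based multi-focus fusion of the kind described can be sketched by selecting, per block, the image with the higher local variance as a sharpness proxy. The abstract does not name the authors' sharpness measure, so variance and the block size here are assumptions:

```python
import numpy as np

def block_fusion(img_a, img_b, block=8):
    """Fuse two registered multi-focus images by keeping, block by block,
    whichever has higher local variance (a common sharpness proxy).
    A generic block-based fusion sketch, not the authors' exact rule."""
    out = img_a.copy()
    h, w = img_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y+block, x:x+block]
            b = img_b[y:y+block, x:x+block]
            if b.var() > a.var():        # b is sharper in this block
                out[y:y+block, x:x+block] = b
    return out

# Synthetic pair: each image is sharp (textured) in one half and blurred
# to a flat value in the other.
rng = np.random.default_rng(2)
sharp = rng.random((16, 16))
left_focus = sharp.copy();  left_focus[:, 8:] = 0.5
right_focus = sharp.copy(); right_focus[:, :8] = 0.5
fused = block_fusion(left_focus, right_focus)
print(np.allclose(fused, sharp))  # -> True: both sharp halves recovered
```

Note that this only works because the synthetic pair is perfectly registered, which is exactly why the paper's generalization (alignment) step must precede fusion.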
Simulation study of a high performance brain PET system with dodecahedral geometry.
Tao, Weijie; Chen, Gaoyu; Weng, Fenghua; Zan, Yunlong; Zhao, Zhixiang; Peng, Qiyu; Xu, Jianfeng; Huang, Qiu
2018-05-25
In brain imaging, a spherical PET system achieves the highest sensitivity where solid angle is concerned; however, it is not practical to build. In this work we designed an alternative sphere-like scanner, the dodecahedral scanner, which has high sensitivity in imaging and high feasibility of manufacture. We simulated this system and compared its performance with a few other dedicated brain PET systems. Monte Carlo simulations were conducted to generate data for the dedicated brain PET system with the dodecahedral geometry (11 regular pentagon detectors). The data were then reconstructed using in-house developed software with the fully three-dimensional maximum-likelihood expectation maximization (3D-MLEM) algorithm. Results show that the proposed system has a high sensitivity distribution over the whole field of view (FOV). With a depth-of-interaction (DOI) resolution around 6.67 mm, the proposed system achieves a spatial resolution of 1.98 mm. Our simulation study also shows that the proposed system improves image contrast and reduces noise compared with a few other dedicated brain PET systems. Finally, simulations with the Hoffman phantom show the potential of the proposed system in clinical applications. In conclusion, the proposed dodecahedral PET system shows potential for widespread application in high-sensitivity, high-resolution PET imaging at lower injected dose. This article is protected by copyright. All rights reserved.
Multi-functional angiographic OFDI using frequency-multiplexed dual-beam illumination
Kim, SunHee; Park, Taejin; Jang, Sun-Joo; Nam, Ahhyun S.; Vakoc, Benjamin J.; Oh, Wang-Yuhl
2015-01-01
Detection of blood flow inside the tissue sample can be achieved by measuring the local change of complex signal over time in angiographic optical coherence tomography (OCT). In conventional angiographic OCT, the transverse displacement of the imaging beam during the time interval between a pair of OCT signal measurements must be significantly reduced to minimize the noise due to the beam scanning-induced phase decorrelation at the expense of the imaging speed. Recent introduction of dual-beam scan method either using polarization encoding or two identical imaging systems in spectral-domain (SD) OCT scheme shows potential for high-sensitivity vasculature imaging without suffering from spurious phase noise caused by the beam scanning-induced spatial decorrelation. In this paper, we present multi-functional angiographic optical frequency domain imaging (OFDI) using frequency-multiplexed dual-beam illumination. This frequency multiplexing scheme, utilizing unique features of OFDI, provides spatially separated dual imaging beams occupying distinct electrical frequency bands that can be demultiplexed in the frequency domain processing. We demonstrate the 3D multi-functional imaging of the normal mouse skin in the dorsal skin fold chamber visualizing distinct layer structures from the intensity imaging, information about mechanical integrity from the polarization-sensitive imaging, and depth-resolved microvasculature from the angiographic imaging that are simultaneously acquired and automatically co-registered. PMID:25968731
Sensitive test for sea mine identification based on polarization-aided image processing.
Leonard, I; Alfalou, A; Brosseau, C
2013-12-02
Techniques are widely sought to detect and identify sea mines. This issue is characterized by complicated mine shapes and underwater light propagation dependencies. In a preliminary study we use a preprocessing step for denoising underwater images before applying the algorithm for mine detection. Once a mine is detected, the protocol for identifying it is activated. Among many correlation filters, we have focused our attention on the asymmetric segmented phase-only filter for quantifying the recognition rate because it allows us to significantly increase the number of reference images in the fabrication of this filter. Yet they are not entirely satisfactory in terms of recognition rate and the obtained images revealed to be of low quality. In this report, we propose a way to improve upon this preliminary study by using a single wavelength polarimetric camera in order to denoise the images. This permits us to enhance images and improve depth visibility. We present illustrative results using in situ polarization imaging of a target through a milk-water mixture and demonstrate that our challenging objective of increasing the detection rate and decreasing the false alarm rate has been achieved.
The impact of the condenser on cytogenetic image quality in digital microscope system.
Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong
2013-01-01
Optimizing the operational parameters of a digital microscope system is an important technique for acquiring high quality cytogenetic images and facilitating the karyotyping process, so that the efficiency and accuracy of diagnosis can be improved. This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Both theoretical analysis and experimental validation, through objective evaluation of a resolution test chart and subjective observation of large numbers of specimens, were conducted. The results show that optimal image quality and a large depth of field (DOF) are simultaneously obtained when the numerical aperture of the condenser is set to 60%-70% of that of the corresponding objective. Under this condition, more analyzable chromosomes and more diagnostic information are obtained. As a result, the system shows higher working stability and fewer restrictions on the implementation of algorithms such as autofocusing, especially when the system is designed for high throughput continuous image scanning. Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging with high throughput continuous scanning microscopes in clinical practice.
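The resolution side of the condenser trade-off follows the textbook partially-coherent-illumination formula d = 1.22λ/(NA_obj + NA_cond). A sketch with illustrative values (wavelength and objective NA are not taken from the paper):

```python
def abbe_resolution_um(wavelength_um, na_obj, condenser_ratio):
    """Lateral resolution under partially coherent illumination,
    d = 1.22*lambda/(NA_obj + NA_cond), with NA_cond expressed as a
    fraction of the objective NA.  Textbook formula; the numerical
    inputs below are illustrative assumptions, not the paper's values."""
    na_cond = condenser_ratio * na_obj
    return 1.22 * wavelength_um / (na_obj + na_cond)

# Closing the condenser from 100% to 65% of the objective NA costs only
# ~20% in resolution while gaining contrast and depth of field:
d_full = abbe_resolution_um(0.55, 0.75, 1.00)
d_partial = abbe_resolution_um(0.55, 0.75, 0.65)
print(round(d_full, 3), round(d_partial, 3))
```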
A 100-200 MHz ultrasound biomicroscope.
Knapik, D A; Starkoski, B; Pavlin, C J; Foster, F S
2000-01-01
The development of higher frequency ultrasound imaging systems affords a unique opportunity to visualize living tissue at the microscopic level. This work was undertaken to assess the potential of in vivo ultrasound imaging in the 100-200 MHz range. Spherically focused lithium niobate transducers were fabricated. The properties of a 200 MHz center frequency device are described in detail. This transducer showed good sensitivity, with an insertion loss of 18 dB at 200 MHz. Resolution of 14 μm in the lateral direction and 12 μm in the axial direction was achieved with f/1.14 focusing. A linear mechanical scan system and a scan converter were used to generate B-scan images at frame rates of up to 12 frames per second. System performance in B-mode imaging is limited by frequency-dependent attenuation in tissues. An alternative technique, zone-focus image collection, was investigated to extend depth of field. Images of coronary arteries, the eye, and skin are presented along with some preliminary correlations with histology. These results demonstrate the feasibility of ultrasound biomicroscopy in the 100-200 MHz range. Further development of ultrasound backscatter imaging at frequencies up to and above 200 MHz will contribute valuable information about tissue microstructure.
Rössler, Erik; Mattea, Carlos; Stapf, Siegfried
2015-02-01
Low field Nuclear Magnetic Resonance increases the contrast of the longitudinal relaxation rate in many biological tissues; one prominent example is hyaline articular cartilage. In order to take advantage of this increased contrast and to profile the depth-dependent variations, high resolution parameter measurements are carried out which can be of critical importance in an early diagnosis of cartilage diseases such as osteoarthritis. However, the maximum achievable spatial resolution of parameter profiles is limited by factors such as sensor geometry, sample curvature, and diffusion limitation. In this work, we report on high-resolution single-sided NMR scanner measurements with a commercial device, and quantify these limitations. The highest achievable spatial resolution on the used profiler, and the lateral dimension of the sensitive volume were determined. Since articular cartilage samples are usually bent, we also focus on averaging effects inside the horizontally aligned sensitive volume and their impact on the relaxation profiles. Taking these critical parameters into consideration, depth-dependent relaxation time profiles with the maximum achievable vertical resolution of 20 μm are discussed, and are correlated with diffusion coefficient profiles in hyaline articular cartilage in order to reconstruct T(2) maps from the diffusion-weighted CPMG decays of apparent relaxation rates. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.
2017-03-01
The planar Fabry-Pérot (FP) sensor provides high quality photoacoustic (PA) images, but beam walk-off limits sensitivity and thus penetration depth to ≈1 cm. Planoconcave microresonator sensors eliminate beam walk-off, enabling sensitivity to be increased by an order of magnitude whilst retaining the highly favourable frequency response and directional characteristics of the FP sensor. The first tomographic PA images obtained in a tissue-realistic phantom using the new sensors are described. These show that the microresonator sensors provide near-identical image quality to the planar FP sensor but with significantly greater penetration depth (e.g. 2-3 cm) due to their higher sensitivity. This offers the prospect of whole body small animal imaging and clinical imaging to depths previously unattainable with the planar FP sensor.
Isotropic image in structured illumination microscopy patterned with a spatial light modulator.
Chang, Bo-Jui; Chou, Li-Jun; Chang, Yun-Ching; Chiang, Su-Yu
2009-08-17
We developed a structured illumination microscopy (SIM) system that uses a spatial light modulator (SLM) to generate interference illumination patterns at four orientations (0°, 45°, 90°, and 135°) to reconstruct a high-resolution image. The use of an SLM for pattern alteration is rapid and precise, without mechanical calibration; moreover, our design of SLM patterns generates the four illumination patterns with high contrast and nearly equivalent periods to achieve a near-isotropic enhancement in lateral resolution. We compare the conventional image of 100-nm beads with those reconstructed from two (0°+90° or 45°+135°) and four (0°+45°+90°+135°) pattern orientations to show the differences in resolution and image quality, with the support of simulations. Reconstructed images of 200-nm beads at various depths and of fine structures of actin filaments near the edge of a HeLa cell are presented to demonstrate the intensity distributions in the axial direction and the prospective application to biological systems. © 2009 Optical Society of America
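Generating the four interference patterns can be sketched as sampling a sinusoid along a rotated axis, I = (1 + cos(2π(x·cos θ + y·sin θ)/p + φ))/2. This is a generic pattern generator, not the authors' SLM driver code:

```python
import numpy as np

def sim_pattern(shape, period_px, angle_deg, phase=0.0):
    """Sinusoidal illumination pattern at a given orientation, of the
    kind an SLM would display for SIM.  Generic sketch; period, phase,
    and grid size are free parameters, not the paper's values."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    a = np.deg2rad(angle_deg)
    u = x * np.cos(a) + y * np.sin(a)   # coordinate along the rotated axis
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * u / period_px + phase))

# The four orientations used for near-isotropic resolution enhancement:
patterns = [sim_pattern((64, 64), period_px=8.0, angle_deg=t)
            for t in (0, 45, 90, 135)]
print(all(p.min() >= 0.0 and p.max() <= 1.0 for p in patterns))  # -> True
```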
An efficient hole-filling method based on depth map in 3D view generation
NASA Astrophysics Data System (ADS)
Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong
2018-01-01
A new virtual view is synthesized through depth image based rendering (DIBR) using a single color image and its associated depth map in 3D view generation. Holes are unavoidably generated in the 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address this problem. First, we improve the DIBR process by proposing a one-to-four (OTF) algorithm, using the z-buffer algorithm to solve the overlap problem. Then, building on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses the depth map to handle the image after DIBR. To improve the accuracy of the virtual image, inpainting starts from the background side. In the calculation of the priority, in addition to the confidence term and the data term, we add a depth term. In the search for the most similar patch in the source region, we define a depth similarity to improve the accuracy of the search. Experimental results show that the proposed method effectively improves the quality of the 3D virtual view both subjectively and objectively.
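The DIBR warp with z-buffer overlap resolution can be sketched in a few lines. This is a minimal version: disparity is taken proportional to the depth value, holes are marked rather than inpainted, and the paper's OTF mapping is not reproduced:

```python
import numpy as np

def dibr_warp(color, depth, shift_scale=4.0):
    """Warp a color image horizontally by a disparity proportional to
    its depth value, resolving overlaps with a z-buffer (the pixel with
    larger depth value, i.e. nearer, wins).  Unfilled positions are
    marked -1 as holes.  Minimal DIBR sketch; disparity law and
    shift_scale are assumptions."""
    h, w = color.shape
    out = -np.ones((h, w))
    zbuf = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            d = depth[y, x]
            nx = x + int(round(shift_scale * d))  # horizontal disparity
            if 0 <= nx < w and d > zbuf[y, nx]:   # z-buffer test
                zbuf[y, nx] = d
                out[y, nx] = color[y, x]
    return out

color = np.arange(16.0).reshape(2, 8)
depth = np.zeros((2, 8)); depth[:, 2] = 1.0   # one foreground column
warped = dibr_warp(color, depth)
# The shifted foreground occludes background (z-buffer) and leaves a
# hole at its original position -- the hole the paper's inpainting fills.
print(int((warped == -1).sum()))  # -> 2 (one hole per row)
```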
Low dose digital X-ray imaging with avalanche amorphous selenium
NASA Astrophysics Data System (ADS)
Scheuermann, James R.; Goldan, Amir H.; Tousignant, Olivier; Léveillé, Sébastien; Zhao, Wei
2015-03-01
Active Matrix Flat Panel Imagers (AMFPI) based on an array of thin film transistors (TFT) have become the dominant technology for digital x-ray imaging. In low dose applications, the performance of both direct and indirect conversion detectors are limited by the electronic noise associated with the TFT array. New concepts of direct and indirect detectors have been proposed using avalanche amorphous selenium (a-Se), referred to as high gain avalanche rushing photoconductor (HARP). The indirect detector utilizes a planar layer of HARP to detect light from an x-ray scintillator and amplify the photogenerated charge. The direct detector utilizes separate interaction (non-avalanche) and amplification (avalanche) regions within the a-Se to achieve depth-independent signal gain. Both detectors require the development of large area, solid state HARP. We have previously reported the first avalanche gain in a-Se with deposition techniques scalable to large area detectors. The goal of the present work is to demonstrate the feasibility of large area HARP fabrication in an a-Se deposition facility established for commercial large area AMFPI. We also examine the effect of alternative pixel electrode materials on avalanche gain. The results show that avalanche gain > 50 is achievable in the HARP layers developed in large area coaters, which is sufficient to achieve x-ray quantum noise limited performance down to a single x-ray photon per pixel. Both chromium (Cr) and indium tin oxide (ITO) have been successfully tested as pixel electrodes.
NASA Astrophysics Data System (ADS)
Cho, Y.; Kumar, A.; Xu, S.; Zou, J.
2016-10-01
Recent studies have shown that micromachined silicon acoustic delay lines can provide a promising solution to achieve real-time photoacoustic tomography without the need for complex transducer arrays and data acquisition electronics. To achieve deeper imaging depth and wider field of view, a longer delay time and therefore delay length are required. However, as the length of the delay line increases, it becomes more vulnerable to structural instability due to reduced mechanical stiffness. In this paper, we report the design, fabrication, and testing of a new silicon acoustic delay line enhanced with 3D printed polymer micro linker structures. First, mechanical deformation of the silicon acoustic delay line (with and without linker structures) under gravity was simulated by using finite element method. Second, the acoustic crosstalk and acoustic attenuation caused by the polymer micro linker structures were evaluated with both numerical simulation and ultrasound transmission testing. The result shows that the use of the polymer micro linker structures significantly improves the structural stability of the silicon acoustic delay lines without creating additional acoustic attenuation and crosstalk. In addition, the improvement of the acoustic acceptance angle of the silicon acoustic delay lines was also investigated to better suppress the reception of unwanted ultrasound signals outside of the imaging plane. These two improvements are expected to provide an effective solution to eliminate current limitations on the achievable acoustic delay time and out-of-plane imaging resolution of micromachined silicon acoustic delay line arrays.
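The link between delay-line length and delay time is simply t = L/c. A sketch using a nominal longitudinal sound speed in single-crystal silicon (~8433 m/s along <100>; this value is an assumption, not stated in the abstract):

```python
def delay_us(length_mm, c_mm_per_us=8.43):
    """Acoustic delay of a silicon delay line, t = L / c.  The sound
    speed (~8433 m/s longitudinal in single-crystal silicon) is a
    nominal assumed value, not taken from the abstract."""
    return length_mm / c_mm_per_us

# Longer lines buy more delay (deeper imaging depth, wider field of
# view) at the cost of mechanical stiffness -- the trade-off the 3D
# printed linker structures are meant to relieve:
print(round(delay_us(84.3), 2))  # an ~84 mm line gives ~10 us of delay
```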
NASA Astrophysics Data System (ADS)
Fabritius, T.; Alarousu, E.; Prykäri, T.; Hast, J.; Myllylä, Risto
2006-02-01
Due to the highly light scattering nature of paper, the imaging depth of optical methods such as optical coherence tomography (OCT) is limited. In this work, we study the effect of refractive index matching on improving the imaging depth of OCT in paper. To this end, four different refractive index matching liquids (ethanol, 1-pentanol, glycerol and benzyl alcohol) with a refraction index between 1.359 and 1.538 were used in experiments. Low coherent light transmission was studied in commercial copy paper sheets, and the results indicate that benzyl alcohol offers the best improvement in imaging depth, while also being sufficiently stable for the intended purpose. Constructed cross-sectional images demonstrate visually that the imaging depth of OCT is considerably improved by optical clearing. Both surfaces of paper sheets can be detected along with information about the sheet's inner structure.
Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes
Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel
2015-01-01
Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553
An, Lin; Li, Peng; Shen, Tueng T.; Wang, Ruikang
2011-01-01
We present a new development of ultrahigh speed spectral domain optical coherence tomography (SDOCT) for human retinal imaging at 850 nm central wavelength by employing two high-speed line scan CMOS cameras, each running at 250 kHz. Through precisely controlling the recording and reading time periods of the two cameras, the SDOCT system realizes an imaging speed of 500,000 A-lines per second, while maintaining both high axial resolution (~8 μm) and acceptable depth ranging (~2.5 mm). With this system, we propose two scanning protocols for human retinal imaging. The first is aimed at isotropic dense sampling and fast scanning speed, enabling 3D imaging within 0.72 s for a region covering 4×4 mm². In this case, the B-frame rate is 700 Hz and the isotropic dense sampling is 500 A-lines along both the fast and slow axes. This scanning protocol minimizes motion artifacts, thus making it possible to perform two-directional averaging so that the signal to noise ratio of the system is enhanced while the degradation of its resolution is minimized. The second protocol is designed to scan the retina over a large field of view, in which 1200 A-lines are captured along both the fast and slow axes, covering 10 mm², to provide overall information about the retinal status. Because of the relatively long imaging time (4 seconds for a 3D scan), motion artifacts are inevitable, making it difficult to interpret the 3D data set, particularly as depth-resolved en-face fundus images. To mitigate this difficulty, we propose to use the relatively highly reflecting retinal pigmented epithelium layer as the reference to flatten the original 3D data set along both the fast and slow axes. We show that the proposed system delivers superb performance for human retina imaging. PMID:22025983
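The quoted scan times follow from the A-line budget: total A-lines divided by the A-line rate gives the pure acquisition time, with the measured times (0.72 s and ~4 s) longer because of camera read-out, flyback, and duty-cycle overhead. A quick sketch:

```python
def volume_time_s(n_fast, n_slow, aline_rate_hz):
    """Pure acquisition time of one OCT volume: total A-lines divided by
    the A-line rate.  Overheads (camera recording/reading duty cycle,
    galvo flyback) make measured times longer, as in the paper."""
    return n_fast * n_slow / aline_rate_hz

print(volume_time_s(500, 500, 500_000))    # dense-isotropic protocol
print(volume_time_s(1200, 1200, 500_000))  # wide-field protocol
```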
Dark Energy Survey Year 1 Results: The Photometric Data Set for Cosmology
NASA Astrophysics Data System (ADS)
Drlica-Wagner, A.; Sevilla-Noarbe, I.; Rykoff, E. S.; Gruendl, R. A.; Yanny, B.; Tucker, D. L.; Hoyle, B.; Carnero Rosell, A.; Bernstein, G. M.; Bechtol, K.; Becker, M. R.; Benoit-Lévy, A.; Bertin, E.; Carrasco Kind, M.; Davis, C.; de Vicente, J.; Diehl, H. T.; Gruen, D.; Hartley, W. G.; Leistedt, B.; Li, T. S.; Marshall, J. L.; Neilsen, E.; Rau, M. M.; Sheldon, E.; Smith, J.; Troxel, M. A.; Wyatt, S.; Zhang, Y.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Banerji, M.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Capozzi, D.; Carretero, J.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Dietrich, J. P.; Doel, P.; Evrard, A. E.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Gschwend, J.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jeltema, T.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Lima, M.; Lin, H.; Maia, M. A. G.; Martini, P.; McMahon, R. G.; Melchior, P.; Menanteau, F.; Miquel, R.; Nichol, R. C.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Vikram, V.; Walker, A. R.; Wechsler, R. H.; Zuntz, J.; DES Collaboration
2018-04-01
We describe the creation, content, and validation of the Dark Energy Survey (DES) internal year-one cosmology data set, Y1A1 GOLD, in support of upcoming cosmological analyses. The Y1A1 GOLD data set is assembled from multiple epochs of DES imaging and consists of calibrated photometric zero-points, object catalogs, and ancillary data products—e.g., maps of survey depth and observing conditions, star–galaxy classification, and photometric redshift estimates—that are necessary for accurate cosmological analyses. The Y1A1 GOLD wide-area object catalog consists of ∼137 million objects detected in co-added images covering ∼1800 deg² in the DES grizY filters. The 10σ limiting magnitude for galaxies is g = 23.4, r = 23.2, i = 22.5, z = 21.8, and Y = 20.1. Photometric calibration of Y1A1 GOLD was performed by combining nightly zero-point solutions with stellar locus regression, and the absolute calibration accuracy is better than 2% over the survey area. DES Y1A1 GOLD is the largest photometric data set at the achieved depth to date, enabling precise measurements of cosmic acceleration at z ≲ 1.
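The limiting magnitudes and zero-point calibration above rest on the standard relation m = ZP − 2.5 log10(counts). A minimal sketch (the zero point and count values are hypothetical, not DES pipeline outputs):

```python
import math

def calibrated_mag(counts, zero_point):
    """Calibrated magnitude from instrumental counts via a photometric zero point."""
    return zero_point - 2.5 * math.log10(counts)

# Ten times the counts means 2.5 magnitudes brighter (i.e. a smaller magnitude):
print(round(calibrated_mag(1_000.0, 25.0), 3))   # 17.5
print(round(calibrated_mag(10_000.0, 25.0), 3))  # 15.0
```

Note that the quoted 2% absolute calibration accuracy corresponds to roughly 0.02 mag of uncertainty in the zero point.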
Segmentation of malignant lesions in 3D breast ultrasound using a depth-dependent model.
Tan, Tao; Gubern-Mérida, Albert; Borelli, Cristina; Manniesing, Rashindra; van Zelst, Jan; Wang, Lei; Zhang, Wei; Platel, Bram; Mann, Ritse M; Karssemeijer, Nico
2016-07-01
Automated 3D breast ultrasound (ABUS) has been proposed as a complementary screening modality to mammography for early detection of breast cancers. To facilitate the interpretation of ABUS images, automated diagnosis and detection techniques are being developed, in which malignant lesion segmentation plays an important role. However, automated segmentation of cancer in ABUS is challenging since lesion edges might not be well defined. In this study, the authors aim at developing an automated segmentation method for malignant lesions in ABUS that is robust to ill-defined cancer edges and posterior shadowing. A segmentation method using depth-guided dynamic programming based on spiral scanning is proposed. The method automatically adjusts aggressiveness of the segmentation according to the position of the voxels relative to the lesion center. Segmentation is more aggressive in the upper part of the lesion (close to the transducer) than at the bottom (far away from the transducer), where posterior shadowing is usually visible. The authors used Dice similarity coefficient (Dice) for evaluation. The proposed method is compared to existing state of the art approaches such as graph cut, level set, and smart opening and an existing dynamic programming method without depth dependence. In a dataset of 78 cancers, our proposed segmentation method achieved a mean Dice of 0.73 ± 0.14. The method outperforms an existing dynamic programming method (0.70 ± 0.16) on this task (p = 0.03) and it is also significantly (p < 0.001) better than graph cut (0.66 ± 0.18), level set based approach (0.63 ± 0.20) and smart opening (0.65 ± 0.12). The proposed depth-guided dynamic programming method achieves accurate breast malignant lesion segmentation results in automated breast ultrasound.
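The Dice similarity coefficient used for evaluation is 2|A∩B| / (|A| + |B|) over the segmented and reference voxel sets; a minimal sketch with hypothetical voxel coordinates (not data from the study):

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

segmented = {(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0)}  # hypothetical lesion voxels
reference = {(0, 1, 0), (1, 0, 0), (1, 1, 0), (2, 1, 0)}  # hypothetical ground truth
print(dice(segmented, reference))  # 0.75 (3 shared voxels, 4 in each set)
```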
Validation of luminescent source reconstruction using spectrally resolved bioluminescence images
NASA Astrophysics Data System (ADS)
Virostko, John M.; Powers, Alvin C.; Jansen, E. D.
2008-02-01
This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in reconstruction of light source depth and intensity. Constant intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity using these spectrally filtered images. For sources shallower than the mean free path of light there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average error in depth and intensity reconstruction was less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. The reconstruction accuracy suffered in inhomogeneous tissue with accuracy influenced by the choice of optical properties used in reconstruction.
Phase pupil functions for focal-depth enhancement derived from a Wigner distribution function.
Zalvidea, D; Sicre, E E
1998-06-10
A method for obtaining phase-retardation functions, which give rise to an increase of the image focal depth, is proposed. To this end, the Wigner distribution function corresponding to a specific aperture that has an associated small depth of focus in image space is conveniently sheared in the phase-space domain to generate a new Wigner distribution function. From this new function a more uniform on-axis image irradiance can be accomplished. This approach is illustrated by comparison of the imaging performance of both the derived phase function and a previously reported logarithmic phase distribution.
Regan, Caitlin; Hayakawa, Carole; Choi, Bernard
2017-12-01
Due to its simplicity and low cost, laser speckle imaging (LSI) has achieved widespread use in biomedical applications. However, interpretation of the blood-flow maps remains ambiguous, as LSI enables only limited visualization of vasculature below scattering layers such as the epidermis and skull. Here, we describe a computational model that enables flexible in-silico study of the impact of these factors on LSI measurements. The model uses Monte Carlo methods to simulate light and momentum transport in a heterogeneous tissue geometry. The virtual detectors of the model track several important characteristics of light. This model enables study of LSI aspects that may be difficult or unwieldy to address in an experimental setting, and enables detailed study of the fundamental origins of speckle contrast modulation in tissue-specific geometries. We applied the model to an in-depth exploration of the spectral dependence of speckle contrast signal in the skin, the effects of epidermal melanin content on LSI, and the depth-dependent origins of our signal. We found that LSI of transmitted light allows for a more homogeneous integration of the signal from the entire bulk of the tissue, whereas epi-illumination measurements of contrast are limited to a fraction of the light penetration depth. We quantified the spectral depth dependence of our contrast signal in the skin, and did not observe a statistically significant effect of epidermal melanin on speckle contrast. Finally, we corroborated these simulated results with experimental LSI measurements of flow beneath a thin absorbing layer. The results of this study suggest the use of LSI in the clinic to monitor perfusion in patients with different skin types, or inhomogeneous epidermal melanin distributions.
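Speckle contrast itself is the simple statistic K = σ/⟨I⟩ computed over a local window of pixel intensities; lower K indicates more blurring of the speckle pattern by flow. A minimal sketch with illustrative intensity values (this is the standard LSI statistic, not the authors' Monte Carlo model):

```python
from statistics import mean, pstdev

def speckle_contrast(window):
    """Local speckle contrast K = sigma / <I> over a window of pixel intensities."""
    return pstdev(window) / mean(window)

static_region  = [10, 90, 5, 95, 12, 88]   # fully developed speckle: K near 1
flowing_region = [48, 52, 50, 49, 51, 50]  # motion-blurred speckle: K near 0
print(speckle_contrast(static_region) > speckle_contrast(flowing_region))  # True
```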
Joint TEM and MT aquifer study in the Atacama Desert, North Chile
NASA Astrophysics Data System (ADS)
Ruthsatz, Alexander D.; Sarmiento Flores, Alvaro; Diaz, Daniel; Reinoso, Pablo Salazar; Herrera, Cristian; Brasse, Heinrich
2018-06-01
The Atacama Desert represents one of the driest regions on Earth, and despite the absence of sustainable clean water reserves the demand has increased drastically since 1970 as a result of a growing population and expanding mining activities. Magnetotelluric (MT) and transient electromagnetic (TEM) measurements were carried out for groundwater exploration in late 2015 in the area of the Profeta Basin at the western margin of the Chilean Precordillera. The two methods complement each other: while MT in general attains larger penetration depths, TEM allows better resolution of near-surface layers; furthermore, TEM is free from galvanic distortion. Data were collected along three profiles, enabling a continuous resistivity image from the surface to at least several hundred meters depth. TEM data were inverted in a 1-D manner, consistently yielding a poorly conductive near-surface layer with a thickness of approximately 30 m and, below it, a well-conducting layer with resistivities around 10 Ωm, which we interpret as the aquifer. At marginal sites of the main SW-NE profile the resistive basement was found at about 150 m depth. These depths are confirmed by interpretation of the MT soundings, which were first inverted with a 2-D approach and then by 3-D inversion, as there are clear indications of three-dimensionality. Several modeling runs were performed with different combinations of transfer functions and smoothing parameters. Generally, the MT and TEM results agree reasonably well, and an overall image of the resistivity structures in the Profeta Basin could be achieved. The aquifer reaches depths of more than 500 m in parts and, by applying Archie's law, resistivities of 1 Ωm can be assumed, indicating highly saline fluids from the source region of the surrounding high Andes under persisting arid conditions.
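The final Archie's-law inference can be sketched as follows; the porosity and cementation exponent here are illustrative assumptions, not values reported by the survey:

```python
def archie_fluid_resistivity(rho_bulk, porosity, m=2.0, a=1.0):
    """Invert Archie's law, rho_bulk = a * rho_w * porosity**(-m), for the
    pore-fluid resistivity rho_w (clean, fully saturated formation)."""
    return rho_bulk * porosity ** m / a

# ~10 Ohm*m bulk aquifer resistivity (from the abstract); porosity 0.3 and
# m = 2 are assumed values for illustration only.
rho_w = archie_fluid_resistivity(10.0, porosity=0.3)
print(round(rho_w, 3))  # ~0.9 Ohm*m, i.e. on the order of 1 Ohm*m: saline pore water
```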
Hibbard, Paul B; Scott-Brown, Kenneth C; Haigh, Emma C; Adrain, Melanie
2014-01-01
One of the greatest challenges in visual neuroscience is that of linking neural activity with perceptual experience. In the case of binocular depth perception, important insights have been achieved through comparing neural responses and the perception of depth, for carefully selected stimuli. One of the most important types of stimulus that has been used here is the anti-correlated random dot stereogram (ACRDS). In these stimuli, the contrast polarity of one half of a stereoscopic image is reversed. While neurons in cortical area V1 respond reliably to the binocular disparities in ACRDS, they do not create a sensation of depth. This discrepancy has been used to argue that depth perception must rely on neural activity elsewhere in the brain. Currently, the psychophysical results on which this argument rests are not clear-cut. While it is generally assumed that ACRDS do not support the perception of depth, some studies have reported that some people, some of the time, perceive depth in some types of these stimuli. Given the importance of these results for understanding the neural correlates of stereopsis, we studied depth perception in ACRDS using a large number of observers, in order to provide an unambiguous conclusion about the extent to which these stimuli support the perception of depth. We presented observers with random dot stereograms in which correlated dots were presented in a surrounding annulus and correlated or anti-correlated dots were presented in a central circular region. While observers could reliably report the depth of the central region for correlated stimuli, we found no evidence for depth perception in static or dynamic anti-correlated stimuli. Confidence ratings for stereoscopic perception were uniformly low for anti-correlated stimuli, but showed normal variation with disparity for correlated stimuli. These results establish that the inability of observers to perceive depth in ACRDS is a robust phenomenon.
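The stimulus manipulation underlying ACRDS is simply a contrast-polarity flip of every dot in one eye's half-image, with the binocular disparities left intact. A minimal sketch of generating such a dot pair (the list-of-tuples representation and the parameters are illustrative, not the authors' stimulus code):

```python
import random

def random_dot_pair(n_dots, disparity, anticorrelated=False, seed=0):
    """Left/right half-images for a (anti-)correlated random dot stereogram.
    Each dot is (x, y, polarity), with polarity +1 = bright, -1 = dark."""
    rng = random.Random(seed)
    left = [(rng.random(), rng.random(), rng.choice((-1, 1)))
            for _ in range(n_dots)]
    flip = -1 if anticorrelated else 1
    # Same dot positions shifted by the disparity; polarity reversed for ACRDS.
    right = [(x + disparity, y, p * flip) for (x, y, p) in left]
    return left, right

L, R = random_dot_pair(100, disparity=0.01, anticorrelated=True)
print(all(pl == -pr for (*_, pl), (*_, pr) in zip(L, R)))  # True
```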
Organisational culture and learning: a case study.
Bell, Elaine
2013-11-01
To explore the impact organisational cultures have on the learning experience of student nurses and identify the influencing factors. A case study approach was used, the single case being a Defence School of Health Care Studies (DSHCS) and the multiple units of analysis: student nurses, the lecturers and Student Standing Orders. An in-depth, three-dimensional picture was achieved using multiple data collection methods: interview, survey, observation and document analysis. The findings suggest that the DSHCS is perceived to be a sub-culture within a dominant civilian learning culture. Generally, the students and staff believed that the DSHCS is an excellent learning environment and that the defence students overall are high achievers. The common themes that emerged from the data were image, ethos, environment, discipline, support, welfare and a civilian versus military way of thinking. The learning experience of defence student nurses is very positive and enhanced by the positive learning culture of the civilian Higher Education Institution. The factors influencing a positive learning experience that can be affected by the overarching culture are discipline, image, ethos of adult learning, support and welfare. Copyright © 2013 Elsevier Ltd. All rights reserved.
Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Liang, Kaicheng; Wang, Zhao; Cleveland, Cody; Booth, Lucas; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Cable, Alex E.; Mashimo, Hiroshi; Langer, Robert; Traverso, Giovanni; Fujimoto, James G.
2016-01-01
We demonstrate a micromotor balloon imaging catheter for ultrahigh speed endoscopic optical coherence tomography (OCT) which provides wide area, circumferential structural and angiographic imaging of the esophagus without contrast agents. Using a 1310 nm MEMS tunable wavelength swept VCSEL light source, the system has a 1.2 MHz A-scan rate and ~8.5 µm axial resolution in tissue. The micromotor balloon catheter enables circumferential imaging of the esophagus at 240 frames per second (fps) with a ~30 µm (FWHM) spot size. Volumetric imaging is achieved by proximal pullback of the micromotor assembly within the balloon at 1.5 mm/sec. Volumetric data consisting of 4200 circumferential images of 5,000 A-scans each over a 2.6 cm length, covering a ~13 cm2 area is acquired in <18 seconds. A non-rigid image registration algorithm is used to suppress motion artifacts from non-uniform rotational distortion (NURD), cardiac motion or respiration. En face OCT images at various depths can be generated. OCT angiography (OCTA) is computed using intensity decorrelation between sequential pairs of circumferential scans and enables three-dimensional visualization of vasculature. Wide area volumetric OCT and OCTA imaging of the swine esophagus in vivo is demonstrated. PMID:27570688
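The angiography signal here comes from intensity decorrelation between sequential pairs of circumferential scans; one common formulation (a sketch, not necessarily the authors' exact estimator) is one minus the normalized cross-correlation of the two frames:

```python
from statistics import mean

def decorrelation(frame_a, frame_b):
    """1 - normalized cross-correlation of two intensity frames.
    Static tissue gives values near 0; flowing blood decorrelates the speckle."""
    ma, mb = mean(frame_a), mean(frame_b)
    num = sum((a - ma) * (b - mb) for a, b in zip(frame_a, frame_b))
    den = (sum((a - ma) ** 2 for a in frame_a)
           * sum((b - mb) ** 2 for b in frame_b)) ** 0.5
    return 1.0 - num / den

static = [10.0, 20.0, 30.0, 40.0, 50.0]
shifted = [v + 5.0 for v in static]              # pure brightness offset
print(round(decorrelation(static, shifted), 6))  # 0.0: no decorrelation
```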
Ultrafast Ultrasound Imaging With Cascaded Dual-Polarity Waves.
Zhang, Yang; Guo, Yuexin; Lee, Wei-Ning
2018-04-01
Ultrafast ultrasound imaging using plane or diverging waves, instead of focused beams, has advanced greatly the development of novel ultrasound imaging methods for evaluating tissue functions beyond anatomical information. However, the sonographic signal-to-noise ratio (SNR) of ultrafast imaging remains limited due to the lack of transmission focusing, and thus insufficient acoustic energy delivery. We hereby propose a new ultrafast ultrasound imaging methodology with cascaded dual-polarity waves (CDWs), which consists of a pulse train with positive and negative polarities. A new coding scheme and a corresponding linear decoding process were thereby designed to obtain the recovered signals with increased amplitude, thus increasing the SNR without sacrificing the frame rate. The newly designed CDW ultrafast ultrasound imaging technique achieved higher quality B-mode images than coherent plane-wave compounding (CPWC) and multiplane wave (MW) imaging in a calibration phantom, ex vivo pork belly, and in vivo human back muscle. CDW imaging shows a significant improvement in the SNR (10.71 dB versus CPWC and 7.62 dB versus MW), penetration depth (36.94% versus CPWC and 35.14% versus MW), and contrast ratio in deep regions (5.97 dB versus CPWC and 5.05 dB versus MW) without compromising other image quality metrics, such as spatial resolution and frame rate. The enhanced image qualities and ultrafast frame rates offered by CDW imaging beget great potential for various novel imaging applications.
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenselet-based light field cameras is the limited resolution. This limitation comes from the structure where a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method can produce clearer images compared to the original sub-aperture images and the case without depth refinement.
Development of a large-screen high-definition laser video projection system
NASA Astrophysics Data System (ADS)
Clynick, Tony J.
1991-08-01
A prototype laser video projector which uses electronic, optical, and mechanical means to project a television picture is described. With the primary goal of commercial viability, the price/performance ratio of the chosen means is critical. The fundamental requirement has been to achieve high-brightness, high-definition images of at least movie-theater size, at a cost comparable with other existing large-screen video projection technologies, while having the opportunity of developing and exploiting the unique properties of the laser-projected image, such as its infinite depth of field. Two argon lasers are used in combination with a dye laser to achieve a range of colors which, despite not being identical to those of a CRT, prove to be subjectively acceptable. Acousto-optic modulation in combination with a rotary polygon scanner, digital video line stores, novel specialized electro-optics, and a galvanometric frame scanner form the basis of the projection technique, achieving a 30 MHz video bandwidth, high-definition scan rates (1125/60 and 1250/50), high contrast ratio, and good optical efficiency. Auditorium projection of HDTV pictures wider than 20 meters is possible. Applications including 360° projection and 3-D video provide further scope for exploitation of the HD laser video projector.
Electrically tunable metasurface perfect absorbers for ultrathin mid-infrared optical modulators.
Yao, Yu; Shankar, Raji; Kats, Mikhail A; Song, Yi; Kong, Jing; Loncar, Marko; Capasso, Federico
2014-11-12
Dynamically reconfigurable metasurfaces open up unprecedented opportunities in applications such as high capacity communications, dynamic beam shaping, hyperspectral imaging, and adaptive optics. The realization of high performance metasurface-based devices remains a great challenge due to very limited tuning ranges and modulation depths. Here we show that a widely tunable metasurface composed of optical antennas on graphene can be incorporated into a subwavelength-thick optical cavity to create an electrically tunable perfect absorber. By switching the absorber in and out of the critical coupling condition via the gate voltage applied on graphene, a modulation depth of up to 100% can be achieved. In particular, we demonstrated ultrathin (thickness < λ0/10) high speed (up to 20 GHz) optical modulators over a broad wavelength range (5-7 μm). The operating wavelength can be scaled from the near-infrared to the terahertz by simply tailoring the metasurface and cavity dimensions.
Highly Resolved Intravital Striped-illumination Microscopy of Germinal Centers
Andresen, Volker; Sporbert, Anje
2014-01-01
Monitoring cellular communication by intravital deep-tissue multi-photon microscopy is the key for understanding the fate of immune cells within thick tissue samples and organs in health and disease. By controlling the scanning pattern in multi-photon microscopy and applying appropriate numerical algorithms, we developed a striped-illumination approach, which enabled us to achieve 3-fold better axial resolution and improved signal-to-noise ratio, i.e. contrast, in more than 100 µm tissue depth within highly scattering tissue of lymphoid organs as compared to standard multi-photon microscopy. The acquisition speed as well as photobleaching and photodamage effects were similar to standard photo-multiplier-based technique, whereas the imaging depth was slightly lower due to the use of field detectors. By using the striped-illumination approach, we are able to observe the dynamics of immune complex deposits on secondary follicular dendritic cells – on the level of a few protein molecules in germinal centers. PMID:24748007
Pose-Invariant Face Recognition via RGB-D Images.
Sang, Gaoli; Li, Jing; Zhao, Qijun
2016-01-01
Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.
Dillman, Jonathan R; Chen, Shigao; Davenport, Matthew S; Zhao, Heng; Urban, Matthew W; Song, Pengfei; Watcharotone, Kuanwong; Carson, Paul L
2015-03-01
There is a paucity of data available regarding the repeatability and reproducibility of superficial shear wave speed (SWS) measurements at imaging depths relevant to the pediatric population. To assess the repeatability and reproducibility of superficial shear wave speed measurements acquired from elasticity phantoms at varying imaging depths using three imaging methods, two US systems and multiple operators. Soft and hard elasticity phantoms manufactured by Computerized Imaging Reference Systems Inc. (Norfolk, VA) were utilized for our investigation. Institution No. 1 used an Acuson S3000 US system (Siemens Medical Solutions USA, Malvern, PA) and three shear wave imaging method/transducer combinations, while institution No. 2 used an Aixplorer US system (SuperSonic Imagine, Bothell, WA) and two different transducers. Ten stiffness measurements were acquired from each phantom at three depths (1.0 cm, 2.5 cm and 4.0 cm) by four operators at each institution. Student's t-test was used to compare SWS measurements between imaging techniques, while SWS measurement agreement was assessed with two-way random effects single-measure intra-class correlation coefficients (ICCs) and coefficients of variation. Mixed model regression analysis determined the effect of predictor variables on SWS measurements. For the soft phantom, the average of mean SWS measurements across the various imaging methods and depths was 0.84 ± 0.04 m/s (mean ± standard deviation) for the Acuson S3000 system and 0.90 ± 0.02 m/s for the Aixplorer system (P = 0.003). For the hard phantom, the average of mean SWS measurements across the various imaging methods and depths was 2.14 ± 0.08 m/s for the Acuson S3000 system and 2.07 ± 0.03 m/s Aixplorer system (P > 0.05). The coefficients of variation were low (0.5-6.8%), and interoperator agreement was near-perfect (ICCs ≥ 0.99). Shear wave imaging method and imaging depth significantly affected measured SWS (P < 0.0001). 
Superficial shear wave speed measurements in elasticity phantoms demonstrate minimal variability across imaging method/transducer combinations, imaging depths and operators. The exact clinical significance of this variation is uncertain and may change according to organ and specific disease state.
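The reported coefficients of variation are simply the standard deviation of repeated SWS readings divided by their mean; a quick check with hypothetical readings (values chosen for illustration, not taken from the study):

```python
from statistics import mean, stdev

def coefficient_of_variation(measurements):
    """CV (%) = sample standard deviation / mean * 100."""
    return stdev(measurements) / mean(measurements) * 100.0

# Ten hypothetical repeated SWS readings (m/s) at one imaging depth:
sws = [0.84, 0.85, 0.83, 0.84, 0.86, 0.84, 0.83, 0.85, 0.84, 0.84]
print(round(coefficient_of_variation(sws), 1))  # ~1.1, inside the reported 0.5-6.8% range
```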
Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.
Arikan, Murat; Preiner, Reinhold; Wimmer, Michael
2016-02-01
With the enormous advances in acquisition technology in recent years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud, and a high-resolution texture is generated over the mesh from images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method has been proposed that textures a set of depth maps in a preprocessing step and stitches them at runtime to represent large scenes. However, the rendering performance of this method depends strongly on the number of depth maps and their resolution. Moreover, in the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method that breaks these dependencies by introducing an efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from image cameras and then perform a graph-cut-based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify, for each view ray, which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.
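The core runtime operation described above, finding the closest ray-surface intersection across several depth maps, can be illustrated with a deliberately simplified sketch: each depth map is treated as an orthographic height field and the ray is marched in fixed steps until its depth crosses the stored value. The paper's actual method uses perspective depth maps and precomputed point-to-image assignments to avoid testing every map; the functions and step scheme below are our own simplification:

```python
def raymarch_depth_map(depth, origin, direction, step=0.5, max_t=100.0):
    """March a ray through a simple orthographic height-field depth map.

    `depth` is a 2D grid (list of rows) storing the surface depth z at
    integer (x, y) cells; the ray hits the surface at the first sample
    whose z reaches the stored depth. Returns the ray parameter t of the
    hit, or None if the ray misses within max_t.
    """
    t = 0.0
    while t <= max_t:
        x = origin[0] + t * direction[0]
        y = origin[1] + t * direction[1]
        z = origin[2] + t * direction[2]
        ix, iy = int(x), int(y)
        if 0 <= iy < len(depth) and 0 <= ix < len(depth[0]):
            if z >= depth[iy][ix]:  # crossed the surface stored in this map
                return t
        t += step
    return None

def closest_hit(depth_maps, origin, direction):
    """Pick, across several depth maps, the nearest ray-surface intersection."""
    hits = [raymarch_depth_map(d, origin, direction) for d in depth_maps]
    hits = [t for t in hits if t is not None]
    return min(hits) if hits else None
```

Brute-force testing of every map per ray, as `closest_hit` does here, is exactly the cost the paper's point-to-image assignment is designed to avoid.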
Penetration depth measurement of near-infrared hyperspectral imaging light for milk powder
USDA-ARS?s Scientific Manuscript database
The increasingly common application of the near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying the penetration depth of NIR hyperspectral imaging light...
The Dark Energy Survey Data Release 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, T.M.C.; et al.
We describe the first public data release of the Dark Energy Survey, DES DR1, consisting of reduced single-epoch images, coadded images, coadded source catalogs, and associated products and services assembled over the first three years of DES science operations. DES DR1 is based on optical/near-infrared imaging from 345 distinct nights (August 2013 to February 2016) by the Dark Energy Camera mounted on the 4-m Blanco telescope at Cerro Tololo Inter-American Observatory in Chile. We release data from the DES wide-area survey covering ~5,000 sq. deg. of the southern Galactic cap in five broad photometric bands, grizY. DES DR1 has a median delivered point-spread function of g = 1.12, r = 0.96, i = 0.88, z = 0.84, and Y = 0.90 arcsec FWHM, a photometric precision of < 1% in all bands, and an astrometric precision of 151 mas. The median coadded catalog depth for a 1.95" diameter aperture at S/N = 10 is g = 24.33, r = 24.08, i = 23.44, z = 22.69, and Y = 21.44 mag. DES DR1 includes nearly 400M distinct astronomical objects detected in ~10,000 coadd tiles of size 0.534 sq. deg. produced from ~39,000 individual exposures. Benchmark galaxy and stellar samples contain ~310M and ~80M objects, respectively, following a basic object quality selection. These data are accessible through a range of interfaces, including query web clients, image cutout servers, Jupyter notebooks, and an interactive coadd image visualization tool. DES DR1 constitutes the largest photometric data set to date at the achieved depth and photometric precision.
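The catalog depths quoted above are on the astronomical magnitude scale, which is logarithmic in flux: a difference of Δm magnitudes corresponds to a flux ratio of 10^(-0.4 Δm). A minimal sketch of that standard conversion, using two depth values from the abstract (the function name is our own):

```python
def flux_ratio(m1, m2):
    """Flux ratio f1/f2 for two magnitudes: f1/f2 = 10**(-0.4 * (m1 - m2))."""
    return 10 ** (-0.4 * (m1 - m2))

# A source at the g-band coadd depth (g = 24.33) is roughly 14x fainter
# than one at the Y-band depth (Y = 21.44), since the magnitude scale
# compresses a large dynamic range in flux.
ratio = flux_ratio(24.33, 21.44)
print(round(1.0 / ratio, 1))
```

One magnitude always corresponds to a flux factor of 10^0.4 ≈ 2.512, so the 2.89 mag spread between the deepest and shallowest bands is a factor of ~14 in limiting flux.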