Sample records for imaging depth range

  1. Large depth of focus dynamic micro integral imaging for optical see-through augmented reality display using a focus-tunable lens.

    PubMed

    Shen, Xin; Javidi, Bahram

    2018-03-01

    We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with enhanced depth range of a 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay elemental images at various positions to the micro lens array. Based on resolution priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system can significantly enhance the depth range of a 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and mitigates the vergence-accommodation conflict of a traditional augmented reality display.

  2. High resolution axicon-based endoscopic FD OCT imaging with a large depth range

    NASA Astrophysics Data System (ADS)

    Lee, Kye-Sung; Hurley, William; Deegan, John; Dean, Scott; Rolland, Jannick P.

    2010-02-01

    Endoscopic imaging in tubular structures, such as the tracheobronchial tree, could benefit from imaging optics with an extended depth of focus (DOF). Such optics could accommodate the varying sizes of tubular structures across patients and along the tree within a single patient. In this paper, we demonstrate an extended DOF without sacrificing resolution, showing rotational images of biological tubular samples with 2.5 μm axial resolution, 10 μm lateral resolution, and >4 mm depth range using a custom-designed probe.

  3. Enhanced truncated-correlation photothermal coherence tomography with application to deep subsurface defect imaging and 3-dimensional reconstructions

    NASA Astrophysics Data System (ADS)

    Tavakolian, Pantea; Sivagurunathan, Koneswaran; Mandelis, Andreas

    2017-07-01

    Photothermal diffusion-wave imaging is a promising technique for non-destructive evaluation and medical applications. Several diffusion-wave techniques have been developed to produce depth-resolved planar images of solids and to overcome the imaging depth and image blurring limitations imposed by the physics of parabolic diffusion waves. Truncated-Correlation Photothermal Coherence Tomography (TC-PCT) is the most successful class of these methodologies to date, providing 3-D subsurface visualization with maximum depth penetration and high axial and lateral resolution. To extend the depth range and the axial and lateral resolution, an in-depth analysis of TC-PCT, a novel imaging system with improved instrumentation, and an optimized reconstruction algorithm over the original TC-PCT technique are developed. Thermal waves produced by a laser chirped pulsed heat source in a finite-thickness solid and the image reconstruction algorithm are investigated from a theoretical point of view. 3-D visualization of subsurface defects utilizing the new TC-PCT system is reported. The results demonstrate that this method is able to detect subsurface defects at depths of ~4 mm in a steel sample, which represents a dynamic range improvement by a factor of 2.6 compared to the original TC-PCT. This depth does not represent the upper limit of the enhanced TC-PCT. Lateral resolution in the steel sample was measured to be ~31 μm.

  4. Optical-domain subsampling for data efficient depth ranging in Fourier-domain optical coherence tomography

    PubMed Central

    Siddiqui, Meena; Vakoc, Benjamin J.

    2012-01-01

    Recent advances in optical coherence tomography (OCT) have led to higher-speed sources that support imaging over longer depth ranges. Limitations in the bandwidth of state-of-the-art acquisition electronics, however, prevent adoption of these advances into clinical applications. Here, we introduce optical-domain subsampling as a method for imaging at high speed and over extended depth ranges, but with a lower acquisition bandwidth than that required by conventional approaches. Optically subsampled laser sources use a discrete set of wavelengths to alias fringe signals along an extended depth range into a bandwidth-limited frequency window. By detecting the complex fringe signals and under the assumption of a depth-constrained signal, optical-domain subsampling enables recovery of the depth-resolved scattering signal without overlapping artifacts from this bandwidth-limited window. We highlight the key principles behind optical-domain subsampled imaging and demonstrate them experimentally using a polygon-filter-based swept-source laser that includes an intra-cavity Fabry-Perot (FP) etalon. PMID:23038343
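
    The aliasing arithmetic behind optical-domain subsampling can be illustrated numerically: keeping every Nth wavelength coarsens the wavenumber step, which shrinks the unambiguous depth window, so a deep reflector folds back to its depth modulo that window. A minimal sketch with made-up, dimensionless parameters (not the authors' laser values):

```python
import numpy as np

# Assumed, dimensionless parameters (illustrative only).
N_dense = 1024              # conventional samples across one sweep
sub = 8                     # keep every 8th wavelength (optical subsampling)
dk = np.pi / 512            # wavenumber step of the dense grid
z_true = 400.0              # reflector depth, in the same arbitrary units

k = np.arange(N_dense) * dk
fringe = np.exp(2j * k * z_true)     # complex fringe (IQ detection assumed)

# Subsampled acquisition: fewer samples, coarser step, smaller alias window.
fringe_sub = fringe[::sub]
dk_sub = dk * sub
z_window = np.pi / dk_sub            # unambiguous depth window (= 64 units here)

# The FFT peak of the subsampled fringe gives the depth folded into the window.
M = fringe_sub.size
peak = int(np.argmax(np.abs(np.fft.fft(fringe_sub))))
z_alias = peak * np.pi / (dk_sub * M)

print(z_true % z_window, z_alias)    # both ~16: depth recovered modulo the window
```

    Detecting the complex (IQ) fringe is what keeps the folded copies from overlapping their mirror images, matching the depth-constrained assumption in the abstract.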

  5. High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.

    PubMed

    Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S

    2018-03-05

    A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
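
    In the high-SNR regime, the standard cross-correlation estimate mentioned above can be sketched as follows: correlate the per-gate photon counts against a reference instrumental response and convert the best-matching temporal shift into depth. The gate step, pulse width, signal level, and stand-off distance below are made-up values, not those of the reported system:

```python
import numpy as np

c = 3e8                          # speed of light, m/s
dt = 50e-12                      # hypothetical gate delay step (50 ps)
delays = np.arange(400) * dt

# Reference instrumental response (Gaussian), centred in the delay window.
sigma = 200e-12
ref = np.exp(-0.5 * ((delays - delays.mean()) / sigma) ** 2)

# Simulated photon counts from a target at ~2 m stand-off (round trip 2z/c).
z_true = 2.0
t0 = 2 * z_true / c
signal = np.exp(-0.5 * ((delays - t0) / sigma) ** 2)
rng = np.random.default_rng(0)
counts = rng.poisson(50 * signal + 1)          # shot noise + flat background

# Cross-correlate the counts with the reference and convert the lag to depth.
corr = np.correlate(counts - counts.mean(), ref - ref.mean(), mode="full")
shift = np.argmax(corr) - (len(ref) - 1)       # lag in delay steps
t_est = delays.mean() + shift * dt             # ref was centred at delays.mean()
z_est = c * t_est / 2                          # ~2 m, within one gate step
```

    Each 50 ps delay step corresponds to 7.5 mm of depth, so sub-centimetre uncertainty here comes directly from the gate scan, not from per-photon timing electronics.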

  6. Fast range estimation based on active range-gated imaging for coastal surveillance

    NASA Astrophysics Data System (ADS)

    Kong, Qingshan; Cao, Yinan; Wang, Xinwei; Tong, Youwan; Zhou, Yan; Liu, Yuliang

    2012-11-01

    Coastal surveillance is very important because it is useful for search and rescue, detection of illegal immigration, harbor security, and so on. Furthermore, range estimation is critical for precisely detecting the target. A range-gated laser imaging sensor is suitable for high-accuracy ranging, especially at night and without moonlight. Generally, before detecting the target, it is necessary to change the gate delay time until the target is captured. There are two operating modes for a range-gated imaging sensor: a passive imaging mode and a gated viewing mode. First, the sensor operates in passive mode, only capturing scenes with the ICCD; once an object appears in the monitored area, we obtain the coarse range of the target from the imaging geometry/projective transform. Then the sensor switches to gated viewing mode; applying microsecond laser pulses and matched sensor gate widths, we obtain the range of targets from at least two consecutive images with trapezoid-shaped range-intensity profiles. Based on the first step, we can calculate a rough range value and quickly set the delay time at which the target is detected. This technique overcomes the depth resolution limitation of 3D active imaging and enables super-resolution depth mapping with a reduction of imaging data processing. With these two steps, we can quickly obtain the distance between the object and the sensor.
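
    The gated-viewing step can be sketched with the ratio form of the range-intensity method: if two successive gates have overlapping, linearly rising and falling range-intensity profiles, the per-pixel intensity ratio interpolates the range inside the gate slice. The function below is a generic sketch under that linear-profile assumption; all names and values are hypothetical:

```python
def range_from_two_gates(i1: float, i2: float, z0: float, slice_depth: float) -> float:
    """Interpolate range inside a gate slice from two gated images.

    i1, i2: pixel intensities from two successive gate delays
    (assumed linear, complementary range-intensity profiles);
    z0: range at the near edge of the overlap region;
    slice_depth: depth covered by the overlap.
    """
    ratio = i2 / (i1 + i2)       # 0 at the near edge, 1 at the far edge
    return z0 + ratio * slice_depth

# Example: equal intensities in both gates place the target mid-slice.
z = range_from_two_gates(1.0, 1.0, z0=100.0, slice_depth=3.0)   # 101.5
```

    This is why two consecutive images suffice for a sub-gate ("super-resolution") depth map: the ratio cancels reflectivity, leaving only the geometric overlap.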

  7. Three-dimensional anterior segment imaging in patients with type 1 Boston Keratoprosthesis with switchable full depth range swept source optical coherence tomography

    PubMed Central

    Poddar, Raju; Cortés, Dennis E.; Werner, John S.; Mannis, Mark J.

    2013-01-01

    A high-speed (100 kHz A-scan rate) complex-conjugate-resolved 1 μm swept-source optical coherence tomography (SS-OCT) system using coherence revival of the light source is suitable for dense three-dimensional (3-D) imaging of the anterior segment. The short acquisition time helps to minimize the influence of motion artifacts. The extended depth range of the SS-OCT system allows topographic analysis of clinically relevant images of the entire depth of the anterior segment of the eye. Patients with the type 1 Boston Keratoprosthesis (KPro) require evaluation of the full anterior segment depth. Current commercially available OCT systems are not suitable for this application due to limited acquisition speed, resolution, and axial imaging range. Moreover, most commonly used research-grade and some clinical OCT systems implement a commercially available swept source (Axsun) that offers only a 3.7 mm imaging range (in air) in its standard configuration. We describe the implementation of a common swept laser with a built-in k-clock to allow phase-stable imaging in both a low range and a high range, 3.7 and 11.5 mm in air, respectively, without the need to build an external Mach-Zehnder interferometer (MZI) k-clock. As a result, the 3-D morphology of the KPro position with respect to the surrounding tissue could be investigated in vivo both at high resolution and with a large depth range to achieve noninvasive and precise evaluation of the success of the surgical procedure. PMID:23912759

  8. The implementation of depth measurement and related algorithms based on binocular vision in embedded AM5728

    NASA Astrophysics Data System (ADS)

    Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan

    2018-01-01

    Depth measurement is the most basic measurement in machine vision applications such as automatic driving, unmanned aerial vehicles (UAVs), and robotics, and it has a wide range of uses. With the development of image processing technology and the improvement of hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms of dual-camera calibration, image matching, and depth calculation have been studied and implemented on the hardware platform, and the hardware design and the soundness of the system's algorithms are tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can be up to 25 fps. The experimental results show that the optimal measurement range of the system is from 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11, and DMCU hardware platforms, the embedded AM5728 hardware is well suited to meeting real-time depth measurement requirements while ensuring image resolution.
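
    The depth calculation step rests on the standard rectified-stereo relation Z = f·B/d. A minimal sketch (the focal length, baseline, and disparity below are illustrative values, not the AM5728 system's calibration):

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Rectified-stereo depth: Z = f * B / d (f in pixels, B in metres)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Example: f = 800 px, B = 6 cm, d = 48 px  ->  Z = 1.0 m,
# inside the system's reported 0.5-1.5 m optimal range.
z = depth_from_disparity(800.0, 0.06, 48.0)
```

    The relation also explains the reported error behaviour: depth uncertainty grows quadratically with Z for a fixed disparity error, so a bounded relative error is only achievable over a limited range.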

  9. Development of Extended-Depth Swept Source Optical Coherence Tomography for Applications in Ophthalmic Imaging of the Anterior and Posterior Eye

    NASA Astrophysics Data System (ADS)

    Dhalla, Al-Hafeez Zahir

    Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides micron-scale resolution of tissue micro-structure over depth ranges of several millimeters. This imaging technique has had a profound effect on the field of ophthalmology, wherein it has become the standard of care for the diagnosis of many retinal pathologies. Applications of OCT in the anterior eye, as well as for imaging of coronary arteries and the gastro-intestinal tract, have also shown promise, but have not yet achieved widespread clinical use. The usable imaging depth of OCT systems is most often limited by one of three factors: optical attenuation, inherent imaging range, or depth-of-focus. The first of these, optical attenuation, stems from the limitation that OCT only detects singly-scattered light. Thus, beyond a certain penetration depth into turbid media, essentially all of the incident light will have been multiply scattered and can no longer be used for OCT imaging. For many applications (especially retinal imaging), optical attenuation is the most restrictive of the three imaging depth limitations. However, for some applications, especially anterior segment, cardiovascular (catheter-based), and GI (endoscopic) imaging, the usable imaging depth is often not limited by optical attenuation, but rather by the inherent imaging depth of the OCT system. This inherent imaging depth, which is specific to Fourier-domain OCT, arises due to two factors: sensitivity fall-off and the complex conjugate ambiguity. Finally, due to the trade-off between lateral resolution and axial depth-of-focus inherent in diffractive optical systems, additional depth limitations sometimes arise in either high-lateral-resolution or extended-depth OCT imaging systems. The depth-of-focus limitation is most apparent in applications such as adaptive optics (AO-) OCT imaging of the retina and extended-depth imaging of the ocular anterior segment.
In this dissertation, techniques for extending the imaging range of OCT systems are developed. These techniques include the use of a high spectral purity swept source laser in a full-field OCT system, as well as the use of a peculiar phenomenon known as coherence revival to resolve the complex conjugate ambiguity in swept source OCT. In addition, a technique for extending the depth of focus of OCT systems by using a polarization-encoded, dual-focus sample arm is demonstrated. Along the way, other related advances are also presented, including the development of techniques to reduce crosstalk and speckle artifacts in full-field OCT, and the use of fast optical switches to increase the imaging speed of certain low-duty cycle swept source OCT systems. Finally, the clinical utility of these techniques is demonstrated by combining them to demonstrate high-speed, high resolution, extended-depth imaging of both the anterior and posterior eye simultaneously and in vivo.

  10. Long-range and depth-selective imaging of macroscopic targets using low-coherence and wide-field interferometry (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Choi, Wonshik

    2016-03-01

    With the advancement of 3D display technology, 3D imaging of macroscopic objects has drawn much attention, as it provides the content to display. The most widely used imaging methods include depth cameras, which measure time of flight for depth discrimination, and various structured illumination techniques. However, these existing methods have poor depth resolution, which makes imaging complicated structures difficult. To resolve this issue, we propose an imaging system based upon low-coherence interferometry and off-axis digital holographic imaging. By using a light source with a coherence length of 200 μm, we achieved a depth resolution of 100 μm. In order to map macroscopic objects with this high axial resolution, we installed a pair of prisms in the reference beam path for long-range scanning of the optical path length. Specifically, one prism was fixed in position, and the other was mounted on a translation stage and translated parallel to the first. Due to the multiple internal reflections between the two prisms, the overall path length was elongated by a factor of 50. In this way, we could cover a depth range of more than 1 meter. In addition, we employed multiple speckle illuminations and incoherent averaging of the acquired holographic images to reduce specular reflections from the target surface. Using this newly developed system, we imaged targets with multiple layers and demonstrated imaging of targets hidden behind scattering layers. The method was also applied to imaging targets located around a corner.
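
    The prism-pair delay line invites a quick sanity check: with the stated elongation factor of 50, covering a 1 m depth range requires only 2 cm of physical stage travel, and the 100 μm depth resolution divides that range into 10,000 resolvable slices. A back-of-envelope sketch (the geometry factor is taken from the abstract; everything else is arithmetic):

```python
# Path-length multiplication from multiple internal reflections (abstract value).
elongation = 50

depth_range_m = 1.0                            # desired optical path scan
stage_travel_m = depth_range_m / elongation    # physical translation: 0.02 m

# Number of resolvable depth slices at the stated 100 um resolution.
resolution_m = 100e-6
n_slices = depth_range_m / resolution_m        # 10,000 slices
```

    The small stage travel is the point of the design: a centimetre-scale translation stage can scan a metre-scale scene without a metre-long reference arm.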

  11. Adaptive DOF for plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Oberdörster, Alexander; Lensch, Hendrik P. A.

    2013-03-01

    Plenoptic cameras promise to provide arbitrary refocusing through a scene after capture. In practice, however, the refocusing range is limited by the depth of field (DOF) of the plenoptic camera. For the focused plenoptic camera, this range is given by the range of object distances for which the microimages are in focus. We propose a technique for recording light fields with an adaptive depth of focus. Between multiple exposures, or multiple recordings of the light field, the distance between the microlens array (MLA) and the image sensor is adjusted. The depth and quality of focus are chosen by changing the number of exposures and the spacing of the MLA movements. In contrast to traditional cameras, extending the DOF does not necessarily lead to an all-in-focus image. Instead, the refocus range is extended. There is full creative control over the focus depth; images with shallow or selective focus can be generated.

  12. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology, and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond the popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much larger than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications such as autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.

  13. Experimental study on the sensitive depth of backwards detected light in turbid media.

    PubMed

    Zhang, Yunyao; Huang, Liqing; Zhang, Ning; Tian, Heng; Zhu, Jingping

    2018-05-28

    In the recent past, optical spectroscopy and imaging methods for biomedical diagnosis and target enhancement have been widely researched. The challenge in improving the performance of these methods is to know the sensitive depth of the backwards-detected light well. Former research mainly employed Monte Carlo simulations to statistically describe the light-sensitive depth. An experimental method for investigating the sensitive depth was developed and is presented here. An absorption plate was employed to remove all the light that may have travelled deeper than the plate, leaving only the light that cannot reach the plate. By measuring the received backwards light intensity and the distance between the probe and the plate, the light intensity distribution along the depth dimension can be obtained. The depth with the maximum light intensity was recorded as the sensitive depth. The experimental results showed that the maximum light intensity was nearly the same over a short depth range. It can be deduced that the sensitive depth is a range, rather than a single depth. This sensitive depth range, as well as its central depth, increased consistently with increasing source-detector distance. Relationships between the sensitive depth and optical properties were also investigated. The results showed that the reduced scattering coefficient affects the central sensitive depth and the extent of the sensitive depth range more than the absorption coefficient, so the two cannot simply be summed into a single reduced extinction coefficient to describe the sensitive depth. This study provides an efficient method for the investigation of sensitive depth. It may facilitate the development of spectroscopy and imaging techniques for biomedical diagnosis and underwater imaging.

  14. Use of laser range finders and range image analysis in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    We propose to study the effect of filtering processes on range images and to evaluate the performance of two different laser range mappers. Median filtering was utilized to remove noise from the range images. First- and second-order derivatives were then utilized to locate the similarities and dissimilarities between the processed and the original images. Range depth information is converted into spatial coordinates, and a set of coefficients describing 3-D objects is generated using the algorithm developed in the second phase of this research. Range images of spheres and cylinders are used for experimental purposes. An algorithm was developed to compare the performance of two different laser range mappers based upon the range depth information of surfaces generated by each of the mappers. Furthermore, an approach based on 2-D analytic geometry is also proposed, which serves as a basis for the recognition of regular 3-D geometric objects.

  15. Study on super-resolution three-dimensional range-gated imaging technology

    NASA Astrophysics Data System (ADS)

    Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao

    2018-04-01

    Range-gated three-dimensional imaging technology has been a research hotspot in recent years because of its advantages of high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on a study of the principle of the intensity-correlation method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of imaging depth and distance to realize different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. This paper analyzes the calculation of 3D point clouds based on the triangle method; a 15 m depth slice of the target 3D point cloud is obtained using two frames of images, with a distance precision better than 0.5 m. The influence of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy is analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.

  16. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with a much larger perceived depth range when viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. 
The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography. We anticipate the results will be of particular interest to 3D filmmaking and real time computer games.
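
    The "Mapping Algorithm" in (2) can be illustrated with a linear scene-to-display depth transfer. The abstract does not give the paper's exact transfer function, so the following is a generic sketch, with the dynamic variant simply recomputing the scene range per shot:

```python
def map_depth(z_scene: float, z_near: float, z_far: float,
              d_near: float, d_far: float) -> float:
    """Linearly map a scene depth in [z_near, z_far] onto a display
    comfort range [d_near, d_far] (a sketch, not the paper's method)."""
    t = (z_scene - z_near) / (z_far - z_near)
    return d_near + t * (d_far - d_near)

# Dynamic variant: recompute z_near/z_far from the current shot's depth
# extrema instead of fixing them once for the whole sequence.
shot_depths = [2.0, 5.0, 9.0]          # metres, illustrative
z_near, z_far = min(shot_depths), max(shot_depths)
mapped = [map_depth(z, z_near, z_far, -0.05, 0.05) for z in shot_depths]
# mapped spans the full display depth budget, -0.05 to 0.05
```

    Recomputing the mapping per shot uses the full display budget for every shot, which is one plausible reading of why the dynamic method outperformed the fixed mapping.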

  17. A range/depth modulation transfer function (RMTF) framework for characterizing 3D imaging LADAR performance

    NASA Astrophysics Data System (ADS)

    Staple, Bevan; Earhart, R. P.; Slaymaker, Philip A.; Drouillard, Thomas F., II; Mahony, Thomas

    2005-05-01

    3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in 3-dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined, is improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system's ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework utilizes only those terminologies proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new, Range-Based, Slanted Knife-Edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enable improved system standardization.
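
    The Range-Based Slanted Knife-Edge idea transplants the classical edge-based MTF measurement onto the depth axis. As a hedged illustration of that classical pipeline (not Ball's actual processing), one can differentiate a blurred range step into a line-spread function and take its normalised FFT magnitude; the blur width here is arbitrary:

```python
import numpy as np

# A measured range profile across a depth step (the "range knife edge"),
# blurred by the system's range response. Units and blur are illustrative.
x = np.linspace(-5.0, 5.0, 512)
esf = 0.5 * (1.0 + np.tanh(x / 0.4))     # edge-spread function in range

lsf = np.gradient(esf, x)                # line-spread function (its derivative)
rmtf = np.abs(np.fft.rfft(lsf))          # modulation vs. spatial frequency
rmtf /= rmtf[0]                          # normalise to unity at DC
# rmtf now plays the role of the RMTF: modulation in range/depth as a
# function of the target's spatial frequency.
```

    In practice the edge is slanted across pixels so the ESF can be super-resolved before differentiation, exactly as in the 2D intensity-MTF version of the test.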

  18. A coaxially focused multi-mode beam for optical coherence tomography imaging with extended depth of focus (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yin, Biwei; Liang, Chia-Pin; Vuong, Barry; Tearney, Guillermo J.

    2017-02-01

    Conventional OCT images, obtained using a focused Gaussian beam, have a lateral resolution of approximately 30 μm and a depth of focus (DOF) of 2-3 mm, defined as the confocal parameter (twice the Gaussian beam Rayleigh range). Improvement of lateral resolution without sacrificing imaging range requires techniques that can extend the DOF. Previously, we described a self-imaging wavefront division optical system that provided an estimated one order of magnitude DOF extension. In this study, we further investigate the properties of the coaxially focused multi-mode (CAFM) beam created by this self-imaging wavefront division optical system and demonstrate its feasibility for real-time biological tissue imaging. Gaussian beam and CAFM beam fiber-optic probes with similar numerical apertures (objective NA ≈ 0.5) were fabricated, providing lateral resolutions of approximately 2 μm. Rigorous lateral resolution characterization over depth was performed for both probes. The CAFM beam probe was found to provide a DOF approximately one order of magnitude greater than that of the Gaussian beam probe. By incorporating the CAFM beam fiber-optic probe into a μOCT system with 1.5 μm axial resolution, we were able to acquire cross-sectional images of swine small intestine ex vivo, enabling the visualization of subcellular structures and providing high-quality OCT images over more than a 300 μm depth range.

  19. Refocusing-range and image-quality enhanced optical reconstruction of 3-D objects from integral images using a principal periodic δ-function array

    NASA Astrophysics Data System (ADS)

    Ai, Lingyu; Kim, Eun-Soo

    2018-03-01

    We propose a method for refocusing-range- and image-quality-enhanced optical reconstruction of three-dimensional (3-D) objects from integral images using only a 3 × 3 periodic δ-function array (PDFA), which is called a principal PDFA (P-PDFA). By directly convolving the elemental image array (EIA) captured from 3-D objects with P-PDFAs whose spatial periods correspond to each object's depth, a set of spatially-filtered EIAs (SF-EIAs) is extracted, from which 3-D objects can be reconstructed refocused at their real depths. Since convolution operations are performed directly on each of the minimum 3 × 3 EIs of the picked-up EIA, the capturing and refocused-depth ranges of 3-D objects can be greatly enhanced, and 3-D objects with much improved image quality can be reconstructed without any preprocessing operations. Through ray-optical analysis and optical experiments with actual 3-D objects, the feasibility of the proposed method has been confirmed.

  20. Long-wavelength optical coherence tomography at 1.7 µm for enhanced imaging depth

    PubMed Central

    Sharma, Utkarsh; Chang, Ernest W.; Yun, Seok H.

    2009-01-01

    Multiple scattering in a sample presents a significant limitation to achieve meaningful structural information at deeper penetration depths in optical coherence tomography (OCT). Previous studies suggest that the spectral region around 1.7 µm may exhibit reduced scattering coefficients in biological tissues compared to the widely used wavelengths around 1.3 µm. To investigate this long-wavelength region, we developed a wavelength-swept laser at 1.7 µm wavelength and conducted OCT or optical frequency domain imaging (OFDI) for the first time in this spectral range. The constructed laser is capable of providing a wide tuning range from 1.59 to 1.75 µm over 160 nm. When the laser was operated with a reduced tuning range over 95 nm at a repetition rate of 10.9 kHz and an average output power of 12.3 mW, the OFDI imaging system exhibited a sensitivity of about 100 dB and axial and lateral resolution of 24 µm and 14 µm, respectively. We imaged several phantom and biological samples using 1.3 µm and 1.7 µm OFDI systems and found that the depth-dependent signal decay rate is substantially lower at 1.7 µm wavelength in most, if not all samples. Our results suggest that this imaging window may offer an advantage over shorter wavelengths by increasing the penetration depths as well as enhancing image contrast at deeper penetration depths where otherwise multiple scattered photons dominate over ballistic photons. PMID:19030057

  1. Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates

    NASA Astrophysics Data System (ADS)

    Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.

    2010-04-01

    Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwell times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time towards an object extends beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser, and a cyclic, fast-Fourier-transform-supported analysis algorithm searches for the pattern within the return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented, and numerical simulations investigate the relative merits of the technique.
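    The pattern search can be sketched as a cyclic cross-correlation computed with FFTs; the toy example below (names and parameters are illustrative assumptions) recovers a known delay from an idealised, noise-free return:

```python
import numpy as np

rng = np.random.default_rng(0)

def circular_correlation(reference, received):
    """Cyclic cross-correlation via FFT: an O(N log N) search for the
    shift of the transmitted pattern within the return photon histogram."""
    R = np.fft.fft(reference)
    S = np.fft.fft(received)
    return np.fft.ifft(np.conj(R) * S).real

# Toy example: a pseudo-random bitstream delayed by a known number of bins.
pattern = rng.integers(0, 2, 1024).astype(float)
delay = 137                        # round-trip delay, in clock bins
photons = np.roll(pattern, delay)  # idealised, noise-free return histogram
corr = circular_correlation(pattern, photons)
estimated_delay = int(np.argmax(corr))
```

    With a pattern of N bins at bin period T, the unambiguous range grows from c·T/2 to N·c·T/2, which is how kilometre-scale unambiguous distances become reachable even at GHz clock rates.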

  2. Automatic Focusing for a 675 GHz Imaging Radar with Target Standoff Distances from 14 to 34 Meters

    NASA Technical Reports Server (NTRS)

    Tang, Adrian; Cooper, Ken B.; Dengler, Robert J.; Llombart, Nuria; Siegel, Peter H.

    2013-01-01

    This paper discusses the issue of limited focal depth for high-resolution imaging radar operating over a wide range of standoff distances. We describe a technique for automatically focusing a THz imaging radar system using translational optics combined with range estimation based on a reduced chirp bandwidth setting. The demonstrated focusing algorithm estimates the correct focal depth for desired targets in the field of view at unknown standoffs and in the presence of clutter, providing good imagery at 14 to 30 meters of standoff.

  3. No scanning depth imaging system based on TOF

    NASA Astrophysics Data System (ADS)

    Sun, Rongchun; Piao, Yan; Wang, Yu; Liu, Shuo

    2016-03-01

    To quickly obtain a 3D model of real-world objects, multi-point ranging is very important. However, traditional measurement methods usually follow a point-by-point or line-by-line principle, which is slow and inefficient. In this paper, a non-scanning depth imaging system based on TOF (time of flight) is proposed. The system is composed of a light source circuit, a special infrared image sensor module, an image-data processor and controller, a data cache circuit, a communication circuit, and so on. Following the working principle of TOF measurement, an image sequence was collected by the high-speed CMOS sensor, distance information was obtained by identifying the phase difference, and the amplitude image was also calculated. Experimental results show that the system achieves non-scanning depth imaging with good performance.
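    Continuous-wave TOF sensors of this kind conventionally convert the identified phase difference into distance; a minimal sketch of the standard relation (the 20 MHz modulation frequency in the example is an illustrative assumption, not from the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(delta_phi_rad, f_mod_hz):
    """d = c * delta_phi / (4 * pi * f_mod): the phase shift of the returned
    modulated light maps to distance, ambiguous every c / (2 * f_mod)."""
    return C * delta_phi_rad / (4.0 * math.pi * f_mod_hz)

# Example: at 20 MHz modulation, a phase shift of pi corresponds to half of
# the unambiguous range c / (2 * f_mod) ~= 7.49 m, i.e. about 3.75 m.
d = phase_to_distance(math.pi, 20e6)
```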

  4. Full range line-field parallel swept source imaging utilizing digital refocusing

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-12-01

    We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.
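    A defocus-correcting phase term applied to the Fourier representation of the complex field can be sketched as follows; this uses the generic paraxial (angular-spectrum) defocus phase as a stand-in for the authors' geometrical-optics-derived term, and all names and parameter values are illustrative:

```python
import numpy as np

def digital_refocus(field, wavelength, pixel_pitch, dz):
    """Apply a quadratic defocus-correcting phase in the Fourier domain
    (paraxial approximation) to a complex-valued en-face field, shifting
    the effective focal plane by dz without re-acquiring data."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_pitch)     # spatial frequencies, 1/m
    FX, FY = np.meshgrid(fx, fx)
    phase = np.pi * wavelength * dz * (FX**2 + FY**2)  # paraxial defocus term
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * phase))
```

    Since the correction is a pure phase factor, it conserves signal energy and is exactly invertible, which is what lets a single acquisition be refocused to many depths after the fact.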

  5. Performance comparison between 8 and 14 bit-depth imaging in polarization-sensitive swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragoda, Deepa K.; Matcher, Stephen J.

    2011-03-01

    We compare true 8 and 14 bit-depth imaging for SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3 μm wavelength using two hardware-synchronized high-speed data acquisition (DAQ) boards that read exactly the same imaging data. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of an equine tendon sample show no significant differences between images acquired by the two DAQ boards, suggesting that 8-bit boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when image data sampled with the 14-bit DAQ from human finger skin is artificially bit-reduced during post-processing. In agreement with previously reported results, we observe that the real-world 8-bit image shows more artifacts than an image obtained by numerically truncating the raw 14-bit data to 8 bits, especially in low-intensity areas, due to the higher noise floor and reduced dynamic range of the 8-bit DAQ. One possible disadvantage is thus a reduced imaging dynamic range, which can manifest itself as an increase in image artefacts due to strong Fresnel reflections.
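    The numerical truncation used in the post-processing comparison can be sketched as a simple drop of least-significant bits (a hedged illustration; a real 8-bit acquisition additionally differs through its own noise floor and input range, as the abstract notes):

```python
import numpy as np

def truncate_bits(samples, out_bits=8, in_bits=14):
    """Numerically truncate integer samples to a lower bit depth by zeroing
    the (in_bits - out_bits) least-significant bits, mimicking the
    post-processing bit reduction described above."""
    shift = in_bits - out_bits
    return (samples.astype(np.int64) >> shift) << shift
```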

  6. SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range- Finding Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platt, M; Platt, M; Lamba, M

    2016-06-15

    Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10 image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.

  7. On the relationships between higher and lower bit-depth system measurements

    NASA Astrophysics Data System (ADS)

    Burks, Stephen D.; Haefner, David P.; Doe, Joshua M.

    2018-04-01

    The quality of an imaging system can be assessed through controlled laboratory objective measurements. Currently, all imaging measurements require some form of digitization in order to evaluate a metric. Depending on the device, the number of bits available, relative to a fixed dynamic range, determines the quantization artifacts exhibited. From a measurement standpoint, measurements should be performed at the highest bit-depth available. In this correspondence, we describe the relationship between higher and lower bit-depth measurements and present the limits to which quantization alters the observed measurements. Specifically, we address dynamic range, MTF, SiTF, and noise. Our results provide guidelines for how systems of lower bit-depth should be characterized, together with the corresponding experimental methods.

  8. Tunable semiconductor laser at 1025-1095 nm range for OCT applications with an extended imaging depth

    NASA Astrophysics Data System (ADS)

    Shramenko, Mikhail V.; Chamorovskiy, Alexander; Lyu, Hong-Chou; Lobintsov, Andrei A.; Karnowski, Karol; Yakubovich, Sergei D.; Wojtkowski, Maciej

    2015-03-01

    A tunable semiconductor laser for the 1025-1095 nm spectral range is developed based on an InGaAs semiconductor optical amplifier and a narrow band-pass acousto-optic tunable filter in a fiber ring cavity. Mode-hop-free sweeping with tuning speeds of up to 10^4 nm/s was demonstrated. The instantaneous linewidth is in the range of 0.06-0.15 nm, side-mode suppression is up to 50 dB, and the polarization extinction ratio exceeds 18 dB. Optical power in the output single-mode fiber reaches 20 mW. The laser was used in an OCT system for imaging a contact lens immersed in a 0.5% intralipid solution; the cross-section image provided an imaging depth of more than 5 mm.

  9. MPR-CT Imaging for Stapes Prosthesis: Accuracy and Clinical Significance.

    PubMed

    Fang, Yanqing; Wang, Bing; Galvin, John J; Tao, Duoduo; Deng, Rui; Ou, Xiong; Liu, Yangwenyi; Dai, Peidong; Sha, Yan; Zhang, Tianyu; Chen, Bing

    2016-04-01

    The aims of this article are: 1) to re-evaluate the accuracy of multiple planar reconstruction computed tomography (MPR-CT) imaging of stapes-prosthesis parameters, and 2) to clarify possible relationships between prosthesis intravestibular depth and postoperative hearing outcomes. Seventy patients (46 women and 24 men; 32 right and 38 left sides) with clinical otosclerosis and a mean age of 40 years (range, 19-62 yr) were studied. All patients underwent stapedotomy and were implanted with the same type of titanium piston prosthesis by the same surgeon. Postoperative MPR-CTs were obtained at patients' follow-up visits. The length and intravestibular depth of the stapes prosthesis (including absolute and relative depth) were calculated from the MPR-CT imaging. Relationships between the intravestibular depth of the prosthesis and hearing outcomes (pre- and postoperative audiograms) were analyzed using Spearman correlation analyses. The length of the prosthesis was overestimated by 1.8% (0.1 mm) by the MPR-CT imaging. Axial and coronal measurements were significantly correlated (p < 0.05). There was great intersubject variability, but hearing outcomes differed insignificantly regardless of intravestibular depth within the safe range. No relationships were found between the intravestibular depth of the stapes prosthesis, as measured with MPR-CT, and postoperative hearing results. MPR-CT can provide an accurate estimation of stapes prosthesis parameters; however, the prosthesis intravestibular depth did not seem to affect postoperative hearing outcomes.

  10. Removing the depth-degeneracy in optical frequency domain imaging with frequency shifting

    PubMed Central

    Yun, S. H.; Tearney, G. J.; de Boer, J. F.; Bouma, B. E.

    2009-01-01

    A novel technique using an acousto-optic frequency shifter in optical frequency domain imaging (OFDI) is presented. The frequency shift eliminates the ambiguity between positive and negative differential delays, effectively doubling the interferometric ranging depth while avoiding image cross-talk. A signal processing algorithm is demonstrated to accommodate nonlinearity in the tuning slope of the wavelength-swept OFDI laser source. PMID:19484034

  11. Bayesian depth estimation from monocular natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.

  12. In-vivo gingival sulcus imaging using full-range, complex-conjugate-free, endoscopic spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.

    2012-01-01

    Frequent monitoring of the gingival sulcus can provide valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a high-speed, high-resolution 3D imaging modality, is able to provide information on pocket depth, gum contour, gum texture, and gum recession simultaneously. A handheld forward-viewing miniature resonant fiber-scanning probe was developed for in-vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating the data processing using a graphics processing unit. Preliminary results showed real-time in-vivo imaging at 33 fps with an imaging range of 2 mm (lateral) by 3 mm (depth). The gap between the tooth and gum was clearly visualized. Further quantitative analysis of the gingival sulcus will be performed on the acquired images.

  13. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
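    The lookup-table-driven shift-and-sum reconstruction with variance-based focus classification can be sketched on the CPU with NumPy (the GPU version parallelizes the same per-pixel operations); the function names and the roll-based shifting below are our own illustration, not the authors' implementation:

```python
import numpy as np

def reconstruct_depth(elemental, shifts):
    """Shift-and-average the elemental images using precomputed (dx, dy)
    offsets for one reconstruction depth. The per-pixel variance across the
    shifted samples separates focus points (low variance: all elemental
    images agree) from off-focus points (high variance)."""
    stack = np.stack([np.roll(np.roll(e, dy, axis=0), dx, axis=1)
                      for e, (dx, dy) in zip(elemental, shifts)])
    return stack.mean(axis=0), stack.var(axis=0)
```

    A focus/off-focus mask then reduces to thresholding the variance map, which is the step the paper also offloads to the GPU.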

  14. Self-interference fluorescence microscopy with three-phase detection for depth-resolved confocal epi-fluorescence imaging.

    PubMed

    Braaf, Boy; de Boer, Johannes F

    2017-03-20

    Three-dimensional confocal fluorescence imaging of in vivo tissues is challenging due to sample motion and limited imaging speeds. In this paper a novel method is therefore presented for scanning confocal epi-fluorescence microscopy with instantaneous depth-sensing based on self-interference fluorescence microscopy (SIFM). A tabletop epi-fluorescence SIFM setup was constructed with an annular phase plate in the emission path to create a spectral self-interference signal that is phase-dependent on the axial position of a fluorescent sample. A Mach-Zehnder interferometer based on a 3 × 3 fiber-coupler was developed for a sensitive phase analysis of the SIFM signal with three photon-counter detectors instead of a spectrometer. The Mach-Zehnder interferometer created three intensity signals that alternately oscillated as a function of the SIFM spectral phase and therefore encoded directly for the axial sample position. Controlled axial translation of fluorescent microsphere layers showed a linear dependence of the SIFM spectral phase with sample depth over axial image ranges of 500 µm and 80 µm (3.9 × Rayleigh range) for 4 × and 10 × microscope objectives respectively. In addition, SIFM was in good agreement with optical coherence tomography depth measurements on a sample with indocyanine green dye filled capillaries placed at multiple depths. High-resolution SIFM imaging applications are demonstrated for fluorescence angiography on a dye-filled capillary blood vessel phantom and for autofluorescence imaging on an ex vivo fly eye.

  15. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, aiming to produce S3D content from color-plus-depth signals, we propose a general framework for depth mapping that optimizes visual comfort on S3D displays. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on an adjusted zero-disparity plane, and then present a two-stage global and local depth optimization to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
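    The global stage of such a remapping reduces to a linear rescaling of the depth range; a minimal sketch (the local optimization stage and the zero-disparity-plane adjustment are not shown):

```python
def remap_depth(depth, src_range, comfort_range):
    """Linearly remap a depth value from the source range into a target
    comfort range, the global stage of the depth remapping described above."""
    s0, s1 = src_range
    c0, c1 = comfort_range
    return c0 + (depth - s0) * (c1 - c0) / (s1 - s0)
```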

  16. High dynamic range coding imaging system

    NASA Astrophysics Data System (ADS)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design based on the coded-aperture technique, which yields HDR images with an extended depth of field. We adopt a sparse-coding algorithm to design the coded patterns, then use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images is reconstructed, and existing algorithms are used to fuse those LDR images into a displayable HDR image. We built an optical simulation model and obtained simulation images to verify the novel system.
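    The multi-exposure fusion step can be sketched with a standard saturation-weighted radiance merge (a generic illustration under the assumption of linear sensor response, not the specific fusion algorithm used by the authors):

```python
import numpy as np

def merge_exposures(ldr_images, exposure_times):
    """Combine linear 8-bit LDR captures into an HDR radiance estimate by
    dividing each image by its exposure time and averaging with a weight
    that favours well-exposed mid-tones over shadows and highlights."""
    acc = np.zeros_like(ldr_images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(ldr_images, exposure_times):
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0  # hat weight, 1 at mid-gray
        acc += w * img / t
        wsum += w
    return acc / np.maximum(wsum, 1e-12)
```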

  17. Design of efficient, broadband single-element (20-80 MHz) ultrasonic transducers for medical imaging applications.

    PubMed

    Cannata, Jonathan M; Ritter, Timothy A; Chen, Wo-Hsing; Silverman, Ronald H; Shung, K Kirk

    2003-11-01

    This paper discusses the design, fabrication, and testing of sensitive broadband lithium niobate (LiNbO3) single-element ultrasonic transducers in the 20-80 MHz frequency range. Transducers of varying dimensions were built for an f# range of 2.0-3.1. The desired focal depths were achieved by either casting an acoustic lens on the transducer face or press-focusing the piezoelectric into a spherical curvature. For designs that required electrical impedance matching, a low impedance transmission line coaxial cable was used. All transducers were tested in a pulse-echo arrangement, whereby the center frequency, bandwidth, insertion loss, and focal depth were measured. Several transducers were fabricated with center frequencies in the 20-80 MHz range with the measured -6 dB bandwidths and two-way insertion loss values ranging from 57 to 74% and 9.6 to 21.3 dB, respectively. Both transducer focusing techniques proved successful in producing highly sensitive, high-frequency, single-element, ultrasonic-imaging transducers. In vivo and in vitro ultrasonic backscatter microscope (UBM) images of human eyes were obtained with the 50 MHz transducers. The high sensitivity of these devices could possibly allow for an increase in depth of penetration, higher image signal-to-noise ratio (SNR), and improved image contrast at high frequencies when compared to previously reported results.

  18. National Defense Center of Excellence for Industrial Metrology and 3D Imaging

    DTIC Science & Technology

    2012-10-18

    validation rather than mundane data-reduction/analysis tasks. Indeed, the new financial and technical resources being brought to bear by integrating CT...of extremely fast axial scanners. By replacing the single-spot detector by a detector array, a three-dimensional image is acquired by one depth scan...the number of acquired voxels per complete two-dimensional or three-dimensional image, the axial and lateral resolution, the depth range, the

  19. High-frequency Pulse-compression Ultrasound Imaging with an Annular Array

    NASA Astrophysics Data System (ADS)

    Mamou, J.; Ketterling, J. A.; Silverman, R. H.

    High-frequency ultrasound (HFU) allows fine-resolution imaging at the expense of limited depth-of-field (DOF) and shallow acoustic penetration depth. Coded-excitation imaging permits a significant increase in the signal-to-noise ratio (SNR) and therefore, the acoustic penetration depth. A 17-MHz, five-element annular array with a focal length of 31 mm and a total aperture of 10 mm was fabricated using a 25-μm thick piezopolymer membrane. An optimized 8-μs linear chirp spanning 6.5-32 MHz was used to excite the transducer. After data acquisition, the received signals were linearly filtered by a compression filter and synthetically focused. To compare the chirp-array imaging method with conventional impulse imaging in terms of resolution, a 25-μm wire was scanned and the -6-dB axial and lateral resolutions were computed at depths ranging from 20.5 to 40.5 mm. A tissue-mimicking phantom containing 10-μm glass beads was scanned, and backscattered signals were analyzed to evaluate SNR and penetration depth. Finally, ex-vivo ophthalmic images were formed and chirp-coded images showed features that were not visible in conventional impulse images.
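    Chirp coding and matched-filter compression can be sketched as follows; the sample rate and sample count are illustrative assumptions, while the 6.5-32 MHz span matches the chirp described above:

```python
import numpy as np

def linear_chirp(f0, f1, fs, n):
    """Linear FM chirp sweeping f0 to f1 Hz over n samples at rate fs."""
    t = np.arange(n) / fs
    duration = n / fs
    return np.cos(2 * np.pi * (f0 + 0.5 * (f1 - f0) / duration * t) * t)

def compress(echo, chirp):
    """Matched-filter (correlation) compression: concentrates the energy of
    the long coded transmit into a short pulse, raising SNR."""
    return np.correlate(echo, chirp, mode='same')

# 6.5-32 MHz chirp, sampled at an assumed 128 MHz for 1024 samples (8 us).
c = linear_chirp(6.5e6, 32e6, 128e6, 1024)
compressed = compress(c, c)  # echo identical to transmit: peak at zero lag
```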

  20. Handheld, rapidly switchable, anterior/posterior segment swept source optical coherence tomography probe

    PubMed Central

    Nankivil, Derek; Waterman, Gar; LaRocca, Francesco; Keller, Brenton; Kuo, Anthony N.; Izatt, Joseph A.

    2015-01-01

    We describe the first handheld, swept source optical coherence tomography (SSOCT) system capable of imaging both the anterior and posterior segments of the eye in rapid succession. A single 2D microelectromechanical systems (MEMS) scanner was utilized for both imaging modes, and the optical paths for each imaging mode were optimized for their respective application using a combination of commercial and custom optics. The system has a working distance of 26.1 mm and a measured axial resolution of 8 μm (in air). In posterior segment mode, the design has a lateral resolution of 9 μm, 7.4 mm imaging depth range (in air), 4.9 mm 6dB fall-off range (in air), and peak sensitivity of 103 dB over a 22° field of view (FOV). In anterior segment mode, the design has a lateral resolution of 24 μm, imaging depth range of 7.4 mm (in air), 6dB fall-off range of 4.5 mm (in air), depth-of-focus of 3.6 mm, and a peak sensitivity of 99 dB over a 17.5 mm FOV. In addition, the probe includes a wide-field iris imaging system to simplify alignment. A fold mirror assembly actuated by a bi-stable rotary solenoid was used to switch between anterior and posterior segment imaging modes, and a miniature motorized translation stage was used to adjust the objective lens position to correct for patient refraction between −12.6 and + 9.9 D. The entire probe weighs less than 630 g with a form factor of 20.3 x 9.5 x 8.8 cm. Healthy volunteers were imaged to illustrate imaging performance. PMID:26601014

  1. Design of high-performance adaptive objective lens with large optical depth scanning range for ultrabroad near infrared microscopic imaging

    PubMed Central

    Lan, Gongpu; Mauger, Thomas F.; Li, Guoqiang

    2015-01-01

    We report on the theory and design of an adaptive objective lens for ultra-broadband near infrared imaging with a large dynamic optical depth scanning range, using an embedded tunable lens; it can find wide application in deep-tissue biomedical imaging systems, such as confocal microscopy, optical coherence tomography (OCT), and two-photon microscopy, both in vivo and ex vivo. The design is based on, but not limited to, a home-made prototype liquid-filled membrane lens with a clear aperture of 8 mm and a thickness of 2.55 mm to 3.18 mm. It is beneficial to have an adaptive objective lens whose depth scanning range exceeds its focal-length zoom range, since this keeps the magnification of the whole system, numerical aperture (NA), field of view (FOV), and resolution more consistent. To achieve this goal, a systematic theory is presented, for the first time to our knowledge, by inserting the varifocal lens between a front and a back solid lens group. The designed objective has a compact size (10 mm diameter and 15 mm length), an ultrabroad working bandwidth (760 nm - 920 nm), a large depth scanning range (7.36 mm in air) that is 1.533 times the focal-length zoom range (4.8 mm in air), and a FOV of around 1 mm × 1 mm. Diffraction-limited performance is achieved within this ultrabroad bandwidth through the full scanning depth (the resolution is 2.22 μm - 2.81 μm, calculated at a wavelength of 800 nm with an NA of 0.214 - 0.171). The chromatic focal shift is within the depth of focus (field). The chromatic difference in distortion is nearly zero and the maximum distortion is less than 0.05%. PMID:26417508

  2. Volumetric segmentation of range images for printed circuit board inspection

    NASA Astrophysics Data System (ADS)

    Van Dop, Erik R.; Regtien, Paul P. L.

    1996-10-01

    Conventional computer vision approaches to object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence, these images contain information only about projections of a 3D scene. The subsequent image processing is then difficult, because 3D object coordinates are represented by mere image coordinates. Only complicated low-level vision modules like depth-from-stereo or depth-from-shading can recover some of the surface geometry of the scene. Recent advances in fast range imaging have, however, paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with 5 parameters, which provide the main features for object recognition. In addition, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false, or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful for recognizing and extracting valuable or toxic electronic components from printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images, with errors constructed according to a verified noise model, illustrate the capabilities of this approach.

  3. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper, a fused optical system for obstacle detection and ground estimation in real-time mobile systems is proposed, combining depth information and color images gathered from the Microsoft Kinect sensor with 3D laser range scanner data. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented, showing that the fusion of information gathered from different sources increases the effectiveness of obstacle detection in different scenarios and can be used successfully for road surface mapping.
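    The final classification step can be pictured as a height test against an estimated ground plane. The sketch below, including the flat-ground assumption and the tolerance value, is our illustration of that step, not the authors' algorithm.

```python
def classify_points(points, ground_z=0.0, tolerance=0.05):
    """Label fused 3D points (x, y, z in metres): 'road' if within
    tolerance of the estimated ground height, otherwise 'obstacle'."""
    return ['road' if abs(z - ground_z) <= tolerance else 'obstacle'
            for (_, _, z) in points]

labels = classify_points([(1.0, 0.0, 0.01), (2.0, 0.5, 0.40)])
# -> ['road', 'obstacle']
```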

  4. Image registration reveals central lens thickness minimally increases during accommodation

    PubMed Central

    Schachar, Ronald A; Mani, Majid; Schachar, Ira H

    2017-01-01

    Purpose To evaluate anterior chamber depth, central crystalline lens thickness and lens curvature during accommodation. Setting California Retina Associates, El Centro, CA, USA. Design Healthy volunteer, prospective, clinical research swept-source optical coherence biometric image registration study of accommodation. Methods Ten subjects (4 females and 6 males) with an average age of 22.5 years (range: 20–26 years) participated in the study. A 45° beam splitter attached to a Zeiss IOLMaster 700 (Carl Zeiss Meditec Inc., Jena, Germany) biometer enabled simultaneous imaging of the cornea, anterior chamber, entire central crystalline lens and fovea in the dilated right eyes of subjects before, and during, focus on a target 11 cm from the cornea. Images with superimposable foveal images, obtained before and during accommodation, that met all of the predetermined alignment criteria were selected for comparison. This registration requirement assured that changes in anterior chamber depth and central lens thickness could be accurately and reliably measured. The lens radii of curvature were measured with a pixel stick circle. Results Images from only 3 of 10 subjects met the predetermined criteria for registration. Mean anterior chamber depth decreased by −67 μm (range: −40 to −110 μm), and mean central lens thickness increased by 117 μm (range: 100–130 μm). During 7.8 diopters of accommodation (range: 6.6–9.7 diopters), the lens surfaces steepened, the anterior more than the posterior, while the lens itself did not move or shift position, as judged from the stationary lens nucleus. Conclusion Image registration, with stable invariant references for image correspondence, reveals that during accommodation a large increase in lens surface curvature is associated with only a small increase in central lens thickness and no change in lens position. PMID:28979092
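    For orientation on the numbers: the accommodative demand posed by the 11 cm target follows directly from the reciprocal-metre definition of the diopter. A one-line check of ours:

```python
def accommodative_demand_D(target_distance_m):
    """Accommodative demand in diopters = 1 / target distance in metres."""
    return 1.0 / target_distance_m

demand = accommodative_demand_D(0.11)  # ~9.1 D demand for a target at 11 cm
```

The measured mean response of 7.8 D (range 6.6–9.7 D) sits just below this ~9.1 D demand, as expected from accommodative lag.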

  5. Laser biostimulation therapy planning supported by imaging

    NASA Astrophysics Data System (ADS)

    Mester, Adam R.

    2018-04-01

    Ultrasonography and MR imaging can help to identify the area and depth of different lesions, such as injury, overuse, inflammation, and degenerative diseases. The appropriate power density, sufficient dose, and direction of the laser treatment can then be optimally estimated. If the required minimum photon density of 5 mW and the required optimal energy dose of 2-4 Joule/cm2 do not reach the depth of the target volume, additional techniques can help: slight compression of the soft tissues can decrease the tissue thickness, or multiple laser diodes can be used. In the case of multiple diode clusters, light scattering results in deeper penetration. Another method to increase the penetration depth is a secondary pulsation (in the kHz range) of the laser light. (A so-called continuous-wave laser itself has an inherent THz pulsation due to temporal coherence.) A third way to deliver higher light intensity to the target volume is the multi-gate technique: the same joint can be reached from different angles based on imaging findings. Recent developments in ultrasonography, elastosonography and tissue harmonic imaging with contrast material, offer optimal therapy planning. While MRI is too expensive a modality for laser planning alone, its images can be optimally used if a diagnostic MRI has already been done. Usual DICOM images offer "postprocessing" measurements in the mm range.
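    The dose arithmetic implied by the abstract can be made explicit: for a given surface energy dose and laser output power, the exposure time per spot is dose × area / power. The 1 cm² spot area below is an assumed illustrative value of ours.

```python
def exposure_time_s(dose_J_per_cm2, power_mW, spot_area_cm2=1.0):
    """Seconds needed to deliver the given energy dose over the spot."""
    return dose_J_per_cm2 * spot_area_cm2 / (power_mW / 1000.0)

# Delivering the 2-4 J/cm^2 dose with a 5 mW diode over a 1 cm^2 spot:
t_low  = exposure_time_s(2, 5)  # 400 s
t_high = exposure_time_s(4, 5)  # 800 s
```

These long exposure times at the surface are why the abstract's penetration-boosting techniques matter: losses with depth multiply an already substantial treatment duration.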

  6. Super-resolution depth information from a short-wave infrared laser gated-viewing system by using correlated double sampling

    NASA Astrophysics Data System (ADS)

    Göhler, Benjamin; Lutzmann, Peter

    2017-10-01

    Primarily, a laser gated-viewing (GV) system provides range-gated 2D images without any range resolution within the range gate. By combining two GV images with slightly different gate positions, 3D information within a part of the range gate can be obtained. The depth resolution is higher (super-resolution) than the minimal gate shift step size in a tomographic sequence of the scene. For a state-of-the-art system with a typical frame rate of 20 Hz, the time difference between the two required GV images is 50 ms which may be too long in a dynamic scenario with moving objects. Therefore, we have applied this approach to the reset and signal level images of a new short-wave infrared (SWIR) GV camera whose read-out integrated circuit supports correlated double sampling (CDS) actually intended for the reduction of kTC noise (reset noise). These images are extracted from only one single laser pulse with a marginal time difference in between. The SWIR GV camera consists of 640 x 512 avalanche photodiodes based on mercury cadmium telluride with a pixel pitch of 15 μm. A Q-switched, flash lamp pumped solid-state laser with 1.57 μm wavelength (OPO), 52 mJ pulse energy after beam shaping, 7 ns pulse length and 20 Hz pulse repetition frequency is used for flash illumination. In this paper, the experimental set-up is described and the operating principle of CDS is explained. The method of deriving super-resolution depth information from a GV system by using CDS is introduced and optimized. Further, the range accuracy is estimated from measured image data.
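    The depth recovery from two overlapping gates can be sketched with a generic linear gate-ramp model: within the overlap region, the ratio of the two returns varies monotonically with depth. The model and constants below are our illustration, not the authors' calibration.

```python
def depth_from_gate_pair(i1, i2, gate_start_m, gate_shift_m):
    """Sub-gate depth from two gated returns, assuming the shifted gate's
    response ramps linearly across the shift region."""
    ratio = i2 / (i1 + i2)          # 0 at the near edge, 1 at the far edge
    return gate_start_m + ratio * gate_shift_m

# Equal returns in both images place the target mid-way through the shift:
z = depth_from_gate_pair(100.0, 100.0, gate_start_m=50.0, gate_shift_m=1.5)
# -> 50.75 m
```

The point of the CDS trick is that i1 and i2 here come from one laser pulse (reset and signal levels), so the ratio is immune to the 50 ms inter-frame motion that plagues the two-pulse variant.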

  7. Off-axis holographic laser speckle contrast imaging of blood vessels in tissues

    NASA Astrophysics Data System (ADS)

    Abdurashitov, Arkady; Bragina, Olga; Sindeeva, Olga; Sergey, Sindeev; Semyachkina-Glushkovskaya, Oxana V.; Tuchin, Valery V.

    2017-09-01

    Laser speckle contrast imaging (LSCI) has become one of the most common tools for functional imaging of tissues. Its incomplete theoretical description and the sophisticated interpretation its measurements require are outweighed by low-cost, simple hardware, speed, consistent results, and repeatability. Beyond the relatively small measured volume, with a probing depth of around 700 μm for illumination in the visible spectral range, the conventional LSCI configuration offers no depth selectivity; furthermore, with a high-NA objective, the actual penetration depth of light in tissue exceeds the depth of field (DOF) of the imaging system. Information about these out-of-focus regions thus persists in the recorded frames but cannot be retrieved with an intensity-based registration method. We propose a simple modification of the LSCI system, based on off-axis holography, that introduces post-acquisition refocusing to overcome both the depth-selectivity and DOF problems, as well as the potential to produce cross-sectional views of the specimen.
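    For orientation, the quantity LSCI computes is the local speckle contrast K = σ/⟨I⟩ of intensity within a small window; moving scatterers blur the speckle and lower K. This is the standard definition, with window handling simplified to a flat list:

```python
def speckle_contrast(window):
    """K = standard deviation / mean of the pixel intensities in a window."""
    n = len(window)
    mean = sum(window) / n
    var = sum((v - mean) ** 2 for v in window) / n
    return (var ** 0.5) / mean

static  = speckle_contrast([4.0, 4.0, 4.0, 4.0])  # 0.0: flow blurred the speckle
dynamic = speckle_contrast([2.0, 0.5, 3.5, 2.0])  # higher K: slower or no flow
```

The holographic modification does not change this statistic; it lets the complex field be refocused before K is evaluated, so the statistic can be formed at depths away from the physical focal plane.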

  8. Deep skin structural and microcirculation imaging with extended-focus OCT

    NASA Astrophysics Data System (ADS)

    Blatter, Cedric; Grajciar, Branislav; Huber, Robert; Leitgeb, Rainer A.

    2012-02-01

    We present an extended-focus OCT system for dermatologic applications that maintains high lateral resolution over a large depth range by using Bessel beam illumination. Moreover, Bessel beams exhibit a self-reconstruction property that is particularly useful for avoiding shadowing from surface structures such as hairs. High lateral resolution and high-speed measurement, thanks to a rapidly tuning swept source, allow not only imaging of small skin structures in depth but also comprehensive visualization of the small capillary network within human skin in vivo. We use this information to study temporal vaso-responses to hypothermia. In contrast to other perfusion imaging methods such as laser Doppler imaging (LDI), OCT gives specific access to vascular responses in different vascular beds in depth.

  9. Acquisition and Post-Processing of Immunohistochemical Images.

    PubMed

    Sedgewick, Jerry

    2017-01-01

    Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived and image manipulation steps are reported, scientists not only follow good laboratory practices but also avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
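    Of the corrections listed, flatfield correction is the most formulaic: divide out the per-pixel gain measured from a flat reference, after subtracting the dark frame from everything. A minimal per-pixel sketch (array handling simplified to plain lists):

```python
def flatfield_correct(raw, dark, flat):
    """corrected = (raw - dark) / (flat - dark), rescaled by the mean gain
    so corrected values stay in the original intensity range."""
    gain = [f - d for f, d in zip(flat, dark)]
    mean_gain = sum(gain) / len(gain)
    return [(r - d) / g * mean_gain for r, d, g in zip(raw, dark, gain)]

# An image identical to the flat reference corrects to a uniform field:
out = flatfield_correct(raw=[110, 90], dark=[10, 10], flat=[110, 90])
# -> [90.0, 90.0]
```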

  10. SAR studies in the Yuma Desert, Arizona: Sand penetration, geology, and the detection of military ordnance debris

    USGS Publications Warehouse

    Schaber, G.G.

    1999-01-01

    Synthetic Aperture Radar (SAR) images acquired over part of the Yuma Desert in southwestern Arizona demonstrate the ability of C-band (5.7-cm wavelength), L-band (24.5 cm), and P-band (68 cm) AIRSAR signals to backscatter from increasingly greater depths reaching several meters in blow sand and sandy alluvium. AIRSAR images obtained within the Barry M. Goldwater Bombing and Gunnery Range near Yuma, Arizona, show a total reversal of C- and P-band backscatter contrast (image tone) for three distinct geologic units. This phenomenon results from an increasingly greater depth of radar imaging with increasing radar wavelength. In the case of sandy- and small pebble-alluvium surfaces mantled by up to several meters of blow sand, backscatter increases directly with SAR wavelength as a result of volume scattering from a calcic soil horizon at shallow depth and by volume scattering from the root mounds of healthy desert vegetation that locally stabilize blow sand. AIRSAR images obtained within the military range are also shown to be useful for detecting metallic military ordnance debris that is located either at the surface or covered by tens of centimeters to several meters of blow sand. The degree of detectability of this ordnance increases with SAR wavelength and is clearly maximized on P-band images that are processed in the cross-polarized mode (HV). This effect is attributed to maximum signal penetration at P-band and the enhanced PHV image contrast between the radar-bright ordnance debris and the radar-dark sandy desert. This article focuses on the interpretation of high resolution AIRSAR images but also compares these airborne SAR images with those acquired from spacecraft sensors such as ERS-SAR and Space Radar Laboratory (SIR-C/X-SAR).
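    The wavelength scaling behind the band-to-band contrast reversal can be sketched with the standard low-loss penetration approximation, δp ≈ λ√ε′/(2π ε″). The dielectric values for dry sand below are assumed illustrative numbers of ours, not values from the article.

```python
import math

def penetration_depth_m(wavelength_m, eps_real, eps_imag):
    """Low-loss approximation: depth at which the power falls to 1/e."""
    return wavelength_m * math.sqrt(eps_real) / (2.0 * math.pi * eps_imag)

# Dry sand, assuming eps' = 3 and eps'' = 0.05:
c_band = penetration_depth_m(0.057, 3.0, 0.05)  # ~0.3 m
p_band = penetration_depth_m(0.68, 3.0, 0.05)   # ~3.7 m
```

Penetration grows linearly with wavelength in this regime, which is consistent with the article's observation of meter-scale P-band penetration against shallow C-band sensing.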

  11. 110 °C range athermalization of wavefront coding infrared imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong

    2017-09-01

    Athermalization over a 110 °C range is significant but difficult in the design of infrared imaging systems. Our wavefront-coding athermalized infrared imaging system adopts an optical phase mask with smaller manufacturing errors and a decoding method based on a shrinkage function. Qualitative experiments show that our wavefront-coding athermalized infrared imaging system has three prominent merits: (1) working well over a temperature range of 110 °C; (2) extending the focal depth up to 15.2 times; (3) producing decoded images close to the corresponding in-focus infrared images, with a mean structural similarity index (MSSIM) value greater than 0.85.

  12. Buried Object Detection Method Using Optimum Frequency Range in Extremely Shallow Underground

    NASA Astrophysics Data System (ADS)

    Sugimoto, Tsuneyoshi; Abe, Touma

    2011-07-01

    We propose a new detection method for buried objects using the optimum frequency response range of the corresponding vibration velocity. Flat speakers and a scanning laser Doppler vibrometer (SLDV) are used for noncontact acoustic imaging in the extremely shallow underground. The exploration depth depends on the sound pressure, but it is usually less than 10 cm. Styrofoam, wood (silver fir), and acrylic boards of the same size, different size styrofoam boards, a hollow toy duck, a hollow plastic container, a plastic container filled with sand, a hollow steel can and an unglazed pot are used as buried objects which are buried in sand to about 2 cm depth. The imaging procedure of buried objects using the optimum frequency range is given below. First, the standardized difference from the average vibration velocity is calculated for all scan points. Next, using this result, underground images are made using a constant frequency width to search for the frequency response range of the buried object. After choosing an approximate frequency response range, the difference between the average vibration velocity for all points and that for several points that showed a clear response is calculated for the final confirmation of the optimum frequency range. Using this optimum frequency range, we can obtain the clearest image of the buried object. From the experimental results, we confirmed the effectiveness of our proposed method. In particular, a clear image of the buried object was obtained when the SLDV image was unclear.
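    The first step of the stated procedure, the standardized difference from the average vibration velocity, is a per-scan-point z-score. A minimal version (our sketch of that step, population standard deviation assumed):

```python
def standardized_difference(velocities):
    """(v - mean) / std for every scan point (population std)."""
    n = len(velocities)
    mean = sum(velocities) / n
    std = (sum((v - mean) ** 2 for v in velocities) / n) ** 0.5
    return [(v - mean) / std for v in velocities]

scores = standardized_difference([1.0, 2.0, 3.0])
# scan points over a buried object show large positive scores in the
# object's optimum frequency band
```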

  13. Chromatic confocal microscopy for multi-depth imaging of epithelial tissue

    PubMed Central

    Olsovsky, Cory; Shelton, Ryan; Carrasco-Zevallos, Oscar; Applegate, Brian E.; Maitland, Kristen C.

    2013-01-01

    We present a novel chromatic confocal microscope capable of volumetric reflectance imaging of microstructure in non-transparent tissue. Our design takes advantage of the chromatic aberration of aspheric lenses that are otherwise well corrected. Strong chromatic aberration, generated by multiple aspheres, longitudinally disperses supercontinuum light onto the sample. The backscattered light detected with a spectrometer is therefore wavelength encoded and each spectrum corresponds to a line image. This approach obviates the need for traditional axial mechanical scanning techniques that are difficult to implement for endoscopy and susceptible to motion artifact. A wavelength range of 590-775 nm yielded a >150 µm imaging depth with ~3 µm axial resolution. The system was further demonstrated by capturing volumetric images of buccal mucosa. We believe these represent the first microstructural images in non-transparent biological tissue using chromatic confocal microscopy that exhibit long imaging depth while maintaining acceptable resolution for resolving cell morphology. Miniaturization of this optical system could bring enhanced speed and accuracy to endomicroscopic in vivo volumetric imaging of epithelial tissue. PMID:23667789
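    The wavelength encoding amounts to a calibrated map from detected wavelength to focal depth. The two-point linear calibration below uses illustrative numbers matching the stated 590-775 nm and >150 μm figures; the authors' actual calibration need not be linear.

```python
def wavelength_to_depth_um(wavelength_nm, wl0=590.0, wl1=775.0,
                           z0=0.0, z1=150.0):
    """Linearly interpolate depth (um) from wavelength (nm) between two
    calibration points."""
    t = (wavelength_nm - wl0) / (wl1 - wl0)
    return z0 + t * (z1 - z0)

mid = wavelength_to_depth_um(682.5)  # mid-range wavelength -> 75.0 um
```

Each detected spectrum is then directly a depth line: the spectrometer axis replaces the mechanical z-scan.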

  14. Diffraction-Limited Plenoptic Imaging with Correlated Light

    NASA Astrophysics Data System (ADS)

    Pepe, Francesco V.; Di Lena, Francesco; Mazzilli, Aldo; Edrei, Eitan; Garuccio, Augusto; Scarcelli, Giuliano; D'Angelo, Milena

    2017-12-01

    Traditional optical imaging faces an unavoidable trade-off between resolution and depth of field (DOF). To increase resolution, high numerical apertures (NAs) are needed, but the associated large angular uncertainty results in a limited range of depths that can be put in sharp focus. Plenoptic imaging was introduced a few years ago to remedy this trade-off. To this aim, plenoptic imaging reconstructs the path of light rays from the lens to the sensor. However, the improvement offered by standard plenoptic imaging is practical and not fundamental: The increased DOF leads to a proportional reduction of the resolution well above the diffraction limit imposed by the lens NA. In this Letter, we demonstrate that correlation measurements enable pushing plenoptic imaging to its fundamental limits of both resolution and DOF. Namely, we demonstrate maintaining the imaging resolution at the diffraction limit while increasing the depth of field by a factor of 7. Our results represent the theoretical and experimental basis for the effective development of promising applications of plenoptic imaging.
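    The trade-off the Letter starts from can be made quantitative with the textbook diffraction scalings, resolution ~ λ/(2 NA) and DOF ~ λ/NA²: halving the NA doubles the smallest resolvable feature but quadruples the depth of field. A sketch of those scalings (the proportionality constants are the usual conventions, not values from the Letter):

```python
def diffraction_limits(wavelength_um, na):
    """(lateral resolution, depth of field) under the usual scalar
    approximations: d = lambda / (2 NA), DOF = lambda / NA^2."""
    return wavelength_um / (2.0 * na), wavelength_um / na ** 2

d_hi, dof_hi = diffraction_limits(0.5, 0.5)   # 0.5 um resolution, 2 um DOF
d_lo, dof_lo = diffraction_limits(0.5, 0.25)  # 1.0 um resolution, 8 um DOF
```

Correlation plenoptic imaging breaks the link between the two: the claimed factor-of-7 DOF gain is obtained without backing off from the diffraction-limited d.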

  15. Diffraction-Limited Plenoptic Imaging with Correlated Light.

    PubMed

    Pepe, Francesco V; Di Lena, Francesco; Mazzilli, Aldo; Edrei, Eitan; Garuccio, Augusto; Scarcelli, Giuliano; D'Angelo, Milena

    2017-12-15

    Traditional optical imaging faces an unavoidable trade-off between resolution and depth of field (DOF). To increase resolution, high numerical apertures (NAs) are needed, but the associated large angular uncertainty results in a limited range of depths that can be put in sharp focus. Plenoptic imaging was introduced a few years ago to remedy this trade-off. To this aim, plenoptic imaging reconstructs the path of light rays from the lens to the sensor. However, the improvement offered by standard plenoptic imaging is practical and not fundamental: The increased DOF leads to a proportional reduction of the resolution well above the diffraction limit imposed by the lens NA. In this Letter, we demonstrate that correlation measurements enable pushing plenoptic imaging to its fundamental limits of both resolution and DOF. Namely, we demonstrate maintaining the imaging resolution at the diffraction limit while increasing the depth of field by a factor of 7. Our results represent the theoretical and experimental basis for the effective development of promising applications of plenoptic imaging.

  16. Overcoming sampling depth variations in the analysis of broadband hyperspectral images of breast tissue (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kho, Esther; de Boer, Lisanne L.; Van de Vijver, Koen K.; Sterenborg, Henricus J. C. M.; Ruers, Theo J. M.

    2017-02-01

    Worldwide, up to 40% of breast-conserving surgeries require additional operations due to positive resection margins. We propose to reduce this percentage by using hyperspectral imaging for resection margin assessment during surgery. Spectral hypercubes were collected from 26 freshly excised breast specimens with a pushbroom camera (900-1700 nm). Computer simulations of the penetration depth in breast tissue suggest a strong variation in sampling depth (0.5-10 mm) over this wavelength range. This was confirmed with a breast-tissue-mimicking phantom study. Smaller penetration depths are observed in wavelength regions with high water and/or fat absorption. Consequently, tissue classification based on spectral analysis over the whole wavelength range becomes complicated. This is especially a problem in highly inhomogeneous human tissue. We developed a method, called derivative imaging, which allows accurate tissue analysis without the impediment of dissimilar sampling volumes. A few assumptions were made based on previous research. First, the spectra acquired with our camera from breast tissue are mainly shaped by fat and water absorption. Second, tumor tissue contains less fat and more water than healthy tissue. Third, the scattering slopes of different tissue types are assumed to be alike. In derivative imaging, derivatives are calculated at wavelengths a few nanometers apart, ensuring similar penetration depths. The wavelength choice determines the accuracy of the method and the resolution. Preliminary results on 3 breast specimens indicate a classification accuracy of 93% when using wavelength regions characterized by water and fat absorption. The sampling depths at these regions are 1 mm and 5 mm.
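    The core of derivative imaging is a per-pixel finite difference between two closely spaced bands, so both samples share nearly the same penetration depth. A minimal sketch (the band spacing and reflectance values are illustrative, not from the study):

```python
def derivative_image(band_a, band_b, wl_a_nm, wl_b_nm):
    """Per-pixel spectral derivative between two nearby wavelengths."""
    dwl = wl_b_nm - wl_a_nm
    return [(b - a) / dwl for a, b in zip(band_a, band_b)]

# Reflectance at 1200 nm and 1205 nm for three pixels:
d = derivative_image([0.40, 0.30, 0.35], [0.38, 0.31, 0.35], 1200, 1205)
# pixels then classify by the sign and magnitude of d in the water- and
# fat-absorption regions
```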

  17. High speed parallel spectral-domain OCT using spectrally encoded line-field illumination

    NASA Astrophysics Data System (ADS)

    Lee, Kye-Sung; Hur, Hwan; Bae, Ji Yong; Kim, I. Jong; Kim, Dong Uk; Nam, Ki-Hwan; Kim, Geon-Hee; Chang, Ki Soo

    2018-01-01

    We report parallel spectral-domain optical coherence tomography (OCT) at 500 000 A-scan/s. This is the highest-speed spectral-domain (SD) OCT system using a single line camera. Spectrally encoded line-field scanning is proposed to increase the imaging speed in SD-OCT effectively, and the tradeoff between speed, depth range, and sensitivity is demonstrated. We show that three imaging modes of 125k, 250k, and 500k A-scan/s can be simply switched according to the sample to be imaged considering the depth range and sensitivity. To demonstrate the biological imaging performance of the high-speed imaging modes of the spectrally encoded line-field OCT system, human skin and a whole leaf were imaged at the speed of 250k and 500k A-scan/s, respectively. In addition, there is no sensitivity dependence in the B-scan direction, which is implicit in line-field parallel OCT using line focusing of a Gaussian beam with a cylindrical lens.
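    The speed/depth-range trade in SD-OCT stems from spectrometer sampling: the standard maximum imaging depth is z_max = λ0²/(4 n δλ), with δλ the wavelength interval per detector pixel, so splitting the line camera across spectrally encoded lines shrinks the depth range. A numeric check with illustrative values of ours:

```python
def sdoct_max_depth_mm(center_wl_nm, wl_per_pixel_nm, n=1.0):
    """z_max = lambda0^2 / (4 * n * delta_lambda), returned in mm."""
    z_max_nm = center_wl_nm ** 2 / (4.0 * n * wl_per_pixel_nm)
    return z_max_nm * 1e-6  # nm -> mm

depth = sdoct_max_depth_mm(840.0, 0.05)  # ~3.5 mm in air
# halving the pixels per spectrum (doubling wl_per_pixel_nm) halves z_max
```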

  18. Thermographic imaging for high-temperature composite materials: A defect detection study

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Bodis, James R.; Bishop, Chip

    1995-01-01

    The ability of a thermographic imaging technique for detecting flat-bottom hole defects of various diameters and depths was evaluated in four composite systems (two types of ceramic matrix composites, one metal matrix composite, and one polymer matrix composite) of interest as high-temperature structural materials. The holes ranged from 1 to 13 mm in diameter and 0.1 to 2.5 mm in depth in samples approximately 2-3 mm thick. The thermographic imaging system utilized a scanning mirror optical system and infrared (IR) focusing lens in conjunction with a mercury cadmium telluride infrared detector element to obtain high resolution infrared images. High intensity flash lamps located on the same side as the infrared camera were used to heat the samples. After heating, up to 30 images were sequentially acquired at 70-150 msec intervals. Limits of detectability based on depth and diameter of the flat-bottom holes were defined for each composite material. Ultrasonic and radiographic images of the samples were obtained and compared with the thermographic images.
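    The depth limits of flash thermography follow from one-dimensional heat diffusion: a defect at depth z perturbs the surface cooling on a timescale of order t ≈ z²/α, where α is the thermal diffusivity. A rough order-of-magnitude sketch (the diffusivity value is an assumed illustrative number, not from the study):

```python
def observation_time_s(depth_m, diffusivity_m2_s):
    """Order-of-magnitude time for a thermal front to reach depth z."""
    return depth_m ** 2 / diffusivity_m2_s

# A flat-bottom hole 1 mm below the surface of a composite with
# alpha ~ 1e-6 m^2/s perturbs the surface on a ~1 s timescale,
# resolvable with the 70-150 ms frame intervals used:
t = observation_time_s(1e-3, 1e-6)
```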

  19. Remote measurement of river discharge using thermal particle image velocimetry (PIV) and various sources of bathymetric information

    USGS Publications Warehouse

    Legleiter, Carl; Kinzel, Paul J.; Nelson, Jonathan M.

    2017-01-01

    Although river discharge is a fundamental hydrologic quantity, conventional methods of streamgaging are impractical, expensive, and potentially dangerous in remote locations. This study evaluated the potential for measuring discharge via various forms of remote sensing, primarily thermal imaging of flow velocities but also spectrally-based depth retrieval from passive optical image data. We acquired thermal image time series from bridges spanning five streams in Alaska and observed strong agreement between velocities measured in situ and those inferred by Particle Image Velocimetry (PIV), which quantified advection of thermal features by the flow. The resulting surface velocities were converted to depth-averaged velocities by applying site-specific, calibrated velocity indices. Field spectra from three clear-flowing streams provided strong relationships between depth and reflectance, suggesting that, under favorable conditions, spectrally-based bathymetric mapping could complement thermal PIV in a hybrid approach to remote sensing of river discharge; this strategy would not be applicable to larger, more turbid rivers, however. A more flexible and efficient alternative might involve inferring depth from thermal data based on relationships between depth and integral length scales of turbulent fluctuations in temperature, captured as variations in image brightness. We observed moderately strong correlations for a site-aggregated data set that reduced station-to-station variability but encompassed a broad range of depths. Discharges calculated using thermal PIV-derived velocities were within 15% of in situ measurements when combined with depths measured directly in the field or estimated from field spectra and within 40% when the depth information also was derived from thermal images. The results of this initial, proof-of-concept investigation suggest that remote sensing techniques could facilitate measurement of river discharge.
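    The discharge computation itself is the standard midsection sum over verticals, with surface velocities scaled to depth-averaged values by a velocity index. The index value below is a commonly used default; the study calibrated site-specific indices.

```python
def discharge_m3_s(surface_velocities, depths, widths, velocity_index=0.85):
    """Q = sum over verticals of (k * v_surface) * depth * width."""
    return sum(velocity_index * v * d * w
               for v, d, w in zip(surface_velocities, depths, widths))

q = discharge_m3_s(surface_velocities=[1.0, 1.2],  # m/s from thermal PIV
                   depths=[0.5, 0.8],              # m, from any bathymetry source
                   widths=[2.0, 2.0])              # m, cell widths
# -> 0.85 * (1.0*0.5*2.0 + 1.2*0.8*2.0) = 2.482 m^3/s
```

The three depth sources the study compares (in-situ, spectral retrieval, thermal length scales) all plug into the same `depths` term, which is why depth accuracy dominates the 15% vs 40% discharge errors reported.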

  20. Expansion-based passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1993-01-01

    A new technique of passive ranging which is based on utilizing the image-plane expansion experienced by every object as its distance from the sensor decreases is described. This technique belongs in the feature/object-based family. The motion and shape of a small window, assumed to be fully contained inside the boundaries of some object, is approximated by an affine transformation. The parameters of the transformation matrix are derived by initially comparing successive images, and progressively increasing the image time separation so as to achieve much larger triangulation baseline than currently possible. Depth is directly derived from the expansion part of the transformation. To a first approximation, image-plane expansion is independent of image-plane location with respect to the focus of expansion (FOE) and of platform maneuvers. Thus, an expansion-based method has the potential of providing a reliable range in the difficult image area around the FOE. In areas far from the FOE the shift parameters of the affine transformation can provide more accurate depth information than the expansion alone, and can thus be used similarly to the way they were used in conjunction with the Inertial Navigation Unit (INU) and Kalman filtering. However, the performance of a shift-based algorithm, when the shifts are derived from the affine transformation, would be much improved compared to current algorithms because the shifts - as well as the other parameters - can be obtained between widely separated images. Thus, the main advantage of this new approach is that, allowing the tracked window to expand and rotate, in addition to moving laterally, enables one to correlate images over a very long time span which, in turn, translates into a large spatial baseline - resulting in a proportionately higher depth accuracy.
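    The depth-from-expansion relation is simple similar triangles: if a window's image scale grows by s = w2/w1 between frames while the range closes by vΔt, then from s = Z1/Z2 and Z1 = Z2 + vΔt it follows that Z2 = vΔt/(s − 1). A sketch with illustrative values:

```python
def range_from_expansion(scale, closing_speed_m_s, dt_s):
    """Z2 = v*dt / (s - 1), from image width ratio s = w2/w1 = Z1/Z2
    and Z1 = Z2 + v*dt."""
    return closing_speed_m_s * dt_s / (scale - 1.0)

# A 10% expansion over 1 s at 10 m/s closing speed puts the object at 100 m:
z = range_from_expansion(1.10, 10.0, 1.0)
```

The formula also shows why the long time separation matters: a larger accumulated s moves the measurement away from the noise-sensitive s → 1 regime.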

  1. Expansion-based passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1993-01-01

    This paper describes a new technique of passive ranging which is based on utilizing the image-plane expansion experienced by every object as its distance from the sensor decreases. This technique belongs in the feature/object-based family. The motion and shape of a small window, assumed to be fully contained inside the boundaries of some object, is approximated by an affine transformation. The parameters of the transformation matrix are derived by initially comparing successive images, and progressively increasing the image time separation so as to achieve much larger triangulation baseline than currently possible. Depth is directly derived from the expansion part of the transformation. To a first approximation, image-plane expansion is independent of image-plane location with respect to the focus of expansion (FOE) and of platform maneuvers. Thus, an expansion-based method has the potential of providing a reliable range in the difficult image area around the FOE. In areas far from the FOE the shift parameters of the affine transformation can provide more accurate depth information than the expansion alone, and can thus be used similarly to the way they have been used in conjunction with the Inertial Navigation Unit (INU) and Kalman filtering. However, the performance of a shift-based algorithm, when the shifts are derived from the affine transformation, would be much improved compared to current algorithms because the shifts--as well as the other parameters--can be obtained between widely separated images. Thus, the main advantage of this new approach is that, allowing the tracked window to expand and rotate, in addition to moving laterally, enables one to correlate images over a very long time span which, in turn, translates into a large spatial baseline resulting in a proportionately higher depth accuracy.

  2. Miniature objective lens with variable focus for confocal endomicroscopy

    PubMed Central

    Kim, Minkyu; Kang, DongKyun; Wu, Tao; Tabatabaei, Nima; Carruth, Robert W.; Martinez, Ramses V; Whitesides, George M.; Nakajima, Yoshikazu; Tearney, Guillermo J.

    2014-01-01

    Spectrally encoded confocal microscopy (SECM) is a reflectance confocal microscopy technology that can rapidly image large areas of luminal organs at microscopic resolution. One of the main challenges for large-area SECM imaging in vivo is maintaining the same imaging depth within the tissue when patient motion and tissue surface irregularity are present. In this paper, we report the development of a miniature vari-focal objective lens that can be used in an SECM endoscopic probe to conduct adaptive focusing and to maintain the same imaging depth during in vivo imaging. The vari-focal objective lens is composed of an aspheric singlet with an NA of 0.5, a miniature water chamber, and a thin elastic membrane. The water volume within the chamber was changed to control curvature of the elastic membrane, which subsequently altered the position of the SECM focus. The vari-focal objective lens has a diameter of 5 mm and thickness of 4 mm. A vari-focal range of 240 μm was achieved while maintaining lateral resolution better than 2.6 μm and axial resolution better than 26 μm. Volumetric SECM images of swine esophageal tissues were obtained over the vari-focal range of 260 μm. SECM images clearly visualized cellular features of the swine esophagus at all focal depths, including basal cell nuclei, papillae, and lamina propria. PMID:25574443
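
The focus-control principle can be sketched with a thin-lens model (assuming a plano-convex water-filled membrane lens and the thin-lens lensmaker's equation; the radii and index below are illustrative, not the paper's design values):

```python
def membrane_focal_length_mm(radius_mm, n=1.33):
    """Thin plano-convex liquid lens: 1/f = (n - 1)/R, so f = R/(n - 1).
    n ~ 1.33 for water against air; R is the membrane's radius of curvature."""
    return radius_mm / (n - 1.0)

def focus_shift_mm(r1_mm, r2_mm, n=1.33):
    """Focal shift as injected water volume changes the membrane curvature
    from radius r1 to r2 (smaller radius -> shorter focal length)."""
    return membrane_focal_length_mm(r2_mm, n) - membrane_focal_length_mm(r1_mm, n)
```

Pumping water into the chamber bulges the membrane (smaller R), pulling the focus closer; withdrawing water flattens it, which is the adaptive-focusing mechanism described above.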

  3. Software-based stacking techniques to enhance depth of field and dynamic range in digital photomicrography.

    PubMed

    Piper, Jörg

    2010-01-01

    Several software solutions are powerful tools for enhancing the depth of field and improving focus in digital photomicrography. By these means, the focal depth can be optimized substantially, so that three-dimensional structures within specimens can be documented with superior quality; images can be created in light microscopy that are comparable with scanning electron micrographs. The achievable sharpness is no longer dependent on the specimen's vertical dimension or its regional variations in thickness. Moreover, any potential lack of definition associated with loss of planarity and unsteadiness in visual accommodation can be mitigated or eliminated, so that contour sharpness and resolution are strongly enhanced. Through the use of complementary software, ultrahigh ranges in brightness and contrast (so-called high dynamic range) can be corrected so that the final images are also free from locally over- or underexposed zones. Furthermore, fine detail of low natural contrast can be visualized with much higher clarity. Both techniques yield fundamental enhancements of the global visual information.
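
The pixel-selection idea behind such focus stacking can be sketched with a per-pixel sharpness score (a minimal numpy illustration, not the algorithm of any particular software package):

```python
import numpy as np

def laplacian(img):
    # 4-neighbour Laplacian with replicated edges
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

def local_energy(img, k=3):
    # box-filtered squared Laplacian as a per-pixel sharpness score
    e = laplacian(img) ** 2
    p = np.pad(e, k // 2, mode="edge")
    out = np.zeros_like(e)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + e.shape[0], dx:dx + e.shape[1]]
    return out

def focus_stack(images):
    stack = np.stack(images)                            # (n, H, W)
    scores = np.stack([local_energy(i) for i in images])
    best = np.argmax(scores, axis=0)                    # sharpest source per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Synthetic demo: each source image is sharp in one half of the field.
h, w = 8, 16
yy, xx = np.mgrid[0:h, 0:w]
checker = ((yy + xx) % 2).astype(float)                 # stand-in for in-focus texture
left_sharp = np.where(xx < w // 2, checker, 0.5)
right_sharp = np.where(xx >= w // 2, checker, 0.5)
fused = focus_stack([left_sharp, right_sharp])          # sharp everywhere
```

Real implementations add multiscale blending to hide seams, but the core decision, keep the source with the highest local sharpness, is the same.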

  4. Correction of a liquid lens for 3D imaging systems

    NASA Astrophysics Data System (ADS)

    Bower, Andrew J.; Bunch, Robert M.; Leisher, Paul O.; Li, Weixu; Christopher, Lauren A.

    2012-06-01

    3D imaging systems are currently being developed using liquid lens technology for use in medical devices as well as in consumer electronics. Liquid lenses operate on the principle of electrowetting to control the curvature of a buried surface, allowing for a voltage-controlled change in focal length. Imaging systems which utilize a liquid lens allow extraction of depth information from the object field through a controlled introduction of defocus into the system. The design of such a system must be carefully considered in order to simultaneously deliver good image quality and meet the depth of field requirements for image processing. In this work a corrective model has been designed for use with the Varioptic Arctic 316 liquid lens. The design can be optimized for depth of field while minimizing aberrations for a 3D imaging application. The modeled performance is compared to the measured performance of the corrected system over a large range of focal lengths.

  5. Concept of proton radiography using energy resolved dose measurement.

    PubMed

    Bentefour, El H; Schnuerer, Roland; Lu, Hsiao-Ming

    2016-08-21

    Energy resolved dosimetry offers a potential path to single-detector-based proton imaging using scanned proton beams, because energy resolved dose functions encode the radiological depth at which the measurements are made. When a set of predetermined proton beams (a 'proton imaging field') is used to deliver a well determined dose distribution in a specific volume, then at any given depth x of this volume, the behavior of the dose against the energies of the proton imaging field is unique and characterizes the depth x. This concept applies directly to proton therapy scanning delivery methods (pencil beam scanning and uniform scanning), and it can be extended to passive delivery methods (single and double scattering) if the irradiation is time-controlled with a known time-energy relationship. To derive the water equivalent path length (WEPL) from the energy resolved dose measurement, one may proceed in two ways. The first is to match the measured energy resolved dose function to a pre-established calibration database of the behavior of the energy resolved dose in water, measured over the entire range of radiological depths with at least 1 mm spatial resolution; this calibration database can also be made patient-specific if computed from the patient x-CT data. The second is to use the empirical relationships between the WEPL and either the integral dose or the depth at 80% of the proximal fall-off of the energy resolved dose functions in water. In this note, we establish the fundamental relationship between the energy resolved dose and the WEPL at the depth of the measurement. We then illustrate this relationship with experimental data and discuss its imaging dynamic range for 230 MeV protons.
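
A minimal sketch of the first (database-matching) method, with a smooth sigmoidal stand-in for real energy resolved dose curves; all names and numbers here are illustrative, not from the paper:

```python
import numpy as np

def wepl_by_matching(measured, calib_depths, calib_curves):
    """Return the calibration depth whose dose-vs-energy curve best
    matches the measured curve (least sum of squared differences)."""
    sq_err = np.sum((calib_curves - measured) ** 2, axis=1)
    return calib_depths[int(np.argmin(sq_err))]

# Toy calibration database: one smooth dose-vs-energy curve per depth,
# standing in for measured energy resolved dose functions in water.
energies = np.linspace(100.0, 230.0, 27)          # MeV
depths = np.arange(0.0, 300.0, 1.0)               # mm, 1 mm resolution
reach = 0.022 * energies ** 1.77                  # Bragg-Kleeman-like range, mm
curves = 1.0 / (1.0 + np.exp((depths[:, None] - reach[None, :]) / 5.0))

# A depth is recovered by finding the best-matching calibration curve:
print(wepl_by_matching(curves[137], depths, curves))  # -> 137.0
```

Because the dose-vs-energy behavior at each depth is unique, a least-squares match over the whole curve identifies the depth; a real system would match noisy measurements against patient-specific calibration data.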

  6. Ultrahigh speed 1050nm swept source / Fourier domain OCT retinal and anterior segment imaging at 100,000 to 400,000 axial scans per second

    PubMed Central

    Potsaid, Benjamin; Baumann, Bernhard; Huang, David; Barry, Scott; Cable, Alex E.; Schuman, Joel S.; Duker, Jay S.; Fujimoto, James G.

    2011-01-01

    We demonstrate ultrahigh speed swept source/Fourier domain ophthalmic OCT imaging using a short cavity swept laser at 100,000–400,000 axial scans per second. Several design configurations illustrate tradeoffs in imaging speed, sensitivity, axial resolution, and imaging depth. Variable rate A/D optical clocking is used to acquire linear-in-k OCT fringe data at a 100 kHz axial scan rate with 5.3 μm axial resolution in tissue. Fixed rate sampling at 1 GSPS achieves a 7.5 mm imaging range in tissue with 6.0 μm axial resolution at a 100 kHz axial scan rate. A 200 kHz axial scan rate with 5.3 μm axial resolution over a 4 mm imaging range is achieved by buffering the laser sweep. Dual spot OCT using two parallel interferometers achieves a 400 kHz axial scan rate, almost 2× faster than previous 1050 nm ophthalmic results and 20× faster than current commercial instruments. Superior sensitivity roll-off performance is shown. Imaging is demonstrated in the human retina and anterior segment. Wide field 12 × 12 mm data sets include the macula and optic nerve head. Small area, high density imaging shows individual cone photoreceptors. The 7.5 mm imaging range configuration can show the cornea, iris, and anterior lens in a single image. These improvements in imaging speed and depth range provide important advantages for ophthalmic imaging. The ability to rapidly acquire 3D-OCT data over a wide field of view promises to simplify examination protocols. The ability to image fine structures can provide detailed information on focal pathologies. The large imaging range and improved image penetration at 1050 nm wavelengths promise to improve performance for instrumentation which images both the retina and anterior eye. These advantages suggest that swept source OCT at 1050 nm wavelengths will play an important role in future ophthalmic instrumentation. PMID:20940894
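
The axial resolutions quoted above follow from the standard coherence-length formula for a Gaussian source spectrum (a sketch with illustrative parameters, not the paper's exact source specification):

```python
import math

def oct_axial_resolution_um(center_nm, bandwidth_nm, n_tissue=1.0):
    """FWHM axial resolution of OCT for a Gaussian spectrum:
    dz = (2 ln 2 / pi) * lambda0^2 / (delta_lambda * n)."""
    dz_nm = (2.0 * math.log(2.0) / math.pi) * center_nm ** 2 / bandwidth_nm
    return dz_nm / n_tissue / 1000.0
```

For example, at a 1050 nm center wavelength, a sweep bandwidth of roughly 65-70 nm yields about 5.3 μm resolution in tissue (n ≈ 1.38), consistent in magnitude with the figures above; wider bandwidth gives finer resolution.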

  7. Flexible non-diffractive vortex microscope for three-dimensional depth-enhanced super-localization of dielectric, metal and fluorescent nanoparticles

    NASA Astrophysics Data System (ADS)

    Bouchal, Petr; Bouchal, Zdeněk

    2017-10-01

    In the past decade, probe-based super-resolution using temporally resolved localization of emitters became a groundbreaking imaging strategy in fluorescence microscopy. Here we demonstrate a non-diffractive vortex microscope (NVM), enabling three-dimensional super-resolution fluorescence imaging along with localization and tracking of metal and dielectric nanoparticles. The NVM benefits from vortex non-diffractive beams (NBs) creating a double-helix point spread function that rotates under defocusing while maintaining its size and shape unchanged. Using intrinsic properties of the NBs, dark-field localization of weakly scattering objects is achieved in a large axial range, exceeding the depth of field of the microscope objective by up to 23 times. The NVM was developed on an upright microscope Nikon Eclipse E600, operating with a spiral lithographic mask optimized using Fisher information and built into an add-on imaging module or microscope objective. In evaluating the axial localization accuracy, root mean square errors below 18 nm and 280 nm were verified over depth ranges of 3.5 μm and 13.6 μm, respectively. Subwavelength gold and polystyrene beads were localized with isotropic precision below 10 nm in the axial range of 3.5 μm, with the axial precision reduced to 30 nm in the extended range of 13.6 μm. In fluorescence imaging, localization with isotropic precision below 15 nm was demonstrated in the range of 2.5 μm, whereas in the range of 8.3 μm a precision of 15 nm laterally and 30-50 nm axially was achieved. The tracking of nanoparticles undergoing Brownian motion was demonstrated in a volume of 14 × 10 × 16 μm³. Applicability of the NVM was tested by fluorescence imaging of LW13K2 cells and localization of cellular proteins.

  8. Note: long range and accurate measurement of deep trench microstructures by a specialized scanning tunneling microscope.

    PubMed

    Ju, Bing-Feng; Chen, Yuan-Liu; Zhang, Wei; Zhu, Wule; Jin, Chao; Fang, F Z

    2012-05-01

    A compact but practical scanning tunneling microscope (STM) with high-aspect-ratio and high-depth capability has been specially developed. A long-range scanning mechanism with a tilt-adjustment stage is adopted to adjust the probe-sample relative angle and compensate for non-parallel effects. A periodic trench microstructure with a pitch of 10 μm has been successfully imaged over a long scanning range of up to 2.0 mm. A deep trench with a depth and step height of 23.0 μm has also been successfully measured, with a sidewall slope angle of approximately 67°. The probe can continuously climb the high step and explore the trench bottom without tip crash. The new STM can perform long-range measurement of deep-trench and high-step surfaces without image distortion, enabling accurate measurement and quality control of periodic trench microstructures.

  9. Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging.

    PubMed

    Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho

    2004-12-01

    A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.

  10. Expanding the Detection of Traversable Area with RealSense for the Visually Impaired

    PubMed Central

    Yang, Kailun; Wang, Kaiwei; Hu, Weijian; Bai, Jian

    2016-01-01

    The introduction of RGB-Depth (RGB-D) sensors into the visually impaired people (VIP)-assisting area has stirred great interest among many researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field angle and sparse depth maps at long distances, which hampers broader and longer traversability awareness. This paper proposes an effective approach to expand the detection of traversable area based on an RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of the RealSense is enhanced with IR image large-scale matching and RGB image-guided filtering. A preliminary traversable area is obtained with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded growing region algorithm, combining the depth image and RGB image, then greatly enlarges the preliminary traversable area. This is critical not only for avoiding close obstacles, but also for superior path planning during navigation. The proposed approach has been tested on a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system consisting of a wearable prototype and an audio interface. Furthermore, the presented approach proved useful and reliable in a field test with eight visually impaired volunteers. PMID:27879634
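
The RANSAC ground-segmentation step can be sketched on a synthetic point cloud (a generic plane-fitting illustration, not the authors' pipeline; all data below is fabricated for the demo):

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.02, rng=None):
    """RANSAC fit of a dominant plane n.p + d = 0 to 3-D points."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_count = None, 0
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (near-collinear) sample
        n /= norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < thresh
        count = int(inliers.sum())
        if count > best_count:
            best_count, best = count, (n, d, inliers)
    return best

# Synthetic scene: a flat floor at z ~ 0 plus scattered obstacle points.
rng = np.random.default_rng(1)
floor = np.c_[rng.uniform(-2, 2, (400, 2)), rng.normal(0, 0.005, 400)]
obstacles = rng.uniform([-2, -2, 0.2], [2, 2, 1.5], (100, 3))
n, d, inliers = ransac_plane(np.vstack([floor, obstacles]))
# The recovered normal is close to vertical, (0, 0, +/-1), and the inlier
# mask marks the traversable floor while rejecting obstacle points.
```

The surface-normal check described above corresponds to requiring the fitted plane's normal to be near-vertical before accepting it as ground.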

  11. Estimation of the optical errors on the luminescence imaging of water for proton beam

    NASA Astrophysics Data System (ADS)

    Yabe, Takuya; Komori, Masataka; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi

    2018-04-01

    Although luminescence imaging of water during proton-beam irradiation can be applied to range estimation, the height of the Bragg peak in the luminescence image was smaller than that measured with an ionization chamber. We hypothesized that the difference was attributable to two optical phenomena: parallax errors of the optical system and reflection of the luminescence within the water phantom. We estimated the errors caused by these optical phenomena on the luminescence image of water. To estimate the parallax error, we measured luminescence images during proton-beam irradiation using a cooled charge-coupled device camera while changing the height of the camera's optical axis relative to that of the Bragg peak. When the height of the optical axis matched the depth of the Bragg peak, the Bragg peak heights in the depth profiles were highest. The reflection of the luminescence in a phantom with black walls was slightly smaller than that in a transparent phantom and changed the shapes of the depth profiles. We conclude that parallax error significantly affects the heights of the Bragg peak, and reflection within the phantom affects the shapes of the depth profiles of luminescence images of water.

  12. A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.

    PubMed

    Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F

    2018-03-01

    Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that the current parallel algorithms already perform poorly with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. This structure is then used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance both on simulated and actual 2D images and 3D volumes. Execution times are better than those of the fastest sequential algorithm, and speed-up scales up to 64 threads.
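
The hierarchy a max-tree encodes can be illustrated by brute force, thresholding at every distinct grey level (a quadratic-work 1-D sketch for intuition; the flooding and merging algorithms discussed above build the same nesting in a single pass):

```python
def threshold_components(signal):
    """Connected components (as index runs) of {i : signal[i] >= level},
    for every distinct level.  A max-tree stores exactly this nesting,
    one node per component, with parents at the next lower level."""
    comps = {}
    for level in sorted(set(signal)):
        runs, start = [], None
        for i, v in enumerate(signal):
            if v >= level and start is None:
                start = i
            if v < level and start is not None:
                runs.append((start, i - 1))
                start = None
        if start is not None:
            runs.append((start, len(signal) - 1))
        comps[level] = runs
    return comps

sig = [1, 3, 2, 5, 1, 4, 4, 1]
c = threshold_components(sig)
print(c[1])  # [(0, 7)]          -- the whole signal is one component
print(c[4])  # [(3, 3), (5, 6)]  -- two peaks survive at level 4
```

Each component at a level is contained in exactly one component at the next lower level; that containment relation is the tree, and attributes (area, volume, etc.) are computed per node.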

  13. Evaluation of a novel collimator for molecular breast tomosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilland, David R.; Welch, Benjamin L.; Lee, Seungjoon

    This study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. Methods The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelated (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (-25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) and positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. Results The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance.
In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than in the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec⁻¹ μCi⁻¹ with the open energy window and 11.2 counts sec⁻¹ μCi⁻¹ with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases, where the lesion detectability was extremely low for both MBT and planar. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which outweighed the increase in image noise. Conclusion The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging.

  14. Evaluation of a novel collimator for molecular breast tomosynthesis.

    PubMed

    Gilland, David R; Welch, Benjamin L; Lee, Seungjoon; Kross, Brian; Weisenberger, Andrew G

    2017-11-01

    This study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelated (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (-25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) and positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance.
In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than in the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec⁻¹ μCi⁻¹ with the open energy window and 11.2 counts sec⁻¹ μCi⁻¹ with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases, where the lesion detectability was extremely low for both MBT and planar. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which outweighed the increase in image noise. The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging. © 2017 American Association of Physicists in Medicine.

  15. Evaluation of a novel collimator for molecular breast tomosynthesis

    DOE PAGES

    Gilland, David R.; Welch, Benjamin L.; Lee, Seungjoon; ...

    2017-09-06

    This study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. Methods The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelated (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (-25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) and positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. Results The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance.
In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than in the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec⁻¹ μCi⁻¹ with the open energy window and 11.2 counts sec⁻¹ μCi⁻¹ with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases, where the lesion detectability was extremely low for both MBT and planar. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which outweighed the increase in image noise. Conclusion The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging.

  16. Telecentric 3D profilometry based on phase-shifting fringe projection.

    PubMed

    Li, Dong; Liu, Chunyang; Tian, Jindong

    2014-12-29

    Three-dimensional shape measurement in the microscopic range is becoming increasingly important with the development of micro-manufacturing technology. Microscopic fringe projection techniques offer fast, robust, full-field measurement for field sizes from approximately 1 mm² to several cm². However, the depth of field of a non-telecentric microscope is very small, often insufficient to cover the complete depth of a 3D object, and the calibration of the phase-to-depth conversion is complicated, requiring a precision translation stage and a reference plane. In this paper, we propose a novel telecentric phase-shifting projected fringe profilometry for small, thick objects. Telecentric imaging extends the depth of field to approximately millimeter order, much larger than that of conventional microscopy. To avoid the complicated phase-to-depth conversion of microscopic fringe projection, we develop a new calibration method for the camera and projector based on a telecentric imaging model. On this basis, a 3D reconstruction scheme for telecentric imaging is presented, using stereovision aided by fringe phase maps. Experiments demonstrated the feasibility and high measurement accuracy of the proposed system for thick objects.
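
The phase-retrieval step underlying phase-shifting fringe projection can be sketched with the textbook four-step formula (synthetic fringes; this is the generic method, not the authors' calibration code):

```python
import numpy as np

# Four-step phase shifting: I_k = a + b*cos(phi + k*pi/2), k = 0..3.
# Then I3 - I1 = 2b*sin(phi) and I0 - I2 = 2b*cos(phi), so the wrapped
# phase is phi = atan2(I3 - I1, I0 - I2), independent of a and b.
def wrapped_phase(i0, i1, i2, i3):
    return np.arctan2(i3 - i1, i0 - i2)

phi = np.linspace(-3.0, 3.0, 100)        # ground-truth phase (radians)
a, b = 0.5, 0.4                          # background level and modulation
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
rec = wrapped_phase(*frames)
print(np.allclose(rec, phi))  # True (no wrapping needed within (-pi, pi))
```

In a full profilometer, the wrapped phase is unwrapped and then converted to height, which is exactly the phase-to-depth step whose calibration the paper replaces with a telecentric stereo model.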

  17. Multidepth imaging by chromatic dispersion confocal microscopy

    NASA Astrophysics Data System (ADS)

    Olsovsky, Cory A.; Shelton, Ryan L.; Saldua, Meagan A.; Carrasco-Zevallos, Oscar; Applegate, Brian E.; Maitland, Kristen C.

    2012-03-01

    Confocal microscopy has shown potential as an imaging technique to detect precancer. Imaging cellular features throughout the depth of epithelial tissue may provide useful information for diagnosis. However, the current in vivo axial scanning techniques for confocal microscopy are cumbersome, time-consuming, and restrictive when attempting to reconstruct volumetric images acquired in breathing patients. Chromatic dispersion confocal microscopy (CDCM) exploits severe longitudinal chromatic aberration in the system to axially disperse light from a broadband source and, ultimately, spectrally encode high resolution images along the depth of the object. Hyperchromat lenses are designed to have severe and linear longitudinal chromatic aberration, but have not yet been used in confocal microscopy. We use a hyperchromat lens in a stage scanning confocal microscope to demonstrate the capability to simultaneously capture information at multiple depths without mechanical scanning. A photonic crystal fiber pumped with an 830 nm wavelength Ti:Sapphire laser was used as a supercontinuum source, and a spectrometer was used as the detector. The chromatic aberration and magnification in the system give a focal shift of 140 μm after the objective lens and an axial resolution of 5.2–7.6 μm over the wavelength range from 585 nm to 830 nm. A 400 × 400 × 140 μm³ volume of pig cheek epithelium was imaged in a single X-Y scan. Nuclei can be seen at several depths within the epithelium. The capability of this technique to achieve simultaneous high resolution confocal imaging at multiple depths may reduce imaging time and motion artifacts and enable volumetric reconstruction of in vivo confocal images of the epithelium.
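
Using the figures quoted above (140 μm focal shift over 585-830 nm), the spectral depth encoding can be sketched as a linear wavelength-to-depth map (an idealization: a hyperchromat is engineered for near-linear dispersion, but a real system would use a measured calibration curve):

```python
def wavelength_to_depth_um(lam_nm, lam_min=585.0, lam_max=830.0, shift_um=140.0):
    """Linear spectral encoding of depth: each wavelength in the
    supercontinuum focuses at a different plane across the chromatic
    focal shift, so the spectrometer axis doubles as a depth axis."""
    return (lam_nm - lam_min) / (lam_max - lam_min) * shift_um
```

A spectrometer pixel at, say, 707.5 nm then reads out the confocal signal from roughly the 70 μm plane, with no mechanical axial scanning.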

  18. Imaging and full-length biometry of the eye during accommodation using spectral domain OCT with an optical switch

    PubMed Central

    Ruggeri, Marco; Uhlhorn, Stephen R.; De Freitas, Carolina; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie

    2012-01-01

    An optical switch was implemented in the reference arm of an extended depth SD-OCT system to sequentially acquire OCT images at different depths into the eye ranging from the cornea to the retina. A custom-made accommodation module was coupled with the delivery of the OCT system to provide controlled step stimuli of accommodation and disaccommodation that preserve ocular alignment. The changes in the lens shape were imaged and ocular distances were dynamically measured during accommodation and disaccommodation. The system is capable of dynamic in vivo imaging of the entire anterior segment and eye-length measurement during accommodation in real-time. PMID:22808424

  19. Imaging and full-length biometry of the eye during accommodation using spectral domain OCT with an optical switch.

    PubMed

    Ruggeri, Marco; Uhlhorn, Stephen R; De Freitas, Carolina; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie

    2012-07-01

    An optical switch was implemented in the reference arm of an extended depth SD-OCT system to sequentially acquire OCT images at different depths into the eye ranging from the cornea to the retina. A custom-made accommodation module was coupled with the delivery of the OCT system to provide controlled step stimuli of accommodation and disaccommodation that preserve ocular alignment. The changes in the lens shape were imaged and ocular distances were dynamically measured during accommodation and disaccommodation. The system is capable of dynamic in vivo imaging of the entire anterior segment and eye-length measurement during accommodation in real-time.

  20. Visibility of fiducial markers used for image-guided radiation therapy on optical coherence tomography for registration with CT: An esophageal phantom study.

    PubMed

    Jelvehgaran, Pouya; Alderliesten, Tanja; Weda, Jelmer J A; de Bruin, Martijn; Faber, Dirk J; Hulshof, Maarten C C M; van Leeuwen, Ton G; van Herk, Marcel; de Boer, Johannes F

    2017-12-01

    Optical coherence tomography (OCT) is of interest to visualize microscopic esophageal tumor extensions to improve tumor delineation for radiation therapy (RT) planning. Fiducial marker placement is a common method to ensure target localization during planning and treatment. Visualization of these fiducial markers on OCT permits integrating OCT and computed tomography (CT) images used for RT planning via image registration. We studied the visibility of 13 (eight types) commercially available solid and liquid fiducial markers in OCT images at different depths using dedicated esophageal phantoms and evaluated marker placement depth in clinical practice. We designed and fabricated dedicated esophageal phantoms, in which three layers mimic the anatomical wall structures of a healthy human esophagus. We successfully implanted 13 commercially available fiducial markers that varied in diameter and material properties at depths between 0.5 and 3.0 mm. The resulting esophageal phantoms were imaged with OCT, and marker visibility was assessed qualitatively and quantitatively using the contrast-to-background-noise ratio (CNR). The CNR was defined as the difference between the mean intensity of the fiducial markers and the mean intensity of the background divided by the standard deviation of the background intensity. To determine whether, in current clinical practice, the implanted fiducial markers are within the OCT visualization range (up to 3.0 mm depth), we retrospectively measured the distance of 19 fiducial markers to the esophageal lumen on CT scans of 16 esophageal cancer patients. In the esophageal phantoms, all the included fiducial markers were visible on OCT at all investigated depths. Solid fiducial markers were more clearly visible on OCT than liquid fiducial markers, with a 1.74-fold higher CNR.
Although fiducial marker identification per type and size was slightly easier for superficially implanted fiducial markers, we observed no difference in the ability of OCT to visualize the markers over the investigated depth range. Retrospective distance measurements of 19 fiducial markers on the CT scan of esophageal cancer patients showed that 84% (distance from the closest border of the marker to the lumen) and 53% (distance from the center of the marker to the lumen) of the fiducial markers were located within the OCT visualization range of up to 3.0 mm. We studied the visibility of eight types of commercially available fiducial markers at different depths on OCT using dedicated esophageal phantoms. All tested fiducial markers were visible at depths ≤3.0 mm and most, but not all, clinically implanted markers were at a depth accessible to OCT. Consequently, the use of fiducial markers as a reference for OCT to CT registration is feasible. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
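
    The CNR definition quoted in the abstract (mean marker intensity minus mean background intensity, divided by the standard deviation of the background intensity) is straightforward to compute from an image and two masks. A minimal sketch in Python with NumPy on synthetic data; the function and variable names are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def cnr(image, marker_mask, background_mask):
        """Contrast-to-background-noise ratio as defined in the abstract:
        (mean marker intensity - mean background intensity) / background std."""
        bg = image[background_mask]
        return (image[marker_mask].mean() - bg.mean()) / bg.std()

    # Synthetic B-scan: noisy background plus one bright "fiducial marker".
    rng = np.random.default_rng(0)
    img = rng.normal(10.0, 2.0, size=(64, 64))
    img[28:36, 28:36] += 15.0                  # simulated marker region
    marker = np.zeros(img.shape, dtype=bool)
    marker[28:36, 28:36] = True
    value = cnr(img, marker, ~marker)
    ```

    With an offset of about 15 over a background noise level of 2, the resulting CNR comes out near 7.5, i.e. the marker is easily distinguishable.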

  1. Imaging high-pressure rock exhumation along the arc-continent suture in eastern Taiwan

    NASA Astrophysics Data System (ADS)

    Brown, Dennis; Feng, Kuan-Fu; Wu, Yih-Min; Huang, Hsin-Hua

    2015-04-01

    Imaging high-pressure rock exhumation in active tectonic settings is considered one of the key observations that could help move forward the understanding of how this process works. Petrophysical analyses carried out along a high-velocity zone imaged by seismic travel-time tomography along the suture zone between the actively colliding Luzon Arc and the southeastern margin of Eurasia in Taiwan suggest that high-pressure rocks are being exhumed from a depth of at least 50 km below the arc-continent suture to the shallow subsurface, where they coincide with an outcropping tectonic mélange called the Yuli Belt. The Yuli Belt comprises mainly greenschist-facies quartz-mica schist, with lesser metabasite, metamorphosed mantle fragments and, importantly, minor blueschist. Modeling of published databases of measured seismic velocities for a large suite of rocks suggests that all of the Yuli Belt lithologies fit well with the measured Vp, Vs, and Vp/Vs at ambient pressures and temperatures (a 20 °C/km geotherm is used) from 10 to about 20 km depth. With the exception of hornblendite, mantle rocks need 30% to 40% serpentinization to approximate the in situ range of Vp and Vs at these depths. From about 20 km to 30 km, most continental crust and volcanic arc lithologies move out of the range of velocities measured by the tomography model at these depths. Blueschist (including the calculated Vp and Vs for the Yuli Belt samples), pyroxenite, and harzburgite, lherzolite, and dunite with around 20% to 30% serpentinization now enter the range of velocities for these depths. From 40 km to 50 km depth, the mantle rocks pyroxenite and weakly serpentinized to unserpentinized harzburgite, lherzolite, and dunite, together with mafic eclogite, best fit the range of Vp, Vs, and Vp/Vs at these depths. Seismicity along the arc-continent suture, the upper bounding fault of the high-velocity zone examined here, indicates that it is a moderately oblique-slip thrust. The western boundary is a near-vertical, sharp velocity gradient that, in the upper 10 to 15 km, appears to link with a sinistral strike-slip fault. The high-velocity zone itself is very seismically active down to a depth of 50 km. Focal mechanisms determined from within the high-velocity zone are mostly strike-slip, oblique-slip, and extensional, with rare thrust mechanisms.

  2. Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing

    DTIC Science & Technology

    2014-06-01

    At the price of nearly one tenth of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions.

  3. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, using the sum of absolute differences (SAD) as the similarity measure. The reference image is then layered and the parallax of each layer is calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and translated. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the drawbacks of the DIBR algorithm, such as its need for a high-precision depth map and its complex mapping operations. Experiments show that the algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range at high rendering speed and with satisfactory image quality: relative to real viewpoint images, the average SSIM value of the results reaches 0.9525, the average PSNR reaches 38.353 dB, and the image histogram similarity reaches 93.77%.
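
    The SAD similarity measure named in the abstract is the standard block-matching cost used for stereo disparity (and hence depth) estimation. A minimal sketch of the generic technique, not the authors' implementation; all function names and parameters are illustrative:

    ```python
    import numpy as np

    def sad(block_a, block_b):
        """Sum of absolute differences between two image blocks."""
        return np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum()

    def best_disparity(left, right, row, col, block=3, max_disp=8):
        """Pick the disparity whose right-image block minimizes the SAD cost
        against the reference block in the left image."""
        half = block // 2
        ref = left[row - half:row + half + 1, col - half:col + half + 1]
        costs = []
        for d in range(max_disp + 1):
            c = col - d                      # candidate column in the right image
            if c - half < 0:
                break
            cand = right[row - half:row + half + 1, c - half:c + half + 1]
            costs.append(sad(ref, cand))
        return int(np.argmin(costs))

    # Synthetic stereo pair: the right image is the left shifted by 2 pixels.
    rng = np.random.default_rng(1)
    left = rng.integers(0, 256, size=(16, 16))
    right = np.roll(left, -2, axis=1)        # true disparity of 2 pixels
    d_hat = best_disparity(left, right, row=8, col=8, block=3, max_disp=4)
    ```

    In a full pipeline the per-pixel disparities would be converted to depth and used to assign pixels to layers.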

  4. Optical Drug Monitoring: Photoacoustic Imaging of Nanosensors to Monitor Therapeutic Lithium In Vivo

    PubMed Central

    Cash, Kevin J.; Li, Chiye; Xia, Jun; Wang, Lihong V.; Clark, Heather A.

    2015-01-01

    Personalized medicine could revolutionize how primary care physicians treat chronic disease and how researchers study fundamental biological questions. To realize this goal we need to develop more robust, modular tools and imaging approaches for in vivo monitoring of analytes. In this report, we demonstrate that synthetic nanosensors can measure physiologic parameters with photoacoustic contrast, and we apply that platform to continuously track lithium levels in vivo. Photoacoustic imaging achieves imaging depths that are unattainable with fluorescence or multiphoton microscopy. We validated the photoacoustic results that illustrate the superior imaging depth and quality of photoacoustic imaging with optical measurements. This powerful combination of techniques will unlock the ability to measure analyte changes in deep tissue and will open up photoacoustic imaging as a diagnostic tool for continuous physiological tracking of a wide range of analytes. PMID:25588028

  5. Optical drug monitoring: photoacoustic imaging of nanosensors to monitor therapeutic lithium in vivo.

    PubMed

    Cash, Kevin J; Li, Chiye; Xia, Jun; Wang, Lihong V; Clark, Heather A

    2015-02-24

    Personalized medicine could revolutionize how primary care physicians treat chronic disease and how researchers study fundamental biological questions. To realize this goal, we need to develop more robust, modular tools and imaging approaches for in vivo monitoring of analytes. In this report, we demonstrate that synthetic nanosensors can measure physiologic parameters with photoacoustic contrast, and we apply that platform to continuously track lithium levels in vivo. Photoacoustic imaging achieves imaging depths that are unattainable with fluorescence or multiphoton microscopy. We validated the photoacoustic results that illustrate the superior imaging depth and quality of photoacoustic imaging with optical measurements. This powerful combination of techniques will unlock the ability to measure analyte changes in deep tissue and will open up photoacoustic imaging as a diagnostic tool for continuous physiological tracking of a wide range of analytes.

  6. Quantitative light-induced fluorescence technology for quantitative evaluation of tooth wear

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Kyeom; Lee, Hyung-Suk; Park, Seok-Woo; Lee, Eun-Song; de Josselin de Jong, Elbert; Jung, Hoi-In; Kim, Baek-Il

    2017-12-01

    Various technologies to objectively determine enamel thickness or dentin exposure have been suggested; however, most methods have clinical limitations. This study was conducted to confirm the potential of quantitative light-induced fluorescence (QLF), using the autofluorescence intensity of the occlusal surfaces of worn teeth as a function of enamel grinding depth in vitro. Sixteen permanent premolars were used. Each tooth was gradually ground down at the occlusal surface in the apical direction. QLF-digital and swept-source optical coherence tomography images were acquired at each grinding depth, in steps of 100 μm. All QLF images were converted to 8-bit grayscale images to calculate the fluorescence intensity, and the maximum brightness (MB) values of the same sound regions in the grayscale images were calculated before grinding and after each grinding step. Finally, 13 samples were evaluated. MB increased over the grinding depth range with a strong correlation (r=0.994, P<0.001). In conclusion, the fluorescence intensity of the teeth and the grinding depth were strongly correlated in the QLF images. QLF technology may therefore be a useful noninvasive tool to monitor the progression of tooth wear and to conveniently estimate enamel thickness.

  7. The suitability of lightfield camera depth maps for coordinate measurement applications

    NASA Astrophysics Data System (ADS)

    Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael

    2015-12-01

    Plenoptic cameras can capture 3D information in a single exposure without the need for structured illumination, allowing greyscale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two first-generation Lytro cameras were evaluated. In addition, a calibration process was created for the Lytro cameras to deliver three-dimensional output depth maps expressed in SI units (metres). The results show depth accuracy of +10.0 mm to -20.0 mm and repeatability of 0.5 mm. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.

  8. Joint optic disc and cup boundary extraction from monocular fundus images.

    PubMed

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of the optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately, relying only on color and vessel-kink cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth, which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average Dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved good glaucoma classification performance, with an average AUC of 0.85 for five-fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing, it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
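
    The Dice coefficient reported above is the standard overlap measure 2|A∩B| / (|A| + |B|) between a segmentation mask and its ground truth. A minimal sketch (names are illustrative, not from the paper):

    ```python
    import numpy as np

    def dice_coefficient(seg, ref):
        """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks."""
        seg = np.asarray(seg, dtype=bool)
        ref = np.asarray(ref, dtype=bool)
        return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

    # Two 1-D masks, each with 4 "on" pixels, overlapping on 2 of them.
    seg = np.array([1, 1, 1, 1, 0, 0, 0, 0])
    ref = np.array([0, 0, 1, 1, 1, 1, 0, 0])
    overlap = dice_coefficient(seg, ref)   # 2*2 / (4+4) = 0.5
    ```

    A value of 1 means perfect agreement; the disc scores of 0.87-0.97 quoted above indicate near-complete overlap with the expert annotation.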

  9. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.

  10. Penetration depth of photons in biological tissues from hyperspectral imaging in shortwave infrared in transmission and reflection geometries.

    PubMed

    Zhang, Hairong; Salo, Daniel; Kim, David M; Komarov, Sergey; Tai, Yuan-Chuan; Berezin, Mikhail Y

    2016-12-01

    Measurement of photon penetration in biological tissues is a central theme in optical imaging. A great number of endogenous tissue factors such as absorption, scattering, and anisotropy affect the path of photons in tissue, making it difficult to predict the penetration depth at different wavelengths. Traditional studies evaluating photon penetration at different wavelengths focus on tissue spectroscopy, which does not take into account the heterogeneity within the sample. This is especially critical in the shortwave infrared, where the individual vibration-based absorption properties of tissue molecules are affected by nearby tissue components. We explored depth penetration in biological tissues from 900 to 1650 nm using Monte Carlo simulation and a hyperspectral imaging system, with Michelson spatial contrast as a metric of light penetration. Chromatic aberration-free hyperspectral images in transmission and reflection geometries were collected with a spectral resolution of 5.27 nm and a total acquisition time of 3 min. The relatively short recording time minimized artifacts from sample drying. Results from both transmission and reflection geometries consistently revealed that the highest spatial contrast for deep tissue lies within 1300 to 1375 nm; however, in heavily pigmented tissue such as the liver, the range 1550 to 1600 nm is also prominent.
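
    Michelson contrast, the penetration metric named in the abstract, is the standard ratio C = (I_max − I_min) / (I_max + I_min) over an intensity profile; a target seen through more attenuating tissue yields a lower C. A minimal sketch of that standard definition, not the authors' processing pipeline:

    ```python
    import numpy as np

    def michelson_contrast(intensities):
        """Michelson contrast C = (I_max - I_min) / (I_max + I_min)."""
        i = np.asarray(intensities, dtype=float)
        return (i.max() - i.min()) / (i.max() + i.min())

    # Intensity profile across a resolution target imaged through tissue.
    profile = [0.2, 0.5, 0.8, 0.4]
    c = michelson_contrast(profile)   # (0.8 - 0.2) / (0.8 + 0.2) = 0.6
    ```

    Computing C per wavelength band of a hyperspectral cube then gives a contrast-versus-wavelength curve like the one summarized in the abstract.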

  11. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in compilations of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive detailed and stable seismic source images from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarding parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth variation of deep seismicity: it peaks between about 530 and 600 km, where fast-rupture earthquakes (greater than 0.7 Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.

  12. Moho Depth Variations in the Northeastern North China Craton Revealed by Receiver Function Imaging

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, L.; Yao, H.; Fang, L.

    2016-12-01

    The North China Craton (NCC), one of the oldest cratons in the world, has attracted wide attention in Earth science for decades because of the unusual Mesozoic destruction of its cratonic lithosphere. Understanding the deep processes and mechanism of this craton destruction demands detailed knowledge of the deep structure of the region. In this study, we used two years of teleseismic receiver function data from the North China Seismic Array, consisting of 200 broadband stations deployed in the northeastern NCC, to image the Moho undulation of the region. A 2-D wave-equation-based poststack depth migration method was employed to construct structural images along 19 profiles, and a pseudo-3D crustal velocity model of the region based on previous ambient noise tomography and receiver function studies was adopted in the migration. We considered both the Ps and PpPs phases, and in some cases also conducted PpSs+PsPs migration using different back-azimuth ranges of the data, and calculated the travel times of all the considered phases to constrain the Moho depths. By combining the structural images along the 19 profiles, we obtained a high-resolution Moho depth map beneath the northeastern NCC. Our results are broadly consistent with those of previous active-source studies [http://www.craton.cn/data] and show a good correlation of the Moho depths with geological and tectonic features. Generally, the Moho depths are distinctly different on opposite sides of the North-South Gravity Lineament. The Moho in the west is deeper than 40 km and shows a rapid uplift from 40 km to 30 km beneath the Taihang Mountain Range in the middle. To the east, in the Bohai Bay Basin, the Moho further shallows to 30-26 km depth and undulates by 3 km, coinciding well with the depressions and uplifts inside the basin. The Moho depth beneath the Yin-Yan Mountains in the north gradually decreases from 42 km in the west to 25 km in the east, varying much more smoothly than that to the south.

  13. Depth-encoded all-fiber swept source polarization sensitive OCT

    PubMed Central

    Wang, Zhao; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Lee, ByungKun; Choi, WooJhon; Potsaid, Benjamin; Liu, Jonathan; Jayaraman, Vijaysekhar; Cable, Alex; Kraus, Martin F.; Liang, Kaicheng; Hornegger, Joachim; Fujimoto, James G.

    2014-01-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of conventional OCT and can assess depth-resolved tissue birefringence in addition to intensity. Most existing PS-OCT systems are relatively complex and their clinical translation remains difficult. We present a simple and robust all-fiber PS-OCT system based on swept source technology and polarization depth-encoding. Polarization multiplexing was achieved using a polarization maintaining fiber. Polarization sensitive signals were detected using fiber based polarization beam splitters and polarization controllers were used to remove the polarization ambiguity. A simplified post-processing algorithm was proposed for speckle noise reduction relaxing the demand for phase stability. We demonstrated systems design for both ophthalmic and catheter-based PS-OCT. For ophthalmic imaging, we used an optical clock frequency doubling method to extend the imaging range of a commercially available short cavity light source to improve polarization depth-encoding. For catheter based imaging, we demonstrated 200 kHz PS-OCT imaging using a MEMS-tunable vertical cavity surface emitting laser (VCSEL) and a high speed micromotor imaging catheter. The system was demonstrated in human retina, finger and lip imaging, as well as ex vivo swine esophagus and cardiovascular imaging. The all-fiber PS-OCT is easier to implement and maintain compared to previous PS-OCT systems and can be more easily translated to clinical applications due to its robust design. PMID:25401008

  14. Video-Rate Confocal Microscopy for Single-Molecule Imaging in Live Cells and Superresolution Fluorescence Imaging

    PubMed Central

    Lee, Jinwoo; Miyanaga, Yukihiro; Ueda, Masahiro; Hohng, Sungchul

    2012-01-01

    There is no confocal microscope optimized for single-molecule imaging in live cells and superresolution fluorescence imaging. By combining the swiftness of the line-scanning method and the high sensitivity of wide-field detection, we have developed a, to our knowledge, novel confocal fluorescence microscope with a good optical-sectioning capability (1.0 μm), fast frame rates (<33 fps), and superior fluorescence detection efficiency. Full compatibility of the microscope with conventional cell-imaging techniques allowed us to do single-molecule imaging with great ease at arbitrary depths in live cells. With the new microscope, we monitored the diffusion motion of fluorescently labeled cAMP receptors of Dictyostelium discoideum at both the basal and apical surfaces and obtained superresolution fluorescence images of microtubules of COS-7 cells at depths in the range 0–85 μm from the surface of a coverglass. PMID:23083712

  15. Development and Application of Stable Phantoms for the Evaluation of Photoacoustic Imaging Instruments

    PubMed Central

    Bohndiek, Sarah E.; Bodapati, Sandhya; Van De Sompel, Dominique; Kothapalli, Sri-Rajasekhar; Gambhir, Sanjiv S.

    2013-01-01

    Photoacoustic imaging combines the high contrast of optical imaging with the spatial resolution and penetration depth of ultrasound. This technique holds tremendous potential for imaging in small animals and importantly, is clinically translatable. At present, there is no accepted standard physical phantom that can be used to provide routine quality control and performance evaluation of photoacoustic imaging instruments. With the growing popularity of the technique and the advent of several commercial small animal imaging systems, it is important to develop a strategy for assessment of such instruments. Here, we developed a protocol for fabrication of physical phantoms for photoacoustic imaging from polyvinyl chloride plastisol (PVCP). Using this material, we designed and constructed a range of phantoms by tuning the optical properties of the background matrix and embedding spherical absorbing targets of the same material at different depths. We created specific designs to enable: routine quality control; the testing of robustness of photoacoustic signals as a function of background; and the evaluation of the maximum imaging depth available. Furthermore, we demonstrated that we could, for the first time, evaluate two small animal photoacoustic imaging systems with distinctly different light delivery, ultrasound imaging geometries and center frequencies, using stable physical phantoms and directly compare the results from both systems. PMID:24086557

  16. The Hellenic Subduction Zone: A tomographic image and its geodynamic implications

    NASA Astrophysics Data System (ADS)

    Spakman, W.; Wortel, M. J. R.; Vlaar, N. J.

    1988-01-01

    New tomographic images of the Hellenic subduction zone demonstrate slab penetration in the Aegean Upper Mantle to depths of at least 600 km. Beneath Greece the lower part of the slab appears to be detached at a depth of about 200 km whereas it still seems to be unruptured beneath the southern Aegean. Schematically we derive minimum time estimates for the duration of the Hellenic subduction zone that range from 26 to 40 Ma. This is considerably longer than earlier estimates which vary between 5 and about 13 Ma.

  17. Sampling strategies to improve passive optical remote sensing of river bathymetry

    USGS Publications Warehouse

    Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.

    2018-01-01

    Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth (d_max) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that d_max was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of d_max consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than d_max, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
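
    The progressive-truncation idea — recalibrating the depth-reflectance relation after excluding observations deeper than a series of cutoff depths, and watching the fit quality — can be sketched as below. The linear fit of depth against a log band ratio follows the OBRA framework in spirit, but the saturating synthetic radiance model and all names are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def obra_calibration(band1, band2, depth, cutoff):
        """Fit depth ~ a + b * X with X = ln(band1/band2), using only the
        observations shallower than `cutoff` (progressive truncation).
        Returns (intercept, slope, R^2) of the fit."""
        keep = depth <= cutoff
        x = np.log(band1[keep] / band2[keep])
        d = depth[keep]
        slope, intercept = np.polyfit(x, d, 1)
        pred = intercept + slope * x
        ss_res = ((d - pred) ** 2).sum()
        ss_tot = ((d - d.mean()) ** 2).sum()
        return intercept, slope, 1.0 - ss_res / ss_tot

    # Synthetic scene: the band ratio stops responding to depth past ~2 m,
    # mimicking a maximum detectable depth d_max.
    rng = np.random.default_rng(2)
    depth = rng.uniform(0.1, 4.0, 500)
    x_true = 0.5 * np.minimum(depth, 2.0) + rng.normal(0.0, 0.01, 500)
    band1, band2 = np.exp(x_true), np.ones(500)
    r2_shallow = obra_calibration(band1, band2, depth, cutoff=2.0)[2]
    r2_full = obra_calibration(band1, band2, depth, cutoff=4.0)[2]
    ```

    Sweeping the cutoff and locating where R^2 begins to degrade gives an image-derived estimate of d_max, which is the inference described in the abstract.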

  18. Novel dental dynamic depth profilometric imaging using simultaneous frequency-domain infrared photothermal radiometry and laser luminescence

    NASA Astrophysics Data System (ADS)

    Nicolaides, Lena; Mandelis, Andreas

    2000-01-01

    A high-spatial-resolution dynamic experimental imaging setup, which can provide simultaneous measurements of laser- induced frequency-domain infrared photothermal radiometric and luminescence signals from defects in teeth, has been developed for the first time. The major findings of this work are: (1) radiometric images are complementary to (anticorrelated with) luminescence images, as a result of the nature of the two physical signal generation processes; (2) the radiometric amplitude exhibits much superior dynamic (signal resolution) range to luminescence in distinguishing between intact and cracked sub-surface structures in the enamel; (3) the radiometric signal (amplitude and phase) produces dental images with much better defect localization, delineation, and resolution; (4) radiometric images (amplitude and phase) at a fixed modulation frequency are depth profilometric, whereas luminescence images are not; and (5) luminescence frequency responses from enamel and hydroxyapatite exhibit two relaxation lifetimes, the longer of which (approximately ms) is common to all and is not sensitive to the defect state and overall quality of the enamel. Simultaneous radiometric and luminescence frequency scans for the purpose of depth profiling were performed and a quantitative theoretical two-lifetime rate model of dental luminescence was advanced.

  19. Automatic optimization high-speed high-resolution OCT retinal imaging at 1 μm

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Liu, Xiyun; Miao, Dongkai; Lee, Sujin; Lee, Sieun; Bonora, Stefano; Zawadzki, Robert J.; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2015-03-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important in providing visualization of various retinal structures to aid researchers in better understanding the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth of focus. In this report, we present the development of a focus-stacking OCT system with automatic optimization for high-resolution, extended-focal-range clinical retinal imaging. A variable-focus liquid lens was added to correct for defocus in real time. GPU-accelerated segmentation and optimization were used to provide real-time, layer-specific en face visualization as well as depth-specific focus adjustment. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the optic nerve head (ONH), from which we extracted clinically relevant parameters such as nerve fiber layer thickness and lamina cribrosa microarchitecture.
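
    Per-pixel selection of the sharpest acquisition is one common way to merge volumes focused at different depths. The sketch below uses local gradient magnitude as a naive sharpness measure; it is a generic stand-in for focus stacking under that assumption, not the registration-and-stitching pipeline described in the abstract, and all names are illustrative:

    ```python
    import numpy as np

    def focus_stack(images):
        """Naive focus stacking: at each pixel, keep the value from whichever
        image has the largest local gradient magnitude (a simple proxy for
        being in focus at that location)."""
        stack = np.stack([np.asarray(im, dtype=float) for im in images])
        gy, gx = np.gradient(stack, axis=(1, 2))   # per-image spatial gradients
        sharpness = gx ** 2 + gy ** 2
        best = np.argmax(sharpness, axis=0)        # sharpest image per pixel
        rows, cols = np.indices(best.shape)
        return stack[best, rows, cols]

    # Toy example: a detailed "in focus" ramp versus a featureless "blurred" frame.
    ramp = np.tile(np.arange(8.0), (8, 1))         # strong horizontal gradient
    flat = np.full((8, 8), 3.0)                    # no detail at all
    stacked = focus_stack([ramp, flat])            # picks the ramp everywhere
    ```

    A production pipeline would register the volumes first and use a more robust sharpness metric, but the per-pixel argmax-over-sharpness step is the core of the stitching idea.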

  20. P-wave tomography of the western United States: Insight into the Yellowstone hotspot and the Juan de Fuca slab

    NASA Astrophysics Data System (ADS)

    Tian, You; Zhao, Dapeng

    2012-06-01

    We used 190,947 high-quality P-wave arrival times from 8421 local earthquakes and 1,098,022 precise travel-time residuals from 6470 teleseismic events recorded by the EarthScope/USArray transportable array to determine a detailed three-dimensional P-wave velocity model of the crust and mantle down to 1000 km depth under the western United States (US). Our tomography revealed strong heterogeneities in the crust and upper mantle under the western US. Prominent high-velocity anomalies are imaged beneath the Idaho Batholith, the central Colorado Plateau, the Cascadia subduction zone, the stable North American Craton, the Transverse Ranges, and the southern Sierra Nevada. Prominent low-velocity anomalies are imaged at depths of 0-200 km beneath the Snake River Plain, which may represent small-scale convection beneath the western US. The low-velocity structure deviates variably from a narrow vertical plume conduit extending down to ˜1000 km depth, suggesting that the Yellowstone hotspot may have a lower-mantle origin. The Juan de Fuca slab is imaged as a dipping high-velocity anomaly under the western US. The slab geometry and its subducted depth vary in the north-south direction. In the southern parts, the slab may have subducted down to >600 km depth. A "slab hole" is revealed beneath Oregon, which shows up as a low-velocity anomaly at depths of ˜100 to 300 km. The formation of the slab hole may be related to the Newberry magmatism. The removal of the flat subducted Farallon slab may have triggered the vigorous magmatism in the Basin and Range and the southern part of the Rocky Mountains, and also resulted in the uplift of the Colorado Plateau and the Rocky Mountains.

  1. Vision based obstacle detection and grouping for helicopter guidance

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Chatterji, Gano

    1993-01-01

    Electro-optical sensors can be used to compute range to objects in the flight path of a helicopter. The computation is based on the optical flow/motion at different points in the image. The motion algorithms provide a sparse set of ranges to discrete features in the image sequence as a function of azimuth and elevation. For obstacle avoidance guidance and display purposes, this discrete set of ranges, varying from a few hundred to several thousand points, needs to be grouped into sets which correspond to objects in the real world. This paper presents a new method for object segmentation based on clustering the sparse range information provided by motion algorithms together with the spatial relation provided by the static image. The range values are initially grouped into clusters based on depth. Subsequently, the clusters are modified by using the K-means algorithm in the inertial horizontal plane and the minimum spanning tree algorithm in the image plane. The object grouping allows interpolation within a group and enables the creation of dense range maps. Researchers in robotics have used densely scanned sequences of laser range images to build three-dimensional representations of the outside world. Thus, modeling techniques developed for dense range images can be extended to sparse range images. The paper presents object segmentation results for a sequence of flight images.
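    The initial grouping step, clustering the sparse ranges by depth, can be sketched with a small 1-D k-means. This is a generic illustration (the deterministic initialization and choice of k are assumptions), not the paper's full pipeline, which goes on to refine the clusters in the horizontal plane and the image plane:

```python
import numpy as np

def cluster_by_depth(ranges, k=2, iters=20):
    """Group sparse range measurements into k depth clusters (1-D k-means)."""
    ranges = np.asarray(ranges, dtype=float)
    # deterministic init: spread the centers across the observed depth span
    centers = np.linspace(ranges.min(), ranges.max(), k)
    for _ in range(iters):
        # assign each range to the nearest center, then recompute means
        labels = np.argmin(np.abs(ranges[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = ranges[labels == j].mean()
    return labels, centers
```

    Once features are grouped, ranges can be interpolated within each group, which is how the sparse measurements are turned into the dense range maps the abstract mentions.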

  2. Chirp-coded excitation imaging with a high-frequency ultrasound annular array.

    PubMed

    Mamou, Jonathan; Ketterling, Jeffrey A; Silverman, Ronald H

    2008-02-01

    High-frequency ultrasound (HFU, >15 MHz) is an effective means of obtaining fine-resolution images of biological tissues for applications such as ophthalmologic, dermatologic, and small-animal imaging. HFU has two inherent drawbacks. First, HFU images have a limited depth of field (DOF) because of the short wavelength and the low fixed F-number of conventional HFU transducers. Second, HFU can be used to image only a few millimeters deep into a tissue because attenuation increases with frequency. In this study, a five-element annular array was used in conjunction with a synthetic-focusing algorithm to extend the DOF. The annular array had an aperture of 10 mm, a focal length of 31 mm, and a center frequency of 17 MHz. To increase penetration depth, 8-µs chirp-coded signals were designed, input into an arbitrary waveform generator, and used to excite each array element. After data acquisition, the received signals were linearly filtered to restore axial resolution and increase the SNR. To compare the chirp-coded imaging method with conventional impulse imaging in terms of resolution, a 25-µm diameter wire was scanned and the -6-dB axial and lateral resolutions were computed at depths ranging from 20.5 to 40.5 mm. The results demonstrated that chirp-coded excitation did not degrade axial or lateral resolution. A tissue-mimicking phantom containing 10-µm glass beads was scanned, and backscattered signals were analyzed to evaluate SNR and penetration depth. Finally, ex vivo ophthalmic images were formed, and chirp-coded images showed features that were not visible in conventional impulse images.
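    The pulse-compression idea behind chirp-coded excitation can be sketched with a linear FM pulse and a matched filter (correlation with the time-reversed transmit pulse, one common choice for the linear filtering the abstract mentions). The sampling rate and sweep band below are illustrative assumptions, loosely centered on the 17 MHz transducer:

```python
import numpy as np

fs = 200e6                 # sample rate, Hz (illustrative)
T = 8e-6                   # chirp duration, matching the 8-us codes above
f0, f1 = 10e6, 24e6        # sweep band around a 17 MHz center (assumed)
t = np.arange(int(T * fs)) / fs
k = (f1 - f0) / T          # linear sweep rate, Hz/s
tx = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))  # linear FM chirp

# Matched filtering compresses the long coded echo back to roughly
# 1/bandwidth in duration, recovering axial resolution.
echo = np.concatenate([np.zeros(500), tx, np.zeros(500)])
compressed = np.convolve(echo, tx[::-1], mode='same')
envelope = np.abs(compressed) / np.abs(compressed).max()

above = np.flatnonzero(envelope > 0.5)
width_s = (above[-1] - above[0]) / fs   # crude -6 dB mainlobe width
```

    The compressed mainlobe is on the order of 1/B (here roughly 70 ns) versus the 8 µs transmitted pulse, which is why the long code raises SNR without degrading axial resolution.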

  3. The multifocus plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Lumsdaine, Andrew

    2012-01-01

    The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For digital refocusing (one of the important applications), the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a much wider range of digital refocusing can be achieved. This paper presents our theory and results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.

  4. SU-E-J-121: Measuring Prompt Gamma Emission Profiles with a Multi-Stage Compton Camera During Proton Beam Irradiation: Initial Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polf, J; McCleskey, M; Brown, S

    2014-06-01

    Purpose: Recent studies have suggested that the characteristics of prompt gammas (PG) emitted during proton beam irradiation are advantageous for determining beam range during treatment delivery. The purpose of this work was to determine the feasibility of determining the proton beam range from PG data measured with a prototype Compton camera (CC) during proton beam irradiation. Methods: Using a prototype multi-stage CC, the PG emission from a water phantom was measured during irradiation with clinical proton therapy beams. The measured PG emission data were used to reconstruct an image of the PG emission using a backprojection reconstruction algorithm. One-dimensional (1D) profiles extracted from the PG images were compared to: 1) PG emission data measured at fixed depths using collimated high-purity germanium and lanthanum bromide detectors, and 2) the measured depth-dose profiles of the proton beams. Results: Comparisons showed that the PG emission profiles reconstructed from CC measurements agreed very well with the measurements of PG emission as a function of depth made with the collimated detectors. The distal falloff of the measured PG profile was between 1 mm and 4 mm proximal to the distal edge of the Bragg peak for proton beam ranges from 4 cm to 16 cm in water. Doses of at least 5 Gy were needed for the CC to measure sufficient data to image the PG profile and localize the distal PG falloff. Conclusion: Initial tests of a prototype CC for imaging PG emission during proton beam irradiation indicated that measurement and reconstruction of the PG profile was possible. However, due to limitations of the operational parameters (energy range and count rate) of the current CC prototype, doses greater than a typical treatment dose (∼2 Gy) were needed to measure adequate PG signal to reconstruct viable images. Funding support for this project was provided by a grant from the DoD.

  5. Water penetration study

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.

    1973-01-01

    Nine film-filter combinations have been tested for effectiveness in recording water subsurface detail when exposed from an aerial platform over a typical water body. An experimental 2-layer positive color film, a 2-layer (minus blue layer) film, a normal 3-layer color film, a panchromatic black-and-white film, and an infrared film with selected filters were tested. Results have been tabulated to show the relative capability of each film-filter combination for: (1) image contrast in shallow water (0 to 5 feet); (2) image contrast at medium depth (5 to 10 feet); (3) image contrast in deep water (10 feet plus); (4) water penetration: maximum depth at which detail was discriminated; (5) image color: the spectral range of the image; (6) vegetation visible above a water background; (7) specular reflections visible from the water surface; and (8) visual compatibility: ease of discriminating image detail. Recommendations for future recording over water bodies are included.

  6. Retrieving the axial position of fluorescent light emitting spots by shearing interferometry

    NASA Astrophysics Data System (ADS)

    Schindler, Johannes; Schau, Philipp; Brodhag, Nicole; Frenner, Karsten; Osten, Wolfgang

    2016-12-01

    A method for the depth-resolved detection of fluorescent radiation based on imaging of an interference pattern of two intersecting beams and shearing interferometry is presented. The illumination setup provides local addressing of the excitation of fluorescence and a coarse confinement of the excitation volume in the axial and lateral directions. The reconstruction of the depth relies on the measurement of the phase of the fluorescent wave fronts. Their curvature is directly related to the distance of a source to the focus of the imaging system. Access to the phase information is enabled by a lateral shearing interferometer based on a Michelson setup. This allows the evaluation of interference signals even for spatially and temporally incoherent light such as that emitted by fluorophores. An analytical signal model is presented and the relations for obtaining the depth information are derived. Measurements of reference samples with different concentrations and spatial distributions of fluorophores and scatterers prove the experimental feasibility of the method. In a setup optimized for flexibility and operating in the visible range, sufficiently large interference signals are recorded for scatterers placed at depths on the order of a hundred micrometers below the surface in a material with scattering properties comparable to dental enamel.

  7. Retrieving the axial position of fluorescent light emitting spots by shearing interferometry.

    PubMed

    Schindler, Johannes; Schau, Philipp; Brodhag, Nicole; Frenner, Karsten; Osten, Wolfgang

    2016-12-01

    A method for the depth-resolved detection of fluorescent radiation based on imaging of an interference pattern of two intersecting beams and shearing interferometry is presented. The illumination setup provides local addressing of the excitation of fluorescence and a coarse confinement of the excitation volume in the axial and lateral directions. The reconstruction of the depth relies on the measurement of the phase of the fluorescent wave fronts. Their curvature is directly related to the distance of a source to the focus of the imaging system. Access to the phase information is enabled by a lateral shearing interferometer based on a Michelson setup. This allows the evaluation of interference signals even for spatially and temporally incoherent light such as that emitted by fluorophores. An analytical signal model is presented and the relations for obtaining the depth information are derived. Measurements of reference samples with different concentrations and spatial distributions of fluorophores and scatterers prove the experimental feasibility of the method. In a setup optimized for flexibility and operating in the visible range, sufficiently large interference signals are recorded for scatterers placed at depths on the order of a hundred micrometers below the surface in a material with scattering properties comparable to dental enamel.

  8. Research of an optimization design method of integral imaging three-dimensional display system

    NASA Astrophysics Data System (ADS)

    Gao, Hui; Yan, Zhiqiang; Wen, Jun; Jiang, Guanwu

    2016-03-01

    Information warfare requires a highly transparent battlefield environment, so true three-dimensional display technology offers clear advantages over traditional display technology in current military science and technology. This paper summarizes the principle, characteristics, and development history of integral imaging and reviews progress in lens array imaging technology, focusing on the factors that restrict the development of integral imaging: low spatial resolution, narrow depth range, and small viewing angle. Methods for improving resolution, extending depth of field, increasing viewing angle, and eliminating artifacts are compared and analyzed, and the display performance of the different methods is discussed in light of the experimental results.

  9. A Flexible Annular-Array Imaging Platform for Micro-Ultrasound

    PubMed Central

    Qiu, Weibao; Yu, Yanyan; Chabok, Hamid Reza; Liu, Cheng; Tsang, Fu Keung; Zhou, Qifa; Shung, K. Kirk; Zheng, Hairong; Sun, Lei

    2013-01-01

    Micro-ultrasound is an invaluable imaging tool for many clinical and preclinical applications requiring high resolution (approximately several tens of micrometers). Imaging systems for micro-ultrasound, including single-element and linear-array systems, have been developed extensively in recent years. Single-element systems are cheaper, but linear-array systems give much better image quality at a higher expense. Annular-array-based systems provide a third alternative, striking a balance between image quality and expense. This paper presents the development of a novel programmable and real-time annular-array imaging platform for micro-ultrasound. It supports multi-channel dynamic beamforming techniques for large-depth-of-field imaging. The major image-processing algorithms were implemented in field-programmable gate array technology for high speed and flexibility. Real-time imaging was achieved by fast processing algorithms and a high-speed data-transfer interface. The platform utilizes a printed circuit board scheme incorporating state-of-the-art electronics for compactness and cost effectiveness. Extensive tests including hardware, algorithms, wire phantom, and tissue-mimicking phantom measurements were conducted to demonstrate good performance of the platform. The calculated contrast-to-noise ratio (CNR) of the tissue phantom measurements was higher than 1.2 in the range of 3.8 to 8.7 mm imaging depth. The platform supported more than 25 images per second for real-time image acquisition. The depth of field showed about a 2.5-fold improvement compared to single-element transducer imaging. PMID:23287923
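    Per focal depth, the dynamic receive beamforming such a platform implements reduces to a set of geometric delays across the annuli. A minimal sketch (the element radii and focal depth are made-up values; real systems also fold in fixed system delays and apodization):

```python
import numpy as np

SPEED_OF_SOUND = 1540.0  # m/s, soft-tissue average

def receive_focus_delays(radii_m, depth_m, c=SPEED_OF_SOUND):
    """Delays that align on-axis echoes across the annuli of an annular array.

    An annulus at radius r sees a path longer than the central ray by
    sqrt(r^2 + z^2) - z; delaying the inner elements by the difference
    aligns all channels before summation.
    """
    radii = np.asarray(radii_m, dtype=float)
    extra = (np.sqrt(radii**2 + depth_m**2) - depth_m) / c
    return extra.max() - extra
```

    Recomputing these delays as echoes return from progressively greater depths is what keeps the whole scan line in focus and yields the extended depth of field reported above.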

  10. Detection of Underwater UXOs in Mud

    DTIC Science & Technology

    2013-04-01

    the system can operate in a water depth up to 30 m. ... The report is structured as follows: Section 2 provides an...and tilt angle can be modified, such that the system can operate in a water depth up to 30 m. Figure 2 – Data flow diagram for the MUD processing...ground-truth location. The water depth is in the range between 8 and 15 m. Figure 4 – SAS image snippets of the CMRE EVA cylinder using (a) regular

  11. Design, Fabrication and Characterization of a Bi-Frequency Co-Linear Array

    PubMed Central

    Wang, Zhuochen; Li, Sibo; Czernuszewicz, Tomasz J; Gallippi, Caterina M.; Liu, Ruibin; Geng, Xuecang

    2016-01-01

    Ultrasound imaging with high resolution and large penetration depth has been increasingly adopted in medical diagnosis, surgery guidance, and treatment assessment. Conventional ultrasound works at a particular frequency, with a −6 dB fractional bandwidth of ~70 %, limiting the imaging resolution or depth of field. In this paper, a bi-frequency co-linear array with resonant frequencies of 8 MHz and 20 MHz was investigated to meet the requirements of resolution and penetration depth for a broad range of ultrasound imaging applications. Specifically, a 32-element bi-frequency co-linear array was designed and fabricated, followed by element characterization and real-time sectorial scan (S-scan) phantom imaging using a Verasonics system. The bi-frequency co-linear array was tested in four different modes by switching between low and high frequencies on transmit and receive. The four modes included the following: (1) transmit low, receive low, (2) transmit low, receive high, (3) transmit high, receive low, (4) transmit high, receive high. After testing, the axial and lateral resolutions of all modes were calculated and compared. The results of this study suggest that bi-frequency co-linear arrays are potential aids for wideband fundamental imaging and harmonic/sub-harmonic imaging. PMID:26661069

  12. A carbon CT system: how to obtain accurate stopping power ratio using a Bragg peak reduction technique

    NASA Astrophysics Data System (ADS)

    Lee, Sung Hyun; Sunaguchi, Naoki; Hirano, Yoshiyuki; Kano, Yosuke; Liu, Chang; Torikoshi, Masami; Ohno, Tatsuya; Nakano, Takashi; Kanai, Tatsuaki

    2018-02-01

    In this study, we investigate the performance of the Gunma University Heavy Ion Medical Center’s ion computed tomography (CT) system, which measures the residual range of a carbon-ion beam using a fluoroscopy screen, a charge-coupled-device camera, and a moving wedge absorber and collects CT reconstruction images from each projection angle. Each 2D image was obtained by changing the polymethyl methacrylate (PMMA) thickness, such that all images for one projection could be expressed as the depth distribution in PMMA. The residual range as a function of PMMA depth was related to the range in water through a calibration factor, which was determined by comparing the PMMA-equivalent thickness measured by the ion CT system to the water-equivalent thickness measured by a water column. Aluminium, graphite, PMMA, and five biological phantoms were placed in a sample holder, and the residual range for each was quantified simultaneously. A novel method of CT reconstruction to correct for the angular deflection of incident carbon ions in the heterogeneous region utilising the Bragg peak reduction (BPR) is also introduced in this paper, and its performance is compared with other methods present in the literature such as the decomposition and differential methods. Stopping power ratio values derived with the BPR method from carbon-ion CT images matched closely with the true water-equivalent length values obtained from the validation slab experiment.

  13. Evaluation of color encodings for high dynamic range pixels

    NASA Astrophysics Data System (ADS)

    Boitard, Ronan; Mantiuk, Rafal K.; Pouli, Tania

    2015-03-01

    Traditional Low Dynamic Range (LDR) color spaces encode a small fraction of the visible color gamut, which does not encompass the range of colors produced on upcoming High Dynamic Range (HDR) displays. Future imaging systems will require encoding a much wider color gamut and luminance range. Such a wide color gamut can be represented using floating-point HDR pixel values, but those are inefficient to encode. They also lack the perceptual uniformity of the luminance and color distribution, which is provided (in approximation) by most LDR color spaces. Therefore, there is a need to devise an efficient, perceptually uniform, and integer-valued representation for high dynamic range pixel values. In this paper we evaluate several methods for encoding color HDR pixel values, in particular for use in image and video compression. Unlike other studies, we test both luminance and color-difference encoding in rigorous 4AFC threshold experiments to determine the minimum bit-depth required. Results show that the Perceptual Quantizer (PQ) encoding provides the best perceptual uniformity in the considered luminance range; however, the gain in bit-depth is rather modest. More significant differences can be observed between color-difference encoding schemes, of which YDuDv encoding seems to be the most efficient.
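    The Perceptual Quantizer the study evaluates is the SMPTE ST 2084 curve, which maps absolute luminance to a near-perceptually-uniform signal before integer quantization. A sketch using the published PQ constants (the 10-bit quantization step is an illustrative choice, not a value from the paper):

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance_nits):
    """Map absolute luminance (0..10000 cd/m^2) to a PQ signal in [0, 1]."""
    y = np.asarray(luminance_nits, dtype=float) / 10000.0
    yp = np.power(y, M1)
    return np.power((C1 + C2 * yp) / (1 + C3 * yp), M2)

def quantize(signal, bits=10):
    """Uniform integer quantization of the PQ signal (bit depth assumed)."""
    return np.round(signal * (2**bits - 1)).astype(int)
```

    Under this curve a 100 cd/m^2 diffuse white lands near the middle of a 10-bit code range, leaving roughly half the codes for highlights up to 10,000 cd/m^2, which is the perceptual-uniformity property the threshold experiments probe.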

  14. Potential of coded excitation in medical ultrasound imaging.

    PubMed

    Misaridis, T X; Gammelmark, K; Jørgensen, C H; Lindberg, N; Thomsen, A H; Pedersen, M H; Jensen, J A

    2000-03-01

    Improvement in signal-to-noise ratio (SNR) and/or penetration depth can be achieved in medical ultrasound by using long coded waveforms, in a similar manner as in radar or sonar. However, the time-bandwidth product (TB) improvement, and thereby the SNR improvement, is considerably lower in medical ultrasound due to the lower available bandwidth. There is still room for about 20 dB of improvement in the SNR, which will yield a penetration depth of up to 20 cm at 5 MHz [M. O'Donnell, IEEE Trans. Ultrason. Ferroelectr. Freq. Contr., 39(3) (1992) 341]. The limited TB additionally yields unacceptably high range sidelobes. However, the frequency weighting from the ultrasonic transducer's bandwidth, although suboptimal, can be beneficial in sidelobe reduction. The purpose of this study is an experimental evaluation of the above considerations in a coded excitation ultrasound system. A coded excitation system based on a modified commercial scanner is presented. A predistorted FM signal is proposed in order to keep the resulting range sidelobes at acceptably low levels. The effect of the transducer is taken into account in the design of the compression filter. Intensity levels have been considered, and simulations on the expected improvement in SNR are also presented. Images of a wire phantom and clinical images have been taken with the coded system. The images show a significant improvement in penetration depth, and they preserve both axial resolution and contrast.
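    The SNR headroom quoted above follows directly from the time-bandwidth product: matched-filter compression of a coded pulse gains about 10*log10(T*B) dB over an impulse of the same peak pressure. A numeric sketch with illustrative values (the duration and bandwidth are assumptions, not the study's parameters):

```python
import math

def compression_gain_db(duration_s, bandwidth_hz):
    """Matched-filter SNR gain of a coded pulse: 10*log10(time-bandwidth product)."""
    return 10 * math.log10(duration_s * bandwidth_hz)

# e.g. a 20 us code across 4 MHz of usable transducer bandwidth
gain = compression_gain_db(20e-6, 4e6)
```

    TB = 80 gives about 19 dB, consistent with the roughly 20 dB figure the abstract cites; radar and sonar reach far larger gains because fractional bandwidth is not their bottleneck.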

  15. Penetration depth of photons in biological tissues from hyperspectral imaging in shortwave infrared in transmission and reflection geometries

    PubMed Central

    Zhang, Hairong; Salo, Daniel; Kim, David M.; Komarov, Sergey; Tai, Yuan-Chuan; Berezin, Mikhail Y.

    2016-01-01

    Measurement of photon penetration in biological tissues is a central theme in optical imaging. A great number of endogenous tissue factors such as absorption, scattering, and anisotropy affect the path of photons in tissue, making it difficult to predict the penetration depth at different wavelengths. Traditional studies evaluating photon penetration at different wavelengths are focused on tissue spectroscopy that does not take into account the heterogeneity within the sample. This is especially critical in the shortwave infrared, where the individual vibration-based absorption properties of the tissue molecules are affected by nearby tissue components. We have explored the depth penetration in biological tissues from 900 to 1650 nm using Monte Carlo simulation and a hyperspectral imaging system, with Michelson spatial contrast as a metric of light penetration. Chromatic-aberration-free hyperspectral images in transmission and reflection geometries were collected with a spectral resolution of 5.27 nm and a total acquisition time of 3 min. The relatively short recording time minimized artifacts from sample drying. Results from both transmission and reflection geometries consistently revealed that the wavelength range with the highest spatial contrast for deep tissue lies within 1300 to 1375 nm; however, in heavily pigmented tissue such as the liver, the range 1550 to 1600 nm is also prominent. PMID:27930773
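    The Michelson spatial contrast used as the penetration metric is straightforward to compute per spectral band. A minimal sketch for a hyperspectral cube (the bands-first array layout and the synthetic test data are assumptions for illustration):

```python
import numpy as np

def michelson_contrast(band):
    """Michelson contrast (Imax - Imin) / (Imax + Imin) of one band image."""
    b = np.asarray(band, dtype=float)
    return (b.max() - b.min()) / (b.max() + b.min())

def deepest_penetrating_band(cube, wavelengths_nm):
    """Wavelength whose band shows the highest spatial contrast
    in a (bands, H, W) hyperspectral cube."""
    contrasts = np.array([michelson_contrast(b) for b in cube])
    return wavelengths_nm[int(np.argmax(contrasts))], contrasts
```

    Bands where photons fail to traverse the sample wash out toward uniform intensity (contrast near zero), so scanning the contrast across wavelength picks out the transmission windows the study reports.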

  16. Direct local building inundation depth determination in 3-D point clouds generated from user-generated flood images

    NASA Astrophysics Data System (ADS)

    Griesbaum, Luisa; Marx, Sabrina; Höfle, Bernhard

    2017-07-01

    In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Particularly pluvial urban floods, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3-D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest as well as an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth is achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracies.

  17. In vivo quantitative imaging of point-like bioluminescent and fluorescent sources: Validation studies in phantoms and small animals post mortem

    NASA Astrophysics Data System (ADS)

    Comsa, Daria Craita

    2008-10-01

    There is a real need for improved small animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique to quantify bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4 % in the effective attenuation coefficient led to a 4 % error in the retrieved depth for source depths of up to 12 mm, while the error in the retrieved source strength increased from 5.5 % at 2 mm depth to 18 % at 12 mm depth. 
    Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10 mm could be estimated within 8 %, and the relative source strength within 20 %. For sources 14 mm deep, the inaccuracy in determining the relative source strength increased to 30 %. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15 % in the optical properties would give rise to errors of ±0.7 mm in the retrieved depth, and the source strength would be over- or under-estimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 from the value estimated in a controlled medium. 
    Nonetheless, similarly to the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.
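    The iterative depth/strength recovery described above can be sketched as a nonlinear least-squares fit of a point-source diffusion-style model to a radial surface profile. The model form, the mu_eff value, and the use of SciPy's general least_squares solver (standing in for the Levenberg-Marquardt implementation the thesis describes) are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

MU_EFF = 0.23  # effective attenuation coefficient, 1/mm (assumed)

def surface_signal(rho_mm, strength, depth_mm, mu_eff=MU_EFF):
    """Radial surface profile of a buried isotropic point source,
    diffusion-approximation style: S * exp(-mu_eff * r) / r."""
    r = np.sqrt(rho_mm**2 + depth_mm**2)
    return strength * np.exp(-mu_eff * r) / r

def fit_source(rho_mm, measured):
    """Recover (strength, depth) by iterative nonlinear least squares."""
    def residuals(p):
        return surface_signal(rho_mm, p[0], p[1]) - measured
    fit = least_squares(residuals, x0=[1.0, 5.0],
                        bounds=([0.0, 0.1], [np.inf, 30.0]))
    return fit.x
```

    The width of the radial profile constrains depth while its overall level constrains strength, which is why, as the thesis reports, errors in the assumed attenuation coefficient propagate into both retrieved quantities.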

  18. Motionless active depth from defocus system using smart optics for camera autofocus applications

    NASA Astrophysics Data System (ADS)

    Amin, M. Junaid; Riza, Nabeel A.

    2016-04-01

    This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.

  19. Faulting apparently related to the 1994 Northridge, California, earthquake and possible co-seismic origin of surface cracks in Potrero Canyon, Los Angeles County, California

    USGS Publications Warehouse

    Catchings, R.D.; Goldman, M.R.; Lee, W.H.K.; Rymer, M.J.; Ponti, D.J.

    1998-01-01

    Apparent southward-dipping, reverse-fault zones are imaged to depths of about 1.5 km beneath Potrero Canyon, Los Angeles County, California. Based on their orientation and projection to the surface, we suggest that the imaged fault zones are extensions of the Oak Ridge fault. Geologic mapping by others and correlations with seismicity studies suggest that the Oak Ridge fault is the causative fault of the 17 January 1994 Northridge earthquake (Northridge fault). Our seismically imaged faults may be among several faults that collectively comprise the Northridge thrust fault system. Unusually strong shaking in Potrero Canyon during the Northridge earthquake may have resulted from focusing of seismic energy or co-seismic movement along existing, related shallow-depth faults. The strong shaking produced ground-surface cracks and sand blows distributed along the length of the canyon. Seismic reflection and refraction images show that shallow-depth faults may underlie some of the observed surface cracks. The relationship between observed surface cracks and imaged faults indicates that some of the surface cracks may have developed from nontectonic alluvial movement, but others may be fault related. Immediately beneath the surface cracks, P-wave velocities are unusually low (<400 m/sec), and there are velocity anomalies consistent with a seismic reflection image of shallow faulting to depths of at least 100 m. On the basis of velocity data, we suggest that unconsolidated soils (<800 m/sec) extend to depths of about 15 to 20 m beneath our datum (<25 m below ground surface). The underlying rocks range in velocity from about 1000 to 5000 m/sec in the upper 100 m. This study illustrates the utility of high-resolution seismic imaging in assessing local and regional seismic hazards.

  20. Estimation and correction of produced light from prompt gamma photons on luminescence imaging of water for proton therapy dosimetry

    NASA Astrophysics Data System (ADS)

    Yabe, Takuya; Komori, Masataka; Toshito, Toshiyuki; Yamaguchi, Mitsutaka; Kawachi, Naoki; Yamamoto, Seiichi

    2018-02-01

    Although the luminescence images of water during proton-beam irradiation using a cooled charge-coupled device camera showed almost the same ranges of proton beams as those measured by an ionization chamber, the depth profiles showed lower Bragg peak intensities than those measured by an ionization chamber. In addition, a broad optical baseline signal was observed in depths that exceed the depth of the Bragg peak. We hypothesize that this broad baseline signal originates from the interaction of proton-induced prompt gamma photons with water. These prompt gamma photons interact with water to form high-energy Compton electrons, which may cause luminescence or Cherenkov emission from depths exceeding the location of the Bragg peak. To clarify this idea, we measured the luminescence images of water during the irradiations of protons in water with minimized parallax errors, and also simulated the produced light by the interactions of prompt gamma photons with water. We corrected the measured depth profiles of the luminescence images by subtracting the simulated distributions of the produced light by the interactions of prompt gamma photons in water. Corrections were also conducted using the estimated depth profiles of the light of the prompt gamma photons, as obtained from the off-beam areas of the luminescence images of water. With these corrections, we successfully obtained depth profiles that have almost identical distributions as the simulated dose distributions for protons. The percentage relative height of the Bragg peak with corrections to that of the simulation data increased to 94% from 80% without correction. Also, the percentage relative offset heights of the deeper part of the Bragg peak with corrections decreased to 0.2%-0.4% from 4% without correction. These results indicate that the luminescence imaging of water has potential for the dose distribution measurements for proton therapy dosimetry.

  1. Color image guided depth image super resolution using fusion filter

    NASA Astrophysics Data System (ADS)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras are currently playing an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images, whereas color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm that takes an HR color image as the guide and an LR depth image as input. We use a fusion filter combining a guided filter and an edge-based joint bilateral filter to produce the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better-quality HR depth images both numerically and visually.
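    The general idea of color-guided depth upsampling can be sketched with a plain joint bilateral filter, where the HR guide (here grayscale, values in [0,1]) supplies the range weights and the LR depth supplies the values. This is a minimal illustration under assumed parameters, not the authors' exact guided-filter-plus-edge-based fusion:

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide, scale, sigma_s=2.0, sigma_r=0.1, radius=2):
    """Upsample a low-res depth map to the guide's resolution, weighting
    each neighbor by spatial proximity and by guide-intensity similarity."""
    h, w = guide.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    # clamp both LR and HR coordinates to valid ranges
                    yl = min(max((y + dy) // scale, 0), depth_lr.shape[0] - 1)
                    xl = min(max((x + dx) // scale, 0), depth_lr.shape[1] - 1)
                    yg = min(max(y + dy, 0), h - 1)
                    xg = min(max(x + dx, 0), w - 1)
                    ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    wr = np.exp(-((guide[y, x] - guide[yg, xg]) ** 2) / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lr[yl, xl]
                    den += ws * wr
            out[y, x] = num / den
    return out
```

    Near a guide edge the range weight suppresses depth samples from the far side, so depth discontinuities stay sharp instead of being blurred as in plain bicubic upsampling.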

  2. Annular phased-array high-intensity focused ultrasound device for image-guided therapy of uterine fibroids.

    PubMed

    Held, Robert Thomas; Zderic, Vesna; Nguyen, Thuc Nghi; Vaezy, Shahram

    2006-02-01

    An ultrasound (US), image-guided high-intensity focused ultrasound (HIFU) device was developed for noninvasive ablation of uterine fibroids. The HIFU device was an annular phased array, with a focal depth range of 30-60 mm, a natural focus of 50 mm, and a resonant frequency of 3 MHz. The in-house control software was developed to operate the HIFU electronics drive system for inducing tissue coagulation at different distances from the array. A novel imaging algorithm was developed to minimize the HIFU-induced noise in the US images. The device was able to produce lesions in bovine serum albumin-embedded polyacrylamide gels and excised pig liver. The lesions could be seen on the US images as hyperechoic regions. Depths ranging from 30 to 60 mm were sonicated at acoustic intensities of 4100 and 6100 W/cm2 for 15 s each, with the latter producing average lesion volumes at least 63% larger than the former. Tissue sonication patterns that began distal to the transducer produced longer lesions than those that began proximally. The variation in lesion dimensions indicates the possible development of HIFU protocols that increase HIFU throughput and shorten tumor treatment times.

  3. Robust stereo matching with trinary cross color census and triple image-based refinements

    NASA Astrophysics Data System (ADS)

    Chang, Ting-An; Lu, Xiao; Yang, Jar-Ferr

    2017-12-01

    For future 3D TV broadcasting systems and navigation applications, accurate stereo matching is necessary to precisely estimate the depth map from two separated cameras. In this paper, we first suggest a trinary cross color (TCC) census transform, which helps to achieve an accurate disparity raw matching cost at low computational cost. A two-pass cost aggregation (TPCA) is formed to compute the aggregation cost, and the disparity map is then obtained by a range winner-take-all (RWTA) process and a white-hole-filling procedure. To further enhance accuracy, a range left-right checking (RLRC) method is proposed to classify the results as correct, mismatched, or occluded pixels. Then, image-based refinements for the mismatched and occluded pixels are proposed to refine the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves considerably accurate disparity maps with reasonable computation cost.
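    The trinary census idea (each neighbor coded as brighter, similar, or darker than the center, with the raw matching cost counting digit mismatches) can be sketched as below; the tolerance eps and the wrap-around borders from np.roll are illustrative simplifications, not the exact TCC definition from the paper:

```python
import numpy as np

def trinary_census(img, radius=1, eps=2):
    """Ternary census descriptor: each of the 8 neighbors in a
    (2*radius+1)^2 window is coded -1/0/+1 relative to the center pixel,
    within an intensity tolerance eps (assumed parameter)."""
    codes = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, 0), dx, 1)
            diff = shifted.astype(int) - img.astype(int)
            codes.append(np.where(diff > eps, 1, np.where(diff < -eps, -1, 0)))
    return np.stack(codes, axis=-1)  # shape (h, w, 8)

def matching_cost(census_left, census_right, d):
    """Per-pixel raw cost for disparity d: number of mismatched ternary
    digits between left pixel x and right pixel x-d."""
    shifted = np.roll(census_right, d, axis=1)
    return (census_left != shifted).sum(axis=-1)
```

    Because the descriptor depends only on local intensity order, the cost is robust to radiometric differences between the two cameras.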

  4. Salton Seismic Imaging Project Line 5—the San Andreas Fault and Northern Coachella Valley Structure, Riverside County, California

    NASA Astrophysics Data System (ADS)

    Rymer, M. J.; Fuis, G.; Catchings, R. D.; Goldman, M.; Tarnowski, J. M.; Hole, J. A.; Stock, J. M.; Matti, J. C.

    2012-12-01

    The Salton Seismic Imaging Project (SSIP) is a large-scale, active- and passive-source seismic project designed to image the San Andreas Fault (SAF) and the adjacent basins (Imperial and Coachella Valleys) in southern California. Here, we focus on SSIP Line 5, one of four 2-D NE-SW-oriented seismic profiles that were acquired across the Coachella Valley. The 38-km-long SSIP-Line-5 seismic profile extends from the Santa Rosa Ranges to the Little San Bernardino Mountains and crosses both strands of the SAF, the Mission Creek (MCF) and Banning (BF) strands, near Palm Desert. Data for Line 5 were generated from nine buried explosive sources (most spaced about 2 to 8 km apart) and were recorded on approximately 281 Texan seismographs (average spacing 138 m). First-arrival refractions were used to develop a refraction tomographic velocity image of the upper crust along the seismic profile. The seismic data were also stacked and migrated to develop low-fold reflection images of the crust. From the surface to about 8 km depth, P-wave velocities range from about 2 km/s to more than 7.5 km/s, with the lowest velocities within a well-defined (~2-km-deep, 15-km-wide) basin (< 4 km/s), and the highest velocities below the transition from the Coachella Valley to the Santa Rosa Ranges on the southwest and within the Little San Bernardino Mountains on the northeast. The MCF and BF strands of the SAF bound an approximately 2.5-km-wide horst-type structure on the northeastern side of the Coachella Valley, beneath which the upper crust is characterized by a pronounced low-velocity zone that extends to the bottom of the velocity image. Rocks within the low-velocity zone have significantly lower velocities than those to the northeast and the southwest at the same depths. Conversely, the velocities of rocks on both sides of the Coachella Valley are greater than 7 km/s at depths exceeding about 4 km. 
The relatively narrow zone of shallow high-velocity rocks between the surface traces of the MCF and BF strands is associated with a zone of uplifted strata. Along SSIP Line 5, we infer that the MCF and BF strands are steeply dipping and merge at about 2 km depth. We base our interpretation on a prominent basement low-velocity zone (fault zone) that is centered southwest of the MCF and BF strands and extends to at least 8 km depth.

  5. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling

    PubMed Central

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-01-01

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth-measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, a precise calibration for RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the two cameras is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined rejection method for false feature matches is introduced by combining the depth information and the initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale-ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028

  6. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.

    PubMed

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-09-27

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth-measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, a precise calibration for RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the two cameras is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined rejection method for false feature matches is introduced by combining the depth information and the initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale-ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.

  7. A wave equation migration method for receiver function imaging: 2. Application to the Japan subduction zone

    NASA Astrophysics Data System (ADS)

    Chen, Ling; Wen, Lianxing; Zheng, Tianyu

    2005-11-01

    The newly developed wave equation poststack depth migration method for receiver function imaging is applied to study the subsurface structures of the Japan subduction zone using the Fundamental Research on Earthquakes and Earth's Interior Anomalies (FREESIA) broadband data. Three profiles are chosen in the subsurface imaging, two in northeast (NE) Japan to study the subducting Pacific plate and one in southwest (SW) Japan to study the Philippine Sea plate. The descending Pacific plate in NE Japan is well imaged within a depth range of 50-150 km. The slab image exhibits a slightly steeper dip angle (˜32°) in the south than in the north (˜27°), although the general characteristics between the two profiles in NE Japan are similar. The imaged Philippine Sea plate in eastern SW Japan, in contrast, exhibits a much shallower subduction angle (˜19°) and is only identifiable at the uppermost depths of no more than 60 km. Synthetic tests indicate that the top 150 km of the migrated images of the Pacific plate is well resolved by our seismic data, but the resolution of the deep part of the slab images becomes poor due to the limited data coverage. Synthetic tests also suggest that the breakdown of the Philippine Sea plate at shallow depths reflects the real structural features of the subduction zone, rather than being caused by insufficient coverage of data. Comparative studies on both synthetics and real data images show the possibility of retrieval of fine-scale structures from high-frequency contributions if high-frequency noise can be effectively suppressed and a small bin size can be used in future studies. The derived slab geometry and image feature also appear to have relatively weak dependence on overlying velocity structure. The observed seismicity in the region confirms the geometries inferred from the migrated images for both subducting plates. 
Moreover, the deep extent of the Pacific plate image and the shallow breakdown of the Philippine Sea plate image are observed to correlate well with the depth extent of the seismicity beneath NE and SW Japan. Such a correlation supports the inference that the specific appearance of slabs and intermediate-depth earthquakes are a consequence of temperature-dependent dehydration induced metamorphism occurring in the hydrated descending oceanic crust.

  8. Video-rate confocal microscopy for single-molecule imaging in live cells and superresolution fluorescence imaging.

    PubMed

    Lee, Jinwoo; Miyanaga, Yukihiro; Ueda, Masahiro; Hohng, Sungchul

    2012-10-17

    There is no confocal microscope optimized for single-molecule imaging in live cells and superresolution fluorescence imaging. By combining the swiftness of the line-scanning method and the high sensitivity of wide-field detection, we have developed a, to our knowledge, novel confocal fluorescence microscope with a good optical-sectioning capability (1.0 μm), fast frame rates (<33 fps), and superior fluorescence detection efficiency. Full compatibility of the microscope with conventional cell-imaging techniques allowed us to do single-molecule imaging with a great ease at arbitrary depths of live cells. With the new microscope, we monitored diffusion motion of fluorescently labeled cAMP receptors of Dictyostelium discoideum at both the basal and apical surfaces and obtained superresolution fluorescence images of microtubules of COS-7 cells at depths in the range 0-85 μm from the surface of a coverglass. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  9. PlenoPatch: Patch-Based Plenoptic Image Manipulation.

    PubMed

    Zhang, Fang-Lue; Wang, Jue; Shechtman, Eli; Zhou, Zi-Ye; Shi, Jia-Xin; Hu, Shi-Min

    2017-05-01

    Patch-based image synthesis methods have been successfully applied for various editing tasks on still images, videos and stereo pairs. In this work we extend patch-based synthesis to plenoptic images captured by consumer-level lenselet-based devices for interactive, efficient light field editing. In our method the light field is represented as a set of images captured from different viewpoints. We decompose the central view into different depth layers, and present it to the user for specifying the editing goals. Given an editing task, our method performs patch-based image synthesis on all affected layers of the central view, and then propagates the edits to all other views. Interaction is done through a conventional 2D image editing user interface that is familiar to novice users. Our method correctly handles object boundary occlusion with semi-transparency, thus can generate more realistic results than previous methods. We demonstrate compelling results on a wide range of applications such as hole-filling, object reshuffling and resizing, changing object depth, light field upscaling and parallax magnification.

  10. Cytology 3D structure formation based on optical microscopy images

    NASA Astrophysics Data System (ADS)

    Pronichev, A. N.; Polyakov, E. V.; Shabalova, I. P.; Djangirova, T. V.; Zaitsev, S. M.

    2017-01-01

    The article is devoted to optimizing the imaging parameters for biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for forming images of virtual preparations is proposed. The optimum number of layers for scanning the object in depth, consistent with holistic perception when switching between layers, was determined from the results of the experiment.

  11. Reflection Acoustic Microscopy for Micro-NDE.

    DTIC Science & Technology

    1983-02-01

    Keywords: Nondestructive Evaluation; Acoustic Microscopy; Subsurface Imaging. ... subsurface imaging is presented, and it is shown that with such lenses it is possible to obtain good focusing performance over a wide depth range, typically a few millimeters at 50 MHz. A major problem in subsurface imaging derives from the large reflection obtained from the surface, and the small amount

  12. Extended depth of focus adaptive optics spectral domain optical coherence tomography

    PubMed Central

    Sasaki, Kazuhiro; Kurokawa, Kazuhiro; Makita, Shuichi; Yasuno, Yoshiaki

    2012-01-01

    We present an adaptive optics spectral domain optical coherence tomography (AO-SDOCT) with a long focal range by active phase modulation of the pupil. A long focal range is achieved by introducing AO-controlled third-order spherical aberration (SA). The property of SA and its effects on focal range are investigated in detail using the Huygens-Fresnel principle, beam profile measurement and OCT imaging of a phantom. The results indicate that the focal range is extended by applying SA, and the direction of extension can be controlled by the sign of applied SA. Finally, we demonstrated in vivo human retinal imaging by altering the applied SA. PMID:23082278

  13. Extended depth of focus adaptive optics spectral domain optical coherence tomography.

    PubMed

    Sasaki, Kazuhiro; Kurokawa, Kazuhiro; Makita, Shuichi; Yasuno, Yoshiaki

    2012-10-01

    We present an adaptive optics spectral domain optical coherence tomography (AO-SDOCT) with a long focal range by active phase modulation of the pupil. A long focal range is achieved by introducing AO-controlled third-order spherical aberration (SA). The property of SA and its effects on focal range are investigated in detail using the Huygens-Fresnel principle, beam profile measurement and OCT imaging of a phantom. The results indicate that the focal range is extended by applying SA, and the direction of extension can be controlled by the sign of applied SA. Finally, we demonstrated in vivo human retinal imaging by altering the applied SA.

  14. Development of collision avoidance system for useful UAV applications using image sensors with laser transmitter

    NASA Astrophysics Data System (ADS)

    Cheong, M. K.; Bahiki, M. R.; Azrad, S.

    2016-10-01

    The main goal of this study is to demonstrate the approach of achieving collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high-definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and the control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of the OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithm. In the results, the UAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance, and performance was better with obstacles of dull surfaces than with shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach is suitable for short-range collision avoidance applications.
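    The "common triangulation" used to turn the tracked laser spot's stereo disparity into a distance is the standard pinhole relation Z = fB/d. A minimal sketch (the numeric constants in the test are ours, not the QUAV cameras' calibration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: depth Z = f * B / d, with the focal
    length f in pixels, baseline B in meters, and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```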

  15. A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light

    PubMed Central

    Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning

    2017-01-01

    Depth information has been used in many fields because of its low cost and easy availability, since the Microsoft Kinect was released. However, the Kinect and Kinect-like RGB-D sensors show limited performance in certain applications and place high demands on accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect, and two infrared cameras located on both sides of the laser projector, to obtain higher spatial resolution depth information. We apply the block-matching algorithm to estimate the disparity. To improve the spatial resolution, we reduce the size of matching blocks, but smaller matching blocks generate lower matching precision. To address this problem, we combine two matching modes (binocular mode and monocular mode) in the disparity estimation process. Experimental results show that our method can obtain higher spatial resolution depth without loss of the quality of the range image, compared with the Kinect. Furthermore, our algorithm is implemented on a low-cost hardware platform, and the system can support the resolution of 1280 × 960, and up to a speed of 60 frames per second, for depth image sequences. PMID:28397759
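    The block-matching step can be illustrated with a brute-force sum-of-absolute-differences (SAD) matcher over a square window; this is a generic sketch, not the paper's hardware pipeline with its combined binocular/monocular modes:

```python
import numpy as np

def block_match(left, right, max_disp, block=5):
    """For each left-image pixel, pick the disparity d (0..max_disp) that
    minimizes the SAD between block x block windows in the two images.
    Border pixels outside the valid search area are left at disparity 0."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(int)
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1].astype(int)).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

    Shrinking the block raises spatial resolution but makes the SAD minimum noisier, which is exactly the trade-off the paper's two-mode matching addresses.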

  16. SU-E-T-108: An Investigation of Cerenkov Light Production in the Exradin W1 Scintillator Under Various Measurement Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simiele, E; Culberson, W

    2015-06-15

    Purpose: To investigate the effects of depth, fiber-optic cable bends, and incident radiation angle on Cerenkov production in the Standard Imaging Exradin W1. Methods: Measurements were completed using a Varian Clinac 21EX linear accelerator with an Exradin W1 scintillator as well as a cable-only scintillator (no scintillation material) to isolate the Cerenkov signal. The effects of cable bend radius and location were investigated by bending the fiber-optic cable into a circle with radii ranging from 1.0 to 10.8 cm and positioning the center of the coil at distances ranging from 10.0 to 175.0 cm from the photodiode. The effects of depth and incident radiation angle were investigated by performing measurements in water at depths ranging from 1.0 cm to 25.0 cm and angles ranging from 0° to 80°. Eclipse treatment-planning software was utilized to ensure a consistent dose was delivered to the W1 regardless of depth or angle. Results: Measured signal in both channels of the cable-only scintillator decreased as the bend radius decreased and as the distance between the bend and photodiode increased. A fiber bend of 1.0 cm radius produced a 17.1% decrease in the green channel response in the cable-only scintillator. The effect of depth was less severe; a maximum increase of 6.6% in the green channel response was observed at a depth of 25.0 cm in the W1. In the angular dependence investigation, the signal in both channels of the W1 peaked at an angle of 40°, which is in agreement with the nominal Cerenkov emission angle of 45°. Conclusion: The green channel response in the W1 (mainly Cerenkov signal) varied with depth, fiber-optic cable bends, and incident radiation angle. Fully characterizing Cerenkov production is essential to ensure it is properly accounted for in scintillator measurements. Research funding and materials received by Standard Imaging, Inc. (Middleton, WI).

  17. Extreme depth-of-field intraocular lenses

    NASA Astrophysics Data System (ADS)

    Baker, Kenneth M.

    1996-05-01

    A new technology brings the full-aperture, single-vision pseudophakic eye's effective hyperfocal distance within the half-meter range. A modulated-index IOL containing a subsurface zeroth-order coherent microlenticular mosaic defined by an index gradient adds a normalizing function to the vergences or parallactic angles of incoming light rays subtended from field object points and redirects them, in the case of near-field images, to that of far-field images. Along with a scalar reduction of the IOL's linear focal range, this results in an extreme depth of field with a narrow depth of focus and avoids the focal split-up, halo, and inherent reduction in contrast of multifocal IOLs. A high microlenticular spatial frequency, while still retaining an anisotropic medium, results in nearly total zeroth-order propagation throughout the visible spectrum. The curved lens surfaces still provide most of the refractive power of the IOL, and the unique holographic fabrication technology is especially suitable not only for IOLs but also for contact lenses, artificial corneas, and miniature lens elements for cameras and other optical devices.

  18. Differential standard deviation of log-scale intensity based optical coherence tomography angiography.

    PubMed

    Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D

    2017-12-01

    In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) is presented for calculating microvascular images of human skin. The DSDLI algorithm calculates the variance in difference images of two consecutive log-scale intensity based structural images from the same position along the depth direction to contrast blood flow. The en face microvascular images were then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in an improvement in spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was verified by both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove bulk motion noise, where a 2D Fourier transform was utilized to generate new images with spatial interval equal to half of the distance between two pixels in both the fast-scanning and depth directions. The SNRs of signals of flowing particles were improved by 7.3 dB and 6.8 dB on average in phantom and in vivo experiments, respectively, while the average spatial resolution of images of in vivo blood vessels was increased by 21%. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
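    One plausible reading of the DSDLI computation, sketched with numpy; the axes over which the difference and the standard deviation are taken are our interpretation of the abstract, not the authors' published code:

```python
import numpy as np

def dsdli_enface(log_frames, z0, z1):
    """log_frames: (n_repeats, n_depth, n_x) log-scale structural B-scans
    acquired at the same position. Static tissue yields near-identical
    repeats (differential ~ 0); flowing blood decorrelates the speckle,
    so its differential has a large spread."""
    diff = np.diff(log_frames, axis=0)          # difference images of consecutive repeats
    return diff[:, z0:z1, :].std(axis=(0, 1))   # std over the chosen depth range -> en face profile
```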

  19. Subsurface geometry of the San Andreas fault in southern California: Results from the Salton Seismic Imaging Project (SSIP) and strong ground motion expectations

    USGS Publications Warehouse

    Fuis, Gary S.; Bauer, Klaus; Goldman, Mark R.; Ryberg, Trond; Langenheim, Victoria; Scheirer, Daniel S.; Rymer, Michael J.; Stock, Joann M.; Hole, John A.; Catchings, Rufus D.; Graves, Robert; Aagaard, Brad T.

    2017-01-01

    The San Andreas fault (SAF) is one of the most studied strike‐slip faults in the world; yet its subsurface geometry is still uncertain in most locations. The Salton Seismic Imaging Project (SSIP) was undertaken to image the structure surrounding the SAF and also its subsurface geometry. We present SSIP studies at two locations in the Coachella Valley of the northern Salton trough. On our line 4, a fault‐crossing profile just north of the Salton Sea, sedimentary basin depth reaches 4 km southwest of the SAF. On our line 6, a fault‐crossing profile at the north end of the Coachella Valley, sedimentary basin depth is ∼2–3 km and centered on the central, most active trace of the SAF. Subsurface geometry of the SAF and nearby faults along these two lines is determined using a new method of seismic‐reflection imaging, combined with potential‐field studies and earthquakes. Below a 6–9 km depth range, the SAF dips ∼50°–60° NE, and above this depth range it dips more steeply. Nearby faults are also imaged in the upper 10 km, many of which dip steeply and project to mapped surface fault traces. These secondary faults may join the SAF at depths below about 10 km to form a flower‐like structure. In Appendix D, we show that rupture on a northeast‐dipping SAF, using a single plane that approximates the two dips seen in our study, produces shaking that differs from shaking calculated for the Great California ShakeOut, for which the southern SAF was modeled as vertical in most places: shorter‐period (T<1 s) shaking is increased locally by up to a factor of 2 on the hanging wall and is decreased locally by up to a factor of 2 on the footwall, compared to shaking calculated for a vertical fault.

  20. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how from the recorded raw image a depth map can be estimated. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated by using a method which is already known from the calibration of traditional cameras. For the calibration of the depth map two new model based methods, which make use of the projection concept of the camera are developed. These new methods are compared to a common curve fitting approach, which is based on Taylor-series-approximation. Both model based methods show significant advantages compared to the curve fitting method. They need less reference points for calibration than the curve fitting method and moreover, supply a function which is valid in excess of the range of calibration. In addition the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and is compared to the analytical evaluation.
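The abstract's key claim — that the model-based calibration remains valid beyond the calibration range, unlike the Taylor-series curve fit — can be illustrated with a small numerical sketch. The projective relation z = b/(c − v) and all parameter values below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def true_depth(v):
    # Illustrative ground truth: a projective relation between the virtual
    # depth v estimated by a focused plenoptic camera and the metric object
    # distance z. Parameters are invented for this sketch.
    return 5.0 / (12.0 - v)

# Reference points inside the calibration range v = 2 .. 7.
v_cal = np.linspace(2.0, 7.0, 12)
z_cal = true_depth(v_cal)

# Curve-fitting approach: a Taylor-series-style polynomial in v.
poly = np.polyfit(v_cal, z_cal, 2)

# Model-based approach: fit z = b / (c - v). Rewriting as z*c - b = z*v
# makes the problem linear in the unknowns (c, b).
A = np.column_stack([z_cal, -np.ones_like(z_cal)])
(c_fit, b_fit), *_ = np.linalg.lstsq(A, z_cal * v_cal, rcond=None)

# Evaluate both calibrations well outside the calibrated range.
v_test = 10.0
err_poly = abs(np.polyval(poly, v_test) - true_depth(v_test))
err_model = abs(b_fit / (c_fit - v_test) - true_depth(v_test))
print(err_model < err_poly)  # -> True: the model extrapolates far better
```

The polynomial matches well inside [2, 7] but diverges from the projective curve outside it, which is the behaviour the authors report for the curve-fitting method.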

  1. Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2018-05-01

    To present and evaluate a straightforward implementation of a marker-less, respiratory motion-tracking process utilizing the Kinect v2 camera as a gating tool during 4DCT or during radiotherapy treatments. Utilizing the depth sensor on the Kinect as well as author-written C# code, respiratory motion of a subject was tracked by recording depth values obtained at user-selected points on the subject, with each point representing one pixel on the depth image. As a patient breathes, specific anatomical points on the chest/abdomen will move slightly within the depth image across pixels. By tracking how depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained from the changing depth values of the selected pixel. Tracking these values was implemented via a marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare respiratory traces obtained by each, using two different subjects. Analysis of the depth information from the Kinect for purposes of phase- and amplitude-based binning correlated well with the RPM and Anzai systems. Interquartile range (IQR) values were obtained comparing the times correlated with specific amplitude and phase percentages against each product. The IQR time spans indicated the Kinect would measure specific percentage values within 0.077 s for Subject 1 and 0.164 s for Subject 2 when compared to values obtained with RPM or Anzai. For 4DCT scans, these times correlate to less than 1 mm of couch movement and would create an offset of half an acquired slice. By tracking depth values of user-selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized utilizing the Kinect, with results comparable to those of the Varian RPM and Anzai belt. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
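The pixel-tracking idea — reading the depth value of a fixed pixel over time rather than following an anatomical point — can be sketched as follows. The frames here are synthetic (a flat "chest" with a sinusoidal breathing excursion); all values are illustrative:

```python
import numpy as np

def pixel_trace(frames, row, col):
    """Respiratory trace as the depth value of one selected pixel over time.

    frames: array of shape (n_frames, height, width) of depth values (mm).
    The anatomical point is NOT tracked; the pixel location stays fixed,
    and the changing depth at that pixel encodes chest/abdomen motion.
    """
    return frames[:, row, col]

# Synthetic example: a surface at 800 mm with a 5 mm sinusoidal breathing
# excursion at 0.25 Hz, sampled at 30 frames per second for 10 s.
fps, n = 30, 300
t = np.arange(n) / fps
frames = np.full((n, 48, 64), 800.0)
frames += 5.0 * np.sin(2 * np.pi * 0.25 * t)[:, None, None]

trace = pixel_trace(frames, row=24, col=32)
print(round(trace.max() - trace.min(), 1))  # -> 10.0 mm peak-to-peak
```

Phase- and amplitude-based binning for gating would then operate directly on `trace`, exactly as it would on an RPM or Anzai signal.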

  2. Enhancing swimming pool safety by the use of range-imaging cameras

    NASA Astrophysics Data System (ADS)

    Geerardyn, D.; Boulanger, S.; Kuijk, M.

    2015-05-01

    Drowning causes the death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization. Currently, most swimming pools rely only on lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are being integrated; however, these systems have to be mounted underwater, mostly as a replacement for the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing swimmers at the surface to be distinguished from drowning people underwater, while keeping the large field-of-view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that water is transparent for visible light but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbations. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera, and our own Time-of-Flight system, which uses pulsed Time-of-Flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of reflections from a partially reflecting surface. Combining a post-acquisition filter that compensates for the perturbations with a light source of shorter wavelength to enlarge the depth range can improve on the current commercial cameras. As a result, we conclude that low-cost range imagers can increase swimming pool safety when augmented with a post-processing filter and a different light source.
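One first-order effect a ceiling-mounted TOF camera must handle underwater is the slower propagation of light in water, which makes the raw range overestimate the geometric distance below the surface. The abstract does not spell this correction out, so the sketch below is an assumption: a flat-surface, nadir-view geometry with group index ≈ 1.33, ignoring the surface ripple the paper's filter addresses:

```python
# Hedged sketch: splitting a raw TOF range into in-air and in-water parts.
N_WATER = 1.33  # approximate group index of water for visible light

def corrected_depth(tof_range_m, air_path_m):
    """Correct a raw TOF range for the slower speed of light in water.

    The camera converts round-trip time to distance assuming propagation
    at c; inside water light is slower by N_WATER, so the underwater
    portion of the raw range must be divided by the index.
    """
    underwater_raw = tof_range_m - air_path_m
    return air_path_m + underwater_raw / N_WATER

# Camera 3.0 m above the surface, object reported at a raw range of 4.33 m:
print(round(corrected_depth(4.33, 3.0), 2))  # -> 4.0 (1.0 m underwater)
```

In practice the air path per pixel would come from detecting the water surface itself, and the ripple compensation described in the abstract would be applied on top of this correction.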

  3. Fluorescence tomography characterization for sub-surface imaging with protoporphyrin IX

    PubMed Central

    Kepshire, Dax; Davis, Scott C.; Dehghani, Hamid; Paulsen, Keith D.; Pogue, Brian W.

    2009-01-01

    Optical imaging of fluorescent objects embedded in a tissue simulating medium was characterized using non-contact based approaches to fluorescence remittance imaging (FRI) and sub-surface fluorescence diffuse optical tomography (FDOT). Using Protoporphyrin IX as a fluorescent agent, experiments were performed on tissue phantoms comprised of typical in-vivo tumor to normal tissue contrast ratios, ranging from 3.5:1 up to 10:1. It was found that tomographic imaging was able to recover interior inclusions with high contrast relative to the background; however, simple planar fluorescence imaging provided a superior contrast to noise ratio. Overall, FRI performed optimally when the object was located on or close to the surface and, perhaps most importantly, FDOT was able to recover specific depth information about the location of embedded regions. The results indicate that an optimal system for localizing embedded fluorescent regions should combine fluorescence reflectance imaging for high sensitivity and sub-surface tomography for depth detection, thereby allowing more accurate localization in all three directions within the tissue. PMID:18545571

  4. Depth-resolved imaging of colon tumor using optical coherence tomography and fluorescence laminar optical tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tang, Qinggong; Frank, Aaron; Wang, Jianting; Chen, Chao-wei; Jin, Lily; Lin, Jon; Chan, Joanne M.; Chen, Yu

    2016-03-01

    Early detection of neoplastic changes remains a critical challenge in clinical cancer diagnosis and treatment. Many cancers arise from epithelial layers such as those of the gastrointestinal (GI) tract. Current standard endoscopic technology is unable to detect those subsurface lesions. Since cancer development is associated with both morphological and molecular alterations, imaging technologies that can quantitatively image a tissue's morphological and molecular biomarkers and assess the depth extent of a lesion in real time, without the need for tissue excision, would be a major advance in GI cancer diagnostics and therapy. In this research, we investigated the feasibility of multi-modal optical imaging, combining high-resolution optical coherence tomography (OCT) with depth-resolved, high-sensitivity fluorescence laminar optical tomography (FLOT), for structural and molecular imaging. APC (adenomatous polyposis coli) mouse models were imaged using OCT and FLOT, and the correlated histopathological diagnosis was obtained. Quantitative structural (the scattering coefficient) and molecular (fluorescence intensity) imaging parameters from OCT and FLOT images were developed for multi-parametric analysis. This multi-modal imaging method demonstrated the feasibility of more accurate diagnosis, with 87.4% sensitivity and 87.3% specificity, yielding the largest area under the receiver operating characteristic (ROC) curve. This project results in a new non-invasive multi-modal imaging platform for improved GI cancer detection, which is expected to have a major impact on the detection, diagnosis, and characterization of GI cancers, as well as a wide range of epithelial cancers.

  5. Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarthy, Aongus; Collins, Robert J.; Krichel, Nils J.

    2009-11-10

    We describe a scanning time-of-flight system which uses the time-correlated single-photon counting technique to produce three-dimensional depth images of distant, noncooperative surfaces when these targets are illuminated by a kHz to MHz repetition rate pulsed laser source. The data for the scene are acquired using a scanning optical system and an individual single-photon detector. Depth images have been successfully acquired with centimeter xyz resolution, in daylight conditions, for low-signature targets in field trials at distances of up to 325 m using an output illumination with an average optical power of less than 50 μW.
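A minimal sketch of how a TCSPC timing histogram is converted to range: the peak of the photon-arrival histogram gives the round-trip time, and d = c·t/2. The bin width, background level, and crude peak estimator below are assumptions; real systems fit the instrument response function rather than taking the maximum bin:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_histogram(bin_edges_s, counts):
    """Estimate target range from a TCSPC timing histogram.

    Takes the center of the maximum-count bin (a crude estimator) and
    converts round-trip time to one-way distance: d = c * t / 2.
    """
    centers = 0.5 * (bin_edges_s[:-1] + bin_edges_s[1:])
    t_peak = centers[np.argmax(counts)]
    return C * t_peak / 2.0

# Synthetic histogram: 1 ps bins around the ~2.168 us round trip a 325 m
# target would produce, plus uniform Poisson background counts.
rng = np.random.default_rng(0)
edges = np.arange(2.165e-6, 2.172e-6, 1e-12)
counts = rng.poisson(2.0, edges.size - 1)          # daylight background
true_t = 2 * 325.0 / C                             # round-trip time
counts[np.argmin(np.abs(edges - true_t))] += 500   # return-photon peak
print(round(range_from_histogram(edges, counts), 2))  # -> 325.0
```

With 1 ps bins the quantization alone limits depth error to well under a millimeter, consistent with the centimeter-level resolution reported at 325 m.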

  6. Automatic laser welding and milling with in situ inline coherent imaging.

    PubMed

    Webster, P J L; Wright, L G; Ji, Y; Galbraith, C M; Kinross, A W; Van Vlack, C; Fraser, J M

    2014-11-01

    Although new affordable high-power laser technologies enable many processing applications in science and industry, depth control remains a serious technical challenge. In this Letter we show that inline coherent imaging (ICI), with line rates up to 312 kHz and microsecond-duration capture times, is capable of directly measuring laser penetration depth, in a process as violent as kW-class keyhole welding. We exploit ICI's high speed, high dynamic range, and robustness to interference from other optical sources to achieve automatic, adaptive control of laser welding, as well as ablation, achieving 3D micron-scale sculpting in vastly different heterogeneous biological materials.

  7. Time-of-flight range imaging for underwater applications

    NASA Astrophysics Data System (ADS)

    Merbold, Hannes; Catregn, Gion-Pol; Leutenegger, Tobias

    2018-02-01

    Precise and low-cost range imaging in underwater settings with object distances on the meter level is demonstrated. This is addressed through silicon-based time-of-flight (TOF) cameras operated with light emitting diodes (LEDs) at visible, rather than near-IR wavelengths. We find that the attainable performance depends on a variety of parameters, such as the wavelength dependent absorption of water, the emitted optical power and response times of the LEDs, or the spectral sensitivity of the TOF chip. An in-depth analysis of the interplay between the different parameters is given and the performance of underwater TOF imaging using different visible illumination wavelengths is analyzed.

  8. Depth profiling and imaging capabilities of an ultrashort pulse laser ablation time of flight mass spectrometer

    PubMed Central

    Cui, Yang; Moore, Jerry F.; Milasinovic, Slobodan; Liu, Yaoming; Gordon, Robert J.; Hanley, Luke

    2012-01-01

    An ultrafast laser ablation time-of-flight mass spectrometer (AToF-MS) and associated data acquisition software that permits imaging at micron-scale resolution and sub-micron-scale depth profiling are described. The ion funnel-based source of this instrument can be operated at pressures ranging from 10−8 to ∼0.3 mbar. Mass spectra may be collected and stored at a rate of 1 kHz by the data acquisition system, allowing the instrument to be coupled with standard commercial Ti:sapphire lasers. The capabilities of the AToF-MS instrument are demonstrated on metal foils and semiconductor wafers using a Ti:sapphire laser emitting 800 nm, ∼75 fs pulses at 1 kHz. Results show that elemental quantification and depth profiling are feasible with this instrument. PMID:23020378
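The time-to-mass conversion underlying any TOF-MS data system can be sketched as a two-parameter calibration: ions arrive at t = t0 + k·√(m/z), and (t0, k) are fitted from known calibrant peaks. The calibrant masses and timing constants below are invented for illustration:

```python
import numpy as np

# Known calibrant ions (m/z values assumed for this sketch).
known_mz = np.array([27.0, 56.0, 184.0])
t0_true, k_true = 1.2e-7, 1.5e-6          # s, and s per sqrt(m/z); assumed
known_t = t0_true + k_true * np.sqrt(known_mz)

# The model t = t0 + k*sqrt(m/z) is linear in (t0, k): solve by least squares.
A = np.column_stack([np.ones_like(known_mz), np.sqrt(known_mz)])
(t0_fit, k_fit), *_ = np.linalg.lstsq(A, known_t, rcond=None)

def mz_from_time(t):
    # Invert the calibration to assign m/z to an unknown arrival time.
    return ((t - t0_fit) / k_fit) ** 2

# An unknown peak at the arrival time a 100 Da singly charged ion would give:
t_unknown = t0_true + k_true * np.sqrt(100.0)
print(round(float(mz_from_time(t_unknown)), 3))  # -> 100.0
```

Repeating this assignment for each of the 1 kHz spectra, indexed by laser shot (depth) and stage position (x, y), is what turns the raw stream into depth profiles and images.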

  9. High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform

    PubMed Central

    Chan, Kenny K. H.; Tang, Shuo

    2010-01-01

    The useful imaging range in spectral domain optical coherence tomography (SD-OCT) is often limited by the depth-dependent sensitivity fall-off. Processing SD-OCT data with the non-uniform fast Fourier transform (NFFT) can improve the sensitivity fall-off at maximum depth by more than 5 dB, concurrently with a 30-fold decrease in processing time compared to the fast Fourier transform with cubic spline interpolation method. NFFT can also improve local signal to noise ratio (SNR) and reduce image artifacts introduced in post-processing. Combined with parallel processing, NFFT is shown to have the ability to process up to 90k A-lines per second. High-speed SD-OCT imaging is demonstrated at a camera-limited 100 frames per second on an ex-vivo squid eye. PMID:21258551
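The underlying problem — the spectrometer samples uniformly in wavelength, but depth is conjugate to wavenumber k = 2π/λ — can be sketched with a simple linear-interpolation resampling before the FFT. This is a stand-in for the cubic-spline and NFFT methods the paper compares, with illustrative spectral parameters:

```python
import numpy as np

n = 2048
lam = np.linspace(800e-9, 880e-9, n)          # uniform in wavelength
k = 2 * np.pi / lam                            # hence non-uniform in k

z0 = 0.4e-3                                    # single reflector at 0.4 mm
spectrum = 1.0 + np.cos(2 * k * z0)            # interferometric fringes

# Resample to a uniform k grid (np.interp needs ascending abscissae,
# and k decreases as wavelength increases, so both arrays are reversed).
k_uniform = np.linspace(k.min(), k.max(), n)
resampled = np.interp(k_uniform, k[::-1], spectrum[::-1])

# FFT of the uniform-k spectrum yields the A-line; for cos(2*k*z) the
# fringe frequency in cycles per unit k is z/pi, so depth = pi * f.
a_line = np.abs(np.fft.rfft(resampled - resampled.mean()))
dk = k_uniform[1] - k_uniform[0]
depth_axis = np.fft.rfftfreq(n, d=dk) * np.pi
peak_depth = depth_axis[np.argmax(a_line)]
print(round(peak_depth * 1e3, 2))              # -> 0.4 (mm)
```

The NFFT replaces the interpolate-then-FFT step with a direct transform of the non-uniform samples, which is where the reported fall-off and speed improvements come from.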

  10. Mapping the opacity of paint layers in paintings with coloured grounds using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liu, Ping; Hall-Aquitania, Moorea; Hermens, Erma; Groves, Roger M.

    2017-07-01

    Optical diagnostic techniques are becoming important for technical art history (TAH) as well as for heritage conservation. In recent years, optical coherence tomography (OCT) has been increasingly used as a novel technique for the inspection of artwork, revealing the stratigraphy of paintings. It has also been shown to be an effective tool for varnish layer inspection. OCT is a contactless and non-destructive technique for microstructural imaging of turbid media, originally developed for medical applications. However, current OCT instruments have difficulty with paint layer inspection due to the opacity of most pigments. This paper explores the potential of OCT for the investigation of paintings with coloured grounds. Depth scans were processed to determine the light penetration depth at the optical wavelength based on a 1/e light-attenuation calculation. The variation in paint opacity was mapped based on the microstructural images, and 3D penetration depth profiles were calculated and related back to the construction of the artwork. By determining the light penetration depth over a range of wavelengths, the 3D depth perception of a painting with coloured grounds can be characterized optically.
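The 1/e attenuation calculation can be sketched as follows: find the depth at which the A-scan signal falls to 1/e of its surface value. The synthetic Beer-Lambert A-scan and the omission of smoothing and surface detection are simplifications for illustration:

```python
import numpy as np

def penetration_depth_1e(depths_m, a_scan):
    """Depth at which the OCT signal falls to 1/e of its surface value.

    Finds the first sample below I0/e and linearly interpolates between
    the bracketing samples for a sub-sample estimate.
    """
    target = a_scan[0] / np.e
    below = np.nonzero(a_scan <= target)[0]
    if below.size == 0:
        return depths_m[-1]  # never reaches 1/e within the scan
    j = below[0]
    frac = (a_scan[j - 1] - target) / (a_scan[j - 1] - a_scan[j])
    return depths_m[j - 1] + frac * (depths_m[j] - depths_m[j - 1])

# Synthetic A-scan with Beer-Lambert decay at mu = 2 mm^-1,
# so the 1/e penetration depth should be 1/mu = 0.5 mm.
z = np.linspace(0.0, 2e-3, 400)            # 0 .. 2 mm depth axis
mu = 2000.0                                 # attenuation, m^-1
a_scan = np.exp(-mu * z)
print(round(penetration_depth_1e(z, a_scan) * 1e3, 3))  # -> 0.5 (mm)
```

Applying this per A-scan across a volume yields the opacity maps and 3D penetration-depth profiles described in the abstract.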

  11. Anterior segment and retinal OCT imaging with simplified sample arm using focus tunable lens technology (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Grulkowski, Ireneusz; Karnowski, Karol; Ruminski, Daniel; Wojtkowski, Maciej

    2016-03-01

    Availability of long-depth-range OCT systems enables comprehensive structural imaging of the eye and extraction of biometric parameters characterizing the entire eye. Several approaches have been developed to perform OCT imaging with extended depth ranges. In particular, current SS-OCT technology seems suited to visualizing both the anterior and posterior eye in a single measurement. The aim of this study is to demonstrate integrated anterior segment and retinal SS-OCT imaging using a single instrument, in which the sample arm is equipped with an electrically tunable lens (ETL). The ETL is composed of an optical liquid confined by an elastic polymer membrane. The shape of the membrane, electrically controlled by a specific ring, defines the radius of curvature of the lens surface and thus regulates the power of the lens. The ETL can also be equipped with an additional offset lens to adjust the tuning range of the optical power. We characterize the operation of the tunable lens using wavefront sensing. We develop an optimized optical set-up with two adaptive operational states of the ETL in order to focus the light either on the retina or on the anterior segment of the eye. We test the performance of the set-up using a whole-eye phantom as the object. Finally, we perform human eye in vivo imaging using the SS-OCT instrument with versatile imaging functionality that accounts for the optics of the eye and enables dynamic control of the optical beam focus.

  12. Swept-source optical coherence tomography powered by a 1.3-μm vertical cavity surface emitting laser enables 2.3-mm-deep brain imaging in mice in vivo

    NASA Astrophysics Data System (ADS)

    Choi, Woo June; Wang, Ruikang K.

    2015-10-01

    We report noninvasive, in vivo optical imaging deep within a mouse brain by swept-source optical coherence tomography (SS-OCT), enabled by a 1.3-μm vertical cavity surface emitting laser (VCSEL). VCSEL SS-OCT offers a constant signal sensitivity of 105 dB throughout an entire depth of 4.25 mm in air, ensuring an extended usable imaging depth range of more than 2 mm in turbid biological tissue. Using this approach, we show deep brain imaging in mice with an open-skull cranial window preparation, revealing intact mouse brain anatomy from the superficial cerebral cortex to the deep hippocampus. VCSEL SS-OCT would be applicable to small animal studies for the investigation of deep tissue compartments in living brains where diseases such as dementia and tumor can take their toll.

  13. A commercialized photoacoustic microscopy system with switchable optical and acoustic resolutions

    NASA Astrophysics Data System (ADS)

    Pu, Yang; Bi, Renzhe; Olivo, Malini; Zhao, Xiaojie

    2018-02-01

    A focused-scanning photoacoustic microscopy (PAM) system is available to help advance life-science research in neuroscience, cell biology, and in vivo imaging. At this early stage, the only manufacturer of PAM systems, MicroPhotoAcoustics (MPA; Ronkonkoma, NY), has developed a commercial PAM system with switchable optical and acoustic resolution (OR- and AR-PAM), using multiple patents licensed from the lab of Lihong Wang, who pioneered photoacoustics. The system includes different excitation sources. Two kilohertz-tunable, Q-switched, diode-pumped solid-state (DPSS) lasers at 532 and 559 nm, offering up to a 30 kHz pulse repetition rate and 9 ns pulse duration, enable functional photoacoustic tomography for sO2 (oxygen saturation of hemoglobin) imaging in OR-PAM, while a Ti:sapphire laser tunable from 700 to 900 nm enables deep-tissue imaging. OR-PAM provides up to 1 mm penetration depth and 5 μm lateral resolution, while AR-PAM offers up to 3 mm imaging depth and 45 μm lateral resolution. The scanning step sizes for OR- and AR-PAM are 0.625 and 6.25 μm, respectively. Researchers have used the system for a range of applications, including preclinical neural imaging; imaging of cell nuclei in the intestine, ear, and leg; and preclinical human imaging of the finger cuticle. With the continuation of new technological advancements and discoveries, MPA plans to further advance PAM to achieve faster imaging speed and higher spatial resolution at deeper tissue layers, and to address a broader range of biomedical applications.

  14. Total variation based image deconvolution for extended depth-of-field microscopy images

    NASA Astrophysics Data System (ADS)

    Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.

    2015-03-01

    One approach to a detailed understanding of dynamical cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with a high lateral resolution on the order of a few hundred nanometers. In a previous study, extended depth-of-field microscopy (EDF-microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was proved in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of light. Hence the focal ellipsoid is smeared out and images appear blurred in a first step. Image restoration by deconvolution, using the known point-spread function (PSF) of the optical system, is necessary to achieve sharp microscopic images over an extended depth of field. This work focuses on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. The inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and because the PSF used for deconvolution exactly fits in just one plane within the object. We use non-linear Total Variation based image restoration techniques, in which different types of noise can be treated properly. Various algorithms are evaluated for artificially generated 3D images as well as for fluorescence measurements of BPAE cells.

  15. Visualization of the 3-D topography of the optic nerve head through a passive stereo vision model

    NASA Astrophysics Data System (ADS)

    Ramirez, Juan M.; Mitra, Sunanda; Morales, Jose

    1999-01-01

    This paper describes a system for surface recovery and visualization of the 3D topography of the optic nerve head, in support of early diagnosis and follow-up of glaucoma. In stereo vision, depth information is obtained by triangulation of corresponding points in a pair of stereo images. In this paper, the use of the cepstrum transformation as a disparity measurement technique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse-to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are processed through a cubic B-spline interpolation technique in order to obtain a smoother representation. The methodology is being especially refined for use with medical images in the clinical evaluation of eye diseases such as open-angle glaucoma, and is currently under testing for clinical evaluation and analysis of reproducibility and accuracy.

  16. In vivo cross-sectional imaging of the phonating larynx using long-range Doppler optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Coughlan, Carolyn A.; Chou, Li-Dek; Jing, Joseph C.; Chen, Jason J.; Rangarajan, Swathi; Chang, Theodore H.; Sharma, Giriraj K.; Cho, Kyoungrai; Lee, Donghoon; Goddard, Julie A.; Chen, Zhongping; Wong, Brian J. F.

    2016-03-01

    Diagnosis and treatment of vocal fold lesions has been a long-evolving science for the otolaryngologist. Contemporary practice requires biopsy of a glottal lesion in the operating room under general anesthesia for diagnosis. Current in-office technology is limited to visualizing the surface of the vocal folds with fiber-optic or rigid endoscopy and using stroboscopic or high-speed video to infer information about submucosal processes. Previous efforts using optical coherence tomography (OCT) have been limited by small working distances and imaging ranges. Here we report the first full field, high-speed, and long-range OCT images of awake patients’ vocal folds as well as cross-sectional video and Doppler analysis of their vocal fold motions during phonation. These vertical-cavity surface-emitting laser source (VCSEL) OCT images offer depth resolved, high-resolution, high-speed, and panoramic images of both the true and false vocal folds. This technology has the potential to revolutionize in-office imaging of the larynx.

  17. Toward 1-mm depth precision with a solid state full-field range imaging system

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.

    2006-02-01

    Previously, we demonstrated a novel heterodyne based solid-state full-field range-finding imaging system. This system is comprised of modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided with 1 Hz difference between the LEDs and image intensifier. A sequence of images of the resulting beating intensifier output are captured and processed to determine phase and hence distance to the object for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision in the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
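The heterodyne principle described here — a 1 Hz beat whose per-pixel phase encodes distance — can be sketched by extracting the beat-frequency phasor from the captured frame stack. The frame rate, scene, and single-bin DFT estimator below are illustrative assumptions, not the authors' processing chain:

```python
import numpy as np

C = 299_792_458.0
F_MOD = 10e6        # 10 MHz modulation, as in the abstract
F_BEAT = 1.0        # 1 Hz beat between LED and intensifier drives

def range_image(frames, fps):
    """Per-pixel range from a sequence of beating-intensity frames.

    frames: (n, h, w) stack sampled at fps. The phase of the 1 Hz beat at
    each pixel equals the modulation phase accumulated over the round
    trip, so d = phase * c / (4 * pi * F_MOD).
    """
    n = frames.shape[0]
    t = np.arange(n) / fps
    ref = np.exp(-2j * np.pi * F_BEAT * t)           # DFT bin at the beat
    phasor = np.tensordot(ref, frames, axes=(0, 0))  # complex, per pixel
    phase = np.mod(np.angle(phasor), 2 * np.pi)
    return phase * C / (4 * np.pi * F_MOD)

# Synthetic scene: every pixel at 3 m; sample one full beat period.
d_true = 3.0
phi = 4 * np.pi * F_MOD * d_true / C
fps, n = 16, 16
t = np.arange(n) / fps
frames = 1.0 + np.cos(2 * np.pi * F_BEAT * t + phi)[:, None, None] * np.ones((1, 4, 4))
print(round(float(range_image(frames, fps)[0, 0]), 3))  # -> 3.0
```

The 10 MHz modulation gives an unambiguous range of c/(2·f_mod) = 15 m, and phase noise translates to range error via the same c/(4π·f_mod) scale factor, which is why raising the operating frequency improves precision.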

  18. Benthic Habitat Mapping by Combining Lyzenga’s Optical Model and Relative Water Depth Model in Lintea Island, Southeast Sulawesi

    NASA Astrophysics Data System (ADS)

    Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.

    2017-12-01

    Benthic habitat mapping using satellite data is a challenging task for practitioners and academics, as benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, applying the correction in shallow coastal areas is challenging, as a dark object such as seagrass can have a very low pixel value, preventing its reliable identification and classification. This limitation can be addressed by applying the classification process separately to areas at different water depth levels. The water depth level can be extracted from satellite imagery using a Relative Water Depth Index (RWDI). This study proposes a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga's water-column correction method with the RWDI of Stumpf's method. The research was conducted on Lintea Island, which has highly varied benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed approach, two different classification procedures were implemented. The first is the commonly applied method in benthic habitat mapping, in which the DII image is used as input data for the entire coastal area regardless of depth variation. The second is the proposed new approach, which begins by separating the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input data, and the deep area was classified using the DII image as input data. The final classification maps of the two areas were merged into a single benthic habitat map, and a confusion matrix was applied to evaluate its mapping accuracy. The results show that the proposed approach can map all benthic objects across all depth ranges and achieves better accuracy than the classification map produced with the DII alone.
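Lyzenga's depth-invariant index, the DII referred to above, can be sketched for one band pair: take logs of the deep-water-subtracted signals and combine them with the attenuation-coefficient ratio estimated from a same-bottom training region. The synthetic sand-bottom pixels and band coefficients below are invented for illustration:

```python
import numpy as np

def depth_invariant_index(band_i, band_j, deep_i, deep_j):
    """Lyzenga depth-invariant index for one band pair.

    band_i, band_j : water-leaving signals for the two bands.
    deep_i, deep_j : optically-deep-water signals for the two bands.
    The attenuation ratio ki/kj is estimated from the variances and
    covariance of the log signals over a same-bottom training region
    (here the whole synthetic scene stands in for that region).
    """
    xi = np.log(band_i - deep_i)
    xj = np.log(band_j - deep_j)
    cov = np.cov(xi, xj)                       # consistent ddof for var/cov
    a = (cov[0, 0] - cov[1, 1]) / (2.0 * cov[0, 1])
    k_ratio = a + np.sqrt(a * a + 1.0)
    return xi - k_ratio * xj

# Synthetic sand pixels at varying depth z: L = L_deep + R * exp(-2 k z).
rng = np.random.default_rng(1)
z = rng.uniform(0.5, 5.0, 1000)
ki, kj = 0.30, 0.12                            # invented band attenuations
li = 0.02 + 0.40 * np.exp(-2 * ki * z)
lj = 0.05 + 0.35 * np.exp(-2 * kj * z)

dii = depth_invariant_index(li, lj, 0.02, 0.05)
# For a single bottom type the index is (numerically) constant over depth:
print(bool(np.std(dii) < 1e-6))                # -> True
```

The shallow-water failure mode the abstract describes shows up here directly: when `band - deep` approaches zero for a dark bottom, the logarithm blows up, which is why the proposed approach classifies shallow pixels from the sunglint-corrected image instead.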

  19. Multi-scale Functional and Molecular Photoacoustic Tomography

    PubMed Central

    Yao, Junjie; Xia, Jun; Wang, Lihong V.

    2015-01-01

    Photoacoustic tomography (PAT) combines rich optical absorption contrast with the high spatial resolution of ultrasound at depths in tissue. The high scalability of PAT has enabled anatomical imaging of biological structures ranging from organelles to organs. The inherent functional and molecular imaging capabilities of PAT have further allowed it to measure important physiological parameters and track critical cellular activities. Integration of PAT with other imaging technologies provides complementary capabilities and can potentially accelerate the clinical translation of PAT. PMID:25933617

  20. Remote sensing in marine environment - acquiring, processing, and interpreting GLORIA sidescan sonar images of deep sea floor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Leary, D.W.

    1989-03-01

    The US Geological Survey's remote sensing instrument for regional imaging of the deep sea floor (>400 m water depth) is the GLORIA (Geologic Long-Range Inclined Asdic) sidescan sonar system, designed and operated by the British Institute of Oceanographic Sciences. A 30-sec sweep rate provides a swath width of approximately 45 km, depending on water depth. The return signal is digitally recorded as 8-bit data to provide a cross-range pixel dimension of 50 m. Postcruise image processing is carried out using USGS software. Processing includes precision water-column removal, geometric and radiometric corrections, and contrast enhancement. Mosaicking includes map grid fitting, concatenation, and tone matching. Seismic reflection profiles, acquired along track during the survey, are image correlative and provide a subsurface dimension unique to marine remote sensing. Generally, GLORIA image interpretation is based on brightness variations, which are largely a function of (1) surface roughness at a scale of approximately 1 m and (2) slope changes of more than about 4° over distances of at least 50 m. Broader, low-frequency changes in slope that cannot be detected from the GLORIA data can be determined from seismic profiles. Digital files of bathymetry derived from echo-sounder data can be merged with GLORIA image data to create relief models of the sea floor for geomorphic interpretation of regional slope effects.

  1. A New Finite Difference Q-compensated RTM Algorithm in Tilted Transverse Isotropic (TTI) Media

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Hu, W.; Ning, J.

    2017-12-01

    Attenuating anisotropic geological bodies are difficult to image with conventional migration methods. In such scenarios, recorded seismic data suffer greatly from both amplitude decay and phase distortion, resulting in degraded resolution, poor illumination, and incorrect migration depth in the imaging results. To obtain high-quality images efficiently, we propose a novel TTI QRTM algorithm based on the Generalized Standard Linear Solid model, combined with a unique multi-stage optimization technique, to simultaneously correct the decayed amplitude and the distorted phase velocity. Numerical tests (shown in the figure) demonstrate that our TTI QRTM algorithm effectively corrects migration depth, significantly improves illumination, and enhances resolution within and below low-Q regions. The result of our new method is very close to the reference RTM image, while QRTM without TTI cannot produce a correct image. Compared to the conventional QRTM method based on a pseudo-spectral operator for fractional Laplacian evaluation, our method is more computationally efficient for large-scale applications and more suitable for GPU acceleration. With the current multi-stage dispersion optimization scheme, this TTI QRTM method performs best in the 10-70 Hz frequency range, and could be used over a wider frequency range. Furthermore, as the method can also handle frequency-dependent Q, it has potential for imaging deep structures where low Q exists, such as subduction zones, volcanic zones, or fault zones, using passive-source observations.
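The amplitude-decay half of the problem can be illustrated with the standard constant-Q loss model: a component at frequency f traveling for time t in a Q medium is attenuated by exp(-π·f·t/Q), and Q-compensation applies the (stabilized) inverse gain. This scalar sketch is an assumption for illustration only; the paper's QRTM applies the correction inside a TTI wave propagator, not per trace:

```python
import numpy as np

# A 30 Hz component after 1.5 s of propagation through a Q = 40 medium.
f0, q, t = 30.0, 40.0, 1.5
decay = np.exp(-np.pi * f0 * t / q)          # constant-Q amplitude loss

# Stabilized inverse gain: the plain inverse 1/decay would also amplify
# noise without bound, so a small epsilon caps the gain (assumed value).
eps = 1e-4
gain = decay / (decay**2 + eps)

amplitude = 1.0 * decay                      # what is recorded
compensated = amplitude * gain               # what Q-compensation restores
print(bool(0.8 < compensated < 1.0))         # -> True: most, not all, restored
```

The gap between `compensated` and 1.0 is the price of stabilization, and it grows with f·t/Q, which is one reason the method is tuned to a finite (10-70 Hz) band.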

  2. Frequency multiplexed long range swept source optical coherence tomography

    PubMed Central

    Zurauskas, Mantas; Bradu, Adrian; Podoleanu, Adrian Gh.

    2013-01-01

    We present a novel swept source optical coherence tomography configuration, equipped with acousto-optic deflectors that can be used to simultaneously acquire multiple B-scans originating from different depths. The sensitivity range of the configuration is evaluated while acquiring five simultaneous B-scans. Then the configuration is employed to demonstrate long range B-scan imaging by combining two simultaneous B-scans from a mouse head sample. PMID:23760762

  3. Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system

    NASA Astrophysics Data System (ADS)

    Zheng, Yipeng; Tan, Wenjiang; Si, Jinhai; Ren, YuHu; Xu, Shichao; Tong, Junyi; Hou, Xun

    2016-09-01

    We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.

  4. Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yipeng; Tan, Wenjiang, E-mail: tanwenjiang@mail.xjtu.edu.cn; Si, Jinhai

    2016-09-07

    We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.

  5. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing

    PubMed Central

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-01-01

    UAVs and other low-altitude remote sensing platforms are proving to be very useful tools for remote sensing of river systems. Currently, consumer-grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made in obtaining river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map the submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river-bed substrate. This study therefore looked at the possibilities of using optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear-water streams. We first applied the Optimal Band Ratio Analysis (OBRA) method of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear-water stream. The results showed that for each species the ratios of certain wavelength pairs were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation between vegetation types is subsidiary to spectral variation due to depth changes. The strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near-infrared (NIR) region between 825 and 925 nm and one band in the visible-light region. Data of both high spatial and high spectral resolution are currently not commonly available for applying the OBRA results directly to image data for SAV depth mapping. Instead, a novel low-cost data acquisition method was used to obtain six-band high-spatial-resolution image composites using a NIR-sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. The band combinations providing the best-performing models (R²-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications. PMID:26437410
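    The OBRA procedure described above amounts to an exhaustive search over band pairs for the log-ratio predictor most strongly associated with depth. A minimal sketch with made-up reflectance data (the band names and attenuation values are hypothetical, not from the study):

```python
import math

def r_squared(xs, ys):
    """Coefficient of determination of a simple linear fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy ** 2) / (sxx * syy)

def obra(reflectance, depths):
    """Try every band pair; the predictor is X = ln(R[i]/R[j]).
    reflectance: dict band -> list of reflectances, one per sample.
    Returns (band_i, band_j, best R^2)."""
    best = None
    bands = sorted(reflectance)
    for i in bands:
        for j in bands:
            if i == j:
                continue
            x = [math.log(ri / rj)
                 for ri, rj in zip(reflectance[i], reflectance[j])]
            r2 = r_squared(x, depths)
            if best is None or r2 > best[2]:
                best = (i, j, r2)
    return best

# Toy data: a NIR band attenuates strongly with depth, a green band barely.
depths = [0.1, 0.2, 0.3, 0.4, 0.5]
refl = {
    "green_560": [0.30, 0.29, 0.28, 0.27, 0.26],
    "nir_850":   [0.40 * math.exp(-4.0 * d) for d in depths],
}
print(obra(refl, depths))
```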

  6. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing.

    PubMed

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-09-30

    UAVs and other low-altitude remote sensing platforms are proving to be very useful tools for remote sensing of river systems. Currently, consumer-grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made in obtaining river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map the submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river-bed substrate. This study therefore looked at the possibilities of using optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear-water streams. We first applied the Optimal Band Ratio Analysis (OBRA) method of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear-water stream. The results showed that for each species the ratios of certain wavelength pairs were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation between vegetation types is subsidiary to spectral variation due to depth changes. The strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near-infrared (NIR) region between 825 and 925 nm and one band in the visible-light region. Data of both high spatial and high spectral resolution are currently not commonly available for applying the OBRA results directly to image data for SAV depth mapping. Instead, a novel low-cost data acquisition method was used to obtain six-band high-spatial-resolution image composites using a NIR-sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. The band combinations providing the best-performing models (R²-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications.

  7. New Insights on Subsurface Imaging of Carbon Nanotubes in Polymer Composites via Scanning Electron Microscopy

    NASA Technical Reports Server (NTRS)

    Zhao, Minhua; Ming, Bin; Kim, Jae-Woo; Gibbons, Luke J.; Gu, Xiaohong; Nguyen, Tinh; Park, Cheol; Lillehei, Peter T.; Villarrubia, J. S.; Vladar, Andras E.

    2015-01-01

    Despite many studies of subsurface imaging of carbon nanotube (CNT)-polymer composites via scanning electron microscopy (SEM), significant controversy exists concerning the imaging depth and contrast mechanisms. We studied CNT-polyimide composites and, by three-dimensional reconstructions of captured stereo-pair images, determined that the maximum SEM imaging depth was typically hundreds of nanometers. The contrast mechanisms were investigated over a broad range of beam accelerating voltages from 0.3 to 30 kV, and ascribed to modulation by embedded CNTs of the effective secondary electron (SE) emission yield at the polymer surface. This modulation of the SE yield is due to non-uniform surface potential distribution resulting from current flows due to leakage and electron beam induced current. The importance of an external electric field on SEM subsurface imaging was also demonstrated. The insights gained from this study can be generally applied to SEM nondestructive subsurface imaging of conducting nanostructures embedded in dielectric matrices such as graphene-polymer composites, silicon-based single electron transistors, high resolution SEM overlay metrology or e-beam lithography, and have significant implications in nanotechnology.

  8. Characterization of a time-resolved non-contact scanning diffuse optical imaging system exploiting fast-gated single-photon avalanche diode detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Sieno, Laura, E-mail: laura.disieno@polimi.it; Dalla Mora, Alberto; Contini, Davide

    2016-03-15

    We present a system for non-contact time-resolved diffuse reflectance imaging, based on small source-detector distance and high dynamic range measurements utilizing a fast-gated single-photon avalanche diode. The system is suitable for imaging of diffusive media without any contact with the sample and with a spatial resolution of about 1 cm at 1 cm depth. In order to objectively assess its performances, we adopted two standardized protocols developed for time-domain brain imagers. The related tests included the recording of the instrument response function of the setup and the responsivity of its detection system. Moreover, by using liquid turbid phantoms with absorbing inclusions, depth-dependent contrast and contrast-to-noise ratio as well as lateral spatial resolution were measured. To illustrate the potentialities of the novel approach, the characteristics of the non-contact system are discussed and compared to those of a fiber-based brain imager.

  9. Imaging the ascent path of fluids and partial melts at convergent plate boundaries by geophysical characteristics

    NASA Astrophysics Data System (ADS)

    Luehr, B. G.; Koulakov, I.; Kopp, H.; Rabbel, W.; Zschau, J.

    2011-12-01

    During the last decades, many investigations were carried out at active continental margins to understand the link between the subduction of the fluid-saturated oceanic plate and the ascent of fluids and partial melts, which form a magmatic system leading to volcanism at the Earth's surface. For this purpose, structural information is needed about the slab itself, the region above it, the ascent paths, and the storage of fluids and partial melts in the mantle and the crust above the downgoing slab, up to the volcanoes at the surface. If we consider statistically the distance between the trench and the volcanic chain, as well as the inclination angle of the downgoing plate, the mean depth to the Wadati-Benioff zone beneath the volcanic chain is approximately 100 kilometers. Notably, this depth range shows pronounced seismicity at almost all subduction zones. Additionally, mineralogical laboratory investigations have shown that the descending plate is maximally dehydrated at around 100 km depth because of the temperature and pressure conditions there. However, assuming a vertical fluid ascent, there are exceptions. For instance, at the Sunda Arc beneath Central Java, the vertical distance is approximately 150 km; in this case, seismic investigations have shown that the fluids do not ascend vertically but obliquely, from a source area at around 100 km depth. The ascent of fluids and the appearance of partial melts, as well as the distribution of these materials in the crust, can be probed by seismic and seismological methods. In seismic tomography, these areas are imaged as lowered seismic velocities, high Vp/Vs ratios, and increased attenuation of seismic shear waves. To explore plate boundaries, however, large and complex amphibious experiments are required, in which active and passive seismic investigations should be combined. They have to cover a range from seaward of the trench to far behind the volcanic chain, to provide, under favorable conditions, information down to a depth of 150 km. In particular, the record of natural seismicity and its distribution allows three-dimensional imaging of the entire crust and lithosphere structure above the Wadati-Benioff zone with the help of tomographic procedures, and therewith of the entire ascent path of the fluids and melts responsible for volcanism. The seismic velocity anomalies detected so far range from a few percent to more than 30% reduction. In the lecture, findings from different subduction zones are compared and discussed.
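    The ~100 km figure quoted above follows from simple slab geometry: the depth to the Wadati-Benioff zone beneath the arc is the trench-to-arc distance times the tangent of the slab dip. A worked sketch (the specific distance and dip values are illustrative, not from the lecture):

```python
import math

def wadati_benioff_depth(trench_distance_km, dip_deg):
    """Depth to the slab beneath the volcanic arc, assuming a planar
    slab dipping at dip_deg from the trench."""
    return trench_distance_km * math.tan(math.radians(dip_deg))

# e.g. a volcanic chain ~170 km from the trench above a slab dipping
# ~30 degrees sits roughly 100 km above the Wadati-Benioff zone.
print(round(wadati_benioff_depth(170.0, 30.0)))  # → 98
```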

  10. Depth information in natural environments derived from optic flow by insect motion detection system: a model analysis

    PubMed Central

    Schwegmann, Alexander; Lindemann, Jens P.; Egelhaaf, Martin

    2014-01-01

    Knowing the depth structure of the environment is crucial for moving animals in many behavioral contexts, such as collision avoidance, targeting objects, or spatial navigation. An important source of depth information is motion parallax. This powerful cue is generated on the eyes during translatory self-motion with the retinal images of nearby objects moving faster than those of distant ones. To investigate how the visual motion pathway represents motion-based depth information we analyzed its responses to image sequences recorded in natural cluttered environments with a wide range of depth structures. The analysis was done on the basis of an experimentally validated model of the visual motion pathway of insects, with its core elements being correlation-type elementary motion detectors (EMDs). It is the key result of our analysis that the absolute EMD responses, i.e., the motion energy profile, represent the contrast-weighted nearness of environmental structures during translatory self-motion at a roughly constant velocity. In other words, the output of the EMD array highlights contours of nearby objects. This conclusion is largely independent of the scale over which EMDs are spatially pooled and was corroborated by scrutinizing the motion energy profile after eliminating the depth structure from the natural image sequences. Hence, the well-established dependence of correlation-type EMDs on both velocity and textural properties of motion stimuli appears to be advantageous for representing behaviorally relevant information about the environment in a computationally parsimonious way. PMID:25136314
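    The correlation-type EMD at the core of the model can be sketched as a Hassenstein-Reichardt correlator: each half-detector multiplies one input by a delayed copy of the other, and the two half-detector outputs are subtracted. A minimal discrete-time sketch (a one-sample delay stands in for the paper's calibrated temporal filters):

```python
def emd_response(left, right):
    """Correlation-type elementary motion detector (Hassenstein-Reichardt):
    multiply each input with a delayed copy of the other, then subtract.
    The delay is modelled as a one-sample lag."""
    out = []
    for t in range(1, len(left)):
        out.append(left[t - 1] * right[t] - right[t - 1] * left[t])
    return out

# A pattern moving from the left input to the right input (rightward
# motion) yields a positive response; the reverse motion flips the sign.
rightward = emd_response([0, 1, 0, 0], [0, 0, 1, 0])
leftward  = emd_response([0, 0, 1, 0], [0, 1, 0, 0])
print(rightward, leftward)  # → [0, 1, 0] [0, -1, 0]
```

Taking the absolute value of such responses, as in the analysis above, discards the direction and leaves a motion-energy signal that scales with both image contrast and nearness.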

  11. Particle-Image Velocimeter Having Large Depth of Field

    NASA Technical Reports Server (NTRS)

    Bos, Brent

    2009-01-01

    An instrument that functions mainly as a particle-image velocimeter provides data on the sizes and velocities of flying opaque particles. The instrument is being developed as a means of characterizing fluxes of wind-borne dust particles in the Martian atmosphere. The instrument could also be adapted to terrestrial use in measuring the sizes and velocities of opaque particles carried by natural winds and industrial gases. Examples of potential terrestrial applications include monitoring of airborne industrial pollutants and of airborne particles in mine shafts. The design of this instrument reflects an observation, made in field research, that airborne dust particles derived from soil and rock are opaque enough to be observable by use of bright-field illumination with high contrast, allowing highly accurate measurements of sizes and shapes. The instrument includes a source of collimated light coupled to an afocal beam expander and an imaging array of photodetectors. When dust particles travel through the collimated beam, they cast shadows. The shadows are magnified by the beam expander and relayed to the array of photodetectors. Inasmuch as the images captured by the array are of dust-particle shadows rather than of the particles themselves, the depth of field of the instrument can be large: the instrument has a depth of field of about 11 mm, which is larger than the depths of field of prior particle-image velocimeters. The instrument can resolve, and measure the sizes and velocities of, particles having sizes in the approximate range of 1 to 300 μm. For slowly moving particles, data from two image frames are used to calculate velocities. For rapidly moving particles, image smear lengths from a single frame are used in conjunction with particle-size measurement data to determine velocities.
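    The two velocity recipes in the closing sentences can be written down directly. A sketch (the smear-length relation assumes the recorded streak length equals particle diameter plus displacement during the exposure, which is the usual convention but is not spelled out in the record):

```python
def velocity_from_frames(x1_m, x2_m, frame_interval_s):
    """Slow particles: displacement between two frames over the interval."""
    return (x2_m - x1_m) / frame_interval_s

def velocity_from_smear(smear_len_m, particle_diam_m, exposure_s):
    """Fast particles: a particle of diameter d smeared to length L in one
    exposure moved (L - d), since the shadow contributes its own width."""
    return (smear_len_m - particle_diam_m) / exposure_s

# A 50 um particle leaving a 250 um smear during a 100 us exposure
# moved 200 um, i.e. about 2 m/s.
print(velocity_from_smear(250e-6, 50e-6, 100e-6))
```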

  12. Seismogenic structures of the central Apennines and its implication for seismic hazard

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Riaz, M. S.; Shan, B.

    2017-12-01

    The central Apennines belt formed during the Miocene-to-Pliocene epoch in an environment where the Adriatic Plate collides with and plunges beneath the Eurasian Plate, eventually forming a fold-and-thrust belt. This active fold-and-thrust belt has experienced relatively frequent moderate-magnitude earthquakes, as well as strong destructive earthquakes such as the 1997 Umbria-Marche sequence, the 2009 Mw 6.3 L'Aquila earthquake sequence, and three strong earthquakes in 2016. Such high seismicity makes it one of the most active tectonic zones in the world. Moreover, most of these earthquakes are normal-fault events with shallow depths, and most earthquakes in the central Apennines have a low seismic energy to moment ratio. What seismogenic structure causes these seismic features, and what is the potential seismic hazard in the study region? To gain an in-depth understanding of the seismogenic structures in this region, we collected seismic data from the INGV, Italy, to model the crustal structure and to relocate the earthquakes. To improve the spatial resolution of the tomographic images, we collected travel times from 27627 earthquakes with M>1.7 recorded at 387 seismic stations. Double-Difference Tomography (hereafter DDT) is applied to build velocity structures and earthquake locations. A checkerboard test confirms that the spatial resolution in the 5-20 km depth range is better than 10 km. The travel-time residual decreased significantly from 1208 ms to 70 ms after the inversion. Horizontal Vp images show that most earthquakes occurred in high-anomaly zones, especially between 5-10 km depth, whereas at greater depths some earthquakes occurred in low-Vp anomalies. In the Vs images, shallow earthquakes mainly occurred in low-anomaly zones; in the 10-15 km depth range, earthquakes are mainly concentrated in normal-velocity or relatively low-anomaly zones. Moreover, most earthquakes occurred in high Poisson's ratio zones, especially at shallower depths. Since high Poisson's ratio anomalies usually correspond to weaker zones, and most earthquakes occur at shallow depths, the rock strength should be lower, so that the seismic energy to moment ratio is also lower.

  13. Readout models for BaFBr0.85I0.15:Eu image plates

    NASA Astrophysics Data System (ADS)

    Stoeckl, M.; Solodov, A. A.

    2018-06-01

    The linearity of the photostimulated luminescence process makes repeated image-plate scanning a viable technique for extracting a wider dynamic range. In order to obtain a response estimate, two semi-empirical models for the readout fading of an image plate are introduced; they relate the depth distribution of activated photostimulated luminescence centers within an image plate to the recorded signal. Model parameters are estimated from image-plate scan series with BAS-MS image plates and the Typhoon FLA 7000 scanner for the hard x-ray image-plate diagnostic, over a collection of experiments providing x-ray energy spectra whose approximate shape is a double exponential.
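    Although the paper's readout-fading models are semi-empirical and depth-dependent, the basic idea of repeated scanning can be illustrated with a toy model in which each readout depletes a fixed fraction of the remaining activated PSL centers (an assumption for illustration only, not the paper's model):

```python
def scan_series(s0, deplete_frac, n_scans):
    """Signal recorded on successive scans if each readout depletes a
    fixed fraction of the remaining activated PSL centers (toy model)."""
    signals, remaining = [], s0
    for _ in range(n_scans):
        signals.append(remaining * deplete_frac)
        remaining -= remaining * deplete_frac
    return signals

# Five scans at 40% depletion per read: later scans recover progressively
# weaker copies of the signal, so a region saturated on the first scan can
# still be quantified from a later, fainter one.
print([round(s, 3) for s in scan_series(1.0, 0.4, 5)])
```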

  14. Miniature all-optical probe for photoacoustic and ultrasound dual-modality imaging

    NASA Astrophysics Data System (ADS)

    Li, Guangyao; Guo, Zhendong; Chen, Sung-Liang

    2018-02-01

    Photoacoustic (PA) imaging forms an image based on optical absorption contrast with ultrasound (US) resolution. In contrast, US imaging is based on acoustic backscattering and provides structural information. In this study, we develop a miniature all-optical probe for high-resolution PA-US dual-modality imaging over a large imaging depth range. The probe employs three individual optical fibers (F1-F3) to achieve optical generation and detection of acoustic waves for both PA and US modalities. To offer wide-angle laser illumination, fiber F1, with a large numerical aperture (NA), is used for PA excitation. Wide-angle US waves, in turn, are generated by laser illumination of an optically absorbing composite film coated on the end face of fiber F2. Both the excited PA and backscattered US waves are detected by a Fabry-Pérot cavity on the tip of fiber F3, providing wide-angle acoustic detection. The wide angular coverage of the three optical fibers makes a large-NA synthetic aperture focusing technique possible, and thus high-resolution PA and US imaging. The probe diameter is less than 2 mm. Over a depth range of 4 mm, lateral resolutions of PA and US imaging are 104-154 μm and 64-112 μm, respectively, and axial resolutions of PA and US imaging are 72-117 μm and 31-67 μm, respectively. To show the imaging capability of the probe, phantom imaging with both PA and US contrasts is demonstrated. The results show that the probe has potential for endoscopic and intravascular imaging applications that require PA and US contrast with high resolution.

  15. Extending the depth of field with chromatic aberration for dual-wavelength iris imaging.

    PubMed

    Fitzgerald, Niamh M; Dainty, Christopher; Goncharov, Alexander V

    2017-12-11

    We propose a method of extending the depth of field to twice that achievable by conventional lenses, for the purpose of a low-cost iris-recognition front-facing camera in mobile phones. By introducing intrinsic primary chromatic aberration in the lens, the depth of field is doubled by means of dual-wavelength illumination. The lens parameters (radius of curvature, optical power) can be found analytically by paraxial raytracing. The effective range of distances covered increases with the dispersion of the chosen glass and with a larger distance to the near object point.
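    The dual-wavelength idea can be sketched with the thin-lens (lensmaker's) equation: a dispersive glass gives each wavelength its own focal length, so a fixed sensor plane is conjugate to two different object distances, stacking two depths of field. All numerical values below (radii, indices, sensor distance) are made up for illustration:

```python
def focal_length_mm(n, r1_mm, r2_mm):
    """Thin-lens focal length from the lensmaker's equation:
    1/f = (n - 1) * (1/R1 - 1/R2)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

def in_focus_object_distance_mm(f_mm, image_dist_mm):
    """Gaussian lens equation 1/f = 1/u + 1/v, solved for u."""
    return 1.0 / (1.0 / f_mm - 1.0 / image_dist_mm)

# A biconvex lens in a dispersive glass: n = 1.53 at the longer wavelength,
# n = 1.55 at the shorter one (hypothetical values). With the sensor fixed
# at 42 mm, each wavelength brings a different object distance into focus.
r1, r2, sensor = 40.0, -40.0, 42.0
for n in (1.53, 1.55):
    f = focal_length_mm(n, r1, r2)
    print(round(f, 2), "mm focal length ->",
          round(in_focus_object_distance_mm(f, sensor), 1), "mm in focus")
```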

  16. Spectral domain optical coherence tomography with extended depth-of-focus by aperture synthesis

    NASA Astrophysics Data System (ADS)

    Bo, En; Liu, Linbo

    2016-10-01

    We developed a spectral domain optical coherence tomography (SD-OCT) system with an extended depth-of-focus (DOF) achieved by aperture synthesis. For a Gaussian-shape light source, the lateral resolution is determined by the numerical aperture (NA) of the objective lens and is approximately maintained over the confocal parameter, defined as twice the Rayleigh range. The DOF, however, is proportional to the square of the lateral resolution. Consequently, a trade-off exists between the DOF and the lateral resolution, and researchers must judge which is more important for their application. In this study, three distinct optical apertures were obtained by embedding a circular phase spacer in the sample arm. Because of the optical path difference (OPD) between the three apertures caused by the phase spacer, three images were aligned with equal spacing along the z-axis. By correcting the OPD and the defocus-induced wavefront curvature, the three images at distinct depths were coherently summed. This system digitally refocuses the sample and yields a new image with higher lateral resolution over the confocal parameter, as demonstrated by imaging polystyrene calibration beads.

  17. Fluorescence lifetime imaging of skin cancer

    NASA Astrophysics Data System (ADS)

    Patalay, Rakesh; Talbot, Clifford; Munro, Ian; Breunig, Hans Georg; König, Karsten; Alexandrov, Yuri; Warren, Sean; Neil, Mark A. A.; French, Paul M. W.; Chu, Anthony; Stamp, Gordon W.; Dunsby, Chris

    2011-03-01

    Fluorescence intensity imaging and fluorescence lifetime imaging microscopy (FLIM) using two photon microscopy (TPM) have been used to study tissue autofluorescence in ex vivo skin cancer samples. A commercially available system (DermaInspect®) was modified to collect fluorescence intensity and lifetimes in two spectral channels using time correlated single photon counting and depth-resolved steady state measurements of the fluorescence emission spectrum. Uniquely, image segmentation has been used to allow fluorescence lifetimes to be calculated for each cell. An analysis of lifetime values obtained from a range of pigmented and non-pigmented lesions will be presented.

  18. Inverse scattering pre-stack depth imaging and its comparison to some depth migration methods for imaging rich fault complex structure

    NASA Astrophysics Data System (ADS)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Mubarok, Syahrul; Deny, Agus; Widowati, Sri; Kurniadi, Rizal

    2012-06-01

    Migration is an important issue for seismic imaging of complex structures. In this decade, depth imaging has become an important tool for producing accurate images, instead of time-domain imaging. The challenge for depth migration methods, however, is in revealing the complex structure of the subsurface. There are many depth migration methods, each with advantages and weaknesses. In this paper, we present our proposed method of pre-stack depth migration based on a time-domain inverse scattering wave equation. We hope this method can serve as a solution for imaging complex structures in Indonesia, especially in zones rich in thrust faults. In this research, we develop a recently advanced wave-equation migration based on time-domain inverse scattering, which uses a more natural wave propagation via scattered waves. This pre-stack depth migration uses a time-domain inverse scattering wave equation based on the Helmholtz equation. To provide true amplitude recovery, an inverse-of-divergence procedure and recovery of transmission loss are included in the pre-stack migration. Benchmarks of the proposed inverse scattering pre-stack depth migration against other migration methods are also presented, i.e., wave-equation pre-stack depth migration, wave-equation depth migration, and pre-stack time migration. The inverse scattering pre-stack depth migration successfully imaged a fault-rich zone containing extremely steep dips, yielding a seismic image of superior quality. The image quality of the inverse scattering migration is much better than that of the other migration methods.

  19. Research on the underwater target imaging based on the streak tube laser lidar

    NASA Astrophysics Data System (ADS)

    Cui, Zihao; Tian, Zhaoshuo; Zhang, Yanchao; Bi, Zongjie; Yang, Gang; Gu, Erdan

    2018-03-01

    A high-frame-rate streak tube imaging lidar (STIL) for real-time 3D imaging of underwater targets is presented in this paper. The system uses a 532 nm pulsed laser as the light source, with a maximum repetition rate of 120 Hz and a pulse width of 8 ns. The system is built on the LabVIEW platform; system control, synchronous image acquisition, 3D data processing, and display are realized through a PC. A 3D imaging experiment on underwater targets was carried out in a flume with an attenuation coefficient of 0.2, and images of targets at different depths and of different materials were obtained; the imaging frame rate is 100 Hz, and the maximum detection depth is 31 m. For an underwater target at a distance of 22 m, real-time acquisition of high-resolution 3D images is realized with a range resolution of 1 cm and a spatial resolution of 0.3 cm, and the spatial relationship of the targets can be clearly identified from the image. The experimental results show that STIL has good application prospects in underwater terrain detection, underwater search and rescue, and other fields.

  20. Wide-Bandwidth, Wide-Beamwidth, High-Resolution, Millimeter-Wave Imaging for Concealed Weapon Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheen, David M.; Fernandes, Justin L.; Tedeschi, Jonathan R.

    2013-06-12

    Active millimeter-wave imaging is currently being used for personnel screening at airports and other high-security facilities. The lateral resolution, depth resolution, clothing penetration, and image illumination quality obtained from next-generation systems can be significantly enhanced through the selection of the aperture size, antenna beamwidth, center frequency, and bandwidth. In this paper, the results of an extensive imaging trade study are presented using both planar and cylindrical three-dimensional imaging techniques at frequency ranges of 10-20 GHz, 10-40 GHz, 40-60 GHz, and 75-105 GHz.
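    One of the trade-offs in such a study, depth resolution versus bandwidth, follows from the standard wideband relation δr = c/(2B). A quick sketch over the four bands listed:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution_mm(bandwidth_ghz):
    """Depth (range) resolution of a wideband imager: c / (2B)."""
    return C / (2.0 * bandwidth_ghz * 1e9) * 1e3

# The frequency bands in the study, as (low, high) GHz. The wider bands
# (30 GHz of bandwidth) resolve depth to about 5 mm; the 10 GHz band
# only to about 15 mm.
for lo, hi in [(10, 20), (10, 40), (40, 60), (75, 105)]:
    print(f"{lo}-{hi} GHz: {range_resolution_mm(hi - lo):.1f} mm")
```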

  1. Three-dimensional imaging of individual point defects using selective detection angles in annular dark field scanning transmission electron microscopy.

    PubMed

    Johnson, Jared M; Im, Soohyun; Windl, Wolfgang; Hwang, Jinwoo

    2017-01-01

    We propose a new scanning transmission electron microscopy (STEM) technique that can realize the three-dimensional (3D) characterization of vacancies and of lighter and heavier dopants with high precision. Using multislice STEM imaging and diffraction simulations of β-Ga2O3 and SrTiO3, we show that selecting a small range of low scattering angles can make the contrast of the defect-containing atomic columns substantially more depth-dependent. The origin of the depth dependence is the de-channeling of electrons due to the existence of a point defect in the atomic column, which creates extra "ripples" at low scattering angles. The highest contrast of the point defect can be achieved when the de-channeling signal is captured using the 20-40 mrad detection angle range. The effects of sample thickness, crystal orientation, local strain, probe convergence angle, and experimental uncertainty on the depth-dependent contrast of the point defect are also discussed. The proposed technique therefore opens new possibilities for highly precise 3D structural characterization of individual point defects in functional materials.

  2. 3D sorghum reconstructions from depth images identify QTL regulating shoot architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mccormick, Ryan F.; Truong, Sandra K.; Mullet, John E.

    Dissecting the genetic basis of complex traits is aided by frequent and nondestructive measurements. Advances in range imaging technologies enable the rapid acquisition of three-dimensional (3D) data from an imaged scene. A depth camera was used to acquire images of sorghum (Sorghum bicolor), an important grain, forage, and bioenergy crop, at multiple developmental time points from a greenhouse-grown recombinant inbred line population. A semiautomated software pipeline was developed and used to generate segmented, 3D plant reconstructions from the images. Automated measurements made from 3D plant reconstructions identified quantitative trait loci for standard measures of shoot architecture, such as shoot height, leaf angle, and leaf length, and for novel composite traits, such as shoot compactness. The phenotypic variability associated with some of the quantitative trait loci displayed differences in temporal prevalence; for example, alleles closely linked with the sorghum Dwarf3 gene, an auxin transporter and pleiotropic regulator of both leaf inclination angle and shoot height, influence leaf angle prior to an effect on shoot height. Furthermore, variability in composite phenotypes that measure overall shoot architecture, such as shoot compactness, is regulated by loci underlying component phenotypes like leaf angle. As such, depth imaging is an economical and rapid method to acquire shoot architecture phenotypes in agriculturally important plants like sorghum to study the genetic basis of complex traits.

  3. 3D sorghum reconstructions from depth images identify QTL regulating shoot architecture

    DOE PAGES

    Mccormick, Ryan F.; Truong, Sandra K.; Mullet, John E.

    2016-08-15

    Dissecting the genetic basis of complex traits is aided by frequent and nondestructive measurements. Advances in range imaging technologies enable the rapid acquisition of three-dimensional (3D) data from an imaged scene. A depth camera was used to acquire images of sorghum (Sorghum bicolor), an important grain, forage, and bioenergy crop, at multiple developmental time points from a greenhouse-grown recombinant inbred line population. A semiautomated software pipeline was developed and used to generate segmented, 3D plant reconstructions from the images. Automated measurements made from 3D plant reconstructions identified quantitative trait loci for standard measures of shoot architecture, such as shoot height, leaf angle, and leaf length, and for novel composite traits, such as shoot compactness. The phenotypic variability associated with some of the quantitative trait loci displayed differences in temporal prevalence; for example, alleles closely linked with the sorghum Dwarf3 gene, an auxin transporter and pleiotropic regulator of both leaf inclination angle and shoot height, influence leaf angle prior to an effect on shoot height. Furthermore, variability in composite phenotypes that measure overall shoot architecture, such as shoot compactness, is regulated by loci underlying component phenotypes like leaf angle. As such, depth imaging is an economical and rapid method to acquire shoot architecture phenotypes in agriculturally important plants like sorghum to study the genetic basis of complex traits.

  4. Unsynchronized scanning with a low-cost laser range finder for real-time range imaging

    NASA Astrophysics Data System (ADS)

    Hatipoglu, Isa; Nakhmani, Arie

    2017-06-01

    Range imaging plays an essential role in many fields: 3D modeling, robotics, heritage preservation, agriculture, forestry, and reverse engineering. One of the most popular range-measuring technologies is the laser scanner, owing to its several advantages: long range, high precision, real-time measurement capability, and independence from lighting conditions. However, laser scanners are very costly, and their high cost prevents widespread use. Thanks to recent developments, low-cost, reliable, fast, and lightweight 1D laser range finders (LRFs) are now available. A low-cost 1D LRF combined with a scanning mechanism that steers the laser beam across additional dimensions makes it possible to capture a depth map. In this work, we present unsynchronized scanning with a low-cost LRF to shorten the scanning period and reduce the vibrations caused by the stop-and-go motion of synchronized scanning. Moreover, we developed an algorithm for aligning the unsynchronized raw data and propose a range-image post-processing framework. The proposed technique yields a range imaging system for a fraction of the price of its counterparts. The results show that the proposed method can fulfill the need for low-cost laser scanning of static environments; its most significant limitation is the scanning period, which is about 2 minutes for 55,000 range points (a 250x220-pixel image). In contrast, scanning the same image takes around 4 minutes with synchronized scanning. Once faster, longer-range, narrow-beam LRFs become available, the methods proposed in this work can produce better results.
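As a rough, hypothetical sketch of the resampling step such a scanner needs (the grid size matches the 250x220 image mentioned above; all function and variable names are ours, not the authors'), unordered (range, pan, tilt) samples from a steered 1D LRF can be binned into a regular range image:

```python
import numpy as np

def scan_to_range_image(ranges, pan, tilt, width=250, height=220):
    """Bin unordered (range, pan, tilt) samples onto a regular image grid.

    ranges, pan, tilt: equal-length 1-D arrays; pan/tilt are steering angles.
    Returns a (height, width) array of ranges, NaN where no sample landed.
    """
    pan_span = np.ptp(pan) + 1e-12    # avoid division by zero on a flat axis
    tilt_span = np.ptp(tilt) + 1e-12
    # Map each sample's steering angles to the nearest pixel index.
    u = np.clip(np.rint((pan - pan.min()) / pan_span * (width - 1)).astype(int), 0, width - 1)
    v = np.clip(np.rint((tilt - tilt.min()) / tilt_span * (height - 1)).astype(int), 0, height - 1)
    img = np.full((height, width), np.nan)
    img[v, u] = ranges                # later samples overwrite earlier ones
    return img
```

Pixels left empty by the unsynchronized trajectory (the NaN entries) would then be handled by the kind of post-processing framework the authors describe.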

  5. Electromagnetic behavior of spatial terahertz wave modulators based on reconfigurable micromirror gratings in Littrow configuration.

    PubMed

    Kappa, Jan; Schmitt, Klemens M; Rahm, Marco

    2017-08-21

    Efficient, high speed spatial modulators with predictable performance are a key element in any coded aperture terahertz imaging system. For spectroscopy, the modulators must also provide a broad modulation frequency range. In this study, we numerically analyze the electromagnetic behavior of a dynamically reconfigurable spatial terahertz wave modulator based on a micromirror grating in Littrow configuration. We show that such a modulator can modulate terahertz radiation over a wide frequency range from 1.7 THz to beyond 3 THz at a modulation depth of more than 0.6. As a specific example, we numerically simulated coded aperture imaging of an object with binary transmissive properties and successfully reconstructed the image.

  6. Characterization of intrabasin faulting and deformation for earthquake hazards in southern Utah Valley, Utah, from high-resolution seismic imaging

    USGS Publications Warehouse

    Stephenson, William J.; Odum, Jack K.; Williams, Robert A.; McBride, John H.; Tomlinson, Iris

    2012-01-01

    We conducted active and passive seismic imaging investigations along a 5.6-km-long, east–west transect ending at the mapped trace of the Wasatch fault in southern Utah Valley. Using two-dimensional (2D) P-wave seismic reflection data, we imaged basin deformation and faulting to a depth of 1.4 km and developed a detailed interval velocity model for prestack depth migration and 2D ground-motion simulations. Passive-source microtremor data acquired at two sites along the seismic reflection transect resolve S-wave velocities of approximately 200 m/s at the surface to about 900 m/s at 160 m depth and confirm a substantial thickening of low-velocity material westward into the valley. From the P-wave reflection profile, we interpret shallow (100–600 m) bedrock deformation extending from the surface trace of the Wasatch fault to roughly 1.5 km west into the valley. The bedrock deformation is caused by multiple interpreted fault splays displacing fault blocks downward to the west of the range front. Further west in the valley, the P-wave data reveal subhorizontal horizons from approximately 90 to 900 m depth that vary in thickness and whose dip increases with depth eastward toward the Wasatch fault. Another inferred fault about 4 km west of the mapped Wasatch fault displaces horizons within the valley to as shallow as 100 m depth. The overall deformational pattern imaged in our data is consistent with the Wasatch fault migrating eastward through time and with the abandonment of earlier synextensional faults, as part of the evolution of an inferred 20-km-wide half-graben structure within Utah Valley. Finite-difference 2D modeling suggests the imaged subsurface basin geometry can cause fourfold variation in peak ground velocity over distances of 300 m.

  7. Three-dimensional ghost imaging lidar via sparsity constraint

    NASA Astrophysics Data System (ADS)

    Gong, Wenlin; Zhao, Chengqiang; Yu, Hong; Chen, Mingliang; Xu, Wendong; Han, Shensheng

    2016-05-01

    Three-dimensional (3D) remote imaging is attracting increasing attention for capturing a target's characteristics. Although great progress in 3D remote imaging has been made with methods such as scanning imaging lidar and pulsed floodlight-illumination imaging lidar, present methods are limited in either detection range or application mode. Ghost imaging via sparsity constraint (GISC) enables the reconstruction of a two-dimensional N-pixel image from far fewer than N measurements. Combining the GISC technique with the depth information of targets captured through time-resolved measurements, we report a 3D GISC lidar system and experimentally show that a 3D scene at about 1.0 km range can be stably reconstructed from global measurements even below the Nyquist limit. Compared with existing 3D optical imaging methods, 3D GISC offers both high efficiency in information extraction and high sensitivity in detection. This approach can be generalized to nonvisible wavebands and applied to other 3D imaging areas.
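The sparsity-constrained reconstruction at the heart of GISC solves an underdetermined system y = Ax with far fewer measurements than pixels. The abstract does not specify the solver used; as a generic, hedged sketch (random measurement matrix, iterative soft thresholding, all parameter values illustrative), sparse recovery can look like:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L          # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink toward sparsity
    return x
```

In a ghost-imaging reading, each row of A would correspond to one illumination pattern and each entry of y to one bucket measurement; x is the flattened N-pixel image, recoverable from M < N rows when it is sparse in the chosen basis.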

  8. In vivo high-resolution cortical imaging with extended-focus optical coherence microscopy in the visible-NIR wavelength range

    NASA Astrophysics Data System (ADS)

    Marchand, Paul J.; Szlag, Daniel; Bouwens, Arno; Lasser, Theo

    2018-03-01

    Visible-light optical coherence tomography has attracted great interest in recent years for spectroscopic and high-resolution retinal and cerebral imaging. Here, we present an extended-focus optical coherence microscopy system operating from the visible to the near-infrared wavelength range for high axial and lateral resolution imaging of cortical structures in vivo. The system exploits an ultrabroad illumination spectrum centered in the visible wavelength range (λc = 650 nm, Δλ ˜ 250 nm), offering a submicron axial resolution (˜0.85 μm in water), and an extended-focus configuration providing a high lateral resolution of ˜1.4 μm maintained over ˜150 μm in depth in water. The system's axial and lateral resolutions are first characterized using phantoms, and its imaging performance is then demonstrated by imaging the vasculature, myelinated axons, and neuronal cells in the first layers of the somatosensory cortex of mice in vivo.

  9. Seismic images of the Brooks Range fold and thrust belt, Arctic Alaska, from an integrated seismic reflection/refraction experiment

    USGS Publications Warehouse

    Levander, A.; Fuis, G.S.; Wissinger, E.S.; Lutter, W.J.; Oldow, J.S.; Moore, Thomas E.

    1994-01-01

    We describe results of an integrated seismic reflection/refraction experiment across the Brooks Range and flanking geologic provinces in Arctic Alaska. The seismic acquisition was unusual in that reflection and refraction data were collected simultaneously with a 700 channel seismograph system deployed numerous times along a 315 km profile. Shot records show continuous Moho reflections from 0-180 km offset, as well as numerous upper- and mid-crustal wide-angle events. Single and low-fold near-vertical incidence common midpoint (CMP) reflection images show complex upper- and middle-crustal structure across the range from the unmetamorphosed Endicott Mountains allochthon (EMA) in the north, to the metamorphic belts in the south. Lower-crustal and Moho reflections are visible across the entire reflection profile. Travel-time inversion of PmP arrivals shows that the Moho, at 33 km depth beneath the North Slope foothills, deepens abruptly beneath the EMA to a maximum of 46 km, and then shallows southward to 35 km at the southern edge of the range. Two zones of upper- and middle-crustal reflections underlie the northern Brooks Range above ~ 12-15 km depth. The upper zone, interpreted as the base of the EMA, lies at a maximum depth of 6 km and extends over 50 km from the range front to the north central Brooks Range where the base of the EMA outcrops above the metasedimentary rocks exposed in the Doonerak window. We interpret the base of the lower zone, at ~ 12 km depth, to be from carbonate rocks above the master detachment upon which the Brooks Range formed. The seismic data suggest that the master detachment is connected to the faults in the EMA by several ramps. In the highly metamorphosed terranes south of the Doonerak window, the CMP section shows numerous south-dipping events which we interpret as a crustal scale duplex involving the Doonerak window rocks. 
The basal detachment reflections can be traced approximately 100 km, and dip southward from about 10-12 km near the range front, to 14-18 km beneath the Doonerak window, to 26-28 km beneath the metamorphic belts in the central Brooks Range. The section documents middle- and lower-crustal involvement in the formation of the Brooks Range. © 1994.

  10. Spectrally resolved chromatic confocal interferometry for one-shot nano-scale surface profilometry with several tens of micrometric depth range

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Chen, Yi-Shiuan; Chang, Yi-Wei; Lin, Shyh-Tsong; Yeh, Sheng Lih

    2013-01-01

    In this research, a new nano-scale measurement methodology based on spectrally resolved chromatic confocal interferometry (SRCCI) was developed by integrating chromatic confocal sectioning with spectrally resolved white-light interferometry (SRWLI) for microscopic three-dimensional surface profilometry. The chromatic confocal method (CCM), using broadband white light in combination with a specially designed chromatic dispersion objective, can simultaneously acquire multiple images over a large range of object depths, performing surface 3-D reconstruction from a single image shot without vertical scanning and correspondingly achieving a measurement depth range of up to hundreds of micrometers. A Linnik-type interferometric configuration based on spectrally resolved white-light interferometry was developed and integrated with the CCM to simultaneously achieve nanoscale axial resolution at the detection point. The white-light interferograms acquired at the exit plane of the spectrometer possess a continuous variation of wavelength along the chromaticity axis, in which the light intensity reaches its peak when the optical path difference between the two optical arms equals zero. To examine the measurement accuracy of the developed system, a pre-calibrated step-height target with a total step height of 10.10 μm was measured. The experimental result shows that the maximum measurement error was less than 0.3% of the overall measured height.
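The zero-OPD condition above can be illustrated with a toy model: for an idealized spectral interferogram I(k) = 1 + cos(k·OPD), the fringe frequency along the wavenumber axis is proportional to the optical path difference, so the OPD can be read off a Fourier transform. This is our simplified illustration, not the authors' actual processing chain:

```python
import numpy as np

def opd_from_spectrum(intensity, k):
    """Estimate the optical path difference from a spectral interferogram
    I(k) = 1 + cos(k * OPD) sampled at evenly spaced wavenumbers k."""
    ac = intensity - intensity.mean()            # strip the DC background
    spectrum = np.abs(np.fft.rfft(ac))
    # Scale frequencies so the peak bin reads directly in OPD units.
    freqs = np.fft.rfftfreq(len(k), d=(k[1] - k[0]) / (2 * np.pi))
    return freqs[np.argmax(spectrum)]
```

As the OPD approaches zero, the fringe frequency goes to zero and the spectrum brightens uniformly, which is the peak-intensity condition the abstract describes.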

  11. Micromotor endoscope catheter for in vivo, ultrahigh-resolution optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Herz, P. R.; Chen, Y.; Aguirre, A. D.; Schneider, K.; Hsiung, P.; Fujimoto, J. G.; Madden, K.; Schmitt, J.; Goodnow, J.; Petersen, C.

    2004-10-01

    A distally actuated, rotational-scanning micromotor endoscope catheter probe is demonstrated for ultrahigh-resolution in vivo endoscopic optical coherence tomography (OCT) imaging. The probe permits focus adjustment for visualization of tissue morphology at varying depths with improved transverse resolution compared with standard OCT imaging probes. The distal actuation avoids the nonuniform scanning motion artifacts present in other probe designs and can permit a wider range of imaging speeds. Ultrahigh-resolution endoscopic imaging is demonstrated in a rabbit with <4-µm axial resolution by use of a femtosecond Cr:forsterite laser light source. The micromotor endoscope catheter probe promises to improve OCT imaging performance in future endoscopic imaging applications.

  12. Dual-frequency transducer with a wideband PVDF receiver for contrast-enhanced, adjustable harmonic imaging

    NASA Astrophysics Data System (ADS)

    Kim, Jinwook; Lindsey, Brooks D.; Li, Sibo; Dayton, Paul A.; Jiang, Xiaoning

    2017-04-01

    Acoustic angiography is a contrast-enhanced, superharmonic microvascular imaging method. It has demonstrated high-resolution, high-contrast-to-tissue-ratio (CTR) imaging of vascular structure near tumors. Dual-frequency ultrasound transducers and arrays are usually used for this imaging technique. Stacked-type dual-frequency transducers have been developed for this vascular imaging method: injected microbubble contrast agents (MCAs) in the vessels are excited with low-frequency (1-5 MHz), moderate-power ultrasound burst waves, and the superharmonic responses from the MCAs are received by a high-frequency receiver (>10 MHz). The main challenge of conventional dual-frequency transducers is a limited penetration depth (<25 mm) due to insufficient receiving sensitivity for high-frequency harmonic signal detection. A receiver with high sensitivity spanning a wide superharmonic frequency range (3rd to 6th harmonic) enables selectable bubble-harmonic detection according to the required penetration depth. Here, we develop a new dual-frequency transducer composed of a 2 MHz 1-3 composite transmitter and a polyvinylidene fluoride (PVDF) receiver with a receiving frequency range of 4-12 MHz for adjustable harmonic imaging. The developed transducer was tested for harmonic responses from a microbubble-injected vessel-mimicking tube positioned 45 mm away. Despite the long imaging distance (45 mm), the prototype transducer detected a clear harmonic response with a contrast-to-noise ratio of 6-20 dB and a -6 dB axial resolution of 200-350 μm when imaging a 200 μm-diameter cellulose tube filled with microbubbles.

  13. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    PubMed Central

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system. PMID:19849837
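The geometry behind stereopsis reduces to triangulation: for two viewpoints separated by a baseline B with focal length f, a point at depth Z appears shifted between the two images by disparity d = f·B/Z. A minimal sketch (the 6.5 cm baseline matches the interpupillary distance mentioned above; the focal length is an illustrative value, not from the paper):

```python
# Depth from binocular disparity: Z = f * B / d.
def depth_from_disparity(disparity_px, baseline_m=0.065, focal_px=1000.0):
    """Depth in meters for a disparity given in pixels (pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return baseline_m * focal_px / disparity_px
```

Under these assumptions a 13-pixel disparity corresponds to a depth of 5 m; as disparity shrinks toward zero the depth tends to infinity, which is why stereoscopic depth cues weaken for distant scene planes.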

  14. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes.

    PubMed

    Boulos, Maged N Kamel; Robinson, Larry R

    2009-10-22

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  15. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    USGS Publications Warehouse

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  16. Seismic reflection images of the central California coast ranges and the tremor region of the San-Andreas-Fault system near Cholame

    NASA Astrophysics Data System (ADS)

    Gutjahr, Stine; Buske, Stefan

    2010-05-01

    The SJ-6 seismic reflection profile was acquired in 1981 over a distance of about 180 km from Morro Bay to the Sierra Nevada foothills in South Central California. The profile crosses several prominent fault systems, e.g. the Rinconada Fault (RF) in its western part and the San Andreas Fault (SAF) in its central part. The latter includes the region of increased tremor activity near Cholame reported recently by several authors. We have recorrelated the original field data to 26 seconds two-way traveltime, which allows us to image the crust and uppermost mantle down to approximately 40 km depth. A 3D tomographic velocity model derived from local earthquake data (Thurber et al., 2006) was used, and both Kirchhoff prestack depth migration and Fresnel-Volume-Migration were applied to the data set. Both imaging techniques were implemented in 3D, taking into account the true shot and receiver locations. The imaged subsurface volume itself was divided into three separate parts to correctly account for the significant kink in the profile line near the SAF. The most prominent features in the resulting images are areas of high reflectivity down to 30 km depth, in particular in the central-western part of the profile corresponding to the Salinian Block between the RF and the SAF. In the southwestern part, strong reflectors can be identified dipping slightly to the northeast at depths of around 15-25 km. The eastern part consists of west-dipping sediments at depths of 2-10 km that form a syncline structure toward its western end. The resulting images are compared with existing interpretations (Trehu and Wheeler, 1987; Wentworth and Zoback, 1989; Bloch et al., 1993) and discussed in the context of the suggested tremor locations in that area.

  17. Detection of a concealed object

    DOEpatents

    Keller, Paul E [Richland, WA; Hall, Thomas E [Kennewick, WA; McMakin, Douglas L [Richland, WA

    2010-11-16

    Disclosed are systems, methods, devices, and apparatus to determine if a clothed individual is carrying a suspicious, concealed object. This determination includes establishing data corresponding to an image of the individual through interrogation with electromagnetic radiation in the 200 MHz to 1 THz range. In one form, image data corresponding to intensity of reflected radiation and differential depth of the reflecting surface is received and processed to detect the suspicious, concealed object.

  18. Detection of a concealed object

    DOEpatents

    Keller, Paul E [Richland, WA; Hall, Thomas E [Kennewick, WA; McMakin, Douglas L [Richland, WA

    2008-04-29

    Disclosed are systems, methods, devices, and apparatus to determine if a clothed individual is carrying a suspicious, concealed object. This determination includes establishing data corresponding to an image of the individual through interrogation with electromagnetic radiation in the 200 MHz to 1 THz range. In one form, image data corresponding to intensity of reflected radiation and differential depth of the reflecting surface is received and processed to detect the suspicious, concealed object.

  19. Determining Snow Depth Using Airborne Multi-Pass Interferometric Synthetic Aperture Radar

    DTIC Science & Technology

    2013-09-01

    [Fragmentary record.] A relatively low-resolution 10 m DEM of the survey area was obtained from the USDA NAIP and then geocorrected to match the SAR image area. The remaining text consists of excerpts from an acronym list (JPL: Jet Propulsion Laboratory; LiDAR: Light Detection and Ranging; METAR: meteorological reporting observations; medivac: medical evacuation; NASA) and a truncated note that the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) mission was a joint National Aeronautics and Space Administration (NASA) ...

  20. A depth enhancement strategy for kinect depth image

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Li, Hua; Han, Cheng; Xue, Yaohong; Zhang, Chao; Hu, Hanping; Jiang, Zhengang

    2018-03-01

    Kinect is a motion-sensing input device widely used in computer vision and related fields. However, there are many inaccurate depth data in Kinect depth images, even with Kinect v2. In this paper, an algorithm is proposed to enhance Kinect v2 depth images. According to the principle of its depth measurement, the foreground and the background are treated separately. For the background, holes are filled according to the depth data in the neighborhood. For the foreground, a filling algorithm based on the color image, considering both spatial and color information, is proposed. An adaptive joint bilateral filtering method is used to reduce noise. Experimental results show that the processed depth images have clean backgrounds and clear edges, and the results are better than those of traditional strategies. The method can be applied in 3D reconstruction to preprocess depth images in real time and obtain accurate results.
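As a hedged sketch of the background hole-filling idea (neighborhood-based filling of invalid pixels; the window size and the use of a median are our assumptions, not the paper's exact rule):

```python
import numpy as np

def fill_depth_holes(depth, win=2):
    """Fill invalid (zero) depth pixels with the median of the valid depths
    in a (2*win+1) x (2*win+1) neighborhood; holes with no valid neighbor stay zero."""
    out = depth.astype(float).copy()
    for y, x in zip(*np.nonzero(depth == 0)):
        patch = depth[max(0, y - win):y + win + 1, max(0, x - win):x + win + 1]
        valid = patch[patch > 0]
        if valid.size:
            out[y, x] = np.median(valid)
    return out
```

A joint bilateral filter, as the paper uses on the foreground, would additionally weight each neighbor by its color similarity in the registered RGB image rather than by depth validity alone.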

  1. An endoscopic diffuse optical tomographic method with high resolution based on the improved FOCUSS method

    NASA Astrophysics Data System (ADS)

    Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei

    2017-02-01

    Endoscopic DOT has the potential to be applied to cancer-related imaging in tubular organs. Although DOT has a relatively large tissue penetration depth, endoscopic DOT is constrained by the narrow space of the internal tubular tissue and thus has a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under this limited measurement condition. To improve the resolution, a new FOCUSS algorithm is developed along with an image reconstruction algorithm based on the effective detection range (EDR). The algorithm restricts computation to the region of interest (ROI) to reduce the dimensions of the matrix, and this shrinking cuts down the computational burden. To further reduce computational complexity, a double conjugate gradient method is used for the matrix inversion. For a typical inner size and typical optical properties of cervix-like tubular tissue, reconstructed images from simulation data demonstrate that the proposed method achieves image quality equivalent to that of the EDR-based method when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary. The quantitative ratios of the reconstructed absorption and reduced scattering coefficients reach up to 70% and 80%, respectively, at depths within 5 mm. Furthermore, two close targets at different depths can be separated from each other. The proposed method will be useful for the development of endoscopic DOT technologies in tubular organs.

  2. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse, depth images. With the advent of deep learning there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure, while variations in image patches can be used to reconstruct local depth at high resolution. The main contribution of this paper is a supervised-learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information, and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset, we learn a correspondence between local RGB information and local depth while preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and on our own recordings using a low-cost camera and LiDAR setup.

  3. Robust Fusion of Color and Depth Data for RGB-D Target Tracking Using Adaptive Range-Invariant Depth Models and Spatio-Temporal Consistency Constraints.

    PubMed

    Xiao, Jingjing; Stolkin, Rustam; Gao, Yuqing; Leonardis, Ales

    2017-09-06

    This paper presents a novel robust method for single target tracking in RGB-D images, and also contributes a substantial new benchmark dataset for evaluating RGB-D trackers. While a target object's color distribution is reasonably motion-invariant, this is not true for the target's depth distribution, which continually varies as the target moves relative to the camera. It is therefore nontrivial to design target models which can fully exploit (potentially very rich) depth information for target tracking. For this reason, much of the previous RGB-D literature relies on color information for tracking, while exploiting depth information only for occlusion reasoning. In contrast, we propose an adaptive range-invariant target depth model, and show how both depth and color information can be fully and adaptively fused during the search for the target in each new RGB-D image. We introduce a new, hierarchical, two-layered target model (comprising local and global models) which uses spatio-temporal consistency constraints to achieve stable and robust on-the-fly target relearning. In the global layer, multiple features, derived from both color and depth data, are adaptively fused to find a candidate target region. In ambiguous frames, where one or more features disagree, this global candidate region is further decomposed into smaller local candidate regions for matching to local-layer models of small target parts. We also note that conventional use of depth data, for occlusion reasoning, can easily trigger false occlusion detections when the target moves rapidly toward the camera. To overcome this problem, we show how combining target information with contextual information enables the target's depth constraint to be relaxed. Our adaptively relaxed depth constraints can robustly accommodate large and rapid target motion in the depth direction, while still enabling the use of depth data for highly accurate reasoning about occlusions. 
For evaluation, we introduce a new RGB-D benchmark dataset with per-frame annotated attributes and extensive bias analysis. Our tracker is evaluated using two different state-of-the-art methodologies, VOT and object tracking benchmark, and in both cases it significantly outperforms four other state-of-the-art RGB-D trackers from the literature.

  4. High bit depth infrared image compression via low bit depth codecs

    NASA Astrophysics Data System (ADS)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites or infrastructure inspection by unmanned airborne vehicles, will require 16-bit-depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data are compressed by a hardware block called the video processing unit (VPU). However, in many cases, using two 8-bit VPUs can provide advantages over using higher-bit-depth image compression directly. We propose to compress 16-bit-depth images via 8-bit-depth codecs in the following way. First, an input 16-bit-depth image is mapped into two 8-bit-depth images: the first contains only the most significant bytes (MSB image) and the second contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8-bits-per-pixel input format. We analyze how the compression parameters for both the MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method using JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with that of JPEG2000, JPEG-XT, and H.265/HEVC codecs supporting direct compression of infrared images in 16-bit-depth format. Preliminary results show that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
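    The MSB/LSB byte-plane mapping described above can be sketched in a few lines of NumPy (the split and merge steps only; the 8-bit codec stage, e.g. H.264/AVC, is outside this sketch):

    ```python
    import numpy as np

    def split_16bit(image16):
        """Map a 16-bit image into two 8-bit planes: most significant
        bytes (MSB image) and least significant bytes (LSB image)."""
        msb = (image16 >> 8).astype(np.uint8)
        lsb = (image16 & 0xFF).astype(np.uint8)
        return msb, lsb

    def merge_16bit(msb, lsb):
        """Reassemble the 16-bit image after both planes are decoded."""
        return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
    ```

    In a lossless round trip the merge inverts the split exactly; with lossy 8-bit codecs, the compression parameters of the two planes must be balanced as analyzed in the paper.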

  5. 3D printed biomimetic vascular phantoms for assessment of hyperspectral imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Jianting; Ghassemi, Pejhman; Melchiorri, Anthony; Ramella-Roman, Jessica; Mathews, Scott A.; Coburn, James; Sorg, Brian; Chen, Yu; Pfefer, Joshua

    2015-03-01

    The emerging technique of three-dimensional (3D) printing provides a revolutionary way to fabricate objects with biologically realistic geometries. Previously we have performed optical and morphological characterization of basic 3D printed tissue-simulating phantoms and found them suitable for use in evaluating biophotonic imaging systems. In this study we assess the potential for printing phantoms with irregular, image-defined vascular networks that can be used to provide clinically-relevant insights into device performance. A previously acquired fundus camera image of the human retina was segmented, embedded into a 3D matrix, edited to incorporate the tubular shape of vessels and converted into a digital format suitable for printing. A polymer with biologically realistic optical properties was identified by spectrophotometer measurements of several commercially available samples. Phantoms were printed with the retinal vascular network reproduced as ~1.0 mm diameter channels at a range of depths up to ~3 mm. The morphology of the printed vessels was verified by volumetric imaging with μ-CT. Channels were filled with hemoglobin solutions at controlled oxygenation levels, and the phantoms were imaged by a near-infrared hyperspectral reflectance imaging system. The effect of vessel depth on hemoglobin saturation estimates was studied. Additionally, a phantom incorporating the vascular network at two depths was printed and filled with hemoglobin solution at two different saturation levels. Overall, results indicated that 3D printed phantoms are useful for assessing biophotonic system performance and have the potential to form the basis of clinically-relevant standardized test methods for assessment of medical imaging modalities.

  6. Digital holographic microscopy applied to measurement of a flow in a T-shaped micromixer

    NASA Astrophysics Data System (ADS)

    Ooms, T. A.; Lindken, R.; Westerweel, J.

    2009-12-01

    In this paper, we describe measurements of a three-dimensional (3D) flow in a T-shaped micromixer by means of digital holographic microscopy. Imaging tracer particles in a microscopic flow with conventional microscopy is accompanied by a small depth of field, which hinders true volumetric flow measurements. In holographic microscopy, the depth of the measurement domain does not have this limitation because any desired image plane can be reconstructed after recording. Our digital holographic microscope (DHM) consists of a conventional in-line recording system with an added magnifying optical element. The measured flow velocity and the calculated vorticity illustrate four streamwise vortices in the micromixer outflow channel. Because the investigated flow is stationary and strongly 3D, the DHM performance (i.e. accuracy and resolution) can be precisely investigated. The obtained dynamic spatial range and dynamic velocity range are larger than 20 and 30, respectively. High-speed multiple-frame measurements illustrate the capability to simultaneously track about 80 particles in a volumetric measurement domain.

  7. Long-range depth profiling of camouflaged targets using single-photon detection

    NASA Astrophysics Data System (ADS)

    Tobin, Rachael; Halimi, Abderrahim; McCarthy, Aongus; Ren, Ximing; McEwan, Kenneth J.; McLaughlin, Stephen; Buller, Gerald S.

    2018-03-01

    We investigate the reconstruction of depth and intensity profiles from data acquired using a custom-designed time-of-flight scanning transceiver based on the time-correlated single-photon counting technique. The system had an operational wavelength of 1550 nm and used a Peltier-cooled InGaAs/InP single-photon avalanche diode detector. Measurements were made of human figures, in plain view and obscured by camouflage netting, from a stand-off distance of 230 m in daylight using only submilliwatt average optical powers. These measurements were analyzed using a pixelwise cross correlation approach and compared to analysis using a bespoke algorithm designed for the restoration of multilayered three-dimensional light detection and ranging images. This algorithm is based on the optimization of a convex cost function composed of a data fidelity term and regularization terms, and the results obtained show that it achieves significant improvements in image quality for multidepth scenarios and for reduced acquisition times.
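    The pixelwise cross-correlation analysis mentioned above can be sketched as follows; the bin width, instrument response, and use of a simple correlation peak are illustrative assumptions rather than details from the paper:

    ```python
    import numpy as np

    C = 3.0e8  # approximate speed of light, m/s

    def depth_from_histogram(histogram, irf, bin_width_s):
        """Estimate the range for one pixel by cross-correlating its
        photon-timing histogram with the instrumental response function
        (IRF) and converting the peak position to a one-way distance."""
        corr = np.correlate(histogram, irf, mode="same")
        t_peak = np.argmax(corr) * bin_width_s
        return C * t_peak / 2.0  # halve the round-trip time
    ```

    The bespoke multilayer restoration algorithm in the paper replaces this per-pixel peak search with a regularized convex optimization over the whole image.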

  8. Holographic Optical Coherence Imaging of Rat Osteogenic Sarcoma Tumor Spheroids

    NASA Astrophysics Data System (ADS)

    Yu, Ping; Mustata, Mirela; Peng, Leilei; Turek, John J.; Melloch, Michael R.; French, Paul M. W.; Nolte, David D.

    2004-09-01

    Holographic optical coherence imaging is a full-frame variant of coherence-domain imaging. An optoelectronic semiconductor holographic film functions as a coherence filter placed before a conventional digital video camera that passes coherent (structure-bearing) light to the camera during holographic readout while preferentially rejecting scattered light. The data are acquired as a succession of en face images at increasing depth inside the sample in a fly-through acquisition. The samples of living tissue were rat osteogenic sarcoma multicellular tumor spheroids that were grown from a single osteoblast cell line in a bioreactor. Tumor spheroids are nearly spherical and have radial symmetry, presenting a simple geometry for analysis. The tumors investigated ranged in diameter from several hundred micrometers to over 1 mm. Holographic features from the tumors were observed in reflection to depths of 500-600 µm with a total tissue path length of approximately 14 mean free paths. The volumetric data from the tumor spheroids reveal heterogeneous structure, presumably caused by necrosis and microcalcifications characteristic of some human avascular tumors.

  9. Optimal arrangements of fiber optic probes to enhance the spatial resolution in depth for 3D reflectance diffuse optical tomography with time-resolved measurements performed with fast-gated single-photon avalanche diodes

    NASA Astrophysics Data System (ADS)

    Puszka, Agathe; Di Sieno, Laura; Dalla Mora, Alberto; Pifferi, Antonio; Contini, Davide; Boso, Gianluca; Tosi, Alberto; Hervé, Lionel; Planat-Chrétien, Anne; Koenig, Anne; Dinten, Jean-Marc

    2014-02-01

    Fiber optic probes with a width limited to a few centimeters can enable diffuse optical tomography (DOT) in internal organs like the prostate, or facilitate measurements on external organs like the breast or the brain. We have recently shown on 2D tomographic images that time-resolved measurements with a large dynamic range, obtained with fast-gated single-photon avalanche diodes (SPADs), can push forward the imaged depth range in a diffusive medium at short source-detector separation compared with conventional non-gated approaches. In this work, we confirm these performances with the first 3D tomographic images reconstructed with such a setup and processed with the Mellin-Laplace transform. More precisely, we investigate the performance of hand-held probes with short interfiber distances in terms of spatial resolution and specifically demonstrate the interest of a compact probe design featuring small source-detector separations. We compare the spatial resolution obtained with two probes having the same design but different scale factors, the first featuring interfiber distances of 15 mm and the second of 10 mm. We evaluate experimentally the spatial resolution obtained with each probe on the setup with fast-gated SPADs, for optical phantoms featuring two absorbing inclusions positioned at different depths, and conclude on the potential of short source-detector separations for DOT.

  10. Oblong-Shaped-Focused Transducers for Intravascular Ultrasound Imaging.

    PubMed

    Lee, Junsu; Jang, Jihun; Chang, Jin Ho

    2017-03-01

    In intravascular ultrasound (IVUS) imaging, a transducer is inserted into a blood vessel and rotated to obtain image data. For this purpose, the transducer aperture is typically less than 0.5 mm in diameter, which causes natural focusing to occur in the imaging depth ranging from 1 to 5 mm. Due to the small aperture, however, it is not viable to conduct geometric focusing in order to enhance the spatial resolution of IVUS images. Furthermore, this hampers narrowing the slice thickness of a cross-sectional scan plane in the imaging depth, which lowers the spatial and contrast resolutions of IVUS images. To solve this problem, we propose an oblong-shaped-focused transducer for IVUS imaging. Unlike conventional IVUS transducers with either a circular or a square flat aperture, the proposed transducer has an oblong aperture whose long side is positioned along the blood vessel. This unique configuration makes it possible to conduct geometric focusing at a desired depth in the elevation direction. In this study, furthermore, it is demonstrated that a spherically shaped aperture in both lateral and elevation directions also improves lateral resolution, compared to the conventional flat aperture. To ascertain this, the conventional and the proposed IVUS transducers were designed and fabricated to evaluate and compare their imaging performances through wire-phantom and tissue-mimicking-phantom experiments. For the proposed 50-MHz IVUS transducer, a PZT piece of 0.5 × 1.0 mm² was spherically shaped for an elevation focus at 3 mm by using the conventional press-focusing technique, whereas the conventional one has a flat aperture of 0.5 × 0.5 mm². The experimental results demonstrated that the proposed IVUS transducer is capable of improving the spatial and contrast resolutions of IVUS images.

  11. Depth profile measurement with lenslet images of the plenoptic camera

    NASA Astrophysics Data System (ADS)

    Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei

    2018-03-01

    An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, which avoids the need to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike with traditional methods, the resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.

  12. Seismic imaging of a mid-lithospheric discontinuity beneath Ontong Java Plateau

    NASA Astrophysics Data System (ADS)

    Tharimena, Saikiran; Rychert, Catherine A.; Harmon, Nicholas

    2016-09-01

    Ontong Java Plateau (OJP) is a huge, completely submerged volcanic edifice that is hypothesized to have formed during large plume melting events ∼90 and 120 My ago. It is currently resisting subduction into the North Solomon trench. The size and buoyancy of the plateau along with its history of plume melting and current interaction with a subduction zone are all similar to the characteristics and hypothesized mechanisms of continent formation. However, the plateau is remote, and enigmatic, and its proto-continent potential is debated. We use SS precursors to image seismic discontinuity structure beneath Ontong Java Plateau. We image a velocity increase with depth at 28 ± 4 km consistent with the Moho. In addition, we image velocity decreases at 80 ± 5 km and 282 ± 7 km depth. Discontinuities at 60-100 km depth are frequently observed both beneath the oceans and the continents. However, the discontinuity at 282 km is anomalous in comparison to surrounding oceanic regions; in the context of previous results it may suggest a thick viscous root beneath OJP. If such a root exists, then the discontinuity at 80 km bears some similarity to the mid-lithospheric discontinuities (MLDs) observed beneath continents. One possibility is that plume melting events, similar to that which formed OJP, may cause discontinuities in the MLD depth range. Plume-plate interaction could be a mechanism for MLD formation in some continents in the Archean prior to the onset of subduction.

  13. Subjective depth of field in presence of 4th-order and 6th-order Zernike spherical aberration using adaptive optics technology.

    PubMed

    Benard, Yohann; Lopez-Gil, Norberto; Legras, Richard

    2010-12-01

    To study the impact on the subjective depth of field of 4th-order spherical aberration and its combination with 6th-order spherical aberration, and to analyze the accuracy of image-quality metrics in predicting the impact. Laboratoire Aimé Cotton, Centre National de la Recherche Scientifique, Université Paris-Sud, Orsay, France. Case series. Subjective depth of field was defined as the range of defocus at which the target (3 high-contrast letters at 20/50) was perceived as acceptable. Depth of field was measured using 0.18 diopter (D) steps in young subjects with the addition of the following spherical aberration values: ±0.3 μm and ±0.6 μm 4th-order spherical aberration with 3.0 mm and 6.0 mm pupils, and ±0.3 μm 4th-order spherical aberration with ±0.1 μm 6th-order spherical aberration for 6.0 mm pupils. The addition of ±0.3 and ±0.6 μm 4th-order spherical aberration increased depth of field by 30% and 45%, respectively. The combination of 4th-order and 6th-order spherical aberration of opposite signs increased depth of field more than 4th-order spherical aberration alone (i.e., 63%), while the combination of 4th-order and 6th-order spherical aberration of the same sign did not (i.e., 24%). Whereas the midpoint of the depth of field could be predicted by image-quality metrics, none was found to be a good predictor of objectionable depth of field. Subjective depth of field increased when 4th-order and 6th-order spherical aberration of opposite signs were added but could not be predicted with image-quality metrics. Copyright © 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  14. Interplay of wavelength, fluence and spot-size in free-electron laser ablation of cornea.

    PubMed

    Hutson, M Shane; Ivanov, Borislav; Jayasinghe, Aroshan; Adunas, Gilma; Xiao, Yaowu; Guo, Mingsheng; Kozub, John

    2009-06-08

    Infrared free-electron lasers ablate tissue with high efficiency and low collateral damage when tuned to the 6 µm range. This wavelength dependence has been hypothesized to arise from a multi-step process following differential absorption by tissue water and proteins. Here, we test this hypothesis at wavelengths for which cornea has matching overall absorption, but drastically different differential absorption. We measure etch depth, collateral damage, and plume images, and find that the hypothesis is not confirmed. We do find larger etch depths for larger spot sizes, an effect that can lead to an apparent wavelength dependence. Plume imaging at several wavelengths and spot sizes suggests that this effect is due to increased post-pulse ablation at larger spots.

  15. Pseudo-color coding method for high-dynamic single-polarization SAR images

    NASA Astrophysics Data System (ADS)

    Feng, Zicheng; Liu, Xiaolin; Pei, Bingzhi

    2018-04-01

    A raw synthetic aperture radar (SAR) image usually has a 16-bit or higher bit depth, which cannot be directly visualized on 8-bit displays. In this study, we propose a pseudo-color coding method for high-dynamic single-polarization SAR images. The method considers the characteristics of both SAR images and human perception. In HSI (hue, saturation and intensity) color space, the method carries out high-dynamic-range tone mapping and pseudo-color processing simultaneously in order to avoid loss of details and to improve object identifiability. It is a highly efficient global algorithm.
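    A toy global mapping in the spirit of the abstract (logarithmic tone mapping of the high-dynamic-range amplitude combined with a hue ramp in HSI space; the specific log curve and blue-to-red hue range are illustrative choices, not the authors' algorithm):

    ```python
    import numpy as np

    def pseudo_color_sar(amplitude, hue_range=(240.0, 0.0)):
        """Compress the dynamic range with a log tone mapping, then spread
        the result over a hue ramp; returns stacked H, S, I planes."""
        a = amplitude.astype(np.float64)
        tone = np.log1p(a) / np.log1p(a.max())  # tone-mapped to [0, 1]
        hue = hue_range[0] + tone * (hue_range[1] - hue_range[0])
        return np.stack([hue, np.ones_like(tone), tone], axis=-1)
    ```

    Doing the tone mapping and the hue assignment in one pass mirrors the paper's point that both steps can be carried out simultaneously in HSI space.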

  16. Bas-Relief Modeling from Normal Images with Intuitive Styles.

    PubMed

    Ji, Zhongping; Ma, Weiyin; Sun, Xianfang

    2014-05-01

    Traditional 3D model-based bas-relief modeling methods are often limited to model-dependent and monotonic relief styles. This paper presents a novel method for digital bas-relief modeling with intuitive style control. Given a composite normal image, the problem discussed in this paper involves generating a discontinuity-free depth field with high compression of depth data while preserving or even enhancing fine details. In our framework, several layers of normal images are composed into a single normal image. The original normal image on each layer is usually generated from 3D models or through other techniques described in this paper. The bas-relief style is controlled by choosing a parameter and setting a targeted height. Bas-relief modeling and stylization are achieved simultaneously by solving a sparse linear system. Different from previous work, our method can be used to freely design bas-reliefs in normal-image space instead of in object space, which makes it possible to use any popular image editing tool for bas-relief modeling. Experiments with a wide range of 3D models and scenes show that our method can effectively generate digital bas-reliefs.

  17. Scheimpflug with computational imaging to extend the depth of field of iris recognition systems

    NASA Astrophysics Data System (ADS)

    Sinharoy, Indranil

    Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits iris image capture to a small volume: the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is surprisingly simple; yet it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. The model revealed that we can register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order-of-magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure conventional image of the same DOF and brightness level.
The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.

  18. A Three - Dimensional Receiver Function Study of the Western United States

    NASA Astrophysics Data System (ADS)

    Lindsey, C.; Gurrola, H.

    2008-12-01

    The western United States has a complex geologic history and has been the focus of many regional-scale PASSCAL seismic studies that investigate depth variations to the Moho, the 410 km discontinuity, and the 660 km discontinuity. Analysis of depth variations to the Moho in relation to topography is important in understanding the isostatic compensation depth, the thermal state of the upper mantle, and boundaries between tectonic provinces. Analysis of the 410 and 660 km discontinuities allows us to determine variations in mantle temperature at these depths and facilitates comparison with tectonic boundaries. This abstract summarizes results from stacking Pds phases throughout the western US using data from all available previous PASSCAL studies in the western US together with data from the EarthScope Transportable Array. These data sets enable us to produce an image over the entire western US from the Pacific coast to the Rocky Mountain front. Common conversion point stacking of Pds phases was performed by back-projecting the data through a 3-D seismic velocity model (surface wave tomography model NA04 by Van der Lee). The images produced show large variations in Moho topography, with an average depth of 39.6 km over the western US and a ±7.2 km standard deviation in depth. As would be expected, the Moho appears to be deepest beneath the Colorado Plateau and central Montana and shallowest throughout the Basin and Range. The Moho also appears very shallow beneath eastern Washington. There is a band of thick crust along the Yellowstone hot spot track. The 410 km discontinuity appears to have a mean depth of 427 km with a standard deviation of ±10.2 km. At this time the images are still very noisy, but in a regional sense the 410 appears deepest beneath the southern part of the image and shallower to the north. Depths to the 660 km discontinuity appear to average 675 km with a standard deviation of ±9.8 km.
    The 660 does not appear to have a north-south change in depth but appears deepest in the eastern part of the image and shallower to the west. This relationship may indicate that the thermal state of the 410 is controlled by high temperatures to the south, associated with the Basin and Range, and cooler temperatures to the north, where subduction is present. The 660 may be controlled by the transition from warm oceanic and transitional lithosphere to the west and cooler continental lithosphere to the east.

  19. Optical clearing of melanoma in vivo: characterization by diffuse reflectance spectroscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Pires, Layla; Demidov, Valentin; Vitkin, I. Alex; Bagnato, Vanderlei; Kurachi, Cristina; Wilson, Brian C.

    2016-08-01

    Melanoma is the most aggressive type of skin cancer, with significant risk of fatality. Due to its pigmentation, light-based imaging and treatment techniques are limited to near the tumor surface, which is inadequate, for example, to evaluate the microvascular density that is associated with prognosis. White-light diffuse reflectance spectroscopy (DRS) and near-infrared optical coherence tomography (OCT) were used to evaluate the effect of a topically applied optical clearing agent (OCA) in melanoma in vivo and to image the microvascular network. DRS was performed using a contact fiber optic probe in the range from 450 to 650 nm. OCT imaging was performed using a swept-source system at 1310 nm. The OCT image data were processed using speckle variance and depth-encoded algorithms. Diffuse reflectance signals decreased with clearing, dropping by ˜90% after 45 min. OCT was able to image the microvasculature in the pigmented melanoma tissue with good spatial resolution up to a depth of ˜300 μm without the use of OCA; improved contrast resolution was achieved with optical clearing to a depth of ˜750 μm in tumor. These findings are relevant to potential clinical applications in melanoma, such as assessing prognosis and treatment responses. Optical clearing may also facilitate the use of light-based treatments such as photodynamic therapy.

  20. Time-of-flight depth image enhancement using variable integration time

    NASA Astrophysics Data System (ADS)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

    Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate a distance based on a large amount of infrared light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times, exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured the moving bar of a metronome with different integration times. The experiment shows the proposed method can effectively remove the motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
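    The depth-transfer-function estimation can be sketched as a conditional-mean lookup built from the joint histogram (a simplified stand-in for the paper's analysis; the binning scheme and empty-bin fallback are assumptions):

    ```python
    import numpy as np

    def depth_transfer_function(d_short, d_long, n_bins=256):
        """For each short-integration depth bin, take the mean of the
        co-located long-integration depths; empty bins fall back to the
        identity mapping (the bin center)."""
        lo, hi = d_short.min(), d_short.max()
        bins = np.linspace(lo, hi, n_bins + 1)
        idx = np.clip(np.digitize(d_short.ravel(), bins) - 1, 0, n_bins - 1)
        sums = np.bincount(idx, weights=d_long.ravel(), minlength=n_bins)
        counts = np.bincount(idx, minlength=n_bins)
        centers = 0.5 * (bins[:-1] + bins[1:])
        mapping = np.where(counts > 0, sums / np.maximum(counts, 1), centers)
        return centers, mapping
    ```

    Applying the resulting mapping to the short-integration image aligns its depths with the long-integration image before the wavelet-domain fusion step.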

  1. An alternative approach to depth of field which avoids the blur circle and uses the pixel pitch

    NASA Astrophysics Data System (ADS)

    Schuster, Norbert

    2015-09-01

    Modern thermal imaging systems increasingly use uncooled detectors. High-volume applications work with detectors that have a reduced pixel count (typically between 200x150 and 640x480), which limits the application of modern image-treatment procedures such as wavefront coding. On the other hand, uncooled detectors demand lenses with fast F-numbers near 1.0. What are the limits on resolution if the target to analyze changes its distance to the camera system? The aim of implementing lens arrangements without any focusing mechanism demands a deeper quantification of the depth-of-field problem. The proposed depth-of-field approach avoids the classic "accepted image blur circle". It is based on a camera-specific depth of focus that is transformed into object space by paraxial relations. The traditional Rayleigh criterion is based on the unaberrated point spread function and delivers a first-order relation for the depth of focus; hence neither the actual lens resolution nor the detector impact is considered. The camera-specific depth of focus respects several camera properties: lens aberrations at the actual F-number, detector size, and pixel pitch. The through-focus MTF, evaluated at the detector's Nyquist frequency, is the basis of the camera-specific depth of focus; it has a nearly symmetric course around the maximum of sharp imaging. The camera-specific depth of focus is thus the axial distance in front of and behind the sharp image plane within which the through-focus MTF stays above 0.25. This depth of focus is transferred into object space by paraxial relations. The result is a generally applicable depth-of-field diagram that can be applied to lenses realizing a lateral magnification range of -0.05…0. Easy-to-handle formulas relate the hyperfocal distance to the borders of the depth of field as a function of the sharp distance. These relations are in line with the classical depth-of-field theory.
    Thermal pictures, taken by different IR-camera cores, illustrate the new approach. The frequently requested graph "MTF versus distance" uses the half-Nyquist frequency as reference. The paraxial transfer of the through-focus MTF into object space distorts the MTF curve: a hard drop at distances closer than the sharp distance, and a smooth drop at farther distances. The formula for a general diffraction-limited through-focus MTF (DLTF) is derived, so arbitrary detector-lens combinations can be discussed. Free variables in this analysis are the waveband, the aperture-based F-number (lens), and the pixel pitch (detector). The DLTF discussion provides physical limits and technical requirements. The development of detectors with pixel pitches smaller than the captured wavelength in the LWIR region poses a special challenge for optical design.
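    For comparison with the approach above, the classical blur-circle relations can be written with the accepted blur circle tied to the pixel pitch (c = k·pitch is an assumption for illustration, not the paper's camera-specific criterion):

    ```python
    def dof_limits(f_mm, f_number, pixel_pitch_mm, s_mm, k=2.0):
        """Classical hyperfocal and depth-of-field limits; all distances in
        millimetres, s_mm is the focused (sharp) distance."""
        c = k * pixel_pitch_mm  # blur criterion derived from pixel pitch
        hyperfocal = f_mm**2 / (f_number * c) + f_mm
        near = hyperfocal * s_mm / (hyperfocal + (s_mm - f_mm))
        far = (float("inf") if s_mm >= hyperfocal
               else hyperfocal * s_mm / (hyperfocal - (s_mm - f_mm)))
        return hyperfocal, near, far
    ```

    Focusing at the hyperfocal distance yields sharpness from roughly half that distance to infinity, consistent with the classical depth-of-field theory the abstract refers to.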

  2. Full-range k-domain linearization in spectral-domain optical coherence tomography.

    PubMed

    Jeon, Mansik; Kim, Jeehyun; Jung, Unsang; Lee, Changho; Jung, Woonggyu; Boppart, Stephen A

    2011-03-10

    A full-bandwidth k-domain linearization method for spectral-domain optical coherence tomography (SD-OCT) is demonstrated. The method uses information of the wavenumber-pixel-position provided by a translating-slit-based wavelength filter. For calibration purposes, the filter is placed either after a broadband source or at the end of the sample path, and the filtered spectrum with a narrowed line width (∼0.5 nm) is incident on a line-scan camera in the detection path. The wavelength-swept spectra are co-registered with the pixel positions according to their central wavelengths, which can be automatically measured with an optical spectrum analyzer. For imaging, the method does not require a filter or a software recalibration algorithm; it simply resamples the OCT signal from the detector array without employing rescaling or interpolation methods. The accuracy of k-linearization is maximized by increasing the k-linearization order, which is known to be a crucial parameter for maintaining a narrow point-spread function (PSF) width at increasing depths. The broadening effect is studied by changing the k-linearization order by undersampling to search for the optimal value. The system provides more position information, surpassing the optimum without compromising the imaging speed. The proposed full-range k-domain linearization method can be applied to SD-OCT systems to simplify their hardware/software, increase their speed, and improve the axial image resolution. The experimentally measured width of PSF in air has an FWHM of 8 μm at the edge of the axial measurement range. At an imaging depth of 2.5 mm, the sensitivity of the full-range calibration case drops less than 10 dB compared with the uncompensated case.
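    For contrast, a conventional software k-linearization, which the resampling-only method above is designed to avoid, interpolates the detector spectrum onto a uniform wavenumber grid (a generic sketch, not the paper's scheme):

    ```python
    import numpy as np

    def linearize_in_k(spectrum, wavelengths_nm):
        """Resample a line-scan-camera spectrum onto a uniform wavenumber
        grid using per-pixel central wavelengths (e.g. measured with a
        translating-slit wavelength filter)."""
        k = 2.0 * np.pi / wavelengths_nm  # wavenumber at each pixel
        order = np.argsort(k)             # k runs opposite to wavelength
        k_uniform = np.linspace(k[order][0], k[order][-1], k.size)
        return k_uniform, np.interp(k_uniform, k[order], spectrum[order])
    ```

    Skipping this interpolation, as the paper does, removes a per-A-scan processing step and the PSF broadening that interpolation error introduces at large depths.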

  3. SU-F-J-179: Commissioning Dosimetric Data of a New 2.5 Megavoltage Imaging Beam from a TrueBeam Linear Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, G

    2016-06-15

    Purpose: Recently a new 2.5 megavoltage (MV) imaging beam has become available on TrueBeam linear accelerators for image guidance, and there is limited information on its beam characteristics. Commissioning dosimetric data for the new imaging beam are necessary to configure it in a treatment planning system so that imaging doses to patients from this beam can be calculated. The purpose of this study is to provide the measured commissioning data recommended for beam configuration in a treatment planning system. Methods: A recently installed TrueBeam linear accelerator is equipped with a new low-energy photon beam with a nominal energy of 2.5 MV, which provides better image quality, in addition to its therapeutic megavoltage beams. Dosimetric characteristics of the 2.5 MV beam were measured for commissioning. An ionization chamber was used to measure dosimetric data, including depth-dose curves and dose profiles at different depths, for field sizes ranging from 5×5 cm² to 40×40 cm². Results: Although the new 2.5 MV beam is a flattening-filter-free (FFF) beam, its dose profiles are much flatter than those of a 6 MV FFF beam. The dose decrease at 20 cm off the central axis is less than 30% for a 40×40 cm² field; this moderately lower off-axis dose benefits image quality. The percentage depth-dose (PDD) values are 53% and 63% for 10×10 cm² and 40×40 cm² fields, respectively. The measured beam output is 0.85 cGy/MU for a reference field size at a depth of 5 cm, obtained according to the AAPM TG-51 protocol. Conclusion: These systematically measured commissioning data are useful for configuring the new imaging beam in a treatment planning system to calculate patient imaging doses resulting from this 2.5 MV beam, which is commonly set as the default in imaging procedures.

  4. Aerial LED signage by use of crossed-mirror array

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirotsugu; Kujime, Ryousuke; Bando, Hiroki; Suyama, Shiro

    2013-03-01

    3D representation improves the significance of digital signage and speeds notification of important points. Real 3D display techniques, such as volumetric 3D displays, are effective for public signs because they provide not only binocular disparity but also motion parallax and other cues, giving a 3D impression even to people with abnormal binocular vision. Our goal is to realize aerial 3D LED signs. We have specially designed and fabricated a reflective optical device that forms an aerial image of LEDs over a wide field angle. The developed device is composed of a crossed-mirror array (CMA), which contains dihedral corner reflectors at each aperture. After double reflection, light rays emitted from an LED converge at the corresponding image point. The depth separation between LED lamps is reproduced as the same depth separation in the floating 3D image. The floating image of the LEDs was formed over a wide range of incident angles, with peak reflectance at 35 deg. The size of the focused beam (the point spread function) agreed with the apparent aperture size.

  5. Digital holography of intracellular dynamics to probe tissue physiology.

    PubMed

    Merrill, Daniel; An, Ran; Turek, John; Nolte, David D

    2015-01-01

    Digital holography provides improved capabilities for imaging through dense tissue. Using a short-coherence source, the digital hologram recorded from backscattered light performs laser ranging that maintains fidelity of information acquired from depths much greater than possible by traditional imaging techniques. Biodynamic imaging (BDI) is a developing technology for live-tissue imaging of up to a millimeter in depth that uses the hologram intensity fluctuations as label-free image contrast and can study tissue behavior in native microenvironments. In this paper BDI is used to investigate the change in adhesion-dependent tissue response in 3D cultures. The results show that increasing density of cellular adhesions slows motion inside tissue and alters the response to cytoskeletal drugs. A clear signature of membrane fluctuations was observed in mid-frequencies (0.1-1 Hz) and was enhanced by the application of cytochalasin-D that degrades the actin cortex inside the cell membrane. This enhancement feature is only observed in tissues that have formed adhesions, because cell pellets initially do not show this signature, but develop this signature only after incubation enables adhesions to form.

  6. SU-E-I-92: Accuracy Evaluation of Depth Data in Microsoft Kinect.

    PubMed

    Kozono, K; Aoki, M; Ono, M; Kamikawa, Y; Arimura, H; Toyofuku, F

    2012-06-01

    Microsoft Kinect has potential for real-time patient position monitoring in diagnostic radiology and radiotherapy. We evaluated the accuracy of its depth image data and the device-to-device variation under various conditions simulating clinical applications in a hospital. The Kinect sensor consists of an infrared depth camera and an RGB camera. We developed a computer program using OpenNI and OpenCV for measuring quantitative distance data. The program displays the depth image obtained from the Kinect sensor on the screen, and the Cartesian coordinates at an arbitrary point selected by mouse click can be measured. A rectangular box without luster (300 × 198 × 50 mm³) was used as the measurement object. The object was placed on the floor at distances ranging from 0 to 400 cm from the sensor in increments of 10 cm, and depth data were measured at 10 points on the planar surface of the box. The measured distance data were calibrated using the least-squares method. The device-to-device variation was evaluated using five Kinect sensors. There was an almost linear relationship between true and measured values. The Kinect sensor was unable to measure at distances of less than 50 cm. Distance calibration was found to be necessary for each individual sensor. The device-to-device variation error for the five Kinect sensors was within 0.46% over the distance range from 50 cm to 2 m. The maximum deviation of the distance data after calibration was 1.1 mm at distances from 50 to 150 cm, and the overall average error of the five sensors was 0.18 mm over the same range. The Kinect sensor has a distance accuracy of about 1 mm if each device is properly calibrated, and should be usable for patient positioning in diagnostic radiology and radiotherapy. © 2012 American Association of Physicists in Medicine.
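The per-device linear calibration by least squares described above can be sketched as follows. The raw readings are invented for illustration; a real calibration would use the measured box distances:

```python
import numpy as np

# Hypothetical raw depth readings (mm) from one Kinect unit versus ground truth.
true_mm     = np.array([500, 1000, 1500, 2000, 2500, 3000, 3500, 4000], float)
measured_mm = np.array([507, 1012, 1519, 2026, 2531, 3040, 3544, 4051], float)

# Per-device linear calibration by least squares: true ~ a * measured + b.
a, b = np.polyfit(measured_mm, true_mm, deg=1)

# Apply the calibration and check the residual error, as in the paper's
# per-sensor evaluation (each of the five sensors would get its own a, b).
corrected = a * measured_mm + b
residual_mm = float(np.max(np.abs(corrected - true_mm)))
print(round(float(a), 4), round(float(b), 1), round(residual_mm, 2))
```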

  7. Pre-stack depth Migration imaging of the Hellenic Subduction Zone

    NASA Astrophysics Data System (ADS)

    Hussni, S. G.; Becel, A.; Schenini, L.; Laigle, M.; Dessa, J. X.; Galve, A.; Vitard, C.

    2017-12-01

    In 365 AD, a major M>8 tsunamigenic earthquake occurred along the southwestern segment of the Hellenic subduction zone. Although this is the largest seismic event ever reported in Europe, fundamental questions remain about the deep geometry of the interplate megathrust, as well as about other faults within the overriding plate potentially connected to it. The main objective here is to image those deep structures, whose depths range between 15 and 45 km, using leading-edge seismic reflection equipment. To this end, a 210-km-long multichannel seismic profile was acquired with the 8-km-long streamer and the 6600 cu. in. source of R/V Marcus Langseth at the end of 2015, during the SISMED cruise. The survey was made possible through a collective effort gathering several labs (Géoazur, LDEO, ISTEP, ENS-Paris, EOST, LDO, Dpt. Geosciences of Pau Univ). A preliminary processing sequence was first applied using CGG's Geovation software, which yielded a post-stack time migration of the collected data, as well as a pre-stack time migration obtained with a model derived from velocity analyses. Using Paradigm software, a pre-stack depth migration was subsequently carried out. This step required some tuning of the pre-processing sequence to improve multiple removal and noise suppression and to better reveal the true geometry of reflectors in depth; this iteration of pre-processing included a parabolic Radon transform, FK filtering, and time-variant band-pass filtering. An initial velocity model was built using depth-converted RMS velocities obtained from SISMED data for the sedimentary layer, complemented at depth with a smoothed version of the tomographic velocities derived from coincident wide-angle data acquired during the 2012 ULYSSE survey. We then performed a Kirchhoff pre-stack depth migration with traveltimes calculated using the Eikonal equation. The velocity model was then tuned through residual velocity analyses to flatten reflections in common-reflection-point gathers. These new results improve the imaging of deep reflectors and even reveal some new features. We will present this work, a comparison with our previously obtained post-stack time migration, and some insights into the new geological structures revealed by the depth imaging.

  8. Optimal Band Ratio Analysis of WORLDVIEW-3 Imagery for Bathymetry of Shallow Rivers (case Study: Sarca River, Italy)

    NASA Astrophysics Data System (ADS)

    Niroumand-Jadidi, M.; Vitti, A.

    2016-06-01

    Optimal Band Ratio Analysis (OBRA) is an efficient technique for bathymetry from optical imagery owing to its robustness to substrate variability, a property that matters most in very shallow rivers where different substrate types can contribute substantially to the total at-sensor radiance. OBRA examines all possible pairs of spectral bands to identify the optimal two-band ratio whose log transformation yields the strongest linear relation with field-measured water depths. This paper investigates the effectiveness of the additional visible and NIR spectral bands of the newly launched WorldView-3 (WV-3) imagery, through OBRA, for retrieving water depths in shallow rivers. To this end, OBRA is performed on a WV-3 image as well as a GeoEye image of a small Alpine river in Italy. In-situ depths were gathered in two river reaches using a precise GPS device. In each testing scenario, 50% of the field data is used for calibration of the model and the remainder as independent check points for accuracy assessment. In general, the effect of changes in water depth is most pronounced at longer wavelengths (i.e., NIR) because of the strong, rapid absorption of light in this spectrum, provided the signal is not saturated. Because the studied river is shallow, the NIR portion of the spectrum is not attenuated so strongly that it fails to reach the riverbed, so using the observed radiance in this spectral range as the denominator yields a strong correlation through OBRA. More specifically, the narrow red-edge, NIR-1, and NIR-2 channels provide a wealth of choices for OBRA compared with the single NIR band of conventional 4-band images (e.g., GeoEye). This advantage of WV-3 imagery also holds for choosing the optimal numerator of the ratio model. The coastal-blue and yellow bands of WV-3 are identified as suitable numerators, whereas only the green band of the GeoEye image produced a reliable correlation between image-derived values and field-measured depths. According to the results, the additional, narrow spectral bands of the WV-3 image lead to an average coefficient of determination of 67% in the two river segments, 10% higher than that obtained from the 4-band GeoEye image. In addition, the RMSEs of the depth estimates for the optimal band ratio are 4 cm and 6 cm for the WV-3 and GeoEye images, respectively.
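The OBRA search itself — an exhaustive scan over band pairs for the log-ratio most linearly related to depth — can be sketched as below. The synthetic radiances, band indices, and attenuation coefficients are illustrative stand-ins, not the WV-3 data; note how the strongly depth-attenuated "NIR" band ends up in the optimal ratio:

```python
import numpy as np

# Synthetic scene: radiance in 8 bands for 200 pixels with known depths.
rng = np.random.default_rng(0)
n, n_bands = 200, 8
depth = rng.uniform(0.2, 1.5, n)                  # field-measured depths (m)
radiance = rng.uniform(50.0, 60.0, (n, n_bands))  # substrate/illumination noise
radiance[:, 6] *= np.exp(-2.0 * depth)            # hypothetical NIR: strong absorption
radiance[:, 2] *= np.exp(-0.3 * depth)            # hypothetical green: weak absorption

def obra(radiance, depth):
    """Exhaustive OBRA search: the (numerator, denominator) band pair whose
    log-ratio X = ln(R_i / R_j) correlates most linearly with depth (max R^2)."""
    best_pair, best_r2 = None, -1.0
    for i in range(radiance.shape[1]):
        for j in range(radiance.shape[1]):
            if i == j:
                continue
            x = np.log(radiance[:, i] / radiance[:, j])
            r2 = np.corrcoef(x, depth)[0, 1] ** 2
            if r2 > best_r2:
                best_pair, best_r2 = (i, j), r2
    return best_pair, best_r2

pair, r2 = obra(radiance, depth)
print(pair, round(float(r2), 3))
```

Once the optimal pair is found, a linear regression of depth on the winning log-ratio (calibrated on half the field data, as in the paper) gives the depth-retrieval model.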

  9. Micromotor endoscope catheter for in vivo, ultrahigh-resolution optical coherence tomography.

    PubMed

    Herz, P R; Chen, Y; Aguirre, A D; Schneider, K; Hsiung, P; Fujimoto, J G; Madden, K; Schmitt, J; Goodnow, J; Petersen, C

    2004-10-01

    A distally actuated, rotational-scanning micromotor endoscope catheter probe is demonstrated for ultrahigh-resolution in vivo endoscopic optical coherence tomography (OCT) imaging. The probe permits focus adjustment for visualization of tissue morphology at varying depths with improved transverse resolution compared with standard OCT imaging probes. The distal actuation avoids nonuniform scanning motion artifacts that are present with other probe designs and can permit a wider range of imaging speeds. Ultrahigh-resolution endoscopic imaging is demonstrated in a rabbit with <4-microm axial resolution by use of a femtosecond Cr:forsterite laser light source. The micromotor endoscope catheter probe promises to improve OCT imaging performance in future endoscopic imaging applications.

  10. Three-dimensional imaging of the S-velocity structure for the crust and the upper mantle beneath the Arabian Sea from Rayleigh wave analysis

    NASA Astrophysics Data System (ADS)

    Corchete, V.

    2017-04-01

    A 3D image of the S-velocity structure of the Arabian Sea crust and upper mantle is presented in this paper, determined by means of Rayleigh wave analysis for depths ranging from zero to 300 km. The crust and upper mantle structure of this region has never before been the subject of a surface-wave tomography survey. The Moho map produced in the present study is a new result, in which crustal thickening beneath the Arabian Fan sediments can be observed; this thickening can be interpreted as a quasi-continental oceanic transitional structure. A crustal thickness of up to 20 km is also observed for the Murray Ridge system in this Moho map, which may reflect the fact that the Murray Ridge system consists of Indian continental crust. This continental crust is extremely thinned to the southwest of the region, as shown in the Moho map, and that area can be interpreted as oceanic in origin. In the depth range from 30 to 60 km, the S-velocity has its lowest values in the Carlsberg Ridge region, because it is the youngest region of the study area. In the depth range from 60 to 105 km, the S-velocity pattern is very similar to that of the previous depth range, except in the regions where the asthenosphere is reached, which show a low S-velocity pattern. The lithosphere-asthenosphere boundary (LAB), or equivalently the lithosphere thickness, determined in the present study is also a new result; the lithosphere thickness of the Arabian Fan is estimated at 60-70 km. The low lithospheric thickness observed in the LAB map for the Arabian Fan shows that this region may lie in the transition zone between continental and oceanic structure. Finally, a low-velocity zone (LVZ) has been determined for the whole study area, located between the LAB and the base of the asthenosphere (or equivalently the lithosphere-asthenosphere system thickness). The asthenosphere-base map calculated in the present study is also a new result.

  11. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles, which leads to a multi-view stereo problem. To reduce the complexity, we divide it into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. Each estimate is characterized by its variance and is subsequently updated, in a Kalman-like fashion, with the estimates from other micro-images. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated with a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach and show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which eases finding stereo correspondences. In contrast to monocular visual odometry approaches, the calibration of the individual depth maps makes the scale of the scene observable, and the light-field information promises better tracking capability than the monocular case. As a result, the depth information gained by the plenoptic-camera-based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
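The "Kalman-like" per-pixel update described above can be illustrated with plain inverse-variance fusion of two depth hypotheses; this is a minimal stand-in for the paper's update rule, and all numbers are invented:

```python
# Inverse-variance fusion of two depth estimates for one pixel: the estimate
# with the smaller variance gets the larger weight, and the fused variance is
# smaller than either input variance.
def fuse(d1, var1, d2, var2):
    """Fuse two estimates (d1, var1) and (d2, var2) of the same depth."""
    w = var2 / (var1 + var2)            # weight on d1
    d = w * d1 + (1.0 - w) * d2
    var = (var1 * var2) / (var1 + var2)
    return d, var

# Two virtual-depth estimates of the same scene point from two micro-images.
d, var = fuse(2.0, 0.04, 2.2, 0.09)
print(round(d, 3), round(var, 4))       # -> 2.062 0.0277
```

Repeating this update as each additional micro-image observation arrives yields the probabilistic depth map: a (depth, variance) pair per pixel.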

  12. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capture, and a system prototype built around it, are presented. For 3D capture, the system applies the time-of-flight (TOF) principle by means of the 20 MHz high-speed micro-optical image modulator, a so-called optical shutter. The high-speed image modulation is obtained through the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The proposed optical shutter enables capture of full HD depth images with millimeter-scale depth accuracy, the largest depth-image resolution among state-of-the-art systems, which have been limited to VGA. The 3D camera prototype uses a concurrent color/depth sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth imaging capability has a significant impact on the 3D ecosystem in the IT industry, particularly as a 3D image sensing means for 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, and characterization, along with the 3D camera system prototype and image test results.

  13. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    PubMed

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating the perception of depth have relied on a single type of depth cue presented in non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues, natural stereoscopic images were used in this study. Using slow cortical potentials and source localization, we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends previous functional imaging work, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility of separating the processing of different depth cues.

  14. The influence of structure depth on image blurring of micrometres-thick specimens in MeV transmission electron imaging.

    PubMed

    Wang, Fang; Sun, Ying; Cao, Meng; Nishi, Ryuji

    2016-04-01

    This study investigates the influence of structure depth on image blurring in micrometres-thick films by experiment and simulation with a conventional transmission electron microscope (TEM). First, ultra-high-voltage electron microscope (ultra-HVEM) images of nanometre gold particles embedded in thick epoxy-resin films were acquired experimentally and compared with simulated images. Then, the variation of image blurring for gold particles at different depths was evaluated by calculating the particle diameter. The results showed that image blurring increased as depth decreased, and this depth dependence was more apparent for thicker specimens. Fortunately, greater particle depth entails less image blurring, even for a 10-μm-thick epoxy-resin film. Electron tomography revealed how the quality of a 3D reconstruction of particle structures in thick specimens depends on depth. The evolution of image blurring with structure depth is determined mainly by multiple elastic scattering effects: thick specimens of heavier materials produce more blurring because of the larger lateral spread of electrons after scattering from the structure. Nevertheless, increasing the electron energy to 2 MeV can reduce blurring and produce acceptable image quality for thick specimens in the TEM. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Fine crustal and uppermost mantle S-wave velocity structure beneath the Tengchong volcanic area inferred from receiver function and surface-wave dispersion: constraints on magma chamber distribution

    NASA Astrophysics Data System (ADS)

    Li, Mengkui; Zhang, Shuangxi; Wu, Tengfei; Hua, Yujin; Zhang, Bo

    2018-03-01

    The Tengchong volcanic area is located on the southeastern margin of the collision zone between the Indian and Eurasian Plates and is one of the youngest intraplate volcano groups in mainland China. Imaging the S-wave velocity structure of the crust and uppermost mantle beneath the Tengchong volcanic area is an important means of improving our understanding of its volcanic activity and seismicity. In this study, we analyze teleseismic data from nine broadband seismic stations in the Tengchong Earthquake Monitoring Network and image the crustal and uppermost-mantle S-wave velocity structure by joint analysis of receiver functions and surface-wave dispersion. The results reveal widely distributed low-velocity zones. We find four possible magma chambers in the upper-to-middle crust and one in the uppermost mantle. The uppermost-mantle chamber lies in the depth range from 55 to 70 km, while the four crustal chambers occur at different depths, generally ranging from 7 to 25 km; they may be the heat sources for the high geothermal activity at the surface. Based on this fine S-wave velocity structure, we propose a model for the distribution of the magma chambers.

  16. Laser range profiling for small target recognition

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove; Tulldahl, Michael

    2017-03-01

    Long-range identification (ID), or ID at closer range of small targets, is limited in imaging by the demand for very high transverse sensor resolution. This motivates the study of one-dimensional laser techniques for target ID, including laser vibrometry and laser range profiling. Laser vibrometry can give good results but is not always robust, since it requires certain vibrating parts of the target to be in the field of view. Laser range profiling is attractive because the maximum range can be substantial, especially for a small laser beam width. A range profiler can also be used in a scanning mode to detect targets within a sector, and the same laser can be used for active imaging when the target comes closer and is angularly resolved. Our laser range profiler is based on a laser with a pulse width of 6 ns (full width at half maximum). This paper shows both experimental and simulated results for laser range profiling of small boats out to 6-7 km range and of an unmanned aerial vehicle (UAV) mockup at close range (1.3 km). The naval experiments took place in the Baltic Sea using many other active and passive electro-optical sensors in addition to the profiling system. The UAV experiments showed the need for high range resolution, so we used a photon-counting system in addition to the more conventional profiler used in the naval experiments. The paper shows the influence of target pose and range resolution on classification capability. The typical resolution (in our case 0.7 m) obtainable with a conventional range-finder type of sensor can be used to classify large targets with a depth structure of 5-10 m or more, but for smaller targets such as a UAV a high resolution (in our case 7.5 mm) is needed to reveal depth structures and surface shapes. The paper also shows the need for 3D target information to build libraries for comparing measured and simulated range profiles. At closer ranges, full 3D images should be preferable.
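The gap between the 0.7 m and 7.5 mm systems follows the textbook pulse-limited range resolution ΔR ≈ cτ/2. A quick check, noting that the 50 ps pulse width used for the photon-counting system below is an assumption chosen to reproduce the quoted 7.5 mm, not a figure from the paper:

```python
C = 299_792_458.0  # speed of light, m/s

def pulse_range_resolution(pulse_width_s):
    """Rule of thumb: two surfaces closer than c*tau/2 merge into one echo."""
    return 0.5 * C * pulse_width_s

r_coarse = pulse_range_resolution(6e-9)    # the 6 ns profiler
r_fine = pulse_range_resolution(50e-12)    # assumed ~50 ps photon-counting system
print(round(r_coarse, 2), round(r_fine, 4))  # -> 0.9 0.0075
```

The 6 ns pulse thus limits resolution to roughly a metre (consistent with the quoted 0.7 m, which also depends on the detection bandwidth and peak-finding method), while a picosecond-class system resolves the millimetre-scale depth structure of a UAV.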

  17. Ultrafast Phase Mapping of Thin-Sections from An Apollo 16 Drive Tube - a New Visualisation of Lunar Regolith

    NASA Technical Reports Server (NTRS)

    Botha, Pieter; Butcher, Alan R.; Horsch, Hana; Rickman, Doug; Wentworth, Susan J.; Schrader, Christian M.; Stoeser, Doug; Benedictus, Aukje; Gottlieb, Paul; McKay, David

    2008-01-01

    Polished thin-sections of samples extracted from Apollo drive tubes provide unique insights into the structure of the Moon's regolith at various landing sites. In particular, they allow the mineralogy and texture of the regolith to be studied as a function of depth. Much has been written about such thin-sections based on optical, SEM and EPMA studies, in terms of their essential petrographic features, but there has been little attempt to quantify these aspects from a spatial perspective. In this study, we report the findings of experimental analysis of two thin-sections (64002, 6019, depth range 5.0 - 8.0 cm & 64001, 6031, depth range 50.0 - 53.1 cm), from a single Apollo 16 drive tube using QEMSCAN . A key feature of the method is phase identification by ultrafast energy dispersive x-ray mapping on a pixel-by-pixel basis. By selecting pixel resolutions ranging from 1 - 5 microns, typically 8,500,000 individual measurement points can be collected on a thin-section. The results we present include false colour digital images of both thin-sections. From these images, information such as phase proportions (major, minor and trace phases), particle textures, packing densities, and particle geometries, has been quantified. Parameters such as porosity and average phase density, which are of geomechanical interest, can also be calculated automatically. This study is part of an on-going investigation into spatial variation of lunar regolith and NASA's ISRU Lunar Simulant Development Project.

  18. Monte Carlo simulation of the spatial resolution and depth sensitivity of two-dimensional optical imaging of the brain

    PubMed Central

    Tian, Peifang; Devor, Anna; Sakadžić, Sava; Dale, Anders M.; Boas, David A.

    2011-01-01

    Absorption- or fluorescence-based two-dimensional (2-D) optical imaging is widely employed in functional brain imaging. The image is a weighted sum of the real signal from tissue at different depths; this weighting function is defined as "depth sensitivity." Characterizing depth sensitivity and spatial resolution is important for interpreting functional imaging data, but light scattering and absorption in biological tissues leave our knowledge of them incomplete. We use Monte Carlo simulations to carry out a systematic study of spatial resolution and depth sensitivity for 2-D optical imaging methods with configurations typically encountered in functional brain imaging. We found the following: (i) the spatial resolution is <200 μm for NA ≤0.2 or focal plane depth ≤300 μm. (ii) More than 97% of the signal comes from the top 500 μm of the tissue. (iii) For activated columns with lateral size larger than the spatial resolution, changing the numerical aperture (NA) and focal plane depth does not affect depth sensitivity. (iv) For either smaller columns or large columns covered by surface vessels, increasing NA and/or focal plane depth may improve depth sensitivity at deeper layers. Our results provide valuable guidance for the optimization of optical imaging systems and for data interpretation. PMID:21280912
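A toy Monte Carlo in the same spirit (a crude 1-D random walk, not the study's full 3-D tissue model with absorption and realistic phase functions; all parameters are illustrative) shows why the detected signal is dominated by shallow depths:

```python
import math
import random

# Photon packets random-walk in a semi-infinite scattering slab. For packets
# that re-emerge at the surface (the "detected" ones), record the deepest
# point visited -- a crude proxy for the depth-sensitivity profile.
random.seed(1)
mu_s = 10.0        # scattering events per mm (mean free path 100 um)
n_photons = 20000
max_events = 200   # packets that wander too long are discarded

deepest = []
for _ in range(n_photons):
    z, cz = 0.0, 1.0                 # depth (mm), direction cosine (into tissue)
    zmax = 0.0
    for _ in range(max_events):
        z += cz * (-math.log(random.random()) / mu_s)  # exponential step length
        if z <= 0.0:                 # escaped back through the surface: detected
            deepest.append(zmax)
            break
        zmax = max(zmax, z)
        cz = random.uniform(-1.0, 1.0)                 # isotropic rescatter (1-D)

frac_shallow = sum(d < 0.5 for d in deepest) / len(deepest)
print(round(frac_shallow, 2))
```

Even this crude model concentrates most of the detected signal in the top 0.5 mm, qualitatively matching finding (ii) above; the study's 3-D simulation with absorption makes the falloff steeper still.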

  19. Pulse Based Time-of-Flight Range Sensing.

    PubMed

    Sarbolandi, Hamed; Plack, Markus; Kolb, Andreas

    2018-05-23

    Pulse-based Time-of-Flight (PB-ToF) cameras are an attractive alternative range imaging approach, compared to the widely commercialized Amplitude Modulated Continuous-Wave Time-of-Flight (AMCW-ToF) approach. This paper presents an in-depth evaluation of a PB-ToF camera prototype based on the Hamamatsu area sensor S11963-01CR. We evaluate different ToF-related effects, i.e., temperature drift, systematic error, depth inhomogeneity, multi-path effects, and motion artefacts. Furthermore, we evaluate the systematic error of the system in more detail, and introduce novel concepts to improve the quality of range measurements by modifying the mode of operation of the PB-ToF camera. Finally, we describe the means of measuring the gate response of the PB-ToF sensor and using this information for PB-ToF sensor simulation.
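The generic two-gate ranging principle that pulse-based ToF sensors build on can be sketched as follows; this is an idealized textbook model, not the Hamamatsu S11963-01CR's actual gate response (which the paper measures precisely because the ideal model does not hold):

```python
C = 299_792_458.0  # speed of light, m/s

def pb_tof_distance(q1, q2, t_pulse):
    """Distance from charges q1, q2 integrated in two consecutive shutter
    gates of width t_pulse: the echo of the emitted pulse straddles the gate
    boundary, and the charge split encodes the round-trip delay (ideal model,
    no ambient light or nonlinearity)."""
    return 0.5 * C * t_pulse * q2 / (q1 + q2)

# Example: 30 ns pulse, echo delayed so that 40% of it falls into gate 2.
d = pb_tof_distance(q1=0.6, q2=0.4, t_pulse=30e-9)
print(round(d, 3))  # -> 1.799 m
```

Deviations of the real gate response from this ideal rectangular model are exactly the "systematic error" the paper characterizes and feeds into its sensor simulation.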

  20. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    USGS Publications Warehouse

    Legleiter, Carl

    2016-01-01

    Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R² = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.
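The quantile-transformation idea at the heart of IDQT can be sketched as follows; this is an illustrative reimplementation on synthetic data, not the USGS code, and both distributions are invented:

```python
import numpy as np

# Synthetic inputs: a depth-related image variable (e.g., the MNF-derived
# band) for many pixels, and an independent sample of field-measured depths.
rng = np.random.default_rng(42)
pixel_values = rng.normal(100.0, 15.0, 5000)
field_depths = rng.gamma(2.0, 0.4, 800)     # non-negative depths (m)

def idqt(values, depths):
    """Map each pixel value to the depth at the same empirical quantile, so
    the distribution of predicted depths matches the field distribution by
    construction (here, larger value -> larger depth)."""
    ranks = np.argsort(np.argsort(values))           # 0..n-1 rank of each value
    quantiles = (ranks + 0.5) / len(values)
    return np.quantile(depths, quantiles)

est = idqt(pixel_values, field_depths)
print(float(est.min()) >= 0.0)  # no negative depth estimates, by construction
```

Because the calibration is aspatial, the field depths need not be co-located with specific pixels; the image alone determines where each depth lands, which is the source of IDQT's tolerance to field/image misalignment noted above.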

  1. Volumetric 3D display with multi-layered active screens for enhanced depth perception (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Rin; Park, Min-Kyu; Choi, Jun-Chan; Park, Ji-Sub; Min, Sung-Wook

    2016-09-01

    Three-dimensional (3D) display technology has been studied actively because it can offer more realistic images than conventional 2D displays. Various psychological factors, such as accommodation, binocular parallax, convergence, and motion parallax, are used to recognize a 3D image. Glasses-type 3D displays use only binocular disparity among the 3D depth cues. However, this method causes visual fatigue and headaches due to accommodation conflict and distorted depth perception. Thus, holographic and volumetric displays are expected to be ideal 3D displays. Holographic displays can represent realistic images satisfying all factors of depth perception, but they require a tremendous amount of data and fast signal processing. Volumetric 3D displays represent images using voxels, which occupy physical volume; however, a large amount of data is required to represent depth information on voxels. To encode 3D information simply, a compact depth-fused 3D (DFD) display is introduced, which creates a polarization-distributed depth map (PDDM) image containing both a 2D color image and a depth image. In this paper, a new volumetric 3D display system is shown using PDDM images controlled by a polarization controller. To introduce the PDDM image, the polarization states of light passing through a spatial light modulator (SLM) were analyzed with Stokes parameters as a function of gray level. Based on this analysis, the polarization controller is designed to convert the PDDM image into sectioned depth images. After synchronizing the PDDM images with the active screens, the reconstructed 3D image can be realized. Acknowledgment: This work was supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT and Future Planning, Korea.

  2. Using High Frequency Focused Water-Coupled Ultrasound for 3-D Surface Depression Profiling

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Whalen, Mike F.; Hendricks, J. Lynne; Bodis, James R.

    1999-01-01

    Surface topography is an important variable in the performance of many industrial components and is normally measured with diamond-tip profilometry over a small area or with optical scattering methods over larger areas. A prior study demonstrated that focused air-coupled ultrasound at 1 MHz can profile surfaces with 25 micron depth resolution and 400 micron lateral resolution over a 1.4 mm depth range. In this article, we address whether higher-frequency focused water-coupled ultrasound can improve on these specifications. Focused ultrasonic transducers at 10 and 25 MHz were employed in the water-coupled mode. Time-of-flight images of the sample surface were acquired and converted to depth/surface-profile images using the simple relation d = V*t/2 between distance (d), time-of-flight (t), and the velocity of sound in water (V). Results are compared for the two frequencies used and with those from the 1 MHz air-coupled configuration.
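
    The quoted time-of-flight conversion is a one-liner; the sound speed used below is an assumed textbook value for water near room temperature, not a figure from the paper:

```python
V_WATER = 1482.0  # speed of sound in water at ~20 °C, m/s (assumed value)

def tof_to_depth(t_seconds, v=V_WATER):
    """Convert pulse-echo time of flight to depth via d = V*t/2.
    The factor of 2 accounts for the pulse traveling to the surface
    and back to the transducer."""
    return v * t_seconds / 2.0

# A 2 us round trip corresponds to ≈ 1.48 mm of water path
print(tof_to_depth(2e-6))
```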

  3. Dual-modal three-dimensional imaging of single cells with isometric high resolution using an optical projection tomography microscope

    NASA Astrophysics Data System (ADS)

    Miao, Qin; Rahn, J. Richard; Tourovskaia, Anna; Meyer, Michael G.; Neumann, Thomas; Nelson, Alan C.; Seibel, Eric J.

    2009-11-01

    The practice of clinical cytology relies on bright-field microscopy using absorption dyes like hematoxylin and eosin in the transmission mode, while research microscopy relies on fluorescence microscopy in the epi-illumination mode. The optical projection tomography microscope is an optical microscope that can generate 3-D images of single cells with isometric high resolution in both absorption and fluorescence modes. Although the depth of field of the microscope objective is in the submicron range, it can be extended by scanning the objective's focal plane. The extended-depth-of-field image is similar to a projection in conventional x-ray computed tomography. Cells suspended in optical gel flow through a custom-designed microcapillary. Multiple pseudoprojection images are taken by rotating the microcapillary. After these pseudoprojection images are aligned, computed tomography methods are applied to create the 3-D reconstruction. 3-D reconstructed images of single cells are shown in both absorption and fluorescence modes. Fluorescence spatial resolution is measured at 0.35 μm in both axial and lateral dimensions. Since fluorescence and absorption images are taken in two different rotations, mechanical error may cause misalignment of the 3-D images. This mechanical error is estimated to be within the resolution of the system.

  4. Books about Dads: In Celebration of Father's Day.

    ERIC Educational Resources Information Center

    Johnson, Denise

    2001-01-01

    Discusses four children's books that convey the depth of emotion and range of images universally projected by the word father: "Reading with Dad" (Richard Jorgensen); "Lots of Dads" (Shelley Rotner and Sheila M. Kelly); "In Daddy's Arms I am Tall: African Americans Celebrating Fathers" (illustrated by Javaka Steptoe);…

  5. Generalized Chirp Scaling Combined with Baseband Azimuth Scaling Algorithm for Large Bandwidth Sliding Spotlight SAR Imaging

    PubMed Central

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-01-01

    This paper presents an efficient and precise imaging algorithm for large-bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high-order phase coupling along the range and azimuth dimensions, and this coupling causes defocusing in both dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, based on the GCS algorithm. It successfully mitigates the defocusing along the range dimension of a sub-aperture of the large-bandwidth sliding spotlight SAR, as well as the high-order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing can be achieved by this azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large-bandwidth sliding spotlight SAR data, and show that considerable improvements in focus depth and imaging accuracy are obtained. PMID:28555057

  6. The Multiwavelength Survey by Yale-Chile (MUSYC): Deep Near-Infrared Imaging and the Selection of Distant Galaxies

    NASA Astrophysics Data System (ADS)

    Quadri, Ryan; Marchesini, Danilo; van Dokkum, Pieter; Gawiser, Eric; Franx, Marijn; Lira, Paulina; Rudnick, Gregory; Urry, C. Megan; Maza, José; Kriek, Mariska; Barrientos, L. Felipe; Blanc, Guillermo A.; Castander, Francisco J.; Christlein, Daniel; Coppi, Paolo S.; Hall, Patrick B.; Herrera, David; Infante, Leopoldo; Taylor, Edward N.; Treister, Ezequiel; Willis, Jon P.

    2007-09-01

    We present deep near-infrared JHK imaging of four 10' × 10' fields. The observations were carried out as part of the Multiwavelength Survey by Yale-Chile (MUSYC) with ISPI on the CTIO 4 m telescope. The typical point-source limiting depths are J ~ 22.5, H ~ 21.5, and K ~ 21 (5 σ Vega). The effective seeing in the final images is ~1.0″. We combine these data with MUSYC UBVRIz imaging to create K-selected catalogs that are unique for their uniform size, depth, filter coverage, and image quality. We investigate the rest-frame optical colors and photometric redshifts of galaxies that are selected using common color selection techniques, including distant red galaxies (DRGs), star-forming and passive BzKs, and the rest-frame UV-selected BM, BX, and Lyman break galaxies (LBGs). These techniques are effective at isolating large samples of high-redshift galaxies, but none provide complete or uniform samples across the targeted redshift ranges. The DRG and BM/BX/LBG criteria identify populations of red and blue galaxies, respectively, as they were designed to do. The star-forming BzKs have a very wide redshift distribution, extending down to z ~ 1, a wide range of colors, and may include galaxies with very low specific star formation rates. In comparison, the passive BzKs are fewer in number, have a different distribution of K magnitudes, and have a somewhat different redshift distribution. By combining either the DRG and BM/BX/LBG criteria, or the star-forming and passive BzK criteria, it appears possible to define a reasonably complete sample of galaxies to our flux limit over specific redshift ranges. However, the redshift dependence of both the completeness and sampled range of rest-frame colors poses an ultimate limit to the usefulness of these techniques.

  7. Direct visual observations of nanoparticles in the Celtic Sea

    NASA Astrophysics Data System (ADS)

    Rusiecka, D.; Gledhill, M.; Achterberg, E. P.; Elgy, C.; Connelly, D.

    2016-02-01

    Shelf seas are a substantial source of dissolved iron and other biologically essential dissolved trace metals (dTM) to the open ocean. The concentration of dTM in seawater is strongly influenced by their physico-chemical forms. The role of submicron colloids in the stabilization and transport of dTM in soil porewaters has already been recognized. However, the influence of nanoparticles (NP) on dTM stabilization in marine systems, and consequently on their long-range off-shelf transport, is still very poorly constrained. The characterization of marine NP is fundamental to understanding their chemical behaviour. Here, we report the first direct visual investigation into the formation, water-column size distribution, and seasonal variation of NP in the Celtic Sea, with supporting examination of particle morphology. Samples were collected from surface (depth range), intermediate (depth range), and deep (depth range) waters in December 2014, April 2015, and July 2015. Nanoparticles (>3 kDa) were concentrated by stirred-cell ultrafiltration and imaged using atomic force microscopy and transmission electron microscopy. NP size distributions from the spring cruise showed that NP mainly existed in the smallest 0.4-1 nm fraction in surface and bottom waters, and the summer season was likewise dominated by the 0.4-1 nm fraction at all depths. In winter, NP in bottom waters were found predominantly in the larger 1-2 nm fraction.

  8. Depth resolved hyperspectral imaging spectrometer based on structured light illumination and Fourier transform interferometry

    PubMed Central

    Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C.

    2014-01-01

    A depth-resolved hyperspectral imaging spectrometer can provide depth-resolved imaging in both the spatial and spectral domains. Images acquired through a standard imaging Fourier transform spectrometer lack depth resolution. By post-processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured-light illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens, including a fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated by quantifying spectra from 3D-resolved features in biological specimens. The system has demonstrated a depth resolution of 1.8 μm and a spectral resolution of 7 nm. PMID:25360367
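
    A minimal numpy-only sketch of the HiLo idea referenced above; sigma and eta are illustrative tuning constants, and the published method uses a more careful local-contrast estimator than the simple difference image here:

```python
import numpy as np

def lowpass(img, sigma):
    """Gaussian low-pass filter applied in the Fourier domain (numpy only)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    h = np.exp(-2.0 * (np.pi * sigma) ** 2 * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

def hilo(uniform_img, structured_img, sigma=4.0, eta=1.0):
    """Minimal HiLo sketch: high spatial frequencies are taken directly
    from the uniform-illumination image (they are inherently optically
    sectioned), while in-focus low frequencies are recovered from the
    local modulation of the structured-illumination image, since only
    in-focus regions retain the illumination pattern."""
    hi = uniform_img - lowpass(uniform_img, sigma)
    lo = lowpass(np.abs(uniform_img - structured_img), sigma)
    return hi + eta * lo
```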

  9. Development and Applications of Laminar Optical Tomography for In Vivo Imaging

    NASA Astrophysics Data System (ADS)

    Burgess, Sean A.

    Laminar optical tomography (LOT) is an optical imaging technique capable of making depth-resolved measurements of absorption and fluorescence contrast in scattering tissue. LOT was first demonstrated in 2004 by Hillman et al [1]. The technique combines a non-contact laser scanning geometry, similar to a low magnification confocal microscope, with the imaging principles of diffuse optical tomography (DOT). This thesis describes the development and application of a second generation LOT system, which acquires both fluorescence and multi-wavelength measurements simultaneously and is better suited for in vivo measurements. Chapter 1 begins by reviewing the interactions of light with tissue that form the foundation of optical imaging. A range of related optical imaging techniques and the basic principles of LOT imaging are then described. In Chapter 2, the development of the new LOT imaging system is described including the implementation of a series of interfaces to allow clinical imaging. System performance is then evaluated on a range of imaging phantoms. Chapter 3 describes two in vivo imaging applications explored using the second generation LOT system, first in a clinical setting where skin lesions were imaged, and then in a laboratory setting where LOT imaging was performed on exposed rat cortex. The final chapter provides a brief summary and describes future directions for LOT. LOT has the potential to find applications in medical diagnostics, surgical guidance, and in-situ monitoring owing to its sensitivity to absorption and fluorescence contrast as well as its ability to provide depth sensitive measures. Optical techniques can characterize blood volume and oxygenation, two important biological parameters, through measurements at different wavelengths. 
Fluorescence measurements, either from autofluorescence or fluorescent dyes, have shown promise for identifying and analyzing lesions in various epithelial tissues including skin [2, 3], colon [4], esophagus [5, 6], oral mucosa [7, 8], and cervix [9]. The desire to capture these types of measurements with LOT motivated much of the work presented here.

  10. Structure of the California Coast Ranges and San Andreas Fault at SAFOD from seismic waveform inversion and reflection imaging

    USGS Publications Warehouse

    Bleibinhaus, F.; Hole, J.A.; Ryberg, T.; Fuis, G.S.

    2007-01-01

    A seismic reflection and refraction survey across the San Andreas Fault (SAF) near Parkfield provides a detailed characterization of crustal structure across the location of the San Andreas Fault Observatory at Depth (SAFOD). Steep-dip prestack migration and frequency domain acoustic waveform tomography were applied to obtain highly resolved images of the upper 5 km of the crust for 15 km on either side of the SAF. The resulting velocity model constrains the top of the Salinian granite with great detail. Steep-dip reflection seismic images show several strong-amplitude vertical reflectors in the uppermost crust near SAFOD that define an ???2-km-wide zone comprising the main SAF and two or more local faults. Another prominent subvertical reflector at 2-4 km depth ???9 km to the northeast of the SAF marks the boundary between the Franciscan terrane and the Great Valley Sequence. A deep seismic section of low resolution shows several reflectors in the Salinian crust west of the SAF. Two horizontal reflectors around 10 km depth correlate with strains of seismicity observed along-strike of the SAF. They represent midcrustal shear zones partially decoupling the ductile lower crust from the brittle upper crust. The deepest reflections from ???25 km depth are interpreted as crust-mantle boundary. Copyright 2007 by the American Geophysical Union.

  11. Visual saliency detection based on in-depth analysis of sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Shen, Siqiu; Ning, Chen

    2018-03-01

    Visual saliency detection has been receiving great attention in recent years since it can facilitate a wide range of applications in computer vision. A variety of saliency models have been proposed based on different assumptions, among which saliency detection via sparse representation is one of the newly arisen approaches. However, most existing sparse-representation-based saliency detection methods utilize only partial characteristics of sparse representation and lack in-depth analysis, so they may have limited detection performance. Motivated by this, this paper proposes an algorithm for detecting visual saliency based on in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned from randomly sampled image patches by means of inner-product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on in-depth analysis of the sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating the sparse reconstruction errors, the most salient atoms, which tend to come from the most salient region, are screened out and removed from the discriminative dictionaries. Finally, an effective method is exploited for saliency map generation with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.

  12. Optical mesoscopy without the scatter: broadband multispectral optoacoustic mesoscopy

    PubMed Central

    Chekkoury, Andrei; Gateau, Jérôme; Driessen, Wouter; Symvoulidis, Panagiotis; Bézière, Nicolas; Feuchtinger, Annette; Walch, Axel; Ntziachristos, Vasilis

    2015-01-01

    Optical mesoscopy extends the capabilities of biological visualization beyond the limited penetration depth achieved by microscopy. However, imaging of opaque organisms or tissues larger than a few hundred micrometers requires invasive tissue sectioning or chemical treatment of the specimen to clear photon scattering, an invasive process that remains depth-limited regardless. We developed a previously unreported broadband optoacoustic mesoscopy as a tomographic modality to enable imaging of optical contrast through several millimeters of tissue, without the need for chemical treatment of tissues. We show that the unique combination of three-dimensional projections over a broad 500 kHz-40 MHz frequency range with multi-wavelength illumination is necessary to render broadband multispectral optoacoustic mesoscopy (2B-MSOM) superior to previous optical or optoacoustic mesoscopy implementations. PMID:26417486

  13. Spatiotemporal closure of fractional laser-ablated channels imaged by optical coherence tomography and reflectance confocal microscopy.

    PubMed

    Banzhaf, Christina A; Wind, Bas S; Mogensen, Mette; Meesters, Arne A; Paasch, Uwe; Wolkerstorfer, Albert; Haedersdal, Merete

    2016-02-01

    Optical coherence tomography (OCT) and reflectance confocal microscopy (RCM) offer high-resolution optical imaging of the skin, which may provide benefit in the context of laser-assisted drug delivery. We aimed to characterize postoperative healing of ablative fractional laser (AFXL)-induced channels and the dynamics of their spatiotemporal closure using in vivo OCT and RCM. The inner forearm of healthy subjects (n = 6) was exposed to a 10,600 nm fractional CO2 laser using 5 and 25% densities, 120 μm beam diameter, and 5, 15, and 25 mJ/microbeam. Treatment sites were scanned with OCT to evaluate closure of AFXL channels and with RCM to evaluate subsequent re-epithelialization. OCT and RCM identified laser channels in the epidermis and upper dermis as black, ablated tissue defects surrounded by characteristic hyper- and hyporeflective zones. OCT imaged individual laser channels across the entire laser grid, and RCM imaged epidermal cellular and structural changes around a single laser channel down to the dermoepidermal junction (DEJ) and upper papillary dermis. OCT images visualized a heterogeneous material in the lower part of open laser channels, indicating tissue fluid. By OCT, the median percentage of open channels was evaluated at several time points within the first 24 hours, and laser channels were found to close gradually, depending on the energy level used. Thus, at 5 mJ/microbeam, 87% (range 73-100%) of channels were open one hour after laser exposure, declining to 27% (range 20-100%) and 20% (range 7-93%) at 12 and 24 hours, respectively. At 25 mJ/microbeam, 100% (range 100-100%) of channels were open 1 hour after laser exposure, while 53% (range 33-100%) and 40% (range 0-100%) remained open at 12 and 24 hours. Median depth and width of open channels decreased over time depending on the applied energy. RCM verified initial re-epithelialization from day 2 for all energy levels used.
Morphology of ablation defects by OCT and RCM corresponded to histological assessments. OCT and RCM enabled imaging of AFXL-channels and their spatiotemporal closure. Laser channels remained open up to 24 hours post laser, which may be important for the time perspective to deliver topical substances through AFXL channels. © 2015 Wiley Periodicals, Inc.

  14. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing an image with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacement on the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task since the intended and/or unintended side effects from 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate 3D cameras' depth capture capabilities.
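
    The disparity cue described above follows the pinhole stereo relation Z = f·B/d; a sketch with illustrative numbers, not values from the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: an object at depth Z projects with a
    horizontal displacement (disparity) d = f * B / Z between the left
    and right camera images, so Z = f * B / d.

    focal_px     : focal length expressed in pixels
    baseline_m   : horizontal camera separation in meters
    disparity_px : measured horizontal displacement in pixels
    """
    return focal_px * baseline_m / disparity_px

# f = 800 px, baseline 6.5 cm, disparity 13 px -> depth ≈ 4.0 m
print(depth_from_disparity(800, 0.065, 13))
```

Nearer objects produce larger disparities, which is why depth resolution degrades with distance for a fixed baseline and focal length.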

  15. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus is developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. Static and dynamic depth distortion and depth resolution tradeoff is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the camera's first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.

  16. Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method.

    PubMed

    Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping

    2017-04-03

    Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. Simulations of plenoptic camera models can be used prior to an experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays based on an established light field camera model are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging-screen utilization, and shooting range of the depth of field.

  17. Large depth high-precision FMCW tomography using a distributed feedback laser array

    NASA Astrophysics Data System (ADS)

    DiLazaro, Thomas; Nehmetallah, George

    2018-02-01

    Swept-source optical coherence tomography (SS-OCT) has been widely employed in the medical industry for the high resolution imaging of subsurface biological structures. SS-OCT typically exhibits axial resolutions on the order of tens of microns at speeds of hundreds of kilohertz. Using the same coherent heterodyne detection technique, frequency modulated continuous wave (FMCW) ladar has been used for highly precise ranging for distances up to kilometers. Distributed feedback lasers (DFBs) have been used as a simple and inexpensive source for FMCW ranging. Here, we use a bandwidth-combined DFB array for sub-surface volume imaging at a 27 μm axial resolution over meters of distance. 2D and 3D tomographic images of several semi-transparent and diffuse objects at distances up to 10 m will be presented.

  18. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

    Technologies for capturing panoramic (360-degree) three-dimensional information in a real environment have many applications: virtual and augmented reality, security, robot navigation, and so forth. In this study, we examine an acquisition device constructed from a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama with spatial resolution equal to that of the original images acquired by the regular camera, and also estimate a dense panoramic depth map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained by the CCD camera. Thereby, measurement precision and robustness can be improved beyond those of conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.

  19. Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties

    PubMed Central

    Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Joshua Pfefer, T.

    2016-01-01

    Abstract. Established medical imaging technologies such as magnetic resonance imaging and computed tomography rely on well-validated tissue-simulating phantoms for standardized testing of device image quality. The availability of high-quality phantoms for optical-acoustic diagnostics such as photoacoustic tomography (PAT) will facilitate standardization and clinical translation of these emerging approaches. Materials used in prior PAT phantoms do not provide a suitable combination of long-term stability and realistic acoustic and optical properties. Therefore, we have investigated the use of custom polyvinyl chloride plastisol (PVCP) formulations for imaging phantoms and identified a dual-plasticizer approach that provides biologically relevant ranges of relevant properties. Speed of sound and acoustic attenuation were determined over a frequency range of 4 to 9 MHz and optical absorption and scattering over a wavelength range of 400 to 1100 nm. We present characterization of several PVCP formulations, including one designed to mimic breast tissue. This material is used to construct a phantom comprised of an array of cylindrical, hemoglobin-filled inclusions for evaluation of penetration depth. Measurements with a custom near-infrared PAT imager provide quantitative and qualitative comparisons of phantom and tissue images. Results indicate that our PVCP material is uniquely suitable for PAT system image quality evaluation and may provide a practical tool for device validation and intercomparison. PMID:26886681

  20. Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.

    PubMed

    Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso

    2018-07-01

    There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not experienced a comparable increase. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-to-3D image conversion approach, based on the key hypothesis that color images with similar structure likely have a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity and then creates a representative of each color-depth image cluster that is used as a prior depth map. The appropriate prior depth map for a given color query image is selected by comparing structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest-Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster and, in turn, the associated prior depth map. Finally, this prior estimate is enhanced through segmentation-guided filtering to obtain the final depth map estimate. This approach has been tested on two publicly available databases and compared with several state-of-the-art algorithms to demonstrate its efficiency.
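
    The K-Nearest-Neighbor cluster selection step can be sketched as follows, using a plain Euclidean distance in place of the learned, adaptive descriptor combination described in the abstract; function and variable names are hypothetical:

```python
import numpy as np

def select_cluster(query_feat, db_feats, db_labels, k=5):
    """K-Nearest-Neighbor cluster selection: find the k database images
    closest to the query in feature space and vote on their cluster
    labels. The winning cluster's representative depth map would then
    serve as the prior depth estimate for the query.

    query_feat : (d,) feature vector of the query color image
    db_feats   : (n, d) feature vectors of the database images
    db_labels  : (n,) integer cluster label of each database image
    """
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    votes = db_labels[np.argsort(dists)[:k]]
    return int(np.bincount(votes).argmax())
```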

  1. Development of a high-speed VCSEL OCT system for real-time imaging of conscious patients larynx using a hand-held probe (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rangarajan, Swathi; Chou, Li-Dek; Coughlan, Carolyn; Sharma, Giriraj; Wong, Brian J. F.; Ramalingam, Tirunelveli S.

    2016-02-01

    Fourier domain optical coherence tomography (FD-OCT) is a noninvasive imaging modality that has previously been used to image the human larynx. However, differences in anatomical geometry and the short imaging range of conventional OCT limit its application in a clinical setting. To address this issue, we have developed a gradient-index (GRIN) rod-lens-based hand-held probe in conjunction with a long-imaging-range 200 kHz vertical-cavity surface-emitting laser (VCSEL) swept-source optical coherence tomography (SS-OCT) system for high-speed, real-time imaging of the human larynx in an office setting. The hand-held probe is designed with a long, dynamically tunable working distance to accommodate differences in the anatomical geometry of human test subjects. A nominal working distance (~6 cm) is selected to give a lateral resolution <100 μm within a 6.4 mm depth of focus, which covers more than half of the 12 mm imaging range of the VCSEL laser. The maximum lateral scanning range of the probe at a 6 cm working distance is approximately 8.4 mm, and imaging an area of 8.5 mm by 8.5 mm is accomplished within a second. Using this system, we will demonstrate real-time cross-sectional OCT imaging of the larynx during phonation in vivo in humans and ex vivo in pig vocal folds.

  2. Shaping the light for the investigation of depth-extended scattering media

    NASA Astrophysics Data System (ADS)

    Osten, W.; Frenner, K.; Pedrini, G.; Singh, A. K.; Schindler, J.; Takeda, M.

    2018-02-01

    Scattering media are an ongoing challenge for all kinds of imaging technologies, including coherent and incoherent principles. Inspired by new approaches of computational imaging and supported by the availability of powerful computers, spatial light modulators, light sources and detectors, a variety of new methods ranging from holography to time-of-flight imaging, phase conjugation, phase recovery using iterative algorithms and correlation techniques have been introduced and applied to different types of objects. However, considering the obvious progress in this field, several problems are still a matter of investigation, and their solution could open new doors for the inspection and application of scattering media as well. In particular, these open questions include the possibility of extending the 2D approach to the inspection of depth-extended objects, the direct use of a scattering medium as a simple tool for imaging of complex objects, and the improvement of coherent inspection techniques for the dimensional characterization of incoherently radiating spots embedded in scattering media. In this paper we show our recent findings in coping with these challenges. First we describe how to explore depth-extended objects by means of a scattering medium. Afterwards, we extend this approach by implementing a new type of microscope making use of a simple scatter plate as a kind of flat and unconventional imaging lens. Finally, we introduce our shearing interferometer in combination with structured illumination for retrieving the axial position of fluorescent light-emitting spots embedded in scattering media.

  3. Rotational imaging optical coherence tomography for full-body mouse embryonic imaging

    PubMed Central

    Wu, Chen; Sudheendran, Narendran; Singh, Manmohan; Larina, Irina V.; Dickinson, Mary E.; Larin, Kirill V.

    2016-01-01

    Optical coherence tomography (OCT) has been widely used to study mammalian embryonic development with the advantages of high spatial and temporal resolutions and without the need for any contrast enhancement probes. However, the limited imaging depth of traditional OCT might prohibit visualization of the full embryonic body. To overcome this limitation, we have developed a new methodology to enhance the imaging range of OCT in embryonic day (E) 9.5 and 10.5 mouse embryos using rotational imaging. Rotational imaging OCT (RI-OCT) enables full-body imaging of mouse embryos by performing multiangle imaging. A series of postprocessing procedures was performed on each cross-section image, resulting in the final composited image. The results demonstrate that RI-OCT is able to improve the visualization of internal mouse embryo structures as compared to conventional OCT. PMID:26848543

  4. Confocal spectroscopic imaging measurements of depth dependent hydration dynamics in human skin in-vivo

    NASA Astrophysics Data System (ADS)

    Behm, P.; Hashemi, M.; Hoppe, S.; Wessel, S.; Hagens, R.; Jaspers, S.; Wenck, H.; Rübhausen, M.

    2017-11-01

    We present confocal spectroscopic imaging measurements applied to in-vivo studies to determine the depth dependent hydration profiles of human skin. The observed spectroscopic signal covers the spectral range from 810 nm to 2100 nm, allowing us to probe relevant absorption signals that can be associated with, e.g., lipid and water absorption bands. We employ a spectrally sensitive autofocus mechanism that allows ultrafast focusing of the measurement spot on the skin and subsequently probes the evolution of the absorption bands as a function of depth. We determine the change of the water concentration in m%. The water concentration follows a sigmoidal behavior, with an increase of the water content of about 70% within 5 μm at a depth of about 14 μm. We have applied our technique to study the hydration dynamics of skin before and after treatment with different concentrations of glycerol, indicating that an increase of the glycerol concentration leads to an enhanced water concentration in the stratum corneum. Moreover, in contrast to traditional corneometry, we have found that the application of aluminium chlorohydrate has no impact on the hydration of skin.

  5. Multispectral photoacoustic tomography for detection of small tumors inside biological tissues

    NASA Astrophysics Data System (ADS)

    Hirasawa, Takeshi; Okawa, Shinpei; Tsujita, Kazuhiro; Kushibiki, Toshihiro; Fujita, Masanori; Urano, Yasuteru; Ishihara, Miya

    2018-02-01

    Visualization of small tumors inside biological tissue is important in cancer treatment because it promotes accurate surgical resection and enables therapeutic effect monitoring. For sensitive detection of tumors, we have been developing a photoacoustic (PA) imaging technique to visualize tumor-specific contrast agents, and have already succeeded in imaging a subcutaneous tumor of a mouse using the contrast agents. To image tumors inside biological tissues, extension of the imaging depth and improvement of the sensitivity were required. In this study, to extend the imaging depth, we developed a PA tomography (PAT) system that can image an entire cross section of a mouse. To improve sensitivity, we investigated the use of a P(VDF-TrFE) linear array acoustic sensor that can detect PA signals over a wide range of frequencies. Because the PA signals produced by optical absorbers of low absorbance shift to lower frequencies, we hypothesized that the detection of low frequency PA signals improves sensitivity to low absorbance optical absorbers. We developed a PAT system with both a PZT linear array acoustic sensor and the P(VDF-TrFE) sensor, and performed experiments using tissue-mimicking phantoms to evaluate the lower detection limits of absorbance. As a result, PAT images calculated from the low frequency components of PA signals detected by the P(VDF-TrFE) sensor could visualize optical absorbers with lower absorbance.

  6. In-plane ultrasonic needle tracking using a fiber-optic hydrophone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Wenfeng, E-mail: wenfeng.xia@ucl.ac.uk; Desjardins, Adrien E.; Mari, Jean Martial

    Purpose: Accurate and efficient guidance of needles to procedural targets is critically important during percutaneous interventional procedures. Ultrasound imaging is widely used for real-time image guidance in a variety of clinical contexts, but with this modality, uncertainties about the location of the needle tip within the image plane lead to significant complications. Whilst several methods have been proposed to improve the visibility of the needle, achieving accuracy and compatibility with current clinical practice is an ongoing challenge. In this paper, the authors present a method for directly visualizing the needle tip using an integrated fiber-optic ultrasound receiver in conjunction with the imaging probe used to acquire B-mode ultrasound images. Methods: Needle visualization and ultrasound imaging were performed with a clinical ultrasound imaging system. A miniature fiber-optic ultrasound hydrophone was integrated into a 20 gauge injection needle tip to receive transmissions from individual transducer elements of the ultrasound imaging probe. The received signals were reconstructed to create an image of the needle tip. Ultrasound B-mode imaging was interleaved with needle tip imaging. A first set of measurements was acquired in water and tissue ex vivo with a wide range of insertion angles (15°–68°) to study the accuracy and sensitivity of the tracking method. A second set was acquired in an in vivo swine model, with needle insertions to the brachial plexus. A third set was acquired in an in vivo ovine model for fetal interventions, with insertions to different locations within the uterine cavity. Two linear ultrasound imaging probes were used: a 14–5 MHz probe for the first and second sets, and a 9–4 MHz probe for the third. Results: During insertions in tissue ex vivo and in vivo, the imaged needle tip had submillimeter axial and lateral dimensions. The signal-to-noise ratio (SNR) of the needle tip was found to depend on the insertion angle.
With the needle tip in water, the SNR of the needle tip varied with insertion angle, attaining values of 284 at 27° and 501 at 68°. In swine tissue ex vivo, the SNR decreased from 80 at 15° to 16 at 61°. In swine tissue in vivo, the SNR varied with depth, from 200 at 17.5 mm to 48 at 26 mm, with a constant insertion angle of 40°. In ovine tissue in vivo, within the uterine cavity, the SNR varied from 46.4 at 25 mm depth to 18.4 at 32 mm depth, with insertion angles in the range of 26°–65°. Conclusions: A fiber-optic ultrasound receiver integrated into the needle cannula in combination with single-element transmissions from the imaging probe allows for direct visualization of the needle tip within the ultrasound imaging plane. Visualization of the needle tip was achieved at depths and insertion angles that are encountered during nerve blocks and fetal interventions. The method presented in this paper has strong potential to improve the safety and efficiency of ultrasound-guided needle insertions.

  7. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images.

    PubMed

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer's own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.

  8. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images

    PubMed Central

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J.

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed. PMID:26447793

  9. Depth-aware image seam carving.

    PubMed

    Shen, Jianbing; Wang, Dapeng; Li, Xuelong

    2013-10-01

    An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, while not removing secondary objects in the scene. However, it is still difficult to determine the important and salient objects so that distortion of these objects is avoided after resizing the input image. In this paper, we develop a novel depth-aware single image seam carving approach by taking advantage of modern depth cameras such as the Kinect sensor, which captures the RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph cut based energy optimization. Our method achieves better seam carving performance by cutting fewer seams through near objects and removing more seams through distant objects. To the best of our knowledge, our algorithm is the first work to use the true depth map captured by a Kinect depth camera for single image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods.
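    A minimal sketch of the underlying idea: combine an image-gradient energy with a depth term so that near objects carry high energy (and lose fewer seams), then find the minimal-cost vertical seam by dynamic programming. This is an illustration under our own assumptions (larger depth value = farther away); the paper's JND model and graph-cut optimization are not reproduced.

```python
import numpy as np

def depth_aware_energy(gray, depth, alpha=0.5):
    """Energy = normalized gradient magnitude + alpha * nearness.
    Assumes depth values grow with distance (Kinect-style range map)."""
    gy, gx = np.gradient(gray.astype(float))
    grad = np.abs(gx) + np.abs(gy)
    near = 1.0 - (depth - depth.min()) / (np.ptp(depth) + 1e-8)  # 1 = nearest
    return grad / (grad.max() + 1e-8) + alpha * near

def find_vertical_seam(energy):
    """Dynamic programming: minimal-cost, 8-connected, top-to-bottom seam."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]   # upper-left neighbor
        right = np.r_[cost[i - 1, 1:], np.inf]   # upper-right neighbor
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):               # backtrack
        j = seam[i + 1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam
```

    Removing the returned seam column-wise and iterating shrinks the image width while seams preferentially pass through distant, low-gradient regions.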

  10. Sentinel lymph nodes and lymphatic vessels: noninvasive dual-modality in vivo mapping by using indocyanine green in rats--volumetric spectroscopic photoacoustic imaging and planar fluorescence imaging.

    PubMed

    Kim, Chulhong; Song, Kwang Hyun; Gao, Feng; Wang, Lihong V

    2010-05-01

    To noninvasively map sentinel lymph nodes (SLNs) and lymphatic vessels in rats in vivo by using dual-modality nonionizing imaging of indocyanine green (ICG): volumetric spectroscopic photoacoustic imaging, which measures optical absorption, and planar fluorescence imaging, which measures fluorescent emission. Institutional animal care and use committee approval was obtained. Healthy Sprague-Dawley rats weighing 250-420 g (age range, 60-120 days) were imaged by using volumetric photoacoustic imaging (n = 5) and planar fluorescence imaging (n = 3) before and after injection of 1 mmol/L ICG. Student paired t tests based on a logarithmic scale were performed to evaluate the change in photoacoustic signal enhancement of SLNs and lymphatic vessels before and after ICG injection. The spatial resolutions of both imaging systems were compared at various imaging depths (2-8 mm) by layering additional biologic tissues on top of the rats in vivo. Spectroscopic photoacoustic imaging was applied to identify ICG-dyed SLNs. In all five rats examined with photoacoustic imaging, SLNs were clearly visible, with a mean signal enhancement of 5.9 arbitrary units (AU) ± 1.8 (standard error of the mean) (P < .002) at 0.2 hour after injection, while lymphatic vessels were seen in four of the five rats, with a signal enhancement of 4.3 AU ± 0.6 (P = .001). In all three rats examined with fluorescence imaging, SLNs and lymphatic vessels were seen. The average full width at half maximum (FWHM) of the SLNs in the photoacoustic images at three imaging depths (2, 6, and 8 mm) was 2.0 mm ± 0.2 (standard deviation), comparable to the size of a dissected lymph node as measured with a caliper. However, the FWHM of the SLNs in the fluorescence images widened from 8 to 22 mm as the imaging depth increased, owing to strong light scattering. SLNs were identified spectroscopically in photoacoustic images.
These two modalities, when used together with ICG, have the potential to help map SLNs in axillary staging and to help evaluate tumor metastasis in patients with breast cancer.
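    The FWHM figure of merit used above to compare the two modalities at depth can be computed from a 1D intensity profile as follows; this is a generic sketch (not the authors' code), with linear interpolation at the half-maximum crossings.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile y(x)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Linear interpolation at the left crossing (np.interp needs increasing xp).
    if i0 > 0:
        x_left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    else:
        x_left = x[0]
    # ... and at the right crossing.
    if i1 < len(y) - 1:
        x_right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    else:
        x_right = x[-1]
    return x_right - x_left
```

    Applied to a node's lateral intensity profile at each depth, this yields numbers directly comparable to the 2.0 mm (photoacoustic) versus 8-22 mm (fluorescence) widths reported above.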

  11. Use Of Vertical Electrical Sounding Survey For Study Groundwater In NISSAH Region, SAUDI ARABIA

    NASA Astrophysics Data System (ADS)

    Alhenaki, Bander; Alsoma, Ali

    2015-04-01

    The aim of this research is to investigate groundwater depth in a desert area with dry environmental conditions. The study site is located in Wadi Nisah, in the eastern part of the Najd province (east-central Saudi Arabia). Generally, the study site is underlain by Phanerozoic sedimentary rocks of the western edge of the Arabian platform, which rests on Proterozoic basement at depths ranging between 5 and 8 km. Another key objective of this research is to assess the water table and identify the structure of the water-bearing layers in the study area using the Vertical Electrical Sounding (VES) 1D imaging technique. We acquired vertical electrical sounding sections of 315 meters using the Schlumberger field arrangement; the dataset was collected along 9 profiles. The resistivity Schlumberger sounding was carried out with half-spacings in the range of 500. The VES survey was intended to cover several locations where existing well information could be used for correlation. Measurements were also taken at locations along the valley using a Syscal R2 instrument. The results of this study conclude that there are at least three sedimentary layers to a depth of 130 meters. The first layer, extending from the surface to a depth of about 3 meters, is a dry sandy layer characterized by high resistivity values. The second layer underlies the first to a depth of 70 meters and is less resistive than the first layer. The last layer has low resistivity values of 20 ohm·m down to a depth of 130 meters below the ground surface. We observed a complex pattern of groundwater depth (ranging from 80 to 120 meters), which may reflect the lateral heterogeneity of the study site. The outcomes of this research have been used to locate suitable drilling locations.
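    For a Schlumberger array like the one used in this survey, each measured voltage/current pair is converted to an apparent resistivity through the standard geometric factor before inversion into layer resistivities. A small sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def schlumberger_apparent_resistivity(ab2, mn2, dV, I):
    """Apparent resistivity (ohm·m) for a Schlumberger array.
    ab2: half the current-electrode spacing AB/2 (m)
    mn2: half the potential-electrode spacing MN/2 (m)
    dV:  measured potential difference (V),  I: injected current (A)."""
    k = np.pi * (ab2**2 - mn2**2) / (2.0 * mn2)  # geometric factor
    return k * dV / I
```

    Over a homogeneous half-space the formula returns the true resistivity exactly; over the layered section described above, the curve of apparent resistivity versus AB/2 is what the 1D VES inversion fits.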

  12. Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers.

    PubMed

    Kim, Yunhee; Choi, Heejin; Kim, Joohwan; Cho, Seong-Woo; Kim, Youngmin; Park, Gilbae; Lee, Byoungho

    2007-06-20

    A depth-enhanced three-dimensional integral imaging system with electrically variable image planes is proposed. For implementing the variable image planes, polymer-dispersed liquid-crystal (PDLC) films and a projector are adopted as a new display system in the integral imaging. Since the transparencies of the PDLC films are electrically controllable, we can make each film diffuse the projected light successively at a different depth from the lens array. As a result, the proposed method enables electrical control of the location of the image planes and enhances the depth range. The principle of the proposed method is described, and experimental results are also presented.

  13. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    NASA Astrophysics Data System (ADS)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

    Quantitative depth analysis of an anomaly with enhanced depth resolution is a challenging task in estimating the depth of a subsurface anomaly using thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the subsequent thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with a limited frequency resolution and thus contribute to a finite depth resolution. The spectral zooming provided by the chirp z-transform facilitates enhanced frequency resolution, which can further improve the depth resolution to axially explore the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide the closest estimate to the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.
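    The chirp z-transform that provides the spectral zooming can be implemented with Bluestein's algorithm in a few lines. This is a generic sketch, not the authors' implementation: it evaluates `m` points of the z-transform along a spiral contour defined by a starting point `a` and per-step ratio `w`, which lets one sample a narrow frequency band with arbitrarily fine spacing, unconstrained by the FFT bin width.

```python
import numpy as np

def czt(x, m, w, a):
    """Chirp z-transform via Bluestein's algorithm.
    Returns X[k] = sum_n x[n] * a**(-n) * w**(n*k), k = 0..m-1."""
    x = np.asarray(x)
    n = len(x)
    k = np.arange(max(m, n))
    chirp = w ** (k**2 / 2.0)                    # W^(k^2/2)
    L = 1 << (n + m - 1).bit_length()            # FFT length >= n+m-1
    xw = x * a ** (-np.arange(n)) * chirp[:n]
    # Convolution kernel W^{-(k-n)^2/2}, laid out for circular convolution:
    v = np.zeros(L, dtype=complex)
    v[:m] = 1.0 / chirp[:m]                      # indices 0 .. m-1
    v[L - n + 1:] = 1.0 / chirp[n - 1:0:-1]      # indices -(n-1) .. -1
    conv = np.fft.ifft(np.fft.fft(np.pad(xw, (0, L - n))) * np.fft.fft(v))
    return conv[:m] * chirp[:m]
```

    Setting `a = exp(2j*pi*f1/fs)` and `w = exp(-2j*pi*(f2-f1)/(m*fs))` evaluates the spectrum at m points across the band [f1, f2], i.e. the zoom used for depth resolution enhancement. With `m = n`, `a = 1`, `w = exp(-2j*pi/n)` it reduces to the ordinary DFT.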

  14. Uncertainty characterization of particle location from refocused plenoptic images.

    PubMed

    Hall, Elise M; Guildenbecher, Daniel R; Thurow, Brian S

    2017-09-04

    Plenoptic imaging is a 3D imaging technique that has been applied for quantification of 3D particle locations and sizes. This work experimentally evaluates the accuracy and precision of such measurements by investigating a static particle field translated to known displacements. Measured 3D displacement values are determined from sharpness metrics applied to volumetric representations of the particle field created using refocused plenoptic images, corrected using a recently developed calibration technique. Comparison of measured and known displacements for many thousands of particles allows for evaluation of measurement uncertainty. Mean displacement error, as a measure of accuracy, is shown to agree with predicted spatial resolution over the entire measurement domain, indicating robustness of the calibration methods. On the other hand, variation in the error, as a measure of precision, fluctuates as a function of particle depth in the optical direction. Error shows the smallest variation within the predicted depth of field of the plenoptic camera, with a gradual increase outside this range. The quantitative uncertainty values provided here can guide future measurement optimization and will serve as useful metrics for design of improved processing algorithms.
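    Depth assignment from a refocused stack via a sharpness metric, as described above, can be sketched as follows. Variance-of-Laplacian is one common sharpness metric; the paper's actual metrics and calibration procedure are not reproduced here, and all names are illustrative.

```python
import numpy as np

def sharpness(img):
    """Variance of a discrete 5-point Laplacian: high when in focus."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def depth_from_focal_stack(stack, depths):
    """Assign the depth whose refocused plane maximizes sharpness."""
    scores = np.array([sharpness(plane) for plane in stack])
    return depths[int(np.argmax(scores))]
```

    Applied per particle window, this selects the refocus plane where the particle image is sharpest; comparing such estimates for known translations is exactly the accuracy/precision test described in the abstract.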

  15. Depth-Resolved Multispectral Sub-Surface Imaging Using Multifunctional Upconversion Phosphors with Paramagnetic Properties

    PubMed Central

    Ovanesyan, Zaven; Mimun, L. Christopher; Kumar, Gangadharan Ajith; Yust, Brian G.; Dannangoda, Chamath; Martirosyan, Karen S.; Sardar, Dhiraj K.

    2015-01-01

    Molecular imaging is a very promising technique for surgical guidance, which requires advancements in the properties of imaging agents and in the methods for retrieving data from measured multispectral images. In this article, an upconversion material is introduced for subsurface near-infrared imaging and for the depth recovery of the material embedded below biological tissue. The results confirm a significant correlation between the analytical depth estimate of the material under the tissue and the measured ratio of emitted light from the material at two different wavelengths. Experiments with biological tissue samples demonstrate depth resolved imaging using the rare earth doped multifunctional phosphors. In vitro tests reveal no significant toxicity, whereas the magnetic measurements of the phosphors show that the particles are suitable as magnetic resonance imaging agents. The confocal imaging of fibroblast cells with these phosphors reveals their potential for in vivo imaging. The depth-resolved imaging technique with such phosphors has broad implications for real-time intraoperative surgical guidance. PMID:26322519

  16. Extended depth of field imaging for high speed object analysis

    NASA Technical Reports Server (NTRS)

    Frost, Keith (Inventor); Ortyn, William (Inventor); Basiji, David (Inventor); Bauer, Richard (Inventor); Liang, Luchuan (Inventor); Hall, Brian (Inventor); Perry, David (Inventor)

    2011-01-01

    A high speed, high-resolution flow imaging system is modified to achieve extended depth of field imaging. An optical distortion element is introduced into the flow imaging system. Light from an object, such as a cell, is distorted by the distortion element, such that a point spread function (PSF) of the imaging system is invariant across an extended depth of field. The distorted light is spectrally dispersed, and the dispersed light is used to simultaneously generate a plurality of images. The images are detected, and image processing is used to enhance the detected images by compensating for the distortion, to achieve extended depth of field images of the object. The post image processing preferably involves de-convolution, and requires knowledge of the PSF of the imaging system, as modified by the optical distortion element.
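    The post-processing step described above (deconvolution with the known, depth-invariant PSF of the distorted system) might look like this minimal Wiener-deconvolution sketch, assuming the PSF is supplied centered and at the image size; the actual patented processing is not reproduced.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution with a known PSF.
    psf: centered, same shape as `blurred`.
    nsr: assumed noise-to-signal power ratio (regularization)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))       # move PSF center to origin
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H)**2 + nsr)        # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

    With an invariant PSF, a single filter restores objects across the whole extended depth of field; the `nsr` term keeps the division stable where the PSF transfer function is weak.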

  17. Design and technical evaluation of fibre-coupled Raman probes for the image-guided discrimination of cancerous skin

    NASA Astrophysics Data System (ADS)

    Schleusener, J.; Reble, C.; Helfmann, J.; Gersonde, I.; Cappius, H.-J.; Glanert, M.; Fluhr, J. W.; Meinke, M. C.

    2014-03-01

    Two different designs for fibre-coupled Raman probes are presented that are optimized for discriminating cancerous and normal skin by achieving high epithelial sensitivity, i.e. detecting a major component of the Raman signal from the depth range of the epithelium. This is achieved by optimizing the Raman spot diameters to the range of ≈200 µm, which distinguishes this approach from the common applications of either Raman microspectroscopy (1-5 µm) or measurements on larger sampling volumes using spot sizes of a few mm. Video imaging with a depicted area on the order of a few cm, to allow comparing the Raman measurements to the location of the histo-pathologic findings, is integrated in both designs. This is important due to the inhomogeneity of cancerous lesions. Video image acquisition is achieved using white light LED illumination, which avoids ambient light artefacts. The design requirements focus either on a compact, light-weight configuration for pen-like handling, or on a video-visible measurement spot to enable increased positioning accuracy. Both probes are evaluated with regard to spot size, Rayleigh suppression, background fluorescence, depth sensitivity, clinical handling and ambient light suppression. Ex vivo measurements on porcine ear skin correlate well with the findings of other groups.

  18. Sialic acids regulate microvessel permeability, revealed by novel in vivo studies of endothelial glycocalyx structure and function

    PubMed Central

    Betteridge, Kai B.; Arkill, Kenton P.; Neal, Christopher R.; Harper, Steven J.; Foster, Rebecca R.; Satchell, Simon C.; Bates, David O.

    2017-01-01

    Key points: We have developed novel techniques for paired, direct, real-time in vivo quantification of endothelial glycocalyx structure and associated microvessel permeability. Commonly used imaging and analysis techniques yield measurements of endothelial glycocalyx depth that vary by over an order of magnitude within the same vessel. The anatomical distance between maximal glycocalyx label and maximal endothelial cell plasma membrane label provides the most sensitive and reliable measure of endothelial glycocalyx depth. Sialic acid residues of the endothelial glycocalyx regulate glycocalyx structure and microvessel permeability to both water and albumin. Abstract: The endothelial glycocalyx forms a continuous coat over the luminal surface of all vessels, and regulates multiple vascular functions. The contribution of individual components of the endothelial glycocalyx to one critical vascular function, microvascular permeability, remains unclear. We developed novel, real-time, paired methodologies to study the contribution of sialic acids within the endothelial glycocalyx to the structural and functional permeability properties of the same microvessel in vivo. Single perfused rat mesenteric microvessels were perfused with fluorescent endothelial cell membrane and glycocalyx labels, and imaged with confocal microscopy. A broad range of glycocalyx depth measurements (0.17–3.02 μm) were obtained with different labels, imaging techniques and analysis methods. The distance between peak cell membrane and peak glycocalyx label provided the most reliable measure of endothelial glycocalyx anatomy, correlating with paired, numerically smaller values of endothelial glycocalyx depth (0.078 ± 0.016 μm) from electron micrographs of the same portion of the same vessel.
Disruption of sialic acid residues within the endothelial glycocalyx using neuraminidase perfusion decreased endothelial glycocalyx depth and increased apparent solute permeability to albumin in the same vessels in a time‐dependent manner, with changes in all three true vessel wall permeability coefficients (hydraulic conductivity, reflection coefficient and diffusive solute permeability). These novel technologies expand the range of techniques that permit direct studies of the structure of the endothelial glycocalyx and dependent microvascular functions in vivo, and demonstrate that sialic acid residues within the endothelial glycocalyx are critical regulators of microvascular permeability to both water and albumin. PMID:28524373
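    The peak-to-peak measure advocated above (distance between maximal glycocalyx label and maximal membrane label) reduces, per line profile, to locating two intensity peaks and scaling by the pixel size. A sketch with sub-pixel parabolic peak interpolation; the function names and calibration factor are illustrative, not from the paper.

```python
import numpy as np

def subpixel_peak(profile):
    """Peak location via three-point parabolic interpolation (sub-pixel)."""
    profile = np.asarray(profile, float)
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)                      # peak at edge: no interpolation
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)

def glycocalyx_depth(membrane_profile, glycocalyx_profile, um_per_px):
    """Depth = distance between peak membrane and peak glycocalyx label."""
    return abs(subpixel_peak(glycocalyx_profile)
               - subpixel_peak(membrane_profile)) * um_per_px
```

    Averaging this per-profile distance around the vessel wall gives a single depth estimate comparable to the 0.078 ± 0.016 μm electron-micrograph values quoted above.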

  19. A taxonomy of three species of negative velocity arrivals in the lithospheric mantle beneath the United States using Sp receiver functions

    NASA Astrophysics Data System (ADS)

    Foster, K.; Dueker, K.; McClenahan, J.; Hansen, S. M.; Schmandt, B.

    2012-12-01

    The Transportable Array, with significant supplement from past PASSCAL experiments, provides an unprecedented opportunity for a holistic view of the geologically and tectonically diverse continent. New images from 34,000 Sp receiver functions reveal lithospheric and upper mantle structure that has not previously been well constrained and that is significant to our understanding of upper mantle processes and continental evolution. The negative velocity gradient (NVG) found beneath the Moho has been elusive and is often loosely termed the "Lithosphere-Asthenosphere Boundary" (LAB). This label is used by some researchers to indicate a rheological boundary, a thermal gradient, an anisotropic velocity contrast, or a compositional boundary, and much confusion has arisen around what observed NVG arrivals represent. Deconvolution across up to 400 stations simultaneously has enhanced the source wavelet estimation and allowed for more accurate receiver functions. In addition, Sdp converted phases are precursory to the direct S phase arrival, eliminating the issue of contamination from reverberated phases that add noise to Ps receiver functions in this lower-lithospheric and upper mantle depth range. We present a taxonomy of the NVG arrivals beneath the Moho across the span of the Transportable Array (125° - 85° W). The NVG is classified into three different categories, primarily distinguished by the estimated temperature at the depth of the arrival. The first species of Sp NVG arrivals is found in the region west of the Precambrian rift hinge line, at a depth range of 70 - 90 km, corresponding to a temperature of >1150° C. This temperature and depth is predicted to be supersolidus for a peridotite with 0.02 wt% H2O (Katz et al., 2004), supporting the theory that these arrivals are due to a melt-staging area (MSA), which could be correlated with the base of the thermal lithosphere.
The current depth estimate of the cratonic US thermal LAB ranges from 150-220 km (Yuan and Romanowicz, 2010), and yet a pervasive arrival in our Sp and Ps images shows a NVG ranging from 80 - 110 km depth, with temperature estimates of ~800° C. Clearly internal to the lithosphere, this signal cannot be a LAB arrival. Hence, our second species of NVG is a Mid-Lithospheric Discontinuity (MLD) that we interpret as a layer of sub-solidus metasomatic minerals whose solidi lie in the 1000-1100° C range near three GPa. These low-solidus minerals are amphibole, phlogopite, and carbon-bearing phases. A freezing front (solidus) near three GPa would concentrate these low velocity minerals into a metasomatic layer over Ga time-scales, explaining our NVG MLD arrivals. A third species of NVG, in the "warm" category of 950-1150° C, exists beneath the intermountain west region of Laramide shortening that extends from Montana to New Mexico. This region has experienced abundant post-Eocene alkaline magmatism. Mantle xenoliths from this region provide temperature-at-depth measurements which are in agreement with our surface-wave-velocity-based temperature estimates. Thus, this NVG arrival is interpreted as a near- to super-solidus metasomatic layer. Noteworthy is that a deeper arrival (150-190 km) is intermittently observed, which would be more likely related to the base of the thermal lithosphere.

  20. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  1. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking.

    PubMed

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J; Jian, Yifan; Sarunic, Marinko V

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.
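
    The focus-stacking step described above, acquiring multiple volumes focused at different depths and merging them into one extended-focus dataset, can be illustrated with a toy nearest-focus rule. This is only a hedged sketch: the function `focus_stack` and its nearest-focus selection are illustrative assumptions, not the authors' registration-and-stitching pipeline.

```python
import numpy as np

def focus_stack(volumes, focus_depths):
    """Merge OCT volumes focused at different depths into one stack.

    At each axial index, keep the samples from the volume whose focal
    plane is nearest that depth (a simple nearest-focus rule).
    volumes: list of arrays shaped (z, x, y); focus_depths: focal-plane
    z-index of each volume.
    """
    volumes = [np.asarray(v, dtype=float) for v in volumes]
    nz = volumes[0].shape[0]
    out = np.empty_like(volumes[0])
    for z in range(nz):
        # index of the volume whose focus is closest to this depth
        best = min(range(len(volumes)), key=lambda i: abs(focus_depths[i] - z))
        out[z] = volumes[best][z]
    return out
```

    In practice, the abstract notes that the volumes are first registered to compensate for motion between acquisitions; this sketch assumes already-registered inputs.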

  2. Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics.

    PubMed

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao

    2016-11-25

    For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we examine the feasibility and practicality of a well-known "extended DoF" (EDoF) technique, wavefront coding, by building a real-time long-range iris recognition system and performing large-scale iris recognition. The key to successful long-range iris recognition is a long DoF together with image quality that is invariant across object distances, requirements strict enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, a 400-mm focal length, and F/6.3 optics over a 3 m working distance, our results show that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, based on 3328 iris images in total, the EDoF system achieves a 3.71-fold improvement over the original system without a loss of recognition accuracy.

  3. A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion

    NASA Astrophysics Data System (ADS)

    Yang, Yizhong; Hu, Xionglou; Wu, Nengju; Wang, Pengfei; Xu, Dong; Rong, Shen

    2017-09-01

    In recent years, 3D movies have attracted more and more attention because of their immersive stereoscopic experience. However, the supply of 3D content is still insufficient, so estimating depth information for 2D-to-3D conversion from a video is increasingly important. In this paper, we present a novel algorithm that estimates depth information from a video via a scene classification algorithm. To obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape, close-up, and linear perspective. For landscape images, the algorithm divides the image into blocks and assigns depth values using the relative-height cue. For close-up images, a saliency-based method is adopted to enhance the foreground, and the result is combined with a global depth gradient to generate the final depth map. For linear-perspective images, vanishing-line detection locates the vanishing point, which is regarded as the farthest point from the viewer and is assigned the deepest depth value; the rest of the image is then assigned depth values according to the distance of each point from the vanishing point. Finally, depth image-based rendering is employed to generate stereoscopic virtual views after bilateral filtering. Experiments show that the proposed algorithm achieves realistic 3D effects and yields satisfactory results, with perception scores of anaglyph images between 6.8 and 7.8.
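
    The linear-perspective branch admits a minimal sketch: treat the vanishing point as the farthest point and let the depth value fall off with distance from it. The function `perspective_depth_map` and its linear falloff are illustrative assumptions, not the paper's exact depth-gradient model.

```python
import numpy as np

def perspective_depth_map(h, w, vp, max_depth=255):
    """Toy depth map for a linear-perspective scene.

    The vanishing point vp = (row, col) is treated as the farthest point
    and receives the deepest depth value; every other pixel gets a depth
    that decreases linearly with its distance from the vanishing point.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - vp[0], xs - vp[1])
    # vanishing point (dist = 0) -> max_depth; farthest pixel -> 0
    return max_depth * (1.0 - dist / dist.max())
```

    A real implementation would also blend this map with the saliency and relative-height cues described for the other scene types.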

  4. Depth enhancement of S3D content and the psychological effects

    NASA Astrophysics Data System (ADS)

    Hirahara, Masahiro; Shiraishi, Saki; Kawai, Takashi

    2012-03-01

    Stereoscopic 3D (S3D) imaging technologies are now widely used to create content for movies, TV programs, games, etc. Although S3D content differs from 2D content in its use of binocular parallax to induce depth sensation, the relationship between depth control and the user experience remains unclear. In this study, the user experience was subjectively and objectively evaluated in order to determine the effectiveness of depth control, such as an expansion or reduction, or a forward or backward shift, of the range of maximum parallactic angles in the cross and uncross directions (the depth bracket). Four types of S3D content were used in the subjective and objective evaluations. The depth brackets of comparison stimuli were modified in order to enhance the depth sensation corresponding to the content. Interpretation Based Quality (IBQ) methodology was used for the subjective evaluation, and heart rate was measured to evaluate the physiological effect. The results of the evaluations suggest the following two points. (1) Expansion/reduction of the depth bracket affects preference and enhances positive emotions toward the S3D content. (2) Expansion/reduction of the depth bracket produces the above-mentioned effects more notably than shifting in the cross/uncross directions.

  5. Interface quality of different corneal lamellar–cut depths for femtosecond laser–assisted lamellar anterior keratoplasty

    PubMed Central

    Zhang, Chenxing; Bald, Matthew; Tang, Maolong; Li, Yan; Huang, David

    2015-01-01

    PURPOSE To evaluate interface quality of different corneal lamellar–cut depths with the femtosecond laser and determine a feasible range of depth for femtosecond laser–assisted lamellar anterior keratoplasty. SETTING Casey Eye Institute, Portland, Oregon, USA. DESIGN Experimental study. METHODS Full lamellar cuts were made on 20 deepithelialized human cadaver corneas using the femtosecond laser. The cut depth was 17% to 21% (100 μm), 31%, 35%, 38% to 40%, and 45% to 48% of the central stromal thickness. Scanning electron microscopy images of cap and bed surfaces were subjectively graded for ridge and roughness using a scale of 1 to 5 (1 = best). The graft–host match was evaluated by photography and optical coherence tomography in a simulated procedure. RESULTS The ridge score was correlated with the cut depth (P = .0078, R = 0.58) and better correlated with the percentage cut depth (P = .00024, R = 0.73). The shallowest cuts had the least ridges (score 1.25). The 31% cut depth produced significantly less ridges (score 2.15) than deeper cuts. The roughness score ranged from 2.19 to 3.08 for various depths. A simulated procedure using a 100 μm host cut and a 177 μm (31%) graft had a smooth interface and flush anterior junction using an inverted side-cut design. CONCLUSIONS The femtosecond laser produced more ridges in deeper lamellar cuts. A depth setting of 31% stromal thickness might produce adequate surface quality for femtosecond laser–assisted lamellar anterior keratoplasty. The inverted side-cut design produced good edge apposition even when the graft was thicker than the host lamellar–cut depth. PMID:25747165

  6. Volumetric full-range magnetomotive optical coherence tomography

    PubMed Central

    Ahmad, Adeel; Kim, Jongsik; Shemonski, Nathan D.; Marjanovic, Marina; Boppart, Stephen A.

    2014-01-01

    Abstract. Magnetomotive optical coherence tomography (MM-OCT) can be utilized to spatially localize the presence of magnetic particles within tissues or organs. These magnetic particle-containing regions are detected by using the capability of OCT to measure small-scale displacements induced by the activation of an external electromagnet coil typically driven by a harmonic excitation signal. The constraints imposed by the scanning schemes employed and by tissue viscoelastic properties limit the speed at which conventional MM-OCT data can be acquired. Realizing that electromagnet coils can be designed to exert MM force on relatively large tissue volumes (comparable to or larger than typical OCT imaging fields of view), we show that an order-of-magnitude improvement in three-dimensional (3-D) MM-OCT imaging speed can be achieved by rapid acquisition of a volumetric scan during the activation of the coil. Furthermore, we show volumetric (3-D) MM-OCT imaging over a large imaging depth range by combining this volumetric scan scheme with full-range OCT. Results with tissue-equivalent phantoms and a biological tissue are shown to demonstrate this technique. PMID:25472770

  7. Methods for reverberation suppression utilizing dual frequency band imaging.

    PubMed

    Rau, Jochen M; Måsøy, Svein-Erik; Hansen, Rune; Angelsen, Bjørn; Tangen, Thor Andreas

    2013-09-01

    Reverberations impair the contrast resolution of diagnostic ultrasound images. Tissue harmonic imaging is a common method to reduce these artifacts, but does not remove all reverberations. Dual frequency band imaging (DBI), utilizing a low frequency pulse which manipulates propagation of the high frequency imaging pulse, has been proposed earlier for reverberation suppression. This article adds two different methods for reverberation suppression with DBI: the delay corrected subtraction (DCS) and the first order content weighting (FOCW) method. Both methods utilize the propagation delay of the imaging pulse of two transmissions with alternating manipulation pressure to extract information about its depth of first scattering. FOCW further utilizes this information to estimate the content of first order scattering in the received signal. Initial evaluation is presented where both methods are applied to simulated and in vivo data. Both methods yield visual and measurable substantial improvement in image contrast. Comparing DCS with FOCW, DCS produces sharper images and retains more details while FOCW achieves best suppression levels and, thus, highest image contrast. The measured improvement in contrast ranges from 8 to 27 dB for DCS and from 4 dB up to the dynamic range for FOCW.

  8. Combined photoacoustic and magneto-acoustic imaging.

    PubMed

    Qu, Min; Mallidi, Srivalleesha; Mehrmohammadi, Mohammad; Ma, Li Leo; Johnston, Keith P; Sokolov, Konstantin; Emelianov, Stanislav

    2009-01-01

    Ultrasound is a widely used modality with excellent spatial resolution, low cost, portability, reliability and safety. In clinical practice and in the biomedical field, molecular ultrasound-based imaging techniques are desired to visualize tissue pathologies, such as cancer. In this paper, we present an advanced imaging technique - combined photoacoustic and magneto-acoustic imaging - capable of visualizing the anatomical, functional and biomechanical properties of tissues or organs. The experiments to test the combined imaging technique were performed using dual, nanoparticle-based contrast agents that exhibit the desired optical and magnetic properties. The results of our study demonstrate the feasibility of combined photoacoustic and magneto-acoustic imaging, which takes advantage of each imaging technique and provides high sensitivity, reliable contrast and good penetration depth. Therefore, the developed imaging technique can be used in a wide range of biomedical and clinical applications.

  9. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Conditional Random Field Model.

    PubMed

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and the relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local details, the relative depth trends of local regions are incorporated into the network. Combined with semantic information from the image, a continuous pairwise CRF is then established and used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.
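
    A continuous pairwise CRF term of the kind described can be illustrated with a standard appearance-weighted smoothness energy over 4-neighborhoods; the paper's actual potentials differ, so `pairwise_crf_energy`, its Gaussian weighting, and the `sigma` parameter are assumptions for illustration only.

```python
import numpy as np

def pairwise_crf_energy(depth, image, sigma=0.1):
    """Illustrative continuous pairwise energy.

    Neighbouring pixels with similar intensities are encouraged to take
    similar depths: E = sum_ij w_ij * (d_i - d_j)^2, with appearance
    weights w_ij = exp(-(I_i - I_j)^2 / (2 sigma^2)) over horizontal and
    vertical neighbour pairs.
    """
    e = 0.0
    for axis in (0, 1):
        dd = np.diff(depth, axis=axis)   # depth differences of neighbours
        di = np.diff(image, axis=axis)   # intensity differences
        w = np.exp(-di**2 / (2 * sigma**2))
        e += float(np.sum(w * dd**2))
    return e
```

    A depth discontinuity aligned with an image edge is cheap under this energy, while a discontinuity inside a uniform region is penalized, which is the behaviour a pairwise CRF loss rewards during training.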

  10. Three-dimensional DNA image cytometry by optical projection tomographic microscopy for early cancer diagnosis.

    PubMed

    Agarwal, Nitin; Biancardi, Alberto M; Patten, Florence W; Reeves, Anthony P; Seibel, Eric J

    2014-04-01

    Aneuploidy is typically assessed by flow cytometry (FCM) and image cytometry (ICM). We used optical projection tomographic microscopy (OPTM) for assessing cellular DNA content using absorption and fluorescence stains. OPTM combines some of the attributes of both FCM and ICM and generates isometric high-resolution three-dimensional (3-D) images of single cells. Although the depth of field of the microscope objective was in the submicron range, it was extended by scanning the objective's focal plane. The extended-depth-of-field image is similar to a projection in conventional x-ray computed tomography. These projections were later reconstructed using computed tomography methods to form a 3-D image. We also present an automated method for 3-D nuclear segmentation. Nuclei of chicken, trout, and triploid trout erythrocytes were used to calibrate OPTM. Ratios of integrated optical densities extracted from 50 images of each standard were compared to ratios of DNA indices from FCM. Mean square errors were compared for thionin, hematoxylin, Feulgen, and SYTOX green. The Feulgen technique was preferred, as it showed the highest stoichiometry, the least variance, and preserved nuclear morphology in 3-D. The addition of this quantitative biomarker could further strengthen existing classifiers and improve early diagnosis of cancer using 3-D microscopy.
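
    The integrated-optical-density ratios used for calibration follow the standard cytometry definition, OD = -log10(I/I0) summed over the segmented nucleus. A minimal sketch, assuming intensities normalized so that the background I0 = 1 (the helper names `integrated_optical_density` and `dna_index` are illustrative):

```python
import numpy as np

def integrated_optical_density(image, mask):
    """Integrated optical density (IOD) of a stained nucleus.

    OD = -log10(I / I0) summed over the segmented nuclear mask, with
    intensities normalized so the clear background has I0 = 1.
    image: transmitted intensities in (0, 1]; mask: boolean nucleus.
    """
    od = -np.log10(np.clip(image, 1e-6, 1.0))
    return float(od[mask].sum())

def dna_index(sample_iod, reference_iod):
    """DNA content of a sample nucleus relative to a diploid reference."""
    return sample_iod / reference_iod
```

    Comparing such IOD ratios across nuclei against reference standards (as the chicken and trout erythrocytes are used above) is what allows stain stoichiometry to be assessed.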

  11. Concept for tremor compensation for a handheld OCT-laryngoscope

    NASA Astrophysics Data System (ADS)

    Donner, Sabine; Deutsch, Stefanie; Bleeker, Sebastian; Ripken, Tammo; Krüger, Alexander

    2013-06-01

    Optical coherence tomography (OCT) is a non-invasive imaging technique which can create optical tissue sections, enabling diagnosis of vocal cord tissue. To take full advantage of the non-contact imaging technique, OCT was adapted to an indirect laryngoscope for use on awake patients. Using OCT in a handheld diagnostic device raises the challenges of rapid working distance adjustment and tracking of axial motion. The optical focus of the endoscopic sample arm and the reference-arm length can be adjusted in a range of 40 mm to 90 mm. Automatic working distance adjustment is based on image analysis of OCT B-scans, which identifies off-depth images as well as position errors. The movable focal plane and reference plane are used to adjust the working distance to match the sample depth and stabilise the sample in the desired axial position of the OCT scans. The autofocus adjusts the working distance within at most 2.7 seconds for the maximum initial displacement of 40 mm. The amplitude of hand tremor during 60 s of handheld scanning was reduced to 50%, and it was shown that the image stabilisation keeps the position error below 0.5 mm. Fast automatic working distance adjustment is crucial to minimise the duration of the diagnostic procedure. The image stabilisation compensates for relative axial movements during handheld scanning.

  12. Deep Tissue Fluorescent Imaging in Scattering Specimens Using Confocal Microscopy

    PubMed Central

    Clendenon, Sherry G.; Young, Pamela A.; Ferkowicz, Michael; Phillips, Carrie; Dunn, Kenneth W.

    2015-01-01

    In scattering specimens, multiphoton excitation and nondescanned detection improve imaging depth by a factor of 2 or more over confocal microscopy; however, imaging depth is still limited by scattering. We applied the concept of clearing to deep tissue imaging of highly scattering specimens. Clearing is a remarkably effective approach to improving image quality at depth using either confocal or multiphoton microscopy. Tissue clearing appears to eliminate the need for multiphoton excitation for deep tissue imaging. PMID:21729357

  13. Super-nonlinear fluorescence microscopy for high-contrast deep tissue imaging

    NASA Astrophysics Data System (ADS)

    Wei, Lu; Zhu, Xinxin; Chen, Zhixing; Min, Wei

    2014-02-01

    Two-photon excited fluorescence microscopy (TPFM) offers the highest penetration depth with subcellular resolution in light microscopy, due to its unique advantage of nonlinear excitation. However, a fundamental imaging-depth limit, accompanied by a vanishing signal-to-background contrast, still exists for TPFM when imaging deep into scattering samples. Formally, the focusing depth, at which the in-focus signal and the out-of-focus background are equal to each other, is defined as the fundamental imaging-depth limit. To go beyond this imaging-depth limit of TPFM, we report a new class of super-nonlinear fluorescence microscopy for high-contrast deep tissue imaging, including multiphoton activation and imaging (MPAI) harnessing novel photo-activatable fluorophores, stimulated emission reduced fluorescence (SERF) microscopy by adding a weak laser beam for stimulated emission, and two-photon induced focal saturation imaging with preferential depletion of ground-state fluorophores at focus. The resulting image contrasts all exhibit a higher-order (third- or fourth- order) nonlinear signal dependence on laser intensity than that in the standard TPFM. Both the physical principles and the imaging demonstrations will be provided for each super-nonlinear microscopy. In all these techniques, the created super-nonlinearity significantly enhances the imaging contrast and concurrently extends the imaging depth-limit of TPFM. Conceptually different from conventional multiphoton processes mediated by virtual states, our strategy constitutes a new class of fluorescence microscopy where high-order nonlinearity is mediated by real population transfer.

  14. Robust gaze-steering of an active vision system against errors in the estimated parameters

    NASA Astrophysics Data System (ADS)

    Han, Youngmo

    2015-01-01

    Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.

  15. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging

    PubMed Central

    Manley, Suliana

    2015-01-01

    Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term "wobble", results in warped 3D SR images, and we provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample. PMID:26600467
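
    The correction described amounts to subtracting a calibrated, depth-dependent lateral shift from each localization. A minimal sketch, assuming a bead-scan calibration of apparent displacement versus axial position (the helper `correct_wobble` and the use of linear interpolation are illustrative, not the authors' released software tool):

```python
import numpy as np

def correct_wobble(locs, cal_z, cal_dx, cal_dy):
    """Remove a depth-dependent lateral shift ('wobble') from 3-D
    localizations.

    cal_z, cal_dx, cal_dy form a calibration of the apparent lateral
    displacement of a point source versus axial position (e.g. from a
    bead scan); each localization is shifted back by the displacement
    interpolated at its own z. locs: array of (x, y, z) rows, all in
    the same units (e.g. nm).
    """
    locs = np.asarray(locs, dtype=float).copy()
    locs[:, 0] -= np.interp(locs[:, 2], cal_z, cal_dx)
    locs[:, 1] -= np.interp(locs[:, 2], cal_z, cal_dy)
    return locs
```

    With a wobble of ~80 nm over ~1 μm of depth, as quoted above, leaving this uncorrected would dominate the < 20 nm registration errors the authors report after correction.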

  16. Full-Depth Coadds of the WISE and First-Year NEOWISE-Reactivation Images

    DOE PAGES

    Meisner, Aaron M.; Lang, Dustin; Schlegel, David J.

    2017-01-03

    The Near Earth Object Wide-field Infrared Survey Explorer (NEOWISE) Reactivation mission released data from its first full year of observations in 2015. This data set includes ~2.5 million exposures in each of W1 and W2, effectively doubling the amount of WISE imaging available at 3.4 μm and 4.6 μm relative to the AllWISE release. In this paper, we have created the first ever full-sky set of coadds combining all publicly available W1 and W2 exposures from both the AllWISE and NEOWISE-Reactivation (NEOWISER) mission phases. We employ an adaptation of the unWISE image coaddition framework, which preserves the native WISE angular resolution and is optimized for forced photometry. By incorporating two additional scans of the entire sky, we not only improve the W1/W2 depths, but also largely eliminate time-dependent artifacts such as off-axis scattered moonlight. We anticipate that our new coadds will have a broad range of applications, including target selection for upcoming spectroscopic cosmology surveys, identification of distant/massive galaxy clusters, and discovery of high-redshift quasars. In particular, our full-depth AllWISE+NEOWISER coadds will be an important input for the Dark Energy Spectroscopic Instrument selection of luminous red galaxy and quasar targets. Our full-depth W1/W2 coadds are already in use within the DECam Legacy Survey (DECaLS) and Mayall z-band Legacy Survey (MzLS) reduction pipelines. Finally, much more work still remains in order to fully leverage NEOWISER imaging for astrophysical applications beyond the solar system.

  17. Airborne imaging spectrometer data of the Ruby Mountains, Montana: Mineral discrimination using relative absorption band-depth images

    USGS Publications Warehouse

    Crowley, J.K.; Brickey, D.W.; Rowan, L.C.

    1989-01-01

    Airborne imaging spectrometer data collected in the near-infrared (1.2-2.4 μm) wavelength range were used to study the spectral expression of metamorphic minerals and rocks in the Ruby Mountains of southwestern Montana. The data were analyzed by using a new data enhancement procedure: the construction of relative absorption band-depth (RBD) images. RBD images, like band-ratio images, are designed to detect diagnostic mineral absorption features, while minimizing reflectance variations related to topographic slope and albedo differences. To produce an RBD image, several data channels near an absorption band shoulder are summed and then divided by the sum of several channels located near the band minimum. RBD images are both highly specific and sensitive to the presence of particular mineral absorption features. Further, the technique does not distort or subdue spectral features as sometimes occurs when using other data normalization methods. By using RBD images, a number of rock and soil units were distinguished in the Ruby Mountains, including weathered quartz-feldspar pegmatites, marbles of several compositions, and soils developed over poorly exposed mica schists. The RBD technique is especially well suited for detecting weak near-infrared spectral features produced by soils, which may permit improved mapping of subtle lithologic and structural details in semiarid terrains. The observation of soils rich in talc, an important industrial commodity in the study area, also indicates that RBD images may be useful for mineral exploration. © 1989.
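
    The RBD construction is stated explicitly above: sum the channels near the band shoulder and divide by the sum of the channels near the band minimum, per pixel. A minimal sketch (the function name and index-list interface are assumptions):

```python
import numpy as np

def rbd_image(cube, shoulder_bands, minimum_bands):
    """Relative absorption band-depth (RBD) image.

    Per pixel: sum of channels near the absorption-band shoulder divided
    by the sum of channels near the band minimum.
    cube: (bands, rows, cols) reflectance; band lists are channel indices.
    """
    shoulder = cube[shoulder_bands].sum(axis=0)
    minimum = cube[minimum_bands].sum(axis=0)
    return shoulder / minimum
```

    Pixels with the targeted absorption feature yield ratios above 1, while flat spectra give ratios near 1; because both numerator and denominator scale with illumination, slope and albedo effects largely cancel, as the abstract notes.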

  18. Exploiting chromatic aberration to spectrally encode depth in reflectance confocal microscopy

    NASA Astrophysics Data System (ADS)

    Carrasco-Zevallos, Oscar; Shelton, Ryan L.; Olsovsky, Cory; Saldua, Meagan; Applegate, Brian E.; Maitland, Kristen C.

    2011-06-01

    We present chromatic confocal microscopy as a technique to axially scan the sample by spectrally encoding depth information to avoid mechanical scanning of the lens or sample. We have achieved an 800 μm focal shift over a range of 680-1080 nm using a hyperchromat lens as the imaging lens. A more complex system that incorporates a water immersion objective to improve axial resolution was built and tested. We determined that increasing objective magnification decreases chromatic shift while improving axial resolution. Furthermore, collimating after the hyperchromat at longer wavelengths yields an increase in focal shift.

  19. Wide tuning range wavelength-swept laser with a single SOA at 1020 nm for ultrahigh resolution Fourier-domain optical coherence tomography.

    PubMed

    Lee, Sang-Won; Song, Hyun-Woo; Jung, Moon-Youn; Kim, Seung-Hwan

    2011-10-24

    In this study, we demonstrated a wide tuning range wavelength-swept laser with a single semiconductor optical amplifier (SOA) at 1020 nm for ultrahigh-resolution Fourier-domain optical coherence tomography (UHR FD-OCT). The wavelength-swept laser was constructed with an external line-cavity based on a Littman configuration. The optical wavelength selection filter consisted of a grating, a telescope, and a polygon scanner. Before constructing the optical wavelength selection filter, we observed that the optical power, the spectral bandwidth, and the center wavelength of the SOA were affected by the temperature of the thermoelectric (TE) cooler in the SOA mount as well as by the applied current. Therefore, to obtain a wide wavelength tuning range, we adjusted the temperature of the TE cooler in the SOA mount. When the temperature of the TE cooler was 9 °C, our swept source had a tuning range of 142 nm and a full-width at half-maximum (FWHM) of 121.5 nm at 18 kHz. The measured instantaneous spectral bandwidth (δλ) is 0.085 nm, as measured by an optical spectrum analyzer with a resolution bandwidth of 0.06 nm. This value corresponds to an imaging depth of 3.1 mm in air. Additionally, the averaged optical power of our swept source was 8.2 mW. In UHR FD/SS-OCT using our swept laser, the measured axial resolution was 4.0 μm in air, corresponding to 2.9 μm in tissue (n = 1.35). The sensitivity was measured to be 93.1 dB at a depth of 100 μm. Finally, we obtained retinal images (macular and optic disk) and a corneal image. © 2011 Optical Society of America
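
    The reported numbers are consistent with the standard swept-source OCT relations for a Gaussian spectrum: axial resolution (2 ln 2 / π) · λ0² / Δλ and linewidth-limited imaging depth λ0² / (4 δλ). A quick check with the values from the abstract:

```python
import numpy as np

# Swept-source OCT figures of merit from the reported spectra
lam0 = 1020e-9      # center wavelength (m)
fwhm = 121.5e-9     # FWHM of the tuning spectrum (m)
dlam = 0.085e-9     # instantaneous linewidth (m)

# axial resolution in air: (2 ln 2 / pi) * lam0^2 / FWHM
axial_res = (2 * np.log(2) / np.pi) * lam0**2 / fwhm

# imaging depth set by the instantaneous linewidth: lam0^2 / (4 dlam)
imaging_depth = lam0**2 / (4 * dlam)

print(f"axial resolution ~ {axial_res * 1e6:.1f} um")   # ~3.8 um (4.0 um measured)
print(f"imaging depth ~ {imaging_depth * 1e3:.1f} mm")  # ~3.1 mm, matching the abstract
```

    The small gap between the theoretical 3.8 μm and the measured 4.0 μm axial resolution is typical, since the real tuning spectrum is not perfectly Gaussian.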

  20. Depth image enhancement using perceptual texture priors

    NASA Astrophysics Data System (ADS)

    Bang, Duhyeon; Shim, Hyunjung

    2015-03-01

    A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to its limited power consumption, the depth camera exhibits severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perceptual depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect that our work will be effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.

  1. Monoplane 3D-2D registration of cerebral angiograms based on multi-objective stratified optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, T.; Špiclin, Ž.; Pernuš, F.; Unal, G.

    2017-12-01

    Registration of 3D pre-interventional to 2D intra-interventional medical images has an increasingly important role in surgical planning, navigation and treatment, because it enables the physician to co-locate depth information given by pre-interventional 3D images with the live information in intra-interventional 2D images such as x-ray. Most tasks during image-guided interventions are carried out under a monoplane x-ray, which presents a highly ill-posed problem for state-of-the-art 3D to 2D registration methods. To address the problem of rigid 3D-2D monoplane registration we propose a novel multi-objective stratified parameter optimization, wherein a small set of high-magnitude intensity gradients are matched between the 3D and 2D images. The stratified parameter optimization matches rotation templates to depth templates, the former sampled from projected 3D gradients and the latter from the 2D image gradients, so as to recover the 3D rigid-body rotations and the out-of-plane translation. The objective for matching was the gradient magnitude correlation coefficient, which is invariant to in-plane translation. The in-plane translations are then found by locating the maximum of the gradient phase correlation between the best matching pair of rotation and depth templates. On twenty pairs of 3D and 2D images of ten patients undergoing cerebral endovascular image-guided intervention, 3D to monoplane 2D registration experiments were set up with a rather high range of initial mean target registration errors from 0 to 100 mm. The proposed method effectively reduced the registration error to below 2 mm, which was further refined by a fast iterative method, resulting in a high final registration accuracy (0.40 mm) and a high success rate (> 96%). Taking into account a fast execution time below 10 s, the observed performance of the proposed method shows a high potential for application in clinical image-guidance systems.
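
    The in-plane translation step above relies on the peak of a phase correlation. A minimal sketch of plain intensity phase correlation between two images (the authors correlate gradient phase between matched templates; `phase_correlation_shift` is an illustrative simplification of that step):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Recover the translation between two same-size images.

    The normalized cross-power spectrum of the two FFTs, transformed
    back to the spatial domain, peaks at the relative shift; the peak
    location is wrapped to signed (row, col) shifts.
    """
    fa = np.fft.fft2(a)
    fb = np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates to signed shifts
    shifts = [int(p - s) if p > s // 2 else int(p) for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

    Because the cross-power spectrum is normalized to unit magnitude, the estimate depends only on phase and is insensitive to global intensity scaling, which is why a correlation objective that is itself translation-invariant (the gradient magnitude correlation above) pairs naturally with this step.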

  2. Phase calibration target for quantitative phase imaging with ptychography.

    PubMed

    Godden, T M; Muñiz-Piniella, A; Claverley, J D; Yacoot, A; Humphry, M J

    2016-04-04

Quantitative phase imaging (QPI) utilizes refractive index and thickness variations that lead to optical phase shifts. This gives contrast to images of transparent objects. In quantitative biology, phase images are used to accurately segment cells and calculate properties such as dry mass, volume and proliferation rate. The fidelity of the measured phase shifts is of critical importance in this field. However, to date, there has been no standardized method for characterizing the performance of phase imaging systems. Consequently, there is an increasing need for protocols to test the performance of phase imaging systems using well-defined phase calibration and resolution targets. In this work, we present a candidate for a standardized phase resolution target, and a measurement protocol for the determination of the transfer of spatial frequencies and sensitivity of a phase imaging system. The target has been carefully designed to contain well-defined depth variations over a broadband range of spatial frequencies. In order to demonstrate the utility of the target, we measure quantitative phase images on a ptychographic microscope, and compare the measured optical phase shifts with Atomic Force Microscopy (AFM) topography maps and surface profile measurements from coherence scanning interferometry. The results show that ptychography has fully quantitative nanometer sensitivity in optical path differences over a broadband range of spatial frequencies for feature sizes ranging from micrometers to hundreds of micrometers.
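    The phase shift such a target produces follows directly from the refractive-index contrast and feature thickness; a minimal sketch of the standard relation (the numerical values below are illustrative, not the target's specification):

```python
import math

def phase_shift_rad(n_feature, n_medium, thickness_m, wavelength_m):
    """Optical phase shift introduced by a transparent step:
    delta_phi = 2*pi * (n_feature - n_medium) * t / lambda."""
    return 2 * math.pi * (n_feature - n_medium) * thickness_m / wavelength_m
```

    For example, a step of glass (n = 1.5) in air with thickness equal to one wavelength retards the wavefront by half a cycle.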

  3. Soil-Geomorphic and Paleoclimatic Characteristics of the Fort Bliss Maneuver Areas, Southern New Mexico and Western Texas

    DTIC Science & Technology

    1994-03-07

    archaeological investigations of buried structures, and locating underground pipelines (Teng 1985; Young et al. 1988; Mellett 1990). Detailed subsurface ... imaging with radar usually is done with a portable ground-based system that is designed to differentiate media at depths ranging from 0.5 m to 30 m

  4. Spectrally based mapping of riverbed composition

    USGS Publications Warehouse

    Legleiter, Carl; Stegman, Tobin K.; Overstreet, Brandon T.

    2016-01-01

    Remote sensing methods provide an efficient means of characterizing fluvial systems. This study evaluated the potential to map riverbed composition based on in situ and/or remote measurements of reflectance. Field spectra and substrate photos from the Snake River, Wyoming, USA, were used to identify different sediment facies and degrees of algal development and to quantify their optical characteristics. We hypothesized that accounting for the effects of depth and water column attenuation to isolate the reflectance of the streambed would enhance distinctions among bottom types and facilitate substrate classification. A bottom reflectance retrieval algorithm adapted from coastal research yielded realistic spectra for the 450 to 700 nm range; but bottom reflectance-based substrate classifications, generated using a random forest technique, were no more accurate than classifications derived from above-water field spectra. Additional hypothesis testing indicated that a combination of reflectance magnitude (brightness) and indices of spectral shape provided the most accurate riverbed classifications. Convolving field spectra to the response functions of a multispectral satellite and a hyperspectral imaging system did not reduce classification accuracies, implying that high spectral resolution was not essential. Supervised classifications of algal density produced from hyperspectral data and an inferred bottom reflectance image were not highly accurate, but unsupervised classification of the bottom reflectance image revealed distinct spectrally based clusters, suggesting that such an image could provide additional river information. We attribute the failure of bottom reflectance retrieval to yield more reliable substrate maps to a latent correlation between depth and bottom type. Accounting for the effects of depth might have eliminated a key distinction among substrates and thus reduced discriminatory power. 
Although further, more systematic study across a broader range of fluvial environments is needed to substantiate our initial results, this case study suggests that bed composition in shallow, clear-flowing rivers potentially could be mapped remotely.
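    The finding that reflectance magnitude plus spectral-shape indices gave the most accurate classifications can be illustrated with a toy classifier on synthetic spectra. The study itself used a random forest; the nearest-centroid rule, the two features, and the two bottom-type classes below are simplified stand-ins:

```python
import numpy as np

def spectral_features(refl, wavelengths):
    """Two per-spectrum features: overall brightness and spectral slope."""
    brightness = refl.mean(axis=-1)
    slope = (refl[..., -1] - refl[..., 0]) / (wavelengths[-1] - wavelengths[0])
    return np.stack([brightness, slope], axis=-1)

def nearest_centroid(train_x, train_y, test_x):
    """Assign each sample to the class with the closest feature centroid."""
    classes = np.unique(train_y)
    centroids = np.array([train_x[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]
```

    A bright, spectrally flat substrate (sand-like) and a darker, sloped one (algae-like) separate cleanly in this two-feature space even with measurement noise.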

  5. Integrated Dual Imaging Detector

    NASA Technical Reports Server (NTRS)

    Rust, David M.

    1999-01-01

A new type of image detector was designed to simultaneously analyze the polarization of light at all picture elements in a scene. The Integrated Dual Imaging Detector (IDID) consists of a lenslet array and a polarizing beamsplitter bonded to a commercial charge coupled device (CCD). The IDID simplifies the design and operation of solar vector magnetographs and the imaging polarimeters and spectroscopic imagers used, for example, in atmosphere and solar research. When used in a solar telescope, the IDID can map the vector magnetic fields on the solar surface. Other applications include environmental monitoring, robot vision, and medical diagnoses (through the eye). Innovations in the IDID include (1) two interleaved imaging arrays (one for each polarization plane); (2) large dynamic range (well depth of 10(exp 5) electrons per pixel); (3) simultaneous readout and display of both images; and (4) laptop computer signal processing to produce polarization maps in field situations.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luczkovich, J.J.; Wagner, T.W.; Michalek, J.L.

In order to monitor changes caused by local and global human actions to a coral reef ecosystem, we sea-truthed a natural color Landsat TM image prepared for a coastal region of the northwestern Dominican Republic and recorded average water depth, precise geographical positions, and bottom types (seagrass, 15 sites; coral reef, ten sites; and sand, six sites). There were no significant differences in depth for the bottom type groups. The depths ranged from 0 to 16.1 m. Mean digital counts of seagrass and coral reef sites did not differ significantly in any band. A multivariate analysis of variance using all three bands gave similar results. A ratio of the green/blue bands (TM 2/TM 1) showed there was a spectral shift associated with increasing depth, but not bottom type. Due to small-scale patchiness, seagrass and coral areas were difficult to distinguish, but sandy areas can be distinguished using Landsat TM imagery and our methods. 12 refs.

  7. Multispectral near-IR reflectance imaging of simulated early occlusal lesions: Variation of lesion contrast with lesion depth and severity

    PubMed Central

    Simon, Jacob C.; Chan, Kenneth H.; Darling, Cynthia L.; Fried, Daniel

    2014-01-01

    Background and Objectives Early demineralization appears with high contrast at near-IR wavelengths due to a ten to twenty fold difference in the magnitude of light scattering between sound and demineralized enamel. Water absorption in the near-IR has a significant effect on the lesion contrast and the highest contrast has been measured in spectral regions with higher water absorption. The purpose of this study was to determine how the lesion contrast changes with lesion severity and depth for different spectral regions in the near-IR and compare that range of contrast with visible reflectance and fluorescence. Materials and Methods Forty-four human molars were used in this in vitro study. Teeth were painted with an acid-resistant varnish, leaving a 4×4 mm window on the occlusal surface of each tooth exposed for demineralization. Artificial lesions were produced in the unprotected windows after 12–48 hr exposure to a demineralizing solution at pH-4.5. Near-IR reflectance images were acquired over several near-IR spectral distributions, visible light reflectance, and fluorescence with 405-nm excitation and detection at wavelengths greater than 500-nm. Crossed polarizers were used for reflectance measurements to reduce interference from specular reflectance. Cross polarization optical coherence tomography (CP-OCT) was used to non-destructively assess the depth and severity of demineralization in each sample window. Matching two dimensional CP-OCT images of the lesion depth and integrated reflectivity were compared with the reflectance and fluorescence images to determine how accurately the variation in the lesion contrast represents the variation in the lesion severity. Results Artificial lesions appear more uniform on tooth surfaces exposed to an acid challenge at visible wavelengths than they do in the near-IR. 
Measurements of the lesion depth and severity using CP-OCT show that the lesion severity varies markedly across the sample windows and that the lesion contrast in the visible does not accurately reflect the large variation in the lesion severity. Reflectance measurements at certain near-IR wavelengths more accurately reflect variation in the depth and severity of the lesions. Conclusion The results of the study suggest that near-IR reflectance measurements at longer wavelengths coincident with higher water absorption are better suited for imaging early caries lesions. PMID:24375543
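    The lesion contrast compared across wavelengths can be quantified with a normalized intensity ratio; a sketch assuming one common definition for lesions that reflect more strongly than the surrounding sound enamel (the exact definition used in the study is not given in the abstract):

```python
def lesion_contrast(i_lesion, i_sound):
    """Reflectance contrast of a lesion brighter than sound enamel,
    assuming the definition C = (I_lesion - I_sound) / I_lesion,
    which ranges from 0 (invisible) to 1 (maximum contrast)."""
    return (i_lesion - i_sound) / i_lesion
```

    Under this definition, stronger scattering from demineralized enamel (higher I_lesion) and darker sound tissue (lower I_sound, e.g. at wavelengths with higher water absorption) both push the contrast toward 1.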

  8. Distance-based over-segmentation for single-frame RGB-D images

    NASA Astrophysics Data System (ADS)

    Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao

    2017-11-01

Over-segmentation into super-pixels is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm partitions an image into regions of perceptually similar pixels, but performs poorly when it relies on the color image alone in indoor environments. Fortunately, RGB-D images can improve performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of super-pixels on the image. DBOS fills the holes in depth images to fully utilize the depth information, and applies a SLIC-like framework for fast running. Additionally, depth features such as the plane projection distance are extracted to compute the distance measure at the core of SLIC-like frameworks. Experiments on RGB-D images of the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining comparable speed.
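    The distance at the core of a SLIC-like framework combines a color term with a spatial term normalized by the cluster grid interval; a sketch of how a depth feature could enter it (the depth weight `wd` and the exact feature layout are our assumptions, not the DBOS definition):

```python
import numpy as np

def combined_distance(lab1, lab2, xy1, xy2, depth1, depth2, S, m=10.0, wd=1.0):
    """SLIC-style pixel-to-center distance: CIELAB color term, spatial term
    scaled by the grid step S and compactness m, plus a weighted depth term."""
    dc = np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float))
    ds = np.linalg.norm(np.asarray(xy1, float) - np.asarray(xy2, float))
    dd = abs(depth1 - depth2)
    return float(np.sqrt(dc ** 2 + (ds / S) ** 2 * m ** 2 + wd * dd ** 2))
```

    Larger `m` favors compact, grid-aligned super-pixels; larger `wd` makes super-pixel boundaries snap to depth discontinuities even where color is uniform.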

  9. Depth image super-resolution via semi self-taught learning framework

    NASA Astrophysics Data System (ADS)

    Zhao, Furong; Cao, Zhiguo; Xiao, Yang; Zhang, Xiaodi; Xian, Ke; Li, Ruibo

    2017-06-01

Depth images have recently attracted much attention in computer vision and in high-quality 3D content production for 3DTV and 3D movies. In this paper, we present a new semi self-taught learning framework for enhancing the resolution of depth maps without making use of ancillary color image data at the target resolution or multiple aligned depth maps. Our framework consists of cascaded random forests proceeding from coarse to fine results. We learn the surface information and structure transformations both from a small set of high-quality depth exemplars and from the input depth map itself across different scales. Considering that edges play an important role in depth map quality, we optimize an effective regularized objective computed on the output image space and the input edge space within the random forests. Experiments show the effectiveness and superiority of our method against other techniques with or without applying aligned RGB information.

  10. IRAS images of nearby dark clouds

    NASA Technical Reports Server (NTRS)

    Wood, Douglas O. S.; Myers, Philip C.; Daugherty, Debra A.

    1994-01-01

    We have investigated approximately 100 nearby molecular clouds using the extensive, all-sky database of IRAS. The clouds in this study cover a wide range of physical properties including visual extinction, size, mass, degree of isolation, homogeneity and morphology. IRAS 100 and 60 micron co-added images were used to calculate the 100 micron optical depth of dust in the clouds. These images of dust optical depth compare very well with (12)CO and (13)CO observations, and can be related to H2 column density. From the optical depth images we locate the edges of dark clouds and the dense cores inside them. We have identified a total of 43 `IRAS clouds' (regions with A(sub v) greater than 2) which contain a total of 255 `IRAS cores' (regions with A(sub v) greater than 4) and we catalog their physical properties. We find that the clouds are remarkably filamentary, and that the cores within the clouds are often distributed along the filaments. The largest cores are usually connected to other large cores by filaments. We have developed selection criteria to search the IRAS Point Source Catalog for stars that are likely to be associated with the clouds and we catalog the IRAS sources in each cloud or core. Optically visible stars associated with the clouds have been identified from the Herbig and Bell catalog. From these data we characterize the physical properties of the clouds including their star-formation efficiency.

  11. Measurement of body joint angles for physical therapy based on mean shift tracking using two low cost Kinect images.

    PubMed

    Chen, Y C; Lee, H J; Lin, K H

    2015-08-01

Range of motion (ROM) is commonly used to assess a patient's joint function in physical therapy. Because motion capture systems are generally very expensive, physical therapists mostly use simple rulers to measure patients' joint angles in clinical diagnosis, which suffers from low accuracy, low reliability, and subjectivity. In this study we used color and depth image features from two sets of low-cost Microsoft Kinect sensors to reconstruct 3D joint positions, and then calculated moveable joint angles to assess the ROM. A Gaussian background model is first used to segment the human body from the depth images. The 3D coordinates of the joints are reconstructed from both color and depth images. To track the location of joints throughout the sequence more precisely, we adopt the mean shift algorithm to find the center of voxels upon the joints. The two sets of Kinect are placed three meters away from each other, facing the subject. The joint moveable angles and the motion data are calculated from the positions of the joints frame by frame. To verify the results of our system, we take the results from a motion capture system called VICON as the gold standard. Our 150 test results showed that the deviation of joint moveable angles between those obtained by VICON and our system is about 4 to 8 degrees in six different upper limb exercises, which is acceptable in a clinical environment.
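    Once the 3D joint positions are reconstructed, a joint angle is simply the angle between the two limb-segment vectors meeting at that joint; a minimal sketch (coordinates are illustrative):

```python
import math

def joint_angle_deg(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for the elbow flexion angle."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    # Clamp to [-1, 1] to guard against floating-point round-off.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))
```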

  12. Shadow analysis via the C+K Visioline: A technical note.

    PubMed

    Houser, T; Zerweck, C; Grove, G; Wickett, R

    2017-11-01

This research investigated the ability of shadow analysis (via the Courage + Khazaka Visioline and Image Pro Premiere 9.0 software) to accurately assess the differences in skin topography associated with photoaging. Analyses were performed on impressions collected from a microfinish comparator scale (GAR Electroforming) as well as a series of impressions collected from the crow's feet region of 9 women who represent each point on the Zerweck Crow's Feet classification scale. Analyses were performed using a Courage + Khazaka Visioline VL 650 as well as Image Pro Premiere 9.0 software. Shadow analysis showed an ability to accurately measure groove depth when measuring impressions collected from grooves of known depth. Several shadow analysis parameters showed a correlation with the expert grader ratings of crow's feet when averaging measurements taken from the North and South directions. The Max Depth parameter in particular showed a strong correlation with the expert grader's ratings, which improved when a more sophisticated analysis was performed using Image Pro Premiere. When used properly, shadow analysis is effective at accurately measuring skin surface impressions for differences in skin topography. Shadow analysis is shown to accurately assess the differences across a range of crow's feet severity correlating to a 0-8 grader scale. The Visioline VL 650 is a good tool for this measurement, with room for improvement in analysis which can be achieved through third-party image analysis software. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. SU-F-J-201: Validation Study of Proton Radiography Against CT Data for Quantitative Imaging of Anatomical Changes in Head and Neck Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammi, A; Weber, D; Lomax, A

    2016-06-15

Purpose: In clinical pencil-beam-scanned (PBS) proton therapy, the advantage of the characteristic sharp dose fall-off after the Bragg Peak (BP) becomes a disadvantage if the BP positions of a plan's constituent pencil beams are shifted, e.g. due to anatomical changes. Thus, for fractionated PBS proton therapy, accurate knowledge of the water equivalent path length (WEPL) of the traversed anatomy is critical. In this work we investigate the feasibility of using 2D proton range maps (proton radiography, PR) with the active-scanning gantry at PSI. Methods: We simulated our approach using Monte Carlo (MC) methods to model proton beam interactions in patients using clinical imaging data. We selected six head and neck cases having significant anatomical changes detected in per-treatment CTs. PRs (two, at 0° and 90°) were generated from MC simulations of low-dose pencil beams at 230 MeV. Each beam's residual depth-dose was propagated through the patient geometry (from CT) and detected on exiting the patient anatomy in an ideal depth-resolved detector (e.g. a range telescope). Firstly, to validate the technique, proton radiographs were compared to the ground truth, which was the WEPL from ray-tracing in the patient CT at the pencil beam location. Secondly, WEPL difference maps (per-treatment minus planning imaging timepoints) were then generated to locate the anatomical changes, both in the CT (ground truth) and in the PRs. Binomial classification was performed to evaluate the efficacy of the technique relative to CT. Results: Over the projections simulated over all six patients, 70%, 79% and 95% of the grid points agreed with the ground truth proton range to within ±0.5%, ±1%, and ±3% respectively. The sensitivity, specificity, precision and accuracy were high (mean±1σ: 83±8%, 87±13%, 95±10%, 83±7% respectively).
Conclusion: We show that proton-based radiographic images can accurately monitor patient positioning and provide in vivo range verification, while yielding WEPL information equivalent to a CT scan at a much lower imaging dose.
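    The WEPL referred to above is the line integral of the relative (to water) stopping power along the beam path; a minimal discretized sketch (sampling step and RSP values are illustrative):

```python
import numpy as np

def wepl_mm(rsp_samples, step_mm):
    """Water-equivalent path length: sum of relative-stopping-power samples
    along the ray times the sampling step (a simple Riemann sum)."""
    return float(np.sum(rsp_samples) * step_mm)
```

    For example, 20 mm of bone-like material (RSP ≈ 1.6) followed by 80 mm of soft tissue (RSP ≈ 1.0) is water-equivalent to about 112 mm.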

  14. The selection of the optimal baseline in the front-view monocular vision system

    NASA Astrophysics Data System (ADS)

    Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

In the front-view monocular vision system, the accuracy of solving the depth field is related to the length of the inter-frame baseline and the accuracy of the image matching result. In general, a longer baseline leads to higher precision in solving the depth field. However, at the same time, the difference between the inter-frame images increases, which makes image matching more difficult, decreases matching accuracy, and may ultimately cause the depth-field computation to fail. A common practice is to use tracking-based matching to improve the matching accuracy between images, but this approach is prone to matching drift between images with a large interval, resulting in cumulative matching error, so the accuracy of the recovered depth field remains low. In this paper, we propose a depth field fusion algorithm based on the optimal length of the baseline. Firstly, we analyze the quantitative relationship between the accuracy of the depth field calculation and the length of the baseline between frames, and find the optimal baseline length through extensive experiments; secondly, we introduce the inverse depth filtering technique from sparse SLAM, and solve the depth field under the constraint of the optimal baseline length. Extensive experiments show that our algorithm can effectively eliminate the mismatches caused by image changes, and can still solve the depth field correctly in the large-baseline scene. Our algorithm is superior to the traditional SFM algorithm in time and space complexity. The optimal baseline obtained from a large number of experiments plays a guiding role in the calculation of the depth field in front-view monocular systems.
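    The quantitative link between baseline length and depth precision comes from triangulation; a sketch of the standard first-order relations (the symbols are generic textbook quantities, not the paper's notation):

```python
def depth_m(f_px, baseline_m, disparity_px):
    """Triangulated depth for a rectified image pair: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error_m(f_px, baseline_m, depth, match_err_px=1.0):
    """First-order depth uncertainty dZ ~ Z^2 * dd / (f * B): the error grows
    quadratically with depth and shrinks as the baseline B lengthens -- the
    trade-off against matching difficulty discussed above."""
    return depth ** 2 * match_err_px / (f_px * baseline_m)
```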

  15. Depth map generation using a single image sensor with phase masks.

    PubMed

    Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki

    2016-06-13

Conventional stereo matching systems generate a depth map using two or more digital imaging sensors. Such systems are difficult to use in small cameras because of their high cost and bulky size. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for phase-difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
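    The disparity-measurement step can be illustrated with a plain (non-hierarchical) sum-of-absolute-differences block matcher on a rectified pair; a hierarchical version refines the same idea across scales. This sketch and its function name are ours, not the paper's implementation:

```python
import numpy as np

def sad_disparity(left, right, y, x, max_d, win=5):
    """Disparity at left-image pixel (y, x): slide a window leftwards over
    the right image and keep the shift with the smallest sum of absolute
    differences (SAD)."""
    h = win // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_d, x - h) + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```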

  16. Light field imaging and application analysis in THz

    NASA Astrophysics Data System (ADS)

    Zhang, Hongfei; Su, Bo; He, Jingsuo; Zhang, Cong; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin

    2018-01-01

The light field includes both direction and location information. Light field imaging can capture the whole light field in a single exposure. The four-dimensional light field function model proposed by Levoy, represented by a two-plane parameterization, is adopted here. Acquisition of the light field is based on a microlens array, a camera array, or a mask. We process the light field data to synthesize light field images. Light field data processing techniques include refocused rendering, synthetic aperture imaging, and microscopic imaging. By introducing light field imaging into the THz regime, the efficiency of 3D imaging is higher than that of conventional THz 3D imaging technology. The advantages compared with visible light field imaging include large depth of field, wide dynamic range and true three-dimensional reconstruction. It has broad application prospects.

  17. Characterization of the angular memory effect of scattered light in biological tissues.

    PubMed

    Schott, Sam; Bertolotti, Jacopo; Léger, Jean-Francois; Bourdieu, Laurent; Gigan, Sylvain

    2015-05-18

    High resolution optical microscopy is essential in neuroscience but suffers from scattering in biological tissues and therefore grants access to superficial brain layers only. Recently developed techniques use scattered photons for imaging by exploiting angular correlations in transmitted light and could potentially increase imaging depths. But those correlations ('angular memory effect') are of a very short range and should theoretically be only present behind and not inside scattering media. From measurements on neural tissues and complementary simulations, we find that strong forward scattering in biological tissues can enhance the memory effect range and thus the possible field-of-view by more than an order of magnitude compared to isotropic scattering for ∼1 mm thick tissue layers.

  18. Crustal structure beneath the Blue Mountains terranes and cratonic North America, eastern Oregon, and Idaho, from teleseismic receiver functions

    NASA Astrophysics Data System (ADS)

    Christian Stanciu, A.; Russo, Raymond M.; Mocanu, Victor I.; Bremner, Paul M.; Hongsresawat, Sutatcha; Torpey, Megan E.; VanDecar, John C.; Foster, David A.; Hole, John A.

    2016-07-01

    We present new images of lithospheric structure obtained from P-to-S conversions defined by receiver functions at the 85 broadband seismic stations of the EarthScope IDaho-ORegon experiment. We resolve the crustal thickness beneath the Blue Mountains province and the former western margin of cratonic North America, the geometry of the western Idaho shear zone (WISZ), and the boundary between the Grouse Creek and Farmington provinces. We calculated P-to-S receiver functions using the iterative time domain deconvolution method, and we used the H-k grid search method and common conversion point stacking to image the lithospheric structure. Moho depths beneath the Blue Mountains terranes range from 24 to 34 km, whereas the crust is 32-40 km thick beneath the Idaho batholith and the regions of extended crust of east-central Idaho. The Blue Mountains group Olds Ferry terrane is characterized by the thinnest crust in the study area, 24 km thick. There is a clear break in the continuity of the Moho across the WISZ, with depths increasing from 28 km west of the shear zone to 36 km just east of its surface expression. The presence of a strong midcrustal converting interface at 18 km depth beneath the Idaho batholith extending 20 km east of the WISZ indicates tectonic wedging in this region. A north striking 7 km offset in Moho depth, thinning to the east, is present beneath the Lost River Range and Pahsimeroi Valley; we identify this sharp offset as the boundary that juxtaposes the Archean Grouse Creek block with the Paleoproterozoic Farmington zone.
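    Crustal thickness enters a receiver-function analysis through the delay of the Moho Ps conversion behind the direct P arrival; a sketch of the standard single-layer relation underlying H-k style grid searches (the velocity and ray-parameter values are illustrative defaults, not the study's model):

```python
import math

def ps_delay_s(h_km, vp=6.3, vs=3.6, p=0.06):
    """Ps-minus-P delay time (s) for a crustal layer of thickness h_km (km),
    P velocity vp, S velocity vs (km/s), and ray parameter p (s/km)."""
    return h_km * (math.sqrt(vs ** -2 - p ** 2) - math.sqrt(vp ** -2 - p ** 2))

def moho_depth_km(t_ps, vp=6.3, vs=3.6, p=0.06):
    """Invert an observed Ps delay back to crustal thickness."""
    return t_ps / (math.sqrt(vs ** -2 - p ** 2) - math.sqrt(vp ** -2 - p ** 2))
```

    The trade-off between thickness and the assumed Vp/Vs ratio is why H-k stacking searches over both rather than inverting a single delay time.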

  19. Dense real-time stereo matching using memory efficient semi-global-matching variant based on FPGAs

    NASA Astrophysics Data System (ADS)

    Buder, Maximilian

    2012-06-01

This paper presents a stereo image matching system that takes advantage of a global image matching method. The system is designed to provide depth information for mobile robotic applications. Typical tasks of the proposed system are to assist in obstacle avoidance, SLAM and path planning. Mobile robots pose strong requirements on size, energy consumption, reliability and output quality of the image matching subsystem. Currently available systems either rely on active sensors or on local stereo image matching algorithms. The former are only suitable in controlled environments, while the latter suffer from low-quality depth maps. Top-ranking quality results are only achieved by an iterative approach using global image matching and color segmentation techniques, which are computationally demanding and therefore difficult to execute in real time. Attempts were made to still reach real-time performance with global methods by simplifying the routines, but the resulting depth maps are then almost comparable to local methods. A semi-global algorithm was proposed earlier that shows both very good image matching results and relatively simple operations. A memory-efficient variant of the Semi-Global-Matching algorithm is reviewed and adapted for an implementation based on reconfigurable hardware. The implementation is suitable for real-time execution in the field of robotics. It will be shown that the modified version of the efficient Semi-Global-Matching method delivers results equivalent to the original algorithm on the Middlebury dataset. The system has proven to be capable of processing VGA-sized images with a disparity resolution of 64 pixels at 33 frames per second on low-cost to mid-range hardware. If the focus is shifted to higher image resolution, 1024×1024 stereo frames may be processed with the same hardware at 10 fps, with the disparity resolution settings unchanged.
A mobile system that covers preprocessing, matching and interfacing operations is also presented.
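    The recursion that makes Semi-Global Matching hardware-friendly aggregates pixelwise matching costs along 1D paths; a minimal single-path (single-scanline) sketch, with the penalties P1/P2 chosen arbitrarily:

```python
import numpy as np

def aggregate_path(cost, p1=1.0, p2=8.0):
    """SGM cost aggregation along one scanline.

    cost: (W, D) array of matching costs for W pixels and D disparities.
    A one-level disparity change pays p1, larger jumps pay p2, and the
    previous pixel's minimum is subtracted to keep values bounded.
    """
    w, d_levels = cost.shape
    agg = np.zeros_like(cost, dtype=float)
    agg[0] = cost[0]
    for x in range(1, w):
        prev = agg[x - 1]
        m = prev.min()
        for d in range(d_levels):
            best = min(
                prev[d],
                m + p2,
                (prev[d - 1] + p1) if d > 0 else np.inf,
                (prev[d + 1] + p1) if d < d_levels - 1 else np.inf,
            )
            agg[x, d] = cost[x, d] + best - m
    return agg
```

    The full algorithm sums such aggregations over several path directions; because each path needs only the previous pixel's costs, the memory footprint stays small, which is what FPGA implementations exploit.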

  20. Real-time motion artifacts compensation of ToF sensors data on GPU

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Hoegg, Thomas; Kolb, Andreas

    2013-05-01

    Over the last decade, ToF sensors attracted many computer vision and graphics researchers. Nevertheless, ToF devices suffer from severe motion artifacts for dynamic scenes as well as low-resolution depth data which strongly justifies the importance of a valid correction. To counterbalance this effect, a pre-processing approach is introduced to greatly improve range image data on dynamic scenes. We first demonstrate the robustness of our approach using simulated data to finally validate our method using sensor range data. Our GPU-based processing pipeline enhances range data reliability in real-time.

  1. Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics

    PubMed Central

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao

    2016-01-01

For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we examine the feasibility and practicability of a well-known “extended DoF” (EDoF) technique, or “wavefront coding,” by building real-time long-range iris recognition and performing large-scale iris recognition. The keys to the success of long-range iris recognition include a long DoF and image quality invariance across various object distances, which is strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, 400-mm focal length and F/6.3 optics over a 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, which are based on 3328 iris images in total, the EDoF factor can achieve a result 3.71 times better than the original system without a loss of recognition accuracy. PMID:27897976

  2. A novel automated method for doing registration and 3D reconstruction from multi-modal RGB/IR image sequences

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor have become increasingly popular for use in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational methods optimization algorithm to map the optical flow fields computed from different wavelength images. This results in the alignment of the flow fields, which in turn produce correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.

  3. Optical coherence tomography using the Niris system in otolaryngology

    NASA Astrophysics Data System (ADS)

    Rubinstein, Marc; Armstrong, William B.; Djalilian, Hamid R.; Crumley, Roger L.; Kim, Jason H.; Nguyen, Quoc A.; Foulad, Allen I.; Ghasri, Pedram E.; Wong, Brian J. F.

    2009-02-01

    Objectives: To determine the feasibility and accuracy of the Niris optical coherence tomography (OCT) system in imaging mucosal abnormalities of the head and neck. The Niris system is the first commercially available OCT device for applications outside ophthalmology. Methods: We obtained OCT images of benign, premalignant and malignant lesions throughout the head and neck using the Niris OCT imaging system (Imalux, Cleveland, OH). This imaging system has a tissue penetration depth of approximately 1-2 mm, a scanning range of 2 mm, and a spatial depth resolution of approximately 10-20 μm. Imaging was performed in the outpatient setting and in the operating room using a flexible probe. Results: High-resolution cross-sectional images from the oral cavity, nasal cavity, ears and larynx showed distinct layers; structures such as the mucosal layer, basement membrane and lamina propria were clearly identified. In the pathology images, disruption of the basement membrane was clearly shown. Device set-up took approximately 5 minutes and image acquisition was rapid. The system can be operated by the person performing the exam. Conclusions: The Niris system is noninvasive and easy to incorporate into the operating room and the clinic; it requires minimal set-up and only one person to operate. OCT offers high-resolution images of the microanatomy of different sites. OCT imaging with the Niris device potentially offers an efficient, quick and reliable imaging modality for guiding surgical biopsies, intra-operative decision making, and therapeutic options for otolaryngologic pathologies and premalignant disease.

  4. Mapping the Moho with seismic surface waves: Sensitivity, resolution, and recommended inversion strategies

    NASA Astrophysics Data System (ADS)

    Lebedev, Sergei; Adam, Joanne; Meier, Thomas

    2013-04-01

    Seismic surface waves have been used to study the Earth's crust since the early days of modern seismology. In the last decade, surface-wave crustal imaging has been rejuvenated by the emergence of new array techniques (ambient-noise and teleseismic interferometry). The strong sensitivity of both Rayleigh and Love waves to the Moho is evident from a mere visual inspection of their dispersion curves or waveforms. Yet, strong trade-offs between the Moho depth and crustal and mantle structure in surface-wave inversions have prompted doubts regarding their capacity to resolve the Moho. Although the Moho depth has been an inversion parameter in numerous surface-wave studies, the resolution of Moho properties yielded by a surface-wave inversion is still somewhat uncertain and controversial. We use model-space mapping to elucidate surface waves' sensitivity to the Moho depth and the resolution of their inversion for it. If seismic wavespeeds within the crust and upper mantle are known, then Moho-depth variations of a few kilometres produce large (over 1 per cent) perturbations in phase velocities. However, in inversions of surface-wave data with no a priori information (wavespeeds not known), strong Moho-depth/shear-speed trade-offs will mask about 90 per cent of the Moho-depth signal, leaving remaining phase-velocity perturbations of only 0.1-0.2 per cent. In order to resolve the Moho with surface waves alone, errors in the data must thus be small (up to 0.2 per cent for resolving continental Moho). If the errors are larger, Moho-depth resolution is not guaranteed and depends on the distribution of errors with period, with errors that persist over broad period ranges particularly damaging. An effective strategy for the inversion of surface-wave data alone for the Moho depth is to first constrain crustal and upper-mantle structure by inversion over a broad period range, and then determine the Moho depth by inversion in the narrow period range most sensitive to it, with the first-step results used as a reference. We illustrate this strategy with an application to data from the Kaapvaal Craton. Prior information on crustal and mantle structure reduces the trade-offs and thus enables resolving the Moho depth with noisier data; such information should be sought and used whenever available (as has been done, explicitly or implicitly, in many previous studies). Joint analysis or inversion of surface-wave and other data (receiver functions, topography, gravity) can reduce uncertainties further and facilitate Moho mapping. Alone or as part of multi-disciplinary datasets, surface-wave data offer unique sensitivity to crustal and upper-mantle structure and are becoming increasingly important in the seismic imaging of the crust and the Moho. Reference: Lebedev, S., Adam, J., Meier, T. Mapping the Moho with seismic surface waves: A review, resolution analysis, and recommended inversion strategies. Tectonophysics, "Moho" special issue, doi:10.1016/j.tecto.2012.12.030, 2013.

  5. Real-time optically sectioned wide-field microscopy employing structured light illumination and a CMOS detector

    NASA Astrophysics Data System (ADS)

    Mitic, Jelena; Anhut, Tiemo; Serov, Alexandre; Lasser, Theo; Bourquin, Stephane

    2003-07-01

    Real-time optically sectioned microscopy is demonstrated using an AC-sensitive detection concept realized with a smart CMOS image sensor and structured light illumination from a continuously moving periodic pattern. We describe two detection systems, based on CMOS image sensors, for the detection and on-chip processing of sectioned images in real time. A region of interest is sampled at a high frame rate, and the demodulated signal delivered by the detector corresponds to the depth-discriminated image of the sample. The measured FWHM of the axial response depends on the spatial frequency of the projected grid illumination and is in the micrometer range. The effect of using broadband incoherent illumination is discussed. The performance of these systems is demonstrated by imaging technical as well as biological samples.
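    The on-chip AC demodulation described here has a well-known offline analogue: with three grid images phase-shifted by 2π/3, the root-sum-square of pairwise differences rejects unmodulated (out-of-focus) light and keeps the modulated (in-focus) component. A synthetic sketch of that demodulation (not the paper's sensor-level implementation):

```python
import numpy as np

def sectioned_image(i1, i2, i3):
    """Square-law demodulation of three grid images phase-shifted by 2*pi/3.

    Pairwise differences cancel the unmodulated (defocused) background;
    the modulated in-focus component survives.
    """
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

# Synthetic check: modulated in-focus signal plus constant defocused haze.
x = np.linspace(0, 4 * np.pi, 256)
haze = 5.0                                    # out-of-focus background
frames = [haze + 1.0 * (1 + np.cos(x + p))
          for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
sec = sectioned_image(*frames)
# sec is flat (the grid pattern demodulates away) and independent of `haze`
```

    For a unit modulation amplitude the result is the constant sqrt(4.5) ≈ 2.12 everywhere, regardless of the background level.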

  6. Photon-efficient super-resolution laser radar

    NASA Astrophysics Data System (ADS)

    Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.

    2017-08-01

    The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained-optimization framework to address extreme photon scarcity and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm inspired by sparse pursuit, and a convex optimization heuristic that incorporates image total-variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since the proposed method can super-resolve depth features from small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and in through-the-skin biomedical imaging applications.
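    The greedy, sparse-pursuit-style route can be illustrated with a toy 1-D matching pursuit: repeatedly pick the kernel shift that best explains the residual and subtract it. This sketch only shows that idea; the paper's algorithm additionally handles photon (Poisson) statistics and 2-D depth maps, and all numbers below are made up:

```python
import numpy as np

def greedy_deconvolve(y, kernel, k):
    """Toy 1-D matching pursuit: recover k spikes from y = x * kernel."""
    n = len(y)
    base = np.zeros(n)
    base[:len(kernel)] = kernel
    D = np.stack([np.roll(base, s) for s in range(n)])  # all shifts
    D = D / np.linalg.norm(kernel)                      # unit-norm atoms
    r, x = y.astype(float).copy(), np.zeros(n)
    for _ in range(k):
        c = D @ r                       # correlate residual with each atom
        i = int(np.argmax(np.abs(c)))   # best-matching shift
        x[i] += c[i]
        r -= c[i] * D[i]
    return x / np.linalg.norm(kernel)   # back to spike amplitudes

# Two well-separated spikes blurred by a small kernel.
kernel = np.array([1.0, 2.0, 1.0])
y = np.zeros(64)
for pos, amp in [(10, 3.0), (40, 2.0)]:
    y[pos:pos + 3] += amp * kernel
est = greedy_deconvolve(y, kernel, k=2)
# est recovers amplitude 3.0 at index 10 and 2.0 at index 40
```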

  7. Feasibility of spatial frequency-domain imaging for monitoring palpable breast lesions

    NASA Astrophysics Data System (ADS)

    Robbins, Constance M.; Raghavan, Guruprasad; Antaki, James F.; Kainerstorfer, Jana M.

    2017-12-01

    In breast cancer diagnosis and therapy monitoring, there is a need for frequent, noninvasive evaluation of disease progression. Breast tumors differ from healthy tissue in mechanical stiffness as well as optical properties, which allows optical methods to detect and monitor breast lesions noninvasively. Spatial frequency-domain imaging (SFDI) is a reflectance-based diffuse optical method that can yield two-dimensional images of absolute optical properties of tissue with an inexpensive and portable system, although its depth penetration is limited. Since the absorption coefficient of breast tissue is relatively low and the tissue is quite flexible, there is an opportunity to compress the tissue and bring stiff, palpable breast lesions within the detection range of SFDI. Sixteen breast tissue-mimicking phantoms were fabricated containing stiffer, more highly absorbing tumor-mimicking inclusions of varying absorption contrast and depth. These phantoms were imaged with an SFDI system at five levels of compression. An increase in absorption contrast was observed with compression, and reliable detection of each inclusion was achieved when compression was sufficient to bring the inclusion center within ~12 mm of the phantom surface. At the highest compression level, the contrasts achieved with this system were comparable to those measured with single source-detector near-infrared spectroscopy.

  8. Extended depth of field system for long distance iris acquisition

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Lin; Hsieh, Sheng-Hsun; Hung, Kuo-En; Yang, Shi-Wen; Li, Yung-Hui; Tien, Chung-Hao

    2012-10-01

    Using biometric signatures for identity recognition has been practiced for centuries. Recently, iris recognition has attracted much attention due to its high accuracy and stability. The texture of the iris provides a signature that is unique to each subject. Currently, most commercial iris recognition systems acquire images at distances of less than 50 cm, a serious constraint that must be overcome for uses such as airport access or entrances requiring a high turnover rate. In order to capture iris patterns from a distance, we developed a telephoto imaging system combined with image processing techniques. By using a cubic phase mask positioned in front of the camera, the point spread function was kept constant over a wide range of defocus. With an adequate decoding filter, the blurred image was restored, and the working distance between subject and camera was extended beyond 3 m with a 500-mm focal length and an F/6.3 aperture. The simulation and experimental results validated the proposed scheme: the depth of focus of the iris camera was extended threefold over traditional optics while keeping sufficient recognition accuracy.
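    The defocus invariance produced by a cubic phase mask is easy to see numerically in a 1-D pupil model: compare the similarity between in-focus and defocused PSFs with and without the cubic term. A sketch under idealized assumptions (normalized pupil coordinates, phase strengths in radians chosen for illustration only):

```python
import numpy as np

def psf_1d(alpha, defocus, n=512, pad=2048):
    """1-D PSF of a pupil with cubic phase alpha*u^3 plus defocus*u^2."""
    u = np.linspace(-1, 1, n)
    pupil = np.exp(1j * (alpha * u ** 3 + defocus * u ** 2))
    psf = np.abs(np.fft.fft(pupil, pad)) ** 2
    return psf / psf.sum()

def similarity(a, b):
    """Cosine similarity between two PSFs (1.0 = identical shape)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In-focus vs. defocused (5 rad) PSFs, with and without a cubic mask.
clear = similarity(psf_1d(0.0, 0.0), psf_1d(0.0, 5.0))
cubic = similarity(psf_1d(30.0, 0.0), psf_1d(30.0, 5.0))
```

    The cubic-mask PSF stays much more similar under defocus than the clear-aperture PSF, which is what lets a single fixed decoding filter restore images over the extended depth of focus.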

  9. High-spatial-resolution sub-surface imaging using a laser-based acoustic microscopy technique.

    PubMed

    Balogun, Oluwaseyi; Cole, Garrett D; Huber, Robert; Chinn, Diane; Murray, Todd W; Spicer, James B

    2011-01-01

    Scanning acoustic microscopy techniques operating at frequencies in the gigahertz range are suitable for the elastic characterization and interior imaging of solid media with micrometer-scale spatial resolution. Acoustic wave propagation at these frequencies is strongly limited by energy losses, particularly from attenuation in the coupling media used to transmit ultrasound to a specimen, leading to a decrease in the specimen depth that can be interrogated. In this work, a laser-based acoustic microscopy technique is presented that uses a pulsed laser source for the generation of broadband acoustic waves and an optical interferometer for detection. The use of a 900-ps microchip pulsed laser facilitates the generation of acoustic waves with frequencies extending up to 1 GHz, which allows for the resolution of micrometer-scale features in a specimen. Furthermore, the combination of optical generation and detection approaches eliminates the need for an ultrasonic coupling medium, and allows for elastic characterization and interior imaging at penetration depths on the order of several hundred micrometers. Experimental results illustrating the use of the laser-based acoustic microscopy technique for imaging micrometer-scale subsurface geometrical features in a 70-μm-thick single-crystal silicon wafer with a (100) orientation are presented.

  10. A 100-200 MHz ultrasound biomicroscope.

    PubMed

    Knspik, D A; Starkoski, B; Pavlin, C J; Foster, F S

    2000-01-01

    The development of higher-frequency ultrasound imaging systems affords a unique opportunity to visualize living tissue at the microscopic level. This work was undertaken to assess the potential of in vivo ultrasound imaging in the 100-200 MHz range. Spherically focused lithium niobate transducers were fabricated, and the properties of a 200 MHz center-frequency device are described in detail. This transducer showed good sensitivity, with an insertion loss of 18 dB at 200 MHz. Resolution of 14 μm in the lateral direction and 12 μm in the axial direction was achieved with f/1.14 focusing. A linear mechanical scan system and a scan converter were used to generate B-scan images at frame rates up to 12 frames per second. System performance in B-mode imaging is limited by frequency-dependent attenuation in tissues. An alternative technique, zone-focus image collection, was investigated to extend the depth of field. Images of coronary arteries, the eye, and skin are presented along with some preliminary correlations with histology. These results demonstrate the feasibility of ultrasound biomicroscopy in the 100-200 MHz range. Further development of ultrasound backscatter imaging at frequencies up to and above 200 MHz will contribute valuable information about tissue microstructure.
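    The reported resolution figures are close to what a back-of-the-envelope diffraction estimate gives. Assuming a soft-tissue/water sound speed of about 1540 m/s (an idealized textbook value, not from the paper):

```python
# Back-of-the-envelope check of the reported lateral resolution.
c = 1540.0                   # sound speed in water/soft tissue, m/s (assumed)
f = 200e6                    # 200 MHz center frequency
wavelength = c / f           # ~7.7 micrometres
lateral = 1.14 * wavelength  # roughly f-number x wavelength for f/1.14 focusing
# ~8.8 um diffraction-limited spot; the measured 14 um is of the same order,
# with the difference attributable to bandwidth, apodization and aberrations.
```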

  11. Time-resolved multi-mass ion imaging: Femtosecond UV-VUV pump-probe spectroscopy with the PImMS camera.

    PubMed

    Forbes, Ruaridh; Makhija, Varun; Veyrinas, Kévin; Stolow, Albert; Lee, Jason W L; Burt, Michael; Brouard, Mark; Vallance, Claire; Wilkinson, Iain; Lausten, Rune; Hockett, Paul

    2017-07-07

    The Pixel-Imaging Mass Spectrometry (PImMS) camera allows for 3D charged particle imaging measurements, in which the particle time-of-flight is recorded along with (x, y) position. Coupling the PImMS camera to an ultrafast pump-probe velocity-map imaging spectroscopy apparatus therefore provides a route to time-resolved multi-mass ion imaging, with both high count rates and large dynamic range, thus allowing for rapid measurements of complex photofragmentation dynamics. Furthermore, the use of vacuum ultraviolet wavelengths for the probe pulse allows for an enhanced observation window for the study of excited state molecular dynamics in small polyatomic molecules having relatively high ionization potentials. Herein, preliminary time-resolved multi-mass imaging results from C2F3I photolysis are presented. The experiments utilized femtosecond VUV and UV (160.8 nm and 267 nm) pump and probe laser pulses in order to demonstrate and explore this new time-resolved experimental ion imaging configuration. The data indicate the depth and power of this measurement modality, with a range of photofragments readily observed, and many indications of complex underlying wavepacket dynamics on the excited state(s) prepared.
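    Assigning masses to the recorded time-of-flight stack uses the standard TOF relation t ≈ t0 + k·sqrt(m/z), which two known peaks suffice to calibrate. A generic sketch (the reference peaks and flight times below are hypothetical, not instrument values):

```python
import math

def calibrate(t1, mz1, t2, mz2):
    """Solve t = t0 + k*sqrt(m/z) from two reference peaks."""
    k = (t2 - t1) / (math.sqrt(mz2) - math.sqrt(mz1))
    t0 = t1 - k * math.sqrt(mz1)
    return t0, k

def mass_at(t, t0, k):
    """Invert the calibration: m/z for an arrival time t."""
    return ((t - t0) / k) ** 2

# Hypothetical reference peaks, e.g. CF+ (m/z 31) at 3.30 us and
# I+ (m/z 127) at 6.20 us:
t0, k = calibrate(3.30e-6, 31.0, 6.20e-6, 127.0)
```

    Any other arrival time in the stack can then be mapped to m/z with `mass_at`, which is how one TOF axis yields simultaneous images of every fragment mass.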

  12. SU-E-T-451: Accuracy and Application of the Standard Imaging W1 Scintillator Dosimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowalski, M; McEwen, M

    2014-06-01

    Purpose: To evaluate the Standard Imaging W1 scintillator dosimeter in a range of clinical radiation beams to determine its range of possible applications. Methods: The W1 scintillator is a small perturbation-free dosimeter that is of interest in absolute and relative clinical dosimetry due to its small size and water equivalence. A single version of this detector was evaluated in Co-60 and linac photon and electron beams to investigate the following: linearity, sensitivity, precision, and dependence on electrometer type. In addition, depth-dose and cross-plane profiles were obtained in both photon and electron beams and compared with data obtained with well-behaved ionization chambers. Results: In linac beams the precision and linearity were very impressive, with typical values of 0.3% and 0.1%, respectively. Performance in a Co-60 beam was much poorer (approximately three times worse), and it is not clear whether this is due to the lower signal current or the effect of the continuous beam (rather than the pulsed beam of the linac measurements). There was no significant difference in the detector reading when using either the recommended SI Supermax electrometer or two independent high-quality electrometers, except at low signal levels, where the Supermax exhibited an apparent threshold effect, preventing the measurement of the bremsstrahlung background in electron depth-dose curves. Comparisons with ion chamber measurements in linac beams were somewhat variable: good agreement was seen for cross-profiles (photon and electron beams) and electron-beam depth-dose curves, generally within the 0.3% precision of the scintillator, but systematic differences were observed as a function of measurement depth in photon-beam depth-dose curves. Conclusion: A first look would suggest that the W1 scintillator has applications beyond small-field dosimetry, but performance appears to be limited to higher dose-rate and/or pulsed radiation beams. Further work is required to resolve discrepancies compared to ion chambers.

  13. Improving depth estimation from a plenoptic camera by patterned illumination

    NASA Astrophysics Data System (ADS)

    Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.

    2015-05-01

    Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the light field from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device, and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, synthetically manipulate the aperture, and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration, and in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.

  14. Needle-based polarization-sensitive OCT of breast tumor (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Villiger, Martin; Lorenser, Dirk; McLaughlin, Robert A.; Quirk, Bryden C.; Kirk, Rodney W.; Bouma, Brett E.; Sampson, David D.

    2016-03-01

    OCT imaging through miniature needle probes has extended the range of OCT and enabled structural imaging deep inside breast tissue, with the potential to assist in the intraoperative assessment of tumor margins. However, in many situations, scattering contrast alone is insufficient to clearly identify and delineate malignant areas. Here, we present a portable, depth-encoded polarization-sensitive OCT system, connected to a miniature needle probe. From the measured polarization states we constructed the tissue Mueller matrix at each sample location and improved the accuracy of the measured polarization states through incoherent averaging before retrieving the depth-resolved tissue birefringence. With the Mueller matrix at hand, additional polarization properties such as depolarization are readily available. We then imaged freshly excised breast tissue from a patient undergoing lumpectomy. The reconstructed local retardation highlighted regions of connective tissue, which exhibited birefringence due to the abundance of collagen fibers, and offered excellent contrast to areas of malignant tissue, which exhibited less birefringence due to their different tissue composition. Results were validated against co-located histology sections. The combination of needle-based imaging with the complementary contrast provided by polarization-sensitive analysis offers a powerful instrument for advanced tissue imaging and has potential to aid in the assessment of tumor margins during the resection of breast cancer.

  15. Optical coherence tomography use in the diagnosis of enamel defects

    NASA Astrophysics Data System (ADS)

    Al-Azri, Khalifa; Melita, Lucia N.; Strange, Adam P.; Festy, Frederic; Al-Jawad, Maisoon; Cook, Richard; Parekh, Susan; Bozec, Laurent

    2016-03-01

    Molar incisor hypomineralization (MIH) affects the permanent incisors and molars, whose undermineralized matrix manifests as lesions ranging from white and yellow/brown opacities to crumbling enamel incapable of withstanding normal occlusal forces and function. Diagnosing the condition involves clinical and radiographic examination of these teeth, with known limitations in determining the depth extent of the enamel defects in particular. Optical coherence tomography (OCT) is an emerging hard- and soft-tissue imaging technique, which was investigated as a potential new diagnostic method in dentistry. A comparison between the diagnostic potential of the conventional methods and OCT was conducted. Compared to conventional imaging methods, OCT gave more information on the structure of the enamel defects as well as the depth extent of the defects into the enamel. Different types of enamel defects were compared, each presenting a unique identifiable pattern when imaged using OCT. Additionally, advanced methods of OCT image analysis, including backscattered-light intensity profile analysis and en face reconstruction, were performed. Both methods confirmed the potential of OCT for diagnosing enamel defects. In conclusion, OCT imaging enabled identification of the type of enamel defect and determination of the extent of the defects in MIH, with the advantage of being a radiation-free diagnostic technique.

  16. Method to optimize patch size based on spatial frequency response in image rendering of the light field

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Wang, Yanan; Zhu, Zhenhao; Su, Jinhui

    2018-05-01

    A focused plenoptic camera can effectively transform angular and spatial information to yield a refocused rendered image with high resolution. However, choosing a proper patch size poses a significant problem for the image-rendering algorithm. Using a spatial frequency response measurement, a method to obtain a suitable patch size is presented. By evaluating the spatial frequency response curves, the optimized patch size can be obtained quickly and easily. Moreover, the range of depth over which images can be rendered without artifacts can be estimated. Experiments show that images rendered using the frequency-response-based patch size agree with the theoretical calculation, indicating that this is an effective way to determine the patch size. This study may provide support for light-field image rendering.

  17. X-ray-induced acoustic computed tomography of concrete infrastructure

    NASA Astrophysics Data System (ADS)

    Tang, Shanshan; Ramseyer, Chris; Samant, Pratik; Xiang, Liangzhong

    2018-02-01

    X-ray-induced Acoustic Computed Tomography (XACT) takes advantage of both X-ray absorption contrast and high ultrasonic resolution in a single imaging modality by making use of the thermoacoustic effect. In XACT, X-ray absorption by defects and other structures in concrete create thermally induced pressure jumps that launch ultrasonic waves, which are then received by acoustic detectors to form images. In this research, XACT imaging was used to non-destructively test and identify defects in concrete. For concrete structures, we conclude that XACT imaging allows multiscale imaging at depths ranging from centimeters to meters, with spatial resolutions from sub-millimeter to centimeters. XACT imaging also holds promise for single-side testing of concrete infrastructure and provides an optimal solution for nondestructive inspection of existing bridges, pavement, nuclear power plants, and other concrete infrastructure.

  18. Extending the fundamental imaging-depth limit of multi-photon microscopy by imaging with photo-activatable fluorophores.

    PubMed

    Chen, Zhixing; Wei, Lu; Zhu, Xinxin; Min, Wei

    2012-08-13

    It is highly desirable to be able to optically probe biological activities deep inside live organisms. By employing spatially confined excitation via a nonlinear transition, multiphoton fluorescence microscopy has become indispensable for imaging scattering samples. However, as the incident laser power drops exponentially with imaging depth due to scattering loss, the out-of-focus fluorescence eventually overwhelms the in-focus signal. The resulting loss of imaging contrast defines a fundamental imaging-depth limit, which cannot be overcome by increasing excitation intensity. Herein we propose to significantly extend this depth limit by multiphoton activation and imaging (MPAI) of photo-activatable fluorophores. The imaging contrast is drastically improved due to the disparity created in space between bright and dark quantum states. We demonstrate this new principle by both analytical theory and experiments on tissue phantoms labeled with synthetic caged fluorescein dye or genetically encodable photoactivatable GFP.
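    The depth limit described here can be captured by a toy model: ballistic two-photon excitation at the focus decays roughly as exp(-2z/ls) with scattering length ls, while near-surface background fluorescence stays roughly constant, so contrast falls exponentially and crosses unity at a finite depth no amount of laser power can push past. A sketch under these simplifying assumptions (all numbers illustrative):

```python
import numpy as np

ls = 200.0                       # scattering mean free path, micrometres (assumed)
z = np.linspace(0, 2000, 2001)   # imaging depth, micrometres
signal = np.exp(-2 * z / ls)     # ballistic two-photon focal signal
background = 1e-4                # near-surface fluorescence, taken depth-independent
contrast = signal / background
# Depth at which contrast drops to 1: the analogue of the fundamental limit.
z_limit = z[np.argmin(np.abs(contrast - 1.0))]
# Here z_limit is ~921 um = (ls/2) * ln(1/background); raising laser power
# scales signal and background together, leaving z_limit unchanged.
```

    MPAI attacks the `background` term directly, by keeping out-of-focus fluorophores in a dark state, which is why it can move the limit when intensity alone cannot.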

  19. Compensation method for the influence of angle of view on animal temperature measurement using thermal imaging camera combined with depth image.

    PubMed

    Jiao, Leizi; Dong, Daming; Zhao, Xiande; Han, Pengcheng

    2016-12-01

    In this study, we propose an animal surface-temperature measurement method based on a Kinect sensor and an infrared thermal imager to facilitate the screening of animals with febrile diseases. Because of animals' random motion and small surface-temperature variation, the influence of the angle of view on temperature measurement is significant. The proposed method compensates for the temperature measurement error caused by the angle of view. First, we analyzed the relationship between measured temperature and angle of view and established a mathematical model compensating for the influence of the angle of view, with a correlation coefficient above 0.99. Second, a fusion method for depth and infrared thermal images was established for synchronous image capture with the Kinect sensor and thermal imager, and the angle of view of each pixel was calculated. According to the experimental results, without compensation the temperature image measured at angles of view of 74° to 76° differed by more than 2°C from that measured at 0°; after compensation, the difference was only 0.03-1.2°C. This method is applicable for real-time compensation of errors caused by the angle of view during temperature measurement with an infrared thermal imager. Copyright © 2016 Elsevier Ltd. All rights reserved.
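    The abstract does not give the fitted model's functional form, but the structure of such a correction is simple: model how the apparent reading falls toward ambient with viewing angle, then invert that model per pixel using the angle computed from the depth image. A purely hypothetical cosine-type sketch (not the paper's fitted model; exponent and temperatures are made up):

```python
import math

def compensate(t_measured, t_ambient, theta_deg, p=0.6):
    """Correct an apparent temperature for viewing angle.

    Hypothetical forward model: the radiometric signal falls off with
    viewing angle theta, pulling the reading toward ambient:
        T_meas = T_amb + (T_true - T_amb) * cos(theta)**p
    This function inverts that model to recover T_true.
    """
    w = math.cos(math.radians(theta_deg)) ** p
    return t_ambient + (t_measured - t_ambient) / w

# A reading of 38.0 C taken at 60 deg off-normal with 20 C ambient
# (illustrative numbers) is corrected upward:
t_true = compensate(38.0, 20.0, 60.0)
```

    In the real system this inversion would run per pixel, with `theta_deg` taken from the fused Kinect depth map and the model parameters from the paper's calibration fit.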

  20. Wide-bandwidth, wide-beamwidth, high-resolution, millimeter-wave imaging for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Fernandes, Justin L.; Tedeschi, Jonathan R.; McMakin, Douglas L.; Jones, A. Mark; Lechelt, Wayne M.; Severtsen, Ronald H.

    2013-05-01

    Active millimeter-wave imaging is currently being used for personnel screening at airports and other high-security facilities. The cylindrical imaging techniques used in the deployed systems are based on licensed technology developed at the Pacific Northwest National Laboratory. The cylindrical and a related planar imaging technique form three-dimensional images by scanning a diverging-beam swept-frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images of the person being screened. The resolution, clothing penetration, and image illumination quality obtained with these techniques can be significantly enhanced through the selection of the aperture size, antenna beamwidth, center frequency, and bandwidth. The lateral resolution can be improved by increasing the center frequency or by using a larger antenna beamwidth. The wide-beamwidth approach can significantly improve illumination quality relative to a higher-frequency system. Additionally, a wide antenna beamwidth allows for operation at a lower center frequency, resulting in less scattering and attenuation from the clothing. The depth resolution of the system can be improved by increasing the bandwidth. Utilization of extremely wide bandwidths of up to 30 GHz can result in depth resolution as fine as 5 mm. This wider-bandwidth operation may allow for improved detection techniques based on high range resolution. In this paper, the results of an extensive imaging study that explored the advantages of using extremely wide beamwidths and bandwidths are presented, primarily for the 10-40 GHz frequency band.
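    The quoted 5 mm figure follows directly from the standard pulse-compression range-resolution relation, ΔR = c / (2B):

```python
# Range (depth) resolution of a swept-frequency radar: delta_R = c / (2 * B).
c = 2.998e8                      # speed of light, m/s
B = 30e9                         # 30 GHz RF bandwidth
range_resolution = c / (2 * B)   # ~5.0e-3 m, i.e. ~5 mm
```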

  1. Enhancing Deep-Water Low-Resolution Gridded Bathymetry Using Single Image Super-Resolution

    NASA Astrophysics Data System (ADS)

    Elmore, P. A.; Nock, K.; Bonanno, D.; Smith, L.; Ferrini, V. L.; Petry, F. E.

    2017-12-01

    We present research employing single-image super-resolution (SISR) algorithms to enhance knowledge of the seafloor from the 1-minute GEBCO 2014 grid when 100-m grids from high-resolution sonar systems are available for training. Our numerical experiments performed ×15 upscaling of the GEBCO grid in three areas of the Eastern Pacific Ocean along mid-ocean ridge systems where we have 100-m gridded bathymetry data sets, which we accept as ground truth. We show that four SISR algorithms can enhance this low-resolution knowledge of bathymetry relative to bicubic or spline-in-tension interpolation under these conditions: 1) rough topography is present in both the training and testing areas, and 2) the range of depths and features in the training area contains the range of depths in the enhancement area. We judged SISR enhancement successful versus bicubic interpolation when hypothesis testing (Student's t-test) showed significant improvement of the root-mean-square error (RMSE) between the upscaled bathymetry and the 100-m gridded ground-truth bathymetry at p < 0.05. In addition, we found evidence that random-forest-based SISR methods may provide more robust enhancements than non-forest-based SISR algorithms.
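    The evaluation criterion here, comparing upscaled grids to the 100-m ground truth by RMSE, is straightforward to reproduce. A minimal sketch with synthetic stand-in grids (no real bathymetry; the depth statistics and noise levels are invented):

```python
import numpy as np

def rmse(predicted, truth):
    """Root-mean-square error between an upscaled grid and ground truth."""
    diff = np.asarray(predicted, dtype=float) - np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Synthetic "ground truth" depths and two competing reconstructions:
# a bicubic-like result with larger residuals and an SISR-like result
# with smaller residuals (made-up noise levels).
rng = np.random.default_rng(1)
truth = -3000 + 200 * rng.standard_normal((50, 50))       # metres
bicubic_like = truth + 30 * rng.standard_normal((50, 50))
sisr_like = truth + 10 * rng.standard_normal((50, 50))
```

    In the paper this per-grid RMSE feeds the t-test that decides whether the SISR result significantly beats bicubic interpolation at p < 0.05.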

  2. Tactile Imaging of an Imbedded Palpable Structure for Breast Cancer Screening

    PubMed Central

    2015-01-01

    Apart from texture, the human finger can sense palpation. Detecting an imbedded structure is a fine balance between the relative stiffness of the matrix, the object, and the device: if the device is too soft, its high responsiveness will limit the depth to which the imbedded structure can be detected. The sensation of palpation is an effective means for a physician to examine irregularities. In a clinical breast examination (CBE), by pressing over a 1 cm² area at a contact pressure in the 70-90 kPa range, the physician feels cancerous lumps that are 8- to 18-fold stiffer than the surrounding tissue. Early detection of a lump in the 5-10 mm range leads to an excellent prognosis. We describe a thin-film tactile device that emulates human touch to quantify CBE by imaging the size and shape of 5-10 mm objects at 20 mm depth in a breast model using ∼80 kPa pressure. The linear response of the device allows quantification, where the greyscale corresponds to the relative local stiffness. The (background) signal from <2.5-fold stiffer objects below 2 mm in size is minimal. PMID:25148477

  3. Three-photon tissue imaging using moxifloxacin.

    PubMed

    Lee, Seunghun; Lee, Jun Ho; Wang, Taejun; Jang, Won Hyuk; Yoon, Yeoreum; Kim, Bumju; Jun, Yong Woong; Kim, Myoung Joon; Kim, Ki Hean

    2018-06-20

    Moxifloxacin is an antibiotic used in clinics and has recently been used as a clinically compatible cell-labeling agent for two-photon (2P) imaging. Although 2P imaging with moxifloxacin labeling visualized cells inside tissues using enhanced fluorescence, the imaging depth was quite limited because of the relatively short excitation wavelength (<800 nm) used. In this study, the feasibility of three-photon (3P) excitation of moxifloxacin using a longer excitation wavelength and moxifloxacin-based 3P imaging were tested to increase the imaging depth. Moxifloxacin fluorescence via 3P excitation was detected at a >1000 nm excitation wavelength. After obtaining the excitation and emission spectra of moxifloxacin, moxifloxacin-based 3P imaging was applied to ex vivo mouse bladder and ex vivo mouse small intestine tissues and compared with moxifloxacin-based 2P imaging by switching the excitation wavelength of a Ti:sapphire oscillator between near 1030 and 780 nm. Both moxifloxacin-based 2P and 3P imaging visualized cellular structures in the tissues via moxifloxacin labeling, but the image contrast was better with 3P imaging than with 2P imaging at the same imaging depths. The imaging speed and imaging depth of moxifloxacin-based 3P imaging using a Ti:sapphire oscillator were limited by insufficient excitation power. Therefore, we constructed a new system for moxifloxacin-based 3P imaging using a high-energy Yb fiber laser at 1030 nm and used it for in vivo deep tissue imaging of a mouse small intestine. Moxifloxacin-based 3P imaging could be useful for clinical applications with enhanced imaging depth.

  4. Image translation for single-shot focal tomography

    DOE PAGES

    Llull, Patrick; Yuan, Xin; Carin, Lawrence; ...

    2015-01-01

    Focus and depth of field are conventionally addressed by adjusting longitudinal lens position. More recently, combinations of deliberate blur and computational processing have been used to extend depth of field. Here we show that dynamic control of transverse and longitudinal lens position can be used to decode focus and extend depth of field without degrading static resolution. Our results suggest that optical image stabilization systems may be used for autofocus, extended depth of field, and 3D imaging.

  5. Multispectral near-infrared reflectance and transillumination imaging of occlusal carious lesions: variations in lesion contrast with lesion depth

    NASA Astrophysics Data System (ADS)

    Simon, Jacob C.; Curtis, Donald A.; Darling, Cynthia L.; Fried, Daniel

    2018-02-01

    In vivo and in vitro studies have demonstrated that near-infrared (NIR) light at λ=1300-1700-nm can be used to acquire high contrast images of enamel demineralization without interference of stains. The objective of this study was to determine if a relationship exists between the NIR image contrast of occlusal lesions and the depth of the lesion. Extracted teeth with varying amounts of natural occlusal decay were measured using a multispectral-multimodal NIR imaging system which captures λ=1300-nm occlusal transillumination and λ=1500-1700-nm cross-polarized reflectance images. Image analysis software was used to calculate the lesion contrast detected in both images from matched positions of each imaging modality. Samples were serially sectioned across the lesion with a precision saw, and polarized light microscopy was used to measure the respective lesion depth relative to the dentinoenamel junction. Lesion contrast measured from NIR cross-polarized reflectance images positively correlated (p<0.05) with increasing lesion depth, and a statistically significant difference between inner enamel and dentin lesions was observed. The lateral width of pit and fissure lesions measured in both NIR cross-polarized reflectance and NIR transillumination positively correlated with lesion depth.

  6. Spectrally-Based Bathymetric Mapping of a Dynamic, Sand-Bedded Channel: Niobrara River, Nebraska, USA

    NASA Astrophysics Data System (ADS)

    Dilbone, Elizabeth K.

    Methods for spectrally-based bathymetric mapping of rivers have mainly been developed and tested on clear-flowing, gravel-bedded channels, with limited application to turbid, sand-bedded rivers. Using hyperspectral images of the Niobrara River, Nebraska, and field-surveyed depth data, this study evaluated three methods of retrieving depth from remotely sensed data in a dynamic, sand-bedded channel. The first, regression-based approach paired in situ depth measurements and image pixel values to predict depth via Optimal Band Ratio Analysis (OBRA). The second approach used ground-based reflectance measurements to calibrate an OBRA relationship. For this approach, CASI images were atmospherically corrected to units of apparent surface reflectance using an empirical line calibration. For the final technique, we used Image-to-Depth Quantile Transformation (IDQT) to predict depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image-derived variable. OBRA yielded the lowest overall depth retrieval error (0.0047 m) and the highest observed versus predicted R2 (0.81). Although misalignment between field and image data was not problematic to OBRA's performance in this study, such issues present potential limitations to standard regression-based approaches like OBRA in dynamic, sand-bedded rivers. Field spectroscopy-based maps exhibited a slight shallow bias (0.0652 m) but provided reliable depth estimates for most of the study reach. IDQT had a strong deep bias, but still provided informative relative depth maps that portrayed general patterns of shallow and deep areas of the channel. The over-prediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the CDF of depth. While each of the techniques tested in this study demonstrated the potential to provide accurate depth estimates in sand-bedded rivers, each method was also subject to certain constraints and limitations.
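    The IDQT step described above links the CDF of an image-derived variable to the CDF of field-surveyed depth. A minimal quantile-matching sketch (the function name and data are illustrative assumptions, not the authors' code):

```python
import numpy as np

def idqt(image_var, depth_sample):
    """Image-to-Depth Quantile Transformation (sketch).

    Each pixel's image-derived value is replaced by the depth whose quantile
    in the depth CDF matches the pixel's quantile in the image-variable CDF.
    """
    x = np.asarray(image_var, dtype=float)
    d = np.sort(np.asarray(depth_sample, dtype=float))
    # Quantile of every pixel within the image-variable CDF (mid-ranks).
    ranks = np.argsort(np.argsort(x.ravel()))
    q = (ranks + 0.5) / ranks.size
    # Look up the matching quantile of the depth distribution.
    return np.quantile(d, q).reshape(x.shape)
```

Because the mapping is purely rank-based, it preserves the spatial pattern of relative depth even when the absolute calibration is biased, which is consistent with the deep bias reported above: if the depth sample over-represents deep water, every quantile is pulled deeper.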

  7. Soil and surface temperatures at the Viking landing sites

    NASA Technical Reports Server (NTRS)

    Kieffer, H. H.

    1976-01-01

    The annual temperature range for the Martian surface at the Viking lander sites is computed on the basis of thermal parameters derived from observations made with the infrared thermal mappers. The Viking lander 1 (VL1) site has small annual variations in temperature, whereas the Viking lander 2 (VL2) site has large annual changes. With the Viking lander images used to estimate the rock component of the thermal emission, the daily temperature behavior of the soil alone is computed over the range of depths accessible to the lander; when the VL1 and VL2 sites were sampled, the daily temperature ranges at the top of the soil were 183 to 263 K and 183 to 268 K, respectively. The diurnal variation decreases with depth with an exponential scale of about 5 centimeters. The maximum temperature of the soil sampled from beneath rocks at the VL2 site is calculated to be 230 K. These temperature calculations should provide a reference for study of the active chemistry reported for the Martian soil.
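    The exponential decay of the diurnal variation with depth follows the classic conduction solution for a sinusoidal surface forcing. In the sketch below, only the ~5 cm e-folding scale comes from the abstract; the mean temperature and forcing amplitude are illustrative values chosen to roughly match the quoted surface range, not values from the paper:

```python
import numpy as np

DELTA = 0.05    # damping (e-folding) depth from the abstract, metres
SOL = 88775.0   # length of a Martian solar day, seconds

def soil_temperature(z, t, t_mean=223.0, amplitude=40.0):
    """Damped thermal wave: temperature at depth z (m) and time t (s)
    for a sinusoidal surface forcing (illustrative parameters)."""
    omega = 2.0 * np.pi / SOL
    return t_mean + amplitude * np.exp(-z / DELTA) * np.sin(omega * t - z / DELTA)

def diurnal_range(z):
    """Peak-to-peak daily temperature range at depth z, damped by exp(-z/DELTA)."""
    t = np.linspace(0.0, SOL, 2000)
    temps = soil_temperature(z, t)
    return float(temps.max() - temps.min())
```

One damping depth below the surface, the daily range falls by a factor of e, so at ~15 cm (three e-folding scales) the diurnal wave is nearly gone.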

  8. Soil and surface temperatures at the viking landing sites.

    PubMed

    Kieffer, H H

    1976-12-11

    The annual temperature range for the martian surface at the Viking lander sites is computed on the basis of thermal parameters derived from observations made with the infrared thermal mappers. The Viking lander 1 (VL1) site has small annual variations in temperature, whereas the Viking lander 2 (VL2) site has large annual changes. With the Viking lander images used to estimate the rock component of the thermal emission, the daily temperature behavior of the soil alone is computed over the range of depths accessible to the lander; when the VL1 and VL2 sites were sampled, the daily temperature ranges at the top of the soil were 183 to 263 K and 183 to 268 K, respectively. The diurnal variation decreases with depth with an exponential scale of about 5 centimeters. The maximum temperature of the soil sampled from beneath rocks at the VL2 site is calculated to be 230 K. These temperature calculations should provide a reference for study of the active chemistry reported for the martian soil.

  9. The history of imaging in obstetrics.

    PubMed

    Benson, Carol B; Doubilet, Peter M

    2014-11-01

    During the past century, imaging of the pregnant patient has been performed with radiography, scintigraphy, computed tomography, magnetic resonance imaging, and ultrasonography (US). US imaging has emerged as the primary imaging modality, because it provides real-time images at relatively low cost without the use of ionizing radiation. This review begins with a discussion of the history and current status of imaging modalities other than US for the pregnant patient. The discussion then turns to an in-depth description of how US technology advanced to become such a valuable diagnostic tool in the obstetric patient. Finally, the broad range of diagnostic uses of US in these patients is presented, including its uses for distinguishing an intrauterine pregnancy from a failed or ectopic pregnancy in the first trimester; assigning gestational age and assessing fetal weight; evaluating the fetus for anomalies and aneuploidy; examining the uterus, cervix, placenta, and amniotic fluid; and guiding obstetric interventional procedures.

  10. Modeling of Composite Scenes Using Wires, Plates and Dielectric Parallelized (WIPL-DP)

    DTIC Science & Technology

    2006-06-01

    …transmitter platform for use in image formation and solves the data communications problem. The ability to perform subsurface imaging to depths of 200' has already been demonstrated by Brown in [3] and presented in Figure 3 above.

  11. Evaluating the potential for remote bathymetric mapping of a turbid, sand-bed river: 2. Application to hyperspectral image data from the Platte River

    USGS Publications Warehouse

    Legleiter, C.J.; Kinzel, P.J.; Overstreet, B.T.

    2011-01-01

    This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes. © 2011 by the American Geophysical Union.
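    The OBRA calibration described above regresses depth against the natural log of a band ratio and keeps the best-performing band pair. A minimal brute-force sketch (the function name and the synthetic spectra are our own assumptions, not the study's implementation):

```python
import numpy as np

def obra(reflectance, depth):
    """Optimal Band Ratio Analysis (sketch).

    reflectance: (n_points, n_bands) spectra at surveyed locations
    depth:       (n_points,) field-measured depths
    Returns (i, j, slope, intercept, r2) for the best ln-band-ratio predictor.
    """
    R = np.asarray(reflectance, dtype=float)
    d = np.asarray(depth, dtype=float)
    best = None
    for i in range(R.shape[1]):
        for j in range(R.shape[1]):
            if i == j:
                continue
            x = np.log(R[:, i] / R[:, j])          # spectrally based quantity
            slope, intercept = np.polyfit(x, d, 1)  # linear depth calibration
            pred = slope * x + intercept
            r2 = 1.0 - np.sum((d - pred) ** 2) / np.sum((d - d.mean()) ** 2)
            if best is None or r2 > best[4]:
                best = (i, j, slope, intercept, r2)
    return best
```

The same routine covers both calibration styles in the abstract: fed image spectra paired with survey depths it gives the image-based relation, and fed ground-based reflectance it gives the field-spectroscopy relation applied afterwards to the image.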

  12. Evaluating the potential for remote bathymetric mapping of a turbid, sand-bed river: 2. application to hyperspectral image data from the Platte River

    USGS Publications Warehouse

    Legleiter, Carl J.; Kinzel, Paul J.; Overstreet, Brandon T.

    2011-01-01

    This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes.

  13. Long axial imaging range using conventional swept source lasers in optical coherence tomography via re-circulation loops

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Jackson, David A.; Podoleanu, Adrian

    2018-03-01

    Typically, swept-source optical coherence tomography (SS-OCT) imaging instruments are capable of a longer axial range than their camera-based (CB) counterparts. However, various applications would still benefit from an extended axial range. In this paper, we propose an interferometer configuration that can extend the axial range of OCT instruments equipped with conventional swept-source lasers up to a few centimeters. In this configuration, the two arms of the interferometer are equipped with adjustable optical-path-length rings. The use of semiconductor optical amplifiers in the two rings compensates for optical losses; hence, depth reflectivity profiles (A-scans) from multiple passes can be combined axially, making extremely long overall axial ranges possible. The recirculation loops produce an effect equivalent to extending the coherence length of the swept-source laser. Using this approach, the achievable axial imaging range in SS-OCT can reach values well beyond the limit imposed by the coherence length of the laser, in principle exceeding many centimeters. In the present work, we demonstrate axial ranges exceeding 4 cm using a commercial swept-source laser and reaching 6 cm using an "in-house" swept-source laser. When used alone in a conventional set-up, both lasers provide less than a few mm of axial range.
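    Conceptually, each extra round trip through the rings offsets the measured A-scan window by one loop length, so profiles from successive passes can be concatenated into one long axial profile. A simplified sketch under the assumption of equal, contiguous depth windows (the function name and values are illustrative, not the authors' processing chain):

```python
import numpy as np

def stitch_ascans(ascans, window_mm):
    """Concatenate depth-reflectivity profiles from successive loop passes.

    ascans: list of 1-D arrays; pass k is assumed to cover the depth
            window [k * window_mm, (k + 1) * window_mm).
    Returns (depth_axis_mm, combined_profile).
    """
    combined = np.concatenate([np.asarray(a, dtype=float) for a in ascans])
    step = window_mm * len(ascans) / combined.size  # mm per sample
    depth = np.arange(combined.size) * step
    return depth, combined
```

With three passes over a few-mm coherence-limited window, the combined axial range is roughly tripled, which is the effect the loops exploit to reach the multi-centimeter ranges reported above.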

  14. Extended depth of focus tethered capsule OCT endomicroscopy for upper gastrointestinal tract imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Vuong, Barry; Yin, Biwei; Beaulieu-Ouellet, Emilie; Liang, Chia Pin; Beatty, Matthew; Singh, Kanwarpal; Dong, Jing; Grant, Catriona N.; Rosenberg, Mireille; Tearney, Guillermo J.

    2017-02-01

    Endoscopy, the current standard of care for the diagnosis of upper gastrointestinal (GI) diseases, is not ideal as a screening tool because it is costly, necessitates a team of medically trained personnel, and typically requires that the patient be sedated. Endoscopy is also a superficial macroscopic imaging modality and therefore is unable to provide detailed information on subsurface microscopic structure that is required to render a precise tissue diagnosis. We have overcome these limitations through the development of an optical coherence tomography tethered capsule endomicroscopy (OCT-TCE) imaging device. The OCT-TCE device has a pill-like form factor with an optically clear wall to allow the contained opto-mechanical components to scan the OCT beam along the circumference of the esophagus. Once swallowed, the OCT-TCE device traverses the esophagus naturally via peristalsis, and multiple cross-sectional OCT images are obtained at 30-40 μm lateral resolution by 7 μm axial resolution. While this spatial resolution enables differentiation of squamous vs columnar mucosa, crucial microstructural features such as goblet cells (∼10 μm), which signify intestinal metaplasia in Barrett's esophagus (BE), and enlarged nuclei that are indicative of dysplasia cannot be resolved with the current OCT-TCE technology. In this work we demonstrate a novel design of a high lateral resolution OCT-TCE device with an extended depth of focus (EDOF). The EDOF is created by use of self-imaging wavefront division multiplexing that produces multiple focused modes at different depths into the sample. The overall size of the EDOF TCE is similar to that of the previous OCT-TCE device (∼11 mm by 26 mm) but with a lateral resolution of 8 μm over a depth range of 2 mm. Preliminary esophageal and intestinal imaging using these EDOF optics demonstrates an improvement in the ability to resolve tissue morphology, including individual glands and cells. These results suggest that the use of EDOF optics may be a promising avenue for increasing the accuracy of OCT-TCE for the diagnosis of upper GI diseases.

  15. Feasibility study of proton-based quality assurance of proton range compensator

    NASA Astrophysics Data System (ADS)

    Park, S.; Jeong, C.; Min, B. J.; Kwak, J.; Lee, J.; Cho, S.; Shin, D.; Lim, Y. K.; Park, S. Y.; Lee, S. B.

    2013-06-01

    All patient-specific range compensators (RCs) are customized to achieve distal dose conformity of the target volume in passively scattered proton therapy. Compensators are milled precisely using a computerized machine. In proton therapy, precision of the compensator is critical, and quality assurance (QA) is required to protect normal tissues and organs from radiation damage. This study aims to evaluate the precision of proton-based quality assurance of the range compensator. First, the geometry information of two compensators was extracted from the DICOM Radiotherapy (RT) plan. Next, the RCs were individually irradiated onto EBT film by a proton beam modulated to have a photon-like percent depth dose (PDD). Step phantoms were also irradiated onto EBT film to generate a calibration curve relating the optical density of the irradiated film to the perpendicular depth of the compensator. Comparisons were made using the mean absolute difference (MAD) between the coordinate information from the DICOM RT plan and the depth information converted from the EBT film. MAD over the whole region was 1.7 and 2.0 mm for the two compensators. However, MAD over the relatively flat regions selected for comparison on each compensator was within 1 mm. These results show that proton-based quality assurance of the range compensator is feasible, and a whole-region MAD of less than 1 mm is expected to be achievable with further correction for the scattering effect in proton imaging.
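    The film-to-depth conversion and the MAD comparison can be sketched as follows, assuming a monotonic step-phantom calibration curve (the function names and calibration values are illustrative, not from the study):

```python
import numpy as np

def density_to_depth(od, cal_od, cal_depth):
    """Convert measured film optical density to compensator depth via the
    step-phantom calibration curve (monotonic linear interpolation)."""
    return np.interp(od, cal_od, cal_depth)

def mean_absolute_difference(planned_mm, measured_mm):
    """MAD between DICOM-RT planned depths and film-derived depths."""
    return float(np.mean(np.abs(np.asarray(planned_mm) - np.asarray(measured_mm))))
```

Applying `density_to_depth` pixel-by-pixel to the film image yields a measured depth map that `mean_absolute_difference` then compares against the planned compensator geometry, matching the QA criterion described above.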

  16. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †

    PubMed Central

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-01

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434

  17. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle.

    PubMed

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-10

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy.

  18. Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor

    NASA Astrophysics Data System (ADS)

    Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.

    2018-04-01

    An RGB-D camera captures depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera. Then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, as the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, using the registration parameters from ICP, the 3D scene from the RGB images is registered to the 3D scene from the depth images, and the fused point cloud is obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrated the feasibility of the proposed method.
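    The projection relationship established in the first step is, at its core, the standard pinhole back-projection of a depth map to camera-frame 3D points. A minimal sketch (the function name is our own; the authors' full model also handles the depth-to-color extrinsics):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map to 3-D camera-frame points (pinhole model).

    depth: (H, W) range along the optical axis, in metres.
    fx, fy, cx, cy: camera intrinsics in pixels.
    Returns an (H, W, 3) array of (X, Y, Z) points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = np.asarray(depth, dtype=float)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```

Each 3D point can then be textured by projecting it into the registered RGB frame, which is the link the extended bundle adjustment exploits when mixing 2D and 3D measurements.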

  19. Approach for scene reconstruction from the analysis of a triplet of still images

    NASA Astrophysics Data System (ADS)

    Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle

    1997-03-01

    Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a major challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual visits, 3D teleconferencing, and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built by using a fusion criterion taking into account depth coherency, visibility constraints, and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, an edge detection stage segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labeled with the depth class numbers by using a coherence test on depth values according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.

  20. Walker Ranch 3D seismic images

    DOE Data Explorer

    Robert J. Mellors

    2016-03-01

    Amplitude images (both vertical and depth slices) extracted from a 3D seismic reflection survey over the Walker Ranch area (adjacent to Raft River). Crossline spacing of 660 feet and inline spacing of 165 feet, using a Vibroseis source. Processing included depth migration. Micro-earthquake hypocenters are marked on the images. Stratigraphic information and nearby well tracks are added to the images. Images are embedded in a Microsoft Word document with additional information. Exact location and depth are restricted for proprietary reasons. Data collection and processing were funded by Agua Caliente. The original data remain the property of Agua Caliente.

  1. Infrared imaging of subcutaneous veins.

    PubMed

    Zharov, Vladimir P; Ferguson, Scott; Eidt, John F; Howard, Paul C; Fink, Louis M; Waner, Milton

    2004-01-01

    Imaging of subcutaneous veins is important in many applications, such as gaining venous access and vascular surgery. Despite a long history of medical infrared (IR) photography and imaging, this technique is not widely used for this purpose. Here we revisited and explored the capability of near-IR imaging to visualize subcutaneous structures, with a focus on diagnostics of superficial veins. An IR device comprising a head-mounted IR LED array (880 nm), a small conventional CCD camera (Toshiba Ik-mui, Tokyo, Japan), virtual-reality optics, polarizers, filters, and diffusers was used in vivo to obtain images of different subcutaneous structures. The same device was used to estimate the IR image quality as a function of wavelength produced by a tunable xenon lamp-based monochromator in the range of 500-1,000 nm and continuous-wave Nd:YAG (1.06 microm) and diode (805 nm) lasers. The various modes of optical illumination were compared in vivo. Contrast of the IR images in the reflectance mode was measured in the near-IR spectral range of 650-1,060 nm. Using the LED array, various IR images were obtained in vivo, including images of vein structure in a pigmented, fatty forearm, varicose leg veins, and vascular lesions of the tongue. Imaging in the near-IR range (880-930 nm) provides relatively good contrast of subcutaneous veins, underscoring its value for diagnosis. This technique has the potential for the diagnosis of varicose veins with a diameter of 0.5-2 mm at a depth of 1-3 mm, guidance of venous access, podiatry, phlebotomy, injection sclerotherapy, and control of laser interstitial therapy. Copyright 2004 Wiley-Liss, Inc.

  2. Comparison of the depth of an optic nerve head obtained using stereo retinal images and HRT

    NASA Astrophysics Data System (ADS)

    Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Aoyama, Akira; Hara, Takeshi; Kakogawa, Masakatsu; Fujita, Hiroshi; Yamamoto, Tetsuya

    2007-03-01

    The analysis of the optic nerve head (ONH) in the retinal fundus is important for the early detection of glaucoma. In this study, we investigate an automatic reconstruction method for producing the 3-D structure of the ONH from a stereo retinal image pair; the depth value of the ONH measured by using this method was compared with the measurement results determined from the Heidelberg Retina Tomograph (HRT). We propose a technique to obtain the depth value from the stereo image pair, which mainly consists of four steps: (1) cutout of the ONH region from the retinal images, (2) registration of the stereo pair, (3) disparity detection, and (4) depth calculation. In order to evaluate the accuracy of this technique, an eyeball phantom with a circular dent was used to model the ONH; the shape of its depression, as reconstructed from the stereo image pair, was compared with physically measured values. The measurement results obtained with the eyeball phantom were approximately consistent with the physical measurements. The depth of the ONH obtained using the stereo retinal images was in accordance with the results obtained using the HRT. These results indicate that stereo retinal images could be useful for assessing the depth of the ONH for the diagnosis of glaucoma.
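    Step (4), depth calculation, reduces for a rectified stereo pair to the standard triangulation relation Z = fB/d. A minimal sketch (the function name is illustrative, and the paper's actual calibration for fundus cameras may differ):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Depth from stereo disparity for a rectified pinhole pair: Z = f * B / d.

    disparity_px: horizontal disparity between matched points, in pixels
    focal_px:     focal length expressed in pixels
    baseline_mm:  distance between the two camera centres
    Returns depth in the same units as the baseline.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the pair")
    return focal_px * baseline_mm / disparity_px
```

Because depth is inversely proportional to disparity, small disparity errors at the shallow rim of the cup translate into larger depth errors than the same pixel error at the deep centre, which is one reason registration quality in step (2) matters.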

  3. Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0.

    PubMed

    Jin, Xin; Liu, Li; Chen, Yanqin; Dai, Qionghai

    2017-05-01

    This paper derives a mathematical point spread function (PSF) and a depth-invariant focal sweep point spread function (FSPSF) for plenoptic camera 2.0. Derivation of PSF is based on the Fresnel diffraction equation and image formation analysis of a self-built imaging system which is divided into two sub-systems to reflect the relay imaging properties of plenoptic camera 2.0. The variations in PSF, which are caused by changes of object's depth and sensor position variation, are analyzed. A mathematical model of FSPSF is further derived, which is verified to be depth-invariant. Experiments on the real imaging systems demonstrate the consistency between the proposed PSF and the actual imaging results.

  4. Depth extraction method with high accuracy in integral imaging based on moving array lenslet technique

    NASA Astrophysics Data System (ADS)

    Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing

    2018-03-01

    In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within a pitch to obtain N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum modulus difference (SMD) blur metric is applied to these slice images to extract the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
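    The depth-extraction step above reduces to scoring each reconstructed slice with a focus measure and picking the sharpest one. A sketch using a simple sum-modulus-difference measure (our own formulation; the paper's exact metric may differ):

```python
import numpy as np

def smd(img):
    """Sum-modulus-difference focus measure: sums absolute differences of
    neighbouring pixels, so it is large when the slice is sharply focused."""
    img = np.asarray(img, dtype=float)
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return float(dx + dy)

def extract_depth(slices, depths):
    """Pick, from the reconstructed slice images, the depth whose slice
    maximises the focus measure."""
    scores = [smd(s) for s in slices]
    return depths[int(np.argmax(scores))]
```

Refining the slice spacing (here, via the N sub-pitch lenslet positions of MALT) directly refines the depth resolution of this argmax search.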

  5. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  6. Matrix Approach of Seismic Wave Imaging: Application to Erebus Volcano

    NASA Astrophysics Data System (ADS)

    Blondel, T.; Chaput, J.; Derode, A.; Campillo, M.; Aubry, A.

    2017-12-01

    This work aims at extending to seismic imaging a matrix approach of wave propagation in heterogeneous media, previously developed in acoustics and optics. More specifically, we will apply this approach to the imaging of the Erebus volcano in Antarctica. Volcanoes are among the most challenging media to explore seismically, in light of highly localized and abrupt variations in density and wave velocity, extreme topography, extensive fracturing, and the presence of magma. In this strongly scattering regime, conventional imaging methods suffer from the multiple scattering of waves. Our approach relies experimentally on the measurement of a reflection matrix associated with an array of geophones located at the surface of the volcano. Although these sensors are purely passive, a set of Green's functions can be measured between all pairs of geophones from ice-quake coda cross-correlations (1-10 Hz), and this set forms the reflection matrix. A set of matrix operations can then be applied for imaging purposes. First, the reflection matrix is projected, at each time of flight, into the ballistic focal plane by applying adaptive focusing at emission and reception. This yields a response matrix associated with an array of virtual geophones located at the ballistic depth. This basis allows us to remove most of the multiple-scattering contribution by applying a confocal filter to the seismic data. Iterative time reversal is then applied to detect and image the strongest scatterers. Mathematically, it consists of performing a singular value decomposition of the reflection matrix. The presence of a potential target is assessed from a statistical analysis of the singular values, while the associated eigenvectors yield the target images. When stacked, the results obtained at each depth give a three-dimensional image of the volcano. While conventional imaging methods lead to a speckle image with no connection to the actual medium's reflectivity, our method highlights a chimney-shaped structure inside Erebus volcano, with true-positive rates ranging from 80% to 95%. Although computed independently, the results at each depth are spatially consistent, substantiating their physical reliability. The identified structure is therefore likely to describe accurately the internal structure of the Erebus volcano.
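The SVD-based detection step described in this record can be sketched numerically. The toy reflection matrix below (random speckle plus one rank-one scatterer, with made-up dimensions and amplitudes) is an assumption for illustration, not the Erebus data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64  # virtual geophones at one ballistic depth

# Toy response matrix: diffuse multiple-scattering speckle plus one strong
# scatterer, modeled as a rank-one component u v^T (amplitude made up).
speckle = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2 * n)
u = np.exp(2j * np.pi * rng.random(n)) / np.sqrt(n)  # target Green's vectors
v = np.exp(2j * np.pi * rng.random(n)) / np.sqrt(n)
R = speckle + 5.0 * np.outer(u, v)

# Iterative time reversal <-> SVD: a singular value standing out from the
# random-matrix bulk flags a target; its singular vectors give the image.
s = np.linalg.svd(R, compute_uv=False)
bulk = np.median(s)
detected = s[0] > 3.0 * bulk
print(round(float(s[0]), 2), round(float(bulk), 2), detected)
```

The statistical test on the singular-value spectrum is what separates a genuine scatterer from the speckle bulk; the associated left/right singular vectors then play the role of the back-propagated target image.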

  7. EPR Imaging at a Few Megahertz Using SQUID Detectors

    NASA Technical Reports Server (NTRS)

    Hahn, Inseob; Day, Peter; Penanen, Konstantin; Eom, Byeong Ho

    2010-01-01

    An apparatus being developed for electron paramagnetic resonance (EPR) imaging operates in the resonance-frequency range of about 1 to 2 MHz, well below the microwave frequencies used in conventional EPR. Until now, in order to obtain sufficient signal-to-noise ratios (SNRs) in conventional EPR, it has been necessary to place both detectors and objects to be imaged inside resonant microwave cavities. EPR imaging has much in common with magnetic resonance imaging (MRI), which is described briefly in the immediately preceding article. In EPR imaging, as in MRI, one applies a magnetic pulse to make magnetic moments (in this case, of electrons) precess in an applied magnetic field having a known gradient. The magnetic moments precess at a resonance frequency proportional to the strength of the local magnetic field. One detects the decaying resonance-frequency magnetic-field component associated with the precession. Position is encoded by use of the known relationship between the resonance frequency and the position dependence of the magnetic field. EPR imaging has recently been recognized as an important tool for non-invasive, in vivo imaging of free radicals and reduction/oxidation metabolism. However, for in vivo EPR imaging of humans and large animals, the conventional approach is not suitable because (1) it is difficult to design and construct resonant cavities large enough and having the required shapes; (2) motion, including respiration and heartbeat, can alter the resonance frequency; and (3) most microwave energy is absorbed in the first few centimeters of tissue depth, thereby potentially endangering the subject and making it impossible to obtain adequate signal strength for imaging at greater depth. To obtain greater penetration depth, prevent injury to the subject, and avoid the difficulties associated with resonant cavities, it is necessary to use lower resonance frequencies. An additional advantage of using lower resonance frequencies is that one can use weaker applied magnetic fields: for example, for a resonance frequency of 1.4 MHz, one needs a magnetic flux density of only 0.5 gauss, approximately the flux density of the natural magnetic field of the Earth.
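The quoted numbers follow from the standard free-electron Larmor relation; a quick check using the free-electron gyromagnetic ratio of about 2.8 MHz per gauss (a textbook value, not stated in the record) reproduces them:

```python
# Free-electron Larmor relation: f = (gamma_e / 2*pi) * B, with
# gamma_e / 2*pi ~ 2.8025 MHz per gauss (free electron, g ~ 2).
GAMMA_E_MHZ_PER_GAUSS = 2.8025

def epr_frequency_mhz(b_gauss):
    """Resonance frequency (MHz) in a field of b_gauss gauss."""
    return GAMMA_E_MHZ_PER_GAUSS * b_gauss

def field_for_frequency_gauss(f_mhz):
    """Field (gauss) needed for a target resonance frequency (MHz)."""
    return f_mhz / GAMMA_E_MHZ_PER_GAUSS

print(epr_frequency_mhz(0.5))          # ~1.4 MHz in a 0.5 gauss field
print(field_for_frequency_gauss(1.4))  # ~0.5 gauss, as quoted above
```

The linearity of this relation is also what makes the gradient-based position encoding work: a known field gradient maps resonance frequency directly to position.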

  8. Snow Depth Depicted on Mt. Lyell by NASA Airborne Snow Observatory

    NASA Image and Video Library

    2013-05-02

    A natural color image of Mt. Lyell, the highest point in the Tuolumne River Basin (top image), is compared with a three-dimensional color composite image of Mt. Lyell from the NASA Airborne Snow Observatory depicting snow depth (bottom image).

  9. Hierarchical image-based rendering using texture mapping hardware

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Max, N

    1999-01-15

    Multi-layered depth images containing color and normal information for subobjects in a hierarchical scene model are precomputed with standard z-buffer hardware for six orthogonal views. These are adaptively selected according to the proximity of the viewpoint, and combined using hardware texture mapping to create "reprojected" output images for new viewpoints. (If a subobject is too close to the viewpoint, the polygons in the original model are rendered.) Specific z-ranges are selected from the textures with the hardware alpha test to give accurate 3D reprojection. The OpenGL color matrix is used to transform the precomputed normals into their orientations in the final view, for hardware shading.

  10. Use of thermal inertia determined by HCMM to predict nocturnal cold prone areas in Florida

    NASA Technical Reports Server (NTRS)

    Allen, L. H., Jr. (Principal Investigator); Chen, E.; Martsolf, J. D.; Jones, P. H.

    1981-01-01

    The HCMM transparency scenes for the available winter of 1978-1979 were evaluated; scenes were identified on processed magnetic tapes; other remote sensing information was identified; and a soil heat flux model with a variable-depth thermal profile was developed. The Image 100 system was used to compare HCMM and GOES transparent images of surface thermal patterns. Excellent correspondence of patterns was found, with HCMM giving the greater resolution. One image shows details of thermal patterns in Florida that are attributable to differences in near-surface water content. The wide range of surface temperatures attributable to surface thermal inertia that exists in the relatively flat Florida topography is demonstrated.

  11. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.

    PubMed

    Sohn, Bong-Soo

    2017-03-11

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
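The base/detail/blur pipeline described above can be sketched as follows. The weights, the crude box blur, and the blending rule are illustrative assumptions, not the paper's actual operators:

```python
import numpy as np

def box_blur(img, k):
    """Crude smoothing by repeated 5-point neighbor averaging."""
    out = img.astype(float).copy()
    for _ in range(k):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def bas_relief_depth(depth_map, intensity, depth_scale=0.2,
                     detail_weight=0.15, focus_depth=0.5, dof_width=0.3):
    """Sketch of the pipeline: compress the depth range into a base map,
    add an intensity-derived detail map, then blur away from the focal
    plane to mimic depth of field. All weights are illustrative."""
    d = (depth_map - depth_map.min()) / (np.ptp(depth_map) + 1e-12)
    base = d * depth_scale                       # compressed depth range
    detail = detail_weight * (intensity - box_blur(intensity, 2))
    relief = base + detail                       # blended depth map
    blurred = box_blur(relief, 3)                # out-of-focus version
    w = np.clip(np.abs(d - focus_depth) / dof_width, 0.0, 1.0)
    return (1.0 - w) * relief + w * blurred      # selective blur

rng = np.random.default_rng(0)
depth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # simple depth ramp
image = rng.random((32, 32))
relief = bas_relief_depth(depth, image)
print(relief.shape, float(np.ptp(relief)))
```

The final array would then be triangulated into the bas-relief surface mesh; note how the compressed base range keeps the relief shallow while the detail term restores scene texture.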

  12. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones

    PubMed Central

    Sohn, Bong-Soo

    2017-01-01

    This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing. PMID:28287487

  13. True 3D digital holographic tomography for virtual reality applications

    NASA Astrophysics Data System (ADS)

    Downham, A.; Abeywickrema, U.; Banerjee, P. P.

    2017-09-01

    Previously, a single CCD camera has been used to record holograms of an object while the object is rotated about a single axis to reconstruct a pseudo-3D image, which does not show detailed depth information from all perspectives. To generate a true 3D image, the object has to be rotated through multiple angles and along multiple axes. In this work, to reconstruct a true 3D image including depth information, a die is rotated along two orthogonal axes, and holograms are recorded using a Mach-Zehnder setup, which are subsequently numerically reconstructed. This allows for the generation of multiple images containing phase (i.e., depth) information. These images, when combined, create a true 3D image with depth information which can be exported to a Microsoft® HoloLens for true 3D virtual reality.

  14. Fiber bundle endomicroscopy with multi-illumination for three-dimensional reflectance image reconstruction

    NASA Astrophysics Data System (ADS)

    Ando, Yoriko; Sawahata, Hirohito; Kawano, Takeshi; Koida, Kowa; Numano, Rika

    2018-02-01

    Bundled fiber optics allow in vivo imaging at deep sites in a body. The intrinsic optical contrast detects detailed structures in blood vessels and organs. We developed a bundled-fiber-coupled endomicroscope, enabling stereoscopic three-dimensional (3-D) reflectance imaging with a multipositional illumination scheme. Two illumination sites were attached to obtain reflectance images with left and right illumination. Depth was estimated by the horizontal disparity between the two images under alternative illuminations and was calibrated by the targets with known depths. This depth reconstruction was applied to an animal model to obtain the 3-D structure of blood vessels of the cerebral cortex (Cereb cortex) and preputial gland (Pre gla). The 3-D endomicroscope could be instrumental to microlevel reflectance imaging, improving the precision in subjective depth perception, spatial orientation, and identification of anatomical structures.
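The disparity-plus-calibration scheme described in this record (depth estimated from the horizontal disparity between the two illumination images, calibrated against targets of known depth) might be sketched as below; the linear calibration model and all numbers are assumptions:

```python
import numpy as np

def disparity_px(profile_l, profile_r, max_shift=10):
    """Horizontal disparity as the shift best aligning the two profiles."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.sum((profile_l - np.roll(profile_r, -s))**2)
        if err < best_err:
            best, best_err = s, err
    return best

# Calibration against targets of known depth (assumed linear mapping).
known_disp   = np.array([0.0, 2.0, 4.0, 6.0])       # pixels
known_depths = np.array([0.0, 50.0, 100.0, 150.0])  # micrometres
a, b = np.polyfit(known_disp, known_depths, 1)

# Synthetic left/right-illumination profiles of a feature 3 px apart.
x = np.arange(128)
left  = np.exp(-(x - 60)**2 / 20.0)
right = np.exp(-(x - 63)**2 / 20.0)
d = disparity_px(left, right)
print(d, a * d + b)   # 3 px maps to ~75 um under this calibration
```

Calibrating against known-depth targets, as the authors do, absorbs the unknown illumination geometry into the fitted mapping instead of requiring an explicit optical model of the probe tip.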

  15. Axial resolution improvement in spectral domain optical coherence tomography using a depth-adaptive maximum-a-posterior framework

    NASA Astrophysics Data System (ADS)

    Boroomand, Ameneh; Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka

    2015-03-01

    The axial resolution of Spectral Domain Optical Coherence Tomography (SD-OCT) images degrades with scanning depth due to the limited number of pixels and the pixel size of the camera, aberrations in the spectrometer optics, and wavelength-dependent scattering and absorption in the imaged object [1]. Here we propose a novel algorithm which compensates for the blurring effect of the resulting depth-dependent axial Point Spread Function (PSF) in SD-OCT images. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model. The aim is to compensate for the depth-dependent axial blur in SD-OCT images and simultaneously suppress the speckle noise which is inherent to all OCT images. Applying the proposed depth-dependent axial resolution enhancement technique to an OCT image of a cucumber considerably improved the axial resolution of the image, especially at greater imaging depths, and allowed for better visualization of cellular membranes and nuclei. Comparing the result of our proposed method with the conventional Lucy-Richardson deconvolution algorithm clearly demonstrates the efficiency of our proposed technique in better visualization and preservation of fine details and structures in the imaged sample, as well as better speckle noise suppression. This illustrates the potential usefulness of our proposed technique as a replacement for hardware approaches, which are often costly and complicated.
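The Lucy-Richardson baseline used for comparison can be sketched in a few lines. This is the generic Richardson-Lucy algorithm applied to a synthetic 1-D A-scan, not the authors' MAP/SFCRF method:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Classic Richardson-Lucy deconvolution (1-D, 'same'-mode convolution),
    the baseline the authors compare against."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(conv, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode='same')
    return estimate

# Synthetic A-scan: two reflectors blurred by a Gaussian axial PSF.
truth = np.zeros(100); truth[30] = 1.0; truth[60] = 0.6
psf = np.exp(-np.arange(-5, 6)**2 / 4.0); psf /= psf.sum()
blurred = np.convolve(truth, psf, mode='same')
restored = richardson_lucy(blurred, psf)
print(int(np.argmax(restored)))   # main reflector restored near index 30
```

A single fixed PSF is the baseline's key limitation here: since the SD-OCT axial PSF varies with depth, the authors' depth-adaptive MAP framework models that variation instead of assuming one kernel for the whole scan.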

  16. Geodetic imaging: Reservoir monitoring using satellite interferometry

    USGS Publications Warehouse

    Vasco, D.W.; Wicks, C.; Karasaki, K.; Marques, O.

    2002-01-01

    Fluid fluxes within subsurface reservoirs give rise to surface displacements, particularly over periods of a year or more. Observations of such deformation provide a powerful tool for mapping fluid migration within the Earth, providing new insights into reservoir dynamics. In this paper we use Interferometric Synthetic Aperture Radar (InSAR) range changes to infer subsurface fluid volume strain at the Coso geothermal field. Furthermore, we conduct a complete model assessment, using an iterative approach to compute model parameter resolution and covariance matrices. The method is a generalization of a Lanczos-based technique which allows us to include fairly general regularization, such as roughness penalties. We find that we can resolve quite detailed lateral variations in volume strain both within the reservoir depth range (0.4-2.5 km) and below the geothermal production zone (2.5-5.0 km). The fractional volume change in all three layers of the model exceeds the estimated model parameter uncertainty by a factor of two or more. In the reservoir depth interval (0.4-2.5 km), the predominant volume change is associated with northerly and westerly oriented faults and their intersections. However, below the geothermal production zone proper (the depth range 2.5-5.0 km), there is the suggestion that both north- and northeast-trending faults may act as conduits for fluid flow.

  17. Investigation of a Space Delta Technology Facility (SDTF) for Spacelab

    NASA Technical Reports Server (NTRS)

    Welch, J. D.

    1977-01-01

    The Space Data Technology Facility (SDTF) would have the role of supporting a wide range of data technology related demonstrations which might be performed on Spacelab. The SDTF design is incorporated primarily in one single width standardized Spacelab rack. It consists of various display, control and data handling components together with interfaces with the demonstration-specific equipment and Spacelab. To arrive at this design a wide range of data related technologies and potential demonstrations were also investigated. One demonstration concerned with online image rectification and registration was developed in some depth.

  18. Audiomagnetotellurics-Magnetotelluric (AMT-MT) survey of the Campi Flegrei inner caldera

    NASA Astrophysics Data System (ADS)

    Siniscalchi, Agata; Tripaldi, Simona; Romano, Gerardo; D'Auria, Luca; Improta, Luigi; Petrillo, Zaccaria

    2017-04-01

    In the framework of the EU project MED-SUV, an audiomagnetotelluric-magnetotelluric (AMT-MT) survey in the frequency band 0.1-100 kHz was performed along the eastern border of the Campi Flegrei inner caldera, comprising the area where seismicity has been concentrated over the last decade. The survey was aimed at providing new insights into the electrical resistivity structure of the subsoil. Of the forty-three MT soundings collected, twenty-two were selected along a WSW-ENE alignment that crosses the main fumarole emissions (Solfatara, Pisciarelli and Agnano) and used for 2D regularized inversion. The obtained model is characterized by a rather narrow resistivity range that matches well the typical range of enhanced geothermal environments, as largely documented in the international literature. In particular, focusing on the Solfatara and Pisciarelli districts, the resistivity distribution clearly recalls the behavior of a high-temperature geothermal system, with a very conductive cap in its shallower part. Gaps in this conductor just below the main surface emissions delineate the inflow and outflow pathways of the shallow fluid circulation. A highly resistive reservoir appears at a depth of about 500 m b.s.l. Within this region we selected a vertical resistivity profile coinciding with a Vp/Vs-versus-depth profile from passive seismic tomography (Vanorio et al., 2005). The comparison of the two shows a clear anti-correlation between the two physical parameters (high resistivity and low Vp/Vs) in the depth range 500-1000 m, supporting the interpretation that over-pressurized gas-bearing rocks under supercritical conditions constitute the reservoir of the enhanced geothermal system. On the eastern side of this resistive plume, down to 2.5 km depth, a local, relatively conductive unit is present underneath the Pisciarelli area. 
Most of the recent (from 2005 to date) micro-earthquake hypocenters are confined to the same volume, suggesting that geothermal fluid, pushed by the reservoir pressure and mixed with the powerful aquifer (attested in well CF23), propagates there through widespread pores and cracks, triggering microseismicity. The present resistivity model is limited to 3 km depth by the adopted frequency range and thus does not investigate the magma feeding system of the Phlegraean Fields caldera, which seismic imaging suggests to be a large magmatic sill within the basement formations at about 7.5 km depth (Zollo et al., 2008). It does, however, image for the first time, at higher resolution than in the past, the geothermal system underneath the Solfatara-Pisciarelli districts, giving insights into the whole hydro-geothermal circulation.

  19. Automation and uncertainty analysis of a method for in-vivo range verification in particle therapy.

    PubMed

    Frey, K; Unholtz, D; Bauer, J; Debus, J; Min, C H; Bortfeld, T; Paganetti, H; Parodi, K

    2014-10-07

    We introduce the automation of the range difference calculation deduced from particle-irradiation induced β(+)-activity distributions with the so-called most-likely-shift approach, and evaluate its reliability via the monitoring of algorithm- and patient-specific uncertainty factors. The calculation of the range deviation is based on the minimization of the absolute profile differences in the distal part of two activity depth profiles shifted against each other. Depending on the workflow of positron emission tomography (PET)-based range verification, the two profiles under evaluation can correspond to measured and simulated distributions, or only measured data from different treatment sessions. In comparison to previous work, the proposed approach includes an automated identification of the distal region of interest for each pair of PET depth profiles and under consideration of the planned dose distribution, resulting in the optimal shift distance. Moreover, it introduces an estimate of uncertainty associated to the identified shift, which is then used as weighting factor to 'red flag' problematic large range differences. Furthermore, additional patient-specific uncertainty factors are calculated using available computed tomography (CT) data to support the range analysis. The performance of the new method for in-vivo treatment verification in the clinical routine is investigated with in-room PET images for proton therapy as well as with offline PET images for proton and carbon ion therapy. The comparison between measured PET activity distributions and predictions obtained by Monte Carlo simulations or measurements from previous treatment fractions is performed. For this purpose, a total of 15 patient datasets were analyzed, which were acquired at Massachusetts General Hospital and Heidelberg Ion-Beam Therapy Center with in-room PET and offline PET/CT scanners, respectively. 
Calculated range differences between the compared activity distributions are reported in a 2D map in beam's-eye view. In comparison to previously proposed approaches, the new most-likely-shift method shows more robust results for assessing the range in vivo from strongly varying PET distributions caused by differing patient geometries, ion beam species, beam delivery techniques, PET imaging concepts and counting statistics. The additional visualization of the uncertainties and the dedicated weighting strategy contribute to the understanding of the reliability of observed range differences and of the complexity in the prediction of activity distributions. The proposed method promises to offer a feasible technique for the clinical routine of PET-based range verification.
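The core of the most-likely-shift calculation, sliding one activity depth profile against the other and keeping the shift that minimizes the absolute difference over the distal region, can be sketched as below. The sigmoidal profiles, fixed ROI, and absence of the uncertainty weighting are simplifying assumptions:

```python
import numpy as np

def most_likely_shift(profile_a, profile_b, distal, shifts_mm, dz_mm):
    """Shift profile_b against profile_a (integer bins) and keep the shift
    minimizing the mean absolute difference over the distal region of
    interest. Simplified: fixed ROI, no uncertainty weighting."""
    errors = []
    for s in shifts_mm:
        shifted = np.roll(profile_b, int(round(s / dz_mm)))
        errors.append(np.mean(np.abs(profile_a[distal] - shifted[distal])))
    return float(shifts_mm[int(np.argmin(errors))])

# Synthetic activity depth profiles with a sigmoidal distal fall-off;
# the "measured" fall-off lies 3 mm deeper than the "planned" one.
dz = 0.5                                            # bin width, mm
z = np.arange(150) * dz
measured = 1.0 / (1.0 + np.exp((z - 50.0) / 2.0))   # fall-off near 50 mm
planned  = 1.0 / (1.0 + np.exp((z - 47.0) / 2.0))   # fall-off near 47 mm
distal = slice(80, 130)                             # ROI around the fall-off
shifts = np.arange(-10.0, 10.5, 0.5)
shift = most_likely_shift(measured, planned, distal, shifts, dz)
print(shift)   # 3.0 (mm): measured range is 3 mm deeper than planned
```

The automation described in the abstract replaces the fixed `distal` slice with a per-profile-pair region of interest derived from the planned dose, and attaches an uncertainty estimate to the resulting shift.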

  20. Automation and uncertainty analysis of a method for in-vivo range verification in particle therapy

    NASA Astrophysics Data System (ADS)

    Frey, K.; Unholtz, D.; Bauer, J.; Debus, J.; Min, C. H.; Bortfeld, T.; Paganetti, H.; Parodi, K.

    2014-10-01

    We introduce the automation of the range difference calculation deduced from particle-irradiation induced β+-activity distributions with the so-called most-likely-shift approach, and evaluate its reliability via the monitoring of algorithm- and patient-specific uncertainty factors. The calculation of the range deviation is based on the minimization of the absolute profile differences in the distal part of two activity depth profiles shifted against each other. Depending on the workflow of positron emission tomography (PET)-based range verification, the two profiles under evaluation can correspond to measured and simulated distributions, or only measured data from different treatment sessions. In comparison to previous work, the proposed approach includes an automated identification of the distal region of interest for each pair of PET depth profiles and under consideration of the planned dose distribution, resulting in the optimal shift distance. Moreover, it introduces an estimate of uncertainty associated to the identified shift, which is then used as weighting factor to ‘red flag’ problematic large range differences. Furthermore, additional patient-specific uncertainty factors are calculated using available computed tomography (CT) data to support the range analysis. The performance of the new method for in-vivo treatment verification in the clinical routine is investigated with in-room PET images for proton therapy as well as with offline PET images for proton and carbon ion therapy. The comparison between measured PET activity distributions and predictions obtained by Monte Carlo simulations or measurements from previous treatment fractions is performed. For this purpose, a total of 15 patient datasets were analyzed, which were acquired at Massachusetts General Hospital and Heidelberg Ion-Beam Therapy Center with in-room PET and offline PET/CT scanners, respectively. 
Calculated range differences between the compared activity distributions are reported in a 2D map in beam's-eye view. In comparison to previously proposed approaches, the new most-likely-shift method shows more robust results for assessing the range in vivo from strongly varying PET distributions caused by differing patient geometries, ion beam species, beam delivery techniques, PET imaging concepts and counting statistics. The additional visualization of the uncertainties and the dedicated weighting strategy contribute to the understanding of the reliability of observed range differences and of the complexity in the prediction of activity distributions. The proposed method promises to offer a feasible technique for the clinical routine of PET-based range verification.

  1. Magmatic activity beneath the quiescent Three Sisters volcanic center, central Oregon Cascade Range, USA

    USGS Publications Warehouse

    Wicks, Charles W.; Dzurisin, Daniel; Ingebritsen, Steven E.; Thatcher, Wayne R.; Lu, Zhong; Iverson, Justin

    2002-01-01

    Images from satellite interferometric synthetic aperture radar (InSAR) reveal uplift of a broad ~10 km by 20 km area in the Three Sisters volcanic center of the central Oregon Cascade Range, ~130 km south of Mt. St. Helens. The last eruption in the volcanic center occurred ~1500 years ago. Multiple satellite images from 1992 through 2000 indicate that most if not all of ~100 mm of observed uplift occurred between September 1998 and October 2000. Geochemical (water chemistry) anomalies, first noted during 1990, coincide with the area of uplift and suggest the existence of a crustal magma reservoir prior to the uplift. We interpret the uplift as inflation caused by an ongoing episode of magma intrusion at a depth of ~6.5 km.

  2. Revealing sub-μm and μm-scale textures in H2O ice at megabar pressures by time-domain Brillouin scattering

    PubMed Central

    Nikitin, Sergey M.; Chigarev, Nikolay; Tournat, Vincent; Bulou, Alain; Gasteau, Damien; Castagnede, Bernard; Zerr, Andreas; Gusev, Vitalyi E.

    2015-01-01

    The time-domain Brillouin scattering technique, also known as picosecond ultrasonic interferometry, allows monitoring of the propagation of coherent acoustic pulses, having lengths ranging from nanometres to fractions of a micrometre, in samples with dimensions ranging from less than a micrometre to tens of micrometres. In this study, we applied this technique to depth-profiling of a polycrystalline aggregate of ice compressed in a diamond anvil cell to megabar pressures. The method allowed examination of the characteristic dimensions of ice texturing in the direction normal to the diamond anvil surfaces with sub-micrometre spatial resolution via time-resolved measurements of the propagation velocity of the acoustic pulses travelling in the compressed sample. The achieved imaging of ice in depth and in one of the lateral directions indicates the feasibility of three-dimensional imaging and quantitative characterisation of the acoustical, optical and acousto-optical properties of transparent polycrystalline aggregates in a diamond anvil cell, with tens-of-nanometres in-depth resolution and a lateral spatial resolution controlled by the focusing of the pump laser pulses, which could approach hundreds of nanometres. PMID:25790808

  3. A depth-of-interaction PET detector using mutual gain-equalized silicon photomultiplier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    W. Xi, A. G. Weisenberger, H. Dong, Brian Kross, S. Lee, J. McKisson, Carl Zorn

    We developed a prototype high-resolution, high-efficiency depth-encoding detector for PET applications based on dual-ended readout of a LYSO array with two silicon photomultipliers (SiPMs). Flood images, energy resolution, and depth-of-interaction (DOI) resolution were measured for a LYSO array, 0.7 mm in crystal pitch and 10 mm in thickness, with four unpolished parallel sides. Flood images were obtained such that each individual crystal element in the array is resolved. The energy resolution of the entire array was measured to be 33%, while that of individual crystal pixel elements utilizing the signal from both sides ranged from 23.3% to 27%. By applying a mutual-gain equalization method, a DOI resolution of 2 mm for the crystal array was obtained in the experiments, while simulations indicate that ~1 mm DOI resolution could possibly be achieved. The experimental DOI resolution can be further improved by obtaining revised detector supporting electronics with better energy resolution. This study provides a detailed detector calibration and DOI response characterization of dual-ended readout SiPM-based PET detectors, which will be important in the design and calibration of a PET scanner in the future.
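The record does not give the detector's actual DOI estimator, but a common scheme for dual-ended readout infers the interaction depth from the asymmetry of the two end signals. The exponential-attenuation light-sharing model below is an illustrative assumption, not the paper's calibration:

```python
import numpy as np

def doi_from_dual_readout(s_a, s_b, length_mm=10.0, alpha_per_mm=0.1):
    """Invert the asymmetry of the two SiPM signals for interaction depth,
    assuming (illustratively) s_a ~ exp(-alpha*d), s_b ~ exp(-alpha*(L-d)):
    log(s_b/s_a) = alpha*(2d - L)  =>  d = (log(s_b/s_a)/alpha + L) / 2"""
    return (np.log(s_b / s_a) / alpha_per_mm + length_mm) / 2.0

# Forward-model a few interaction depths, then invert them exactly.
L, alpha = 10.0, 0.1
for d_true in (2.0, 5.0, 8.0):
    s_a = np.exp(-alpha * d_true)            # light reaching end A
    s_b = np.exp(-alpha * (L - d_true))      # light reaching end B
    print(d_true, doi_from_dual_readout(s_a, s_b, L, alpha))
```

The unpolished crystal sides mentioned in the abstract serve exactly this purpose: they make the light collection depth-dependent so that the two-end asymmetry encodes the DOI.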

  4. Integration time for the perception of depth from motion parallax.

    PubMed

    Nawrot, Mark; Stroyan, Keith

    2012-04-15

    The perception of depth from relative motion is believed to be a slow process that "builds-up" over a period of observation. However, in the case of motion parallax, the potential accuracy of the depth estimate suffers as the observer translates during the viewing period. Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image motion (dθ/dt), pursuit eye movement (dα/dt), and fixation distance (f) by the formula: d/f≈dθ/dα. Given the model's dynamics, it is important to know the integration time required by the visual system to recover dα and dθ, and then estimate d. Knowing the minimum integration time reveals the incumbent error in this process. A depth-phase discrimination task was used to determine the time necessary to perceive depth-sign from motion parallax. Observers remained stationary and viewed a briefly translating random-dot motion parallax stimulus. Stimulus duration varied between trials. Fixation on the translating stimulus was monitored and enforced with an eye-tracker. The study found that relative depth discrimination can be performed with presentations as brief as 16.6 ms, with only two stimulus frames providing both retinal image motion and the stimulus window motion for pursuit (mean range=16.6-33.2 ms). This was found for conditions in which, prior to stimulus presentation, the eye was engaged in ongoing pursuit or the eye was stationary. A large high-contrast masking stimulus disrupted depth-discrimination for stimulus presentations less than 70-75 ms in both pursuit and stationary conditions. This interval might be linked to ocular-following response eye-movement latencies. We conclude that neural mechanisms serving depth from motion parallax generate a depth estimate much more quickly than previously believed. 
We propose that additional sluggishness might be due to the visual system's attempt to determine the maximum dθ/dα ratio for a selection of points on a complicated stimulus. Copyright © 2012 Elsevier Ltd. All rights reserved.
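The motion/pursuit law quoted in the abstract, d/f ≈ dθ/dα, is simple enough to sketch numerically. The function name and all numeric values below are illustrative assumptions for exposition, not values or code from the study.

```python
# Sketch of the motion/pursuit depth law d/f ≈ (dθ/dt)/(dα/dt).
# All numeric values are illustrative, not data from the study.

def relative_depth(retinal_motion_dps, pursuit_rate_dps, fixation_distance_m):
    """Relative object depth d from retinal image motion dθ/dt (deg/s),
    pursuit eye movement dα/dt (deg/s), and fixation distance f (m)."""
    if pursuit_rate_dps == 0:
        raise ValueError("pursuit rate must be nonzero")
    return fixation_distance_m * (retinal_motion_dps / pursuit_rate_dps)

# Retinal motion of 0.5 deg/s against a 2.0 deg/s pursuit at 1 m fixation:
print(relative_depth(0.5, 2.0, 1.0))  # → 0.25 (depth relative to fixation, in m)
```

A larger dθ/dα ratio signals a point farther from the fixation plane, which is why the abstract suggests the visual system may search for the maximum ratio over a complicated stimulus.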

  5. Time-domain reflectance diffuse optical tomography with Mellin-Laplace transform for experimental detection and depth localization of a single absorbing inclusion

    PubMed Central

    Puszka, Agathe; Hervé, Lionel; Planat-Chrétien, Anne; Koenig, Anne; Derouard, Jacques; Dinten, Jean-Marc

    2013-01-01

We show how to apply the Mellin-Laplace transform to process time-resolved reflectance measurements for diffuse optical tomography. We illustrate this method on simulated signals incorporating the main sources of experimental noise and suggest how to fine-tune the method in order to detect the deepest absorbing inclusions and optimize their localization in depth, depending on the dynamic range of the measurement. Finally, we apply this method to measurements acquired with a setup including a femtosecond laser, photomultipliers and a time-correlated single photon counting board. Simulations and experiments are illustrated for a probe with an interfiber distance of 1.5 cm and show the potential of time-resolved techniques for imaging absorption contrast in depth with this geometry. PMID:23577292

  6. Electrically tunable metasurface perfect absorbers for ultrathin mid-infrared optical modulators.

    PubMed

    Yao, Yu; Shankar, Raji; Kats, Mikhail A; Song, Yi; Kong, Jing; Loncar, Marko; Capasso, Federico

    2014-11-12

    Dynamically reconfigurable metasurfaces open up unprecedented opportunities in applications such as high capacity communications, dynamic beam shaping, hyperspectral imaging, and adaptive optics. The realization of high performance metasurface-based devices remains a great challenge due to very limited tuning ranges and modulation depths. Here we show that a widely tunable metasurface composed of optical antennas on graphene can be incorporated into a subwavelength-thick optical cavity to create an electrically tunable perfect absorber. By switching the absorber in and out of the critical coupling condition via the gate voltage applied on graphene, a modulation depth of up to 100% can be achieved. In particular, we demonstrated ultrathin (thickness < λ0/10) high speed (up to 20 GHz) optical modulators over a broad wavelength range (5-7 μm). The operating wavelength can be scaled from the near-infrared to the terahertz by simply tailoring the metasurface and cavity dimensions.

  7. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By means of convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
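The forward step described above (depth-dose profile → prompt γ-ray profile via convolution with a filter kernel) can be sketched with a toy discrete convolution. The kernel here is a plain normalized Gaussian stand-in rather than the paper's Gaussian-powerlaw kernel, and the depth-dose numbers are invented for illustration.

```python
# Toy sketch of the forward step: convolve a depth-dose profile with a
# filter kernel to obtain a smeared, prompt-gamma-like depth profile.
# Gaussian kernel and dose values are illustrative assumptions only.
import math

def convolve(signal, kernel):
    """'Same'-length discrete convolution with zero padding."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += w * signal[j]
        out.append(acc)
    return out

# Crude depth dose with a Bragg-peak-like rise at the end of range:
dose = [1.0] * 10 + [2.0, 4.0, 6.0, 3.0, 0.0]
sigma = 1.5
kernel = [math.exp(-(t * t) / (2 * sigma * sigma)) for t in range(-3, 4)]
s = sum(kernel)
kernel = [w / s for w in kernel]                 # normalize to unit area
profile = convolve(dose, kernel)
print(max(range(len(profile)), key=profile.__getitem__))  # → 12 (smeared peak index)
```

Reversing this smearing, i.e. recovering `dose` from `profile`, is the ill-posed step the abstract tackles with an evolutionary algorithm.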

  8. Wavefront measurement using computational adaptive optics.

    PubMed

    South, Fredrick A; Liu, Yuan-Zhi; Bower, Andrew J; Xu, Yang; Carney, P Scott; Boppart, Stephen A

    2018-03-01

In many optical imaging applications, it is necessary to correct for aberrations to obtain high quality images. Optical coherence tomography (OCT) provides access to the amplitude and phase of the backscattered optical field for three-dimensional (3D) imaging of samples. Computational adaptive optics (CAO) modifies the phase of the OCT data in the spatial frequency domain to correct optical aberrations without using a deformable mirror, as is commonly done in hardware-based adaptive optics (AO). This provides improvement of image quality throughout the 3D volume, enabling imaging across greater depth ranges and in highly aberrated samples. However, the CAO aberration correction has a complicated relation to the imaging pupil and is not a direct measurement of the pupil aberrations. Here we present new methods for recovering the wavefront aberrations directly from the OCT data without the use of hardware adaptive optics. This enables both computational measurement and correction of optical aberrations.

  9. Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light

    PubMed Central

    Wang, Ying Min; Judkewitz, Benjamin; DiMarzio, Charles A.; Yang, Changhuei

    2012-01-01

Fluorescence imaging is one of the most important research tools in biomedical sciences. However, scattering of light severely impedes imaging of thick biological samples beyond the ballistic regime. Here we directly show focusing and high-resolution fluorescence imaging deep inside biological tissues by digitally time-reversing ultrasound-tagged light with high optical gain (~5×10⁵). We confirm the presence of a time-reversed optical focus along with a diffuse background—a corollary of partial phase conjugation—and develop an approach for dynamic background cancellation. To illustrate the potential of our method, we image complex fluorescent objects and tumour microtissues at an unprecedented depth of 2.5 mm in biological tissues at a lateral resolution of 36 μm×52 μm and an axial resolution of 657 μm. Our results set the stage for a range of deep-tissue imaging applications in biomedical research and medical diagnostics. PMID:22735456

  10. Infrared Imaging Tools for Diagnostic Applications in Dermatology.

    PubMed

    Gurjarpadhye, Abhijit Achyut; Parekh, Mansi Bharat; Dubnika, Arita; Rajadas, Jayakumar; Inayathullah, Mohammed

    Infrared (IR) imaging is a collection of non-invasive imaging techniques that utilize the IR domain of the electromagnetic spectrum for tissue assessment. A subset of these techniques construct images using back-reflected light, while other techniques rely on detection of IR radiation emitted by the tissue as a result of its temperature. Modern IR detectors sense thermal emissions and produce a heat map of surface temperature distribution in tissues. Thus, the IR spectrum offers a variety of imaging applications particularly useful in clinical diagnostic area, ranging from high-resolution, depth-resolved visualization of tissue to temperature variation assessment. These techniques have been helpful in the diagnosis of many medical conditions including skin/breast cancer, arthritis, allergy, burns, and others. In this review, we discuss current roles of IR-imaging techniques for diagnostic applications in dermatology with an emphasis on skin cancer, allergies, blisters, burns and wounds.

  11. Processing Near-Infrared Imagery of the Orion Heatshield During EFT-1 Hypersonic Reentry

    NASA Technical Reports Server (NTRS)

    Spisz, Thomas S.; Taylor, Jeff C.; Gibson, David M.; Kennerly, Steve; Osei-Wusu, Kwame; Horvath, Thomas J.; Schwartz, Richard J.; Tack, Steven; Bush, Brett C.; Oliver, A. Brandon

    2016-01-01

The Scientifically Calibrated In-Flight Imagery (SCIFLI) team captured high-resolution, calibrated, near-infrared imagery of the Orion capsule during atmospheric reentry of the EFT-1 mission. A US Navy NP-3D aircraft equipped with a multi-band optical sensor package, referred to as Cast Glance, acquired imagery of the Orion capsule's heatshield during a period when Orion was slowing from approximately Mach 10 to Mach 7. The line-of-sight distance ranged from approximately 65 to 40 nmi. Global surface temperatures of the capsule's thermal heatshield derived from the near-infrared intensity measurements complemented the in-depth (embedded) thermocouple measurements. Moreover, these derived surface temperatures are essential for assessing the inverse heat transfer methods and material response codes on which the thermocouple analysis relies to infer surface temperature from the in-depth measurements. The paper describes the image processing challenges associated with a manually-tracked, high-angular-rate air-to-air observation. Issues included management of significant frame-to-frame motion due to both tracking jerk and jitter, as well as distortions due to atmospheric effects. Corrections for changing sky backgrounds (including some cirrus clouds), atmospheric attenuation, and target orientations and ranges also had to be made. The image processing goal is to reduce the detrimental effects of motion (both sensor and capsule), vibration (jitter), and atmospherics to improve image quality without compromising the quantitative integrity of the data, especially local intensity (temperature) variations. The paper will detail the approach of selecting and utilizing only the highest quality images, registering several co-temporal image frames to a single image frame to the extent frame-to-frame distortions would allow, and then co-adding the registered frames to improve image quality and reduce noise. 
Using preflight calibration data, the registered and averaged infrared intensity images were converted to surface temperatures on the Orion capsule's heatshield. Temperature uncertainties will be discussed relative to uncertainties of surface emissivity and atmospheric transmission loss. A comparison of limited onboard surface thermocouple data to the image-derived surface temperatures will be presented.

  12. WE-F-16A-02: Design, Fabrication, and Validation of a 3D-Printed Proton Filter for Range Spreading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remmes, N; Courneyea, L; Corner, S

    2014-06-15

Purpose: To design, fabricate and test a 3D-printed filter for proton range spreading in scanned proton beams. The narrow Bragg peak in lower-energy synchrotron-based scanned proton beams can result in longer treatment times for shallow targets due to energy switching time, and in plan quality degradation due to minimum monitor unit limitations. A filter with variable thicknesses patterned on the same scale as the beam's lateral spot size will widen the Bragg peak. Methods: The filter consists of pyramids dimensioned to have a Gaussian distribution in thickness. The pyramids are 2.5 mm wide at the base, 0.6 mm wide at the peak, 5 mm tall, and are repeated in a 2.5 mm pseudo-hexagonal lattice. Monte Carlo simulations of the filter in a proton beam were run using TOPAS to assess the change in depth profiles and lateral beam profiles. The prototypes were constrained to a 2.5 cm diameter disk to allow for micro-CT imaging of promising prototypes. Three different 3D printers were tested. Depth-doses with and without the prototype filter were then measured in a ~70 MeV proton beam using a multilayer ion chamber. Results: The simulation results were consistent with design expectations. Prototypes printed on one printer were clearly unacceptable on visual inspection. Prototypes on a second printer looked acceptable, but the micro-CT image showed unacceptable voids within the pyramids. Prototypes from the third printer appeared acceptable visually and on micro-CT imaging. Depth dose scans using the prototype from the third printer were consistent with simulation results. Bragg peak width increased by about 3x. Conclusions: A prototype 3D-printed pyramid filter for range spreading was successfully designed, fabricated and tested. The filter has greater design flexibility and lower prototyping and production costs compared to traditional ridge filters. Printer and material selection played a large role in the successful development of the filter.

  13. Quantifying how the combination of blur and disparity affects the perceived depth

    NASA Astrophysics Data System (ADS)

    Wang, Junle; Barkowsky, Marcus; Ricordel, Vincent; Le Callet, Patrick

    2011-03-01

The influence of a monocular depth cue, blur, on the apparent depth of stereoscopic scenes is studied in this paper. When 3D images are shown on a planar stereoscopic display, binocular disparity becomes a pre-eminent depth cue. But it simultaneously induces a conflict between accommodation and vergence, which is often considered a main cause of visual discomfort. If we limit this visual discomfort by decreasing the disparity, the apparent depth also decreases. We propose to decrease the (binocular) disparity of 3D presentations and to reinforce (monocular) cues to compensate for the loss of perceived depth and keep an unaltered apparent depth. We conducted a subjective experiment using a two-alternative forced-choice task. Observers were required to identify the larger perceived depth in a pair of 3D images with/without blur. By fitting the results to a psychometric function, we obtained points of subjective equality in terms of disparity. We found that when blur is added to the background of the image, the viewer perceives larger depth compared to images without any blur in the background. The increase in perceived depth can be considered a function of the relative distance between the foreground and background, while it is insensitive to the distance between the viewer and the depth plane at which the blur is added.

  14. Fluorescence Imaging In Vivo at Wavelengths beyond 1500 nm.

    PubMed

    Diao, Shuo; Blackburn, Jeffrey L; Hong, Guosong; Antaris, Alexander L; Chang, Junlei; Wu, Justin Z; Zhang, Bo; Cheng, Kai; Kuo, Calvin J; Dai, Hongjie

    2015-12-01

    Compared to imaging in the visible and near-infrared regions below 900 nm, imaging in the second near-infrared window (NIR-II, 1000-1700 nm) is a promising method for deep-tissue high-resolution optical imaging in vivo mainly owing to the reduced scattering of photons traversing through biological tissues. Herein, semiconducting single-walled carbon nanotubes with large diameters were used for in vivo fluorescence imaging in the long-wavelength NIR region (1500-1700 nm, NIR-IIb). With this imaging agent, 3-4 μm wide capillary blood vessels at a depth of about 3 mm could be resolved. Meanwhile, the blood-flow speeds in multiple individual vessels could be mapped simultaneously. Furthermore, NIR-IIb tumor imaging of a live mouse was explored. NIR-IIb imaging can be generalized to a wide range of fluorophores emitting at up to 1700 nm for high-performance in vivo optical imaging. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Comprehensive vascular imaging using optical coherence tomography-based angiography and photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Zabihian, Behrooz; Chen, Zhe; Rank, Elisabet; Sinz, Christoph; Bonesi, Marco; Sattmann, Harald; Ensher, Jason; Minneman, Michael P.; Hoover, Erich; Weingast, Jessika; Ginner, Laurin; Leitgeb, Rainer; Kittler, Harald; Zhang, Edward; Beard, Paul; Drexler, Wolfgang; Liu, Mengyang

    2016-09-01

Studies have proven the relationship between cutaneous vasculature abnormalities and dermatological disorders, but to image vasculature noninvasively in vivo, advanced optical imaging techniques are required. In this study, we imaged the palm of a healthy volunteer and three subjects with cutaneous abnormalities with photoacoustic tomography (PAT) and optical coherence tomography with angiography extension (OCTA). Capillaries in the papillary dermis that are too small to be discerned with PAT are visualized with OCTA. From our results, we speculate that the PA signal from the palm is mostly from hemoglobin in capillaries rather than melanin, knowing that melanin concentration in volar skin is significantly lower than that in other areas of the skin. We present for the first time OCTA images of capillaries along with the PAT images of the deeper vessels, demonstrating the complementary effective imaging depth range and the visualization capabilities of PAT and OCTA for imaging human skin in vivo. The proposed imaging system could significantly improve treatment monitoring of dermatological diseases associated with cutaneous vasculature abnormalities.

  16. Establishing the suitability of quantitative optical CT microscopy of PRESAGE® radiochromic dosimeters for the verification of synchrotron microbeam therapy

    NASA Astrophysics Data System (ADS)

    Doran, Simon J.; Rahman, A. T. Abdul; Bräuer-Krisch, Elke; Brochard, Thierry; Adamovics, John; Nisbet, Andrew; Bradley, David

    2013-09-01

Previous research on optical computed tomography (CT) microscopy in the context of the synchrotron microbeam has shown the potential of the technique and demonstrated high quality images, but has left two questions unanswered: (i) are the images suitably quantitative for 3D dosimetry? and (ii) what is the impact on the spatial resolution of the system of the limited depth-of-field of the microscope optics? Cuvette and imaging studies are reported here that address these issues. Two sets of cuvettes containing the radiochromic plastic PRESAGE® were irradiated at the ID17 biomedical beamline of the European Synchrotron Radiation Facility over the ranges 0-20 and 0-35 Gy and a third set of cuvettes was irradiated over the range 0-20 Gy using a standard medical linac. In parallel, three cylindrical PRESAGE® samples of diameter 9.7 mm were irradiated with test patterns that allowed the quantitative capabilities of the optical CT microscope to be verified, and independent measurements of the imaging modulation transfer function (MTF) to be made via two different methods. Both spectrophotometric analysis and imaging gave a linear dose response, with gradients ranging from 0.036-0.041 cm-1 Gy-1 in the three sets of cuvettes and 0.037 (optical CT units) Gy-1 for the imaging. High-quality, quantitative imaging results were obtained throughout the 3D volume, as illustrated by depth-dose profiles. These profiles are shown to be monoexponential, and the linear attenuation coefficient of PRESAGE® for the synchrotron-generated x-ray beam is measured to be (0.185 ± 0.02) cm-1, in excellent agreement with expectations. Low-level (<5%) residual image artefacts are discussed in detail. 
It was possible to easily resolve slit patterns of width 37 µm (which are smaller than many of the microbeams used on ID17), but some uncertainty remains as to whether the low values of MTF at the higher spatial frequencies are scanner-related or a result of genuine (but non-ideal) dose distributions. We conclude that microscopy images from our scanner do indeed have intensities that are proportional to spectrophotometric optical density and can thus be used as the basis for accurate dosimetry. However, further investigations are necessary before the microscopy images can be used to make quantitative measurements of peak-to-valley ratios for small-diameter microbeams. We suggest various strategies for moving forward and are optimistic about the future potential of this system.
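The two quantitative results reported above, a linear dose response of about 0.037 (optical CT units) Gy⁻¹ and a monoexponential depth dose with μ ≈ 0.185 cm⁻¹, invert straightforwardly. The helpers below are a minimal sketch using those reported constants; the measurement values fed to them are invented for illustration.

```python
# Sketch of the two relations from the abstract: a linear dose response
# ΔOD = k·D with k ≈ 0.037 Gy⁻¹ (optical CT units), and a monoexponential
# depth-dose profile D(z) = D0·exp(-μz) with μ ≈ 0.185 cm⁻¹.
# Input values below are invented, not measurements from the paper.
import math

K = 0.037   # dose-response gradient, (optical CT units) per Gy
MU = 0.185  # linear attenuation coefficient, per cm

def dose_from_delta_od(delta_od):
    """Invert the linear calibration: dose in Gy from a measured ΔOD."""
    return delta_od / K

def depth_dose(surface_dose_gy, depth_cm):
    """Monoexponential depth-dose profile."""
    return surface_dose_gy * math.exp(-MU * depth_cm)

print(round(dose_from_delta_od(0.74), 2))  # → 20.0 (Gy)
print(round(depth_dose(20.0, 1.0), 2))     # → 16.62 (Gy at 1 cm depth)
```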

  17. An x-ray fluorescence imaging system for gold nanoparticle detection.

    PubMed

    Ricketts, K; Guazzoni, C; Castoldi, A; Gibson, A P; Royle, G J

    2013-11-07

Gold nanoparticles (GNPs) may be used as a contrast agent to identify tumour location and can be modified to target and image specific tumour biological parameters. There are currently no imaging systems in the literature with sufficient sensitivity to measure GNP concentration and distribution at sufficient tissue depth for use in in vivo and in vitro studies. We have demonstrated that high detection sensitivity to GNPs can be achieved using x-ray fluorescence; furthermore this technique enables greater imaging depth in comparison to optical modalities. Two x-ray fluorescence systems were developed and used to image a range of GNP imaging phantoms. The first system consisted of a 10 mm² silicon drift detector coupled to a slightly focusing polycapillary optic which allowed 2D energy-resolved imaging in step-and-scan mode. The system has sensitivity to GNP concentrations as low as 1 ppm. GNP concentrations differing by a factor of 5 could be resolved, offering potential to distinguish tumour from non-tumour. The second system was designed to avoid slow step-and-scan image acquisition; the feasibility of exciting the whole specimen with a wide beam and detecting the fluorescent x-rays with a pixellated controlled-drift energy-resolving detector without scanning was investigated. A parallel polycapillary optic coupled to the detector was successfully used to ascertain the position where fluorescence was emitted. The tissue penetration of the technique was demonstrated to be sufficient for near-surface small-animal studies, and for imaging 3D in vitro cellular constructs. Previous work demonstrates strong potential for both imaging systems to form quantitative images of GNP concentration.

  18. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    PubMed

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low-cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  19. Visual control of robots using range images.

    PubMed

    Pomares, Jorge; Gil, Pablo; Torres, Fernando

    2010-01-01

In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, the use of 3D ToF cameras to guide a robot arm is analyzed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided with range information obtained from a ToF camera. Furthermore, the self-calibration method determines the integration time the range camera should use in order to measure depth precisely.

  20. Imaging of tissue using a NIR supercontinuum laser light source with wavelengths in the second and third NIR optical windows

    NASA Astrophysics Data System (ADS)

    Sordillo, Laura A.; Lindwasser, Lukas; Budansky, Yury; Leproux, Philippe; Alfano, R. R.

    2015-03-01

Supercontinuum (SC) light at wavelengths in the second (1,100 nm to 1,350 nm) and third (1,600 nm to 1,870 nm) NIR optical windows can be used to improve penetration depths of light through tissue and produce clearer images. Image quality is increased due to a reduction in scattering (inverse wavelength power dependence 1/λⁿ, n ≥ 1). We report on the use of a compact Leukos supercontinuum laser (model STM-2000-IR), which covers the spectral range from 700 nm to 2,400 nm and offers between 200 and 500 µW/nm of power in the second and third NIR windows, with an InGaAs detector to image abnormalities hidden beneath thick tissue.
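The inverse-wavelength scattering dependence cited above can be made concrete with a tiny helper. The reference wavelength, the window-centre wavelengths and the exponent n = 1 below are assumptions chosen for illustration, not values from the paper.

```python
# Illustrative sketch of the 1/λ**n scattering power law mentioned in the
# abstract. Reference wavelength, window centres and exponent are assumptions.

def relative_scattering(wavelength_nm, reference_nm=800.0, n=1.0):
    """Scattering at wavelength_nm relative to reference_nm under a 1/λ**n law."""
    return (reference_nm / wavelength_nm) ** n

# Centres of the second (~1225 nm) and third (~1735 nm) NIR windows vs. 800 nm:
print(round(relative_scattering(1225.0), 3))  # → 0.653
print(round(relative_scattering(1735.0), 3))  # → 0.461
```

Under this simple model, moving from 800 nm into the third NIR window roughly halves scattering, consistent with the abstract's claim of deeper penetration and clearer images; larger exponents n would predict an even stronger reduction.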

  1. Seismic reflection evidence for a northeast-dipping Hayward fault near Fremont, California: Implications for seismic hazard

    USGS Publications Warehouse

    Williams, R.A.; Simpson, R.W.; Jachens, R.C.; Stephenson, W.J.; Odum, J.K.; Ponce, D.A.

    2005-01-01

    A 1.6-km-long seismic reflection profile across the creeping trace of the southern Hayward fault near Fremont, California, images the fault to a depth of 650 m. Reflector truncations define a fault dip of about 70 degrees east in the 100 to 650 m depth range that projects upward to the creeping surface trace, and is inconsistent with a nearly vertical fault in this vicinity as previously believed. This fault projects to the Mission seismicity trend located at 4-10 km depth about 2 km east of the surface trace and suggests that the southern end of the fault is as seismically active as the part north of San Leandro. The seismic hazard implication is that the Hayward fault may have a more direct connection at depth with the Calaveras fault, affecting estimates of potential event magnitudes that could occur on the combined fault surfaces, thus affecting hazard assessments for the south San Francisco Bay region.

  2. Depth-resolved incoherent and coherent wide-field high-content imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    So, Peter T.

    2016-03-01

Recent advances in depth-resolved wide-field imaging techniques have enabled many high-throughput applications in biology and medicine. Depth-resolved imaging of incoherent signals can be readily accomplished with structured light illumination or nonlinear temporal focusing. The integration of these high-throughput systems with novel spectroscopic resolving elements further enables high-content information extraction. We will introduce a novel near common-path interferometer and demonstrate its uses in toxicology and cancer biology applications. The extension of incoherent depth-resolved wide-field imaging to coherent modalities is non-trivial. Here, we will cover recent advances in wide-field 3D-resolved mapping of refractive index, absorbance, and vibronic components in biological specimens.

  3. Depth map occlusion filling and scene reconstruction using modified exemplar-based inpainting

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Marchuk, V. I.; Fisunov, A. V.; Tokareva, S. V.; Egiazarian, K. O.

    2015-03-01

RGB-D sensors are relatively inexpensive and are commercially available off-the-shelf. However, owing to their low complexity, the depth maps they produce exhibit several artifacts, such as holes, misalignment between the depth and color images, and a lack of sharp object boundaries. Depth maps generated by Kinect cameras also contain a significant number of missing pixels and strong noise, limiting their usability in many computer vision applications. In this paper, we present an efficient hole filling and damaged region restoration method that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on modified exemplar-based inpainting and LPA-ICI filtering, exploiting the correlation between color and depth values in local image neighborhoods. As a result, edges of the objects are sharpened and aligned with the objects in the color image. Several examples considered in this paper show the effectiveness of the proposed approach for the removal of large holes as well as the recovery of small regions on several test images of depth maps. We perform a comparative study and show that, statistically, the proposed algorithm delivers superior quality results compared to existing algorithms.

  4. Profiling defect depth in composite materials using thermal imaging NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2018-04-01

Sonic Infrared (SIR) NDE is a relatively new NDE technology; it has been demonstrated to be a reliable and sensitive method for detecting defects. SIR uses ultrasonic excitation with IR imaging to detect defects and flaws in the structures being inspected. An IR camera captures infrared radiation from the target for a period of time covering the ultrasound pulse. This period may be much longer than the pulse, depending on the defect depth and the thermal properties of the materials. With the increasing deployment of composites in modern aerospace and automobile structures, fast, wide-area and reliable NDE methods are necessary. Impact damage is one of the major concerns in modern composites. Damage can occur at a certain depth without any visual indication on the surface. Defect depth information can influence maintenance decisions. Depth profiling relies on the time delays in the captured image sequence. We present our work on defect depth profiling using the temporal information of IR image sequences. An analytical model is introduced to describe heat diffusion from subsurface defects in composite materials. Depth profiling using peak time is introduced as well.

  5. Four-dimensional optical coherence tomography imaging of total liquid ventilated rats

    NASA Astrophysics Data System (ADS)

    Kirsten, Lars; Schnabel, Christian; Gaertner, Maria; Koch, Edmund

    2013-06-01

Optical coherence tomography (OCT) can be utilized for the spatially and temporally resolved visualization of alveolar tissue and its dynamics in rodent models, which allows the investigation of lung dynamics on the microscopic scale of single alveoli. The findings could provide experimental input data for numerical simulations of lung tissue mechanics and could support the development of protective ventilation strategies. Real four-dimensional OCT imaging permits the acquisition of several OCT stacks within a single ventilation cycle. Thus, the entire four-dimensional information is obtained directly. Compared to conventional virtual four-dimensional OCT imaging, where the image acquisition is extended over many ventilation cycles and triggered on pressure levels, real four-dimensional OCT is less vulnerable to motion artifacts and to non-reproducible movement of the lung tissue across subsequent ventilation cycles, which greatly reduces image artifacts. However, OCT imaging of alveolar tissue is affected by refraction and total internal reflection at air-tissue interfaces. Thus, only the first alveolar layer beneath the pleura is visible. To circumvent this effect, total liquid ventilation can be carried out to match the refractive indices of the lung tissue and the breathing medium, which improves the visibility of the alveolar structure, the image quality and the penetration depth, and reveals the real structure of the alveolar tissue. In this study, a combination of four-dimensional OCT imaging with total liquid ventilation allowed the visualization of the alveolar structure in rat lung tissue, benefiting from the improved depth range beneath the pleura and from the high spatial and temporal resolution.

  6. Fiber-optic annular detector array for large depth of field photoacoustic macroscopy.

    PubMed

    Bauer-Marschallinger, Johannes; Höllinger, Astrid; Jakoby, Bernhard; Burgholzer, Peter; Berer, Thomas

    2017-03-01

We report on a novel imaging system for large depth-of-field photoacoustic scanning macroscopy. Instead of the commonly used piezoelectric transducers, fiber-optic ultrasound detection is applied. The optical fibers are shaped into rings and mainly receive ultrasonic signals stemming from the ring symmetry axes. Four concentric fiber-optic rings with varying diameters are used in order to increase the image quality. Imaging artifacts, originating from the off-axis sensitivity of the rings, are reduced by coherence weighting. We discuss the working principle of the system and present experimental results on tissue-mimicking phantoms. The lateral resolution is estimated to be below 200 μm at a depth of 1.5 cm and below 230 μm at a depth of 4.5 cm. The minimum detectable pressure is on the order of 3 Pa. The introduced method has the potential to provide larger imaging depths than acoustic resolution photoacoustic microscopy and an imaging resolution similar to that of photoacoustic computed tomography.

  7. Planarity constrained multi-view depth map reconstruction for urban scenes

    NASA Astrophysics Data System (ADS)

    Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie

    2018-05-01

    Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes, where apparent man-made regular shapes may be present. To address these challenges, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms popular multi-view depth map reconstruction methods, with an accuracy two times better for the aerial datasets, and achieves an outcome comparable to the state of the art for ground images. As expected, PMVD is able to preserve the planarity of piecewise flat structures in urban scenes and restore the edges in depth-discontinuous areas.

  8. RGB-D depth-map restoration using smooth depth neighborhood supports

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Xue, Haoyang; Yu, Zhongjie; Wu, Qiang; Yang, Jie

    2015-05-01

    A method to restore the depth map of an RGB-D image using smooth depth neighborhood (SDN) supports is presented. The SDN supports are computed based on the corresponding color image of the depth map. Compared with the most widely used square supports, the proposed SDN supports capture the local structure of the object well. Only pixels with similar depth values are allowed to be included in the support. We combine our SDN supports with the joint bilateral filter (JBF) to form the SDN-JBF and use it to restore depth maps. Experimental results show that our SDN-JBF can not only rectify the misaligned depth pixels but also preserve sharp depth discontinuities.
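    The joint bilateral filter that the SDN supports are combined with weights each neighbor by spatial distance and by similarity in a guiding image; a minimal grayscale-guided sketch follows (the SDN support construction itself, and the paper's parameter choices, are not reproduced here; missing depth is marked with NaN for illustration).

```python
import numpy as np

def joint_bilateral_depth(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Fill/smooth a depth map with a joint bilateral filter.

    depth: 2-D float array, np.nan marking missing depth pixels.
    guide: registered grayscale image steering the range weights.
    Each output pixel is a weighted mean of valid neighbors, the weight
    combining spatial distance and guide-intensity similarity.
    """
    h, w = depth.shape
    out = depth.copy()
    for y in range(h):
        for x in range(w):
            wsum, vsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not np.isnan(depth[ny, nx]):
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        dr = float(guide[ny, nx]) - float(guide[y, x])
                        wr = np.exp(-(dr * dr) / (2 * sigma_r ** 2))
                        wsum += ws * wr
                        vsum += ws * wr * depth[ny, nx]
            if wsum > 0:
                out[y, x] = vsum / wsum
    return out

# A flat depth map with one hole, under a uniform guide, is filled exactly
dm = np.full((5, 5), 5.0); dm[2, 2] = np.nan
filled = joint_bilateral_depth(dm, np.zeros((5, 5)))
```

The point of the guide image is visible in the range weight `wr`: across an intensity edge the weight collapses, so depth is not smoothed across object boundaries.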

  9. Shear-wave elastography in the diagnosis of solid breast masses: what leads to false-negative or false-positive results?

    PubMed

    Yoon, Jung Hyun; Jung, Hae Kyoung; Lee, Jong Tae; Ko, Kyung Hee

    2013-09-01

    To investigate the factors that have an effect on false-positive or false-negative shear-wave elastography (SWE) results in solid breast masses. From June to December 2012, 222 breast lesions of 199 consecutive women (mean age: 45.3 ± 10.1 years; range, 21 to 88 years) who had been scheduled for biopsy or surgical excision were included. Greyscale ultrasound and SWE were performed in all women before biopsy. Final ultrasound assessments and SWE parameters (pattern classification and maximum elasticity) were recorded and compared with histopathology results. Patient and lesion factors in the 'true' and 'false' groups were compared. Of the 222 masses, 175 (78.8 %) were benign, and 47 (21.2 %) were malignant. False-positive rates of benign masses were significantly higher than false-negative rates of malignancy in SWE patterns, 36.6 % versus 6.4 % (P < 0.001). Among both benign and malignant masses, factors showing significance among false SWE features were lesion size, breast thickness and lesion depth (all P < 0.05). All 47 malignant breast masses had SWE images of good quality. False SWE features were more significantly seen in benign masses. Lesion size, breast thickness and lesion depth have significance in producing false results, and this needs consideration in SWE image acquisition. • Shear-wave elastography (SWE) is widely used during breast imaging • At SWE, false-positive rates were significantly higher than false-negative rates • Larger size, breast thickness, depth and fair image quality influence false-positive SWE features • Smaller size, larger breast thickness and depth influence false-negative SWE features.

  10. Quantitative structural markers of colorectal dysplasia in a cross sectional study of ex vivo murine tissue using label-free multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Prieto, Sandra P.; Greening, Gage J.; Lai, Keith K.; Muldoon, Timothy J.

    2016-03-01

    Two-photon excitation of label-free tissue is of increasing interest, as advances have been made in endoscopic clinical application of multiphoton microscopy, such as second harmonic generation (SHG) scanning endoscopy used to monitor cervical collagen in mice. We used C57BL mice as a model to investigate the progression of gastrointestinal structures, specifically glandular area and circularity. We used multiphoton microscopy to image ex vivo label-free murine colon, focusing on the collagen structure changes over time, in mice ranging from 10 to 20 weeks of age. Series of images were acquired within the colonic and intestinal tissue at depth intervals of 20 microns from muscularis to the epithelium, up to a maximum depth of 180 microns. The imaging system comprised a two-photon laser tuned to 800 nm wavelength excitation, and the SHG emission was filtered with a 400/40 bandpass filter before reaching the photomultiplier tube. Images were acquired at 15 frames per second, for 200 to 300 cumulative frames, with a field of view of 261 μm by 261 μm, and 40 mW at the sample. Image series were compared to histopathology H&E slides taken from adjacent locations. Quantitative metrics for determining differences between murine glandular structures were applied, specifically glandular area and circularity.
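    The circularity metric used alongside glandular area is conventionally defined as 4πA/P², which equals 1 for a perfect circle and falls below 1 for elongated or irregular shapes. A small sketch assuming that standard definition (the paper does not state its exact formula):

```python
import math

def circularity(area, perimeter):
    """Shape circularity 4*pi*A / P^2: 1.0 for a circle, < 1 otherwise."""
    return 4.0 * math.pi * area / perimeter ** 2

# Circle of radius r: A = pi*r^2, P = 2*pi*r, so circularity is exactly 1
r = 3.0
c_circle = circularity(math.pi * r ** 2, 2 * math.pi * r)

# Square of side 2: A = 4, P = 8, giving pi/4 (about 0.785)
c_square = circularity(4.0, 8.0)
```

In practice the area and perimeter would come from segmented gland contours, where pixelated perimeters slightly bias the metric downward.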

  11. Myocardial imaging using ultrahigh-resolution spectral domain optical coherence tomography

    PubMed Central

    Yao, Xinwen; Gan, Yu; Marboe, Charles C.; Hendon, Christine P.

    2016-01-01

    We present an ultrahigh-resolution spectral domain optical coherence tomography (OCT) system in 800 nm with a low-noise supercontinuum source (SC) optimized for myocardial imaging. The system was demonstrated to have an axial resolution of 2.72 μm with a large imaging depth of 1.78 mm and a 6-dB falloff range of 0.89 mm. The lateral resolution (5.52 μm) was compromised to enhance the image penetration required for myocardial imaging. The noise of the SC source was analyzed extensively and an imaging protocol was proposed for SC-based OCT imaging with appreciable contrast. Three-dimensional datasets were acquired ex vivo on the endocardium side of tissue specimens from different chambers of fresh human and swine hearts. With the increased resolution and contrast, features such as elastic fibers, Purkinje fibers, and collagen fiber bundles were observed. The correlation between the structural information revealed in the OCT images and tissue pathology was discussed as well. PMID:27001162

  12. Myocardial imaging using ultrahigh-resolution spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Yao, Xinwen; Gan, Yu; Marboe, Charles C.; Hendon, Christine P.

    2016-06-01

    We present an ultrahigh-resolution spectral domain optical coherence tomography (OCT) system in 800 nm with a low-noise supercontinuum source (SC) optimized for myocardial imaging. The system was demonstrated to have an axial resolution of 2.72 μm with a large imaging depth of 1.78 mm and a 6-dB falloff range of 0.89 mm. The lateral resolution (5.52 μm) was compromised to enhance the image penetration required for myocardial imaging. The noise of the SC source was analyzed extensively and an imaging protocol was proposed for SC-based OCT imaging with appreciable contrast. Three-dimensional datasets were acquired ex vivo on the endocardium side of tissue specimens from different chambers of fresh human and swine hearts. With the increased resolution and contrast, features such as elastic fibers, Purkinje fibers, and collagen fiber bundles were observed. The correlation between the structural information revealed in the OCT images and tissue pathology was discussed as well.
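    For a source with a Gaussian spectrum, OCT axial resolution follows the textbook relation δz = (2 ln 2/π) · λ₀²/Δλ. A sketch assuming that relation shows that an 800 nm source with roughly 100 nm of bandwidth lands in the micron-scale regime reported above (the paper's actual source bandwidth is not stated in the abstract):

```python
import math

def oct_axial_resolution(center_wl_nm, bandwidth_nm):
    """Free-space OCT axial resolution for a Gaussian spectrum:
    dz = (2*ln(2)/pi) * lambda0^2 / dlambda, in the same units as input."""
    return (2 * math.log(2) / math.pi) * center_wl_nm ** 2 / bandwidth_nm

# ~800 nm center wavelength, ~100 nm bandwidth -> a few microns
dz_nm = oct_axial_resolution(800.0, 100.0)
```

Note the resolution in tissue is finer by the refractive index, so a free-space figure near 2.8 μm is consistent with the reported 2.72 μm value.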

  13. A Compact Polarization Imager

    NASA Technical Reports Server (NTRS)

    Thompson, Karl E.; Rust, David M.; Chen, Hua

    1995-01-01

    A new type of image detector has been designed to analyze the polarization of light simultaneously at all picture elements (pixels) in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a custom-designed charge-coupled device with signal-analysis circuitry, all integrated on a silicon chip. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Other applications include environmental monitoring and robot vision. Innovations in the IDID include two interleaved 512 x 1024 pixel imaging arrays (one for each polarization plane), large dynamic range (well depth of 10^6 electrons per pixel), simultaneous readout and display of both images at 10^6 pixels per second, and on-chip analog signal processing to produce polarization maps in real time. When used with a lithium niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can reveal tiny differences between simultaneous images at two wavelengths.

  14. A real-time 3D range image sensor based on a novel tip-tilt-piston micromirror and dual frequency phase shifting

    NASA Astrophysics Data System (ADS)

    Skotheim, Øystein; Schumann-Olsen, Henrik; Thorstensen, Jostein; Kim, Anna N.; Lacolle, Matthieu; Haugholt, Karl-Henrik; Bakke, Thor

    2015-03-01

    Structured light is a robust and accurate method for 3D range imaging in which one or more light patterns are projected onto the scene and observed with an off-axis camera. Commercial sensors typically utilize DMD- or LCD-based LED projectors, which produce good results but have a number of drawbacks, e.g. limited speed, limited depth of focus, large sensitivity to ambient light and somewhat low light efficiency. We present a 3D imaging system based on a laser light source and a novel tip-tilt-piston micro-mirror. Optical interference is utilized to create sinusoidal fringe patterns. The setup allows fast and easy control of both the frequency and the phase of the fringe patterns by altering the axes of the micro-mirror. For 3D reconstruction we have adapted a Dual Frequency Phase Shifting method which gives robust range measurements with sub-millimeter accuracy. The use of interference for generating sine patterns provides high light efficiency and good focusing properties. The use of a laser and a bandpass filter allows easy removal of ambient light. The fast response of the micro-mirror in combination with a high-speed camera and real-time processing on the GPU allows highly accurate 3D range image acquisition at video rates.
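    Dual frequency phase shifting can be illustrated with the standard four-step phase retrieval followed by unwrapping against a low-frequency (unambiguous) phase map. The sketch below assumes ideal sinusoidal fringes and a unit-frequency low pattern; the authors' exact algorithm and calibration are not given in the abstract.

```python
import numpy as np

def phase_4step(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by 0, pi/2, pi, 3pi/2,
    assuming I_k = A + B*cos(phi + k*pi/2)."""
    return np.arctan2(i3 - i1, i0 - i2)

def unwrap_dual_freq(phi_high, phi_low, freq_ratio):
    """Unwrap the high-frequency phase using an unambiguous low-frequency
    phase: the fringe order is the nearest integer that makes phi_high
    consistent with freq_ratio * phi_low."""
    k = np.round((freq_ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic single pixel: true unwrapped high-frequency phase
truth = 17.3          # radians, several 2*pi periods
ratio = 8.0           # high/low fringe frequency ratio
shots_h = [1 + 0.5 * np.cos(truth + k * np.pi / 2) for k in range(4)]
shots_l = [1 + 0.5 * np.cos(truth / ratio + k * np.pi / 2) for k in range(4)]
recovered = unwrap_dual_freq(phase_4step(*shots_h), phase_4step(*shots_l), ratio)
```

The four-step arctangent cancels both the background A and the modulation B, which is why phase-shifting methods are robust to uneven illumination.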

  15. Calibration and validation of a small-scale urban surface water flood event using crowdsourced images

    NASA Astrophysics Data System (ADS)

    Green, Daniel; Yu, Dapeng; Pattison, Ian

    2017-04-01

    Surface water flooding occurs when intense precipitation events overwhelm the drainage capacity of an area and excess overland flow is unable to infiltrate into the ground or drain via natural or artificial drainage channels, such as river channels, manholes or SuDS. In the UK, over 3 million properties are at risk from surface water flooding alone, accounting for approximately one third of the UK's flood risk. The risk of surface water flooding is projected to increase due to several factors, including population increases, land-use alterations and future climatic changes in precipitation resulting in an increased magnitude and frequency of intense precipitation events. Numerical inundation modelling is a well-established method of investigating surface water flood risk, allowing the researcher to gain a detailed understanding of the depth, velocity, discharge and extent of actual or hypothetical flood scenarios over a wide range of spatial scales. However, numerical models require calibration of key hydrological and hydraulic parameters (e.g. infiltration, evapotranspiration, drainage rate, roughness) to ensure model outputs adequately represent the flood event being studied. Furthermore, validation data such as crowdsourced images or spatially referenced flood depths collected during a flood event may provide a useful validation of inundation depth and extent for actual flood events. In this study, a simplified two-dimensional inertial based flood inundation model requiring minimal pre-processing of data (FloodMap-HydroInundation) was used to model a short-duration, intense rainfall event (27.8 mm in 15 minutes) that occurred over the Loughborough University campus on the 28th June 2012. High resolution (1 m horizontal, ±15 cm vertical) DEM data, rasterised Ordnance Survey topographic structures data and precipitation data recorded at the University weather station were used to conduct numerical modelling over the small (< 2 km²), contained urban catchment.
To validate model outputs and allow a reconstruction of spatially referenced flood depth and extent during the flood event, crowdsourced images were obtained from social media (Twitter) and from individuals present during the flood event via the University noticeboards, as well as using dGPS flood depth data collected at one of the worst affected areas. An investigation into the sensitivity of key model parameters suggests that the numerical model code is highly sensitive to changes within the recommended range of roughness and infiltration values, as well as changes in DEM and building mesh resolutions, but less sensitive to changes in evapotranspiration and drainage capacity parameters. The study also demonstrates the potential of using crowdsourced images to validate urban surface water flood models and inform parameterisation when calibrating numerical inundation models.

  16. Gabor fusion master slave optical coherence tomography

    PubMed Central

    Cernat, Ramona; Bradu, Adrian; Israelsen, Niels Møller; Bang, Ole; Rivet, Sylvain; Keane, Pearse A.; Heath, David-Garway; Rajendram, Ranjan; Podoleanu, Adrian

    2017-01-01

    This paper describes the application of the Gabor filtering protocol to a Master/Slave (MS) swept source optical coherence tomography (SS)-OCT system at 1300 nm. The MS-OCT system delivers information from selected depths, a property that allows operation similar to that of a time domain OCT system, where dynamic focusing is possible. The Gabor filtering processing following collection of multiple data from different focus positions is different from that utilized by a conventional swept source OCT system using a Fast Fourier transform (FFT) to produce an A-scan. Instead of selecting the bright parts of A-scans for each focus position, to be placed in a final B-scan image (or in a final volume), and discarding the rest, the MS principle can be employed to advantageously deliver signal only from the depths within each focus range. The MS procedure is illustrated by creating volumes of data of constant transversal resolution from a cucumber and from an insect by repeating data acquisition for 4 different focus positions. In addition, advantage is taken of the tolerance to dispersion of the MS principle, which allows automatic compensation for dispersion created by layers above the object of interest. By combining the two techniques, Gabor filtering and Master/Slave, a powerful imaging instrument is demonstrated. The master/slave technique allows simultaneous display of three categories of images in one frame: multiple depth en-face OCT images, two cross-sectional OCT images and a confocal-like image obtained by averaging the en-face ones. We also demonstrate the superiority of MS-OCT over its FFT based counterpart when used with a Gabor filtering OCT instrument in terms of the speed of assembling the fused volume. For our case, we show that when more than 4 focus positions are required to produce the final volume, MS is faster than the conventional FFT based procedure. PMID:28270987

  17. Feasibility of transcranial photoacoustic imaging for interventional guidance of endonasal surgeries

    NASA Astrophysics Data System (ADS)

    Lediju Bell, Muyinatu A.; Ostrowski, Anastasia K.; Kazanzides, Peter; Boctor, Emad

    2014-03-01

    Endonasal surgeries to remove pituitary tumors incur the deadly risk of carotid artery injury due to limitations with real-time visualization of blood vessels surrounded by bone. We propose to use photoacoustic imaging to overcome current limitations. Blood vessels and surrounding bone would be illuminated by an optical fiber attached to the endonasal drill, while a transducer placed on the pterional region outside of the skull acquires images. To investigate feasibility, a plastisol phantom embedded with a spherical metal target was submerged in a water tank. The target was aligned with a 1-mm optical fiber coupled to a 1064 nm Nd:YAG laser. An Ultrasonix L14-5W/60 linear transducer, placed approximately 1 cm above the phantom, acquired photoacoustic and ultrasound images of the target in the presence and absence of 2- and 4-mm-thick human adult cadaveric skull specimens. Though visualized at 18 mm depth when no bone was present, the target was not detectable in ultrasound images when the 4-mm-thick skull specimen was placed between the transducer and phantom. In contrast, the target was visible in photoacoustic images at depths of 17-18 mm with and without the skull specimen. To mimic a clinical scenario where cranial bone in the nasal cavity reduces optical transmission prior to drill penetration, the 2-mm-thick specimen was placed between the phantom and optical fiber, while the 4-mm specimen remained between the phantom and transducer. In this case, the target was visible at depths of 15-17 mm for energies ranging from 9 to 18 mJ. With conventional delay-and-sum beamforming, the photoacoustic signal-to-noise ratios measured 15-18 dB and the contrast measured 5-13 dB. A short-lag spatial coherence beamformer was applied to increase signal contrast by 11-27 dB with similar values for SNR at most laser energies. Results are generally promising for photoacoustic-guided endonasal surgeries.
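    The contrast and SNR figures quoted in dB can be computed from region-of-interest statistics. A sketch using one common pair of definitions, mean target signal over mean background for contrast and mean target signal over background standard deviation for SNR; the authors' exact ROI choices and definitions are not given in the abstract, and the synthetic amplitudes below are illustrative only.

```python
import numpy as np

def contrast_db(signal_roi, background_roi):
    """Contrast in dB: 20*log10(mean(signal) / mean(background))."""
    return 20 * np.log10(signal_roi.mean() / background_roi.mean())

def snr_db(signal_roi, background_roi):
    """SNR in dB: mean signal over background standard deviation."""
    return 20 * np.log10(signal_roi.mean() / background_roi.std())

# Synthetic envelope samples for a bright target ROI and a background ROI
rng = np.random.default_rng(0)
target = rng.normal(10.0, 1.0, 500)
noise = rng.normal(2.0, 0.5, 500)
c = contrast_db(target, noise)
s = snr_db(target, noise)
```

A coherence-based beamformer raises contrast mainly by driving the background mean down, which is why contrast can improve by tens of dB while SNR stays similar.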

  18. An image-guided precision proton radiation platform for preclinical in vivo research

    NASA Astrophysics Data System (ADS)

    Ford, E.; Emery, R.; Huff, D.; Narayanan, M.; Schwartz, J.; Cao, N.; Meyer, J.; Rengan, R.; Zeng, J.; Sandison, G.; Laramore, G.; Mayr, N.

    2017-01-01

    There are many unknowns in the radiobiology of proton beams and other particle beams. We describe the development and testing of an image-guided low-energy proton system optimized for radiobiological research applications. A 50 MeV proton beam from an existing cyclotron was modified to produce collimated beams (as small as 2 mm in diameter). Ionization chamber and radiochromic film measurements were performed and benchmarked with Monte Carlo simulations (TOPAS). The proton beam was aligned with a commercially-available CT image-guided x-ray irradiator device (SARRP, Xstrahl Inc.). To examine the alternative possibility of adapting a clinical proton therapy system, we performed Monte Carlo simulations of a range-shifted 100 MeV clinical beam. The proton beam exhibits a pristine Bragg peak at a depth of 21 mm in water with a dose rate of 8.4 Gy min⁻¹ (3 mm depth). The energy of the incident beam can be modulated to lower energies while preserving the Bragg peak. The LET was: 2.0 keV µm⁻¹ (water surface), 16 keV µm⁻¹ (Bragg peak), 27 keV µm⁻¹ (10% peak dose). Alignment of the proton beam with the SARRP system isocenter was measured at 0.24 mm agreement. The width of the beam changes very little with depth. Monte Carlo-based calculations of dose using the CT image data set as input demonstrate in vivo use. Monte Carlo simulations of the modulated 100 MeV clinical proton beam show a significantly reduced Bragg peak. We demonstrate the feasibility of a proton beam integrated with a commercial x-ray image-guidance system for preclinical in vivo studies. To our knowledge this is the first description of an experimental image-guided proton beam for preclinical radiobiology research. It will enable in vivo investigations of radiobiological effects in proton beams.

  19. Computational and design methods for advanced imaging

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.

    This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second part discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage this method permits is the ability for illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements is presented, including full system raytraces of raw plenoptic images, Zernike compression techniques of raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

  20. Noise removal in extended depth of field microscope images through nonlinear signal processing.

    PubMed

    Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J

    2013-04-01

    Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.
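    The linear Wiener deconvolution that the proposed nonlinear filter improves upon can be sketched in one dimension; its noise-to-signal parameter is exactly the knob that trades restoration sharpness against the high-frequency noise amplification the abstract describes. This is a generic sketch, not the authors' filter; the PSF and parameter values are illustrative.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution of a 1-D signal.

    Applies W = conj(H) / (|H|^2 + nsr), where H is the PSF transfer
    function and nsr approximates the noise-to-signal power ratio;
    larger nsr damps high-frequency amplification at the cost of blur.
    """
    n = blurred.size
    H = np.fft.fft(psf, n)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * W))

# Blur a spike train with a small Gaussian PSF, then deconvolve it
x = np.zeros(64); x[20] = 1.0; x[40] = 0.5
psf = np.exp(-0.5 * np.arange(-3, 4) ** 2.0); psf /= psf.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf, 64)))
restored = wiener_deconvolve(blurred, psf, nsr=1e-4)
```

With noisy data, raising `nsr` suppresses the amplified high-frequency noise but leaves residual blur, which is the trade-off motivating the nonlinear training-based filter.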

  1. Non-toric extended depth of focus contact lenses for astigmatism and presbyopia correction

    NASA Astrophysics Data System (ADS)

    Ben Yaish, Shai; Zlotnik, Alex; Yehezkel, Oren; Lahav-Yacouel, Karen; Belkin, Michael; Zalevsky, Zeev

    2010-02-01

    Purpose: To test whether the extended depth of focus technology embedded in non-toric contact lenses is a suitable treatment for both astigmatism and presbyopia. Methods: The extended depth of focus pattern, consisting of micron-depth concentric grooves, was engraved on the surface of a mono-focal soft contact lens. These grooves create an interference pattern extending the focus from a point to a length of about 1 mm, providing a 3.00D extension in the depth of focus. The extension in the depth of focus provides high quality focused imaging capabilities from near through intermediate and up to far ranges. Due to the angular symmetry of the engraved pattern, the extension in the depth of focus can also resolve regular as well as irregular astigmatism aberrations. Results: The contact lens was tested on a group of 8 subjects with astigmatism and 13 subjects with presbyopia. Average corrections of 0.70D for astigmatism and 1.50D for presbyopia were demonstrated. Conclusions: The extended depth of focus technology in a non-toric contact lens corrects astigmatism and presbyopia simultaneously. The proposed solution is based upon interference rather than diffraction effects and is thus characterized by high energy efficiency at the retina plane as well as reduced chromatic aberrations.

  2. SPAD array based TOF SoC design for unmanned vehicle

    NASA Astrophysics Data System (ADS)

    Pan, An; Xu, Yuan; Xie, Gang; Huang, Zhiyu; Zheng, Yanghao; Shi, Weiwei

    2018-03-01

    To meet the requirements of unmanned-vehicle mobile lidar systems, this paper presents an SoC design based on a pulsed TOF depth image sensor. The SoC has a detection range of 300 m and a range resolution of 1.5 cm. Pixels are based on single-photon avalanche diodes (SPADs). The SoC adopts a structure in which multiple pixels share one TDC, which significantly reduces chip area and improves the fill factor of the light-sensing surface. The SoC integrates a TCSPC module to receive individual photons, measure photon flight time and process depth information in one chip. The SoC is designed in the SMIC 0.13 μm CIS CMOS technology.
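    The quoted range and resolution figures map directly onto timing requirements through the round-trip relation depth = c·t/2; a sketch assuming that relation (the chip's actual TDC architecture and bin width are not described in the abstract):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(flight_time_s):
    """Pulsed time-of-flight depth from a round-trip time: c * t / 2."""
    return C * flight_time_s / 2.0

def tdc_time_resolution_s(depth_resolution_m):
    """Timing resolution a TDC needs for a given depth resolution."""
    return 2.0 * depth_resolution_m / C

# A 300 m range implies a ~2 us maximum round trip;
# 1.5 cm depth resolution implies ~100 ps timing bins
t_max = 2 * 300.0 / C
bin_s = tdc_time_resolution_s(0.015)
```

The picosecond-scale bin width is why SPAD pixels are paired with TDCs and TCSPC histogramming rather than conventional ADC readout.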

  3. Endoscopic laser range scanner for minimally invasive, image guided kidney surgery

    NASA Astrophysics Data System (ADS)

    Friets, Eric; Bieszczad, Jerry; Kynor, David; Norris, James; Davis, Brynmor; Allen, Lindsay; Chambers, Robert; Wolf, Jacob; Glisson, Courtenay; Herrell, S. Duke; Galloway, Robert L.

    2013-03-01

    Image guided surgery (IGS) has led to significant advances in surgical procedures and outcomes. Endoscopic IGS is hindered, however, by the lack of suitable intraoperative scanning technology for registration with preoperative tomographic image data. This paper describes implementation of an endoscopic laser range scanner (eLRS) system for accurate, intraoperative mapping of the kidney surface, registration of the measured kidney surface with preoperative tomographic images, and interactive image-based surgical guidance for subsurface lesion targeting. The eLRS comprises a standard stereo endoscope coupled to a steerable laser, which scans a laser fan beam across the kidney surface, and a high-speed color camera, which records the laser-illuminated pixel locations on the kidney. Through calibrated triangulation, a dense set of 3-D surface coordinates are determined. At maximum resolution, the eLRS acquires over 300,000 surface points in less than 15 seconds. Lower resolution scans of 27,500 points are acquired in one second. Measurement accuracy of the eLRS, determined through scanning of reference planar and spherical phantoms, is estimated to be 0.38 ± 0.27 mm at a range of 2 to 6 cm. Registration of the scanned kidney surface with preoperative image data is achieved using a modified iterative closest point algorithm. Surgical guidance is provided through graphical overlay of the boundaries of subsurface lesions, vasculature, ducts, and other renal structures labeled in the CT or MR images, onto the eLRS camera image. Depth to these subsurface targets is also displayed. Proof of clinical feasibility has been established in an explanted perfused porcine kidney experiment.
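    The calibrated triangulation at the core of a laser range scanner can be illustrated in 2-D with one camera ray and one laser ray; this is a simplified geometric sketch, not the eLRS system's actual calibration model. With the camera at the origin viewing along z and the laser offset by a baseline b along x, the camera ray x = z·tan(cam) and the laser ray x = b - z·tan(laser) intersect at z = b / (tan(cam) + tan(laser)).

```python
import math

def triangulate_depth(baseline_m, cam_angle_rad, laser_angle_rad):
    """2-D active triangulation: depth of the intersection of a camera ray
    (x = z*tan(cam), camera at origin) and a laser ray
    (x = b - z*tan(laser), source at x = b)."""
    return baseline_m / (math.tan(cam_angle_rad) + math.tan(laser_angle_rad))

# Symmetric 45-degree geometry with a 4 cm baseline: rays meet at z = b/2
z = triangulate_depth(0.04, math.radians(45), math.radians(45))
```

The same geometry shows why range accuracy degrades with distance: for fixed angular (pixel) resolution, the depth error grows roughly with z²/b.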

  4. Imaging and characterizing shallow sedimentary strata using teleseismic arrivals recorded on linear arrays: An example from the Atlantic Coastal Plain of the southeastern U.S.

    NASA Astrophysics Data System (ADS)

    Pratt, T. L.

    2017-12-01

    Unconsolidated, near-surface sediments can influence the amplitudes and frequencies of ground shaking during earthquakes. Ideally these effects are accounted for when determining ground motion prediction equations and in hazard estimates summarized in seismic hazard maps. This study explores the use of teleseismic arrivals recorded on linear receiver arrays to estimate the seismic velocities, determine the frequencies of fundamental resonance peaks, and image the major reflectors in the Atlantic Coastal Plain (ACP) and Mississippi Embayment (ME) strata of the central and southeastern United States. These strata have thicknesses as great as 2 km near the coast in the study areas, but become thin and eventually pinch out landward. Spectral ratios relative to bedrock sites were computed from teleseismic arrivals recorded on linear arrays deployed across the sedimentary sequences. The large contrast in properties at the bedrock surface produces a strong fundamental resonance peak in the 0.2 to 4 Hz range. Contour maps of sediment thicknesses derived from drill hole data allow for the theoretical estimation of average velocities by matching the observed frequencies at which resonance peaks occur. The sloping bedrock surface allows for calculation of a depth-varying velocity profile, under the assumption that the velocities at each depth do not change laterally between stations. The spectral ratios can then be converted from frequency to depth, resulting in an image of the subsurface similar to that of a seismic reflection profile but with amplitudes being the spectral ratio caused by a reflector at that depth. The complete data set thus provides an average velocity function for the sedimentary sequence, the frequencies and amplitudes of the major resonance peaks, and a subsurface image of the major reflectors producing resonance peaks. 
The method is demonstrated using three major receiver arrays crossing the ACP and ME strata that were originally deployed for imaging the crust and mantle, confirming that teleseismic signals can be used to characterize sedimentary strata in the uppermost kilometer.
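    The fundamental resonance frequency of a soft sediment layer over stiff bedrock follows the standard quarter-wavelength relation f₀ = Vs/(4h), with overtones at odd multiples. A sketch assuming that model (the study's actual velocity profile is depth-varying, so these are illustrative numbers consistent with the 0.2 to 4 Hz band reported above):

```python
def resonance_frequencies(vs_m_s, thickness_m, n_modes=3):
    """Quarter-wavelength resonances of a uniform soft layer over rigid
    bedrock: f_n = (2n + 1) * Vs / (4 * h) for n = 0, 1, 2, ..."""
    return [(2 * n + 1) * vs_m_s / (4.0 * thickness_m) for n in range(n_modes)]

# e.g. 600 m/s average shear velocity over 500 m of sediment:
# fundamental near 0.3 Hz, overtones at 0.9 and 1.5 Hz
freqs = resonance_frequencies(600.0, 500.0)
```

Inverting this relation is how matching observed peak frequencies against drill-hole thicknesses yields the average velocity, as the abstract describes: thinner sections landward push the fundamental peak toward the high end of the band.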

  5. Seismic Structure of the Antarctic Upper Mantle and Transition Zone Unearthed by Full Waveform Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Lloyd, A. J.; Wiens, D.; Zhu, H.; Tromp, J.; Nyblade, A.; Anandakrishnan, S.; Aster, R. C.; Huerta, A. D.; Winberry, J. P.; Wilson, T. J.; Dalziel, I. W. D.; Hansen, S. E.; Shore, P.

    2017-12-01

    The upper mantle and transition zone beneath Antarctica and the surrounding ocean are among the poorest seismically imaged regions of the Earth's interior. Over the last decade and a half, researchers have deployed several large temporary broadband seismic arrays focusing on major tectonic features in the Antarctic. The broader international community has also facilitated further instrumentation of the continent, often operating stations in additional regions. As of 2016, waveforms are available from almost 300 unique station locations. Using these stations along with 26 southern mid-latitude seismic stations, we have imaged the seismic structure of the upper mantle and transition zone using full waveform adjoint techniques. The full waveform adjoint inversion assimilates phase observations from 3-component seismograms containing P, S, Rayleigh, and Love waves, including reflections and overtones, from 270 earthquakes (5.5 ≤ Mw ≤ 7.0) that occurred between 2001-2003 and 2007-2016. We present the major results of the full waveform adjoint inversion following 20 iterations, resulting in a continental-scale seismic model (ANT_20) with regional-scale resolution. Within East Antarctica, ANT_20 reveals internal seismic heterogeneity and differences in lithospheric thickness. For example, fast seismic velocities extending to 200-300 km depth are imaged beneath both Wilkes Land and the Gamburtsev Subglacial Mountains, whereas fast velocities only extend to 100-200 km depth beneath the Lambert Graben and Enderby Land. Furthermore, fast velocities are not found beneath portions of Dronning Maud Land, suggesting old cratonic lithosphere may be absent. Beneath West Antarctica, slow upper mantle seismic velocities are imaged extending from the Balleny Islands southward along the Transantarctic Mountains front, broadening beneath the southern and northern portions of the mountain range.
In addition, slow upper mantle velocities are imaged beneath the West Antarctic coast extending from Marie Byrd Land to the Antarctic Peninsula. This region of slow velocity only extends to 150-200 km depth beneath the Antarctic Peninsula, while elsewhere it extends to deeper upper mantle depths and possibly into the transition zone as well as offshore, suggesting two different geodynamic processes are at play.

  6. A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences

    PubMed Central

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body poses can be estimated through model fitting using dense correspondences between depth data and an articulated human model (the local optimization method). Although this usually achieves high accuracy thanks to the dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this key-point-based method is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by the key-point-based and local optimization methods. Experimental results and a performance comparison are presented to demonstrate the effectiveness of the proposed approach. PMID:22399933
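    Under a simplifying assumption not made explicit in the abstract (independent Gaussian errors per joint coordinate), the Bayesian integration of the two estimators can be sketched as a precision-weighted product of Gaussians; the variances below are illustrative:

```python
import numpy as np

def fuse_estimates(mu_a, var_a, mu_b, var_b):
    """Precision-weighted fusion of two independent Gaussian estimates
    of the same joint position (product of two Gaussians)."""
    prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
    var = 1.0 / (prec_a + prec_b)
    mu = var * (prec_a * mu_a + prec_b * mu_b)
    return mu, var

# Key-point detector: robust but coarse (larger variance).
# Local optimizer: accurate but fragile (smaller variance while tracking).
mu, var = fuse_estimates(np.array([1.0, 2.0, 0.5]), 0.04,
                         np.array([1.1, 2.1, 0.6]), 0.01)
```

    The accurate-but-fragile local optimizer dominates while its variance is low; when it loses track, its variance can be inflated so the robust key-point estimate takes over.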

  7. Phoebe: A Surface Dominated by Water

    NASA Astrophysics Data System (ADS)

    Fraser, Wesley C.; Brown, Michael E.

    2018-07-01

    The Saturnian irregular satellite, Phoebe, can be broadly described as a water-rich rock. This object, which presumably originated from the same primordial population shared by the dynamically excited Kuiper Belt Objects (KBOs), has received high-resolution spectral imaging during the Cassini flyby. We present a new analysis of the Visual Infrared Mapping Spectrometer observations of Phoebe, which, critically, includes a geometry correction routine that enables pixel-by-pixel mapping of visible and infrared spectral cubes directly onto the Phoebe shape model, even when an image exhibits significant trailing errors. The result of our re-analysis is a successful match of 46 images, producing spectral maps covering the majority of Phoebe’s surface, roughly a third of which is imaged by high-resolution observations (<22 km per pixel resolution). There is no spot on Phoebe’s surface that is absent of water absorption. The regions richest in water are clearly associated with the Jason and south pole impact basins. Phoebe exhibits only three spectral types, and a water–ice concentration that correlates with physical depth and visible albedo. The water-rich and water-poor regions exhibit significantly different crater size frequency distributions and different large crater morphologies. We propose that Phoebe once had a water-poor surface whose water–ice concentration was enhanced by basin-forming impacts that exposed richer subsurface layers. The range of Phoebe’s water–ice absorption spans the same range exhibited by dynamically excited KBOs. The common water–ice absorption depths and primordial origins, and the association of Phoebe’s water-rich regions with its impact basins, suggest that KBOs, too, may have originated with water-poor surfaces that were enhanced through stochastic collisional modification.

  8. From the eye of the albatrosses: a bird-borne camera shows an association between albatrosses and a killer whale in the Southern Ocean.

    PubMed

    Sakamoto, Kentaro Q; Takahashi, Akinori; Iwata, Takashi; Trathan, Philip N

    2009-10-07

    Albatrosses fly many hundreds of kilometers across the open ocean to find and feed upon their prey. Despite the growing number of studies concerning their foraging behaviour, relatively little is known about how albatrosses actually locate their prey. Here, we present our results from the first deployments of a combined animal-borne camera and depth data logger on free-ranging black-browed albatrosses (Thalassarche melanophrys). The still images recorded from these cameras showed that some albatrosses actively followed a killer whale (Orcinus orca), possibly to feed on food scraps left by this diving predator. The camera images together with the depth profiles showed that the birds dived only occasionally, but that they actively dived when other birds or the killer whale were present. This association with diving predators or other birds may partially explain how albatrosses find their prey more efficiently in the apparently 'featureless' ocean, with a minimal requirement for energetically costly diving or landing activities.

  9. From the Eye of the Albatrosses: A Bird-Borne Camera Shows an Association between Albatrosses and a Killer Whale in the Southern Ocean

    PubMed Central

    Sakamoto, Kentaro Q.; Takahashi, Akinori; Iwata, Takashi; Trathan, Philip N.

    2009-01-01

    Albatrosses fly many hundreds of kilometers across the open ocean to find and feed upon their prey. Despite the growing number of studies concerning their foraging behaviour, relatively little is known about how albatrosses actually locate their prey. Here, we present our results from the first deployments of a combined animal-borne camera and depth data logger on free-ranging black-browed albatrosses (Thalassarche melanophrys). The still images recorded from these cameras showed that some albatrosses actively followed a killer whale (Orcinus orca), possibly to feed on food scraps left by this diving predator. The camera images together with the depth profiles showed that the birds dived only occasionally, but that they actively dived when other birds or the killer whale were present. This association with diving predators or other birds may partially explain how albatrosses find their prey more efficiently in the apparently ‘featureless’ ocean, with a minimal requirement for energetically costly diving or landing activities. PMID:19809497

  10. Three dimensional measurement with an electrically tunable focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng

    2017-03-01

    A liquid crystal microlens array (LCMLA) with an arrayed microhole-patterned electrode, based on nematic liquid crystal materials and fabricated by conventional UV photolithography and wet etching, is presented. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experiments show that the focal length of the LCMLA can be tuned easily by changing only the root-mean-square value of the applied voltage signal. The LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters, such as three-dimensional (3D) depth, positioning, and motion expression, are given. The depth resolution is discussed in detail. Experiments are carried out to obtain the static and dynamic 3D information of the chosen objects.

  11. Three dimensional measurement with an electrically tunable focused plenoptic camera.

    PubMed

    Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng

    2017-03-01

    A liquid crystal microlens array (LCMLA) with an arrayed microhole-patterned electrode, based on nematic liquid crystal materials and fabricated by conventional UV photolithography and wet etching, is presented. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experiments show that the focal length of the LCMLA can be tuned easily by changing only the root-mean-square value of the applied voltage signal. The LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters, such as three-dimensional (3D) depth, positioning, and motion expression, are given. The depth resolution is discussed in detail. Experiments are carried out to obtain the static and dynamic 3D information of the chosen objects.

  12. Use of remote sensing techniques and aeromagnetic data to study episodic oil seep discharges along the Gulf of Suez in Egypt.

    PubMed

    Kaiser, M F; Aziz, A M; Ghieth, B M

    2013-07-15

    Four successive oil discharges were observed during the last 2 years following the recorded earthquake events. Oil slicks were clearly observed in the thermal band of the Enhanced Thematic Mapper (ETM+) images acquired during the discharge events. Lineaments were extracted from the ETM+ image data and the SRTM digital elevation model (DEM). The seismic activity is temporally and spatially related to active major faults and structural lineaments. The site was subjected to numerous earthquakes with magnitudes ranging from 3 to 5.4 Mb. Aeromagnetic field data analyses indicated the existence of deep major faults crossing the Gebel El-Zeit and Mellaha basins (oil reservoirs). The magnetic field survey showed a major distinctive fault striking NE-SW at 7000 m depth. The occurrence of these faults at great depth enables the crude oil to migrate upward and appear at the surface as oil seeps onshore and as offshore slicks in the Gemsa-Hurghada coastal zone. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Very high frame rate volumetric integration of depth images on mobile devices.

    PubMed

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, however, remains challenging. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system achieves frame rates of up to 47 Hz on an Nvidia Shield Tablet and 910 Hz on an Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
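    A minimal sketch of the underlying data structure, voxel block hashing with weighted-average truncated signed distance function (TSDF) fusion, might look as follows; the block size and truncation band are hypothetical values, and a real system hashes on the GPU rather than in a Python dict:

```python
import numpy as np

BLOCK = 8        # voxels per block side (hypothetical)
TRUNC = 0.04     # TSDF truncation band in metres (hypothetical)

blocks = {}      # hash table: block coordinate -> (tsdf, weight) arrays

def block_key(voxel):
    return tuple(v // BLOCK for v in voxel)

def integrate(voxel, sdf):
    """Fuse one signed-distance observation into a voxel by weighted
    running average, allocating the enclosing voxel block on demand
    (the core of voxel-block-hashing TSDF integration)."""
    key = block_key(voxel)
    if key not in blocks:            # allocate only blocks near the surface
        blocks[key] = (np.zeros((BLOCK,) * 3), np.zeros((BLOCK,) * 3))
    tsdf, w = blocks[key]
    i = tuple(v % BLOCK for v in voxel)
    d = np.clip(sdf, -TRUNC, TRUNC)  # truncate far-from-surface distances
    tsdf[i] = (tsdf[i] * w[i] + d) / (w[i] + 1.0)
    w[i] += 1.0
```

    Allocating fixed-size blocks through a hash table, rather than a dense grid, is what keeps the memory footprint small enough for tablet-class hardware.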

  14. Case study: the introduction of stereoscopic games on the Sony PlayStation 3

    NASA Astrophysics Data System (ADS)

    Bickerstaff, Ian

    2012-03-01

    A free stereoscopic firmware update for Sony Computer Entertainment's PlayStation® 3 console provides the potential to enormously increase the popularity of stereoscopic 3D in the home. For this to succeed, though, a large selection of content has to become available that exploits 3D in the best way possible. In addition to the existing challenges found in creating 3D movies and television programmes, the stereography must compensate for the dynamic and unpredictable environments found in games. The software must automatically map the depth range of the scene into the display's comfort zone while minimising depth compression. This paper presents a range of techniques developed to solve this problem, along with the challenge of creating twice as many images as the 2D version without excessively compromising the frame rate or image quality. At the time of writing, over 80 stereoscopic PlayStation 3 games have been released, and notable titles are used as examples to illustrate how the techniques have been adapted for different game genres. Since the firmware's introduction in 2010, the industry has matured, with a large number of developers now producing increasingly sophisticated 3D content. New technologies such as viewer head tracking and head-mounted displays should increase the appeal of 3D in the home still further.
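    The automatic mapping of scene depth into the display's comfort zone can be sketched as a remap in inverse depth, since screen disparity varies with 1/depth; the comfort-zone limits below are hypothetical illustration values, not anything stated in the paper:

```python
def remap_disparity(depth, z_near, z_far, d_min=-20.0, d_max=12.0):
    """Linearly map a pixel's scene depth into the display's disparity
    comfort zone (d_min/d_max in pixels, hypothetical limits).
    Disparity is proportional to 1/depth, so working in inverse depth
    preserves perceived depth ordering with minimal compression."""
    inv = 1.0 / depth
    inv_near, inv_far = 1.0 / z_near, 1.0 / z_far
    t = (inv - inv_far) / (inv_near - inv_far)  # 0 at far plane, 1 at near
    return d_max + t * (d_min - d_max)          # near objects pop out
```

    In a game the near and far planes change every frame, so z_near and z_far would be taken from the live depth buffer rather than fixed constants.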

  15. Cloud Optical Depth Measured with Ground-Based, Uncooled Infrared Imagers

    NASA Technical Reports Server (NTRS)

    Shaw, Joseph A.; Nugent, Paul W.; Pust, Nathan J.; Redman, Brian J.; Piazzolla, Sabino

    2012-01-01

    Recent advances in uncooled, low-cost, long-wave infrared imagers provide excellent opportunities for remotely deployed ground-based remote sensing systems. However, the use of these imagers in demanding atmospheric sensing applications requires that careful attention be paid to characterizing and calibrating the system. We have developed and are using several versions of the ground-based "Infrared Cloud Imager (ICI)" instrument to measure spatial and temporal statistics of clouds and cloud optical depth or attenuation for both climate research and Earth-space optical communications path characterization. In this paper we summarize the ICI instruments and calibration methodology, then show ICI-derived cloud optical depths that are validated using a dual-polarization cloud lidar system for thin clouds (optical depth of approximately 4 or less).

  16. High-Frequency Ultrasonic Imaging of the Anterior Segment Using an Annular Array Transducer

    PubMed Central

    Silverman, Ronald H.; Ketterling, Jeffrey A.; Coleman, D. Jackson

    2006-01-01

    Objective: Very-high-frequency (>35 MHz) ultrasound (VHFU) allows imaging of anterior segment structures of the eye with a resolution of less than 40 μm. The low focal ratio of VHFU transducers, however, results in a depth-of-field (DOF) of less than 1 mm. Our aim was to develop a high-frequency annular array transducer for ocular imaging with improved DOF, sensitivity and resolution compared to conventional transducers. Design: Experimental study. Participants: Cadaver eyes, ex vivo cow eyes, in vivo rabbit eyes. Methods: A spherically curved annular array ultrasound transducer was fabricated. The array consisted of five concentric rings of equal area, had an overall aperture of 6 mm and a geometric focus of 12 mm. The nominal center frequency of all array elements was 40 MHz. An experimental system was designed in which a single array element was pulsed and echo data recorded from all elements. By sequentially pulsing each element, echo data were acquired for all 25 transmit/receive annuli combinations. The echo data were then synthetically focused and composite images produced. Transducer operation was tested by scanning a test object consisting of a series of 25-μm diameter wires spaced at increasing range from the transducer. Imaging capabilities of the annular array were demonstrated in ex vivo bovine, in vivo rabbit and human cadaver eyes. Main Outcome Measures: Depth of field, resolution and sensitivity. Results: The wire scans verified the operation of the array and demonstrated a 6.0 mm DOF compared to the 1.0 mm DOF of a conventional single-element transducer of comparable frequency, aperture and focal length. B-mode images of ex vivo bovine, in vivo rabbit and cadaver eyes showed that while the single-element transducer had high sensitivity and resolution within 1–2 mm of its focus, the array with synthetic focusing maintained this quality over a 6 mm DOF. Conclusion: An annular array for high-resolution ocular imaging has been demonstrated. 
This technology offers improved depth-of-field, sensitivity and lateral resolution compared to single-element fixed focus transducers currently used for VHFU imaging of the eye. PMID:17141314
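    The synthetic focusing step described under Methods can be sketched as a delay-and-sum over all 25 transmit/receive ring pairs; the ring radii, sampling rate and sound speed below are illustrative stand-ins, not the paper's values:

```python
import numpy as np

C = 1540.0  # assumed speed of sound in tissue, m/s

def synthetic_focus(rf, ring_radii, fs, depths):
    """Synthetically refocus echo data from all transmit/receive ring
    pairs of an annular array at each depth (delay-and-sum).
    rf[i, j] is the sampled echo trace for transmit ring i, receive ring j."""
    n = len(ring_radii)
    out = np.zeros(len(depths))
    for k, z in enumerate(depths):
        acc = 0.0
        for i in range(n):
            for j in range(n):
                # two-way path: transmit ring -> focal point -> receive ring
                path = np.hypot(ring_radii[i], z) + np.hypot(ring_radii[j], z)
                s = int(round(path / C * fs))
                if s < rf.shape[2]:
                    acc += rf[i, j, s]
        out[k] = acc
    return out
```

    Because the delays are applied in post-processing, every depth can be brought into focus from a single set of recordings, which is how the array extends the DOF without moving the transducer.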

  17. Segmentation of the macular choroid in OCT images acquired at 830nm and 1060nm

    NASA Astrophysics Data System (ADS)

    Lee, Sieun; Beg, Mirza F.; Sarunic, Marinko V.

    2013-06-01

    Retinal imaging with optical coherence tomography (OCT) has rapidly advanced in ophthalmic applications with the broad availability of Fourier domain (FD) technology in commercial systems. The high sensitivity afforded by FD-OCT has enabled imaging of the choroid, a layer of blood vessels serving the outer retina. Improved visualization of the choroid and the choroid-sclera boundary has been investigated using techniques such as enhanced depth imaging (EDI), and also with OCT systems operating in the 1060-nm wavelength range. We report on a comparison of imaging the macular choroid with commercial and prototype OCT systems, and present automated 3D segmentation of the choroid-scleral layer using a graph cut algorithm. The thickness of the choroid is an important measurement to investigate for possible correlation with severity, or possibly early diagnosis, of diseases such as age-related macular degeneration.

  18. Combined spectral-domain optical coherence tomography and hyperspectral imaging applied for tissue analysis: Preliminary results

    NASA Astrophysics Data System (ADS)

    Dontu, S.; Miclos, S.; Savastru, D.; Tautan, M.

    2017-09-01

    In recent years, many optoelectronic techniques have been developed to improve devices for tissue analysis. Spectral-Domain Optical Coherence Tomography (SD-OCT) is an interferometric medical imaging modality that provides depth-resolved tissue structure information with resolution in the μm range. However, SD-OCT has its own limitations and cannot offer biochemical information about the tissue. These data can be obtained with hyperspectral imaging (HSI), a non-invasive, sensitive and real-time technique. In the present study we have combined SD-OCT with HSI for tissue analysis; both methods have demonstrated significant potential in this context. Preliminary results using different tissues have highlighted the capabilities of this combined technique.

  19. Photo-induced ultrasound microscopy for photo-acoustic imaging of non-absorbing specimens

    NASA Astrophysics Data System (ADS)

    Tcarenkova, Elena; Koho, Sami V.; Hänninen, Pekka E.

    2017-08-01

    Photo-Acoustic Microscopy (PAM) has raised high interest in in-vivo imaging due to its ability to preserve the near-diffraction-limited spatial resolution of optical microscopes whilst extending the penetration depth to the mm range. Another advantage of PAM is that it is a label-free technique: any substance that absorbs the PAM excitation laser light can be viewed. However, not all sample structures of interest absorb sufficiently to provide contrast for imaging. This work describes a novel imaging method that makes it possible to visualize optically transparent samples that lack intrinsic photo-acoustic contrast, without the addition of contrast agents. A thin, strongly light-absorbing layer next to the sample is used to generate a strong ultrasonic signal. This signal, when recorded from the opposite side, contains ultrasonic transmission information about the sample, and thus the method can be used to obtain an ultrasound transmission image on any PAM system.

  20. Quantitative assessment of Cerenkov luminescence for radioguided brain tumor resection surgery

    NASA Astrophysics Data System (ADS)

    Klein, Justin S.; Mitchell, Gregory S.; Cherry, Simon R.

    2017-05-01

    Cerenkov luminescence imaging (CLI) is a developing imaging modality that detects radiolabeled molecules via the visible light emitted during the radioactive decay process. We used a Monte Carlo based computer simulation to quantitatively compare CLI with direct detection of the ionizing radiation itself as an intraoperative imaging tool for assessing brain tumor margins. Our brain tumor model consisted of a 1 mm spherical tumor remnant embedded up to 5 mm deep below the surface of normal brain tissue. Tumor-to-background contrasts ranging from 2:1 to 10:1 were considered. We quantified all decay signals (e±, gamma photons, Cerenkov photons) reaching the surface of the brain volume. CLI proved to be the most sensitive method for detecting the tumor volume in both imaging and non-imaging strategies, as assessed by contrast-to-noise ratio and by the receiver operating characteristic output of a channelized Hotelling observer.
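    The contrast-to-noise ratio used to assess detectability can be computed, under one common definition (the abstract does not give the paper's exact formulation), as:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio of a tumour region of interest against
    background: (difference of means) / (background standard deviation)."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std()
```

    A channelized Hotelling observer, the other figure of merit named in the abstract, is a model observer and considerably more involved; it is not sketched here.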

  1. Superresolved digital in-line holographic microscopy for high-resolution lensless biological imaging

    NASA Astrophysics Data System (ADS)

    Micó, Vicente; Zalevsky, Zeev

    2010-07-01

    Digital in-line holographic microscopy (DIHM) is a modern approach capable of achieving micron-range lateral and depth resolutions in three-dimensional imaging. DIHM in combination with numerical image reconstruction uses an extremely simplified setup while retaining the advantages provided by holography, with enhanced capabilities derived from algorithmic digital processing. We introduce superresolved DIHM based on time and angular multiplexing of the sample's spatial frequency information, yielding the generation of a synthetic aperture (SA). The SA expands the cutoff frequency of the imaging system, allowing submicron resolutions in both the transversal and axial directions. The proposed approach can be applied when imaging essentially transparent (low-concentration dilutions) and static (slow dynamics) samples. Validation of the method is reported for both a synthetic object (a U.S. Air Force resolution test target), to quantify the resolution improvement, and a biological specimen (a sperm cell biosample), showing the generation of high synthetic numerical aperture values while working without lenses.

  2. Laser Illumination Modality of Photoacoustic Imaging Technique for Prostate Cancer

    NASA Astrophysics Data System (ADS)

    Peng, Dong-qing; Peng, Yuan-yuan; Guo, Jian; Li, Hui

    2016-02-01

    Photoacoustic imaging (PAI) has recently emerged as a promising imaging technique for prostate cancer. However, many challenges remain in PAI for prostate cancer detection, such as the choice of laser illumination modality. Knowledge of the absorbed light distribution in prostate tissue is essential, since the distribution of absorbed light energy influences the imaging depth and range of PAI. In order to compare different laser illumination modalities for photoacoustic imaging of prostate cancer, an optical model of the human prostate was established and combined with the Monte Carlo simulation method to calculate the light absorption distribution in the prostate tissue. The light absorption distributions of the transurethral and transrectal illumination cases, and of tumors at different locations, were compared with each other. The conclusions are significant for optimizing the light illumination in a PAI system for prostate cancer detection.
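    A heavily simplified 1-D weighted Monte Carlo of absorbed energy versus depth illustrates the kind of calculation involved; the optical coefficients here are illustrative only, not the paper's prostate model, and real simulations are 3-D with anisotropic scattering:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_absorption(n_photons=2000, mu_a=1.0, mu_s=10.0, depth=4.0, nbins=40):
    """1-D weighted Monte Carlo of absorbed light energy vs. depth in a
    turbid slab (coefficients in 1/cm; values are illustrative)."""
    mu_t = mu_a + mu_s
    absorbed = np.zeros(nbins)
    for _ in range(n_photons):
        z, w, direction = 0.0, 1.0, 1.0
        while w > 1e-4:
            z += direction * rng.exponential(1.0 / mu_t)  # free path
            if z < 0.0 or z >= depth:
                break                          # photon escapes the slab
            absorbed[int(z / depth * nbins)] += w * mu_a / mu_t
            w *= 1.0 - mu_a / mu_t             # deposit absorbed fraction
            direction = rng.choice((-1.0, 1.0))  # crude 1-D scattering
    return absorbed / n_photons
```

    Running such a simulation with the source at the urethra versus the rectal wall, and reading off where the absorbed energy concentrates, is the essence of the illumination-modality comparison.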

  3. All-digital full waveform recording photon counting flash lidar

    NASA Astrophysics Data System (ADS)

    Grund, Christian J.; Harwit, Alex

    2010-08-01

    Current-generation analog and photon-counting flash lidar approaches suffer from limitations in waveform depth, dynamic range, sensitivity, false alarm rates, optical acceptance angle (f/#), optical and electronic cross talk, and pixel density. To address these issues Ball Aerospace is developing a new approach to flash lidar that employs direct coupling of a photocathode and microchannel plate front end to a high-speed, pipelined, all-digital Read Out Integrated Circuit (ROIC) to achieve photon-counting temporal waveform capture in each pixel on each laser return pulse. A unique characteristic is the absence of performance-limiting analog or mixed-signal components. When implemented in 65 nm CMOS technology, the Ball Intensified Imaging Photon Counting (I2PC) flash lidar FPA technology can record up to 300 photon arrivals in each pixel with 100 ps resolution on each photon return, with up to 6000 range bins in each pixel. The architecture supports near-100% fill factor, fast optical system designs (f/#<1), and array sizes up to 3000×3000 pixels. Compared to existing technologies, >60 dB ultimate dynamic range improvement and four-order-of-magnitude reductions in false alarm rates are anticipated, while achieving single-photon range precision better than 1 cm. I2PC significantly extends long-range and low-power hard target imaging capabilities useful for autonomous hazard avoidance (ALHAT), navigation, imaging vibrometry, and inspection applications, and enables scannerless 3D imaging for distributed target applications such as range-resolved atmospheric remote sensing, vegetation canopies, and camouflage penetration from terrestrial, airborne, GEO, and LEO platforms. We discuss the I2PC architecture, development status, anticipated performance advantages, and limitations.

  4. Depth-to-Ice Map of a Southern Mars Site Near Melea Planum

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Color coding in this map of a far-southern site on Mars indicates the change in nighttime ground-surface temperature between summer and fall. This site, like most of high-latitude Mars, has water ice mixed with soil near the surface. The ice is probably in a rock-hard frozen layer beneath a few centimeters or inches of looser, dry soil. The amount of temperature change at the surface likely corresponds to how close to the surface the icy material lies.

    The dense, icy layer retains heat better than the looser soil above it, so where the icy layer is closer to the surface, the surface temperature changes more slowly than where the icy layer is buried deeper. On the map, areas of the surface that cooled more slowly between summer and autumn (interpreted as having the ice closer to the surface) are coded blue and green. Areas that cooled more quickly (interpreted as having more distance to the ice) are coded red and yellow.

    The depth to the top of the icy layer estimated from these observations suggests that in some areas, but not others, water is being exchanged by diffusion between atmospheric water vapor and subsurface water ice. Differences in what type of material lies above the ice appear to affect the depth to the ice. The area in this image with the greatest seasonal change in surface temperature corresponds to an area of sand dunes.

    This map and its interpretation are in a May 3, 2007, report in the journal Nature by Joshua Bandfield of Arizona State University, Tempe. The Thermal Emission Imaging System camera on NASA's Mars Odyssey orbiter collected the data presented in the map. The site is centered near 67 degrees south latitude, 36.5 degrees east longitude, near a plain named Melea Planum. This site is within the portion of the planet where, in 2002, the Gamma Ray Spectrometer suite of instruments on Mars Odyssey found evidence for water ice lying just below the surface. The information from the Gamma Ray Spectrometer is averaged over patches of ground hundreds of kilometers or miles wide. The information from the Thermal Emission Imaging System allows more than 100-fold higher resolution in mapping variations in the depth to ice.

    The Thermal Emission Imaging System observed the site in infrared wavelengths during night time, providing surface-temperature information. It did so once on Dec. 27, 2005, during late summer in Mars' southern hemisphere, and again on Jan. 22, 2006, the first day of autumn there. The colors on this map signify relative differences in how much the surface temperature changed between those two observations. Blue indicates the locations with the least change. Red indicates areas with most change. Modeling provides estimates that the range of temperature changes shown in this map corresponds to a range in depth-to-ice of less than 1 centimeter (0.4 inch) to more than 19 centimeters (more than 7.5 inches). The sensitivity of this method for estimating the depth is not good for depths greater than about 20 centimeters (8 inches).

    The temperature-change data are overlaid on a mosaic of black-and-white, daytime images taken in infrared wavelengths by the same camera, providing information about shapes in the landscape. The 20-kilometer scale bar is 12.4 miles long.

    NASA's Jet Propulsion Laboratory manages the Mars Odyssey mission for NASA's Science Mission Directorate, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University in collaboration with Raytheon Santa Barbara Remote Sensing. Lockheed Martin Space Systems, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  5. 40 MHz high-frequency ultrafast ultrasound imaging.

    PubMed

    Huang, Chih-Chung; Chen, Pei-Yu; Peng, Po-Hsun; Lee, Po-Yang

    2017-06-01

    Ultrafast high-frame-rate ultrasound imaging based on coherent plane-wave compounding has been developed for many biomedical applications. Most coherent plane-wave compounding systems operate at 3-15 MHz, and the image resolution in this frequency range is not sufficient for visualizing tissue microstructure. Therefore, the purpose of this study was to implement high-frequency ultrafast ultrasound imaging operating at 40 MHz. In the simulation study, plane-wave compounding imaging and conventional multifocus B-mode imaging were performed using the Field II toolbox of MATLAB. In experiments, plane-wave compounding images were obtained from a 256-channel ultrasound research platform with a 40 MHz array transducer. All images were produced from point-spread functions and cyst phantoms. The in vivo experiment was performed on zebrafish. Since high-frequency ultrasound exhibits lower penetration, chirp excitation was applied to increase the imaging depth in simulation. The simulation results showed that a lateral resolution of up to 66.93 μm and a contrast of up to 56.41 dB were achieved when using 75-angle plane waves in compounding imaging. The experimental results showed that a lateral resolution of up to 74.83 μm and a contrast of up to 44.62 dB were achieved when using 75-angle plane waves in compounding imaging. The dead zone and compounding noise are about 1.2 mm and 2.0 mm in depth, respectively, for experimental compounding imaging. The structure of the zebrafish heart was observed clearly using plane-wave compounding imaging. The use of fewer than 23 angles for compounding allowed a frame rate higher than 1000 frames per second; however, compounded images exhibit a similar lateral resolution of about 72 μm once more than 10 plane-wave angles are used. This study demonstrates the highest operating frequency reported for ultrafast high-frame-rate ultrasound imaging. © 2017 American Association of Physicists in Medicine.
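    The defining compounding step, a coherent (phase-preserving) sum of the per-angle beamformed images before envelope detection, can be sketched as follows; the incoherent average is included only for contrast, and the per-angle delay-and-sum beamforming itself is assumed done upstream:

```python
import numpy as np

def compound(beamformed):
    """Coherently sum per-angle beamformed images (complex analytic
    signals), then take the envelope. beamformed: (n_angles, nz, nx)."""
    summed = beamformed.sum(axis=0)   # coherent, phase-preserving sum
    return np.abs(summed)             # envelope for B-mode display

def incoherent(beamformed):
    """Average of per-angle envelopes: reduces speckle but gives no
    lateral-resolution gain, unlike the coherent sum above."""
    return np.abs(beamformed).mean(axis=0)
```

    Keeping the phase until after the sum is what lets the angled transmissions interfere like one large synthetic transmit focus at every pixel.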

  6. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. Because such systems rely on visual image characteristics, they must overcome image degradations such as random image-capture noise, motion, and quantization effects. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Given these conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements in the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics, which resulted in the smoothing of discontinuities; edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields provides a means of incorporating temporal information into the restoration process. A summary of the conditions that indicate which type of filtering should be applied to a field is provided.
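The filtering idea can be illustrated with a scalar random-walk Kalman filter run along each row of a noisy displacement field. This is only a toy stand-in for the paper's reduced order model Kalman filter (ROMKF); the noise parameters `q` and `r` and the row-wise scan are assumptions for illustration.

```python
import numpy as np

def row_kalman_restore(field, q=1e-3, r=1e-2):
    """Smooth a noisy displacement field row by row with a scalar
    random-walk Kalman filter (q: process noise, r: measurement noise).
    A toy stand-in for the reduced order model Kalman filter (ROMKF)."""
    out = np.empty_like(field, dtype=float)
    for i, row in enumerate(field):
        x, p = row[0], 1.0              # state estimate and its variance
        for j, z in enumerate(row):
            p = p + q                   # predict (random-walk model)
            k = p / (p + r)             # Kalman gain
            x = x + k * (z - x)         # update with measurement z
            p = (1.0 - k) * p
            out[i, j] = x
    return out

# Toy field: a displacement step edge corrupted by capture noise
rng = np.random.default_rng(0)
truth = np.ones((4, 64))
truth[:, 32:] = 3.0
noisy = truth + 0.1 * rng.standard_normal(truth.shape)
restored = row_kalman_restore(noisy)
```

With a small process-noise-to-measurement-noise ratio, the filter suppresses the capture noise on flat regions while still tracking the displacement step after a short transient; the adaptive parameter selection in the paper goes further and keeps such edges sharp.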

  7. Action recognition in depth video from RGB perspective: A knowledge transfer manner

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen

    2018-03-01

Recognizing human actions across video modalities has become a highly promising direction in video analysis. In this paper, we propose a method for human action recognition that transfers knowledge from RGB video to depth video using domain adaptation: features learned from RGB videos are used to recognize actions in depth videos. We solve this problem in three steps. First, unlike a still image, a video carries both spatial and temporal information; to encode this information compactly, the dynamic image method is used to represent each RGB or depth video as a single image, after which most image feature extraction methods become applicable to video. Second, since each video is represented as an image, a standard CNN model can be used for training and testing, and also serves as a feature extractor thanks to its powerful representational ability. Third, since RGB and depth videos belong to two different domains, domain adaptation is applied to bring the two feature domains closer together, so that features learned from the RGB model can be used directly for depth video classification. We evaluate the proposed method on a challenging RGB-D action dataset (NTU RGB-D), where domain adaptation from RGB to depth action recognition yields more than a 2% accuracy improvement.
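The first step, collapsing a clip into a single dynamic image, can be sketched with the simplified approximate-rank-pooling weights alpha_t = 2t - T - 1 often used for dynamic images (a hedged sketch; the paper may use a different rank-pooling variant):

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a video of shape (T, H, W) into one 'dynamic image'
    via simplified approximate rank pooling: frame t (t = 1..T) gets
    weight alpha_t = 2t - T - 1, so late frames count positively and
    early frames negatively. The weights sum to zero, so a perfectly
    static clip maps to an all-zero image."""
    T = frames.shape[0]
    alpha = 2 * np.arange(1, T + 1) - T - 1   # e.g. T=4 -> [-3, -1, 1, 3]
    return np.tensordot(alpha, frames.astype(float), axes=(0, 0))

video = np.random.rand(8, 32, 32)             # toy 8-frame clip
di = dynamic_image(video)
```

Because the result is a single 2D array, any image-domain CNN or feature extractor can be applied to it directly, which is what makes the subsequent RGB-to-depth transfer step possible.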

  8. Extended depth of field integral imaging using multi-focus fusion

    NASA Astrophysics Data System (ADS)

    Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua

    2018-03-01

In this paper, we propose a new method for extending the depth of field in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to capture multi-focus elemental images while sweeping the focal plane across the scene. Simply applying an image fusion method to elemental images that hold rich parallax information is not effective, because accurate registration is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. All-in-focus elemental images are then generated by fusing the generalized elemental images with a block-based fusion method. Experimental results demonstrate that the depth of field of a synthetic aperture integral imaging system is extended by combining the generalization method with image fusion of the multi-focus elemental images.
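A minimal block-based fusion step of the kind the abstract mentions can be sketched as follows, using Laplacian energy as the per-block focus measure (the focus measure and block size are our assumptions, not the paper's):

```python
import numpy as np

def laplacian_energy(img):
    # squared discrete Laplacian as a simple per-pixel sharpness measure
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def block_fuse(images, block=8):
    """For each block, keep the pixels from whichever input image has
    the highest Laplacian energy there, i.e. is most in focus."""
    imgs = np.stack([np.asarray(i, dtype=float) for i in images])
    energy = np.stack([laplacian_energy(i) for i in imgs])
    out = np.empty_like(imgs[0])
    H, W = out.shape
    for y in range(0, H, block):
        for x in range(0, W, block):
            sl = np.s_[y:y + block, x:x + block]
            best = energy[(slice(None),) + sl].sum(axis=(1, 2)).argmax()
            out[sl] = imgs[best][sl]
    return out

# Toy demo: image A is sharp on the left half, image B on the right half
yy, xx = np.mgrid[0:32, 0:32]
pattern = ((yy + xx) % 2).astype(float)
A = np.where(xx < 16, pattern, 0.5)
B = np.where(xx >= 16, pattern, 0.5)
fused = block_fuse([A, B])
```

In the toy demo, the fused output takes the high-frequency (in-focus) half from each input, which is the behaviour the generalization step must enable on real, parallax-corrected elemental images.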

  9. Relationship between the Foveal Avascular Zone and Foveal Pit Morphology

    PubMed Central

    Dubis, Adam M.; Hansen, Benjamin R.; Cooper, Robert F.; Beringer, Joseph; Dubra, Alfredo; Carroll, Joseph

    2012-01-01

    Purpose. To assess the relationship between foveal pit morphology and size of the foveal avascular zone (FAZ). Methods. Forty-two subjects were recruited. Volumetric images of the macula were obtained using spectral domain optical coherence tomography. Images of the FAZ were obtained using either a modified fundus camera or an adaptive optics scanning light ophthalmoscope. Foveal pit metrics (depth, diameter, slope, volume, and area) were automatically extracted from retinal thickness data, whereas the FAZ was manually segmented by two observers to extract estimates of FAZ diameter and area. Results. Consistent with previous reports, the authors observed significant variation in foveal pit morphology. The average foveal pit volume was 0.081 mm3 (range, 0.022 to 0.190 mm3). The size of the FAZ was also highly variable between persons, with FAZ area ranging from 0.05 to 1.05 mm2 and FAZ diameter ranging from 0.20 to 1.08 mm. FAZ area was significantly correlated with foveal pit area, depth, and volume; deeper and broader foveal pits were associated with larger FAZs. Conclusions. Although these results are consistent with predictions from existing models of foveal development, more work is needed to confirm the developmental link between the size of the FAZ and the degree of foveal pit excavation. In addition, more work is needed to understand the relationship between these and other anatomic features of the human foveal region, including peak cone density, rod-free zone diameter, and Henle fiber layer. PMID:22323466

  10. Single-camera three-dimensional tracking of natural particulate and zooplankton

    NASA Astrophysics Data System (ADS)

    Troutman, Valerie A.; Dabiri, John O.

    2018-07-01

    We develop and characterize an image processing algorithm to adapt single-camera defocusing digital particle image velocimetry (DDPIV) for three-dimensional (3D) particle tracking velocimetry (PTV) of natural particulates, such as those present in the ocean. The conventional DDPIV technique is extended to facilitate tracking of non-uniform, non-spherical particles within a volume depth an order of magnitude larger than current single-camera applications (i.e. 10 cm  ×  10 cm  ×  24 cm depth) by a dynamic template matching method. This 2D cross-correlation method does not rely on precise determination of the centroid of the tracked objects. To accommodate the broad range of particle number densities found in natural marine environments, the performance of the measurement technique at higher particle densities has been improved by utilizing the time-history of tracked objects to inform 3D reconstruction. The developed processing algorithms were analyzed using synthetically generated images of flow induced by Hill’s spherical vortex, and the capabilities of the measurement technique were demonstrated empirically through volumetric reconstructions of the 3D trajectories of particles and highly non-spherical, 5 mm zooplankton.
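The 2D cross-correlation at the core of the template-matching step can be sketched as a brute-force normalized cross-correlation search (illustrative only; the authors' dynamic template matching additionally updates the template over time and does not require precise centroids):

```python
import numpy as np

def ncc_match(image, template):
    """Find the template location in `image` that maximizes the
    zero-mean normalized cross-correlation score (brute force)."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

rng = np.random.default_rng(1)
img = rng.random((40, 40))
tmpl = img[10:18, 22:30].copy()      # plant the template in the image
pos, score = ncc_match(img, tmpl)
```

Because the score is normalized, the same matcher tolerates the brightness variation of non-uniform, non-spherical particles better than raw correlation would.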

  11. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

Originally the only way to look inside the human body without opening it up was by means of two-dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three-dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, the images are still viewed on 2D screens. In this way, however, valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics and (bio)medical research.

  12. FIMTrack: An open source tracking and locomotion analysis software for small animals.

    PubMed

    Risse, Benjamin; Berh, Dimitri; Otto, Nils; Klämbt, Christian; Jiang, Xiaoyi

    2017-05-01

Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system capable of extracting high-contrast images. This system, in combination with the associated tracking software FIMTrack, is already used by many groups all over the world. However, so far there has not been an in-depth discussion of its technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the algorithms used. Among others, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimulus-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.

  13. MT2D Inversion to Image the Gorda Plate Subduction Zone

    NASA Astrophysics Data System (ADS)

    Lubis, Y. K.; Niasari, S. W.; Hartantyo, E.

    2018-04-01

The magnetotelluric (MT) method is well suited to studying complicated geological structures because the measured electric and magnetic fields are strongly influenced by the subsurface electrical properties. This study targets the Gorda plate subduction zone beneath the North American continental plate. Magnetotelluric 2D inversion was used to image the subsurface resistivity variation, although phase tensor analysis shows that the majority of the data are dimensionally 3D. Data from 19 MT sites were acquired from the EarthScope/USArray Project. We present the MT 2D inversion image to exhibit the conductivity distribution from the middle crust to the uppermost asthenosphere, down to a depth of 120 km. The overall data misfit of the inversion is 3.89. The subducting Gorda plate appears as a highly resistive zone beneath California. Local conductive features are found in the middle crust beneath the Klamath Mountains, Bonneville Lake, and eastern Utah, while the mid-crust elsewhere is moderately resistive. The extensional Basin and Range province is underlain by a highly resistive zone, and the middle crust down to the uppermost asthenosphere becomes moderately resistive. We conclude that the electrical parameters and the dimensionality of the data at shallow depths (about 22.319 km) beneath the North American plate are in accordance with surface geological features.

  14. Photoacoustic imaging with planoconcave optical microresonator sensors: feasibility studies based on phantom imaging

    NASA Astrophysics Data System (ADS)

    Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.

    2017-03-01

The planar Fabry-Pérot (FP) sensor provides high quality photoacoustic (PA) images, but beam walk-off limits sensitivity and thus penetration depth to ≈1 cm. Planoconcave microresonator sensors eliminate beam walk-off, enabling sensitivity to be increased by an order of magnitude whilst retaining the highly favourable frequency response and directional characteristics of the FP sensor. The first tomographic PA images obtained in a tissue-realistic phantom using the new sensors are described. These show that the microresonator sensors provide near-identical image quality to the planar FP sensor but with significantly greater penetration depth (e.g. 2-3 cm) due to their higher sensitivity. This offers the prospect of whole-body small-animal imaging and clinical imaging to depths previously unattainable using the planar FP sensor.

  15. An efficient hole-filling method based on depth map in 3D view generation

    NASA Astrophysics Data System (ADS)

    Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

A new virtual view is synthesized through depth image based rendering (DIBR) using a single color image and its associated depth map in 3D view generation. Holes are unavoidably generated in the 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address this problem. First, we improve the DIBR process by proposing a one-to-four (OTF) algorithm, and use the z-buffer algorithm to resolve overlaps. Then, building on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses the depth map information to process the image after DIBR. To improve the accuracy of the virtual image, inpainting starts from the background side. In the priority calculation, in addition to the confidence term and the data term, we add a depth term; and in the search for the most similar patch in the source region, we define a depth similarity to improve the accuracy of the search. Experimental results show that the proposed method effectively improves the quality of the 3D virtual view, both subjectively and objectively.
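The modified priority computation can be sketched as P(p) = C(p) * D(p) * Z(p), where Z(p) is the added depth term. The normalization of Z chosen here (favoring far pixels so that inpainting starts from the background side) is our assumption about its form, not the paper's exact definition:

```python
import numpy as np

def patch_confidence(conf, filled, p, half=4):
    """C(p): mean confidence of already-filled pixels in the patch."""
    y, x = p
    c = conf[y - half:y + half + 1, x - half:x + half + 1]
    m = filled[y - half:y + half + 1, x - half:x + half + 1]
    return c[m].sum() / c.size

def priority(conf, filled, data_term, depth, p, half=4):
    """P(p) = C(p) * D(p) * Z(p), extending the Criminisi priority
    with a depth term Z(p) normalized so that far (background)
    pixels get higher priority."""
    z = depth[p] / (depth.max() + 1e-9)
    return patch_confidence(conf, filled, p, half) * data_term[p] * z

# Two fill-front pixels with equal confidence and data terms: the far
# (background) one wins, so inpainting starts from the background side
conf_map = np.full((32, 32), 0.8)
filled = np.ones((32, 32), dtype=bool)
data = np.full((32, 32), 0.5)
depth = np.tile(np.linspace(1.0, 10.0, 32), (32, 1))   # far on the right
p_near, p_far = (16, 4), (16, 27)
```

Starting from background patches is what prevents foreground texture from bleeding into disocclusion holes, which by construction belong to the background.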

  16. Deep Tissue Photoacoustic Imaging Using a Miniaturized 2-D Capacitive Micromachined Ultrasonic Transducer Array

    PubMed Central

    Kothapalli, Sri-Rajasekhar; Ma, Te-Jen; Vaithilingam, Srikant; Oralkan, Ömer

    2014-01-01

    In this paper, we demonstrate 3-D photoacoustic imaging (PAI) of light absorbing objects embedded as deep as 5 cm inside strong optically scattering phantoms using a miniaturized (4 mm × 4 mm × 500 µm), 2-D capacitive micromachined ultrasonic transducer (CMUT) array of 16 × 16 elements with a center frequency of 5.5 MHz. Two-dimensional tomographic images and 3-D volumetric images of the objects placed at different depths are presented. In addition, we studied the sensitivity of CMUT-based PAI to the concentration of indocyanine green dye at 5 cm depth inside the phantom. Under optimized experimental conditions, the objects at 5 cm depth can be imaged with SNR of about 35 dB and a spatial resolution of approximately 500 µm. Results demonstrate that CMUTs with integrated front-end amplifier circuits are an attractive choice for achieving relatively high depth sensitivity for PAI. PMID:22249594

  17. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Characterisation of optically cleared paper by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Fabritius, T.; Alarousu, E.; Prykäri, T.; Hast, J.; Myllylä, Risto

    2006-02-01

Due to the highly light-scattering nature of paper, the imaging depth of optical methods such as optical coherence tomography (OCT) is limited. In this work, we study the effect of refractive index matching on improving the imaging depth of OCT in paper. To this end, four different refractive-index-matching liquids (ethanol, 1-pentanol, glycerol and benzyl alcohol), with refractive indices between 1.359 and 1.538, were used in experiments. Low-coherence light transmission was studied in commercial copy paper sheets, and the results indicate that benzyl alcohol offers the best improvement in imaging depth, while also being sufficiently stable for the intended purpose. Constructed cross-sectional images demonstrate visually that the imaging depth of OCT is considerably improved by optical clearing. Both surfaces of paper sheets can be detected, along with information about the sheet's inner structure.

  18. The Estimation of the Water Table and the Specific Yield with time-lapse 2D Electrical Resistivity Imaging in the Minzu Basin of Central Taiwan

    NASA Astrophysics Data System (ADS)

    Yao, H. J.; Chang, P. Y.

    2017-12-01

The Minzu Basin is located in the central part of Taiwan, bounded by the Changhua fault in the west and the Chelungpu thrust fault in the east. The Chuoshui River flows through the basin and has deposited thick unconsolidated gravel layers over the Pleistocene rocks and gravels, giving the area great potential for groundwater development. However, there are not enough observation wells in the study area for a detailed investigation of groundwater characteristics. Therefore, we used the electrical resistivity imaging (ERI) method to estimate the depth of the groundwater table and the specific yield of the unconfined aquifer in the dry and wet seasons. We deployed 13 survey lines with the Wenner-Schlumberger array in the study area in March and June of 2017. Based on the data from the ERI measurements and the nearby Xinming observation well, we converted the resistivity into relative saturation with respect to the saturated background using Archie's law. The depth distribution curve of the relative saturation exhibits a shape similar to the soil-water characteristic curve, so we used the van Genuchten model to characterize the depth of the water table, and calculated the specific yield as the difference between the saturated and residual water contents. According to our preliminary results, the depth of the groundwater table in March ranges from 8 m to 10.7 m and the specific yield is about 0.095 to 0.146; in June the depth ranges from about 7.6 m to 9.8 m and the estimated specific yield is about 0.1 to 0.157. The average groundwater level in the wet season (June) is about 0.6 m higher than in March. We are now collecting more time-lapse data, as well as making direct comparisons with data from recently completed observation wells, in order to verify our estimations from the resistivity surveys.
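The resistivity-to-saturation conversion can be sketched with Archie's law, S = (rho / rho_sat)^(-1/n); the saturation exponent n = 2 is a typical textbook value, not one reported by the survey, and the water contents below are illustrative numbers only:

```python
import numpy as np

def relative_saturation(rho, rho_sat, n=2.0):
    """Archie's law with a fully saturated reference: resistivity rises
    as saturation falls, so S = (rho / rho_sat) ** (-1 / n)."""
    return (np.asarray(rho, dtype=float) / rho_sat) ** (-1.0 / n)

def specific_yield(theta_s, theta_r):
    """Specific yield as saturated minus residual water content."""
    return theta_s - theta_r

# Resistivity increasing toward the surface (drier material) over a
# saturated background of 100 ohm-m
s = relative_saturation([100.0, 150.0, 400.0], rho_sat=100.0)
sy = specific_yield(0.35, 0.22)   # illustrative water contents
```

A depth profile of S computed this way is what the authors fit with the van Genuchten model to pick out the water-table depth.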

  19. All-near-infrared multiphoton microscopy interrogates intact tissues at deeper imaging depths than conventional single- and two-photon near-infrared excitation microscopes

    PubMed Central

    Sarder, Pinaki; Yazdanfar, Siavash; Akers, Walter J.; Tang, Rui; Sudlow, Gail P.; Egbulefu, Christopher

    2013-01-01

    Abstract. The era of molecular medicine has ushered in the development of microscopic methods that can report molecular processes in thick tissues with high spatial resolution. A commonality in deep-tissue microscopy is the use of near-infrared (NIR) lasers with single- or multiphoton excitations. However, the relationship between different NIR excitation microscopic techniques and the imaging depths in tissue has not been established. We compared such depth limits for three NIR excitation techniques: NIR single-photon confocal microscopy (NIR SPCM), NIR multiphoton excitation with visible detection (NIR/VIS MPM), and all-NIR multiphoton excitation with NIR detection (NIR/NIR MPM). Homologous cyanine dyes provided the fluorescence. Intact kidneys were harvested after administration of kidney-clearing cyanine dyes in mice. NIR SPCM and NIR/VIS MPM achieved similar maximum imaging depth of ∼100  μm. The NIR/NIR MPM enabled greater than fivefold imaging depth (>500  μm) using the harvested kidneys. Although the NIR/NIR MPM used 1550-nm excitation where water absorption is relatively high, cell viability and histology studies demonstrate that the laser did not induce photothermal damage at the low laser powers used for the kidney imaging. This study provides guidance on the imaging depth capabilities of NIR excitation-based microscopic techniques and reveals the potential to multiplex information using these platforms. PMID:24150231

  20. A method of extending the depth of focus of the high-resolution X-ray imaging system employing optical lens and scintillator: a phantom study.

    PubMed

    Li, Guang; Luo, Shouhua; Yan, Yuling; Gu, Ning

    2015-01-01

The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. It can therefore effectively image tissues, cells and many other small samples, especially calcifications in the vasculature or in the glomerulus. In general, the scintillator should be only a few micrometers thick, or even thinner, because its thickness strongly affects the resolution. However, it is difficult to make the scintillator so thin, and a thin scintillator greatly reduces the photon collection efficiency. In this paper, we propose an approach that extends the depth of focus (DOF) to solve these problems. We first develop equation sets by deriving the relationship between the high-resolution image generated by the scintillator and the blurred image degraded by defocus, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. By replacing the 1 μm thick matched scintillator with a 20 μm thick mismatched one, we simulated a high-resolution X-ray imaging system and obtained a degraded, blurred image. The proposed algorithm recovered this image well, showing good performance on image blur caused by a mismatched scintillator thickness. The method is thus shown to efficiently recover images degraded by defocus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded image, so there is room for improvement, and a corresponding denoising algorithm warrants further study.
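A toy version of the restoration loop (a Landweber data-consistency step, projection onto the convex set of images with values in [0, 1], and a small smoothing step standing in for the total-variation term) might look like this; it is a sketch of the general POCS + TV idea, not the authors' implementation:

```python
import numpy as np

def blur(x, k=5):
    """Separable k x k box blur standing in for the defocus PSF."""
    out = x.copy()
    for ax in (0, 1):
        acc = np.zeros_like(out)
        for s in range(-(k // 2), k // 2 + 1):
            acc += np.roll(out, s, axis=ax)
        out = acc / k
    return out

def pocs_deblur(y, iters=50, lam=1.0, smooth=0.02):
    """Alternate a Landweber data-consistency step, a mild smoothing
    step (stand-in for the total-variation regularizer), and a
    projection onto the convex set of images with values in [0, 1]."""
    x = y.copy()
    for _ in range(iters):
        x = x + lam * blur(y - blur(x))        # data consistency
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
        x = x + smooth * lap                   # regularization step
        x = np.clip(x, 0.0, 1.0)               # convex-set projection
    return x

truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0                      # sharp test object
blurred = blur(truth)                          # simulated defocus
restored = pocs_deblur(blurred)
```

Even this crude loop recovers much of the edge sharpness lost to the simulated defocus, which is the effect the paper exploits to tolerate a thicker, mismatched scintillator.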

  1. A method of extending the depth of focus of the high-resolution X-ray imaging system employing optical lens and scintillator: a phantom study

    PubMed Central

    2015-01-01

Background The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. It can therefore effectively image tissues, cells and many other small samples, especially calcifications in the vasculature or in the glomerulus. In general, the scintillator should be only a few micrometers thick, or even thinner, because its thickness strongly affects the resolution. However, it is difficult to make the scintillator so thin, and a thin scintillator greatly reduces the photon collection efficiency. Methods In this paper, we propose an approach that extends the depth of focus (DOF) to solve these problems. We first develop equation sets by deriving the relationship between the high-resolution image generated by the scintillator and the blurred image degraded by defocus, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. Results By replacing the 1 μm thick matched scintillator with a 20 μm thick mismatched one, we simulated a high-resolution X-ray imaging system and obtained a degraded, blurred image. The proposed algorithm recovered this image well, showing good performance on image blur caused by a mismatched scintillator thickness. Conclusions The method is thus shown to efficiently recover images degraded by defocus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded image, so there is room for improvement, and a corresponding denoising algorithm warrants further study. PMID:25602532

  2. Oxygen Isotope Variability within Nautilus Shell Growth Bands

    PubMed Central

    2016-01-01

    Nautilus is often used as an analogue for the ecology and behavior of extinct externally shelled cephalopods. Nautilus shell grows quickly, has internal growth banding, and is widely believed to precipitate aragonite in oxygen isotope equilibrium with seawater. Pieces of shell from a wild-caught Nautilus macromphalus from New Caledonia and from a Nautilus belauensis reared in an aquarium were cast in epoxy, polished, and then imaged. Growth bands were visible in the outer prismatic layer of both shells. The thicknesses of the bands are consistent with previously reported daily growth rates measured in aquarium reared individuals. In situ analysis of oxygen isotope ratios using secondary ion mass spectrometry (SIMS) with 10 μm beam-spot size reveals inter- and intra-band δ18O variation. In the wild-caught sample, a traverse crosscutting 45 growth bands yielded δ18O values ranging 2.5‰, from +0.9 to -1.6 ‰ (VPDB), a range that is larger than that observed in many serial sampling of entire shells by conventional methods. The maximum range within a single band (~32 μm) was 1.5‰, and 27 out of 41 bands had a range larger than instrumental precision (±2 SD = 0.6‰). The results from the wild individual suggest depth migration is recorded by the shell, but are not consistent with a simple sinusoidal, diurnal depth change pattern. To create the observed range of δ18O, however, this Nautilus must have traversed a temperature gradient of at least ~12°C, corresponding to approximately 400 m depth change. Isotopic variation was also measured in the aquarium-reared sample, but the pattern within and between bands likely reflects evaporative enrichment arising from a weekly cycle of refill and replacement of the aquarium water. Overall, this work suggests that depth migration behavior in ancient nektonic mollusks could be elucidated by SIMS analysis across individual growth bands. PMID:27100183
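The temperature-range argument can be reproduced with a linear aragonite paleotemperature relation of the Grossman & Ku (1986) form. Both the calibration constants and the assumed seawater δ18O of 0‰ are illustrative assumptions, which is why the computed ~11°C differs slightly from the paper's ≥12°C estimate:

```python
def aragonite_temperature(d18O_shell, d18O_water=0.0):
    """Temperature (deg C) from shell aragonite d18O (permil, VPDB)
    using a linear Grossman & Ku (1986)-style calibration,
    T = 20.6 - 4.34 * (d18O_shell - d18O_water). The calibration
    constants and d18O_water = 0 permil are illustrative assumptions."""
    return 20.6 - 4.34 * (d18O_shell - d18O_water)

# The abstract's observed shell range: +0.9 to -1.6 permil (VPDB)
t_warm = aragonite_temperature(-1.6)   # isotopically lighter -> warmer
t_cool = aragonite_temperature(+0.9)
dT = t_warm - t_cool                   # ~10.9 deg C across 2.5 permil
```

Mapping that temperature contrast onto a local thermocline profile is what yields the ~400 m depth-change estimate.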

  3. Oxygen isotope variability within Nautilus shell growth bands

    DOE PAGES

    Linzmeier, Benjamin J.; Kozdon, Reinhard; Peters, Shanan E.; ...

    2016-04-21

Nautilus is often used as an analogue for the ecology and behavior of extinct externally shelled cephalopods. Nautilus shell grows quickly, has internal growth banding, and is widely believed to precipitate aragonite in oxygen isotope equilibrium with seawater. Pieces of shell from a wild-caught Nautilus macromphalus from New Caledonia and from a Nautilus belauensis reared in an aquarium were cast in epoxy, polished, and then imaged. Growth bands were visible in the outer prismatic layer of both shells. The thicknesses of the bands are consistent with previously reported daily growth rates measured in aquarium reared individuals. In situ analysis of oxygen isotope ratios using secondary ion mass spectrometry (SIMS) with 10 μm beam-spot size reveals inter- and intra-band δ18O variation. In the wild-caught sample, a traverse crosscutting 45 growth bands yielded δ18O values ranging 2.5‰, from +0.9 to -1.6 ‰ (VPDB), a range that is larger than that observed in many serial sampling of entire shells by conventional methods. The maximum range within a single band (~32 μm) was 1.5‰, and 27 out of 41 bands had a range larger than instrumental precision (±2 SD = 0.6‰). The results from the wild individual suggest depth migration is recorded by the shell, but are not consistent with a simple sinusoidal, diurnal depth change pattern. To create the observed range of δ18O, however, this Nautilus must have traversed a temperature gradient of at least ~12°C, corresponding to approximately 400 m depth change. Isotopic variation was also measured in the aquarium-reared sample, but the pattern within and between bands likely reflects evaporative enrichment arising from a weekly cycle of refill and replacement of the aquarium water. Overall, this work suggests that depth migration behavior in ancient nektonic mollusks could be elucidated by SIMS analysis across individual growth bands.

  4. A deep learning approach for pose estimation from volumetric OCT data.

    PubMed

    Gessert, Nils; Schlüter, Matthias; Schlaefer, Alexander

    2018-05-01

    Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images' 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label's resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
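The volumetric feature extraction the paper argues for can be illustrated with a single 3D convolution in plain numpy (a toy sketch with a fixed, hand-set depth-gradient kernel; the paper's Inception3D network stacks many such layers with learned kernels):

```python
import numpy as np

def conv3d(volume, kernel):
    """Single-channel 'valid' 3D convolution, the core operation a 3D
    CNN applies to OCT volumes (brute force, for illustration)."""
    kd, kh, kw = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - kd + 1, H - kh + 1, W - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = (volume[z:z + kd, y:y + kh, x:x + kw]
                                * kernel).sum()
    return out

vol = np.random.rand(8, 8, 8)                   # toy OCT volume
edge_k = np.zeros((3, 3, 3))
edge_k[0], edge_k[2] = -1.0, 1.0                # depth-gradient kernel
feat = conv3d(vol, edge_k)
```

A kernel like `edge_k` responds to intensity changes along the depth axis, which is exactly the kind of structure a 2D-plus-depth representation discards and a volumetric network can exploit.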

  5. Prompt Gamma Imaging for In Vivo Range Verification of Pencil Beam Scanning Proton Therapy.

    PubMed

    Xie, Yunhe; Bentefour, El Hassane; Janssens, Guillaume; Smeets, Julien; Vander Stappen, François; Hotoiu, Lucian; Yin, Lingshu; Dolney, Derek; Avery, Stephen; O'Grady, Fionnbarr; Prieels, Damien; McDonough, James; Solberg, Timothy D; Lustig, Robert A; Lin, Alexander; Teo, Boon-Keng K

    2017-09-01

    To report the first clinical results and value assessment of prompt gamma imaging for in vivo proton range verification in pencil beam scanning mode. A stand-alone, trolley-mounted, prototype prompt gamma camera utilizing a knife-edge slit collimator design was used to record the prompt gamma signal emitted along the proton tracks during delivery of proton therapy for a brain cancer patient. The recorded prompt gamma depth detection profiles of individual pencil beam spots were compared with the expected profiles simulated from the treatment plan. In 6 treatment fractions recorded over 3 weeks, the mean (± standard deviation) range shifts aggregated over all spots in 9 energy layers were -0.8 ± 1.3 mm for the lateral field, 1.7 ± 0.7 mm for the right-superior-oblique field, and -0.4 ± 0.9 mm for the vertex field. This study demonstrates the feasibility and illustrates the distinctive benefits of prompt gamma imaging in pencil beam scanning treatment mode. Accuracy in range verification was found in this first clinical case to be better than the range uncertainty margin applied in the treatment plan. These first results lay the foundation for additional work toward tighter integration of the system for in vivo proton range verification and quantification of range uncertainties. Copyright © 2017 Elsevier Inc. All rights reserved.
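The comparison of measured and expected prompt-gamma depth profiles can be sketched as a cross-correlation shift estimate (a simplified stand-in; the clinical system's shift-retrieval algorithm is more elaborate than a single argmax):

```python
import numpy as np

def range_shift(measured, expected, dz=0.5):
    """Estimate the shift (mm) between measured and expected
    prompt-gamma depth profiles, sampled every dz mm, as the lag that
    maximizes their cross-correlation. Positive = measured is deeper."""
    m = measured - measured.mean()
    e = expected - expected.mean()
    corr = np.correlate(m, e, mode="full")
    lag = int(corr.argmax()) - (len(e) - 1)
    return lag * dz

z = np.arange(0.0, 100.0, 0.5)                       # depth grid in mm
expected = np.exp(-0.5 * ((z - 60.0) / 5.0) ** 2)    # planned profile (toy)
measured = np.exp(-0.5 * ((z - 61.5) / 5.0) ** 2)    # delivered 1.5 mm deeper
shift = range_shift(measured, expected)
```

Aggregating such per-spot shifts over an energy layer is what produces the millimetre-level mean range shifts the study reports.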

  6. Fluorescence hyperspectral imaging (fHSI) using a spectrally resolved detector array

    PubMed Central

    Luthman, Anna Siri; Dumitru, Sebastian; Quiros‐Gonzalez, Isabel; Joseph, James

    2017-01-01

    Abstract The ability to resolve multiple fluorescent emissions from different biological targets in video rate applications, such as endoscopy and intraoperative imaging, has traditionally been limited by the use of filter‐based imaging systems. Hyperspectral imaging (HSI) facilitates the detection of both spatial and spectral information in a single data acquisition, however, instrumentation for HSI is typically complex, bulky and expensive. We sought to overcome these limitations using a novel robust and low cost HSI camera based on a spectrally resolved detector array (SRDA). We integrated this HSI camera into a wide‐field reflectance‐based imaging system operating in the near‐infrared range to assess the suitability for in vivo imaging of exogenous fluorescent contrast agents. Using this fluorescence HSI (fHSI) system, we were able to accurately resolve the presence and concentration of at least 7 fluorescent dyes in solution. We also demonstrate high spectral unmixing precision, signal linearity with dye concentration and at depth in tissue mimicking phantoms, and delineate 4 fluorescent dyes in vivo. Our approach, including statistical background removal, could be directly generalised to broader spectral ranges, for example, to resolve tissue reflectance or autofluorescence and in future be tailored to video rate applications requiring snapshot HSI data acquisition. PMID:28485130
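The spectral unmixing step can be sketched as a linear least-squares fit of the measured spectrum against the dye endmember spectra (a minimal sketch; the authors' pipeline also includes statistical background removal):

```python
import numpy as np

def unmix(spectrum, endmembers):
    """Solve spectrum ~ endmembers @ c for dye concentrations c by
    linear least squares (one column per dye endmember spectrum)."""
    c, *_ = np.linalg.lstsq(endmembers, spectrum, rcond=None)
    return c

rng = np.random.default_rng(2)
E = rng.random((64, 3))              # 64 spectral bands, 3 dye spectra
true_c = np.array([0.2, 0.5, 0.3])
mixed = E @ true_c                   # noise-free mixed measurement
est = unmix(mixed, E)
```

Because the fit is linear, the recovered coefficients scale linearly with dye concentration, consistent with the signal linearity the study reports.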

  7. Depth of maturity in the Moon's regolith

    NASA Astrophysics Data System (ADS)

    Denevi, B. W.; Duck, A.; Klem, S.; Ravi, S.; Robinson, M. S.; Speyerer, E. J.

    2017-12-01

    The observed maturity of the lunar surface is a function of its exposure to the weathering agents of the space environment as well as the rates of regolith gardening and overturn. Regolith exposed on the surface weathers until it is buried below material delivered to the surface by impact events; weathering resumes when it is re-exposed to the surface environment by later impacts. This cycle repeats until a mature layer of some thickness develops. The gardening rate of the upper regolith has recently been shown to be substantially higher than previously thought, and new insights on the rates of space weathering and potential variation of these rates with solar wind flux have been gained from remote sensing as well as laboratory studies. Examining the depth to which the lunar regolith is mature across a variety of locations on the Moon can provide new insight into both gardening and space weathering. Here we use images from the Lunar Reconnaissance Orbiter Camera (LROC) with pixel scales less than approximately 50 cm to examine the morphology and reflectance of impact craters in the 2- to 100-m diameter size range. Apollo core samples show substantial variation, but suggest that the upper 50 cm to >1 m of regolith is mature at the sampled sites. These depths indicate that because craters excavate to a maximum depth of 10% of the transient crater diameter, craters with diameters less than 5-10 m will typically expose only mature material and this phenomenon should be observable in LROC images. Thus, we present the results of classifying craters by both morphology and reflectance to determine the size-frequency distribution of craters that expose immature material versus those that do not. These results are then compared to observations of reflectance values for the ejecta of craters that have formed during the LRO mission. 
These newly formed craters span a similar range of diameters, and there is no ambiguity about post-impact weathering because they are less than a decade old.
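The excavation-depth argument in the abstract above is simple arithmetic and can be checked directly. A minimal sketch, assuming the quoted rule of thumb (excavation depth of roughly 10% of the transient crater diameter) and a mature regolith layer 0.5 to 1 m thick; the function names are illustrative, not from the paper:

```python
# Hedged sketch of the excavation-depth rule of thumb quoted in the abstract:
# craters excavate to ~10% of the transient crater diameter, so craters
# smaller than ~5-10 m stay within a 0.5-1 m mature regolith layer.

def excavation_depth_m(transient_diameter_m):
    """Maximum excavation depth, taken as ~10% of the transient crater diameter."""
    return 0.10 * transient_diameter_m

def exposes_immature_material(diameter_m, mature_layer_m=0.5):
    """True if a crater of this diameter digs below the mature layer."""
    return excavation_depth_m(diameter_m) > mature_layer_m

print(excavation_depth_m(5.0))            # 0.5 (a 5 m crater digs ~0.5 m)
print(exposes_immature_material(4.0))     # False: only mature material excavated
print(exposes_immature_material(12.0))    # True: reaches immature regolith
```

With a 1 m mature layer the threshold diameter moves to 10 m, which is why the abstract quotes a 5-10 m range.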

  8. Validation of luminescent source reconstruction using spectrally resolved bioluminescence images

    NASA Astrophysics Data System (ADS)

    Virostko, John M.; Powers, Alvin C.; Jansen, E. D.

    2008-02-01

    This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in reconstruction of light source depth and intensity. Constant intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity using these spectrally filtered images. For sources shallower than the mean free path of light there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average error in depth and intensity reconstruction was less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. The reconstruction accuracy suffered in inhomogeneous tissue with accuracy influenced by the choice of optical properties used in reconstruction.

  9. An end-to-end assessment of range uncertainty in proton therapy using animal tissues.

    PubMed

    Zheng, Yuanshui; Kang, Yixiu; Zeidan, Omar; Schreuder, Niek

    2016-11-21

    Accurate assessment of range uncertainty is critical in proton therapy. However, there is a lack of data and consensus on how to evaluate the appropriate amount of uncertainty. The purpose of this study is to quantify the range uncertainty in various treatment conditions in proton therapy, using transmission measurements through various animal tissues. Animal tissues, including a pig head, beef steak, and lamb leg, were used in this study. For each tissue, an end-to-end test closely imitating patient treatments was performed. This included CT scan simulation, treatment planning, image-guided alignment, and beam delivery. Radiochromic films were placed at various depths in the distal dose falloff region to measure depth dose. Comparisons between measured and calculated doses were used to evaluate range differences. The dose difference at the distal falloff between measurement and calculation depends on tissue type and treatment conditions. The estimated range difference was up to 5, 6 and 4 mm for the pig head, beef steak, and lamb leg irradiation, respectively. Our study shows that the treatment planning system (TPS) was able to calculate proton range to within about 1.5% plus 1.5 mm. Accurate assessment of range uncertainty in treatment planning would allow better optimization of proton beam treatment, thus fully achieving proton beams' superior dose advantage over conventional photon-based radiation therapy.
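The "percent-plus-fixed" uncertainty quoted in the abstract above (about 1.5% of range plus 1.5 mm) is the standard form of a proton range margin and can be evaluated directly. A minimal sketch; the function name and the example beam range are illustrative, not from the paper:

```python
# Hedged sketch: a percent-plus-fixed proton range margin of the form quoted
# in the study (~1.5% of the nominal range plus 1.5 mm).

def range_margin_mm(range_mm, pct=1.5, fixed_mm=1.5):
    """Margin = pct% of the nominal proton range plus a fixed term, in mm."""
    return range_mm * pct / 100.0 + fixed_mm

# For a hypothetical 100 mm (10 cm) beam range: 1.5 mm + 1.5 mm = 3.0 mm.
print(range_margin_mm(100.0))  # 3.0
```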

  10. Phase pupil functions for focal-depth enhancement derived from a Wigner distribution function.

    PubMed

    Zalvidea, D; Sicre, E E

    1998-06-10

    A method for obtaining phase-retardation functions, which give rise to an increase of the image focal depth, is proposed. To this end, the Wigner distribution function corresponding to a specific aperture that has an associated small depth of focus in image space is conveniently sheared in the phase-space domain to generate a new Wigner distribution function. From this new function a more uniform on-axis image irradiance can be accomplished. This approach is illustrated by comparison of the imaging performance of both the derived phase function and a previously reported logarithmic phase distribution.

  11. Crater Morphometry and Crater Degradation on Mercury: Mercury Laser Altimeter (MLA) Measurements and Comparison to Stereo-DTM Derived Results

    NASA Technical Reports Server (NTRS)

    Leight, C.; Fassett, C. I.; Crowley, M. C.; Dyar, M. D.

    2017-01-01

    Two types of measurements of Mercury's surface topography were obtained by the MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) spacecraft: laser ranging data from the Mercury Laser Altimeter (MLA) [1], and stereo imagery from the Mercury Dual Imaging System (MDIS) camera [e.g., 2, 3]. MLA data provide precise and accurate elevation measurements, but with sparse spatial sampling except at the highest northern latitudes. Digital terrain models (DTMs) from MDIS have superior resolution but less vertical accuracy, limited approximately to the pixel resolution of the original images (in the case of [3], 15-75 m). Last year [4], we reported topographic measurements of craters in the D=2.5 to 5 km diameter range from stereo images and suggested that craters on Mercury degrade more quickly than on the Moon (by a factor of up to approximately 10×). However, we listed several alternative explanations for this finding, including the hypothesis that the lower depth/diameter ratios we observe might be a result of the resolution and accuracy of the stereo DTMs. Thus, additional measurements were undertaken using MLA data to examine the morphometry of craters in this diameter range and assess whether the faster crater degradation rate proposed for Mercury is robust.

  12. Local birefringence of the anterior segment of the human eye in a single capture with a full range polarisation-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Li, Qingyun; Karnowski, Karol; Villiger, Martin; Sampson, David D.

    2017-04-01

    A fibre-based full-range polarisation-sensitive optical coherence tomography system is developed to enable complete capture of the structural and birefringence properties of the anterior segment of the human eye in a single acquisition. The system uses a wavelength-swept source centered at 1.3 μm, passively depth-encoded orthogonal polarisation states in the illumination path, and polarisation-diversity detection. Off-pivot galvanometer scanning is used to extend the imaging range and compensate for sensitivity drop-off. A Mueller matrix-based method is used to analyse the data. We demonstrate the performance of the system and discuss issues relating to its optimisation.

  13. Electrical conductivity imaging in the western Pacific subduction zone

    NASA Astrophysics Data System (ADS)

    Utada, Hisashi; Baba, Kiyoshi; Shimizu, Hisayoshi

    2010-05-01

    Oceanic plate subduction is an important process for the dynamics and evolution of the Earth's interior, as it is regarded as a typical downward flow of the mantle convection that transports materials from the near surface to the deep mantle. A recent seismological study showed evidence suggesting the transportation of a certain amount of water by subduction of an old oceanic plate, such as the Pacific plate, down to 150-200 km depth into the back-arc mantle. However, it is not well clarified how deep into the mantle the water can be transported. The electromagnetic induction method to image electrical conductivity distribution is a possible tool to answer this question, as it is known to be sensitive to the presence of water. Here we show recent results of observational studies from the western Pacific subduction zone that examine the electrical conductivity distribution in the upper mantle and in the mantle transition zone (MTZ), which will provide implications for how water is distributed in the mantle. We take two kinds of approach for imaging the mantle conductivity: (a) a semi-global and (b) a regional induction approach. The results may be summarized as follows: (a) Long (5-30 year) time series records from 8 submarine cables and 13 geomagnetic observatories in the north Pacific region were analyzed, and long-period magnetotelluric (MT) and geomagnetic deep sounding (GDS) responses were estimated in the period range from 1.7 to 35 days. These frequency-dependent response functions were inverted to a 3-dimensional conductivity distribution in the depth range between 350 and 850 km. Three major features are suggested at MTZ depths: (1) a high-conductivity anomaly beneath the Philippine Sea, (2) a high-conductivity anomaly beneath the Hawaiian Islands, and (3) a low-conductivity anomaly beneath and in the vicinity of northern Japan. 
(b) A three-year-long deployment of ocean bottom electro-magnetometers (OBEMs) was conducted in the Philippine Sea and west Pacific Ocean from 2005 to 2008. As a preliminary investigation, MT response functions from 20 sites in the Philippine Sea and 4 sites in the west Pacific basin in the period range between 300 and 80000 s were inverted to one-dimensional (1-D) profiles of electrical conductivity by quantitatively considering the effect of the heterogeneous conductivity distribution (ocean and lands) at the surface. The resultant 1-D models show three main features: (1) A strong contrast in conductivity in the shallower 200 km of the upper mantle is recognized between the two regions, which is qualitatively consistent with the difference in lithospheric age. (2) The conductivity at 200-300 km depth is similar in both regions, at about 0.3 S/m. (3) The conductivity around MTZ depths is higher for the Philippine Sea mantle than for the Pacific mantle, which is consistent with the implication obtained from the semi-global approach (a). As already suggested in our previous work, the high conductivity in the MTZ below the Philippine Sea can be explained by the excess conduction due to the presence of hydrogen (water) in wadsleyite or in ringwoodite. Therefore, it implies a large-scale circulation of water in the back-arc mantle, not only in the upper mantle but also down to MTZ depths. However, our interpretation indicates that the high conductivity of the Philippine Sea uppermost upper mantle cannot be explained only by the effect of hydrogen conduction in olivine; additional conduction enhancement, such as the presence of partial melt, is required.

  14. Quantitative shear wave optical coherence elastography (SW-OCE) with acoustic radiation force impulses (ARFI) induced by phase array transducer

    NASA Astrophysics Data System (ADS)

    Song, Shaozhen; Le, Nhan Minh; Wang, Ruikang K.; Huang, Zhihong

    2015-03-01

    Shear wave optical coherence elastography (SW-OCE) uses the speed of propagating shear waves to provide a quantitative measurement of localized shear modulus, making it a valuable technique for the elasticity characterization of tissues such as skin and ocular tissue. One of the main challenges in shear wave elastography is to induce a reliable source of shear waves; most current techniques use external vibrators, which have several drawbacks, such as a limited wave propagation range and difficulty in performing non-invasive scans that require precision and accuracy. We therefore propose a linear phased-array ultrasound transducer as a remote wave source, combined with the high-speed, 47,000-frame-per-second shear-wave visualization provided by phase-sensitive OCT. In this study, we observed for the first time shear waves induced by a 128-element linear-array ultrasound imaging transducer, with the ultrasound and OCT images (within the OCE detection range) triggered simultaneously. Acoustic radiation force impulses are induced by emitting 10 MHz tone-bursts of sub-millisecond duration (between 50 μs and 100 μs). Ultrasound beam steering is achieved by programming appropriate phase delays, covering a lateral range of 10 mm and the full OCT axial (depth) range in the imaging sample. Tissue-mimicking phantoms with agarose concentrations of 0.5% and 1% were used as the imaging samples in the SW-OCE measurements. The results show extensive improvement in the range of the SW-OCE elasticity map; such improvement can also be seen in shear wave velocities in softer and stiffer phantoms, as well as in determining the boundaries of multiple inclusions with different stiffness. This approach opens up the feasibility of combining medical ultrasound imaging and SW-OCE for high-resolution, localized, quantitative measurement of tissue biomechanical properties.
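The conversion from shear wave speed to shear modulus that SW-OCE relies on follows the standard elastography relation for a linear-elastic, nearly incompressible medium: μ = ρc², with Young's modulus E ≈ 3μ. A minimal sketch of this relation (the relation is standard; it is not spelled out in the abstract, and the density value is a typical soft-tissue assumption):

```python
# Hedged sketch of the standard elastography relation underlying SW-OCE:
# shear modulus mu = rho * c_s^2 for a linear-elastic medium, and
# Young's modulus E ~ 3 * mu for nearly incompressible soft tissue.

def shear_modulus_pa(speed_m_s, density_kg_m3=1000.0):
    """Shear modulus (Pa) from shear wave speed (m/s) and density (kg/m^3)."""
    return density_kg_m3 * speed_m_s ** 2

def youngs_modulus_pa(speed_m_s, density_kg_m3=1000.0):
    """Young's modulus (Pa), assuming near-incompressibility (E = 3*mu)."""
    return 3.0 * shear_modulus_pa(speed_m_s, density_kg_m3)

# A shear wave at 2.0 m/s in tissue-like material (rho ~ 1000 kg/m^3)
# implies mu = 4 kPa and E ~ 12 kPa.
print(shear_modulus_pa(2.0))   # 4000.0
print(youngs_modulus_pa(2.0))  # 12000.0
```

The quadratic dependence on speed is why small speed differences between the soft and hard phantoms translate into large stiffness contrasts.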

  15. Microbiome variation in corals with distinct depth distribution ranges across a shallow-mesophotic gradient (15-85 m)

    NASA Astrophysics Data System (ADS)

    Glasl, Bettina; Bongaerts, Pim; Elisabeth, Nathalie H.; Hoegh-Guldberg, Ove; Herndl, Gerhard J.; Frade, Pedro R.

    2017-06-01

    Mesophotic coral ecosystems (MCEs) are generally poorly studied, and our knowledge of lower MCEs (below 60 m depth) is largely limited to visual surveys. Here, we provide a first detailed assessment of the prokaryotic community associated with scleractinian corals over a depth gradient to the lower mesophotic realm (15-85 m). Specimens of three Caribbean coral species exhibiting differences in their depth distribution ranges (Agaricia grahamae, Madracis pharensis and Stephanocoenia intersepta) were collected with a manned submersible on the island of Curaçao, and their prokaryotic communities assessed using 16S rRNA gene sequencing analysis. Corals with narrower depth distribution ranges (depth-specialists) were associated with a stable prokaryotic community, whereas corals with a broader niche range (depth-generalists) revealed a higher variability in their prokaryotic community. The observed depth effects match previously described patterns in Symbiodinium depth zonation. This highlights the contribution of structured microbial communities over depth to the coral's ability to colonize a broader depth range.

  16. A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras

    NASA Astrophysics Data System (ADS)

    Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen

    2017-11-01

    Measuring the depth-of-interaction (DOI) of gamma photons enables increasing the resolution of emission imaging systems. Several design variants of DOI-sensitive detectors have been recently introduced to improve the performance of scanners for positron emission tomography (PET). However, the accurate characterization of the response of DOI detectors, necessary to accurately measure the DOI, remains an unsolved problem. Numerical simulations are, at the state of the art, imprecise, while measuring the characteristics of DOI detectors directly is hindered by the impossibility of imposing the depth-of-interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite mixture model. The method is purely data-driven, not requiring simulations, and is applicable to a wide range of detector types. The proposed method was evaluated both in a simulation study and with data acquired using a monolithic gamma camera designed for PET (the cMiCE detector), demonstrating the accurate recovery of the DOI characteristics. The combination of the proposed calibration technique with maximum a posteriori estimation of the coordinates of interaction provided a depth resolution of ≈1.14 mm for the simulated PET detector and ≈1.74 mm for the cMiCE detector. The software and experimental data are made available at http://occiput.mgh.harvard.edu/depthembedding/.
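The finite-mixture component of the approach above assigns each detector response a posterior probability of belonging to each mixture component. A minimal one-dimensional sketch of that E-step, with made-up parameters standing in for two depth-of-interaction "layers" (the actual model in the paper is fit to dimensionality-reduced detector responses, not to raw 1-D values):

```python
# Hedged sketch of a finite-mixture E-step: posterior "responsibilities" of
# two 1-D Gaussian components for an observation. Parameters are illustrative.
import math

def gauss_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def responsibilities(x, comps):
    """comps: list of (weight, mu, sigma). Returns posterior component probabilities."""
    likes = [w * gauss_pdf(x, mu, s) for (w, mu, s) in comps]
    total = sum(likes)
    return [l / total for l in likes]

# Two hypothetical DOI "layers" modelled as Gaussians centred at 2 mm and 8 mm:
comps = [(0.5, 2.0, 1.0), (0.5, 8.0, 1.0)]
r = responsibilities(3.0, comps)
print(r[0] > r[1])  # True: x = 3.0 is far more likely from the 2 mm component
```

A full fit would iterate this E-step with an M-step updating the weights, means and variances, but the responsibility computation is the core of how a mixture model resolves depth.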

  17. Subduction and collision processes in the Central Andes constrained by converted seismic phases.

    PubMed

    Yuan, X; Sobolev, S V; Kind, R; Oncken, O; Bock, G; Asch, G; Schurr, B; Graeber, F; Rudloff, A; Hanka, W; Wylegalla, K; Tibi, R; Haberland, C; Rietbrock, A; Giese, P; Wigger, P; Röwer, P; Zandt, G; Beck, S; Wallace, T; Pardo, M; Comte, D

    The Central Andes are the Earth's highest mountain belt formed by ocean-continent collision. Most of this uplift is thought to have occurred in the past 20 Myr, owing mainly to thickening of the continental crust, dominated by tectonic shortening. Here we use P-to-S (compressional-to-shear) converted teleseismic waves observed on several temporary networks in the Central Andes to image the deep structure associated with these tectonic processes. We find that the Moho (the Mohorovičić discontinuity, generally thought to separate crust from mantle) ranges from a depth of 75 km under the Altiplano plateau to 50 km beneath the 4-km-high Puna plateau. This relatively thin crust below such a high-elevation region indicates that thinning of the lithospheric mantle may have contributed to the uplift of the Puna plateau. We have also imaged the subducted crust of the Nazca oceanic plate down to 120 km depth, where it becomes invisible to converted teleseismic waves, probably owing to completion of the gabbro-eclogite transformation; this is direct evidence for the presence of kinetically delayed metamorphic reactions in subducting plates. Most of the intermediate-depth seismicity in the subducting plate stops at 120 km depth as well, suggesting a relation with this transformation. We see an intracrustal low-velocity zone, 10-20 km thick, below the entire Altiplano and Puna plateaux, which we interpret as a zone of continuing metamorphism and partial melting that decouples upper-crustal imbrication from lower-crustal thickening.

  18. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure, where a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off against each other under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible. This registration is equivalent to depth estimation. Therefore, we propose a method where super-resolution and depth refinement are performed alternately. Most of the processing in our method is implemented by image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image by three times horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.

  19. Explosive change in crater properties during high power nanosecond laser ablation of silicon

    NASA Astrophysics Data System (ADS)

    Yoo, J. H.; Jeong, S. H.; Greif, R.; Russo, R. E.

    2000-08-01

    Mass removed from single-crystal silicon samples by high-irradiance (1×10⁹ to 1×10¹¹ W/cm²) single-pulse laser ablation was studied by measuring the resulting crater morphology with a white light interferometric microscope. The craters show a strong nonlinear change in both volume and depth as the laser irradiance crosses ≈2.2×10¹⁰ W/cm². Time-resolved shadowgraph images of the ablated silicon plume were obtained over this irradiance range. The images show that the increase in crater volume and depth at the threshold of 2.2×10¹⁰ W/cm² is accompanied by large droplets leaving the silicon surface with a time delay of ~300 ns. A numerical model was used to estimate the thickness of the layer heated to approximately the critical temperature. The model includes transformation of the liquid metal into a liquid dielectric near the critical state (i.e., induced transparency). In this case, the estimated thickness of the superheated layer at a delay time of 200-300 ns shows close agreement with measured crater depths. Induced transparency is demonstrated to play an important role in the formation of a deep superheated liquid layer, with subsequent explosive boiling responsible for large-particulate ejection.

  20. Wide field and highly sensitive angiography based on optical coherence tomography with akinetic swept source.

    PubMed

    Xu, Jingjiang; Song, Shaozhen; Wei, Wei; Wang, Ruikang K

    2017-01-01

    Wide-field vascular visualization in bulk tissue with an uneven surface is challenging due to the relatively short ranging distance and significant sensitivity fall-off of most current optical coherence tomography angiography (OCTA) systems. We report a long-ranging and ultra-wide-field OCTA (UW-OCTA) system based on an akinetic swept laser. The narrow instantaneous linewidth of the swept source, with its high phase stability, combined with high-speed detection in the system, enables us to achieve long ranging (up to 46 mm) and almost negligible system sensitivity fall-off. To illustrate these advantages, we compare the basic system performance of conventional spectral domain OCTA and UW-OCTA systems and their functional imaging of microvascular networks in living tissues. In addition, we show that the UW-OCTA is capable of depth-ranging of cerebral blood flow within the entire brain in mice, and of providing an unprecedented blood perfusion map of the human finger in vivo. We believe that the UW-OCTA system holds promise to augment existing clinical practice and to explore new biomedical applications for OCT imaging.

  1. Wide field and highly sensitive angiography based on optical coherence tomography with akinetic swept source

    PubMed Central

    Xu, Jingjiang; Song, Shaozhen; Wei, Wei; Wang, Ruikang K.

    2016-01-01

    Wide-field vascular visualization in bulk tissue with an uneven surface is challenging due to the relatively short ranging distance and significant sensitivity fall-off of most current optical coherence tomography angiography (OCTA) systems. We report a long-ranging and ultra-wide-field OCTA (UW-OCTA) system based on an akinetic swept laser. The narrow instantaneous linewidth of the swept source, with its high phase stability, combined with high-speed detection in the system, enables us to achieve long ranging (up to 46 mm) and almost negligible system sensitivity fall-off. To illustrate these advantages, we compare the basic system performance of conventional spectral domain OCTA and UW-OCTA systems and their functional imaging of microvascular networks in living tissues. In addition, we show that the UW-OCTA is capable of depth-ranging of cerebral blood flow within the entire brain in mice, and of providing an unprecedented blood perfusion map of the human finger in vivo. We believe that the UW-OCTA system holds promise to augment existing clinical practice and to explore new biomedical applications for OCT imaging. PMID:28101428

  2. Experimental characterization of the perceptron laser rangefinder

    NASA Technical Reports Server (NTRS)

    Kweon, I. S.; Hoffman, Regis; Krotkov, Eric

    1991-01-01

    In this report, we characterize experimentally a scanning laser rangefinder that employs active sensing to acquire three-dimensional images. We present experimental techniques applicable to a wide variety of laser scanners, and document the results of applying them to a device manufactured by Perceptron. Nominally, the sensor acquires data over a 60 deg x 60 deg field of view in 256 x 256 pixel images at 2 Hz. It digitizes both range and reflectance pixels to 12 bits, providing a maximum range of 40 m and a depth resolution of 1 cm. We present methods and results from experiments to measure geometric parameters including the field of view, angular scanning increments, and minimum sensing distance. We characterize qualitatively problems caused by implementation flaws, including internal reflections and range drift over time, and problems caused by inherent limitations of the rangefinding technology, including sensitivity to ambient light and surface material. We characterize statistically the precision and accuracy of the range measurements. We conclude that the performance of the Perceptron scanner does not compare favorably with the nominal performance, that scanner modifications are required, and that further experimentation must be conducted.

  3. Comparing Yb-fiber and Ti:Sapphire lasers for depth resolved imaging of human skin (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Balu, Mihaela; Saytashev, Ilyas; Hou, Jue; Dantus, Marcos; Tromberg, Bruce J.

    2016-02-01

    We report on a direct comparison between Ti:Sapphire and Yb fiber lasers for depth-resolved label-free multimodal imaging of human skin. We found that the penetration depth achieved with the Yb laser was 80% greater than for the Ti:Sapphire. Third harmonic generation (THG) imaging with Yb laser excitation provides additional information about skin structure. Our results indicate the potential of fiber-based laser systems for moving into clinical use.

  4. Superficial Dosimetry Imaging of Čerenkov Emission in Electron Beam Radiotherapy of Phantoms

    PubMed Central

    Zhang, Rongxiao; Fox, Colleen J.; Glaser, Adam K.; Gladstone, David J.; Pogue, Brian W.

    2014-01-01

    Čerenkov emission is generated by ionizing radiation in tissue above 264 keV energy. This study presents the first examination of this optical emission as a surrogate for the absorbed superficial dose. Čerenkov emission was imaged from the surface of flat tissue phantoms irradiated with electrons, using a range of field sizes from 6 cm × 6 cm to 20 cm × 20 cm, incident angles from 0 to 50 degrees, and energies from 6 to 18 MeV. The Čerenkov images were compared with estimated superficial dose in phantoms from direct diode measurements, as well as calculations by Monte Carlo and the treatment planning system. Intensity images showed outstanding linear agreement (R² = 0.97) with reference data of the known dose for energies from 6 MeV to 18 MeV. When orthogonal delivery was done, the in-plane and cross-plane dose distribution comparisons indicated very little difference (±2~4% differences) between the different methods of estimation as compared to Čerenkov light imaging. For an incident angle of 50 degrees, the Čerenkov images and Monte Carlo simulation show excellent agreement with the diode data, but the treatment planning system (TPS) had a larger error (OPT=±1~2%, Diode=±2~3%, TPS=±6~8% differences), as would be expected. The sampling depth of superficial dosimetry based on Čerenkov radiation has been simulated in a layered skin model, showing the potential of sampling-depth tuning by spectral filtering. Taken together, these measurements and simulations indicate that Čerenkov emission imaging might provide a valuable means of superficial dosimetry imaging for incident electron radiotherapy beams. PMID:23880473
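The linearity claim in the abstract above rests on the coefficient of determination, R², of a least-squares line through intensity-versus-dose data. A minimal sketch of that statistic; the dose and intensity values below are illustrative, not from the study:

```python
# Hedged sketch: R^2 of a least-squares linear fit, the statistic used to
# quantify linearity between Čerenkov intensity and known dose.

def r_squared(xs, ys):
    """Coefficient of determination for the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx          # slope of y = a + b*x
    a = my - b * mx        # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical nearly-linear intensity readings at four dose levels:
dose = [1.0, 2.0, 3.0, 4.0]
intensity = [10.1, 19.8, 30.3, 39.9]
print(r_squared(dose, intensity) > 0.99)  # True for nearly linear data
```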

  5. Pose-Invariant Face Recognition via RGB-D Images.

    PubMed

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.

  6. Performance Evaluation of a Pose Estimation Method based on the SwissRanger SR4000

    DTIC Science & Technology

    2012-08-01

    however, not suitable for navigating a small robot. Commercially available Flash LIDAR now has sufficient accuracy for robotic applications. A...Flash LIDAR simultaneously produces intensity and range images of the scene at a video frame rate. It has the following advantages over stereovision...fully dense depth data across its field-of-view. The commercially available Flash LIDAR includes the SwissRanger [17] and TigerEye 3D [18].

  7. Superficial ultrasound shear wave speed measurements in soft and hard elasticity phantoms: repeatability and reproducibility using two ultrasound systems.

    PubMed

    Dillman, Jonathan R; Chen, Shigao; Davenport, Matthew S; Zhao, Heng; Urban, Matthew W; Song, Pengfei; Watcharotone, Kuanwong; Carson, Paul L

    2015-03-01

    There is a paucity of data available regarding the repeatability and reproducibility of superficial shear wave speed (SWS) measurements at imaging depths relevant to the pediatric population. To assess the repeatability and reproducibility of superficial shear wave speed measurements acquired from elasticity phantoms at varying imaging depths using three imaging methods, two US systems and multiple operators. Soft and hard elasticity phantoms manufactured by Computerized Imaging Reference Systems Inc. (Norfolk, VA) were utilized for our investigation. Institution No. 1 used an Acuson S3000 US system (Siemens Medical Solutions USA, Malvern, PA) and three shear wave imaging method/transducer combinations, while institution No. 2 used an Aixplorer US system (SuperSonic Imagine, Bothell, WA) and two different transducers. Ten stiffness measurements were acquired from each phantom at three depths (1.0 cm, 2.5 cm and 4.0 cm) by four operators at each institution. Student's t-test was used to compare SWS measurements between imaging techniques, while SWS measurement agreement was assessed with two-way random effects single-measure intra-class correlation coefficients (ICCs) and coefficients of variation. Mixed model regression analysis determined the effect of predictor variables on SWS measurements. For the soft phantom, the average of mean SWS measurements across the various imaging methods and depths was 0.84 ± 0.04 m/s (mean ± standard deviation) for the Acuson S3000 system and 0.90 ± 0.02 m/s for the Aixplorer system (P = 0.003). For the hard phantom, the average of mean SWS measurements across the various imaging methods and depths was 2.14 ± 0.08 m/s for the Acuson S3000 system and 2.07 ± 0.03 m/s for the Aixplorer system (P > 0.05). The coefficients of variation were low (0.5-6.8%), and interoperator agreement was near-perfect (ICCs ≥ 0.99). Shear wave imaging method and imaging depth significantly affected measured SWS (P < 0.0001). 
Superficial shear wave speed measurements in elasticity phantoms demonstrate minimal variability across imaging method/transducer combinations, imaging depths and operators. The exact clinical significance of this variation is uncertain and may change according to organ and specific disease state.
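    The coefficient of variation used above is simply the standard deviation expressed as a percentage of the mean. As a minimal illustration (not the authors' code), it can be applied to the between-system summary values quoted in the abstract; note these summary CVs are distinct from the 0.5-6.8% repeat-measurement CVs the paper reports.

```python
# Coefficient of variation (CV): standard deviation as a percentage of the
# mean. Illustrative only (not the authors' code), applied to the soft-phantom
# summary values (mean +/- SD, m/s) quoted in the abstract above.

def coefficient_of_variation(mean, sd):
    """CV as a percentage: 100 * sd / mean."""
    return 100.0 * sd / mean

print(f"{coefficient_of_variation(0.84, 0.04):.1f}%")  # 4.8% (Acuson S3000, soft phantom)
print(f"{coefficient_of_variation(0.90, 0.02):.1f}%")  # 2.2% (Aixplorer, soft phantom)
```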

  8. Superficial Ultrasound Shear Wave Speed Measurements in Soft and Hard Elasticity Phantoms: Repeatability and Reproducibility Using Two Different Ultrasound Systems

    PubMed Central

    Dillman, Jonathan R.; Chen, Shigao; Davenport, Matthew S.; Zhao, Heng; Urban, Matthew W.; Song, Pengfei; Watcharotone, Kuanwong; Carson, Paul L.

    2014-01-01

    Background There is a paucity of data available regarding the repeatability and reproducibility of superficial shear wave speed (SWS) measurements at imaging depths relevant to the pediatric population. Purpose To assess the repeatability and reproducibility of superficial shear wave speed (SWS) measurements acquired from elasticity phantoms at varying imaging depths using three different imaging methods, two different ultrasound systems, and multiple operators. Methods and Materials Soft and hard elasticity phantoms manufactured by Computerized Imaging Reference Systems, Inc. (Norfolk, VA) were utilized for our investigation. Institution #1 used an Acuson S3000 ultrasound system (Siemens Medical Solutions USA, Inc.) and three different shear wave imaging method/transducer combinations, while institution #2 used an Aixplorer ultrasound system (Supersonic Imagine) and two different transducers. Ten stiffness measurements were acquired from each phantom at three depths (1.0, 2.5, and 4.0 cm) by four operators at each institution. Student's t-test was used to compare SWS measurements between imaging techniques, while SWS measurement agreement was assessed with two-way random effects single-measure intra-class correlation coefficients and coefficients of variation. Mixed model regression analysis determined the effect of predictor variables on SWS measurements. Results For the soft phantom, the average of mean SWS measurements across the various imaging methods and depths was 0.84 ± 0.04 m/s (mean ± standard deviation) for the Acuson S3000 system and 0.90 ± 0.02 m/s for the Aixplorer system (p=0.003). For the hard phantom, the average of mean SWS measurements across the various imaging methods and depths was 2.14 ± 0.08 m/s for the Acuson S3000 system and 2.07 ± 0.03 m/s for the Aixplorer system (p>0.05). The coefficients of variation were low (0.5–6.8%), and inter-operator agreement was near-perfect (ICCs ≥0.99). Shear wave imaging method and imaging depth significantly affected measured SWS (p<0.0001). Conclusions Superficial SWS measurements in elasticity phantoms demonstrate minimal variability across imaging method/transducer combinations, imaging depths, and between operators. The exact clinical significance of this variability is uncertain and may vary by organ and specific disease state. PMID:25249389

  9. Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Wimmer, Michael

    2016-02-01

    With the enormous advances in acquisition technology over recent years, fast processing and high-quality visualization of large point clouds have gained increasing attention. Commonly, a mesh surface is reconstructed from the point cloud and a high-resolution texture is generated over the mesh from the images taken at the site to represent surface materials. However, this global reconstruction and texturing approach becomes impractical with increasing data sizes. Recently, due to its potential for scalability and extensibility, a method has been proposed that textures a set of depth maps in a preprocessing step and stitches them at runtime to represent large scenes. However, the rendering performance of this method depends strongly on the number of depth maps and their resolution. Moreover, for the proposed scene representation, every single depth map has to be textured by the images, which in practice heavily increases processing costs. In this paper, we present a novel method that breaks these dependencies by introducing efficient raytracing of multiple depth maps. In a preprocessing phase, we first generate high-resolution textured depth maps by rendering the input points from image cameras and then perform a graph-cut-based optimization to assign a small subset of these points to the images. At runtime, we use the resulting point-to-image assignments (1) to identify for each view ray which depth map contains the closest ray-surface intersection and (2) to efficiently compute this intersection point. The resulting algorithm accelerates both the texturing and the rendering of the depth maps by an order of magnitude.
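    The runtime step described above, identifying which depth map contains the closest ray-surface intersection for a view ray, amounts to choosing the nearest valid hit among candidate maps. A deliberately simplified sketch (the function and the (id, distance) data layout are illustrative assumptions, not the paper's implementation):

```python
# Simplified selection among candidate depth maps: for one view ray, keep the
# depth map whose ray-surface intersection is nearest the camera.

def closest_hit(candidates):
    """candidates: (depth_map_id, hit_distance) pairs; hit_distance None = miss.
    Returns the id of the map with the nearest valid hit, or None if all miss."""
    hits = [(dist, dm_id) for dm_id, dist in candidates if dist is not None]
    return min(hits)[1] if hits else None

print(closest_hit([("A", 3.2), ("B", 1.7), ("C", None)]))  # B
```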

  10. Penetration depth measurement of near-infrared hyperspectral imaging light for milk powder

    USDA-ARS?s Scientific Manuscript database

    The increasingly common application of near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying penetration depth of NIR hyperspectral imaging ligh...

  11. The application of coded excitation technology in medical ultrasonic Doppler imaging

    NASA Astrophysics Data System (ADS)

    Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin

    2008-03-01

    Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. Applying coded excitation technology to a medical ultrasonic Doppler imaging system offers higher SNR and deeper penetration depth than a conventional pulse-echo imaging system; it also improves image quality and enhances sensitivity to weak signals, and a properly chosen code benefits the received spectrum of the Doppler signal. This paper first analyzes the application of coded excitation technology to medical ultrasonic Doppler imaging, showing the advantages and promise of coded excitation, and then introduces its principle and theory. Next, we compare several code sequences (including chirp and pseudo-chirp signals, Barker codes, Golay complementary sequences, M-sequences, etc.). Considering mainlobe width, range sidelobe level, signal-to-noise ratio and Doppler signal sensitivity, we chose Barker codes as the excitation sequence. Finally, we designed the coded excitation circuit. The results in B-mode imaging and Doppler flow measurement matched our expectations, demonstrating the advantages of applying coded excitation technology in a digital medical ultrasonic Doppler endoscope imaging system.
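    The appeal of Barker codes mentioned above comes from their autocorrelation: after pulse compression, the mainlobe equals the code length while every sidelobe has at most unit magnitude, giving a low range sidelobe level. A minimal sketch (illustrative only, not the authors' implementation) for the length-13 Barker code:

```python
# Aperiodic autocorrelation of the length-13 Barker code: the compressed
# mainlobe equals the code length (13) while every sidelobe magnitude is <= 1.

def autocorr(seq):
    """Aperiodic autocorrelation of a bipolar (+1/-1) sequence for lags 0..N-1."""
    n = len(seq)
    return [sum(seq[i] * seq[i + k] for i in range(n - k)) for k in range(n)]

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
r = autocorr(barker13)
print(r[0])                        # mainlobe: 13
print(max(abs(x) for x in r[1:]))  # largest sidelobe magnitude: 1
```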

  12. Terahertz Imaging of Three-Dimensional Dehydrated Breast Cancer Tumors

    NASA Astrophysics Data System (ADS)

    Bowman, Tyler; Wu, Yuhao; Gauch, John; Campbell, Lucas K.; El-Shenawee, Magda

    2017-06-01

    This work presents the application of terahertz imaging to three-dimensional formalin-fixed, paraffin-embedded human breast cancer tumors. The results demonstrate the capability of terahertz for in-depth scanning to produce cross-section images without the need to slice the tumor. Samples of tumors excised from women diagnosed with infiltrating ductal carcinoma and lobular carcinoma are investigated using a pulsed terahertz time-domain imaging system. A time-of-flight estimation is used to obtain vertical and horizontal cross-section images of tumor tissues embedded in a paraffin block. Strong agreement is shown between the terahertz images obtained by electronically scanning the tumor in depth and the corresponding histopathology images. The detection of cancer tissue inside the block is found to be accurate to depths over 1 mm. Image processing techniques are applied to provide improved contrast and automation of the obtained terahertz images. In particular, unsharp masking and edge detection methods are found to be most effective for three-dimensional block imaging.
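    The time-of-flight estimation referred to above converts the delay between the surface echo and a subsurface reflection into a depth via d = c·Δt/(2n). A minimal sketch (the paraffin refractive index below is an assumed value for illustration; the paper's calibrated value may differ):

```python
# Time-of-flight depth estimate for pulsed THz reflection imaging: a subsurface
# echo delayed by dt relative to the surface echo lies at depth d = c*dt/(2*n).
# The refractive index value is an assumption for illustration.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(dt_s, n):
    """Depth (m) below the surface for a round-trip delay dt_s in a medium of index n."""
    return C * dt_s / (2.0 * n)

# A 10 ps round-trip delay in a material with assumed n = 1.5:
print(f"{tof_depth(10e-12, 1.5) * 1e3:.2f} mm")  # ~1.00 mm
```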

  13. Four faint T dwarfs from the UKIRT Infrared Deep Sky Survey (UKIDSS) Southern Stripe

    NASA Astrophysics Data System (ADS)

    Chiu, Kuenley; Liu, Michael C.; Jiang, Linhua; Allers, Katelyn N.; Stark, Daniel P.; Bunker, Andrew; Fan, Xiaohui; Glazebrook, Karl; Dupuy, Trent J.

    2008-03-01

    We present the optical and near-infrared photometry and spectroscopy of four faint T dwarfs newly discovered from the UKIDSS first data release. The sample, drawn from an imaged area of ~136 deg2 to a depth of Y = 19.9 (5σ, Vega), is located in the Sloan Digital Sky Survey (SDSS) Southern Equatorial Stripe, a region of significant future deep imaging potential. We detail the selection and follow-up of these objects, three of which are spectroscopically confirmed brown dwarfs ranging from type T2.5 to T7.5, and one of which is photometrically identified as an early T dwarf. Their magnitudes range from Y = 19.01 to 19.88 with derived distances from 34 to 98 pc, making these among the coldest and faintest brown dwarfs known. The T7.5 dwarf appears to be single based on 0.05-arcsec images from Keck laser guide star adaptive optics. The sample brings the total number of T dwarfs found or confirmed by UKIDSS data in this region to nine, and we discuss the projected numbers of dwarfs in the future survey data. We estimate that ~240 early and late T dwarfs are discoverable in the UKIDSS Large Area Survey (LAS) data, falling significantly short of published model projections and suggesting that initial mass functions and/or birth rates may be at the low end of possible models. Thus, deeper optical data have good potential to exploit the UKIDSS survey depth more fully, but may still find the potential Y dwarf sample to be extremely rare.
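    The derived distances quoted above rest, in essence, on the standard photometric distance relation d = 10^((m − M + 5)/5) pc. A minimal sketch (the absolute magnitude used here is a hypothetical illustration value, not one derived in the paper):

```python
# Standard photometric distance relation: d = 10**((m - M + 5) / 5) parsec.
# The absolute magnitude M_Y below is a hypothetical value for illustration.

def photometric_distance_pc(m_apparent, M_absolute):
    """Distance in parsecs from apparent magnitude m and absolute magnitude M."""
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

# An apparent Y = 19.0 source with an assumed M_Y = 16.0:
print(round(photometric_distance_pc(19.0, 16.0), 1))  # 39.8 pc
```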

  14. Optimization of a miniature short-wavelength infrared objective optics of a short-wavelength infrared to visible upconversion layer attached to a mobile-devices visible camera

    NASA Astrophysics Data System (ADS)

    Kadosh, Itai; Sarusi, Gabby

    2017-10-01

    The use of dual cameras in parallax in order to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept where the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and the additional upconversion layer, whose thickness is <1 μm. Such a layer should be attached in close proximity to the mobile device's visible-range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization that matches the form and mechanical fit of the visible objective design but uses different lenses, in order to maintain commonality and serve as a proof of concept. Such a SWIR objective design is very challenging since it requires mimicking the original visible mobile camera lenses' sizes and mechanical housing so that the visible optical and mechanical design can be retained. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore-optics design.

  15. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139
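    The descattering step described above removes an estimated backscatter component from each pixel before stereo matching. A deliberately simplified, per-pixel sketch (the additive model, subtraction and clamping are assumptions for illustration; the paper derives a more detailed non-uniform backscatter model for an active source close to the cameras):

```python
# Per-pixel descattering sketch: assume the observed intensity is attenuated
# object radiance plus an additive backscatter term from the active source,
# so subtracting a backscatter estimate (clamped at zero) yields a cleaner
# input image for stereo matching. Model and values are illustrative.

def descatter_pixel(observed, backscatter_estimate):
    """Remove the estimated backscatter component, clamping at zero."""
    return max(0.0, observed - backscatter_estimate)

row = [0.8, 0.75, 0.9]          # observed intensities
backscatter = [0.3, 0.3, 0.35]  # spatially varying backscatter estimate
print([round(descatter_pixel(o, b), 2) for o, b in zip(row, backscatter)])
# [0.5, 0.45, 0.55]
```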

  16. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.
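    The depth-based filtering idea described above, rejecting detected human/object rectangles that are implausible given the depth channel, can be sketched as a simple consistency check (the threshold, patch layout and criterion are hypothetical, not the authors' parameters):

```python
# Hypothetical depth-based filter for candidate detections: a real person or
# object occupies a limited depth range, so reject detection rectangles whose
# interior depth values spread too widely (e.g. a box straddling foreground
# and background). Threshold and layout are illustrative assumptions.

def depth_consistent(depth_patch, max_spread_m=0.6):
    """True if the valid depths inside the patch span at most max_spread_m."""
    valid = [d for row in depth_patch for d in row if d > 0]  # drop missing (0) depths
    if not valid:
        return False
    return (max(valid) - min(valid)) <= max_spread_m

print(depth_consistent([[2.0, 2.1], [2.05, 2.2]]))  # True  (tight spread: keep)
print(depth_consistent([[1.0, 4.5], [1.2, 6.0]]))   # False (spans background: reject)
```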

  17. Monocular Depth Perception and Robotic Grasping of Novel Objects

    DTIC Science & Technology

    2009-06-01

    resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we...supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show

  18. Deep learning-based depth estimation from a synthetic endoscopy image training set

    NASA Astrophysics Data System (ADS)

    Mahmood, Faisal; Durr, Nicholas J.

    2018-03-01

    Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through an endoscopic colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, in part due to the lack of depth information and contrast of the surface of the colon. Estimating depth using conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can easily be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since there is limited availability of colonoscopy images with ground-truth depth maps and colon texture is highly patient-specific, we generated training images using a synthetic, texture-free colon phantom to train our models. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated Computer Aided Detection (CAD) to assist in identifying lesions.
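    The relative error of 0.164 quoted above is a mean relative difference between predicted and ground-truth depths. A minimal sketch of such a metric (the exact definition used by the authors is an assumption for illustration):

```python
# One plausible reading of the quoted metric: mean absolute relative error
# between predicted and ground-truth depths. Values are hypothetical.

def mean_relative_error(predicted, truth):
    """Average of |pred - true| / true over all samples."""
    return sum(abs(p - t) / t for p, t in zip(predicted, truth)) / len(truth)

pred = [1.1, 2.3, 2.9]  # hypothetical predicted depths
gt = [1.0, 2.0, 3.0]    # hypothetical ground-truth depths
print(round(mean_relative_error(pred, gt), 3))  # 0.094
```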

  19. [Comparative analysis of light sensitivity, depth and motion perception in animals and humans].

    PubMed

    Schaeffel, F

    2017-11-01

    This study examined how humans perform regarding light sensitivity, depth perception and motion vision in comparison to various animals. The parameters that limit the performance of the visual system for these different functions were examined. This study was based on literature searches (in PubMed) and our own results. Light sensitivity is limited by the brightness of the retinal image, which in turn is determined by the f-number of the eye. Furthermore, it is limited by photon noise, thermal decay of rhodopsin, noise in the phototransduction cascade and neuronal processing. In invertebrates, impressive optical tricks have been developed to increase the number of photons reaching the photoreceptors. Furthermore, the spontaneous decay of the photopigment is lower in invertebrates, at the cost of higher energy consumption. For depth perception at close range, stereopsis is the most precise mechanism but is available only to a few vertebrates. In contrast, motion parallax is used by many species, including vertebrates as well as invertebrates. In a few cases, accommodation or chromatic aberration is used for depth measurements. In motion vision, the temporal resolution of the eye is most important. The flicker fusion frequency correlates in vertebrates with metabolic turnover and body temperature but also reaches very high values in insects. Apart from that, the flicker fusion frequency generally declines with increasing body weight. Compared to animals, the performance of the human visual system is among the best regarding light sensitivity, the best regarding depth resolution and in the middle range regarding motion resolution.
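    The f-number relation mentioned above follows from the standard optics result that image-plane irradiance scales inversely with the square of the f-number. A minimal numerical sketch:

```python
# Retinal image brightness scales inversely with the square of the f-number N
# (brightness proportional to 1 / N**2), which is why a low f-number eye is
# more light sensitive.

def relative_brightness(f_number, reference_f_number=1.0):
    """Image-plane irradiance relative to an eye with the reference f-number."""
    return (reference_f_number / f_number) ** 2

print(relative_brightness(2.0))  # 0.25: an N = 2 eye gathers a quarter of the light
```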

  20. Ultrahigh resolution optical coherence elastography combined with a rigid micro-endoscope (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Fang, Qi; Curatolo, Andrea; Wijesinghe, Philip; Hamzah, Juliana; Ganss, Ruth; Noble, Peter B.; Karnowski, Karol; Sampson, David D.; Kim, Jun Ki; Lee, Wei M.; Kennedy, Brendan F.

    2017-02-01

    The mechanical forces that living cells experience represent an important framework in the determination of a range of intricate cellular functions and processes. Current insight into cell mechanics is typically provided by in vitro measurement systems; for example, atomic force microscopy (AFM) measurements are performed on cells in culture or, at best, on freshly excised tissue. Optical techniques, such as Brillouin microscopy and optical elastography, have been used for ex vivo and in situ imaging, recently achieving cellular-scale resolution. The utility of these techniques in cell mechanics lies in quick, three-dimensional and label-free mechanical imaging. Translation of these techniques toward minimally invasive in vivo imaging would provide unprecedented capabilities in tissue characterization. Here, we take the first steps along this path by incorporating a gradient-index micro-endoscope into an ultrahigh resolution optical elastography system. Using this endoscope, a lateral resolution of 2 µm is preserved over an extended depth-of-field of 80 µm, achieved by Bessel beam illumination. We demonstrate this combined system by imaging stiffness of a silicone phantom containing stiff inclusions and a freshly excised murine liver tissue. Additionally, we test this system on murine ribs in situ. We show that our approach can provide high quality extended depth-of-field images through an endoscope and has the potential to measure cell mechanics deep in tissue. Eventually, we believe this tool will be capable of studying biological processes and disease progression in vivo.

Top