Science.gov

Sample records for achievable imaging depth

  1. Depth remapping using seam carving for depth image based rendering

    NASA Astrophysics Data System (ADS)

    Tsubaki, Ikuko; Iwauchi, Kenichi

    2015-03-01

    Depth remapping is a technique to control the depth range of stereo images. Conventional remapping, which applies a transform function to the whole image, behaves stably but sometimes reduces the 3D appearance too much. To cope with this problem, a depth remapping method that preserves the details of the depth structure is proposed. We apply seam carving, an effective technique for image retargeting, to depth remapping. An extended depth map is defined as a space-depth volume, and a seam surface, a 2D monotonic and connected manifold, is introduced. The depth range is reduced by removing depth values on the seam surface from the space-depth volume. Finally, a stereo image pair is synthesized from the corrected depth map and an input color image by depth image based rendering.
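
    For orientation, a minimal sketch of the classic 2D seam carving step that this record builds on (the paper itself generalizes the idea to a seam surface in a space-depth volume): remove the minimum-energy vertical seam from a depth map. The function and parameter names below are illustrative, not from the paper.

      import numpy as np

      def remove_vertical_seam(depth):
          # Energy: sum of absolute horizontal and vertical depth gradients.
          h, w = depth.shape
          energy = np.abs(np.gradient(depth, axis=1)) + np.abs(np.gradient(depth, axis=0))
          # Dynamic programming: cumulative minimum cost of a connected seam.
          cost = energy.copy()
          for y in range(1, h):
              left = np.r_[np.inf, cost[y - 1, :-1]]
              up = cost[y - 1]
              right = np.r_[cost[y - 1, 1:], np.inf]
              cost[y] += np.minimum(np.minimum(left, up), right)
          # Backtrack the cheapest connected seam from bottom to top.
          seam = np.empty(h, dtype=int)
          seam[-1] = int(np.argmin(cost[-1]))
          for y in range(h - 2, -1, -1):
              x = seam[y + 1]
              lo, hi = max(0, x - 1), min(w, x + 2)
              seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
          # Remove the seam, shrinking the map by one column.
          keep = np.ones((h, w), dtype=bool)
          keep[np.arange(h), seam] = False
          return depth[keep].reshape(h, w - 1)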

  2. Depth image enhancement using perceptual texture priors

    NASA Astrophysics Data System (ADS)

    Bang, Duhyeon; Shim, Hyunjung

    2015-03-01

    A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to its limited power consumption, the depth camera produces severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress the depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perceptual depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database composed of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect our work to be effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.

  3. Depth-aware image seam carving.

    PubMed

    Shen, Jianbing; Wang, Dapeng; Li, Xuelong

    2013-10-01

    An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, while removing only the secondary objects in the scene. However, it is still difficult to identify the important and salient objects and to avoid distorting them after resizing the input image. In this paper, we develop a novel depth-aware single image seam carving approach by taking advantage of modern depth cameras such as the Kinect sensor, which captures an RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph cut based energy optimization. Our method achieves better seam carving performance by removing fewer seams from near objects and more seams from distant objects. To the best of our knowledge, our algorithm is the first to use a true depth map captured by the Kinect depth camera for single image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods. PMID:23893762

  4. Extended depth of field imaging for high speed object analysis

    NASA Technical Reports Server (NTRS)

    Ortyn, William (Inventor); Basiji, David (Inventor); Frost, Keith (Inventor); Liang, Luchuan (Inventor); Bauer, Richard (Inventor); Hall, Brian (Inventor); Perry, David (Inventor)

    2011-01-01

    A high speed, high-resolution flow imaging system is modified to achieve extended depth of field imaging. An optical distortion element is introduced into the flow imaging system. Light from an object, such as a cell, is distorted by the distortion element, such that a point spread function (PSF) of the imaging system is invariant across an extended depth of field. The distorted light is spectrally dispersed, and the dispersed light is used to simultaneously generate a plurality of images. The images are detected, and image processing is used to enhance the detected images by compensating for the distortion, to achieve extended depth of field images of the object. The post-processing preferably involves deconvolution, and requires knowledge of the PSF of the imaging system, as modified by the optical distortion element.
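
    The post-processing step restores images distorted by a known PSF. As a hedged illustration (the record only states that deconvolution is used; Wiener filtering is one standard choice, and the circular-convolution assumption below is ours), a frequency-domain restoration could look like this:

      import numpy as np

      def wiener_deconvolve(image, psf, nsr=0.01):
          # Assumes the blur acts as a circular convolution with a known PSF.
          H = np.fft.fft2(psf, s=image.shape)
          G = np.fft.fft2(image)
          W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # nsr: noise-to-signal power ratio
          return np.real(np.fft.ifft2(W * G))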

  5. Fast planar segmentation of depth images

    NASA Astrophysics Data System (ADS)

    Javan Hemmat, Hani; Pourtaherian, Arash; Bondarev, Egor; de With, Peter H. N.

    2015-03-01

    One of the major challenges for applications dealing with the 3D concept is the real-time execution of the algorithms. Besides this, for indoor environments, perceiving the geometry of surrounding structures plays a prominent role in application performance. Since indoor structures mainly consist of planar surfaces, fast and accurate detection of such features has a crucial impact on the quality and functionality of 3D applications, e.g. decreasing model size (decimation), enhancing localization, mapping, and semantic reconstruction. The available planar-segmentation algorithms are mostly developed using surface normals and/or curvatures, making them computationally expensive and challenging to run in real time. In this paper, we introduce a fast planar-segmentation method for depth images that avoids surface normal calculations. Firstly, the proposed method searches for 3D edges in a depth image and finds the lines between identified edges. Secondly, it merges all the points on each pair of intersecting lines into a plane. Finally, various enhancements (e.g. filtering) are applied to improve the segmentation quality. The proposed algorithm is capable of handling VGA-resolution depth images at a frame rate of 6 FPS with a single-thread implementation. Furthermore, due to the multi-threaded design of the algorithm, we achieve a factor of 10 speedup by deploying a GPU implementation.

  6. PSF engineering in multifocus microscopy for increased depth volumetric imaging.

    PubMed

    Hajj, Bassam; El Beheiry, Mohamed; Dahan, Maxime

    2016-03-01

    Imaging and localizing single molecules with high accuracy in a 3D volume is a challenging task. Here we combine multifocal microscopy, a recently developed volumetric imaging technique, with point spread function engineering to achieve an increased depth for single molecule imaging. Application to 3D single-molecule localization-based super-resolution imaging is shown over an axial depth of 4 µm, as well as to the tracking of diffusing beads in a fluid environment over 8 µm. PMID:27231584

  7. PSF engineering in multifocus microscopy for increased depth volumetric imaging

    PubMed Central

    Hajj, Bassam; El Beheiry, Mohamed; Dahan, Maxime

    2016-01-01

    Imaging and localizing single molecules with high accuracy in a 3D volume is a challenging task. Here we combine multifocal microscopy, a recently developed volumetric imaging technique, with point spread function engineering to achieve an increased depth for single molecule imaging. Application to 3D single-molecule localization-based super-resolution imaging is shown over an axial depth of 4 µm, as well as to the tracking of diffusing beads in a fluid environment over 8 µm. PMID:27231584

  8. Directional Joint Bilateral Filter for Depth Images

    PubMed Central

    Le, Anh Vu; Jung, Seung-Won; Won, Chee Sun

    2014-01-01

    Depth maps taken by the low cost Kinect sensor are often noisy and incomplete. Thus, post-processing for obtaining reliable depth maps is necessary for advanced image and video applications such as object recognition and multi-view rendering. In this paper, we propose adaptive directional filters that fill the holes and suppress the noise in depth maps. Specifically, novel filters whose window shapes are adaptively adjusted based on the edge direction of the color image are presented. Experimental results show that our method yields higher quality filtered depth maps than other existing methods, especially at the edge boundaries. PMID:24971470
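
    For reference, a plain (non-directional) joint bilateral filter of the kind this work extends can be sketched as follows; the directional, edge-adaptive windows of the paper are not reproduced here, and the parameter values and hole convention (zeros) are illustrative assumptions:

      import numpy as np

      def joint_bilateral_fill(depth, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
          # Range kernel is computed on a registered grayscale guide image;
          # invalid depths are assumed to be encoded as zeros.
          h, w = depth.shape
          d = np.pad(depth.astype(float), radius, mode='edge')
          g = np.pad(guide.astype(float), radius, mode='edge')
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
          out = np.zeros((h, w))
          for y in range(h):
              for x in range(w):
                  dw = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                  gw = g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                  rng = np.exp(-((gw - g[y + radius, x + radius]) ** 2) / (2 * sigma_r ** 2))
                  wgt = spatial * rng * (dw > 0)    # holes get zero weight
                  s = wgt.sum()
                  out[y, x] = (wgt * dw).sum() / s if s > 0 else 0.0
          return out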

  9. Directional joint bilateral filter for depth images.

    PubMed

    Le, Anh Vu; Jung, Seung-Won; Won, Chee Sun

    2014-01-01

    Depth maps taken by the low cost Kinect sensor are often noisy and incomplete. Thus, post-processing for obtaining reliable depth maps is necessary for advanced image and video applications such as object recognition and multi-view rendering. In this paper, we propose adaptive directional filters that fill the holes and suppress the noise in depth maps. Specifically, novel filters whose window shapes are adaptively adjusted based on the edge direction of the color image are presented. Experimental results show that our method yields higher quality filtered depth maps than other existing methods, especially at the edge boundaries. PMID:24971470

  10. Image inpainting strategy for Kinect depth maps

    NASA Astrophysics Data System (ADS)

    Yao, Huimin; Chen, Yan; Ge, Chenyang

    2013-07-01

    The great advantage of the Microsoft Kinect is that it makes depth acquisition real-time and inexpensive. However, the depth maps obtained directly from the Kinect device have missing regions and holes caused by optical factors. These noisy depth maps affect many complex tasks in computer vision. To improve the quality of the depth maps, this paper presents an efficient image inpainting strategy based on watershed segmentation and a region-merging framework applied to the corresponding color images. The primitive regions produced by the watershed transform are merged into larger regions according to color similarity and the edges between regions. Finally, a mean filter over adjacent pixels is used to fill in missing depth values, and a deblocking filter is applied to smooth the depth maps.
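
    A rough sketch of the color-guided hole-filling idea described above, using scikit-image; the paper's region-merging criterion and deblocking filter are replaced here by a simple per-region mean fill, and the marker count is an assumption:

      import numpy as np
      from skimage.color import rgb2gray
      from skimage.filters import sobel
      from skimage.segmentation import watershed

      def inpaint_depth(depth, color, n_markers=400):
          # Segment the color image, then fill depth holes (zeros) with the mean
          # of the valid depths inside the same color region.
          labels = watershed(sobel(rgb2gray(color)), markers=n_markers, compactness=0.001)
          out = depth.astype(float).copy()
          for lab in np.unique(labels):
              region = labels == lab
              known = region & (depth > 0)
              if known.any():
                  out[region & (depth == 0)] = depth[known].mean()
          return out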

  11. Single image defogging by multiscale depth fusion.

    PubMed

    Wang, Yuan-Kai; Fan, Ching-Tang

    2014-11-01

    Restoration of fog images is important for the deweathering issue in computer vision. The problem is ill-posed and can be regularized within a Bayesian context using a probabilistic fusion model. This paper presents a multiscale depth fusion (MDF) method for defogging a single image. A linear model representing the stochastic residual of nonlinear filtering is first proposed. Multiscale filtering results are probabilistically blended into a fused depth map based on the model. The fusion is formulated as an energy minimization problem that incorporates spatial Markov dependence. An inhomogeneous Laplacian-Markov random field for the multiscale fusion, regularized with smoothing and edge-preserving constraints, is developed. A nonconvex potential, the adaptive truncated Laplacian, is devised to account for spatially variant characteristics such as edges and depth discontinuities. Defogging is solved by an alternating optimization algorithm that searches for solutions of the depth map by minimizing the nonconvex potential in the random field. The MDF method is experimentally verified on real-world fog images, including cluttered-depth scenes that are challenging to defog at finer details. The fog-free images are restored with improved contrast and vivid colors but without over-saturation. Quantitative assessment of image quality is applied to compare various defog methods. Experimental results demonstrate that accurate estimation of the depth map by the proposed edge-preserving multiscale fusion recovers high-quality images with sharp details. PMID:25248180

  12. A New Approach for Image Depth from a Single Image

    NASA Astrophysics Data System (ADS)

    Leng, Jiaojiao; Zhao, Tongzhou; Li, Hui; Li, Xiang

    This paper presents a new method, called depth from defocus (DFD), to obtain the image depth from a single still image. Traditional approaches either depend on local features that are insufficient for estimation or need multiple images, which requires a large amount of computation. The reverse heat equation is applied to obtain the defocused image. Then we use a confidence interval to segment the defocused image and obtain a hierarchical image with a guided image filter. The method needs only a single image, so it avoids the massive computation and improves computational efficiency. The results show that the DFD method is valid and efficient.

  13. Depth of field in modern thermal imaging

    NASA Astrophysics Data System (ADS)

    Schuster, Norbert; Franks, John

    2015-05-01

    Modern thermal imaging lenses for uncooled detectors are high aperture systems. Very often, their aperture-based f-number is faster than 1.2. The impact of this on the depth of field is dramatic, especially for narrow field lenses. Users would like to know how the image quality changes with and without refocusing for objects at different distances from the camera core. The Depth of Field approach presented here is based on the lens-specific Through-Focus MTF, averaged over the detector area. The lens-specific Through-Focus MTF is determined at the detector Nyquist frequency, which is defined by the pixel pitch. In this way, the specific lens and the specific FPA geometry (pixel pitch, detector area) are considered. The condition that the Through-Focus MTF at full Nyquist must be higher than 0.25 defines a certain symmetrical depth of focus. This criterion provides a good discrimination for reasonable lens/detector combinations. The examples chosen reflect the actual development of uncooled camera cores. The symmetrical depth of focus is transferred to object space using paraxial relations. This defines a typical depth of field diagram containing three functions: hyperfocal distance, nearest distance, and furthest distance versus sharp distance (best focus). Pictures taken with an IR camera illustrate the effect on the depth of field and its dependence on focal length. These pictures confirm the methodology. A separate problem is the acceptable drop of resolution in combination with a specific camera core and specific object scenes. We propose to evaluate the MTF graph at half Nyquist frequency. This quantifies the resolution loss without refocus, in accordance with the IR-picture degradation at the limits of the depth of field. The approach is applied to different commercially available lenses. Pictures illustrate the depth of field for different pixel pitches and pixel counts.
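
    The paper's criterion is built on the lens-specific Through-Focus MTF; for comparison, the classical geometric relations it refers to (hyperfocal distance, nearest and furthest sharp distance) can be computed as below. The circle-of-confusion value, typically tied to the pixel pitch, is a choice the user must make, and the example numbers are illustrative only:

      def depth_of_field(f, N, c, s):
          # f: focal length, N: f-number, c: circle of confusion, s: focus distance
          # (all in the same length unit, thin-lens approximation).
          H = f ** 2 / (N * c) + f                                  # hyperfocal distance
          near = H * s / (H + (s - f))                              # nearest sharp distance
          far = H * s / (H - (s - f)) if s < H else float('inf')    # furthest sharp distance
          return H, near, far

      # Example: 25 mm f/1.2 lens, 17 µm pixel pitch taken as the circle of confusion,
      # focused at 10 m (all values in mm).
      print(depth_of_field(25.0, 1.2, 0.017, 10000.0))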

  14. Depth-resolved image mapping spectrometer (IMS) with structured illumination

    NASA Astrophysics Data System (ADS)

    Gao, Liang; Bedard, Noah; Hagen, Nathan; Kester, Robert T.; Tkaczyk, Tomasz S.

    2011-08-01

    We present a depth-resolved Image Mapping Spectrometer (IMS) which is capable of acquiring 4D (x, y, z, λ) datacubes. Optical sectioning is implemented by structured illumination. The device's spectral imaging performance is demonstrated in a multispectral microsphere and mouse kidney tissue fluorescence imaging experiment. We also compare quantitatively the depth-resolved IMS with a hyperspectral confocal microscope (HCM) in a standard fluorescent bead imaging experiment. The comparison results show that despite the use of a light source with four orders of magnitude lower intensity in the IMS than that in the HCM, the image signal-to-noise ratio acquired by the IMS is 2.6 times higher than that achieved by the equivalent confocal approach.

  15. Depth-based selective image reconstruction using spatiotemporal image analysis

    NASA Astrophysics Data System (ADS)

    Haga, Tetsuji; Sumi, Kazuhiko; Hashimoto, Manabu; Seki, Akinobu

    1999-03-01

    In industrial plants, a remote monitoring system that eliminates physical tour inspection is often considered desirable. However, the image sequence provided by a mobile inspection robot is hard to interpret because objects of interest are often partially occluded by obstacles such as pillars or fences. Our aim is to improve the image sequence so as to increase the efficiency and reliability of remote visual inspection. We propose a new depth-based image processing technique that removes needless objects from the foreground and recovers the occluded background electronically. Our algorithm is based on spatiotemporal analysis, which enables fine and dense depth estimation, depth-based precise segmentation, and accurate interpolation. We apply this technique to a real time sequence from a mobile inspection robot. The resulting image sequence is satisfactory in that the operator can perform correct visual inspection with less fatigue.

  16. Shallow depth subsurface imaging with microwave holography

    NASA Astrophysics Data System (ADS)

    Zhuravlev, Andrei; Ivashov, Sergey; Razevig, Vladimir; Vasiliev, Igor; Bechtel, Timothy

    2014-05-01

    In this paper, microwave holography is considered as a tool to obtain high resolution images of shallowly buried objects. Signal acquisition is performed at multiple frequencies on a grid using a two-dimensional mechanical scanner moving a single transceiver over an area of interest in close proximity to the surface. The described FFT-based reconstruction technique is used to obtain a stack of plan view images each using only one selected frequency from the operating waveband of the radar. The extent of a synthetically-formed aperture and the signal wavelength define the plan view resolution, which at sounding frequencies near 7 GHz amounts to 2 cm. The system has a short depth of focus which allows easy selection of proper focusing plane. The small distance from the buried objects to the antenna does not prevent recording of clean images due to multiple reflections (as happens with impulse radars). The description of the system hardware and signal processing technique is illustrated using experiments conducted in dry sand. The microwave images of inert anti-personnel mines are demonstrated as examples. The images allow target discrimination based on the same visually-discernible small features that a human observer would employ. The demonstrated technology shows promise for modification to meet the specific practical needs required for humanitarian demining or in multi-sensor survey systems.

  17. Wavelet-based stereo images reconstruction using depth images

    NASA Astrophysics Data System (ADS)

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2007-09-01

    It is believed by many that three-dimensional (3D) television will be the next logical development toward a more natural and vivid home entertainment experience. While the classical 3D approach requires the transmission of two video streams, one for each view, 3D TV systems based on depth image based rendering (DIBR) require a single stream of monoscopic images and a second stream of associated images, usually termed depth images or depth maps, that contain per-pixel depth information. A depth map is a two-dimensional function that contains information about the distance from the camera to a certain point of the object as a function of the image coordinates. By using this depth information and the original image it is possible to reconstruct a virtual image of a nearby viewpoint by projecting the pixels of the available image to their locations in 3D space and finding their position in the desired view plane. One of the most significant advantages of DIBR is that depth maps can be coded more efficiently than two streams corresponding to the left and right views of the scene, thereby reducing the bandwidth required for transmission, which makes it possible to reuse existing transmission channels for 3D TV. This technique can also be applied to other 3D technologies such as multimedia systems. In this paper we propose an advanced wavelet domain scheme for the reconstruction of stereoscopic images, which solves some of the shortcomings of the existing methods discussed above. We perform the wavelet transform of both the luminance and depth images in order to obtain significant geometric features, which enable more sensible reconstruction of the virtual view. Motion estimation employed in our approach uses a Markov random field smoothness prior for regularization of the estimated motion field. The evaluation of the proposed reconstruction method is done on two video sequences which are typically used for comparison of stereo reconstruction algorithms. The results demonstrate

  18. Extending the depth-of-field for microscopic imaging by means of multifocus color image fusion

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, R.; Toxqui-Quitl, C.; Padilla-Vivanco, A.; Ortega-Mendoza, G.

    2015-09-01

    In microscopy, the depth of field (DOF) is limited by the physical characteristics of the imaging system. Imaging a scene with the entire field of view in focus can be an impossible task. In this paper, metal samples are inspected at multiple focal planes by moving the microscope stage along the z-axis, and for each z plane an image is digitized. Through digital image processing, an image with all regions in focus is generated from a set of multifocus images. The proposed fusion algorithm gives a single sharp image. The fusion scheme is simple, fast, and virtually free of artifacts or false color. Experimental fusion results are shown.
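
    A minimal sketch of the fusion idea (the abstract does not specify the paper's exact focus measure or merging rule, so the Laplacian-energy measure and window size below are assumptions): pick, per pixel, the slice of the focal stack with the highest local focus measure.

      import numpy as np
      from scipy.ndimage import laplace, uniform_filter

      def fuse_focal_stack(stack):
          # stack: (N, H, W) grayscale or (N, H, W, 3) color focal stack.
          gray = stack.mean(axis=-1) if stack.ndim == 4 else stack
          gray = gray.astype(float)
          focus = np.stack([uniform_filter(laplace(g) ** 2, size=9) for g in gray])
          best = np.argmax(focus, axis=0)               # index of the sharpest slice per pixel
          rows, cols = np.indices(best.shape)
          return stack[best, rows, cols]                # all-in-focus composite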

  19. Volumetric retinal fluorescence microscopic imaging with extended depth of field

    NASA Astrophysics Data System (ADS)

    Li, Zengzhuo; Fischer, Andrew; Li, Wei; Li, Guoqiang

    2016-03-01

    A wavefront-engineered microscope with greatly extended depth of field (EDoF) is designed and demonstrated for volumetric imaging with near-diffraction-limited optical performance. A bright field infinity-corrected transmissive/reflective light microscope is built with Kohler illumination. A home-made phase mask is placed between the objective lens and the tube lens for ease of use. A general polynomial function is adopted in the design of the phase plate for robustness, and a custom merit function is used in Zemax for optimization. The resulting EDoF system achieves an engineered point spread function (PSF) that is much less sensitive to object depth variation than conventional systems, and therefore 3D volumetric information can be acquired in a single frame with expanded tolerance of defocus. In a Zemax simulation for a setup using a 32X objective (NA = 0.6), the EDoF is 20 μm whereas a conventional system has a DoF of 1.5 μm, a roughly 13-fold increase. In experiment, a 20X objective lens with NA = 0.4 was used and the corresponding phase plate was designed and fabricated. Retinal fluorescence images from the EDoF microscope, using a passive adaptive optical phase element, illustrate a DoF of around 100 μm, and the system is able to recover volumetric fluorescence images that are almost identical to in-focus images after post-processing. The image obtained from the EDoF microscope is also better in resolution and contrast, and the retinal structure is better defined. Hence, due to their high tolerance of defocus and the quality of the restored images, EDoF optical systems have promising potential in consumer portable medical imaging devices where the user's ability to achieve focus is not optimal, and in other medical imaging equipment where achieving best focus is not necessary.

  20. Monocular depth perception using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Hombali, Apoorv; Gorde, Vaibhav; Deshpande, Abhishek

    2011-10-01

    This paper exploits some of the more obscure but inherent properties of a camera and image to propose a simpler and more efficient way of perceiving depth. The proposed method involves the use of a single stationary camera at an unknown perspective and an unknown height to determine the depth of an object on unknown terrain. To achieve this, a direct correlation between a pixel in an image and the corresponding location in real space has to be formulated. First, a calibration step is undertaken whereby the equation of the plane visible in the field of view is calculated, along with the relative distance between camera and plane, by using a set of derived spatial geometrical relations coupled with a few intrinsic properties of the system. The depth of an unknown object is then perceived by first extracting the object under observation using a series of image processing steps, followed by exploiting the aforementioned mapping between pixel and real-space coordinates. The performance of the algorithm is greatly enhanced by the introduction of reinforcement learning, making the system independent of hardware and environment. Furthermore, the depth calculation function is modified with a supervised learning algorithm, giving consistent improvement in results. Thus, the system uses past experience to successively optimize the current run. Using the above procedure, a series of experiments and trials is carried out to demonstrate the concept and its efficacy.
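
    The core geometric step described above, mapping a pixel to a point on a calibrated ground plane, can be sketched as follows (the learning components are omitted; the intrinsic matrix K and the plane parameters are assumed to come from the calibration step):

      import numpy as np

      def pixel_depth_on_plane(u, v, K, n, d):
          # Plane in camera coordinates: n . X + d = 0; K: 3x3 intrinsic matrix.
          ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray direction
          t = -d / (n @ ray)                               # ray/plane intersection parameter
          point = t * ray                                  # 3D point on the plane
          return point[2]                                  # depth along the optical axis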

  1. Depth Analogy: Data-Driven Approach for Single Image Depth Estimation Using Gradient Samples.

    PubMed

    Choi, Sunghwan; Min, Dongbo; Ham, Bumsub; Kim, Youngjung; Oh, Changjae; Sohn, Kwanghoon

    2015-12-01

    Inferring scene depth from a single monocular image is a highly ill-posed problem in computer vision. This paper presents a new gradient-domain approach, called depth analogy, that makes use of analogy as a means for synthesizing a target depth field, when a collection of RGB-D image pairs is given as training data. Specifically, the proposed method employs a non-parametric learning process that creates an analogous depth field by sampling reliable depth gradients using visual correspondence established on training image pairs. Unlike existing data-driven approaches that directly select depth values from training data, our framework transfers depth gradients as reconstruction cues, which are then integrated by the Poisson reconstruction. The performance of most conventional approaches relies heavily on the training RGB-D data used in the process, and such a dependency severely degenerates the quality of reconstructed depth maps when the desired depth distribution of an input image is quite different from that of the training data, e.g., outdoor versus indoor scenes. Our key observation is that using depth gradients in the reconstruction is less sensitive to scene characteristics, providing better cues for depth recovery. Thus, our gradient-domain approach can support a great variety of training range datasets that involve substantial appearance and geometric variations. The experimental results demonstrate that our (depth) gradient-domain approach outperforms existing data-driven approaches directly working on depth domain, even when only uncorrelated training datasets are available. PMID:26529766
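
    The Poisson integration step that turns transferred depth gradients back into a depth field can be sketched with an FFT-based solver (the periodic boundary condition here is an assumption of this sketch, not of the paper):

      import numpy as np

      def poisson_integrate(gx, gy):
          # Recover a field whose gradients best match (gx, gy), up to a constant offset.
          h, w = gx.shape
          div = np.zeros_like(gx, dtype=float)
          div[:, 1:] += gx[:, 1:] - gx[:, :-1]             # divergence of the gradient field
          div[1:, :] += gy[1:, :] - gy[:-1, :]
          fy = np.fft.fftfreq(h).reshape(-1, 1)
          fx = np.fft.fftfreq(w).reshape(1, -1)
          denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
          denom[0, 0] = 1.0                                # the mean of the solution is free
          F = np.fft.fft2(div) / denom
          F[0, 0] = 0.0
          return np.real(np.fft.ifft2(F))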

  2. Nanometric depth resolution from multi-focal images in microscopy

    PubMed Central

    Dalgarno, Heather I. C.; Dalgarno, Paul A.; Dada, Adetunmise C.; Towers, Catherine E.; Gibson, Gavin J.; Parton, Richard M.; Davis, Ilan; Warburton, Richard J.; Greenaway, Alan H.

    2011-01-01

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels. PMID:21247948

  3. Navigating from a Depth Image Converted into Sound

    PubMed Central

    Stoll, Chloé; Palluel-Germain, Richard; Fristot, Vincent; Pellerin, Denis; Alleysson, David; Graff, Christian

    2015-01-01

    Background. Common manufactured depth sensors generate depth images that humans normally obtain from their eyes and hands. Various designs converting spatial data into sound have been recently proposed, speculating on their applicability as sensory substitution devices (SSDs). Objective. We tested such a design as a travel aid in a navigation task. Methods. Our portable device (MeloSee) converted the 2D array of a depth image into melody in real time. Distance from the sensor was translated into sound intensity, stereo-modulated laterally, and the pitch represented verticality. Twenty-one blindfolded young adults navigated along four different paths during two sessions separated by a one-week interval. In some instances, a dual task required them to recognize a temporal pattern applied through a tactile vibrator while they navigated. Results. Participants learnt how to use the system both on new paths and on those they had already navigated. Based on travel time and errors, performance improved from one week to the next. The dual task was achieved successfully, slightly affecting but not preventing effective navigation. Conclusions. The use of Kinect-type sensors to implement SSDs is promising, but it is restricted to indoor use and is inefficient at very short range. PMID:27019586
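
    The distance-to-intensity, column-to-pan, row-to-pitch mapping described above can be sketched as follows; the grid size, base frequency, and range are illustrative choices, not MeloSee's actual parameters:

      import numpy as np

      def depth_frame_to_notes(depth, n_rows=8, n_cols=8, f0=220.0, max_range=4.0):
          # Returns (pitch_hz, loudness, pan) per grid cell of a depth frame in metres.
          h, w = depth.shape
          notes = []
          for r in range(n_rows):
              for c in range(n_cols):
                  cell = depth[r * h // n_rows:(r + 1) * h // n_rows,
                               c * w // n_cols:(c + 1) * w // n_cols]
                  dist = np.median(cell[cell > 0]) if (cell > 0).any() else max_range
                  loudness = max(0.0, 1.0 - dist / max_range)   # nearer -> louder
                  pitch = f0 * 2 ** ((n_rows - 1 - r) / 12.0)   # higher rows -> higher pitch
                  pan = c / (n_cols - 1)                        # 0 = left, 1 = right
                  notes.append((pitch, loudness, pan))
          return notes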

  4. Navigating from a Depth Image Converted into Sound.

    PubMed

    Stoll, Chloé; Palluel-Germain, Richard; Fristot, Vincent; Pellerin, Denis; Alleysson, David; Graff, Christian

    2015-01-01

    Background. Common manufactured depth sensors generate depth images that humans normally obtain from their eyes and hands. Various designs converting spatial data into sound have been recently proposed, speculating on their applicability as sensory substitution devices (SSDs). Objective. We tested such a design as a travel aid in a navigation task. Methods. Our portable device (MeloSee) converted the 2D array of a depth image into melody in real time. Distance from the sensor was translated into sound intensity, stereo-modulated laterally, and the pitch represented verticality. Twenty-one blindfolded young adults navigated along four different paths during two sessions separated by a one-week interval. In some instances, a dual task required them to recognize a temporal pattern applied through a tactile vibrator while they navigated. Results. Participants learnt how to use the system both on new paths and on those they had already navigated. Based on travel time and errors, performance improved from one week to the next. The dual task was achieved successfully, slightly affecting but not preventing effective navigation. Conclusions. The use of Kinect-type sensors to implement SSDs is promising, but it is restricted to indoor use and is inefficient at very short range. PMID:27019586

  5. Objective, comparative assessment of the penetration depth of temporal-focusing microscopy for imaging various organs

    NASA Astrophysics Data System (ADS)

    Rowlands, Christopher J.; Bruns, Oliver T.; Bawendi, Moungi G.; So, Peter T. C.

    2015-06-01

    Temporal focusing is a technique for performing axially resolved widefield multiphoton microscopy with a large field of view. Despite significant advantages over conventional point-scanning multiphoton microscopy in terms of imaging speed, the need to collect the whole image simultaneously means that it is expected to achieve a lower penetration depth in common biological samples compared to point-scanning. We assess the penetration depth using a rigorous objective criterion based on the modulation transfer function, comparing it to point-scanning multiphoton microscopy. Measurements are performed in a variety of mouse organs in order to provide practical guidance as to the achievable penetration depth for both imaging techniques. It is found that two-photon scanning microscopy has approximately twice the penetration depth of temporal-focusing microscopy, and that penetration depth is organ-specific; the heart has the lowest penetration depth, followed by the liver, lungs, and kidneys, then the spleen, and finally white adipose tissue.

  6. Objective, comparative assessment of the penetration depth of temporal-focusing microscopy for imaging various organs

    PubMed Central

    Rowlands, Christopher J.; Bruns, Oliver T.; Bawendi, Moungi G.; So, Peter T. C.

    2015-01-01

    Abstract. Temporal focusing is a technique for performing axially resolved widefield multiphoton microscopy with a large field of view. Despite significant advantages over conventional point-scanning multiphoton microscopy in terms of imaging speed, the need to collect the whole image simultaneously means that it is expected to achieve a lower penetration depth in common biological samples compared to point-scanning. We assess the penetration depth using a rigorous objective criterion based on the modulation transfer function, comparing it to point-scanning multiphoton microscopy. Measurements are performed in a variety of mouse organs in order to provide practical guidance as to the achievable penetration depth for both imaging techniques. It is found that two-photon scanning microscopy has approximately twice the penetration depth of temporal-focusing microscopy, and that penetration depth is organ-specific; the heart has the lowest penetration depth, followed by the liver, lungs, and kidneys, then the spleen, and finally white adipose tissue. PMID:25844509

  7. Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina.

    PubMed

    An, Lin; Shen, Tueng T; Wang, Ruikang K

    2011-10-01

    This paper presents comprehensive and depth-resolved retinal microvasculature images within human retina achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Due to its high flow sensitivity, UHS-OMAG is much more sensitive to tissue motion due to the involuntary movement of the human eye and head compared to the traditional OMAG system. To mitigate these motion artifacts on final imaging results, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is repeatedly used to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerability, critical for the UHS-OMAG system to achieve retinal microvasculature images with high quality. Furthermore, the new UHS-OMAG system employs a high speed line scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first is of the low lateral resolution (16 μm) and a wide field of view (4 × 3 mm(2) with single scan and 7 × 8 mm(2) for multiple scans), while the second is of the high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm(2) with single scan). The great imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to the current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration. PMID:22029360

  8. Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina

    NASA Astrophysics Data System (ADS)

    An, Lin; Shen, Tueng T.; Wang, Ruikang K.

    2011-10-01

    This paper presents comprehensive and depth-resolved retinal microvasculature images within human retina achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Due to its high flow sensitivity, UHS-OMAG is much more sensitive to tissue motion due to the involuntary movement of the human eye and head compared to the traditional OMAG system. To mitigate these motion artifacts on final imaging results, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is repeatedly used to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerability, critical for the UHS-OMAG system to achieve retinal microvasculature images with high quality. Furthermore, the new UHS-OMAG system employs a high speed line scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first is of the low lateral resolution (16 μm) and a wide field of view (4 × 3 mm2 with single scan and 7 × 8 mm2 for multiple scans), while the second is of the high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm2 with single scan). The great imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to the current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.

  9. Predictive coding of depth images across multiple views

    NASA Astrophysics Data System (ADS)

    Morvan, Yannick; Farin, Dirk; de With, Peter H. N.

    2007-02-01

    A 3D video stream is typically obtained from a set of synchronized cameras, which simultaneously capture the same scene (multiview video). This technology enables applications such as free-viewpoint video, which allows the viewer to select his preferred viewpoint, or 3D TV, where the depth of the scene can be perceived using a special display. Because the user-selected view does not always correspond to a camera position, it may be necessary to synthesize a virtual camera view. To synthesize such a virtual view, we have adopted a depth image-based rendering technique that employs one depth map for each camera. Consequently, a remote rendering of the 3D video requires a compression technique for texture and depth data. This paper presents a predictive-coding algorithm for the compression of depth images across multiple views. The presented algorithm provides (a) an improved coding efficiency for depth images over block-based motion-compensation encoders (H.264), and (b) random access to different views for fast rendering. The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points based on the reference depth image. The attractiveness of the depth-prediction algorithm is that the prediction of depth data avoids an independent transmission of depth for each view, while simplifying the view interpolation by synthesizing depth images for arbitrary viewpoints. We present experimental results for several multiview depth sequences, which show a quality improvement of up to 1.8 dB compared with H.264 compression.
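
    The depth-prediction idea, synthesizing one view's depth from the reference depth map plus the camera geometry, can be sketched as a forward warp with a z-buffer. The coding machinery (residual encoding, comparison against H.264) is omitted, and the camera parameters are assumed known:

      import numpy as np

      def predict_depth_in_view(depth, K, R, t):
          # Warp a reference depth map into a target camera with rotation R and translation t.
          h, w = depth.shape
          u, v = np.meshgrid(np.arange(w), np.arange(h))
          pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T.astype(float)
          pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)   # back-project to 3D
          pts_t = R @ pts + t.reshape(3, 1)                       # move to target camera frame
          proj = K @ pts_t
          u2 = np.round(proj[0] / proj[2]).astype(int)
          v2 = np.round(proj[1] / proj[2]).astype(int)
          ok = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h) & (pts_t[2] > 0)
          pred = np.full((h, w), np.inf)
          np.minimum.at(pred, (v2[ok], u2[ok]), pts_t[2][ok])     # z-buffer: keep nearest surface
          pred[np.isinf(pred)] = 0.0                              # 0 marks disocclusions
          return pred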

  10. Increasing the imaging depth through computational scattering correction (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Koberstein-Schwarz, Benno; Omlor, Lars; Schmitt-Manderbach, Tobias; Mappes, Timo; Ntziachristos, Vasilis

    2016-03-01

    Imaging depth is one of the most prominent limitations in light microscopy. The depth at which we are still able to resolve biological structures is limited by the scattering of light within the sample. We have developed an algorithm to compensate for the influence of scattering. The potential of the algorithm is demonstrated on a 3D image stack of a zebrafish embryo captured with a selective plane illumination microscope (SPIM). With our algorithm we were able to shift the depth at which scattering starts to blur the imaging and affect the image quality by around 30 µm. For the reconstruction, the algorithm uses only information from within the image stack. Therefore the algorithm can be applied to the image data from any SPIM system without further hardware adaptation. There is also no need for multiple scans from different views to perform the reconstruction. The underlying model estimates the recorded image as a convolution between the distribution of fluorophores and a point spread function, which describes the blur due to scattering. Our algorithm performs a space-variant blind deconvolution on the image. To account for the increasing amount of scattering in deeper tissue, we introduce a new regularizer that models the increasing width of the point spread function in order to improve the image quality in the depth of the sample. Since the assumptions the algorithm is based on are not limited to SPIM images, the algorithm should also be able to work on other imaging techniques that provide a 3D image volume.

  11. Enhanced optical clearing of skin in vivo and optical coherence tomography in-depth imaging

    NASA Astrophysics Data System (ADS)

    Wen, Xiang; Jacques, Steven L.; Tuchin, Valery V.; Zhu, Dan

    2012-06-01

    The strong optical scattering of skin tissue makes it very difficult for optical coherence tomography (OCT) to achieve deep imaging in skin. Significant optical clearing of in vivo rat skin sites was achieved within 15 min by topical application of the optical clearing agent PEG-400, a chemical enhancer (thiazone or propanediol), and physical massage. Only when all three components were applied together could a 15 min treatment achieve a threefold increase in the OCT reflectance from a 300 μm depth and a 31% enhancement in the imaging depth Zthreshold.

  12. An image cancellation approach to depth-from-focus

    SciTech Connect

    Lu, Shin-yee; Graser, M.

    1995-03-01

    Depth calculation of an object allows computer reconstruction of the surface of the object in three dimensions. Such information provides human operators with 3D measurements for visualization, diagnostics, and manipulation. It can also provide the necessary coordinates for semi- or fully automated operations. This paper describes a microscopic imaging system with computer vision algorithms that can obtain the depth information by making use of the shallow depth of field of microscope lenses.
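
    The shallow-depth-of-field idea above is the classic depth-from-focus recipe: sweep the focus, score local sharpness, and take the best-focused stage position per pixel. A minimal sketch (the local-variance focus measure and window size are our choices, not necessarily the paper's):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def depth_from_focus(stack, z_positions, win=9):
          # stack: (N, H, W) images taken at the stage heights listed in z_positions.
          stack = stack.astype(float)
          focus = np.stack([uniform_filter((s - uniform_filter(s, win)) ** 2, win)
                            for s in stack])              # local variance as focus measure
          return np.asarray(z_positions)[np.argmax(focus, axis=0)]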

  13. Multiview image and depth map coding for holographic TV system

    NASA Astrophysics Data System (ADS)

    Senoh, Takanori; Wakunami, Koki; Ichihashi, Yasuyuki; Sasaki, Hisayuki; Oi, Ryutaro; Yamamoto, Kenji

    2014-11-01

    A holographic TV system based on multiview image and depth map coding and the analysis of coding noise effects in reconstructed images is proposed. A major problem for holographic TV systems is the huge amount of data that must be transmitted. It has been shown that this problem can be solved by capturing a three-dimensional scene with multiview cameras, deriving depth maps from multiview images or directly capturing them, encoding and transmitting the multiview images and depth maps, and generating holograms at the receiver side. This method shows the same subjective image quality as hologram data transmission with about 1/97000 of the data rate. Speckle noise, which masks coding noise when the coded bit rate is not extremely low, is shown to be the main determinant of reconstructed holographic image quality.

  14. Calibrating river bathymetry via image to depth quantile transformation

    NASA Astrophysics Data System (ADS)

    Legleiter, C. J.

    2015-12-01

    Remote sensing has emerged as a powerful means of measuring river depths, but standard algorithms such as Optimal Band Ratio Analysis (OBRA) require field measurements to calibrate image-derived estimates. Such reliance upon field-based calibration undermines the advantages of remote sensing. This study introduces an alternative approach based on the probability distribution of depths d within a reach. Provided a quantity X related to d can be derived from a remotely sensed data set, image-to-depth quantile transformation (IDQT) infers depths throughout the image by linking the cumulative distribution function (CDF) of X to that of d. The algorithm involves determining, for each pixel in the image, the CDF value of its scaled value X/X̄ and then inferring the depth at that location from the inverse CDF of the scaled depths d/d̄, where the overbar denotes a reach mean. For X/X̄, an empirical CDF can be derived directly from pixel values or a probability distribution can be fitted. Similarly, the CDF of d/d̄ can be obtained from field data or from a theoretical model of the frequency distribution of d within a reach; gamma distributions have been used for this purpose. In essence, the probability distributions calibrate X to d while the image provides the spatial distribution of depths. IDQT offers a number of advantages: 1) direct field measurements of d during image acquisition are not absolutely necessary; 2) because the X vs. d relation need not be linear, negative depth estimates along channel margins and shallow bias in pools are avoided; and 3) because individual pixels are not linked to specific depth measurements, accurate geo-referencing of field and image data sets is not critical. Application of OBRA and IDQT to a gravel-bed river indicated that the new, probabilistic algorithm was as accurate as the standard, regression-based approach and led to more hydraulically reasonable bathymetric maps.
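
    The quantile transformation itself reduces to matching empirical CDFs. A sketch using a sample of depths to define the depth distribution is shown below (a fitted gamma distribution could be substituted for the sample, and the function name is illustrative):

      import numpy as np

      def idqt(x_image, depth_samples):
          # Map each pixel's CDF value under X to the depth with the same CDF value under d.
          valid = np.isfinite(x_image)
          x = x_image[valid]
          cdf_x = x.argsort().argsort() / (x.size - 1)      # empirical CDF value of each pixel
          d_sorted = np.sort(np.asarray(depth_samples, dtype=float))
          depths = np.interp(cdf_x, np.linspace(0, 1, d_sorted.size), d_sorted)
          out = np.full(x_image.shape, np.nan)
          out[valid] = depths
          return out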

  15. Volumetric depth peeling for medical image display

    NASA Astrophysics Data System (ADS)

    Borland, David; Clarke, John P.; Fielding, Julia R.; Taylor II, Russell M.

    2006-01-01

    Volumetric depth peeling (VDP) is an extension to volume rendering that enables display of otherwise occluded features in volume data sets. VDP decouples occlusion calculation from the volume rendering transfer function, enabling independent optimization of settings for rendering and occlusion. The algorithm is flexible enough to handle multiple regions occluding the object of interest, as well as object self-occlusion, and requires no pre-segmentation of the data set. VDP was developed as an improvement for virtual arthroscopy for the diagnosis of shoulder-joint trauma, and has been generalized for use in other simple and complex joints, and to enable non-invasive urology studies. In virtual arthroscopy, the surfaces in the joints often occlude each other, allowing limited viewpoints from which to evaluate these surfaces. In urology studies, the physician would like to position the virtual camera outside the kidney collecting system and see inside it. By rendering invisible all voxels between the observer's point of view and objects of interest, VDP enables viewing from unconstrained positions. In essence, VDP can be viewed as a technique for automatically defining an optimal data- and task-dependent clipping surface. Radiologists using VDP display have been able to perform evaluations of pathologies more easily and more rapidly than with clinical arthroscopy, standard volume rendering, or standard MRI/CT slice viewing.

  16. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    NASA Astrophysics Data System (ADS)

    Legleiter, Carl J.

    2016-05-01

    Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R2 = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.

  17. Total variation based image deconvolution for extended depth-of-field microscopy images

    NASA Astrophysics Data System (ADS)

    Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.

    2015-03-01

    One approach for a detailed understanding of dynamic cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with high lateral resolution in the range of a few hundred nanometers. In a previous study, extended depth-of-field microscopy (EDF-microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was demonstrated in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of the light. Hence the focal ellipsoid is smeared out and the images initially appear blurred. Image restoration by deconvolution, using the known point spread function (PSF) of the optical system, is necessary to achieve sharp microscopic images with an extended depth of field. This work is focused on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. The inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and because the PSF used for deconvolution exactly fits only one plane within the object. We use nonlinear Total Variation based image restoration techniques, in which different types of noise can be treated properly. Various algorithms are evaluated for artificially generated 3D images as well as for fluorescence measurements of BPAE cells.

  18. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods. PMID:23996589

  19. Increasing the depth of field in Multiview 3D images

    NASA Astrophysics Data System (ADS)

    Lee, Beom-Ryeol; Son, Jung-Young; Yano, Sumio; Jung, Ilkwon

    2016-06-01

    A super-multiview condition simulator that can project up to four different view images to each eye is introduced. Experiments with this simulator, using images having both disparity and perspective, show that the depth of field (DOF) is extended beyond the default DOF value as the number of different view images projected simultaneously but separately to each eye increases. The DOF range can be extended to nearly 2 diopters with four simultaneous view images. However, the DOF increments are less prominent for the image with disparity only than for the image with both disparity and perspective.

  20. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. Because such systems rely on visual image characteristics, there is a clear need to overcome image degradations such as random image-capture noise, motion, and quantization effects. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics, which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields allows a means of incorporating the temporal
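
    As a much-simplified stand-in for the reduced order model Kalman filter used in this work, a pixelwise scalar Kalman filter over a sequence of noisy depth or displacement fields might look like the sketch below (the noise variances are illustrative, and the spatial model of the ROMKF is not reproduced):

      import numpy as np

      def kalman_filter_fields(fields, process_var=1e-3, meas_var=1e-2):
          # fields: sequence of equally sized noisy depth/displacement arrays.
          est = fields[0].astype(float)
          p = np.full(est.shape, meas_var)
          for z in fields[1:]:
              p = p + process_var              # predict step
              k = p / (p + meas_var)           # Kalman gain
              est = est + k * (z - est)        # update with the new measurement
              p = (1.0 - k) * p
          return est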

  1. Visually preserving stereoscopic image retargeting using depth carving

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Ma, Huadong; Liu, Liang

    2016-03-01

    This paper presents a method for retargeting a pair of stereoscopic images. Previous works have leveraged seam carving and image warping methods for two-dimensional image editing to address this issue. However, they did not consider the full advantages of the properties of stereoscopic images. Our approach offers substantial performance improvements over the state-of-the-art; the key insights driving the approach are that the input image pair can be decomposed into different depth layers according to the disparity and image segmentation, and the depth cues allow us to address the problem in a three-dimensional (3-D) space domain for best preserving objects. We propose depth carving that extends seam carving in a single image to resize the stereo image pair with disparity consistency. Our method minimizes the shape distortion and preserves object boundaries by creating new occlusions. As a result, the retargeted image pair preserves the stereoscopic quality and protects the original 3-D scene structure. Experimental results demonstrate that our method outperforms the previous methods.

  2. Airway surface liquid depth imaged by surface laser reflectance microscopy.

    PubMed

    Thiagarajah, Jay R; Song, Yuanlin; Derichs, Nico; Verkman, A S

    2010-09-01

    The thin layer of liquid at the surface of airway epithelium, the airway surface liquid (ASL), is important in normal airway physiology and in the pathophysiology of cystic fibrosis. At present, the best method to measure ASL depth involves scanning confocal microscopy after staining with an aqueous-phase fluorescent dye. We describe here a simple, noninvasive imaging method to measure ASL depth by reflectance imaging of an epithelial mucosa in which the surface is illuminated at a 45-degree angle by an elongated 13-μm-wide rectangular beam produced by a 670-nm micro-focus laser. The principle of the method is that air-liquid, liquid-liquid, and liquid-cell interfaces produce distinct specular or diffuse reflections that can be imaged to give a micron-resolution replica of the mucosal surface. The method was validated using fluid layers of specified thicknesses and applied to measure ASL depth in cell cultures and ex vivo fragments of pig trachea. In addition, the method was adapted to measure transepithelial fluid transport from the dynamics of fluid layer depth. Compared with confocal imaging, ASL depth measurement by surface laser reflectance microscopy does not require dye staining or costly instrumentation, and can potentially be adapted for in vivo measurements using fiberoptics. PMID:20713545

  3. Coherent diffractive imaging: towards achieving atomic resolution.

    PubMed

    Dietze, S H; Shpyrko, O G

    2015-11-01

    The next generation of X-ray sources will feature highly brilliant X-ray beams that will enable the imaging of local nanoscale structures with unprecedented resolution. A general formalism to predict the achievable spatial resolution in coherent diffractive imaging, based solely on diffracted intensities, is provided. The coherent dose necessary to reach atomic resolution depends significantly on the atomic scale structure, where disordered or amorphous materials require roughly three orders of magnitude lower dose compared with the expected scaling of uniform density materials. Additionally, dose reductions for crystalline materials are predicted at certain resolutions based only on their unit-cell dimensions and structure factors. PMID:26524315

  4. Maximum imaging depth of two-photon autofluorescence microscopy in epithelial tissues

    PubMed Central

    Durr, Nicholas J.; Weisspfennig, Christian T.; Holfeld, Benjamin A.; Ben-Yakar, Adela

    2011-01-01

    Endogenous fluorescence provides morphological, spectral, and lifetime contrast that can indicate disease states in tissues. Previous studies have demonstrated that two-photon autofluorescence microscopy (2PAM) can be used for noninvasive, three-dimensional imaging of epithelial tissues down to approximately 150 μm beneath the skin surface. We report ex-vivo 2PAM images of epithelial tissue from a human tongue biopsy down to 370 μm below the surface. At greater than 320 μm deep, the fluorescence generated outside the focal volume degrades the image contrast to below one. We demonstrate that these imaging depths can be reached with 160 mW of laser power (2-nJ per pulse) from a conventional 80-MHz repetition rate ultrafast laser oscillator. To better understand the maximum imaging depths that we can achieve in epithelial tissues, we studied image contrast as a function of depth in tissue phantoms with a range of relevant optical properties. The phantom data agree well with the estimated contrast decays from time-resolved Monte Carlo simulations and show maximum imaging depths similar to that found in human biopsy results. This work demonstrates that the low staining inhomogeneity (∼20) and large scattering coefficient (∼10 mm−1) associated with conventional 2PAM limit the maximum imaging depth to 3 to 5 mean free scattering lengths deep in epithelial tissue. PMID:21361692

  5. Quantitative comparison of the OCT imaging depth at 1300 nm and 1600 nm

    PubMed Central

    Kodach, V. M.; Kalkman, J.; Faber, D. J.; van Leeuwen, T. G.

    2010-01-01

    One of the present challenges in optical coherence tomography (OCT) is the visualization of deeper structural morphology in biological tissues. Owing to reduced scattering, a larger imaging depth can be achieved by using longer wavelengths. In this work, we analyze the OCT imaging depth at wavelengths around 1300 nm and 1600 nm by comparing the scattering coefficient and OCT imaging depth for a range of Intralipid concentrations at constant water content. We observe an enhanced OCT imaging depth for 1600 nm compared to 1300 nm for Intralipid concentrations larger than 4 vol.%. For higher Intralipid concentrations, the imaging depth enhancement reaches 30%. The ratio of scattering coefficients at the two wavelengths is constant over a large range of scattering coefficients and corresponds to a scattering power of 2.8 ± 0.1. Based on our results we expect for biological tissues an increase of the OCT imaging depth at 1600 nm compared to 1300 nm for samples with high scattering power and low water content. PMID:21258456
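
    As a worked note, the reported scattering power corresponds to the usual power-law model for Intralipid scattering; the relations below simply restate the abstract's value of 2.8 and the ratio of scattering coefficients it implies.

      \mu_s(\lambda) \propto \lambda^{-w}, \qquad
      w = \frac{\ln\!\left[\mu_s(1300\,\mathrm{nm}) / \mu_s(1600\,\mathrm{nm})\right]}{\ln(1600/1300)} \approx 2.8
      \;\Rightarrow\; \frac{\mu_s(1300\,\mathrm{nm})}{\mu_s(1600\,\mathrm{nm})} \approx \left(\frac{1600}{1300}\right)^{2.8} \approx 1.8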

  6. Effects of the "Auditory Discrimination in Depth Program" on Auditory Conceptualization and Reading Achievement.

    ERIC Educational Resources Information Center

    Roberts, Timothy Gerald

    Statistically significant differences were not found between the treatment and non-treatment groups in a study designed to investigate the effectiveness of the Auditory Discrimination in Depth (A.D.D.) Program. The treatment group involved thirty-nine normally achieving and educationally handicapped students who were given the A.D.D. Program…

  7. Underwater Depth Estimation and Image Restoration Based on Single Images.

    PubMed

    Drews, Paulo L J; Nascimento, Erickson R; Botelho, Silvia S C; Campos, Mario Fernando Montenegro

    2016-01-01

    In underwater environments, the scattering and absorption phenomena affect the propagation of light, degrading the quality of captured images. In this work, the authors present a method based on a physical model of light propagation that takes into account the most significant effects to image degradation: absorption, scattering, and backscattering. The proposed method uses statistical priors to restore the visual quality of the images acquired in typical underwater scenarios. PMID:26960026
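
    The abstract names absorption, scattering and backscattering but does not reproduce the model; a commonly used simplified underwater image formation model for this class of restoration methods (stated here as an assumption, not necessarily the authors' exact formulation) is:

      I_c(x) = J_c(x)\, t_c(x) + A_c \bigl(1 - t_c(x)\bigr), \qquad t_c(x) = e^{-\eta_c\, d(x)}

    Here I_c is the observed image in channel c, J_c the restored scene radiance, A_c the backscatter (veiling) light, t_c the transmission, η_c a per-channel attenuation coefficient and d(x) the scene depth; an estimate of t_c(x) therefore yields relative depth up to the unknown η_c.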

  8. Spatial Filter Based Bessel-Like Beam for Improved Penetration Depth Imaging in Fluorescence Microscopy

    NASA Astrophysics Data System (ADS)

    Purnapatra, Subhajit B.; Bera, Sampa; Mondal, Partha Pratim

    2012-09-01

    Monitoring and visualizing specimens at a large penetration depth is a challenge. At depths of hundreds of microns, several physical effects (such as, scattering, PSF distortion and noise) deteriorate the image quality and prohibit a detailed study of key biological phenomena. In this study, we use a Bessel-like beam in-conjugation with an orthogonal detection system to achieve depth imaging. A Bessel-like penetrating diffractionless beam is generated by engineering the back-aperture of the excitation objective. The proposed excitation scheme allows continuous scanning by simply translating the detection PSF. This type of imaging system is beneficial for obtaining depth information from any desired specimen layer, including nano-particle tracking in thick tissue. As demonstrated by imaging the fluorescent polymer-tagged-CaCO3 particles and yeast cells in a tissue-like gel-matrix, the system offers a penetration depth that extends up to 650 µm. This achievement will advance the field of fluorescence imaging and deep nano-particle tracking.

  9. Spatial Filter Based Bessel-Like Beam for Improved Penetration Depth Imaging in Fluorescence Microscopy

    PubMed Central

    Purnapatra, Subhajit B.; Bera, Sampa; Mondal, Partha Pratim

    2012-01-01

    Monitoring and visualizing specimens at a large penetration depth is a challenge. At depths of hundreds of microns, several physical effects (such as, scattering, PSF distortion and noise) deteriorate the image quality and prohibit a detailed study of key biological phenomena. In this study, we use a Bessel-like beam in-conjugation with an orthogonal detection system to achieve depth imaging. A Bessel-like penetrating diffractionless beam is generated by engineering the back-aperture of the excitation objective. The proposed excitation scheme allows continuous scanning by simply translating the detection PSF. This type of imaging system is beneficial for obtaining depth information from any desired specimen layer, including nano-particle tracking in thick tissue. As demonstrated by imaging the fluorescent polymer-tagged-CaCO3 particles and yeast cells in a tissue-like gel-matrix, the system offers a penetration depth that extends up to 650 µm. This achievement will advance the field of fluorescence imaging and deep nano-particle tracking. PMID:23012646

  10. Convective gas flow development and the maximum depths achieved by helophyte vegetation in lakes

    PubMed Central

    Sorrell, Brian K.; Hawes, Ian

    2010-01-01

    Background and Aims Convective gas flow in helophytes (emergent aquatic plants) is thought to be an important adaptation for the ability to colonize deep water. In this study, the maximum depths achieved by seven helophytes were compared in 17 lakes differing in nutrient enrichment, light attenuation, shoreline exposure and sediment characteristics to establish the importance of convective flow for their ability to form the deepest helophyte vegetation in different environments. Methods Convective gas flow development was compared amongst the seven species, and species were allocated to ‘flow absent’, ‘low flow’ and ‘high flow’ categories. Regression tree analysis and quantile regression analysis were used to determine the roles of flow category, lake water quality, light attenuation and shoreline exposure on maximum helophyte depths. Key Results Two ‘flow absent’ species were restricted to very shallow water in all lakes and their depths were not affected by any environmental parameters. Three ‘low flow’ and two ‘high flow’ species had wide depth ranges, but ‘high flow’ species formed the deepest vegetation far more frequently than ‘low flow’ species. The ‘low flow’ species formed the deepest vegetation most commonly in oligotrophic lakes where oxygen demands in sediments were low, especially on exposed shorelines. The ‘high flow’ species were almost always those forming the deepest vegetation in eutrophic lakes, with Eleocharis sphacelata predominant when light attenuation was low, and Typha orientalis when light attenuation was high. Depths achieved by all five species with convective flow were limited by shoreline exposure, but T. orientalis was the least exposure-sensitive species. Conclusions Development of convective flow appears to be essential for dominance of helophyte species in >0·5 m depth, especially under eutrophic conditions. Exposure, sediment characteristics and light attenuation frequently constrain them

  11. Animated Depth Images for Interactive Remote Visualization of Time-Varying Data Sets.

    PubMed

    Cui, Jian; Ma, Zhiqiang; Popescu, Voicu

    2014-11-01

    Remote visualization has become both a necessity, as data set sizes have grown faster than computer network performance, and an opportunity, as laptop, tablet, and smartphone mobile computing platforms have become ubiquitous. However, the conventional remote visualization (CRV) approach of sending a new image from the server to the client for every view parameter change suffers from reduced interactivity. One problem is high latency, as the network has to be traversed twice, once to communicate the view parameters to the server and once to transmit the new image to the client. A second problem is reduced image quality due to aggressive compression or low resolution. We address these problems by constructing and transmitting enhanced images that are sufficient for quality output frame reconstruction at the client for a range of view parameter values. The client reconstructs thousands of frames locally, without any additional data from the server, which avoids latency and aggressive compression. We introduce animated depth images, which not only store a color and depth sample at every pixel, but also store the trajectory of the samples for a given time interval. Sample trajectories are stored compactly by partitioning the image into semi-rigid sample clusters and by storing one sequence of rigid body transformations per cluster. Animated depth images leverage sample trajectory coherence to achieve a good compression of animation data, with a small and user-controllable approximation error. We demonstrate animated depth images in the context of finite element analysis and SPH data sets. PMID:26355328
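
    A minimal sketch of the data layout described above (per-pixel color and depth plus one rigid-transform sequence per semi-rigid cluster); the class and field names are illustrative assumptions, not the authors' API.

      from dataclasses import dataclass
      import numpy as np

      @dataclass
      class AnimatedDepthImage:
          color:      np.ndarray   # H x W x 3, color sample per pixel
          depth:      np.ndarray   # H x W, depth sample per pixel
          cluster_id: np.ndarray   # H x W, semi-rigid cluster index of each sample
          # transforms[c][k] is a 4x4 rigid-body matrix for cluster c at animation step k
          transforms: list

          def sample_position(self, y, x, step, unproject):
              """Return the 3D position of pixel (y, x) at animation step `step`."""
              p_cam = unproject(x, y, self.depth[y, x])          # unproject to camera space
              p_h = np.append(p_cam, 1.0)                        # homogeneous coordinates
              M = self.transforms[self.cluster_id[y, x]][step]   # cluster's rigid motion
              return (M @ p_h)[:3]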

  12. Extended focused imaging and depth map reconstruction in optical scanning holography.

    PubMed

    Ren, Zhenbo; Chen, Ni; Lam, Edmund Y

    2016-02-10

    In conventional microscopy, specimens lying within the depth of field are clearly recorded whereas other parts are blurry. Although digital holographic microscopy allows post-processing on holograms to reconstruct multifocus images, it suffers from defocus noise in numerical reconstruction, as a traditional microscope does. In this paper, we demonstrate a method that can achieve extended focused imaging (EFI) and reconstruct a depth map (DM) of three-dimensional (3D) objects. We first use a depth-from-focus algorithm based on entropy minimization to create a per-pixel DM. Then we show how to achieve EFI of the whole 3D scene computationally. Simulation and experimental results involving objects with multiple axial sections are presented to validate the proposed approach. PMID:26906373
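
    A simplified sketch of the depth-from-focus step (selecting, per pixel, the reconstruction plane that minimizes local entropy, then compositing the EFI); the window size and histogram-based entropy estimator are illustrative assumptions.

      import numpy as np

      def local_entropy(img, win=9, bins=32):
          """Shannon entropy of intensities in a win x win window around each pixel."""
          H, W = img.shape
          pad = win // 2
          padded = np.pad(img, pad, mode='reflect')
          ent = np.empty((H, W))
          for y in range(H):
              for x in range(W):
                  patch = padded[y:y + win, x:x + win]
                  hist, _ = np.histogram(patch, bins=bins)
                  p = hist[hist > 0] / patch.size
                  ent[y, x] = -np.sum(p * np.log(p))
          return ent

      def efi_and_depth_map(reconstructions, depths):
          """reconstructions: images numerically refocused at the planes listed in `depths`."""
          ent = np.stack([local_entropy(r) for r in reconstructions])   # n_planes x H x W
          best = np.argmin(ent, axis=0)                  # plane of minimum entropy per pixel
          depth_map = np.asarray(depths)[best]           # reconstructed depth map (DM)
          stack = np.stack(reconstructions)
          efi = np.take_along_axis(stack, best[None], axis=0)[0]        # extended focused image
          return efi, depth_map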

  13. Robust image, depth, and occlusion generation from uncalibrated stereo

    NASA Astrophysics Data System (ADS)

    Barenbrug, B.; Berretty, R.-P. M.; Klein Gunnewiek, R.

    2008-02-01

    Philips is developing a product line of multi-view auto-stereoscopic 3D displays [1]. For interfacing, the image-plus-depth format is used [2, 3]. Being independent of specific display properties, such as number of views, view mapping on the pixel grid, etc., this interface format allows optimal multi-view visualisation of content from many different sources, while maintaining interoperability between display types. A vastly growing number of productions from the entertainment industry are aiming at 3D movie theatres. These productions use a two-view format, primarily intended for eye-wear assisted viewing. It has been shown [4] how to convert these sequences into the image-plus-depth format. This results in a single-layer depth profile, lacking information about areas that are occluded and can be revealed by the stereoscopic parallax. Recently, it has been shown how to compute intermediate views for a stereo pair [4, 5]. Unfortunately, these approaches are not compatible with the image-plus-depth format, which might hamper their applicability for broadcast 3D television [3].

  14. Depth

    PubMed Central

    Koenderink, Jan J; van Doorn, Andrea J; Wagemans, Johan

    2011-01-01

    Depth is the feeling of remoteness, or separateness, that accompanies awareness in human modalities like vision and audition. In specific cases depths can be graded on an ordinal scale, or even measured quantitatively on an interval scale. In the case of pictorial vision this is complicated by the fact that human observers often appear to apply mental transformations that involve depths in distinct visual directions. This implies that a comparison of empirically determined depths between observers involves pictorial space as an integral entity, whereas comparing pictorial depths as such is meaningless. We describe the formal structure of pictorial space purely in the phenomenological domain, without taking recourse to the theories of optics which properly apply to physical space—a distinct ontological domain. We introduce a number of general ways to design and implement methods of geodesy in pictorial space, and discuss some basic problems associated with such measurements. We deal mainly with conceptual issues. PMID:23145244

  15. 3D motion artifact compensation in CT image with depth camera

    NASA Astrophysics Data System (ADS)

    Ko, Youngjun; Baek, Jongduk; Shim, Hyunjung

    2015-02-01

    Computed tomography (CT) is a medical imaging technology that uses computer-processed X-ray projections to acquire tomographic images, or slices, of a specific organ of the body. A motion artifact caused by patient motion is a common problem in CT systems and may introduce undesirable artifacts in CT images. This paper analyzes the critical problems in motion artifacts and proposes a new CT system for motion artifact compensation. We employ depth cameras to capture the patient motion and account for it in the CT image reconstruction. In this way, we achieve a significant improvement in motion artifact compensation that is not possible with previous techniques.

  16. Ultra-long scan depth optical coherence tomography for imaging the anterior segment of human eye

    NASA Astrophysics Data System (ADS)

    Zhu, Dexi; Shen, Meixiao; Leng, Lin

    2012-12-01

    Spectral domain optical coherence tomography (SD-OCT) was developed in order to image the anterior segment of the human eye. The optical path at the reference arm was switched to compensate for the sensitivity drop in OCT images. A scan depth of 12.28 mm and an axial resolution of 12.8 μm in air were achieved. The anterior segment from the cornea to the posterior surface of the crystalline lens was clearly imaged and measured using this system. A custom-designed Badal optometer was coupled into the sample arm to induce accommodation, and the movement of the crystalline lens was traced after image registration. Our research demonstrates that SD-OCT with ultra-long scan depth can be used to image the human eye for accommodation research.

  17. Thermal parametric imaging in the evaluation of skin burn depth.

    PubMed

    Rumiński, Jacek; Kaczmarek, Mariusz; Renkielska, Alicja; Nowakowski, Antoni

    2007-02-01

    The aim of this paper is to determine the extent to which infrared (IR) thermal imaging may be used for skin burn depth evaluation. The analysis can be made on the basis of the development of a thermal model of the burned skin. Different methods such as the traditional clinical visual approach and the IR imaging modalities of static IR thermal imaging, active IR thermal imaging and active-dynamic IR thermal imaging (ADT) are analyzed from the point of view of skin burn depth diagnostics. In ADT, a new approach is proposed on the basis of parametric image synthesis. Calculation software is implemented for single-node and distributed systems. The properties of all the methods are verified in experiments using phantoms and subsequently in vivo with animals with a reference histopathological examination. The results indicate that it is possible to distinguish objectively and quantitatively burns which will heal spontaneously within three weeks of infliction and which should be treated conservatively from those which need surgery because they will not heal within this period. PMID:17278587

  18. Depth Imaging of OBS Reflection Data With Wave Field Separation

    NASA Astrophysics Data System (ADS)

    Asakawa, E.; Mizohata, S.; Tanaka, H.; Mikada, H.; Nishizawa, A.

    2007-12-01

    We propose a newly developed depth imaging approach for OBS (Ocean Bottom Seismometer) reflection data in active-source structural surveys using wavefield separation and PSDM (Prestack Depth Migration). OBS data include many valuable signals, not only reflections but also refractions. However, the wavefield is contaminated with various kinds of waves that degrade the quality of the depth imaging of OBS reflections. Water reverberations, for example, have been regarded as a strong source of noise in the imaging. Surprisingly, we found that the multiple reflection waves have wide spreads of reflection points, and we take advantage of this feature to improve the depth imaging after careful processing of the acquired OBS data. We would like to demonstrate that multiples can be utilized to enhance the signal-to-noise ratio. The processing of OBS data in this study is summarized as follows. First, we categorize the OBS wavefield into two parts, i.e., near- and far-offset data. The near- and far-offset data here are waves that arrive after and before the direct water arrival, respectively. Then we separate both the near and far wavefields into two parts at the arrival of the first-order multiple. The reflection signals before the multiple are primary and up-going waves, whereas the reflections after the multiple events are mainly multiple and down-going in a common receiver gather. After these time-based separations, we apply up/down-going wavefield separation using geophone vertical-component and hydrophone data. The hydrophone records water pressure and is therefore omni-directional, while the vertical component of the geophone measures a component of the vector response. This characteristic difference leads us to the separation of up-going primary reflections and down-going multiples using the polarity differences due to the propagation direction of incoming waves. Finally, we separate the OBS reflections into 4 domains: near offset primary, near offset multiple, far offset primary and far offset

  19. Particle-Image Velocimeter Having Large Depth of Field

    NASA Technical Reports Server (NTRS)

    Bos, Brent

    2009-01-01

    An instrument that functions mainly as a particle-image velocimeter provides data on the sizes and velocities of flying opaque particles. The instrument is being developed as a means of characterizing fluxes of wind-borne dust particles in the Martian atmosphere. The instrument could also be adapted to terrestrial use in measuring sizes and velocities of opaque particles carried by natural winds and industrial gases. Examples of potential terrestrial applications include monitoring of airborne industrial pollutants and airborne particles in mine shafts. The design of this instrument reflects an observation, made in field research, that airborne dust particles derived from soil and rock are opaque enough to be observable by use of bright field illumination with high contrast for highly accurate measurements of sizes and shapes. The instrument includes a source of collimated light coupled to an afocal beam expander and an imaging array of photodetectors. When dust particles travel through the collimated beam, they cast shadows. The shadows are magnified by the beam expander and relayed to the array of photodetectors. Inasmuch as the images captured by the array are of dust-particle shadows rather than of the particles themselves, the depth of field of the instrument can be large: the instrument has a depth of field of about 11 mm, which is larger than the depths of field of prior particle-image velocimeters. The instrument can resolve, and measure the sizes and velocities of, particles having sizes in the approximate range of 1 to 300 μm. For slowly moving particles, data from two image frames are used to calculate velocities. For rapidly moving particles, image smear lengths from a single frame are used in conjunction with particle-size measurement data to determine velocities.

  20. Obtaining anisotropic velocity data for proper depth seismic imaging

    SciTech Connect

    Egerev, Sergey; Yushin, Victor; Ovchinnikov, Oleg; Dubinsky, Vladimir; Patterson, Doug

    2012-05-24

    The paper deals with the problem of obtaining anisotropic velocity data from continuous acoustic impedance-based measurements made while scanning in the axial direction along the walls of the borehole. Diagrams of the full conductivity of the piezoceramic transducer were used to derive anisotropy parameters of the rock sample. The measurements are aimed at supporting accurate depth imaging of seismic data. Understanding these common anisotropy effects is important when interpreting data where anisotropy is present.

  1. Prestack depth imaging via model-independent stacking

    NASA Astrophysics Data System (ADS)

    Druzhinin, Alexander; MacBeth, Colin; Hitchen, Ken

    1999-12-01

    Most seismic reflection imaging methods are confronted with the difficulty of accurately knowing input velocity information. To eliminate this, we develop a special prestack depth migration technique which avoids the necessity of constructing a macro-velocity model. It is based upon the weighted Kirchhoff-type migration formula expressed in terms of model-independent stacking velocity and arrival angle. This formula is applied to synthetic sub-basaltic data. Numerical results show that the method can be used to successfully image beneath basalts.

  2. Imaginative resonance training (IRT) achieves elimination of amputees' phantom pain (PLP) coupled with a spontaneous in-depth proprioception of a restored limb as a marker for permanence and supported by pre-post functional magnetic resonance imaging (fMRI).

    PubMed

    Meyer, Paul; Matthes, Christoph; Kusche, Karl Erwin; Maurer, Konrad

    2012-05-31

    Non-pharmacological approaches such as mirror therapy and graded motor imagery often provide amelioration of amputees' phantom limb pain (PLP), but elimination has proved difficult to achieve. Proprioception of the amputated limb has been noted in studies to be defective and/or distorted in the presence of PLP, but has not, apparently, been researched for various stages of amelioration up to the absence of PLP. Previous studies using functional magnetic resonance imaging (fMRI) suggested that pathological cortical reorganisation after amputation may be the underlying neurobiological correlate of PLP. We report two cases of permanent elimination of PLP after application of imaginative resonance training. The patients, 69 years and 84 years old, reported freedom from PLP together with in-depth achievement of proprioception of a restored limb at the end of the treatment, which may thus be taken as an indication of permanence. Pre/post fMRI for the first case showed, against a group of healthy controls, analogous changes of activation in the sensorimotor cortex. PMID:22748628

  3. Measuring perceived depth in natural images and study of its relation with monocular and binocular depth cues

    NASA Astrophysics Data System (ADS)

    Lebreton, Pierre; Raake, Alexander; Barkowsky, Marcus; Le Callet, Patrick

    2014-03-01

    The perception of depth in images and video sequences is based on different depth cues. Studies have considered the depth perception threshold as a function of viewing distance (Cutting and Vishton, 1995), and the combination of different monocular depth cues, their quantitative relation with binocular depth cues, and their different possible types of interactions (Landy, 1995). But these studies only consider artificial stimuli, and none of them attempts to quantify the contributions of monocular and binocular depth cues relative to each other in the specific context of natural images. This study targets this particular application case: the evaluation of the strength of different depth cues relative to each other, using a carefully designed image database that covers as many different combinations of monocular (linear perspective, texture gradient, relative size and defocus blur) and binocular depth cues as possible. The 200 images were evaluated in two distinct subjective experiments to separately evaluate perceived depth and the different monocular depth cues. The methodology and the definition of the different scales are detailed. The image database (DC3Dimg) is also released for the scientific community.

  4. Multidirectional curved integral imaging with large depth by additional use of a large-aperture lens.

    PubMed

    Shin, Dong-Hak; Lee, Byoungho; Kim, Eun-Soo

    2006-10-01

    We propose a curved integral imaging system with large depth achieved by the additional use of a large-aperture lens in a conventional large-depth integral imaging system. The additional large-aperture lens provides a multidirectional curvature effect and improves the viewing angle. The proposed system has a simple structure due to the use of well-fabricated, unmodified flat devices. To calculate the proper elemental images for the proposed system, we explain a modified computer-generated pickup technique based on an ABCD matrix and analyze an effective viewing zone in the proposed system. From experiments, we show that the proposed system has an improved viewing angle of more than 7 degrees compared with conventional integral imaging. PMID:16983427

  5. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20-MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20-MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated realizing low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, the largest depth image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14-Mp color and full HD depth images simultaneously. The resulting high-definition color/depth image and its capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
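
    For context, the standard continuous-wave ToF relations at the 20-MHz modulation frequency quoted above (textbook relations, not details taken from the paper) are:

      d = \frac{c\,\Delta\varphi}{4\pi f_{\mathrm{mod}}}, \qquad
      d_{\max} = \frac{c}{2 f_{\mathrm{mod}}} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \times 20\times 10^{6}\ \mathrm{Hz}} = 7.5\ \mathrm{m}

    At 20 MHz, 1 mrad of phase-measurement precision corresponds to roughly 1.2 mm of depth, which is consistent with the mm-scale accuracy quoted.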

  6. Dual-imaging system for burn depth diagnosis.

    PubMed

    Ganapathy, Priya; Tamminedi, Tejaswi; Qin, Yi; Nanney, Lillian; Cardwell, Nancy; Pollins, Alonda; Sexton, Kevin; Yadegar, Jacob

    2014-02-01

    Currently, determination of burn depth and healing outcomes has been limited to subjective assessment or a single modality, e.g., laser Doppler imaging. Such measures have proven less than ideal. Recent developments in other non-contact technologies such as optical coherence tomography (OCT) and pulse speckle imaging (PSI) offer the promise that an intelligent fusion of information across these modalities can improve visualization of burn regions, thereby increasing the sensitivity of the diagnosis. In this work, we combined OCT and PSI images to classify the degree of burn (superficial, partial-thickness and full-thickness burns). Algorithms were developed to integrate and visualize skin structure (with and without burns) from the two modalities. We have completed the proposed initiatives by employing a porcine burn model and compiled results that attest to the utility of our proposed dual-modal fusion approach. Computer-derived data indicating the varying burn depths were validated through immunohistochemical analysis performed on burned skin tissue. The combined performance of OCT and PSI modalities provided an overall ROC-AUC=0.87 (significant at p<0.001) in classifying different burn types measured 1 h after creating the burn wounds. Porcine model studies to assess feasibility of this dual-imaging system for wound tracking are underway. PMID:23790396

  7. Developing a methodology for imaging stress transients at seismogenic depth

    NASA Astrophysics Data System (ADS)

    Valette-Silver, N.; Silver, P. G.; Niu, F.; Daley, T.; Majer, E. L.

    2003-12-01

    It is well known that the crust contains cracks down to a depth of several kilometers. The dependence of crustal seismic velocities on crack properties, and in turn, the dependence of crack properties on stress, means that seismic velocity exhibits stress dependence. This dependence constitutes a powerful instrument for studying subsurface transient changes in stress. While these relationships have been known for several decades, time-dependent seismic imaging has not, as of yet, become a reliable means of measuring subsurface seismogenic stress changes. There are two primary reasons for this: 1) lack of sufficient delay-time precision necessary to detect small changes in stress, and 2) the difficulty in establishing a reliable calibration between stress and seismic velocity. The best sources of calibration are the solid-earth tides and barometric pressure, both of which produce weak stress perturbations of order 10^2-10^3 Pa. Detecting these sources of stress requires precision in the measurement of fractional velocity changes δv/v of order 10^-5 to 10^-6, based on laboratory experiments. Preliminary field experiments and the analysis of uncertainty from known sources of error suggest that the above precision is now in fact achievable with an active source. Since the most common way of measuring δv/v is by measuring the fractional change in travel time along the path, δT/T = -δv/v, one of the dominant issues in measuring temporal changes in velocity between source and receiver is how precisely we can measure travel time. Analysis based on the Cramer-Rao Lower Bound in signal processing provides a means of identifying optimal choices of parameters in designing the experimental setup, the geometry, and source characteristics so as to maximize precision. For example, the optimal frequency for measuring δT/T is found to be proportional to the Q of the medium. As an illustration, given a Q of 60 and source-receiver distances of 3 m, 30 m, 100 m and 2000 m the

  8. Efficient human pose estimation from single depth images.

    PubMed

    Shotton, Jamie; Girshick, Ross; Fitzgibbon, Andrew; Sharp, Toby; Cook, Mat; Finocchio, Mark; Moore, Richard; Kohli, Pushmeet; Criminisi, Antonio; Kipman, Alex; Blake, Andrew

    2013-12-01

    We describe two new approaches to human pose estimation. Both can quickly and accurately predict the 3D positions of body joints from a single depth image without using any temporal information. The key to both approaches is the use of a large, realistic, and highly varied synthetic set of training images. This allows us to learn models that are largely invariant to factors such as pose, body shape, field-of-view cropping, and clothing. Our first approach employs an intermediate body parts representation, designed so that an accurate per-pixel classification of the parts will localize the joints of the body. The second approach instead directly regresses the positions of body joints. By using simple depth pixel comparison features and parallelizable decision forests, both approaches can run super-real time on consumer hardware. Our evaluation investigates many aspects of our methods, and compares the approaches to each other and to the state of the art. Results on silhouettes suggest broader applicability to other imaging modalities. PMID:24136424
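
    The depth comparison feature referred to above compares two depth probes whose pixel offsets are normalized by the depth at the reference pixel, which makes the response approximately depth invariant; a minimal sketch (the array handling and background value are illustrative assumptions):

      import numpy as np

      LARGE_DEPTH = 1e6  # value returned for probes on background or outside the image

      def depth_feature(depth, x, u, v):
          """Depth comparison feature f_(u,v)(I, x) = d(x + u/d(x)) - d(x + v/d(x)).

          depth : H x W array of depth values (background pixels set to LARGE_DEPTH)
          x     : (row, col) reference pixel
          u, v  : 2D pixel offsets, scaled by 1/d(x) so the probe pattern shrinks
                  for distant (hence smaller) people
          """
          def probe(offset):
              r = int(x[0] + offset[0] / depth[x])
              c = int(x[1] + offset[1] / depth[x])
              if 0 <= r < depth.shape[0] and 0 <= c < depth.shape[1]:
                  return depth[r, c]
              return LARGE_DEPTH
          return probe(u) - probe(v)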

  9. Efficient Human Pose Estimation from Single Depth Images.

    PubMed

    Shotton, Jamie; Girshick, Ross; Fitzgibbon, Andrew; Sharp, Toby; Cook, Mat; Finocchio, Mark; Moore, Richard; Kohli, Pushmeet; Criminisi, Antonio; Kipman, Alex; Blake, Andrew

    2012-10-26

    We describe two new approaches to human pose estimation. Both can quickly and accurately predict the 3D positions of body joints from a single depth image, without using any temporal information. The key to both approaches is the use of a large, realistic, and highly varied synthetic set of training images. This allows us to learn models that are largely invariant to factors such as pose, body shape, field-of-view cropping, and clothing. Our first approach employs an intermediate body parts representation, designed so that an accurate per-pixel classification of the parts will localize the joints of the body. The second approach instead directly regresses the positions of body joints. By using simple depth pixel comparison features, and parallelizable decision forests, both approaches can run super-realtime on consumer hardware. Our evaluation investigates many aspects of our methods, and compares the approaches to each other and to the state of the art. Results on silhouettes suggest broader applicability to other imaging modalities. PMID:23109523

  10. Hyperspectral Imaging for Burn Depth Assessment in an Animal Model

    PubMed Central

    Chin, Michael S.; Babchenko, Oksana; Lujan-Hernandez, Jorge; Nobel, Lisa; Ignotz, Ronald; Lalikos, Janice F.

    2015-01-01

    Abstract Background: Differentiating between superficial and deep-dermal (DD) burns remains challenging. Superficial-dermal burns heal with conservative treatment; DD burns often require excision and skin grafting. Decision of surgical treatment is often delayed until burn depth is definitively identified. This study’s aim is to assess the ability of hyperspectral imaging (HSI) to differentiate burn depth. Methods: Thermal injury of graded severity was generated on the dorsum of hairless mice with a heated brass rod. Perfusion and oxygenation parameters of injured skin were measured with HSI, a noninvasive method of diffuse reflectance spectroscopy, at 2 minutes, 1, 24, 48 and 72 hours after wounding. Burn depth was measured histologically in 12 mice from each burn group (n = 72) at 72 hours. Results: Three levels of burn depth were verified histologically: intermediate-dermal (ID), DD, and full-thickness. At 24 hours post injury, total hemoglobin (tHb) increased by 67% and 16% in ID and DD burns, respectively. In contrast, tHb decreased to 36% of its original levels in full-thickness burns. Differences in deoxygenated and tHb among all groups were significant (P < 0.001) at 24 hours post injury. Conclusions: HSI was able to differentiate among 3 discrete levels of burn injury. This is likely because of its correlation with skin perfusion: superficial burn injury causes an inflammatory response and increased perfusion to the burn site, whereas deeper burns destroy the dermal microvasculature and a decrease in perfusion follows. This study supports further investigation of HSI in early burn depth assessment. PMID:26894016

  11. A depth camera for natural human-computer interaction based on near-infrared imaging and structured light

    NASA Astrophysics Data System (ADS)

    Liu, Yue; Wang, Liqiang; Yuan, Bo; Liu, Hao

    2015-08-01

    The design of a novel depth camera is presented, targeting close-range (20-60 cm) natural human-computer interaction, especially for mobile terminals. In order to achieve high precision through the working range, a two-step method is employed to map the near-infrared intensity image to absolute depth in real time. First, we use structured light, produced by an 808-nm laser diode and a Dammann grating, to coarsely quantize the output space of depth values into discrete bins. Then we use a learning-based classification forest algorithm to predict the depth distribution over these bins for each pixel in the image. The quantitative experimental results show that this depth camera has 1% precision over the range of 20-60 cm, which shows that the camera suits resource-limited and low-cost applications.

  12. Multi-layer 3D imaging using a few viewpoint images and depth map

    NASA Astrophysics Data System (ADS)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. We iterate simple "Shift and Subtraction" processes to make each layer image alternately. An image made in accordance with the depth map, like volume slicing by gradations, is used as the initial solution of the iteration process. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images were enough to make multi-layer images for displaying a 3D image. Because the number of viewpoint images is limited, the viewing area that allows stereoscopic viewing becomes narrow. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a +/- 20 degree area. In addition, we render pseudo multiple-viewpoint images using the depth map, so we can generate motion parallax at the same time.

  13. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers

    PubMed Central

    Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer

    2015-01-01

    The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m−2 when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m−2 with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037

  14. Can wavefront coding infrared imaging system achieve decoded images approximating to in-focus infrared images?

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Zhang, Chengshuo; Xu, Baoshu; Shi, Zelin

    2015-11-01

    Artefacts and noise degrade the decoded image of a wavefront coding infrared imaging system, which usually results in the decoded image being inferior to the in-focus infrared image of a conventional infrared imaging system. A previous letter showed that the decoded image fell behind the in-focus infrared image. For comparison, a bar-target experiment at a temperature of 20°C and two groups of outdoor experiments at temperatures of 28°C and 70°C are conducted. Experimental results prove that a wavefront coding infrared imaging system can achieve decoded images approximating their corresponding in-focus infrared images.

  15. Electrical resistivity imaging for unknown bridge foundation depth determination

    NASA Astrophysics Data System (ADS)

    Arjwech, Rungroj

    Unknown bridge foundations pose a significant safety risk due to stream scour and erosion. Records from older structures may be non-existent, incomplete, or incorrect. Nondestructive and inexpensive geophysical methods have been identified as suitable for investigating unknown bridge foundations. The objective of the present study is to apply advanced 2D electrical resistivity imaging (ERI) in order to identify the depth of unknown bridge foundations. A survey procedure is carried out in mixed-terrain water and land environments with rough topography. A conventional resistivity survey procedure is used with the electrodes installed on the stream banks; however, some electrodes must be adapted for underwater use. Tests were conducted in one laboratory experiment and at five field experiments located at three roadway bridges, a geotechnical test site, and a railway bridge. The first experiment was at the bridges with the smallest foundations, later working up in size to larger drilled shafts and spread footings. Both known and unknown foundations were investigated. The geotechnical test site is used as an experimental site for 2D and 3D ERI. The data acquisition is carried out along a 2D profile with a linear array in the dipole-dipole configuration. Data were collected using electrodes deployed directly across smaller foundations; electrodes are deployed in proximity to larger foundations to image them from the side. The 2D ERI can detect the presence of a bridge foundation but is unable to resolve its precise shape and depth. Increasing the spatial extent of the foundation permits a better image of its shape and depth. Using an electrode spacing of < 1 m to detect a slender foundation < 1 m in diameter is not feasible. The 2D ERI method that has been widely used for land surface surveys can presently be adapted effectively to water-covered environments. The method is the most appropriate geophysical method for determination of unknown bridge foundations

  16. Enhancing imaging depth by multi-angle imaging of embryonic structures

    NASA Astrophysics Data System (ADS)

    Sudheendran, Narendran; Wu, Chen; Dickinson, Mary E.; Larina, Irina V.; Larin, Kirill V.

    2014-03-01

    Because of the ease of generating transgenic/gene knockout models and the accessibility of early stages of embryogenesis, mouse and rat models have become invaluable for studying the mechanisms that underlie human birth defects. To study precisely how structural birth defects arise, ultrasound, MRI, microCT, Optical Projection Tomography (OPT), Optical Coherence Tomography (OCT) and histological methods have all been used for imaging mouse/rat embryos. However, of these methods, only OCT enables live, functional imaging with high spatial and temporal resolution. One of the major limitations of conventional OCT imaging, however, is the limited light penetration depth, which prevents acquisition of structural information from the whole embryo. Here we introduce a new imaging scheme, OCT imaging from different sides of the embryo, that extends the depth penetration of OCT to permit high-resolution imaging of 3D and 4D volumes.

  17. Hybrid Imaging for Extended Depth of Field Microscopy

    NASA Astrophysics Data System (ADS)

    Zahreddine, Ramzi Nicholas

    An inverse relationship exists in optical systems between the depth of field (DOF) and the minimum resolvable feature size. This trade-off is especially detrimental in high numerical aperture microscopy systems where resolution is pushed to the diffraction limit resulting in a DOF on the order of 500 nm. Many biological structures and processes of interest span over micron scales resulting in significant blurring during imaging. This thesis explores a two-step computational imaging technique known as hybrid imaging to create extended DOF (EDF) microscopy systems with minimal sacrifice in resolution. In the first step a mask is inserted at the pupil plane of the microscope to create a focus invariant system over 10 times the traditional DOF, albeit with reduced contrast. In the second step the contrast is restored via deconvolution. Several EDF pupil masks from the literature are quantitatively compared in the context of biological microscopy. From this analysis a new mask is proposed, the incoherently partitioned pupil with binary phase modulation (IPP-BPM), that combines the most advantageous properties from the literature. Total variation regularized deconvolution models are derived for the various noise conditions and detectors commonly used in biological microscopy. State of the art algorithms for efficiently solving the deconvolution problem are analyzed for speed, accuracy, and ease of use. The IPP-BPM mask is compared with the literature and shown to have the highest signal-to-noise ratio and lowest mean square error post-processing. A prototype of the IPP-BPM mask is fabricated using a combination of 3D femtosecond glass etching and standard lithography techniques. The mask is compared against theory and demonstrated in biological imaging applications.

  18. Theory of reflectivity blurring in seismic depth imaging

    NASA Astrophysics Data System (ADS)

    Thomson, C. J.; Kitchenside, P. W.; Fletcher, R. P.

    2016-05-01

    A subsurface extended image gather obtained during controlled-source depth imaging yields a blurred kernel of an interface reflection operator. This reflectivity kernel or reflection function is comprised of the interface plane-wave reflection coefficients and so, in principle, the gather contains amplitude versus offset or angle information. We present a modelling theory for extended image gathers that accounts for variable illumination and blurring, under the assumption of a good migration-velocity model. The method involves forward modelling as well as migration or back propagation so as to define a receiver-side blurring function, which contains the effects of the detector array for a given shot. Composition with the modelled incident wave and summation over shots then yields an overall blurring function that relates the reflectivity to the extended image gather obtained from field data. The spatial evolution or instability of blurring functions is a key concept and there is generally not just spatial blurring in the apparent reflectivity, but also slowness or angle blurring. Gridded blurring functions can be estimated with, for example, a reverse-time migration modelling engine. A calibration step is required to account for ad hoc band limitedness in the modelling and the method also exploits blurring-function reciprocity. To demonstrate the concepts, we show numerical examples of various quantities using the well-known SIGSBEE test model and a simple salt-body overburden model, both for 2-D. The moderately strong slowness/angle blurring in the latter model suggests that the effect on amplitude versus offset or angle analysis should be considered in more realistic structures. Although the description and examples are for 2-D, the extension to 3-D is conceptually straightforward. The computational cost of overall blurring functions implies their targeted use for the foreseeable future, for example, in reservoir characterization. The description is for scalar

  19. Effect of image bit depth on target acquisition modeling

    NASA Astrophysics Data System (ADS)

    Teaney, Brian P.; Reynolds, Joseph P.

    2008-04-01

    The impact of bit depth on human-in-the-loop recognition and identification performance is of particular importance when considering trade-offs between resolution and bandwidth of sensor systems. This paper presents the results from two perception studies designed to measure the effects of quantization and finite bit depth on target acquisition performance. The results in this paper allow for the inclusion of limited bit depth and quantization as an additional noise term in NVESD sensor performance models.
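
    A standard way to fold finite bit depth into a sensor noise model treats quantization as an additional uniform noise term (stated here as the usual textbook model; the specific NVESD formulation is not given in the abstract):

      \Delta = \frac{V_{\mathrm{range}}}{2^{b}}, \qquad \sigma_q^{2} = \frac{\Delta^{2}}{12}

    Each additional bit halves the quantization step Δ and so adds roughly 6 dB to the signal-to-quantization-noise ratio.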

  20. Laser speckle contrast imaging with extended depth of field for in-vivo tissue imaging

    PubMed Central

    Sigal, Iliya; Gad, Raanan; Caravaca-Aguirre, Antonio M.; Atchia, Yaaseen; Conkey, Donald B.; Piestun, Rafael; Levi, Ofer

    2013-01-01

    This work presents, to our knowledge, the first demonstration of the Laser Speckle Contrast Imaging (LSCI) technique with extended depth of field (DOF). We employ wavefront coding on the detected beam to gain quantitative information on flow speeds through a DOF extended two-fold compared to the traditional system. We characterize the system in-vitro using controlled microfluidic experiments, and apply it in-vivo to imaging the somatosensory cortex of a rat, showing improved ability to image flow in a larger number of vessels simultaneously. PMID:24466481
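
    For reference, the speckle contrast statistic underlying LSCI and a commonly used simplified relation to the speckle decorrelation time τ_c (a standard model, not a result specific to this paper) are:

      K = \frac{\sigma_s}{\langle I \rangle}, \qquad
      K^{2}(T) = \frac{\tau_c}{2T}\left[1 - \exp\!\left(-\frac{2T}{\tau_c}\right)\right]

    Here T is the camera exposure time; faster flow shortens τ_c and lowers the measured contrast K.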

  1. Undetectable Changes in Image Resolution of Luminance-Contrast Gradients Affect Depth Perception

    PubMed Central

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Morita, Toshiya

    2016-01-01

    A great number of studies have suggested a variety of ways to get depth information from two-dimensional images, such as binocular disparity, shape-from-shading, size gradient/foreshortening, aerial perspective, and so on. Are there any other new factors affecting depth perception? A recent psychophysical study has investigated the correlation between image resolution and depth sensation of Cylinder images (a rectangle containing gradual luminance-contrast changes). It was reported that higher resolution images facilitate depth perception. However, it is still not clear whether or not the finding generalizes to other kinds of visual stimuli, because there are more appropriate visual stimuli for exploring depth perception of luminance-contrast changes, such as the Gabor patch. Here, we further examined the relationship between image resolution and depth perception by conducting a series of psychophysical experiments with not only Cylinders but also Gabor patches having smoother luminance-contrast gradients. As a result, higher resolution images produced stronger depth sensation with both images. This finding suggests that image resolution affects depth perception of simple luminance-contrast differences (Gabor patch) as well as shape-from-shading (Cylinder). In addition, this phenomenon was found even when the resolution difference was undetectable. This indicates the existence of consciously available and unavailable information in our visual system. These findings further support the view that image resolution is a cue for depth perception that was previously ignored. It partially explains the unparalleled viewing experience of novel high resolution displays. PMID:26941693

  2. Depth-Resolved Multispectral Sub-Surface Imaging Using Multifunctional Upconversion Phosphors with Paramagnetic Properties.

    PubMed

    Ovanesyan, Zaven; Mimun, L Christopher; Kumar, Gangadharan Ajith; Yust, Brian G; Dannangoda, Chamath; Martirosyan, Karen S; Sardar, Dhiraj K

    2015-09-30

    Molecular imaging is a very promising technique used for surgical guidance, which requires advancements related to the properties of imaging agents and to methods for retrieving data from measured multispectral images. In this article, an upconversion material is introduced for subsurface near-infrared imaging and for the depth recovery of the material embedded below biological tissue. The results confirm a significant correlation between the analytical depth estimate of the material under the tissue and the measured ratio of emitted light from the material at two different wavelengths. Experiments with biological tissue samples demonstrate depth-resolved imaging using the rare-earth-doped multifunctional phosphors. In vitro tests reveal no significant toxicity, whereas the magnetic measurements of the phosphors show that the particles are suitable as magnetic resonance imaging agents. The confocal imaging of fibroblast cells with these phosphors reveals their potential for in vivo imaging. The depth-resolved imaging technique with such phosphors has broad implications for real-time intraoperative surgical guidance. PMID:26322519
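
    The depth estimate from a two-wavelength emission ratio is consistent with a simple Beer-Lambert style attenuation model (an illustrative assumption, not necessarily the authors' exact analytical form):

      R(d) = \frac{I(\lambda_1)}{I(\lambda_2)} = R_0\, e^{-\left[\mu_{\mathrm{eff}}(\lambda_1) - \mu_{\mathrm{eff}}(\lambda_2)\right] d}
      \;\Rightarrow\; d = \frac{\ln\!\left(R_0 / R\right)}{\mu_{\mathrm{eff}}(\lambda_1) - \mu_{\mathrm{eff}}(\lambda_2)}

    Here R_0 is the emission ratio at zero overlying tissue thickness and μ_eff(λ) are the effective attenuation coefficients of the tissue at the two emission wavelengths.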

  3. Very high frame rate volumetric integration of depth images on mobile devices.

    PubMed

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. To run such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, remains challenging however. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates of up to 47 Hz on an Nvidia Shield Tablet and 910 Hz on an Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation. PMID:26439825

  4. Performance of reduced bit-depth acquisition for optical frequency domain imaging

    PubMed Central

    Goldberg, Brian D.; Vakoc, Benjamin J.; Oh, Wang-Yuhl; Suter, Melissa J.; Waxman, Sergio; Freilich, Mark I.; Bouma, Brett E.; Tearney, Guillermo J.

    2009-01-01

    High-speed optical frequency domain imaging (OFDI) has enabled practical wide-field microscopic imaging in the biological laboratory and clinical medicine. The imaging speed of OFDI, and therefore the field of view, of current systems is limited by the rate at which data can be digitized and archived rather than the system sensitivity or laser performance. One solution to this bottleneck is to natively digitize OFDI signals at reduced bit depths, e.g., at 8-bit depth rather than the conventional 12–14 bit depth, thereby reducing overall bandwidth. However, the implications of reduced bit-depth acquisition on image quality have not been studied. In this paper, we use simulations and empirical studies to evaluate the effects of reduced bit-depth acquisition on OFDI image quality. We show that image acquisition at 8-bit depth allows high system sensitivity with only a minimal drop in the signal-to-noise ratio compared to higher bit-depth systems. Images of a human coronary artery acquired in vivo at 8-bit depth are presented and compared with images at higher bit-depth acquisition. PMID:19770914
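
    The effect of reduced bit depth can be illustrated with a toy simulation (our own sketch under assumed parameters, not the authors' model): quantize a synthetic interference fringe at different bit depths and compare the signal-to-noise ratio of the depth peak after the Fourier transform.

      import numpy as np

      def quantize(signal, bits):
          levels = 2 ** bits
          s = (signal - signal.min()) / (signal.max() - signal.min())   # map to [0, 1]
          return np.round(s * (levels - 1)) / (levels - 1)

      n = 2048
      k = np.arange(n)
      fringe = 0.5 + 0.4 * np.cos(2 * np.pi * 267 * k / n)   # single-reflector fringe, 267 cycles

      for bits in (8, 12, 14):
          q = quantize(fringe, bits)
          spec = np.abs(np.fft.rfft(q - q.mean())) ** 2
          peak = int(spec.argmax())                            # depth peak at bin 267
          noise = np.delete(spec, [0, peak - 1, peak, peak + 1]).mean()
          print(bits, "bits: peak SNR ~", round(10 * np.log10(spec[peak] / noise), 1), "dB")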

  5. Removing the cardboard effect in stereoscopic images using smoothed depth maps

    NASA Astrophysics Data System (ADS)

    Shimono, Koichi; Tam, Wa James; Vázquez, Carlos; Speranza, Filippo; Renaud, Ron

    2010-02-01

    Depth maps are important for generating images with new camera viewpoints from a single source image for stereoscopic applications. In this study we examined the usefulness of smoothing depth maps for reducing the cardboard effect that is sometimes observed in stereoscopic images with objects appearing flat like cardboard pieces. Six stereoscopic image pairs, manifesting different degrees of the cardboard effect, were tested. Depth maps for each scene were synthesized from the original left-eye images and then smoothed (low-pass filtered). The smoothed depth maps and the original left-eye images were then used to render new views to create new "processed" stereoscopic image pairs. Subjects were asked to assess the cardboard effect of the original stereoscopic images and the processed stereoscopic images on a continuous quality scale, using the double-stimulus method. In separate sessions, depth quality and visual comfort were also assessed. The results from 16 viewers indicated that the processed stereoscopic image pairs tended to exhibit a reduced cardboard effect, compared to the original stereoscopic image pairs. Although visual comfort was not compromised with the smoothing of the depth maps, depth quality was significantly reduced when compared to the original.
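
    The processing described above amounts to low-pass filtering the synthesized depth map and re-rendering a new view from it. A minimal sketch is given below under simplifying assumptions (a plain Gaussian filter, a linear depth-to-disparity mapping, larger depth values treated as nearer, and no hole filling or occlusion handling); it is far simpler than the rendering used in the study.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def render_view(left, depth, max_disparity=12.0, sigma=15.0):
          """left: 2D grayscale image; depth: 2D depth map of the same shape."""
          smooth = gaussian_filter(depth.astype(np.float64), sigma=sigma)   # smoothed depth map
          disp = np.round(max_disparity * smooth / max(smooth.max(), 1e-9)).astype(int)
          h, w = depth.shape
          out = np.zeros_like(left)
          for y in range(h):
              for x in range(w):
                  xr = x - disp[y, x]          # nearer pixels are shifted more
                  if 0 <= xr < w:
                      out[y, xr] = left[y, x]  # holes are left unfilled in this sketch
          return out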

  6. The extended depth of field microscope imaging system with the phase pupil mask

    NASA Astrophysics Data System (ADS)

    Lyu, Qinghua; Zhai, Zhongsheng; Sharp, Martin; French, Paul

    2015-11-01

    A '0/π' phase pupil mask was developed to extend the depth of field of a circularly symmetric optical microscope imaging system. The modulation transfer function curves, the normalized point spread function figures and the spot diagrams of the imaging system with the optimal mask were analyzed and simulated. The results show that the large depth of field imaging system with the '0/π' phase pupil mask maintains high resolution over a wide frequency band and can obtain clear images without any post-processing. The experimental results also demonstrate that the depth of field of the imaging system is extended successfully.

  7. Instantaneous three-dimensional sensing using spatial light modulator illumination with extended depth of field imaging

    PubMed Central

    Quirin, Sean; Peterka, Darcy S.; Yuste, Rafael

    2013-01-01

    Imaging three-dimensional structures represents a major challenge for conventional microscopies. Here we describe a Spatial Light Modulator (SLM) microscope that can simultaneously address and image multiple targets in three dimensions. A wavefront coding element and computational image processing enables extended depth-of-field imaging. High-resolution, multi-site three-dimensional targeting and sensing is demonstrated in both transparent and scattering media over a depth range of 300-1,000 microns. PMID:23842387

  8. The Effects of Multimedia Learning on Thai Primary Pupils' Achievement in Size and Depth of Vocabulary Knowledge

    ERIC Educational Resources Information Center

    Jingjit, Mathukorn

    2015-01-01

    This study aims to obtain more insight regarding the effect of multimedia learning on third grade of Thai primary pupils' achievement in Size and Depth Vocabulary of English. A quasi-experiment is applied using "one group pretest-posttest design" combined with "time series design," as well as data triangulation. The sample…

  9. Imaging properties of extended depth of field microscopy through single-shot focus scanning

    PubMed Central

    Lu, Sheng-Huei; Hua, Hong

    2015-01-01

    Although the single-shot focus scanning technique (SSFS) has been experimentally demonstrated for extended depth of field (EDOF) imaging, little work has been performed to characterize its imaging properties and limitations. In this paper, based on an analytical model of a SSFS system, we examined the properties of the system response and the restored image quality in relation to the axial position of the object, scan range, and signal-to-noise ratio, and demonstrated the properties via a prototype 10 × 0.25 NA microscope system. We quantified that the full range of the achievable EDOF is equivalent to the focus scan range. We further demonstrated that the restored image quality can be improved by extending the focus scan range by a distance equivalent to twice the standard DOF. For example, in a focus-scanning microscope with a ± 15 μm standard DOF, a 120 μm focus scan range can obtain a ± 60 μm EDOF, but a 150 μm scan range affords noticeably better EDOF images for the same EDOF range. These results provide guidelines for designing and implementing EDOF systems using the SSFS technique. PMID:25969109

  10. Comparison of Curricular Breadth, Depth, and Recurrence and Physics Achievement of TIMSS Population 3 Countries

    ERIC Educational Resources Information Center

    Murdock, John

    2008-01-01

    This study is a secondary analysis of data from the 1995 administration of the Third International Mathematics and Science Study (TIMSS). The purpose is to compare the breadth, depth, and recurrence of the typical physics curriculum in the United States with the typical curricula in different countries and to determine whether there are…

  11. Depth resolved hyperspectral imaging spectrometer based on structured light illumination and Fourier transform interferometry.

    PubMed

    Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T; So, Peter T C

    2014-10-01

    A depth resolved hyperspectral imaging spectrometer can provide depth resolved imaging both in the spatial and the spectral domain. Images acquired through a standard imaging Fourier transform spectrometer do not have the depth-resolution. By post processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured light illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens including fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated in quantifying spectra from 3D resolved features in biological specimens. The system has demonstrated depth resolution of 1.8 μm and spectral resolution of 7 nm respectively. PMID:25360367
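
    The HiLo method referenced above can be summarized, in a simplified form, as combining low spatial frequencies taken from the uniform-illumination image weighted by the local modulation contrast of the structured-illumination image (in-focus planes are modulated, out-of-focus planes are not) with high spatial frequencies taken directly from the uniform image, which is already optically sectioned there. The sketch below is an illustrative reading of that idea only; the cutoff, weighting and demodulation are assumptions and this is not the authors' implementation.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def hilo(uniform, structured, sigma=8.0, eta=1.0):
          """Return a simplified HiLo sectioned image from a uniform/structured pair."""
          uniform = uniform.astype(np.float64)
          diff = structured.astype(np.float64) - uniform           # carries the illumination modulation
          local_contrast = gaussian_filter(np.abs(diff), sigma)    # crude demodulation: rectify + low-pass
          lo = gaussian_filter(local_contrast * uniform, sigma)    # sectioned low spatial frequencies
          hi = uniform - gaussian_filter(uniform, sigma)           # high frequencies from the uniform image
          return eta * lo + hi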

  12. Depth resolved hyperspectral imaging spectrometer based on structured light illumination and Fourier transform interferometry

    PubMed Central

    Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C.

    2014-01-01

    A depth resolved hyperspectral imaging spectrometer can provide depth resolved imaging both in the spatial and the spectral domain. Images acquired through a standard imaging Fourier transform spectrometer do not have the depth-resolution. By post processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured light illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens including fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated in quantifying spectra from 3D resolved features in biological specimens. The system has demonstrated depth resolution of 1.8 μm and spectral resolution of 7 nm respectively. PMID:25360367

  13. Tripling the maximum imaging depth with third-harmonic generation microscopy.

    PubMed

    Yildirim, Murat; Durr, Nicholas; Ben-Yakar, Adela

    2015-09-01

    The growing interest in performing high-resolution, deep-tissue imaging has galvanized the use of longer excitation wavelengths and three-photon-based techniques in nonlinear imaging modalities. This study presents a threefold improvement in maximum imaging depth of ex vivo porcine vocal folds using third-harmonic generation (THG) microscopy at 1552-nm excitation wavelength compared to two-photon microscopy (TPM) at 776-nm excitation wavelength. The experimental, analytical, and Monte Carlo simulation results reveal that THG improves the maximum imaging depth observed in TPM significantly from 140 to 420 μm in a highly scattered medium, reaching the expected theoretical imaging depth of seven extinction lengths. This value almost doubles the previously reported normalized imaging depths of 3.5 to 4.5 extinction lengths using three-photon-based imaging modalities. Since tissue absorption is substantial at the excitation wavelength of 1552 nm, this study assesses the tissue thermal damage during imaging by obtaining the depth-resolved temperature distribution through a numerical simulation incorporating an experimentally obtained thermal relaxation time (τ). By shuttering the laser for a period of 2τ, the numerical algorithm estimates a maximum temperature increase of ∼2°C at the maximum imaging depth of 420 μm. The paper demonstrates that THG imaging using 1552 nm as an illumination wavelength with effective thermal management proves to be a powerful deep imaging modality for highly scattering and absorbing tissues, such as scarred vocal folds. PMID:26376941

  14. Tripling the maximum imaging depth with third-harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Yildirim, Murat; Durr, Nicholas; Ben-Yakar, Adela

    2015-09-01

    The growing interest in performing high-resolution, deep-tissue imaging has galvanized the use of longer excitation wavelengths and three-photon-based techniques in nonlinear imaging modalities. This study presents a threefold improvement in maximum imaging depth of ex vivo porcine vocal folds using third-harmonic generation (THG) microscopy at 1552-nm excitation wavelength compared to two-photon microscopy (TPM) at 776-nm excitation wavelength. The experimental, analytical, and Monte Carlo simulation results reveal that THG improves the maximum imaging depth observed in TPM significantly from 140 to 420 μm in a highly scattered medium, reaching the expected theoretical imaging depth of seven extinction lengths. This value almost doubles the previously reported normalized imaging depths of 3.5 to 4.5 extinction lengths using three-photon-based imaging modalities. Since tissue absorption is substantial at the excitation wavelength of 1552 nm, this study assesses the tissue thermal damage during imaging by obtaining the depth-resolved temperature distribution through a numerical simulation incorporating an experimentally obtained thermal relaxation time (τ). By shuttering the laser for a period of 2τ, the numerical algorithm estimates a maximum temperature increase of ˜2°C at the maximum imaging depth of 420 μm. The paper demonstrates that THG imaging using 1552 nm as an illumination wavelength with effective thermal management proves to be a powerful deep imaging modality for highly scattering and absorbing tissues, such as scarred vocal folds.

  15. Noncontact imaging of burn depth and extent in a porcine model using spatial frequency domain imaging

    PubMed Central

    Mazhar, Amaan; Saggese, Steve; Pollins, Alonda C.; Cardwell, Nancy L.; Nanney, Lillian; Cuccia, David J.

    2014-01-01

    Abstract. The standard of care for clinical assessment of burn severity and extent lacks a quantitative measurement. In this work, spatial frequency domain imaging (SFDI) was used to measure 48 thermal burns of graded severity (superficial partial, deep partial, and full thickness) in a porcine model. Functional (total hemoglobin and tissue oxygen saturation) and structural parameters (tissue scattering) derived from the SFDI measurements were monitored over 72 h for each burn type and compared to gold standard histological measurements of burn depth. Tissue oxygen saturation (stO2) and total hemoglobin (ctHbT) differentiated superficial partial thickness burns from more severe burn types after 2 and 72 h, respectively (p<0.01), but were unable to differentiate deep partial from full thickness wounds in the first 72 h. Tissue scattering parameters separated superficial burns from all burn types immediately after injury (p<0.01), and separated all three burn types from each other after 24 h (p<0.01). Tissue scattering parameters also showed a strong negative correlation to histological burn depth as measured by vimentin immunostain (r2>0.89). These results show promise for the use of SFDI-derived tissue scattering as a correlation to burn depth and the potential to assess burn depth via a combination of SFDI functional and structural parameters. PMID:25147961

  16. Pareto-depth for multiple-query image retrieval.

    PubMed

    Hsiao, Ko-Jen; Calder, Jeff; Hero, Alfred O

    2015-02-01

    Most content-based image retrieval systems consider either one single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state of the art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts. PMID:25494509
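
    For two queries, the first Pareto front mentioned above consists of the database items whose pair of dissimilarities (one to each query) is not dominated by any other item. The sketch below shows only that front computation; the paper additionally uses efficient manifold ranking to define the dissimilarities, which is not reproduced here.

      import numpy as np

      def first_pareto_front(d_a, d_b):
          """Return indices of items on the first Pareto front of (d_a, d_b)."""
          pts = np.column_stack([d_a, d_b])
          front = []
          for i, p in enumerate(pts):
              # dominated if some other item is no worse in both and strictly better in one
              dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
              if not dominated:
                  front.append(i)
          return front

      d_a = np.array([0.2, 0.5, 0.1, 0.9, 0.6])   # dissimilarity to query A
      d_b = np.array([0.7, 0.3, 0.8, 0.1, 0.6])   # dissimilarity to query B
      print(first_pareto_front(d_a, d_b))         # item 4 is dominated and excluded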

  17. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.

    PubMed

    Carlini, Lina; Holden, Seamus J; Douglass, Kyle M; Manley, Suliana

    2015-01-01

    Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term `wobble`, results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific, lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample. PMID:26600467
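
    In practice, a correction of this kind can be applied by calibrating the apparent lateral shift as a function of axial position from fiducial beads and subtracting the interpolated shift from each localization. The sketch below illustrates that step with made-up calibration values; it is not the software tool released with the paper.

      import numpy as np

      # calibration: axial position (nm) -> apparent lateral shift (nm), illustrative values
      z_cal = np.array([-500.0, -250.0, 0.0, 250.0, 500.0])
      dx_cal = np.array([-60.0, -25.0, 0.0, 30.0, 70.0])
      dy_cal = np.array([20.0, 10.0, 0.0, -15.0, -35.0])

      def correct_wobble(locs):
          """locs: (N, 3) array of x, y, z localizations in nm; returns corrected copy."""
          x, y, z = locs.T
          x_corr = x - np.interp(z, z_cal, dx_cal)   # subtract the calibrated lateral shift
          y_corr = y - np.interp(z, z_cal, dy_cal)
          return np.column_stack([x_corr, y_corr, z])

      print(correct_wobble(np.array([[1000.0, 2000.0, 300.0]])))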

  18. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging

    PubMed Central

    Manley, Suliana

    2015-01-01

    Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term `wobble`, results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific, lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope’s pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample. PMID:26600467

  19. Compact and large depth of field image scanner for auto document feeder with compound eye system

    NASA Astrophysics Data System (ADS)

    Kawano, Hiroyuki; Okamoto, Tatsuki; Matsuzawa, Taku; Nakajima, Hajime; Makita, Junko; Toyoda, Yoshitaka; Funakura, Tetsuo; Nakanishi, Takahito; Kunieda, Tatsuya; Minobe, Tadashi

    2013-03-01

    We designed a compact and large depth of field image scanner targeted for auto document feeders (ADF) by using a compound eye system design with plural optical units in which the ray paths are folded by reflective optics. Although we have previously proposed the basic concept, here we advance the design using a free-form surface mirror to reduce the F-number for less illumination energy and to shrink the optical track width to 40 mm. We achieved a large depth of field (DOF) of 1.2 mm, defined as a range exceeding 30% modulation transfer function (MTF) at 300 dpi, which is about twice as large as a conventional gradient index (GRIN) lens array contact image sensor (CIS). The aperture stop has a rectangular aperture, where one side is as large as 4.0 mm for collecting more light, and the other side is as small as 1.88 mm to avoid interference of the folded ray paths.

  20. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. Cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning a stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.

  1. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU. PMID:24921860
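
    A CPU-side sketch of the two ideas in this abstract is given below (the paper's implementation is GPU-parallel and uses a precomputed lookup table of shifts): a depth slice is reconstructed by averaging the shifted elemental images, and points whose samples across the elemental images have high variance are flagged as off-focus and removed. The shift geometry and the variance threshold are illustrative assumptions.

      import numpy as np

      def reconstruct_slice(elemental, shifts, var_threshold=50.0):
          """elemental: list of 2D elemental images; shifts: per-image integer (dy, dx) for this depth."""
          stack = [np.roll(np.roll(img.astype(np.float64), dy, axis=0), dx, axis=1)
                   for img, (dy, dx) in zip(elemental, shifts)]
          stack = np.stack(stack)                        # samples of each 3D point across elemental images
          depth_slice = stack.mean(axis=0)               # back-propagated reconstruction at this depth
          off_focus = stack.var(axis=0) > var_threshold  # high sample variance marks off-focus points
          depth_slice[off_focus] = 0.0                   # remove off-focus points
          return depth_slice, off_focus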

  2. Cardiac image modelling: Breadth and depth in heart disease.

    PubMed

    Suinesiaputra, Avan; McCulloch, Andrew D; Nash, Martyn P; Pontre, Beau; Young, Alistair A

    2016-10-01

    With the advent of large-scale imaging studies and big health data, and the corresponding growth in analytics, machine learning and computational image analysis methods, there are now exciting opportunities for deepening our understanding of the mechanisms and characteristics of heart disease. Two emerging fields are computational analysis of cardiac remodelling (shape and motion changes due to disease) and computational analysis of physiology and mechanics to estimate biophysical properties from non-invasive imaging. Many large cohort studies now underway around the world have been specifically designed based on non-invasive imaging technologies in order to gain new information about the development of heart disease from asymptomatic to clinical manifestations. These give an unprecedented breadth to the quantification of population variation and disease development. Also, for the individual patient, it is now possible to determine biophysical properties of myocardial tissue in health and disease by interpreting detailed imaging data using computational modelling. For these population and patient-specific computational modelling methods to develop further, we need open benchmarks for algorithm comparison and validation, open sharing of data and algorithms, and demonstration of clinical efficacy in patient management and care. The combination of population and patient-specific modelling will give new insights into the mechanisms of cardiac disease, in particular the development of heart failure, congenital heart disease, myocardial infarction, contractile dysfunction and diastolic dysfunction. PMID:27349830

  3. MEMS scanner enabled real-time depth sensitive hyperspectral imaging of biological tissue.

    PubMed

    Wang, Youmin; Bish, Sheldon; Tunnell, James W; Zhang, Xiaojing

    2010-11-01

    We demonstrate a hyperspectral and depth sensitive diffuse optical imaging microsystem, where fast scanning is provided by a CMOS compatible 2-axis MEMS mirror. By using Lissajous scanning patterns, large field-of-view (FOV) images of 1.2 cm × 1.2 cm with a lateral resolution of 100 µm can be taken at 1.3 frames per second (fps). Hyperspectral and depth-sensitive images were acquired on tissue simulating phantom samples containing quantum dots (QDs) patterned at various depths in polydimethylsiloxane (PDMS). Device performance delivers 6 nm spectral resolution and 0.43 wavelengths per second acquisition speed. A sample of porcine epithelium with subcutaneously placed QDs was also imaged. Images of the biological sample were processed by spectral unmixing in order to qualitatively separate chromophores in the final images and demonstrate spectral performance of the imaging system. PMID:21164757
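
    A Lissajous trajectory of the kind used to drive the MEMS mirror is simply two sinusoidal deflections at different frequencies; the brief sketch below generates such a pattern with illustrative frequencies, phase and amplitude (not the device's actual drive parameters).

      import numpy as np

      def lissajous(fx=233.0, fy=181.0, duration=1.0, rate=50_000, amplitude=0.6):
          """Return normalized mirror deflections (x, y) over time for a Lissajous scan."""
          t = np.arange(0.0, duration, 1.0 / rate)
          x = amplitude * np.sin(2 * np.pi * fx * t)
          y = amplitude * np.sin(2 * np.pi * fy * t + np.pi / 2)
          return x, y

      x, y = lissajous()
      print(x.shape, y.shape)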

  4. Depth enhancement in spectral domain optical coherence tomography using bidirectional imaging modality with a single spectrometer

    NASA Astrophysics Data System (ADS)

    Ravichandran, Naresh Kumar; Wijesinghe, Ruchire Eranga; Shirazi, Muhammad Faizan; Park, Kibeom; Jeon, Mansik; Jung, Woonggyu; Kim, Jeehyun

    2016-07-01

    A method for depth enhancement is presented using a bidirectional imaging modality for spectral domain optical coherence tomography (SD-OCT). Two precisely aligned sample arms along with two reference arms were utilized in the optical configuration to scan the samples. Using exemplary images of the optical resolution target, Scotch tape, a silicon sheet with two needles, and a leaf, we demonstrated how the developed bidirectional SD-OCT imaging method increases the ability to characterize depth-enhanced images. The results of the developed system were validated by comparing the images with the standard OCT configuration (single-sample arm setup). Given the advantages of higher resolution and the ability to visualize deep morphological structures, this method can be utilized to mitigate the depth-dependent fall-off in samples of limited thickness. Thus, the proposed bidirectional imaging modality is apt for cross-sectional imaging of entire samples, which could potentially improve diagnostic ability.

  5. Burn Depth Estimation Using Thermal Excitation and Imaging

    SciTech Connect

    Dickey, F.M.; Holswade, S.C.; Yee, M.L.

    1998-12-17

    Accurate estimation of the depth of partial-thickness burns and the early prediction of a need for surgical intervention are difficult. A non-invasive technique utilizing the difference in thermal relaxation time between burned and normal skin may be useful in this regard. In practice, a thermal camera would record the skin's response to heating or cooling by a small amount, roughly 5 degrees Celsius, for a short duration. The thermal stimulus would be provided by a heat lamp, hot or cold air, or other means. Processing of the thermal transients would reveal areas that returned to equilibrium at different rates, which should correspond to different burn depths. In deeper thickness burns, the outside layer of skin is further removed from the constant-temperature region maintained through blood flow. Deeper thickness areas should thus return to equilibrium more slowly than other areas. Since the technique only records changes in the skin's temperature, it is not sensitive to room temperature, the burn's location, or the state of the patient. Preliminary results are presented for analysis of a simulated burn, formed by applying a patch of biosynthetic wound dressing on top of normal skin tissue.

  6. Depth of field of diffraction-limited imaging system incorporating electronic devices

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kimiaki

    2014-11-01

    The depth of field is investigated for an imaging system in which optical imaging and electronic devices, such as an electronic sensor and a display, are combined. When the spatial frequency of pixels in the electronic devices is higher than the cut-off frequency of the optical system, it is shown that the depth of field is almost the same as that of the optical system itself. In the case where the spatial frequency is lower than the cut-off frequency of the optical system, the depth of field increases, and the features of the increase are shown in imaging systems both with and without an optical low-pass filter.

  7. Elimination of depth degeneracy in optical frequency-domain imaging through polarization-based optical demodulation.

    PubMed

    Vakoc, B J; Yun, S H; Tearney, G J; Bouma, B E

    2006-02-01

    A novel optical frequency-domain imaging system is demonstrated that employs a passive optical demodulation circuit and a chirped digital acquisition clock derived from a voltage-controlled oscillator. The demodulation circuit allows the separation of signals from positive and negative depths to better than 50 dB, thereby eliminating depth degeneracy and doubling the imaging depth range. Our system design is compatible with dual-balanced and polarization-diverse detection, important techniques in the practical biomedical application of optical frequency-domain imaging. PMID:16480209

  8. Depth-controlled 3D TV image coding

    NASA Astrophysics Data System (ADS)

    Chiari, Armando; Ciciani, Bruno; Romero, Milton; Rossi, Ricardo

    1998-04-01

    Conventional 3D-TV codecs processing one down-compatible (either left or right) channel may optionally include the extraction of the disparity field associated with the stereo-pairs to support the coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo-pairs to reduce the redundancies in both channels, and according to their visual sensitivity. Through an a priori disparity field analysis, our coding scheme separates a region of interest from the foreground/background in the reproduced volume space in order to code them selectively based on their visual relevance. Such a region of interest is here identified as the one which is focused by the shooting device. By suitably scaling the DCT coefficients in such a way that precision is reduced for image blocks lying on less relevant areas, our approach aims at reducing the signal energy in the background/foreground patterns, while retaining finer details in the more relevant image portions. From an implementation point of view, it is worth noting that the proposed system keeps its surplus processing power on the encoder side only. Simulation results show such improvements as better image quality for a given transmission bit rate, or graceful quality degradation of the reconstructed images with decreasing data rates.
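
    The selective coding step can be illustrated as follows (a hedged sketch, not the authors' codec): DCT coefficients of 8 × 8 blocks outside the region of interest are attenuated so that less relevant areas carry less signal energy, while blocks inside the region of interest keep full precision. The ROI mask and scale factor are assumptions; in the paper the region of interest is derived from the disparity field.

      import numpy as np
      from scipy.fft import dctn, idctn

      def selective_dct_scaling(image, roi_mask, scale=0.25, block=8):
          """Attenuate non-DC DCT detail in 8x8 blocks that lie outside the ROI."""
          out = image.astype(np.float64).copy()
          h, w = image.shape
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  if roi_mask[y:y + block, x:x + block].mean() > 0.5:
                      continue                                  # keep full precision inside the ROI
                  coeffs = dctn(out[y:y + block, x:x + block], norm='ortho')
                  dc = coeffs[0, 0]
                  coeffs *= scale                               # reduce signal energy of detail
                  coeffs[0, 0] = dc                             # preserve the DC (average) term
                  out[y:y + block, x:x + block] = idctn(coeffs, norm='ortho')
          return out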

  9. Aerial image retargeting (AIR): achieving litho-friendly designs

    NASA Astrophysics Data System (ADS)

    Yehia Hamouda, Ayman; Word, James; Anis, Mohab; Karim, Karim S.

    2011-04-01

    In this work, we present a new technique to detect non-litho-friendly design areas based on their aerial image signature. The aerial image is calculated for the litho target (pre-OPC). This is followed by fixing (retargeting) the design to achieve a litho-friendly OPC target. This technique is applied and tested on a 28 nm metal layer and shows a significant improvement in process window performance. An optimized Aerial Image Retargeting (AIR) recipe is very computationally efficient, with a runtime of no more than 1% of the OPC flow runtime.

  10. Study of a holographic TV system based on multi-view images and depth maps

    NASA Astrophysics Data System (ADS)

    Senoh, Takanori; Ichihashi, Yasuyuki; Oi, Ryutaro; Sasaki, Hisayuki; Yamamoto, Kenji

    2013-03-01

    Electronic holography technology is expected to be used for realizing an ideal 3DTV system in the future, providing perfect 3D images. Since the amount of fringe data is huge, however, it is difficult to broadcast or transmit it directly. To resolve this problem, we investigated a method of generating holograms from depth images. Since computer generated holography (CGH) generates huge fringe patterns from a small amount of data for the coordinates and colors of 3D objects, it solves half of this problem, mainly for computer generated objects (artificial objects). For the other half of the problem (how to obtain 3D models for a natural scene), we propose a method of generating holograms from multi-view images and associated depth maps. Multi-view images are taken by multiple cameras. The depth maps are estimated from the multi-view images by introducing an adaptive matching error selection algorithm in the stereo-matching process. The multi-view images and depth maps are compressed by a 2D image coding method that converts them into Global View and Depth (GVD) format. The fringe patterns are generated from the decoded data and displayed on 8K×4K liquid crystal on silicon (LCOS) display panels. The reconstructed holographic image quality is compared using uncompressed and compressed images.
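
    The computer-generated holography step referred to above can be illustrated with the basic point-source model: every 3D point recovered from the depth maps contributes a spherical wave to the hologram plane, and the fringe pattern is the interference of the summed object field with a reference wave. The sketch below uses illustrative wavelength, pixel pitch and object points, and omits the compression and display aspects discussed in the paper.

      import numpy as np

      wavelength = 532e-9            # m, illustrative
      pitch = 8e-6                   # hologram pixel pitch, m, illustrative
      k = 2 * np.pi / wavelength

      ny, nx = 512, 512
      ys, xs = np.meshgrid((np.arange(ny) - ny / 2) * pitch,
                           (np.arange(nx) - nx / 2) * pitch, indexing='ij')

      # object points (x, y, z, amplitude); z is the distance from the hologram plane
      points = [(0.0, 0.0, 0.05, 1.0), (0.4e-3, -0.2e-3, 0.06, 0.8)]

      field = np.zeros((ny, nx), dtype=np.complex128)
      for px, py, pz, amp in points:
          r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
          field += amp / r * np.exp(1j * k * r)        # spherical wave from each point

      field /= np.abs(field).max()                     # normalize the object field
      fringe = np.abs(field + 1.0) ** 2                # interfere with a unit plane reference wave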

  11. Increase of penetration depth in real-time clinical epi-optoacoustic imaging: clutter reduction and aberration correction

    NASA Astrophysics Data System (ADS)

    Jaeger, Michael; Gashi, Kujtim; Peeters, Sara; Held, Gerrit; Preisser, Stefan; Gruenig, Michael; Frenz, Martin

    2014-03-01

    Optoacoustic (OA) imaging will find its broadest clinical application if implemented in epi-style, with the irradiation optics and the acoustic probe integrated in a single probe. This will allow the most flexible imaging of the human body in a combined system together with echo ultrasound (US). In such a multimodal combination, the OA signal could provide functional information within the anatomical context shown in the US image, similar to what is already done with colour flow imaging. To date, successful deep epi-OA imaging has been difficult to achieve, owing to clutter and acoustic aberrations. Clutter signals arise from strong optical absorption in the region of tissue irradiation and strongly reduce contrast and imaging depth. Acoustic aberrations are caused by the inhomogeneous speed of sound and degrade the spatial resolution of deep tissue structures, further reducing contrast and thus imaging depth. In past years we have developed displacement-compensated averaging (DCA) for clutter reduction, based on the clutter decorrelation that occurs when palpating the tissue using the ultrasound probe. We have now implemented real-time DCA on a research ultrasound system to evaluate its clutter reduction performance in freehand scanning of human volunteers. Our results confirm that DCA significantly improves image contrast and imaging depth, making clutter reduction a basic requirement for a clinically successful combination of epi-OA and US imaging. In addition we propose a novel technique which allows automatic full aberration correction of OA images, based on spatially resolved measurement of the aberration effect using echo US. Phantom results demonstrate that this technique allows spatially invariant diffraction-limited resolution in the presence of a strong aberrator.

  12. Exploring High-Achieving Students' Images of Mathematicians

    ERIC Educational Resources Information Center

    Aguilar, Mario Sánchez; Rosas, Alejandro; Zavaleta, Juan Gabriel Molina; Romo-Vázquez, Avenilde

    2016-01-01

    The aim of this study is to describe the images that a group of high-achieving Mexican students hold of mathematicians. For this investigation, we used a research method based on the Draw-A-Scientist Test (DAST) with a sample of 63 Mexican high school students. The group of students' pictorial and written descriptions of mathematicians assisted us…

  13. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images

    PubMed Central

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J.

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed. PMID:26447793

  14. Self-Motion and Depth Estimation from Image Sequences

    NASA Technical Reports Server (NTRS)

    Perrone, John

    1999-01-01

    An image-based version of a computational model of human self-motion perception (developed in collaboration with Dr. Leland S. Stone at NASA Ames Research Center) has been generated and tested. The research included in the grant proposal sought to extend the utility of the self-motion model so that it could be used for explaining and predicting human performance in a greater variety of aerospace applications. The model can now be tested with video input sequences (including computer generated imagery) which enables simulation of human self-motion estimation in a variety of applied settings.

  15. No-Reference Depth Assessment Based on Edge Misalignment Errors for T+D Images.

    PubMed

    Xiang, Sen; Yu, Li; Chen, Chang Wen

    2016-03-01

    The quality of depth is crucial in all depth-based applications. Unfortunately, error-free ground truth is often unattainable for depth. Therefore, no-reference quality assessment is very much desired. This paper presents a novel depth quality assessment scheme that is completely different from conventional approaches. In particular, this scheme focuses on depth edge misalignment errors in texture-plus-depth (T + D) images and develops a robust method to detect them. Based on the detected misalignments, a no-reference metric is calculated to evaluate the quality of depth maps. In the proposed scheme, misalignments are detected by matching texture and depth edges through three constraints: 1) spatial similarity; 2) edge orientation similarity; and 3) segment length similarity. Furthermore, the matching is performed on edge segments instead of individual pixels, which enables robust edge matching. Experimental results demonstrate that the proposed scheme can detect misalignment errors accurately. The proposed no-reference depth quality metric is highly consistent with the full-reference metric, and is also well correlated with the quality of synthesized virtual views. Moreover, the proposed scheme can also use the detected edge misalignments to facilitate depth enhancement in various practical texture-plus-depth-based applications. PMID:26841393
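
    The matching idea can be sketched as follows (the segment representation, thresholds and scoring are our illustrative assumptions, not the paper's exact matcher): a depth edge segment counts as aligned if some texture edge segment is spatially close, similarly oriented and of similar length, and the length-weighted fraction of unmatched depth edge segments serves as a simple misalignment score.

      import numpy as np

      # segments are dicts: {'centroid': np.array([x, y]), 'orientation': radians, 'length': pixels}
      def segments_match(seg_t, seg_d, max_dist=3.0, max_angle=0.3, max_len_ratio=0.5):
          close = np.hypot(*(seg_t["centroid"] - seg_d["centroid"])) <= max_dist
          similar_angle = abs(seg_t["orientation"] - seg_d["orientation"]) <= max_angle
          similar_length = (abs(seg_t["length"] - seg_d["length"])
                            / max(seg_t["length"], seg_d["length"])) <= max_len_ratio
          return close and similar_angle and similar_length

      def misalignment_score(texture_segments, depth_segments):
          """Length-weighted fraction of depth edge segments with no matching texture edge."""
          unmatched = [d for d in depth_segments
                       if not any(segments_match(t, d) for t in texture_segments)]
          total = sum(d["length"] for d in depth_segments)
          return sum(d["length"] for d in unmatched) / max(total, 1e-9)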

  16. Depth estimation and occlusion boundary recovery from a single outdoor image

    NASA Astrophysics Data System (ADS)

    Zhang, Shihui; Yan, Shuo

    2012-08-01

    A novel depth estimation and occlusion boundary recovery approach for a single outdoor image is described. This work is distinguished by three contributions. The first contribution is the introduction of a new depth estimation model, which takes the camera rotation and pitch into account, thus improving the depth estimation accuracy. The second contribution is a depth estimation algorithm in which, for the first time, standing object regions with visible ground-contact points are classified into three cases according to vanishing point information; meanwhile, we propose the depth reference line concept for estimating the depth of regions with depth change. Two advantages can thereby be obtained: the depth estimation accuracy is further improved, and the phenomenon of mismarked occlusions is avoided. The third contribution is a depth estimation method for standing object regions without visible ground-contact points, which takes the mean of the minimum and maximum depth estimates as the region depth and prevents occlusion boundaries from being missed. Extensive experiments show that our results are better than previously published ones.

  17. High performance computational integral imaging system using multi-view video plus depth representation

    NASA Astrophysics Data System (ADS)

    Shi, Shasha; Gioia, Patrick; Madec, Gérard

    2012-12-01

    Integral imaging is an attractive auto-stereoscopic three-dimensional (3D) technology for next-generation 3DTV. But its application is obstructed by poor image quality, huge data volume and high processing complexity. In this paper, a new computational integral imaging (CII) system using multi-view video plus depth (MVD) representation is proposed to solve these problems. The originality of this system lies in three aspects. Firstly, a particular depth-image-based rendering (DIBR) technique is used in encoding process to exploit the inter-view correlation between different sub-images (SIs). Thereafter, the same DIBR method is applied in the display side to interpolate virtual SIs and improve the reconstructed 3D image quality. Finally, a novel parallel group projection (PGP) technique is proposed to simplify the reconstruction process. According to experimental results, the proposed CII system improves compression efficiency and displayed image quality, while reducing calculation complexity.

  18. Depth Imaging by Combining Time-of-Flight and On-Demand Stereo

    NASA Astrophysics Data System (ADS)

    Hahne, Uwe; Alexa, Marc

    In this paper we present a framework for computing depth images at interactive rates. Our approach is based on combining time-of-flight (TOF) range data with stereo vision. We use a per-frame confidence map extracted from the TOF sensor data in two ways for improving the disparity estimation in the stereo part: first, together with the TOF range data for initializing and constraining the disparity range; and, second, together with the color image information for segmenting the data into depth continuous areas, enabling the use of adaptive windows for the disparity search. The resulting depth images are more accurate than from either of the sensors. In an example application we use the depth map to initialize the z-buffer so that virtual objects can be occluded by real objects in an augmented reality scenario.
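
    The way the confidence map constrains the disparity search can be sketched as follows (the window size, the confidence-to-slack mapping and the SAD cost are illustrative assumptions, and the adaptive-window segmentation described above is omitted): the TOF range gives an initial disparity per pixel, and lower confidence allows the block-matching search to deviate further from it.

      import numpy as np

      def fused_disparity(left, right, tof_disp, tof_conf, block=5, slack_max=8):
          """left, right: 2D grayscale images; tof_disp: initial disparity; tof_conf: confidence in [0, 1]."""
          h, w = left.shape
          half = block // 2
          disp = np.zeros((h, w))
          for y in range(half, h - half):
              for x in range(half, w - half):
                  d0 = int(round(tof_disp[y, x]))
                  slack = int(round((1.0 - tof_conf[y, x]) * slack_max)) + 1   # low confidence -> wider search
                  ref = left[y - half:y + half + 1, x - half:x + half + 1]
                  best, best_cost = d0, np.inf
                  for d in range(max(0, d0 - slack), d0 + slack + 1):
                      if x - d - half < 0 or x - d + half + 1 > w:
                          continue
                      cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                      cost = np.abs(ref.astype(float) - cand.astype(float)).sum()   # SAD matching cost
                      if cost < best_cost:
                          best, best_cost = d, cost
                  disp[y, x] = best
          return disp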

  19. The critical evaluation of laser Doppler imaging in determining burn depth

    PubMed Central

    Gill, Parneet

    2013-01-01

    This review article discusses the use of laser Doppler imaging as a clinimetric tool to determine burn depth in patients presenting to hospital. Laser Doppler imaging is a very sensitive and specific tool to measure burn depth, easy to use, reliable and acceptable to the patient due to its quick and non-invasive nature. Improvements in validity, cost and reproducibility would increase its use in clinical practice; however, it is difficult to satisfy all of the evaluation criteria all the time. It remains a widely accepted tool to assess burn depth, with an ever-increasing body of evidence to support its use, as discussed in this review. Close collaboration between clinicians, statisticians, epidemiologists and psychologists is necessary in order to develop the evidence base for the use of laser Doppler imaging as standard in burn depth assessment and therefore allow it to act as an influencing factor in management decisions. PMID:23638324

  20. Penetration depth measurement of near-infrared hyperspectral imaging light for milk powder

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The increasingly common application of near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying penetration depth of NIR hyperspectral imaging ligh...

  1. Single-Photon Depth Imaging Using a Union-of-Subspaces Model

    NASA Astrophysics Data System (ADS)

    Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.

    2015-12-01

    Light detection and ranging systems reconstruct scene depth from time-of-flight measurements. For low light-level depth imaging applications, such as remote sensing and robot vision, these systems use single-photon detectors that resolve individual photon arrivals. Even so, they must detect a large number of photons to mitigate Poisson shot noise and reject anomalous photon detections from background light. We introduce a novel framework for accurate depth imaging using a small number of detected photons in the presence of an unknown amount of background light that may vary spatially. It employs a Poisson observation model for the photon detections plus a union-of-subspaces constraint on the discrete-time flux from the scene at any single pixel. Together, they enable a greedy signal-pursuit algorithm to rapidly and simultaneously converge on accurate estimates of scene depth and background flux, without any assumptions on spatial correlations of the depth or background flux. Using experimental single-photon data, we demonstrate that our proposed framework recovers depth features with 1.7 cm absolute error, using 15 photons per image pixel and an illumination pulse with 6.7-cm scaled root-mean-square length. We also show that our framework outperforms the conventional pixelwise log-matched filtering, which is a computationally-efficient approximation to the maximum-likelihood solution, by a factor of 6.1 in absolute depth error.
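
    The conventional pixelwise log-matched filter that serves as the baseline above can be sketched as follows (the discretization, pulse shape and bin width are illustrative assumptions, and the union-of-subspaces framework itself is not reproduced): for each pixel, the depth estimate is the time shift of the pulse shape that maximizes the summed log-likelihood of the detected photon times.

      import numpy as np

      def log_matched_filter(photon_times, pulse, bin_width, n_bins):
          """photon_times: detected arrival bins for one pixel; pulse: normalized discretized pulse shape."""
          log_pulse = np.log(pulse + 1e-12)
          scores = np.empty(n_bins)
          for shift in range(n_bins):
              idx = (photon_times - shift) % n_bins          # circular alignment for simplicity
              scores[shift] = log_pulse[idx].sum()
          c = 3e8
          return 0.5 * c * scores.argmax() * bin_width       # time of flight converted to depth (m)

      # toy example: Gaussian pulse peaking at bin 10, a few detections including one background photon
      n_bins = 200
      t = np.arange(n_bins)
      pulse = np.exp(-0.5 * ((t - 10) / 4.0) ** 2)
      photons = np.array([52, 55, 54, 120])
      print(log_matched_filter(photons, pulse / pulse.sum(), 8e-12, n_bins))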

  2. Extended depth-of-field imaging through radially symmetrical conjugate phase masks

    NASA Astrophysics Data System (ADS)

    Chen, Shouqian; Le, Van Nhu; Fan, Zhigang; Tran, Hong Cam

    2015-11-01

    We propose a radially symmetrical conjugate phase mask (PM) pair to yield an invariant imaging property for extended depth-of-field imaging. The conjugate PM pair consists of two radially symmetrical phase functions with opposite orientations of the phase modulation. Compared with a single radially symmetrical PM, the proposed conjugate PM pair shows a symmetrical imaging property on both sides of the focal plane and a high magnitude of the modulation transfer function (MTF). The quartic phase mask (QPM) with optimized phase parameters is employed to demonstrate our concept. Several evaluation approaches, including the point-spread function, the MTF, and image simulation, are used to compare the performance of a traditional imaging system, an original QPM system, and a conjugate QPM system. The results show that the proposed conjugate PM has superior performance in extended depth of field imaging.

  3. Depth map generation using a single image sensor with phase masks.

    PubMed

    Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki

    2016-06-13

    Conventional stereo matching systems generate a depth map using two or more digital imaging sensors. Such systems are difficult to use in small cameras because of their high cost and bulky size. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for phase-difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors. PMID:27410306

  4. Design of high-performance adaptive objective lens with large optical depth scanning range for ultrabroad near infrared microscopic imaging

    PubMed Central

    Lan, Gongpu; Mauger, Thomas F.; Li, Guoqiang

    2015-01-01

    We report on the theory and design of an adaptive objective lens for ultra-broadband near-infrared light imaging with a large dynamic optical depth scanning range by using an embedded tunable lens, which can find wide applications in deep tissue biomedical imaging systems, such as confocal microscopy, optical coherence tomography (OCT), and two-photon microscopy, both in vivo and ex vivo. This design is based on, but not limited to, a home-made prototype of a liquid-filled membrane lens with a clear aperture of 8 mm and a thickness of 2.55 mm to 3.18 mm. It is beneficial to have an adaptive objective lens which allows an extended depth scanning range larger than the focal length zoom range, since this will keep the magnification of the whole system, numerical aperture (NA), field of view (FOV), and resolution more consistent. To achieve this goal, a systematic theory is presented, for the first time to our knowledge, by inserting the varifocal lens between a front and a back solid lens group. The designed objective has a compact size (10 mm diameter and 15 mm length), an ultrabroad working bandwidth (760 nm to 920 nm), a large depth scanning range (7.36 mm in air), which is 1.533 times the focal length zoom range (4.8 mm in air), and an FOV of around 1 mm × 1 mm. Diffraction-limited performance can be achieved within this ultrabroad bandwidth through all the scanning depth (the resolution is 2.22 μm to 2.81 μm, calculated at a wavelength of 800 nm with an NA of 0.214 to 0.171). The chromatic focal shift value is within the depth of focus (field). The chromatic difference in distortion is nearly zero and the maximum distortion is less than 0.05%. PMID:26417508

  5. Resident identification using kinect depth image data and fuzzy clustering techniques.

    PubMed

    Banerjee, Tanvi; Keller, James M; Skubic, Marjorie

    2012-01-01

    As a part of our passive fall risk assessment research in home environments, we present a method to identify older residents using features extracted from their gait information from a single depth camera. Depth images have been collected continuously for about eight months from several apartments at a senior housing facility. Shape descriptors such as bounding box information and image moments were extracted from silhouettes of the depth images. The features were then clustered using Possibilistic C Means for resident identification. This technology will allow researchers and health professionals to gather more information on the individual residents by filtering out data belonging to non-residents. Gait related information belonging exclusively to the older residents can then be gathered. The data can potentially help detect changes in gait patterns which can be used to analyze fall risk for elderly residents by passively observing them in their home environments. PMID:23367076
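
    A simplified version of this pipeline is sketched below: bounding-box and second-moment features are extracted from each depth silhouette and then softly clustered. For brevity the sketch uses a small hand-written fuzzy C-means loop in place of the Possibilistic C-Means algorithm used in the study, and the feature set is an assumption.

      import numpy as np

      def silhouette_features(mask):
          """Shape descriptors from a binary silhouette: box size, aspect ratio, central moments."""
          ys, xs = np.nonzero(mask)
          height = ys.max() - ys.min() + 1
          width = xs.max() - xs.min() + 1
          cy, cx = ys.mean(), xs.mean()
          mu20 = ((ys - cy) ** 2).mean()
          mu02 = ((xs - cx) ** 2).mean()
          return np.array([height, width, height / width, mu20, mu02])

      def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
          """Soft-cluster feature vectors X (N x d) into c clusters; returns memberships and centers."""
          rng = np.random.default_rng(seed)
          U = rng.dirichlet(np.ones(c), size=len(X))          # initial soft memberships
          for _ in range(iters):
              W = U ** m
              centers = (W.T @ X) / W.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
              U = 1.0 / (d ** (2 / (m - 1)))                  # closer centers get higher membership
              U /= U.sum(axis=1, keepdims=True)
          return U, centers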

  6. Depth-resolved incoherent and coherent wide-field high-content imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    So, Peter T.

    2016-03-01

    Recent advances in depth-resolved wide-field imaging techniques have enabled many high-throughput applications in biology and medicine. Depth-resolved imaging of incoherent signals can be readily accomplished with structured light illumination or nonlinear temporal focusing. The integration of these high-throughput systems with novel spectroscopic resolving elements further enables high-content information extraction. We will introduce a novel near common-path interferometer and demonstrate its uses in toxicology and cancer biology applications. The extension of incoherent depth-resolved wide-field imaging to coherent modalities is non-trivial. Here, we will cover recent advances in wide-field 3D-resolved mapping of refractive index, absorbance, and vibronic components in biological specimens.

  7. Macroscopic optical imaging technique for wide-field estimation of fluorescence depth in optically turbid media for application in brain tumor surgical guidance

    NASA Astrophysics Data System (ADS)

    Kolste, Kolbein K.; Kanick, Stephen C.; Valdés, Pablo A.; Jermyn, Michael; Wilson, Brian C.; Roberts, David W.; Paulsen, Keith D.; Leblond, Frederic

    2015-02-01

    A diffuse imaging method is presented that enables wide-field estimation of the depth of fluorescent molecular markers in turbid media by quantifying the deformation of the detected fluorescence spectra due to the wavelength-dependent light attenuation by overlying tissue. This is achieved by measuring the ratio of the fluorescence at two wavelengths in combination with normalization techniques based on diffuse reflectance measurements to evaluate tissue attenuation variations for different depths. It is demonstrated that fluorescence topography can be achieved up to a 5 mm depth using a near-infrared dye with millimeter depth accuracy in turbid media having optical properties representative of normal brain tissue. Wide-field depth estimates are made using optical technology integrated onto a commercial surgical microscope, making this approach feasible for real-world applications.

  8. Macroscopic optical imaging technique for wide-field estimation of fluorescence depth in optically turbid media for application in brain tumor surgical guidance

    PubMed Central

    Kolste, Kolbein K.; Kanick, Stephen C.; Valdés, Pablo A.; Jermyn, Michael; Wilson, Brian C.; Roberts, David W.; Paulsen, Keith D.; Leblond, Frederic

    2015-01-01

    Abstract. A diffuse imaging method is presented that enables wide-field estimation of the depth of fluorescent molecular markers in turbid media by quantifying the deformation of the detected fluorescence spectra due to the wavelength-dependent light attenuation by overlying tissue. This is achieved by measuring the ratio of the fluorescence at two wavelengths in combination with normalization techniques based on diffuse reflectance measurements to evaluate tissue attenuation variations for different depths. It is demonstrated that fluorescence topography can be achieved up to a 5 mm depth using a near-infrared dye with millimeter depth accuracy in turbid media having optical properties representative of normal brain tissue. Wide-field depth estimates are made using optical technology integrated onto a commercial surgical microscope, making this approach feasible for real-world applications. PMID:25652704

  9. Depth-Enhanced Integral Imaging with a Stepped Lens Array or a Composite Lens Array for Three-Dimensional Display

    NASA Astrophysics Data System (ADS)

    Choi, Heejin; Park, Jae-Hyeung; Hong, Jisoo; Lee, Byoungho

    2004-08-01

    In spite of the many advantages of integral imaging, the depth of the reconstructed three-dimensional (3D) image is limited to the vicinity of a single image plane. Here, we propose a novel method for increasing the depth of a reconstructed image using a stepped lens array (SLA) or a composite lens array (CLA). We confirm our idea by fabricating an SLA and a CLA with two image planes each. By using an SLA or a CLA, it is possible to form the 3D image around several image planes and to increase the depth of the reconstructed 3D image.

  10. Multispectral upconversion luminescence intensity ratios for ascertaining the tissue imaging depth

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Wang, Yu; Kong, Xianggui; Liu, Xiaomin; Zhang, Youlin; Tu, Langping; Ding, Yadan; Aalders, Maurice C. G.; Buma, Wybren Jan; Zhang, Hong

    2014-07-01

    Upconversion nanoparticles (UCNPs) have in recent years emerged as excellent contrast agents for in vivo luminescence imaging of deep tissues. But information extracted from these images is in most cases restricted to 2-dimensions, without the depth information. In this work, a simple method has been developed to accurately ascertain the tissue imaging depth based on the relative luminescence intensity ratio of multispectral NaYF4:Yb3+,Er3+ UCNPs. A theoretical model was set up, where the parameters in the quantitative relation between the relative intensities of the upconversion luminescence spectra and the depth of the UCNPs were determined using tissue mimicking liquid phantoms. The 540 nm and 650 nm luminescence intensity ratios (G/R ratio) of NaYF4:Yb3+,Er3+ UCNPs were monitored following excitation path (Ex mode) and emission path (Em mode) schemes, respectively. The model was validated by embedding NaYF4:Yb3+,Er3+ UCNPs in layered pork muscles, which demonstrated very high measurement accuracy for thicknesses of up to a centimeter. This approach should significantly enhance the power of nanotechnology in medical optical imaging by expanding the imaging information from 2-dimensional to real 3-dimensional.
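
    As a rough illustration of how a G/R-ratio-to-depth calibration of this kind could be inverted, the sketch below assumes, purely for illustration, that ln(G/R) falls off approximately linearly with depth; the calibration numbers are placeholders, not the phantom data from the study.

    ```python
    import numpy as np

    # Hypothetical phantom calibration data (illustration only):
    # depths in mm and measured 540/650 nm (G/R) luminescence ratios.
    depth_mm = np.array([1.0, 3.0, 5.0, 8.0, 12.0])
    gr_ratio = np.array([2.1, 1.4, 0.95, 0.55, 0.28])

    # Assume ln(G/R) decreases roughly linearly with depth
    # (Beer-Lambert-like attenuation); fit the two calibration parameters.
    slope, intercept = np.polyfit(depth_mm, np.log(gr_ratio), 1)

    def estimate_depth(gr):
        """Invert the calibration to estimate tissue depth from a G/R ratio."""
        return (np.log(gr) - intercept) / slope

    print(estimate_depth(1.0))  # depth at which the G and R intensities are equal
    ```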

  11. Penetration Depth Measurement of Near-Infrared Hyperspectral Imaging Light for Milk Powder.

    PubMed

    Huang, Min; Kim, Moon S; Chao, Kuanglin; Qin, Jianwei; Mo, Changyeun; Esquerre, Carlos; Delwiche, Stephen; Zhu, Qibing

    2016-01-01

    The increasingly common application of the near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying penetration depth of NIR hyperspectral imaging light for milk powder. Hyperspectral NIR reflectance images were collected for eight different milk powder products that included five brands of non-fat milk powder and three brands of whole milk powder. For each milk powder, five different powder depths ranging from 1 mm to 5 mm were prepared on top of a base layer of melamine, to test spectral-based detection of the melamine through the milk. The relationship between the NIR reflectance spectra (937.5-1653.7 nm) and the penetration depth was investigated by means of the partial least squares-discriminant analysis (PLS-DA) technique to classify pixels as being milk-only or a mixture of milk and melamine. With increasing milk depth, classification model accuracy gradually decreased. The results from the 1-mm, 2-mm and 3-mm models showed that the average classification accuracy of the validation set for milk-melamine samples was reduced from 99.86% down to 94.93% as the milk depth increased from 1 mm to 3 mm. As the milk depth increased to 4 mm and 5 mm, model performance deteriorated further to accuracies as low as 81.83% and 58.26%, respectively. The results suggest that a 2-mm sample depth is recommended for the screening/evaluation of milk powders using an online NIR hyperspectral imaging system similar to that used in this study. PMID:27023555
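
    A minimal sketch of a PLS-DA pixel classifier of the kind described (PLS regression on a binary milk-only vs. milk-over-melamine label, thresholded at 0.5), using scikit-learn. The data shapes, random spectra, and number of latent variables are placeholders, not the study's settings.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # X: per-pixel NIR reflectance spectra (n_pixels x n_wavelengths, e.g. the
    # 937.5-1653.7 nm band); y: 1 for milk-over-melamine pixels, 0 for milk-only.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 125))          # placeholder spectra
    y = rng.integers(0, 2, size=200)         # placeholder labels

    pls = PLSRegression(n_components=10)     # number of latent variables is a tuning choice
    pls.fit(X, y.astype(float))

    # PLS-DA: threshold the continuous PLS prediction at 0.5 to get class labels.
    y_pred = (pls.predict(X).ravel() > 0.5).astype(int)
    accuracy = (y_pred == y).mean()
    ```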

  12. Penetration Depth Measurement of Near-Infrared Hyperspectral Imaging Light for Milk Powder

    PubMed Central

    Huang, Min; Kim, Moon S.; Chao, Kuanglin; Qin, Jianwei; Mo, Changyeun; Esquerre, Carlos; Delwiche, Stephen; Zhu, Qibing

    2016-01-01

    The increasingly common application of the near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying penetration depth of NIR hyperspectral imaging light for milk powder. Hyperspectral NIR reflectance images were collected for eight different milk powder products that included five brands of non-fat milk powder and three brands of whole milk powder. For each milk powder, five different powder depths ranging from 1 mm to 5 mm were prepared on top of a base layer of melamine, to test spectral-based detection of the melamine through the milk. The relationship between the NIR reflectance spectra (937.5–1653.7 nm) and the penetration depth was investigated by means of the partial least squares-discriminant analysis (PLS-DA) technique to classify pixels as being milk-only or a mixture of milk and melamine. With increasing milk depth, classification model accuracy gradually decreased. The results from the 1-mm, 2-mm and 3-mm models showed that the average classification accuracy of the validation set for milk-melamine samples was reduced from 99.86% down to 94.93% as the milk depth increased from 1 mm to 3 mm. As the milk depth increased to 4 mm and 5 mm, model performance deteriorated further to accuracies as low as 81.83% and 58.26%, respectively. The results suggest that a 2-mm sample depth is recommended for the screening/evaluation of milk powders using an online NIR hyperspectral imaging system similar to that used in this study. PMID:27023555

  13. Depth-sensitive subsurface imaging of polymer nanocomposites using second harmonic Kelvin probe force microscopy.

    PubMed

    Castañeda-Uribe, Octavio Alejandro; Reifenberger, Ronald; Raman, Arvind; Avila, Alba

    2015-03-24

    We study the depth sensitivity and spatial resolution of subsurface imaging of polymer nanocomposites using second harmonic mapping in Kelvin Probe Force Microscopy (KPFM). This method allows the visualization of the clustering and percolation of buried Single Walled Carbon Nanotubes (SWCNTs) via capacitance gradient (∂C/∂z) maps. We develop a multilayered sample where thin layers of neat Polyimide (PI) (∼80 nm per layer) are sequentially spin-coated on well-dispersed SWCNT/Polyimide (PI) nanocomposite films. The multilayer nanocomposite system allows the acquisition of ∂C/∂z images of three-dimensional percolating networks of SWCNTs at different depths in the same region of the sample. We detect CNTs at a depth of ∼430 nm, and notice that the spatial resolution progressively deteriorates with increasing depth of the buried CNTs. Computational trends of ∂C/∂z vs CNT depth correlate the sensitivity and depth resolution with field penetration and spreading, and enable a possible approach to three-dimensional subsurface structure reconstruction. The results open the door to nondestructive, three-dimensional tomography and nanometrology techniques for nanocomposite applications. PMID:25591106

  14. Depth elemental imaging of forensic samples by confocal micro-XRF method.

    PubMed

    Nakano, Kazuhiko; Nishi, Chihiro; Otsuki, Kazunori; Nishiwaki, Yoshinori; Tsuji, Kouichi

    2011-05-01

    Micro-XRF is a significant tool for the analysis of small regions. A micro-X-ray beam can be created in the laboratory by various focusing X-ray optics. Previously, nondestructive 3D-XRF analysis had not been easy because of the high penetration of fluorescent X-rays emitted from within the sample. A recently developed confocal micro-XRF technique combined with polycapillary X-ray lenses enables depth-selective analysis. In this paper, we applied a new tabletop confocal micro-XRF system to analyze several forensic samples, that is, multilayered automotive paint fragments and leather samples, for use in criminalistics. Elemental depth profiles and mapping images of forensic samples were successfully obtained by the confocal micro-XRF technique. Multilayered structures can be distinguished in forensic samples by their elemental depth profiles. However, it was found that some leather sheets exhibited heterogeneous distribution. To confirm the validity, the result of a conventional micro-XRF of the cross section was compared with that of the confocal micro-XRF. The results obtained by the confocal micro-XRF system were in approximate agreement with those obtained by the conventional micro-XRF. Elemental depth imaging was performed on the paint fragments and leather sheets to confirm the homogeneity of the respective layers of the sample. The depth images of the paint fragment showed homogeneous distribution in each layer except for Fe and Zn. In contrast, several components in the leather sheets were predominantly localized. PMID:21438498

  15. Evaluation of optical imaging and spectroscopy approaches for cardiac tissue depth assessment

    SciTech Connect

    Lin, B; Matthews, D; Chernomordik, V; Gandjbakhche, A; Lane, S; Demos, S G

    2008-02-13

    NIR light scattering from ex vivo porcine cardiac tissue was investigated to understand how imaging or point measurement approaches may assist development of methods for tissue depth assessment. Our results indicate an increase in average image intensity as thickness increases up to approximately 2 mm. In a dual-fiber spectroscopy configuration, sensitivity extended up to approximately 3 mm, increasing to 6 mm when the spectral ratio between selected wavelengths was used. Preliminary Monte Carlo results provided a reasonable fit to the experimental data.

  16. Removing the depth-degeneracy in optical frequency domain imaging with frequency shifting

    PubMed Central

    Yun, S. H.; Tearney, G. J.; de Boer, J. F.; Bouma, B. E.

    2009-01-01

    A novel technique using an acousto-optic frequency shifter in optical frequency domain imaging (OFDI) is presented. The frequency shift eliminates the ambiguity between positive and negative differential delays, effectively doubling the interferometric ranging depth while avoiding image cross-talk. A signal processing algorithm is demonstrated to accommodate nonlinearity in the tuning slope of the wavelength-swept OFDI laser source. PMID:19484034

  17. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation that coincides with convergence. Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  18. Using a piezoelectric fiber stretcher to remove the depth ambiguity in optical Fourier domain imaging

    NASA Astrophysics Data System (ADS)

    Vergnole, Sébastien; Lamouche, Guy; Dufour, Marc; Gauthier, Bruno

    2007-07-01

    This paper reports the study of an Optical Fourier Domain Imaging (OFDI) setup for optical coherence tomography. One of the main drawbacks of OFDI is its inability to differentiate positive and negative depths. Some setups have already been proposed to remove this depth ambiguity by introducing a modulation by means of electro-optic or acousto-optic modulators. In our setup, we implement a piezoelectric fiber stretcher to generate a periodic phase shift between successive A-scans, thus introducing a transverse modulation. The depth ambiguity is then resolved by performing a Fourier treatment in the transverse direction before processing the data in the axial direction. This approach is similar to the B-M mode scanning already proposed for spectral-domain OCT, but with a more efficient experimental setup. We discuss the advantages and the drawbacks of our technique compared to the technique based on acousto-optic modulators by comparing images of an onion obtained with both techniques.

  19. Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera.

    PubMed

    Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Chang-Kun; Lee, Byoungho

    2015-12-10

    In this paper, we develop a real-time depth-controllable integral imaging system. With a high-frame-rate camera and a focus-controllable lens, light fields from various depth ranges can be captured. According to the image plane of the light field camera, the objects in virtual and real space are recorded simultaneously. The captured light field information is converted to the elemental image in real time without pseudoscopic problems. In addition, we derive characteristics and limitations of the light field camera as a 3D broadcasting capturing device with precise geometric optics. With further analysis, the implemented system provides more accurate light fields than existing devices without depth distortion. We adapt an f-number matching method at the capture and display stages to record a more exact light field and to solve depth distortion, respectively. The algorithm allows the users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method offers the possibility of a handheld real-time 3D broadcasting system that is cheaper and more practical than previous methods. PMID:26836855

  20. Two-photon instant structured illumination microscopy improves the depth penetration of super-resolution imaging in thick scattering samples

    PubMed Central

    Winter, Peter W.; York, Andrew G.; Nogare, Damian Dalle; Ingaramo, Maria; Christensen, Ryan; Chitnis, Ajay; Patterson, George H.; Shroff, Hari

    2014-01-01

    Fluorescence imaging methods that achieve spatial resolution beyond the diffraction limit (super-resolution) are of great interest in biology. We describe a super-resolution method that combines two-photon excitation with structured illumination microscopy (SIM), enabling three-dimensional interrogation of live organisms with ~150 nm lateral and ~400 nm axial resolution, at frame rates of ~1 Hz. By performing optical rather than digital processing operations to improve resolution, our microscope permits super-resolution imaging with no additional cost in acquisition time or phototoxicity relative to the point-scanning two-photon microscope upon which it is based. Our method provides better depth penetration and inherent optical sectioning than all previously reported super-resolution SIM implementations, enabling super-resolution imaging at depths exceeding 100 μm from the coverslip surface. The capability of our system for interrogating thick live specimens at high resolution is demonstrated by imaging whole nematode embryos and larvae, and tissues and organs inside zebrafish embryos. PMID:25485291

  1. Improving Resolution and Depth of Astronomical Observations via Modern Mathematical Methods for Image Analysis

    NASA Astrophysics Data System (ADS)

    Castellano, M.; Ottaviani, D.; Fontana, A.; Merlin, E.; Pilo, S.; Falcone, M.

    2015-09-01

    In recent years, modern mathematical methods for image analysis have led to a revolution in many fields, from computer vision to scientific imaging. However, some recently developed image processing techniques successfully exploited by other sectors have rarely, if ever, been tested on astronomical observations. We present here tests of two classes of variational image enhancement techniques, "structure-texture decomposition" and "super-resolution", showing that they are effective in improving the quality of observations. Structure-texture decomposition makes it possible to recover faint sources previously hidden by background noise, effectively increasing the depth of available observations. Super-resolution yields a higher-resolution and better-sampled image out of a set of low-resolution frames, thus mitigating problems in data analysis arising from the difference in resolution/sampling between different instruments, as in the case of the EUCLID VIS and NIR imagers.
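
    One widely used variational route to a structure-texture split is total-variation (ROF-style) denoising, where the denoised image is taken as the structure component and the residual as the texture component. The sketch below uses scikit-image and is only an illustrative stand-in for whichever variational model the authors actually employ.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def structure_texture_split(image, weight=0.1):
        """Split an image into a piecewise-smooth 'structure' part and an
        oscillatory 'texture/noise' residual via total-variation denoising.
        The regularization weight is a tuning choice."""
        img = np.asarray(image, dtype=float)
        structure = denoise_tv_chambolle(img, weight=weight)
        texture = img - structure
        return structure, texture
    ```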

  2. Analysis of airborne imaging spectrometer data for the Ruby Mountains, Montana, by use of absorption-band-depth images

    NASA Technical Reports Server (NTRS)

    Brickey, David W.; Crowley, James K.; Rowan, Lawrence C.

    1987-01-01

    Airborne Imaging Spectrometer-1 (AIS-1) data were obtained for an area of amphibolite-grade metamorphic rocks that have moderate rangeland vegetation cover. Although rock exposures are sparse and patchy at this site, soils are visible through the vegetation and typically comprise 20 to 30 percent of the surface area. Channel-averaged band-depth images were produced for diagnostic soil and rock absorption bands. Sets of three such images were combined to produce color composite band-depth images. This relatively simple approach did not require extensive calibration efforts and was effective for discerning a number of spectrally distinctive rocks and soils, including soils having high talc concentrations. The results show that the high spectral and spatial resolution of AIS-1 and future sensors holds considerable promise for mapping mineral variations in soil, even in moderately vegetated areas.

  3. Development of Extended-Depth Swept Source Optical Coherence Tomography for Applications in Ophthalmic Imaging of the Anterior and Posterior Eye

    NASA Astrophysics Data System (ADS)

    Dhalla, Al-Hafeez Zahir

    Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides micron-scale resolution of tissue micro-structure over depth ranges of several millimeters. This imaging technique has had a profound effect on the field of ophthalmology, wherein it has become the standard of care for the diagnosis of many retinal pathologies. Applications of OCT in the anterior eye, as well as for imaging of coronary arteries and the gastro-intestinal tract, have also shown promise, but have not yet achieved widespread clinical use. The usable imaging depth of OCT systems is most often limited by one of three factors: optical attenuation, inherent imaging range, or depth-of-focus. The first of these, optical attenuation, stems from the limitation that OCT only detects singly-scattered light. Thus, beyond a certain penetration depth into turbid media, essentially all of the incident light will have been multiply scattered, and can no longer be used for OCT imaging. For many applications (especially retinal imaging), optical attenuation is the most restrictive of the three imaging depth limitations. However, for some applications, especially anterior segment, cardiovascular (catheter-based) and GI (endoscopic) imaging, the usable imaging depth is often not limited by optical attenuation, but rather by the inherent imaging depth of the OCT systems. This inherent imaging depth, which is specific to Fourier Domain OCT, arises due to two factors: sensitivity fall-off and the complex conjugate ambiguity. Finally, due to the trade-off between lateral resolution and axial depth-of-focus inherent in diffractive optical systems, additional depth limitations sometimes arise in either high lateral resolution or extended depth OCT imaging systems. The depth-of-focus limitation is most apparent in applications such as adaptive optics (AO-) OCT imaging of the retina, and extended depth imaging of the ocular anterior segment. In this dissertation, techniques for

  4. Analytic expression of fluorescence ratio detection correlates with depth in multi-spectral sub-surface imaging

    PubMed Central

    Leblond, F; Ovanesyan, Z; Davis, S C; Valdés, P A; Kim, A; Hartov, A; Wilson, B C; Pogue, B W; Paulsen, K D; Roberts, D W

    2016-01-01

    Here we derived analytical solutions to diffuse light transport in biological tissue based on spectral deformation of diffused near-infrared measurements. These solutions provide a closed-form mathematical expression which predicts that the depth of a fluorescent molecule distribution is linearly related to the logarithm of the ratio of fluorescence at two different wavelengths. The slope and intercept values of the equation depend on the intrinsic values of absorption and reduced scattering of tissue. This linear behavior occurs if the following two conditions are satisfied: the depth is beyond a few millimeters, and the tissue is relatively homogenous. We present experimental measurements acquired with a broad-beam non-contact multi-spectral fluorescence imaging system using a hemoglobin-containing diffusive phantom. Preliminary results confirm that a significant correlation exists between the predicted depth of a distribution of protoporphyrin IX (PpIX) molecules and the measured ratio of fluorescence at two different wavelengths. These results suggest that depth assessment of fluorescence contrast can be achieved in fluorescence-guided surgery to allow improved intra-operative delineation of tumor margins. PMID:21971201
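
    A minimal sketch of the linear log-ratio relation described above, written as depth = a·ln(F_λ1/F_λ2) + b. In the paper the slope and intercept follow from the tissue absorption and reduced scattering, whereas here they are simply fitted from hypothetical calibration measurements; the variable names and data are placeholders.

    ```python
    import numpy as np

    def fit_ratio_depth_model(depths_mm, f_lambda1, f_lambda2):
        """Fit the linear relation  depth = a * ln(F1/F2) + b  using
        calibration measurements at known depths (e.g. a PpIX phantom)."""
        log_ratio = np.log(np.asarray(f_lambda1) / np.asarray(f_lambda2))
        a, b = np.polyfit(log_ratio, depths_mm, 1)
        return a, b

    def predict_depth(a, b, f_lambda1, f_lambda2):
        """Estimate the depth of the fluorescent inclusion from a new ratio."""
        return a * np.log(f_lambda1 / f_lambda2) + b
    ```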

  5. Quantitative comparison of wavelength dependence on penetration depth and imaging contrast for ultrahigh-resolution optical coherence tomography using supercontinuum sources at five wavelength regions

    NASA Astrophysics Data System (ADS)

    Ishida, S.; Nishizawa, N.

    2012-01-01

    Optical coherence tomography (OCT) is a non-invasive optical imaging technology for micron-scale cross-sectional imaging of biological tissue and materials. We have been investigating ultrahigh-resolution optical coherence tomography (UHR-OCT) using fiber-based supercontinuum sources. Although ultrahigh longitudinal resolution was achieved in several center wavelength regions, the limited penetration depth remains a serious limitation for some applications. To realize ultrahigh resolution and deep penetration depth simultaneously, it is necessary to choose the proper wavelength to maximize the light penetration and enhance the image contrast at deeper depths. Recently, we demonstrated the wavelength dependence of penetration depth and imaging contrast for ultrahigh-resolution OCT in the 0.8 μm, 1.3 μm, and 1.7 μm wavelength ranges. In this paper, we additionally used SC sources at 1.06 μm and 1.55 μm and investigated the wavelength dependence of UHR-OCT in five wavelength regions. The image contrast and penetration depth are discussed in terms of the scattering coefficient and water absorption of the samples. Almost the same optical characteristics in longitudinal and lateral resolution, sensitivity, and incident optical power were demonstrated in all wavelength regions. We confirmed the enhancement of image contrast and decreased ambiguity of deeper epithelioid structure in the longer wavelength regions.

  6. Visualizing depth and thickness of a local blood region in skin tissue using diffuse reflectance images.

    PubMed

    Nishidate, Izumi; Maeda, Takaaki; Aizu, Yoshihisa; Niizeki, Kyuichi

    2007-01-01

    A method is proposed for visualizing the depth and thickness distribution of a local blood region in skin tissue using diffuse reflectance images at three isosbestic wavelengths of hemoglobin: 420, 585, and 800 nm. Monte Carlo simulation of light transport specifies a relation among optical densities, depth, and thickness of the region under given concentrations of melanin in epidermis and blood in dermis. Experiments with tissue-like agar gel phantoms indicate that a simple circular blood region embedded in scattering media can be visualized with errors of 6% for the depth and 22% for the thickness relative to the given values. In-vivo measurements on human veins demonstrate that results from the proposed method agree within errors of 30 and 19% for the depth and thickness, respectively, with values obtained from the same veins by the conventional ultrasound technique. Numerical investigation with Monte Carlo simulation of light transport in skin tissue is also performed to discuss the effects on the results of deviations of the scattering coefficients of the skin tissue and the absorption coefficient of the local blood region from their typical values. The depth of the local blood region is over- or underestimated as the scattering coefficients of epidermis and dermis decrease or increase, respectively, while the thickness of the region agrees well with the given values below 1.2 mm. Decreases or increases of hematocrit value give over- or underestimation of the thickness, but they have almost no influence on the depth. PMID:17994894

  7. 24 mm depth range discretely swept optical frequency domain imaging in dentistry

    NASA Astrophysics Data System (ADS)

    Kakuma, Hideo; Choi, DongHak; Furukawa, Hiroyuki; Hiro-Oka, Hideaki; Ohbayashi, Kohji

    2009-02-01

    A large depth range is needed if optical coherence tomography (OCT) is to be used to observe multiple teeth simultaneously. A discretely swept optical frequency domain imaging system with a 24-mm depth range was made by using a superstructure-grating distributed Bragg reflector (SSG-DBR) laser as the light source and setting the frequency-step interval to be 3.13 GHz (Δλ ≈ 0.026 nm). The swept wavelength range was 40 nm centered at 1580 nm, the resolution was 29 μm, and the A-scan rate was 1.3 kHz. Application of the OCT system to a dental phantom was demonstrated.
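
    The 24 mm figure is consistent with the standard Fourier-domain relation between the frequency-step interval and the unambiguous depth range, z_max = c/(4·Δν), ignoring the sample refractive index; a quick check:

    ```python
    # Maximum (unambiguous) imaging depth of a stepped-frequency OFDI system.
    c = 2.998e8            # speed of light in vacuum, m/s
    delta_nu = 3.13e9      # frequency-step interval, Hz

    z_max = c / (4.0 * delta_nu)
    print(f"z_max = {z_max * 1e3:.1f} mm")   # ~23.9 mm, matching the quoted 24 mm range
    ```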

  8. Depth-resolved holographic optical coherence imaging using a high-sensitivity photorefractive polymer device

    NASA Astrophysics Data System (ADS)

    Salvador, M.; Prauzner, J.; Köber, S.; Meerholz, K.; Jeong, K.; Nolte, D. D.

    2008-12-01

    We present coherence-gated holographic imaging using a highly sensitive photorefractive (PR) polymer composite as the recording medium. Due to the high sensitivity of the composite, holographic recording at intensities as low as 5 mW/cm2 allowed a frame exposure time of only 500 ms. Motivated by regenerative medical applications, we demonstrate optical depth sectioning of a polymer foam for use as a cell culture matrix. An axial resolution of 18 μm and a transverse resolution of 30 μm up to a depth of 600 μm were obtained using an off-axis recording geometry.

  9. Theoretical study of multispectral structured illumination for depth resolved imaging of non-stationary objects: focus on retinal imaging

    PubMed Central

    Gruppetta, Steve; Chetty, Sabah

    2011-01-01

    Current implementations of structured illumination microscopy for depth-resolved (three-dimensional) imaging have limitations that restrict their use; specifically, they are not applicable to non-stationary objects imaged with relatively poor condenser optics and in non-fluorescent mode. This includes in-vivo retinal imaging. A novel implementation of structured illumination microscopy is presented that overcomes these issues. A three-wavelength illumination technique is used to obtain the three sub-images required for structured illumination simultaneously rather than sequentially, enabling use on non-stationary objects. An illumination method is presented that produces an incoherent pattern through interference, bypassing the limitations imposed by the aberrations of the condenser lens and thus enabling axial sectioning in non-fluorescent imaging. The application to retinal imaging can lead to a device with similar sectioning capabilities to confocal microscopy without the optical complexity (and cost) required for scanning systems. PMID:21339871
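
    For reference, the classic square-law reconstruction that recovers an optically sectioned image from three grid-phase sub-images is a one-liner; the sketch below shows that standard demodulation and is not necessarily the exact reconstruction used in the three-wavelength scheme.

    ```python
    import numpy as np

    def sim_optical_section(i1, i2, i3):
        """Square-law demodulation of three structured-illumination sub-images
        (nominally 0, 2*pi/3 and 4*pi/3 grid phases); inputs should be
        floating-point images of identical shape."""
        i1, i2, i3 = (np.asarray(i, dtype=float) for i in (i1, i2, i3))
        return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    ```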

  10. Image formation in vibro-acoustography with depth-of-field effects.

    PubMed

    Silva, Glauber T; Frery, Alejandro C; Fatemi, Mostafa

    2006-07-01

    We study the image formation of vibro-acoustography systems based on a concave sector array transducer taking into account depth-of-field effects. The system point-spread function (PSF) is defined in terms of the acoustic emission of a point-target in response to the dynamic radiation stress of ultrasound. The PSF on the focal plane and the axis of the transducer are presented. To extend the obtained PSF to the 3D-space, we assume it is a separable function in the axial direction and the focal plane of the transducer. In this model, an image is formed through the 3D convolution of the PSF with an object function. Experimental vibro-acoustography images of a breast phantom with lesion-like inclusions were compared with simulated images. Results show that the experimental images are in good agreement with the proposed model. PMID:16949793
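
    The separable-PSF image-formation model described above can be simulated directly as a 3D convolution; the sketch below is a minimal illustration with SciPy, with the PSF factors left as user-supplied arrays rather than the paper's measured point-spread function.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def simulate_va_image(obj, psf_lateral, psf_axial):
        """Form a simulated vibro-acoustography volume as the 3-D convolution
        of a separable PSF with the object function.
        obj          : 3-D object function, shape (z, y, x)
        psf_lateral  : 2-D focal-plane PSF, shape (y, x)
        psf_axial    : 1-D axial PSF, shape (z,)
        """
        psf_3d = psf_axial[:, None, None] * psf_lateral[None, :, :]
        return fftconvolve(obj, psf_3d, mode="same")
    ```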

  11. In-vivo full depth of eye imaging spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Dai, Cuixia; Zhou, Chuanqing; Jiao, Shuliang; Xi, Peng; Ren, Qiushi

    2011-09-01

    It is necessary to apply spectral-domain optical coherence tomography (SD-OCT) to image the whole eye segment for practical clinical application, but the imaging depth of SD-OCT is limited by the spectral resolution of the spectrometer. To date, no results of such research have been reported. In our study, a new dual-channel, dual-focus OCT system is adopted to image the whole eye segment. The cornea and the crystalline lens are simultaneously imaged by using full-range complex spectral-domain OCT in one channel, while the retina is detected by the other. The new system was successfully tested by imaging a volunteer's eye in vivo. The preliminary results presented in this paper demonstrate the feasibility of this approach.

  12. High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM

    PubMed Central

    Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne

    2015-01-01

    Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids. PMID:26576666

  13. High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM

    NASA Astrophysics Data System (ADS)

    Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne

    2015-11-01

    Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids.

  14. Full-range imaging of eye accommodation by high-speed long-depth range optical frequency domain imaging

    PubMed Central

    Furukawa, Hiroyuki; Hiro-Oka, Hideaki; Satoh, Nobuyuki; Yoshimura, Reiko; Choi, Donghak; Nakanishi, Motoi; Igarashi, Akihito; Ishikawa, Hitoshi; Ohbayashi, Kohji; Shimizu, Kimiya

    2010-01-01

    We describe a high-speed long-depth range optical frequency domain imaging (OFDI) system employing a long-coherence length tunable source and demonstrate dynamic full-range imaging of the anterior segment of the eye including from the cornea surface to the posterior capsule of the crystalline lens with a depth range of 12 mm without removing complex conjugate image ambiguity. The tunable source spanned from 1260 to 1360 nm with an average output power of 15.8 mW. The fast A-scan rate of 20,000 per second provided dynamic OFDI and dependence of the whole anterior segment change on time following abrupt relaxation from the accommodated to the relaxed status, which was measured for a healthy eye and that with an intraocular lens. PMID:21258564

  15. Double peacock eye optical element for extended focal depth imaging with ophthalmic applications

    NASA Astrophysics Data System (ADS)

    Romero, Lenny A.; Millán, María S.; Jaroszewicz, Zbigniew; Kolodziejczyk, Andrzej

    2012-04-01

    The aged human eye is commonly affected by presbyopia, and therefore, it gradually loses its capability to form images of objects placed at different distances. Extended depth of focus (EDOF) imaging elements can overcome this inability, despite the introduction of a certain amount of aberration. This paper evaluates the EDOF imaging performance of the so-called peacock eye phase diffractive element, which focuses an incident plane wave into a segment of the optical axis and explores the element's potential use for ophthalmic presbyopia compensation optics. Two designs of the element are analyzed: the single peacock eye, which produces one focal segment along the axis, and the double peacock eye, which is a spatially multiplexed element that produces two focal segments with partial overlapping along the axis. The performances of the peacock eye elements are compared with those of multifocal lenses through numerical simulations as well as optical experiments in the image space. The results demonstrate that the peacock eye elements form sharper images along the focal segment than the multifocal lenses and, therefore, are more suitable for presbyopia compensation. The extreme points of the depth of field in the object space, which represent the remote and the near object points, have been experimentally obtained for both the single and the double peacock eye optical elements. The double peacock eye element has better imaging quality for relatively short and intermediate distances than the single peacock eye, whereas the latter seems better for far distance vision.

  16. An integral imaging method for depth extraction with lens array in an optical tweezer system

    NASA Astrophysics Data System (ADS)

    Wang, Shulu; Liu, Wei-Wei; Wang, Anting; Li, Yinmei; Ming, Hai

    2014-10-01

    In this paper, a new integral imaging method is proposed for depth extraction in an optical tweezer system. A mutual-coherence stereo-matching algorithm is theoretically analyzed and demonstrated to be feasible by virtual simulation. In our design, the optical tweezer technique is combined with integral imaging in a single microscopy system by inserting a lens array into the optical train. On one hand, the optical tweezer subsystem is built based on the modulated light field from a solid-state laser, and the strongly focused beam forms a light trap to capture tiny specimens. On the other hand, through parameter optimization, the microscopic integral imaging subsystem is composed of a microscope objective, a lens array (150x150 array with 0.192 mm unit size and 9 mm focal length) and a single lens reflex (SLR) camera. Pre-magnified by the microscope objective, the specimens form multiple images through the lens array. A single photograph consisting of a series of sub-images records perspective views of the specimens. The differences between adjacent sub-images have been analyzed for depth extraction with the mutual coherence algorithm. The experimental results show that the axial resolution can reach 1 μm⁻¹ and the lateral resolution can reach 2 μm⁻¹.

  17. Salt flank imaging by integrated prestack depth migration of VSP and surface

    NASA Astrophysics Data System (ADS)

    Jang, Seonghyung; Kim, Tae-yeon

    2014-05-01

    Since Vertical Seismic Profile (VSP) data include wavefields which can directly measure physical properties between the surface and geological interfaces, they are usually used for detecting dip, anisotropy, and reflection amplitude or waveform with respect to incidence angle. Though VSP covers only the vicinity of the borehole compared to surface seismic, it gives high resolution and is helpful for finding the precise location of a well in the 3-D image from surface seismic data. VSP data normally have a smaller Fresnel zone and wider bandwidth than surface seismic data due to less absorption of the higher frequencies. This gives high-fidelity reservoir images for effective reservoir monitoring such as 4D time-lapse seismic and carbon capture and storage. Prestack reverse time migration (RTM) is widely used for imaging complex subsurface structures. RTM is a method for imaging the subsurface in the depth domain using the inner product of the source wavefield extrapolated forward in time and the receiver wavefield extrapolated backward in time. Since RTM is applicable to any source-receiver geometry, the same algorithm can be applied to VSP and surface seismic data. In this study, RTM is implemented for integrated depth imaging of walk-away VSP and surface seismic data in order to obtain a high-resolution salt flank image. A synthetic test example includes a schematic flank of a salt body with horizontal layers. The model - 8 km wide by 4 km deep - represents a simple salt body and background, with a velocity of 3.0 km/s for the salt body and a background velocity of 2.0 km/s. The source wavelet is zero-phase with a central frequency of 10 Hz for the surface seismic and 20 Hz for the VSP data. VSP data were recorded in the central borehole located 4.0 km from the left side of the model, and the 151 receivers in the central borehole were on a 20 m spacing between depths of 0.5 km and 3.5 km. We acquired the surface seismic data using 101 surface sources on 40 m spacing between 2.3 km and 6.3 km. The 101 receivers on the
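
    The RTM imaging condition referred to above (the inner product of the forward-extrapolated source wavefield and the backward-extrapolated receiver wavefield) reduces to a zero-lag cross-correlation summed over time. A minimal sketch, assuming the two wavefields have already been computed by a separate wave-equation propagator (not shown):

    ```python
    import numpy as np

    def rtm_image(source_wavefield, receiver_wavefield):
        """Zero-lag cross-correlation imaging condition for prestack RTM.
        Both inputs are (nt, nz, nx) arrays: the source wavefield extrapolated
        forward in time and the receiver wavefield extrapolated backward in
        time. The image at each grid point is their product summed over time."""
        return np.sum(source_wavefield * receiver_wavefield, axis=0)
    ```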

  18. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  19. Improving depth resolution in digital breast tomosynthesis by iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Roth, Erin G.; Kraemer, David N.; Sidky, Emil Y.; Reiser, Ingrid S.; Pan, Xiaochuan

    2015-03-01

    Digital breast tomosynthesis (DBT) is currently enjoying tremendous growth in its application to screening for breast cancer. This is because it addresses a major weakness of mammographic projection imaging; namely, a cancer can be hidden by overlapping fibroglandular tissue structures or the same normal structures can mimic a malignant mass. DBT addresses these issues by acquiring few projections over a limited angle scanning arc that provides some depth resolution. As DBT is a relatively new device, there is potential to improve its performance significantly with improved image reconstruction algorithms. Previously, we reported a variation of adaptive steepest descent - projection onto convex sets (ASD-POCS) for DBT, which employed a finite differencing filter to enhance edges for improving visibility of tissue structures and to allow for volume-of-interest reconstruction. In the present work we present a singular value decomposition (SVD) analysis to demonstrate the gain in depth resolution for DBT afforded by use of the finite differencing filter.

  20. Underwater depth imaging using time-correlated single-photon counting.

    PubMed

    Maccarone, Aurora; McCarthy, Aongus; Ren, Ximing; Warburton, Ryan E; Wallace, Andy M; Moffat, James; Petillot, Yvan; Buller, Gerald S

    2015-12-28

    A depth imaging system, based on the time-of-flight approach and the time-correlated single-photon counting (TCSPC) technique, was investigated for use in highly scattering underwater environments. The system comprised a pulsed supercontinuum laser source, a monostatic scanning transceiver, with a silicon single-photon avalanche diode (SPAD) used for detection of the returned optical signal. Depth images were acquired in the laboratory at stand-off distances of up to 8 attenuation lengths, using per-pixel acquisition times in the range 0.5 to 100 ms, at average optical powers in the range 0.8 nW to 950 μW. In parallel, a LiDAR model was developed and validated using experimental data. The model can be used to estimate the performance of the system under a variety of scattering conditions and system parameters. PMID:26832050
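
    Converting a TCSPC return time to range under water is a simple time-of-flight calculation once a group index for water is assumed; the sketch below uses a nominal value of 1.33 and is not the paper's calibration.

    ```python
    # Convert a TCSPC photon round-trip time to one-way target range in water.
    C_VACUUM = 2.998e8   # speed of light in vacuum, m/s
    N_WATER = 1.33       # assumed nominal group index of water

    def range_from_tof(t_seconds):
        """Round-trip time-of-flight to one-way distance in water."""
        return (C_VACUUM / N_WATER) * t_seconds / 2.0

    print(range_from_tof(15e-9))   # ~1.7 m for a 15 ns round trip
    ```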

  1. Pixel-based parametric source depth map for Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Altabella, L.; Boschi, F.; Spinelli, A. E.

    2016-01-01

    Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (e.g., 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multi-view methods. Difficulties in the convergence of 3D algorithms can discourage use of this technique for obtaining the depth and intensity of the source. For these reasons, we developed a faster, corrected 2D approach based on multispectral acquisitions to obtain the source depth and intensity using pixel-based fitting of the source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method for obtaining the parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for MC simulations and 5-6% for experimental data. Using this method we are able to obtain reliable information about the source depth of Cerenkov luminescence with a simple and flexible procedure.

  2. Three-dimensional passive millimeter-wave imaging and depth estimation

    NASA Astrophysics Data System (ADS)

    Yeom, Seokwon; Lee, Dong-Su; Lee, Hyoung; Son, Jung-Young; Guschin, Vladimir P.

    2010-04-01

    We address three-dimensional passive millimeter-wave (MMW) imaging and depth estimation for remote objects. MMW imaging is very useful in harsh environments such as fog, smoke, snow, sandstorms, and drizzle. Its ability to penetrate clothing provides a great advantage to security and defense systems. In this paper, a feature-based passive MMW stereo-matching process is proposed to estimate the distance of a concealed object under clothing. It is shown that the proposed method can estimate the distance of the concealed object.

  3. Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy

    PubMed Central

    Quirin, Sean; Vladimirov, Nikita; Yang, Chao-Tsung; Peterka, Darcy S.; Yuste, Rafael; Ahrens, Misha B.

    2016-01-01

    Increasing the volumetric imaging speed of light-sheet microscopy will improve its ability to detect fast changes in neural activity. Here, a system is introduced for brain-wide imaging of neural activity in the larval zebrafish by coupling structured illumination with cubic phase extended depth-of-field (EDoF) pupil encoding. This microscope enables faster light-sheet imaging and facilitates arbitrary plane scanning—removing constraints on acquisition speed, alignment tolerances, and physical motion near the sample. The usefulness of this method is demonstrated by performing multi-plane calcium imaging in the fish brain with a 416 × 832 × 160 µm field of view at 33 Hz. The optomotor response behavior of the zebrafish is monitored at high speeds, and time-locked correlations of neuronal activity are resolved across its brain. PMID:26974063

  4. Depth-weighted Inverse and Imaging methods to study the Earth's Crust in Southern Italy

    NASA Astrophysics Data System (ADS)

    Fedi, M.

    2012-04-01

    Inversion means solving a set of geophysical equations for a spatial distribution of parameters (or functions) which could have produced an observed set of measurements. Imaging is instead a transformation of magnetometric data into a scaled 3D model resembling the true geometry of subsurface geologic features. While inversion theory allows many additional constraints, such as depth weighting, positivity, physical property bounds, smoothness, and focusing, imaging methods for magnetic data derived under different theories are all found to reduce to either simple upward continuation or a depth-weighted upward continuation, with weights expressed in the general form of a power law of the altitude with half of the structural index as the exponent. Note, however, that specifying the appropriate level of depth weighting is not just a problem in these imaging techniques but should also be considered in standard inversion methods. We will also investigate the relationship between imaging methods and multiscale methods. A multiscale analysis is well suited to studying potential fields because the way potential fields convey source information is strictly related to the scale of analysis. The stability of multiscale methods results from mixing, in a single operator, the wavenumber low-pass behaviour of the upward continuation transformation of the field with the enhancement high-pass properties of n-order derivative transformations. So, the complex reciprocal interference of several field components may be efficiently handled at several scales of the analysis, and the depth to the sources may be estimated together with the homogeneity degrees of the field. We will describe the main aspects of both kinds of interpretation in the study of multi-source models and apply either inversion or imaging techniques to the magnetic data of complex crustal areas of Southern Italy, such as the Campanian volcanic district and the Southern Apennines. The studied area includes a Pleistocene

  5. Noninvasive determination of burn depth in children by digital infrared thermal imaging

    NASA Astrophysics Data System (ADS)

    Medina-Preciado, Jose David; Kolosovas-Machuca, Eleazar Samuel; Velez-Gomez, Ezequiel; Miranda-Altamirano, Ariel; González, Francisco Javier

    2013-06-01

    Digital infrared thermal imaging is used to assess noninvasively the severity of burn wounds in 13 pediatric patients. A delta-T (ΔT) parameter obtained by subtracting the temperature of a healthy contralateral region from the temperature of the burn wound is compared with the burn depth measured histopathologically. Thermal imaging results show that superficial dermal burns (IIa) show increased temperature compared with their contralateral healthy region, while deep dermal burns (IIb) show a lower temperature than their contralateral healthy region. This difference in temperature is statistically significant (p<0.0001) and provides a way of distinguishing deep dermal from superficial dermal burns. These results show that digital infrared thermal imaging could be used as a noninvasive procedure to assess burn wounds. An additional advantage of using thermal imaging, which can image a large skin surface area, is that it can be used to identify regions with different burn depths and estimate the size of the grafts needed for deep dermal burns.
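
    A minimal sketch of the ΔT parameter and the sign-based reading reported above (IIa burns warmer than the contralateral healthy site, IIb burns cooler); any clinical threshold beyond the sign of ΔT is an assumption not taken from the abstract.

    ```python
    import numpy as np

    def delta_t(burn_roi, contralateral_roi):
        """ΔT parameter: mean temperature of the burn wound minus the mean of
        the mirrored healthy region (both in °C)."""
        return np.mean(burn_roi) - np.mean(contralateral_roi)

    def classify_burn(dt):
        """Sign-based reading of ΔT: superficial dermal (IIa) burns run warmer
        than healthy skin, deep dermal (IIb) burns run cooler. The exact
        clinical decision threshold is not specified here."""
        return "IIa (superficial dermal)" if dt > 0 else "IIb (deep dermal)"
    ```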

  6. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    PubMed

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics such as area and volume can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high-precision commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which once corrected can yield a usable thigh volume measurement. PMID:26928458

  7. Diffuse Optical Imaging and Spectroscopy of the Human Breast for Quantitative Oximetry with Depth Resolution

    NASA Astrophysics Data System (ADS)

    Yu, Yang

    Near-infrared spectral imaging for breast cancer diagnostics and monitoring has been a hot research topic for the past decade. Here we present instrumentation for diffuse optical imaging of breast tissue with a tandem scan of a single source-detector pair with broadband light in transmission geometry for tissue oximetry. The efforts to develop the continuous-wave (CW) domain instrument are described, and a frequency-domain (FD) system is also used to measure the bulk tissue optical properties and the breast thickness distribution. We also describe the efforts to improve the data processing codes in the 2D spatial domain for better noise suppression, contrast enhancement, and spectral analysis. We developed a paired-wavelength approach, which is based on finding pairs of wavelengths that feature the same optical contrast, to quantify the tissue oxygenation for the absorption structures detected in the 2D structural image. A total of eighteen subjects, two of whom had breast cancer in their right breasts, were measured with this hybrid CW/FD instrument and processed with the improved algorithms. We obtained an average tissue oxygenation value of 87% +/- 6% from the healthy breasts, significantly higher than that measured in the diseased breasts (69% +/- 14%) (p < 0.01). For the two diseased breasts, the tumor areas bear hypoxia signatures versus the remainder of the breast, with oxygenation values of 49 +/- 11% (diseased region) vs. 61 +/- 16% (healthy regions) for the breast with invasive ductal carcinoma, and 58 +/- 8% (diseased region) vs. 77 +/- 11% (healthy regions) for ductal carcinoma in situ. Our subjects came from various ethnic/racial backgrounds, and two-thirds of our subjects were less than thirty years old, indicating a potential to apply optical mammography to a broad population. The second part of this thesis covers the topic of depth discrimination, which is lacking with our single source-detector scan system. Based on an off

  8. X-ray imaging using avalanche multiplication in amorphous selenium: Investigation of depth dependent avalanche noise

    SciTech Connect

    Hunt, D. C.; Tanioka, Kenkichi; Rowlands, J. A.

    2007-03-15

    The past decade has seen the swift development of the flat-panel detector (FPD), also known as the active matrix flat-panel imager, for digital radiography. This new technology is applicable to other modalities, such as fluoroscopy, which require the acquisition of multiple images, but could benefit from some improvements. In such applications, where more than one image is acquired, less radiation is available to form each image and amplifier noise becomes a serious problem. Avalanche multiplication in amorphous selenium (a-Se) can provide the necessary amplification prior to readout so as to reduce the effect of the electronic noise of the FPD. However, in direct conversion detectors avalanche multiplication can lead to a new source of gain fluctuation noise called depth-dependent avalanche noise. A theoretical model was developed to understand depth-dependent avalanche noise. Experiments were performed on a direct imaging system implementing avalanche multiplication in a layer of a-Se to validate the theory. For parameters appropriate for a diagnostic imaging FPD for fluoroscopy, the detective quantum efficiency (DQE) was found to drop by as much as 50% with increasing electric field, as predicted by the theoretical model. This drop in DQE can be eliminated by separating the collection and avalanche regions, for example by having a region of low electric field where x rays are absorbed and converted into charge that then drifts into a region of high electric field where the x-ray-generated charge undergoes avalanche multiplication. This means quantum-noise-limited direct-conversion FPDs for low-exposure imaging techniques are a possibility.

  9. Impact of the optical depth of field on cytogenetic image quality.

    PubMed

    Qiu, Yuchen; Chen, Xiaodong; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Wei R; Liu, Hong

    2012-09-01

    In digital pathology, clinical specimen slides are converted into digital images by microscopic image scanners. Since random vibration and mechanical drifting are unavoidable on even high-precision moving stages, the optical depth of field (DOF) of microscopic systems may affect image quality, in particular when using an objective lens with high magnification power. The DOF of a microscopic system was theoretically analyzed and experimentally validated using standard resolution targets under 60× dry and 100× oil objective lenses, respectively. Then cytogenetic samples were imaged in in-focus and out-of-focus states to analyze the impact of DOF on the acquired image quality. For the investigated system equipped with the 60× dry and 100× oil objective lenses, the theoretical estimates of the DOF are 0.855 μm and 0.703 μm, and the measured DOF are 3.0 μm and 1.8 μm, respectively. The observation reveals that the chromosomal bands of metaphase cells are distinguishable when images are acquired up to approximately 1.5 μm or 1 μm out of focus using the 60× dry and 100× oil objective lenses, respectively. The results of this investigation provide important design trade-off parameters for optimizing digital microscopic image scanning systems in the future. PMID:23085918
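
    The abstract does not state the expression behind these theoretical DOF estimates; a commonly used approximation for the total depth of field of a microscope objective, given here only as a plausible reference formula with my own symbols, is

        $d_{\mathrm{tot}} = \dfrac{\lambda\, n}{\mathrm{NA}^{2}} + \dfrac{n}{M \cdot \mathrm{NA}}\, e$

    where $\lambda$ is the illumination wavelength, $n$ the refractive index of the immersion medium, $\mathrm{NA}$ the numerical aperture, $M$ the total magnification, and $e$ the smallest resolvable distance of the detector; none of these symbols or values are quoted from the study itself.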

  10. Mobile phone imaging module with extended depth of focus based on axial irradiance equalization phase coding

    NASA Astrophysics Data System (ADS)

    Sung, Hsin-Yueh; Chen, Po-Chang; Chang, Chuan-Chung; Chang, Chir-Weei; Yang, Sidney S.; Chang, Horng

    2011-01-01

    This paper presents a mobile phone imaging module with extended depth of focus (EDoF) obtained by using axial irradiance equalization (AIE) phase coding. From radiation energy transfer along the optical axis with constant irradiance, the focal depth enhancement solution is derived. We introduce the axial irradiance equalization phase coding to design a two-element 2-megapixel mobile phone lens in order to trade off focus-like aberrations such as field curvature, astigmatism and longitudinal chromatic defocus. The design results produce modulation transfer functions (MTF) and phase transfer functions (PTF) with substantially similar characteristics at different field and defocus positions within the Nyquist pass band. Measurement results are also shown and compared with the design results. Next, for the EDoF mobile phone camera imaging system, we present a digital decoding design method and calculate a minimum mean square error (MMSE) filter. The filter is then applied to correct the consistently blurred images. Last, the blurred and de-blurred images are demonstrated.
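
    The MMSE decoding filter mentioned above is not specified in the abstract; as a rough illustration, a frequency-domain minimum mean square error (Wiener-type) restoration with a known lens OTF and an assumed noise-to-signal power ratio could look like the following sketch in Python (the function name and the NSR value are illustrative assumptions, not the authors' implementation).

        import numpy as np

        def wiener_restore(blurred, otf, nsr=1e-2):
            """Frequency-domain MMSE (Wiener) restoration sketch: G = H* / (|H|^2 + NSR).
            `otf` is the complex optical transfer function of the coded lens sampled on the
            same grid as the image spectrum; `nsr` is an assumed noise-to-signal power ratio."""
            G = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
            return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))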

  11. Airborne imaging spectrometer data of the Ruby Mountains, Montana: Mineral discrimination using relative absorption band-depth images

    USGS Publications Warehouse

    Crowley, J.K.; Brickey, D.W.; Rowan, L.C.

    1989-01-01

    Airborne imaging spectrometer data collected in the near-infrared (1.2-2.4 μm) wavelength range were used to study the spectral expression of metamorphic minerals and rocks in the Ruby Mountains of southwestern Montana. The data were analyzed by using a new data enhancement procedure: the construction of relative absorption band-depth (RBD) images. RBD images, like band-ratio images, are designed to detect diagnostic mineral absorption features, while minimizing reflectance variations related to topographic slope and albedo differences. To produce an RBD image, several data channels near an absorption band shoulder are summed and then divided by the sum of several channels located near the band minimum. RBD images are both highly specific and sensitive to the presence of particular mineral absorption features. Further, the technique does not distort or subdue spectral features as sometimes occurs when using other data normalization methods. By using RBD images, a number of rock and soil units were distinguished in the Ruby Mountains including weathered quartz-feldspar pegmatites, marbles of several compositions, and soils developed over poorly exposed mica schists. The RBD technique is especially well suited for detecting weak near-infrared spectral features produced by soils, which may permit improved mapping of subtle lithologic and structural details in semiarid terrains. The observation of soils rich in talc, an important industrial commodity in the study area, also indicates that RBD images may be useful for mineral exploration.
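
    As a minimal sketch of the RBD construction described above (the band indices are hypothetical placeholders chosen per absorption feature, not the channels used in the study):

        import numpy as np

        def relative_band_depth(cube, shoulder_bands, minimum_bands):
            """RBD image: sum of channels near the absorption band shoulder divided by
            the sum of channels near the band minimum. `cube` is a (rows, cols, bands)
            reflectance array."""
            shoulder = cube[:, :, shoulder_bands].sum(axis=2)
            minimum = cube[:, :, minimum_bands].sum(axis=2)
            return shoulder / np.maximum(minimum, 1e-9)  # guard against division by zero

        # Hypothetical usage: channels 20-22 on the shoulder, 30-32 at the band minimum.
        # rbd = relative_band_depth(cube, [20, 21, 22], [30, 31, 32])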

  12. Three-Dimensional Image Cytometer Based on Widefield Structured Light Microscopy and High-Speed Remote Depth Scanning

    PubMed Central

    Choi, Heejin; Wadduwage, Dushan N.; Tu, Ting Yuan; Matsudaira, Paul; So, Peter T. C.

    2014-01-01

    A high throughput 3D image cytometer has been developed that improves imaging speed by an order of magnitude over current technologies. This imaging speed improvement was realized by combining several key components. First, a depth-resolved image can be rapidly generated using a structured light reconstruction algorithm that requires only two wide-field images, one with uniform illumination and the other with structured illumination. Second, depth scanning is implemented using high-speed remote depth scanning. Finally, the large field of view, high-NA objective lens and the high-pixelation, high-frame-rate sCMOS camera enable high resolution, high sensitivity imaging of a large cell population. This system can image at 800 cells/sec in 3D at submicron resolution, corresponding to imaging 1 million cells in 20 min. The statistical accuracy of this instrument is verified by quantitatively measuring rare cell populations with ratios ranging from 1:1 to 1:10^5. PMID:25352187

  13. Depth-resolved rhodopsin molecular contrast imaging for functional assessment of photoreceptors

    NASA Astrophysics Data System (ADS)

    Liu, Tan; Wen, Rong; Lam, Byron L.; Puliafito, Carmen A.; Jiao, Shuliang

    2015-09-01

    Rhodopsin, the light-sensing molecule in the outer segments of rod photoreceptors, is responsible for converting light into neuronal signals in a process known as phototransduction. Rhodopsin is thus a functional biomarker for rod photoreceptors. Here we report a novel technology based on visible-light optical coherence tomography (VIS-OCT) for in vivo molecular imaging of rhodopsin. The depth resolution of OCT allows the visualization of the location where the change of optical absorption occurs and provides a potentially accurate assessment of rhodopsin content by segmentation of the image at the location. Rhodopsin OCT can be used to quantitatively image rhodopsin distribution and thus assess the distribution of functional rod photoreceptors in the retina. Rhodopsin OCT can bring significant impact into ophthalmic clinics by providing a tool for the diagnosis and severity assessment of a variety of retinal conditions.

  14. Depth-resolved rhodopsin molecular contrast imaging for functional assessment of photoreceptors

    PubMed Central

    Liu, Tan; Wen, Rong; Lam, Byron L.; Puliafito, Carmen A.; Jiao, Shuliang

    2015-01-01

    Rhodopsin, the light-sensing molecule in the outer segments of rod photoreceptors, is responsible for converting light into neuronal signals in a process known as phototransduction. Rhodopsin is thus a functional biomarker for rod photoreceptors. Here we report a novel technology based on visible-light optical coherence tomography (VIS-OCT) for in vivo molecular imaging of rhodopsin. The depth resolution of OCT allows the visualization of the location where the change of optical absorption occurs and provides a potentially accurate assessment of rhodopsin content by segmentation of the image at the location. Rhodopsin OCT can be used to quantitatively image rhodopsin distribution and thus assess the distribution of functional rod photoreceptors in the retina. Rhodopsin OCT can bring significant impact into ophthalmic clinics by providing a tool for the diagnosis and severity assessment of a variety of retinal conditions. PMID:26358529

  15. Warping error analysis and reduction for depth-image-based rendering in 3DTV

    NASA Astrophysics Data System (ADS)

    Do, Luat; Zinger, Sveta; de With, Peter H. N.

    2011-03-01

    Interactive free-viewpoint selection applied to a 3D multi-view video signal is an attractive feature of the rapidly developing 3DTV media. In recent years, significant research has been done on free-viewpoint rendering algorithms, which mostly have similar building blocks. In our previous work, we analyzed the principal building blocks of most recent rendering algorithms and their contribution to the overall rendering quality, and discovered that the first step, warping, determines the basic quality level of the complete rendering chain. In this paper, we analyze the warping step in more detail since it leads to ways for improvement. We have observed that the accuracy of warping is mainly determined by two factors: sampling and rounding errors when performing pixel-based warping, and quantization errors of depth maps. For each error factor, we propose a technique that reduces the errors and thus increases the warping quality. Pixel-based warping errors are reduced by employing supersampling of the reference and virtual images, and depth map errors are decreased by creating depth maps with more quantization levels. The new techniques are evaluated with two series of experiments using real-life and synthetic data. From these experiments, we have observed that reducing warping errors can increase the overall rendering quality and that the impact of errors due to pixel-based warping is much larger than that of errors due to depth quantization.
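
    To make the warping step concrete, the following sketch shows pixel-based horizontal warping for a rectified camera setup with supersampling of the virtual view, one way to reduce the sampling and rounding errors discussed above; the rectified-geometry assumption, parameter names, and hole handling are mine, not the paper's.

        import numpy as np

        def warp_horizontal(ref_img, depth, f, baseline, supersample=4):
            """Shift each reference pixel by disparity = f * baseline / Z and splat it into a
            virtual view rendered at `supersample`-times horizontal resolution, keeping the
            nearest surface via a z-buffer, then average down to the target resolution.
            Holes are simply left as zeros in this sketch."""
            h, w, c = ref_img.shape
            virt = np.zeros((h, w * supersample, c))
            zbuf = np.full((h, w * supersample), np.inf)
            for v in range(h):
                for u in range(w):
                    d = f * baseline / depth[v, u]              # disparity in pixels
                    u_virt = int(round((u - d) * supersample))
                    if 0 <= u_virt < w * supersample and depth[v, u] < zbuf[v, u_virt]:
                        zbuf[v, u_virt] = depth[v, u]
                        virt[v, u_virt] = ref_img[v, u]
            return virt.reshape(h, w, supersample, c).mean(axis=2)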

  16. Self-consistent depth profiling and imaging of GaN-based transistors using ion microbeams

    NASA Astrophysics Data System (ADS)

    Redondo-Cubero, A.; Corregidor, V.; Vázquez, L.; Alves, L. C.

    2015-04-01

    Using an ion microprobe, a comprehensive lateral and in-depth characterization of a single GaN-based high electron mobility transistor is carried out by means of Rutherford backscattering spectrometry (RBS) in combination with particle induced X-ray emission (PIXE). Elemental distributions were obtained for every individual section of the device (wafer, gate and source contact), identifying the basic constituents of the transistor (including the detection of the passivant layer) and checking its homogeneity. A self-consistent analysis of each individual region of the transistor was carried out with a simultaneous fit of RBS and PIXE spectra under two different beam conditions. Following this approach, the quantification of the atomic content and the layer thicknesses was successfully achieved, overcoming the mass-depth ambiguity of certain elements.

  17. Imaging with depth extension: where are the limits in fixed- focus cameras?

    NASA Astrophysics Data System (ADS)

    Bakin, Dmitry; Keelan, Brian

    2008-08-01

    The integration of novel optics designs, miniature CMOS sensors, and powerful digital processing into a single imaging module package is driving progress in handset camera systems in terms of performance, size (thinness) and cost. The miniature cameras incorporating high resolution sensors and fixed-focus Extended Depth of Field (EDOF) optics allow close-range reading of printed material (barcode patterns, business cards), while providing high quality imaging in more traditional applications. These cameras incorporate modified optics and digital processing to recover the soft-focus images and restore sharpness over a wide range of object distances. The effects of a variety of imaging module parameters on the EDOF range were analyzed for a family of high resolution CMOS modules. The parameters include various optical properties of the imaging lens and the characteristics of the sensor. The extension factors for the EDOF imaging module were defined in terms of an improved absolute resolution in object space while maintaining focus at infinity. This definition was applied for the purpose of identifying the minimally resolvable object details in mobile cameras with a bar-code reading feature.

  18. Subjective quality and depth assessment in stereoscopic viewing of volume-rendered medical images

    NASA Astrophysics Data System (ADS)

    Rousson, Johanna; Couturou, Jeanne; Vetsuypens, Arnout; Platisa, Ljiljana; Kumcu, Asli; Kimpe, Tom; Philips, Wilfried

    2014-03-01

    No study to date has explored the relationship between perceived image quality (IQ) and perceived depth (DP) in stereoscopic medical images, yet this is crucial for designing objective quality metrics suitable for stereoscopic medical images. This study examined the relationship using volume-rendered stereoscopic medical images for both dual- and single-view distortions. The reference image was modified to simulate common alterations occurring during the image acquisition stage or at the display side: added white Gaussian noise, Gaussian filtering, and changes in luminance, brightness and contrast. We followed a double stimulus five-point quality scale methodology to conduct subjective tests with eight non-expert human observers. The results suggested that DP was very robust to luminance, contrast and brightness alterations and insensitive to noise distortions up to a standard deviation of σ=20 and crosstalk rates of 7%. In contrast, IQ seemed sensitive to all distortions. Finally, for both DP and IQ, the Friedman test indicated that the quality scores for dual-view distortions were significantly worse than scores for single-view distortions for multiple blur levels and crosstalk impairments. No differences were found for most levels of brightness, contrast and noise distortions. Thus, DP and IQ did not react equivalently to identical impairments, and both depended on whether dual- or single-view distortions were applied.

  19. Depth profiling and imaging capabilities of an ultrashort pulse laser ablation time of flight mass spectrometer

    PubMed Central

    Cui, Yang; Moore, Jerry F.; Milasinovic, Slobodan; Liu, Yaoming; Gordon, Robert J.; Hanley, Luke

    2012-01-01

    An ultrafast laser ablation time-of-flight mass spectrometer (AToF-MS) and associated data acquisition software that permits imaging at micron-scale resolution and sub-micron-scale depth profiling are described. The ion funnel-based source of this instrument can be operated at pressures ranging from 10^-8 to ~0.3 mbar. Mass spectra may be collected and stored at a rate of 1 kHz by the data acquisition system, allowing the instrument to be coupled with standard commercial Ti:sapphire lasers. The capabilities of the AToF-MS instrument are demonstrated on metal foils and semiconductor wafers using a Ti:sapphire laser emitting 800 nm, ~75 fs pulses at 1 kHz. Results show that elemental quantification and depth profiling are feasible with this instrument. PMID:23020378

  20. 3-D resistivity imaging of buried concrete infrastructure with application to unknown bridge foundation depth determination

    NASA Astrophysics Data System (ADS)

    Everett, M. E.; Arjwech, R.; Briaud, J.; Hurlebaus, S.; Medina-Cetina, Z.; Tucker, S.; Yousefpour, N.

    2010-12-01

    Bridges are always vulnerable to scour, and mainly older ones with unknown foundations constitute a significant risk to public safety. Geophysical testing of bridge foundations using 3-D resistivity imaging is a promising non-destructive technology, but its execution and reliable interpretation remain a challenging task. A major difficulty in diagnosing foundation depth is that a single linear electrode profile generally does not provide adequate 3-D illumination to yield a useful image of the bottom of the foundation. To further explore the capabilities of resistivity tomography, we conducted a 3-D resistivity survey at a geotechnical test area which includes groups of buried, steel-reinforced concrete structures, such as slabs and piles, with cylindrical and square cross-sections that serve as proxies for bridge foundations. By constructing a number of 3-D tomograms using selected data subsets and comparing the resulting images, we have identified efficient combinations of data acquired in the vicinity of a given foundation which enable the most cost-effective and reliable depth determination. The numerous issues involved in adapting this methodology to actual bridge sites are discussed.

  1. Noninvasive measurement of burn wound depth applying infrared thermal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jaspers, Mariëlle E.; Maltha, Ilse M.; Klaessens, John H.; Vet, Henrica C.; Verdaasdonk, Rudolf M.; Zuijlen, Paul P.

    2016-02-01

    In burn wounds, early discrimination between the different depths plays an important role in the treatment strategy. The remaining vasculature in the wound determines its healing potential. Non-invasive measurement tools that can identify the vascularization are therefore considered to be of high diagnostic importance. Thermography is a non-invasive technique that can accurately measure the temperature distribution over a large skin or tissue area; the temperature is a measure of the perfusion of that area. The aim of this study was to investigate the clinimetric properties (i.e. reliability and validity) of thermography for measuring burn wound depth. In a cross-sectional study with 50 burn wounds of 35 patients, the inter-observer reliability and the validity between thermography and Laser Doppler Imaging were studied. With ROC curve analyses, the ΔT cut-off points for different burn wound depths were determined. The inter-observer reliability, expressed by an intra-class correlation coefficient of 0.99, was found to be excellent. In terms of validity, a ΔT cut-off point of 0.96°C (sensitivity 71%; specificity 79%) differentiates between a superficial partial-thickness and a deep partial-thickness burn. A ΔT cut-off point of -0.80°C (sensitivity 70%; specificity 74%) could differentiate between a deep partial-thickness and a full-thickness burn wound. This study demonstrates that thermography is a reliable method in the assessment of burn wound depths. In addition, thermography was reasonably able to discriminate among different burn wound depths, indicating its potential use as a diagnostic tool in clinical burn practice.

  2. Achieving molecular selectivity in imaging using multiphoton Raman spectroscopy techniques

    SciTech Connect

    Holtom, Gary R.; Thrall, Brian D.; Chin, Beek Yoke; Wiley, H. Steven; Colson, Steven D.

    2000-12-01

    In the case of most imaging methods, contrast is generated either by physical properties of the sample (Differential Image Contrast, Phase Contrast) or by fluorescent labels that are localized to a particular protein or organelle. Standard Raman and infrared methods for obtaining images are based upon the intrinsic vibrational properties of molecules, and thus obviate the need for attached fluorophores. Unfortunately, they have significant limitations for live-cell imaging. However, an active Raman method, called Coherent Anti-Stokes Raman Scattering (CARS), is well suited for microscopy, and provides a new means for imaging specific molecules. Vibrational imaging techniques, such as CARS, avoid problems associated with photobleaching and photo-induced toxicity often associated with the use of fluorescent labels with live cells. Because the laser configuration needed to implement CARS technology is similar to that used in other multiphoton microscopy methods, such as two-photon fluorescence and harmonic generation, it is possible to combine imaging modalities, thus generating simultaneous CARS and fluorescence images. A particularly powerful aspect of CARS microscopy is its ability to selectively image deuterated compounds, thus allowing the visualization of molecules, such as lipids, that are chemically indistinguishable from the native species.

  3. Shading correction of camera captured document image with depth map information

    NASA Astrophysics Data System (ADS)

    Wu, Chyuan-Tyng; Allebach, Jan P.

    2015-01-01

    Camera modules have become more popular in consumer electronics and office products. As a consequence, people have many opportunities to use a camera-based device to record a hardcopy document in their daily lives. However, undesired shading easily enters the captured document image through the camera, and this non-uniformity may degrade the readability of the contents. In order to mitigate this artifact, some solutions have been developed, but most of them are only suitable for particular types of documents. In this paper, we introduce a content-independent and shape-independent method that lessens the shading effects in captured document images. We want to reconstruct the image such that the result looks like a document image captured under a uniform lighting source. Our method utilizes the 3D depth map of the document surface and a look-up table strategy. We first discuss the model and the assumptions used for the approach. Then, the process of creating and utilizing the look-up table is described. We implement this algorithm with our prototype 3D scanner, which also uses a camera module to capture a 2D image of the object. Experimental results are presented to show the effectiveness of our method, including both flat and curved surface document examples.
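
    The paper's exact look-up-table construction is not given in the abstract; one possible reading, sketched below under the assumption that a blank white page of the same shape is available for calibration, maps each depth value to an expected shading gain and divides it out (all names and the binning are illustrative).

        import numpy as np

        def build_shading_lut(white_page_img, depth_map, n_bins=256):
            """Average the intensity a blank white page exhibits at each depth value,
            giving an expected shading gain per depth bin (hypothetical calibration step)."""
            bins = np.linspace(depth_map.min(), depth_map.max(), n_bins + 1)
            idx = np.clip(np.digitize(depth_map, bins) - 1, 0, n_bins - 1)
            lut = np.array([white_page_img[idx == b].mean() if np.any(idx == b) else 1.0
                            for b in range(n_bins)])
            return bins, lut

        def correct_shading(doc_img, depth_map, bins, lut):
            """Divide each pixel by the gain looked up from its depth value so the result
            approximates the document under uniform illumination (images in [0, 1])."""
            idx = np.clip(np.digitize(depth_map, bins) - 1, 0, len(lut) - 1)
            return np.clip(doc_img / np.maximum(lut[idx], 1e-6), 0.0, 1.0)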

  4. Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters

    NASA Technical Reports Server (NTRS)

    Bos, Brent; Memarsadeghi, Nargess; Kizhner, Semion; Antonille, Scott

    2013-01-01

    A large depth-of-field particle image velocimeter (PIV) is designed to characterize dynamic dust environments on planetary surfaces. This instrument detects lofted dust particles, and senses the number of particles per unit volume, measuring their sizes, velocities (both speed and direction), and shape factors when the particles are large. To measure these particle characteristics in flight, the instrument gathers two-dimensional image data at a high frame rate, typically >4,000 Hz, generating large amounts of data for every second of operation, approximately 6 GB/s. To characterize a planetary dust environment that is dynamic, the instrument would have to operate for at least several minutes during an observation period, easily producing more than a terabyte of data per observation. Given current technology, this amount of data would be very difficult to store onboard a spacecraft and downlink to Earth. Since 2007, innovators have been developing an autonomous image analysis algorithm architecture for the PIV instrument to greatly reduce the amount of data that it has to store and downlink. The algorithm analyzes PIV images and automatically reduces the image information down to only the particle measurement data that is of interest, reducing the amount of data that is handled by a factor of more than 10^3. The state of development for this innovation is now fairly mature, with a functional algorithm architecture, along with several key pieces of algorithm logic, that has been proven through field test data acquired with a proof-of-concept PIV instrument.

  5. Analysis of the depth of field in hexagonal array integral imaging systems based on modulation transfer function and Strehl ratio.

    PubMed

    Karimzadeh, Ayatollah

    2016-04-10

    Integral imaging is a technique for displaying three-dimensional images using microlens arrays. In this paper, a method for calculating the root mean squared wavefront error and modulation transfer function (MTF) of a defocused integral imaging capture system with hexagonal aperture microlens arrays is introduced. The maximum allowable depth of field is also obtained with Century MTF analysis and the Strehl criterion. PMID:27139873

  6. Probing depth and dynamic response of speckles in near infrared region for spectroscopic blood flow imaging

    NASA Astrophysics Data System (ADS)

    Yokoi, Naomichi; Aizu, Yoshihisa

    2016-04-01

    Imaging based on bio-speckles is a useful means of blood flow visualization in living bodies, and it has been utilized for analyzing their condition or health state. The sensitivity to blood flow is influenced by tissue optical properties, which depend on the wavelength of the illuminating laser light. In the present study, we experimentally investigate characteristics of the blood flow images obtained with two wavelengths, 780 nm and 830 nm, in the near-infrared region. Experiments are conducted on sample models consisting of a pork layer, a horse blood layer and a mirror, and on a human wrist and finger, to investigate the optical penetration depth and the dynamic response of speckles to the blood flow velocity for the two wavelengths.

  7. Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding

    NASA Astrophysics Data System (ADS)

    Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito

    2015-02-01

    Optical transfer functions (OTFs) on various directional spatial frequency axes for a cubic phase mask (CPM) with circular and square apertures are investigated. Although the OTF has no zero points, it comes very close to zero for a circular aperture at low frequencies on the diagonal axis, which results in degradation of restored images. The reason for the close-to-zero values in the OTF is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid this close-to-zero condition, a square aperture with the CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent de-blurred images over a large depth of field.

  8. Depth and all-in-focus images obtained by multi-line-scan light-field approach

    NASA Astrophysics Data System (ADS)

    Štolc, Svorad; Huber-Mörk, Reinhold; Holländer, Branislav; Soukup, Daniel

    2014-03-01

    We present a light-field multi-line-scan image acquisition and processing system intended for the 2.5/3-D inspection of fine surface structures, such as small parts and security print, in an industrial environment. The system consists of an area-scan camera that allows a small number of sensor lines to be extracted at high frame rates, and a mechanism for transporting the inspected object at a constant speed. During the acquisition, the object is moved orthogonally to the camera's optical axis as well as to the orientation of the sensor lines. In each time step, a predefined subset of lines is read out from the sensor and stored. Afterward, by collecting all corresponding lines acquired over time, a 3-D light field is generated, which consists of multiple views of the object observed from different viewing angles while transported w.r.t. the acquisition device. This structure allows for the construction of so-called epipolar plane images (EPIs) and subsequent EPI-based analysis in order to achieve two main goals: (i) the reliable estimation of a dense depth model and (ii) the construction of an all-in-focus intensity image. Besides the specifics of our hardware setup, we also provide a detailed description of algorithmic solutions for the mentioned tasks. Two alternative methods for EPI-based analysis are compared based on artificial and real-world data.

  9. Numerical simulation of phase images and depth reconstruction in pulsed phase thermography

    NASA Astrophysics Data System (ADS)

    Hernandez-Valle, Saul; Peters, Kara

    2015-11-01

    In this work we apply the finite element (FE) method to simulate the results of pulsed phase thermography experiments on laminated composite plates. Specifically, the goal is to simulate the phase component of reflected thermal waves and therefore verify the calculation of defect depth through the identification of the defect blind frequency. The calculation of phase components requires a higher spatial and temporal resolution than that of the calculation of the reflected temperature. An FE modeling strategy is presented, including the estimation of the defect thermal properties, which in this case is represented as a foam insert impregnated with epoxy resin. A comparison of meshing strategies using tetrahedral and hexahedral elements reveals that temperature errors in the tetrahedral results are amplified in the calculation of phase images and blind frequencies. Finally, we investigate the linearity of the measured diffusion length (based on the blind frequency) as a function of defect depth. The simulations demonstrate a nonlinear relationship between the defect depth and diffusion length, calculated from the blind frequency, consistent with previous experimental observations.

  10. Imaging photoplethysmography for clinical assessment of cutaneous microcirculation at two different depths

    NASA Astrophysics Data System (ADS)

    Marcinkevics, Zbignevs; Rubins, Uldis; Zaharans, Janis; Miscuks, Aleksejs; Urtane, Evelina; Ozolina-Moll, Liga

    2016-03-01

    The feasibility of a bispectral imaging photoplethysmography (iPPG) system for clinical assessment of cutaneous microcirculation at two different depths is demonstrated. The iPPG system has been developed and evaluated under in vivo conditions during various tests: (1) topical application of a vasodilatory liniment on the skin, (2) local skin heating, (3) arterial occlusion, and (4) regional anesthesia. The device has been validated against measurements from a laser Doppler imager (LDI) as a reference. The hardware comprises four bispectral light sources (530 and 810 nm) for uniform illumination of the skin, a video camera, and the control unit for triggering of the system. The PPG signals were calculated and the changes of the perfusion index (PI) were obtained during the tests. The results showed convincing correlations between PI obtained by iPPG and LDI in the (1) topical liniment (r=0.98) and (2) heating (r=0.98) tests. The topical liniment and local heating tests revealed good selectivity of the system for superficial microcirculation monitoring. It is confirmed that the iPPG system can be used for assessment of cutaneous perfusion at two different depths, corresponding to morphologically and functionally different vascular networks, and thus utilized in clinics as a cost-effective alternative to the LDI.

  11. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation by combining block- and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of the image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-means implementation, refining segment boundaries using morphological filtering and connected components analysis, and then determining the boundaries' disparities using the sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
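
    A minimal sketch of the depth-based selective blurring application described above, assuming a disparity map is already available (the threshold, blur strength, and function names are illustrative, not the authors' parameters):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def selective_blur(image, disparity, interest_disparity, tolerance=8, sigma=5.0):
            """Keep pixels whose disparity is close to that of the region of interest sharp,
            and replace all other pixels with a Gaussian-blurred version of the image."""
            blurred = np.stack([gaussian_filter(image[..., ch], sigma)
                                for ch in range(image.shape[-1])], axis=-1)
            interest_mask = np.abs(disparity - interest_disparity) <= tolerance
            return np.where(interest_mask[..., None], image, blurred)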

  12. Complex wavelets for extended depth-of-field: a new method for the fusion of multichannel microscopy images.

    PubMed

    Forster, Brigitte; Van De Ville, Dimitri; Berent, Jesse; Sage, Daniel; Unser, Michael

    2004-09-01

    Microscopy imaging often suffers from limited depth-of-field. However, the specimen can be "optically sectioned" by moving the object along the optical axis. Then different areas appear in focus in different images. Extended depth-of-field is a fusion algorithm that combines those images into one single sharp composite. One promising method is based on the wavelet transform. Here, we show how the wavelet-based image fusion technique can be improved and easily extended to multichannel data. First, we propose the use of complex-valued wavelet bases, which seem to outperform traditional real-valued wavelet transforms. Second, we introduce a way to apply this technique for multichannel images that suppresses artifacts and does not introduce false colors, an important requirement for multichannel optical microscopy imaging. We evaluate our method on simulated image stacks and give results relevant to biological imaging. PMID:15570586

  13. The optimal polarizations for achieving maximum contrast in radar images

    NASA Technical Reports Server (NTRS)

    Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.

    1988-01-01

    There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter that produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data are processed with the optimal polarimetric matched filter.
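
    A small sketch of the eigenvalue formulation described above, under the assumption that the contrast is expressed as a ratio of expected filter-output powers for the two classes with known polarimetric covariance matrices (the notation and library choice are mine):

        import numpy as np
        from scipy.linalg import eigh

        def optimal_matched_filter(sigma_target, sigma_clutter):
            """Maximize w^H Sigma_target w / (w^H Sigma_clutter w): the maximizer is the
            generalized eigenvector associated with the largest generalized eigenvalue."""
            eigvals, eigvecs = eigh(sigma_target, sigma_clutter)  # generalized Hermitian problem
            w = eigvecs[:, -1]            # eigenvalues are returned in ascending order
            contrast = eigvals[-1]        # achievable maximum contrast ratio
            return w, contrast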

  14. Broadband optical mammography instrument for depth-resolved imaging and local dynamic measurements

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Nishanth; Kainerstorfer, Jana M.; Sassaroli, Angelo; Anderson, Pamela G.; Fantini, Sergio

    2016-02-01

    We present a continuous-wave instrument for non-invasive diffuse optical imaging of the breast in a parallel-plate transmission geometry. The instrument measures continuous spectra in the wavelength range 650-1000 nm, with an intensity noise level <1.5% and a spatial sampling rate of 5 points/cm in the x- and y-directions. We collect the optical transmission at four locations, one collinear and three offset with respect to the illumination optical fiber, to recover the depth of optical inhomogeneities in the tissue. We imaged a tissue-like, breast shaped, silicone phantom (6 cm thick) with two embedded absorbing structures: a black circle (1.7 cm in diameter) and a black stripe (3 mm wide), designed to mimic a tumor and a blood vessel, respectively. The use of a spatially multiplexed detection scheme allows for the generation of on-axis and off-axis projection images simultaneously, as opposed to requiring multiple scans, thus decreasing scan-time and motion artifacts. This technique localizes detected inhomogeneities in 3D and accurately assigns their depth to within 1 mm in the ideal conditions of otherwise homogeneous tissue-like phantoms. We also measured induced hemodynamic changes in the breast of a healthy human subject at a selected location (no scanning). We applied a cyclic, arterial blood pressure perturbation by alternating inflation (to a pressure of 200 mmHg) and deflation of a pneumatic cuff around the subject's thigh at a frequency of 0.05 Hz, and measured oscillations with amplitudes up to 1 μM and 0.2 μM in the tissue concentrations of oxyhemoglobin and deoxyhemoglobin, respectively. These hemodynamic oscillations provide information about the vascular structure and functional integrity in tissue, and may be used to assess healthy or abnormal perfusion in a clinical setting.

  15. Broadband optical mammography instrument for depth-resolved imaging and local dynamic measurements.

    PubMed

    Krishnamurthy, Nishanth; Kainerstorfer, Jana M; Sassaroli, Angelo; Anderson, Pamela G; Fantini, Sergio

    2016-02-01

    We present a continuous-wave instrument for non-invasive diffuse optical imaging of the breast in a parallel-plate transmission geometry. The instrument measures continuous spectra in the wavelength range 650-1000 nm, with an intensity noise level <1.5% and a spatial sampling rate of 5 points/cm in the x- and y-directions. We collect the optical transmission at four locations, one collinear and three offset with respect to the illumination optical fiber, to recover the depth of optical inhomogeneities in the tissue. We imaged a tissue-like, breast shaped, silicone phantom (6 cm thick) with two embedded absorbing structures: a black circle (1.7 cm in diameter) and a black stripe (3 mm wide), designed to mimic a tumor and a blood vessel, respectively. The use of a spatially multiplexed detection scheme allows for the generation of on-axis and off-axis projection images simultaneously, as opposed to requiring multiple scans, thus decreasing scan-time and motion artifacts. This technique localizes detected inhomogeneities in 3D and accurately assigns their depth to within 1 mm in the ideal conditions of otherwise homogeneous tissue-like phantoms. We also measured induced hemodynamic changes in the breast of a healthy human subject at a selected location (no scanning). We applied a cyclic, arterial blood pressure perturbation by alternating inflation (to a pressure of 200 mmHg) and deflation of a pneumatic cuff around the subject's thigh at a frequency of 0.05 Hz, and measured oscillations with amplitudes up to 1 μM and 0.2 μM in the tissue concentrations of oxyhemoglobin and deoxyhemoglobin, respectively. These hemodynamic oscillations provide information about the vascular structure and functional integrity in tissue, and may be used to assess healthy or abnormal perfusion in a clinical setting. PMID:26931870

  16. Quantum dot imaging in the second near-infrared optical window: studies on reflectance fluorescence imaging depths by effective fluence rate and multiple image acquisition

    NASA Astrophysics Data System (ADS)

    Jung, Yebin; Jeong, Sanghwa; Nayoun, Won; Ahn, Boeun; Kwag, Jungheon; Geol Kim, Sang; Kim, Sungjee

    2015-04-01

    Quantum dot (QD) imaging capability was investigated in terms of imaging depth in the near-infrared second optical window (SOW; 1000 to 1400 nm), using time-modulated pulsed laser excitations to control the effective fluence rate. Various media, such as liquid phantoms, tissues, and in vivo small animals, were used and the imaging depths were compared with our predicted values. The QD imaging depth under excitation with a continuous 20 mW/cm^2 laser was determined to be 10.3 mm for a 2 wt% hemoglobin phantom medium and 5.85 mm for a 1 wt% intralipid phantom, and was extended by more than two times on increasing the effective fluence rate to 2000 mW/cm^2. Bovine liver and porcine skin tissues also showed similar enhancement in the contrast-to-noise ratio (CNR) values. A QD sample was inserted into the abdomen of a mouse. With a higher effective fluence rate, the CNR increased more than twofold and the QD sample became clearly visualized, whereas it was completely undetectable under continuous excitation. Multiple acquisitions of QD images and pixel-by-pixel averaging were performed to overcome the thermal noise issue of the detector in the SOW, which yielded significant enhancement in the imaging capability, showing up to a 1.5 times increase in the CNR.
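
    The pixel-by-pixel averaging step relies on the usual statistics of uncorrelated detector noise, whose standard deviation drops roughly as the square root of the number of averaged frames; a minimal sketch of the averaging and the CNR computation (the mask handling is an assumption) is:

        import numpy as np

        def average_frames(frames):
            """Pixel-by-pixel average of repeated acquisitions; for zero-mean detector noise
            this reduces the noise standard deviation by roughly sqrt(len(frames))."""
            return np.mean(np.stack(frames, axis=0), axis=0)

        def cnr(image, signal_mask, background_mask):
            """Contrast-to-noise ratio: (mean signal - mean background) / background std."""
            return (image[signal_mask].mean() - image[background_mask].mean()) / image[background_mask].std()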

  17. Optimized non-integer order phase mask to extend the depth of field of an imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Jiang; Miao, Erlong; Sui, Yongxin; Yang, Huaijiang

    2016-09-01

    Wavefront coding is an effective optical technique used to extend the depth of field of an incoherent imaging system. By introducing an optimized phase mask at the pupil plane, the modulated optical transfer function becomes defocus-invariant. In this paper, we propose a new form of phase mask using a non-integer order and a signum function to extend the depth of field. The performance of the phase mask is evaluated by comparing the defocus invariance of its modulation transfer function and its Fisher information with those of other phase masks. Defocused imaging simulations are also carried out. The results demonstrate the advantages of the non-integer order phase mask and its effectiveness in extending the depth of field.

  18. Anatomy of the western Java plate interface from depth-migrated seismic images

    USGS Publications Warehouse

    Kopp, H.; Hindle, D.; Klaeschen, D.; Oncken, O.; Reichert, C.; Scholl, D.

    2009-01-01

    New pre-stack depth-migrated seismic images resolve the structural details of the western Java forearc and plate interface. The structural segmentation of the forearc into discrete mechanical domains correlates with distinct deformation styles. Approximately 2/3 of the trench sediment fill is detached and incorporated into frontal prism imbricates, while the floor sequence is underthrust beneath the décollement. Western Java, however, differs markedly from margins such as Nankai or Barbados, where a uniform, continuous décollement reflector has been imaged. In our study area, the plate interface reveals a spatially irregular, nonlinear pattern characterized by the morphological relief of subducted seamounts and thicker than average patches of underthrust sediment. The underthrust sediment is associated with a low velocity zone as determined from wide-angle data. Active underplating is not resolved, but likely contributes to the uplift of the large bivergent wedge that constitutes the forearc high. Our profile is located 100 km west of the 2006 Java tsunami earthquake. The heterogeneous décollement zone regulates the friction behavior of the shallow subduction environment where the earthquake occurred. The alternating pattern of enhanced frictional contact zones associated with oceanic basement relief and weak material patches of underthrust sediment influences seismic coupling and possibly contributed to the heterogeneous slip distribution. Our seismic images resolve a steeply dipping splay fault, which originates at the décollement and terminates at the sea floor, and which potentially contributes to tsunami generation during co-seismic activity.

  19. Subduction of European continental crust to 70 km depth imaged in the Western Alps

    NASA Astrophysics Data System (ADS)

    Paul, Anne; Zhao, Liang; Guillot, Stéphane; Solarino, Stefano

    2015-04-01

    The first conclusive evidence in support of the burial (and exhumation) of continental crust to depths larger than 90 km was provided by the discovery of coesite-bearing metamorphic rocks in the Dora Maira massif of the Western Alps (Chopin, 1984). Since then, even though similar outcrops of exhumed HP/UHP rocks have been recognized in a number of collisional belts, direct seismic evidence for subduction of continental crust into the mantle of the upper plate remains rare. In the Western Alps, the greatest depth ever recorded for the European Moho is 55 km, by wide-angle seismic reflection (ECORS-CROP DSS Group, 1989). In an effort to image the European Moho at greater depth, and to unravel the very complex lithospheric structure of the Western Alps, we installed the CIFALPS temporary seismic array across the Southwestern Alps for 14 months (2012-2013). The almost linear array runs from the Rhône valley (France) to the Po plain (Italy) across the Dora Maira massif, where exhumed HP/UHP metamorphic rocks of continental origin were first discovered. We used the receiver function processing technique that enhances P-to-S converted waves at velocity boundaries beneath the array. The receiver function records were migrated to depth using 4 different 1-D velocity models to account for the strongest structural changes along the profile. They were then stacked using the classical common-conversion-point technique. Beneath the Southeast basin and the external zones, the obtained seismic section displays a clear converted phase on the European Moho, dipping gently to the ENE from ~35 km at the western end of the profile to ~40 km beneath the Frontal Penninic thrust (FPT). The Moho dip then noticeably increases beneath the internal zones, while the amplitude of the converted phase weakens. The weak European Moho signal may be traced to 70-75 km depth beneath the eastern Dora Maira massif and the westernmost Po plain. At shallower levels (20-40 km), we observe a set of strong

  20. Depth to Diameter Ratios of New Martian Craters from HiRISE Images

    NASA Astrophysics Data System (ADS)

    Daubar, Ingrid; McEwen, A. S.

    2009-09-01

    More than 90 new primary impact sites have been identified by the Context camera on the Mars Reconnaissance Orbiter and confirmed with HiRISE. Before-and-after images date these impacts at only months to decades old. The 25-cm pixel scale of HiRISE data allows us to study the morphology of these extremely young and extremely small (10-meter-scale) craters. A total of 142 craters were measured at 44 separate new impact sites. About half of those sites are single-crater impacts, while the rest consist of clusters of craters, which were measured individually when large enough to resolve. Depths were calculated from shadow measurements assuming a simple parabolic shape, using the technique of Chappelow & Sharpton (2002). The measurements follow a power-law fit: d = 0.28·D^0.97. No statistical difference was found between single-crater sites and cluster sites. These new impacts seem to result in slightly deeper craters on average than previously found for older, larger simple craters, although the fit converges with previous studies at larger sizes. If the difference is statistically significant, it could be related to their extreme youth, their small sizes, uncertainties in the depth measurements due to non-parabolic shapes, or the target material properties. Since detection relies on low-resolution identification of dark blast zones where surface dust has been disturbed, these new craters are concentrated in areas of uniform dust cover and often significant mantling. Morphological features such as flat floors and benches may provide a measure of the depth of this mantling and an estimate of its effect on the observed d/D.
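
    Applying the reported fit directly gives a quick depth estimate from a measured diameter; the helper below simply evaluates d = 0.28·D^0.97 (the coefficients come from the abstract, the example values are mine):

        def crater_depth_from_diameter(diameter_m, a=0.28, b=0.97):
            """Power-law depth estimate d = a * D**b for fresh ~10-m-scale Martian craters."""
            return a * diameter_m ** b

        # Example: a fresh 12-m crater would be estimated at about 3.1 m deep,
        # i.e. a depth/diameter ratio of roughly 0.26.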

  1. The Impact of New York's School Libraries on Student Achievement and Motivation: Phase II--In-Depth Study

    ERIC Educational Resources Information Center

    Small, Ruth V.; Snyder, Jaime

    2009-01-01

    This article reports the results of the second phase of a three-phase study on the impact of the New York State's school libraries' services and resources on student achievement and motivation. A representative sample of more than 1,600 classroom teachers, students, and school library media specialists (SMLSs) from 47 schools throughout New York…

  2. Development of a large-angle pinhole gamma camera with depth-of-interaction capability for small animal imaging

    NASA Astrophysics Data System (ADS)

    Baek, C.-H.; An, S. J.; Kim, H.-I.; Choi, Y.; Chung, Y. H.

    2012-01-01

    A large-angle gamma camera was developed for imaging small animal models used in medical and biological research. A simulation study shows that a large field of view (FOV) system provides higher sensitivity than typical pinhole gamma cameras by reducing the distance between the pinhole and the object. However, such a gamma camera suffers from degradation of the spatial resolution in the periphery region due to parallax errors from obliquely incident photons. We propose a new method to measure the depth of interaction (DOI) using three layers of monolithic scintillators to reduce the parallax error. The detector module consists of three layers of monolithic CsI(Tl) crystals with dimensions of 50.0 × 50.0 × 2.0 mm^3, a Hamamatsu H8500 PSPMT and a large-angle pinhole collimator with an acceptance angle of 120°. The 3-dimensional event positions were determined by the maximum-likelihood position-estimation (MLPE) algorithm and a pre-generated look-up table (LUT). The spatial resolution (FWHM) for a Co-57 point-like source was measured at different source positions with the conventional method (Anger logic) and with DOI information. We proved that high sensitivity can be achieved without degradation of spatial resolution using a large-angle pinhole gamma camera: this system can be used as a small animal imaging tool.
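
    A compact sketch of maximum-likelihood position estimation against a pre-generated LUT, assuming Poisson statistics for the PMT channel signals (the LUT layout and variable names are illustrative assumptions):

        import numpy as np

        def mlpe_position(measured, lut_means, lut_positions):
            """Pick the candidate (x, y, z) whose expected mean channel response best explains
            the measured signals under a Poisson log-likelihood (constant terms dropped).
            `lut_means` is (n_positions, n_channels); `lut_positions` is (n_positions, 3)."""
            eps = 1e-12
            loglik = (measured[None, :] * np.log(lut_means + eps) - lut_means).sum(axis=1)
            return lut_positions[np.argmax(loglik)]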

  3. Lidar Waveform-Based Analysis of Depth Images Constructed Using Sparse Single-Photon Data.

    PubMed

    Altmann, Yoann; Ren, Ximing; McCarthy, Aongus; Buller, Gerald S; McLaughlin, Steve

    2016-05-01

    This paper presents a new Bayesian model and algorithm for depth and reflectivity profiling using full waveforms from time-correlated single-photon counting measurements in the limit of very low photon counts. The proposed model represents each Lidar waveform as a combination of a known impulse response, weighted by the target reflectivity, and an unknown constant background, corrupted by Poisson noise. Prior knowledge about the problem is embedded through prior distributions that account for the different parameter constraints and their spatial correlation among the image pixels. In particular, a gamma Markov random field (MRF) is used to model the joint distribution of the target reflectivity, and a second MRF is used to model the distribution of the target depth, which are both expected to exhibit significant spatial correlations. An adaptive Markov chain Monte Carlo algorithm is then proposed to perform Bayesian inference. This algorithm is equipped with a stochastic optimization adaptation mechanism that automatically adjusts the parameters of the MRFs by maximum marginal likelihood estimation. Finally, the benefits of the proposed methodology are demonstrated through a series of experiments using real data. PMID:26886984
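
    In my own notation (not necessarily the authors'), the per-pixel observation model described above can be written as

        $y_{i,j,t} \sim \mathcal{P}\!\left( r_{i,j}\, h(t - t_{i,j}) + b_{i,j} \right)$

    where $h$ is the known impulse response, $r_{i,j}$ the target reflectivity, $t_{i,j}$ the depth-related time of flight at pixel $(i,j)$, $b_{i,j}$ the unknown constant background, and $\mathcal{P}$ a Poisson distribution; gamma MRF priors then couple the $r_{i,j}$ spatially and a second MRF couples the depths.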

  4. Nanoscale β-nuclear magnetic resonance depth imaging of topological insulators

    NASA Astrophysics Data System (ADS)

    Koumoulis, Dimitrios; Morris, Gerald D.; He, Liang; Kou, Xufeng; King, Danny; Wang, Dong; Hossain, Masrur D.; Wang, Kang L.; Fiete, Gregory A.; Kanatzidis, Mercouri G.; Bouchard, Louis-S.

    2015-07-01

    Considerable evidence suggests that variations in the properties of topological insulators (TIs) at the nanoscale and at interfaces can strongly affect the physics of topological materials. Therefore, a detailed understanding of surface states and interface coupling is crucial to the search for and applications of new topological phases of matter. Currently, no methods can provide depth profiling near surfaces or at interfaces of topologically inequivalent materials. Such a method could advance the study of interactions. Herein, we present a noninvasive depth-profiling technique based on β-detected NMR (β-NMR) spectroscopy of radioactive ⁸Li⁺ ions that can provide "one-dimensional imaging" in films of fixed thickness and generates nanoscale views of the electronic wavefunctions and magnetic order at topological surfaces and interfaces. By mapping the ⁸Li nuclear resonance near the surface and 10-nm deep into the bulk of pure and Cr-doped bismuth antimony telluride films, we provide signatures related to the TI properties and their topological nontrivial characteristics that affect the electron-nuclear hyperfine field, the metallic shift, and magnetic order. These nanoscale variations in β-NMR parameters reflect the unconventional properties of the topological materials under study, and understanding the role of heterogeneities is expected to lead to the discovery of novel phenomena involving quantum materials.

  5. Nanoscale β-nuclear magnetic resonance depth imaging of topological insulators.

    PubMed

    Koumoulis, Dimitrios; Morris, Gerald D; He, Liang; Kou, Xufeng; King, Danny; Wang, Dong; Hossain, Masrur D; Wang, Kang L; Fiete, Gregory A; Kanatzidis, Mercouri G; Bouchard, Louis-S

    2015-07-14

    Considerable evidence suggests that variations in the properties of topological insulators (TIs) at the nanoscale and at interfaces can strongly affect the physics of topological materials. Therefore, a detailed understanding of surface states and interface coupling is crucial to the search for and applications of new topological phases of matter. Currently, no methods can provide depth profiling near surfaces or at interfaces of topologically inequivalent materials. Such a method could advance the study of interactions. Herein, we present a noninvasive depth-profiling technique based on β-detected NMR (β-NMR) spectroscopy of radioactive (8)Li(+) ions that can provide "one-dimensional imaging" in films of fixed thickness and generates nanoscale views of the electronic wavefunctions and magnetic order at topological surfaces and interfaces. By mapping the (8)Li nuclear resonance near the surface and 10-nm deep into the bulk of pure and Cr-doped bismuth antimony telluride films, we provide signatures related to the TI properties and their topological nontrivial characteristics that affect the electron-nuclear hyperfine field, the metallic shift, and magnetic order. These nanoscale variations in β-NMR parameters reflect the unconventional properties of the topological materials under study, and understanding the role of heterogeneities is expected to lead to the discovery of novel phenomena involving quantum materials. PMID:26124141

  6. Estimating Lunar Pyroclastic Deposit Depth from Imaging Radar Data: Applications to Lunar Resource Assessment

    NASA Technical Reports Server (NTRS)

    Campbell, B. A.; Stacy, N. J.; Campbell, D. B.; Zisk, S. H.; Thompson, T. W.; Hawke, B. R.

    1992-01-01

    Lunar pyroclastic deposits represent one of the primary anticipated sources of raw materials for future human settlements. These deposits are fine-grained volcanic debris layers produced by explosive volcanism contemporaneous with the early stage of mare infilling. There are several large regional pyroclastic units on the Moon (for example, the Aristarchus Plateau, Rima Bode, and Sulpicius Gallus formations), and numerous localized examples, which often occur as dark-halo deposits around endogenic craters (such as in the floor of Alphonsus Crater). Several regional pyroclastic deposits were studied with spectral reflectance techniques: the Aristarchus Plateau materials were found to be a relatively homogeneous blanket of iron-rich glasses. One such deposit was sampled at the Apollo 17 landing site, and was found to have ferrous oxide and titanium dioxide contents of 12 percent and 5 percent, respectively. While the areal extent of these deposits is relatively well defined from orbital photographs, their depths have been constrained only by a few studies of partially filled impact craters and by imaging radar data. A model for radar backscatter from mantled units applicable to both 70-cm and 12.6-cm wavelength radar data is presented. Depth estimates from such radar observations may be useful in planning future utilization of lunar pyroclastic deposits.

  7. Examining the Choroid in Ocular Inflammation: A Focus on Enhanced Depth Imaging

    PubMed Central

    Baltmr, Abeir; Lightman, Sue; Tomkins-Netzer, Oren

    2014-01-01

    The choroid is the vascular layer that supplies the outer retina and is involved in the pathogenesis of several ocular conditions including choroidal tumors, age related macular degeneration, central serous chorioretinopathy, diabetic retinopathy, and uveitis. Nevertheless, difficulties in the visualization of the choroid have limited our understanding of its exact role in ocular pathology. Enhanced depth imaging optical coherence tomography (EDI-OCT) is a novel, noninvasive technique that is used to evaluate choroidal thickness and morphology in these diseases. The technique provides detailed objective in vivo visualization of the choroid and can be used to characterize posterior segment inflammatory disorders, monitor disease activity, and evaluate efficacy of treatment. In this review we summarize the current application of this technique in ocular inflammatory disorders and highlight its utility as an additional tool in monitoring choroidal involvement in ocular inflammation. PMID:25024846

  8. Depth-resolved Optical Imaging and Microscopy of Vascular Compartment Dynamics During Somatosensory Stimulation

    PubMed Central

    Hillman, Elizabeth M. C.; Devor, Anna; Bouchard, Matthew; Dunn, Andrew K.; Krauss, GW; Skoch, Jesse; Bacskai, Brian J.; Dale, Anders M.; Boas, David A.

    2007-01-01

    The cortical hemodynamic response to somatosensory stimulus is investigated at the level of individual vascular compartments using both depth-resolved optical imaging and in-vivo two-photon microscopy. We utilize a new imaging and spatiotemporal analysis approach that exploits the different characteristic dynamics of responding arteries, arterioles, capillaries and veins to isolate their three-dimensional spatial extent within the cortex. This spatial delineation is validated using vascular casts. Temporal delineation is supported by in-vivo two-photon microscopy of the temporal dynamics and vascular mechanisms of the arteriolar and venous responses. Using these techniques we have been able to characterize the roles of the different vascular compartments in generating and controlling the hemodynamic response to somatosensory stimulus. We find that changes in arteriolar total hemoglobin concentration agree well with arteriolar dilation dynamics, which in turn correspond closely with changes in venous blood flow. For four-second stimuli, we see only small changes in venous hemoglobin concentration, and do not detect measurable dilation or ballooning in the veins. Instead, we see significant evidence of capillary hyperemia. We compare our findings to historical observations of the composite hemodynamic response from other modalities including functional magnetic resonance imaging. Implications of our results are discussed with respect to mathematical models of cortical hemodynamics, and to current theories on the mechanisms underlying neurovascular coupling. We also conclude that our spatiotemporal analysis approach is capable of isolating and localizing signals from the capillary bed local to neuronal activation, and holds promise for improving the specificity of other hemodynamic imaging modalities. PMID:17222567

  9. Mapping permeable fractures at depth in crystalline metamorphic shield rocks using borehole seismic, logging, and imaging

    NASA Astrophysics Data System (ADS)

    Chan, J.; Schmitt, D. R.; Nieuwenhuis, G.; Poureslami Ardakani, E.; Kueck, J.; Abasolo, M. R.

    2012-04-01

    The presence of major fluid pathways in subsurface exploration can be identified by understanding the effects of fractures, cracks, and microcracks in the subsurface. Part of a feasibility study of geothermal development in Northern Alberta consists of the investigation of subsurface fluid pathways in the Precambrian basement rocks. One of the selected sites for this study is in the Fort McMurray area, where the deepest well drilled in the oilsands region in Northeastern Alberta is located. This deep borehole reaches 2.3 km, which offers substantial depth coverage to study the metamorphic rocks in the Precambrian crystalline basement of the study area. Seismic reflection profiles adjacent to the borehole reveal NW-SE dipping reflectors within the metamorphic shield rocks, some of which appear to intersect the wellbore. An extensive logging and borehole seismic program was carried out in the borehole in July 2011. Gamma ray, magnetic susceptibility, acoustic televiewer, electrical resistivity, and full-waveform sonic logs were acquired to study the finer-scale structure of the rock formations, with vertical resolutions in the range of 0.05 cm to 80 cm. These logs supplement earlier electrical microscanner images obtained by the well operator when the well was drilled. In addition, we are also interested in identifying other geological features, such as zones of fractures, that could provide an indication of enhanced fluid flow potential - a necessary component for any geothermal system to be viable. The interpretation of the borehole logs reveals a highly conductive 13 m thick zone at 1409 m depth that may indicate communication of natural brines in fractures with the wellbore fluid. The photoelectric factor and magnetic susceptibility also appear anomalous in this zone. A Formation MicroImager (FMI) log was used to verify the presence of fractures in this conductive zone. This fracture zone may coincide with the dipping reflectors seen in the seismic reflection profiles.

  10. In vivo volumetric depth-resolved vasculature imaging of human limbus and sclera with 1 μm swept source phase-variance optical coherence angiography

    NASA Astrophysics Data System (ADS)

    Poddar, Raju; Zawadzki, Robert J.; Cortés, Dennis E.; Mannis, Mark J.; Werner, John S.

    2015-06-01

    We present in vivo volumetric depth-resolved vasculature images of the anterior segment of the human eye acquired with phase-variance based motion contrast using a high-speed (100 kHz, 10^5 A-scans/s) swept source optical coherence tomography system (SSOCT). High phase stability SSOCT imaging was achieved by using a computationally efficient phase stabilization approach. The human corneo-scleral junction and sclera were imaged with swept source phase-variance optical coherence angiography and compared with slit lamp images from the same eyes of normal subjects. Different features of the rich vascular system in the conjunctiva and episclera were visualized and described. This system can be used as a potential tool for ophthalmological research to determine changes in the outflow system, which may be helpful for identification of abnormalities that lead to glaucoma.

  11. Low-Achieving Readers, High Expectations: Image Theatre Encourages Critical Literacy

    ERIC Educational Resources Information Center

    Rozansky, Carol Lloyd; Aagesen, Colleen

    2010-01-01

    Students in an eighth-grade, urban, low-achieving reading class were introduced to critical literacy through engagement in Image Theatre. Developed by liberatory dramatist Augusto Boal, Image Theatre gives participants the opportunity to examine texts in the triple role of interpreter, artist, and sculptor (i.e., image creator). The researchers…

  12. Depth-selective imaging of macroscopic objects hidden behind a scattering layer using low-coherence and wide-field interferometry

    NASA Astrophysics Data System (ADS)

    Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Ko, Hakseok; Choi, Wonshik

    2016-08-01

    Imaging systems targeting macroscopic objects tend to have poor depth selectivity. In this Letter, we present a 3D imaging system featuring a depth resolution of 200 μm, a depth scanning range of more than 1 m, and a field of view larger than 70 × 70 mm^2. For depth selectivity, we set up an off-axis digital holographic imaging system using a light source with a coherence length of 400 μm. A prism pair was installed in the reference beam path for long-range depth scanning. We imaged macroscopic targets with multiple layers and also demonstrated imaging of targets hidden behind a scattering layer.

  13. Controlling electron trap depth to enhance optical properties of persistent luminescence nanoparticles for in vivo imaging.

    PubMed

    Maldiney, Thomas; Lecointre, Aurélie; Viana, Bruno; Bessière, Aurélie; Bessodes, Michel; Gourier, Didier; Richard, Cyrille; Scherman, Daniel

    2011-08-01

    Focusing on the use of nanophosphors for in vivo imaging and diagnosis applications, we used thermally stimulated luminescence (TSL) measurements to study the influence of trivalent lanthanide Ln(3+) (Ln = Dy, Pr, Ce, Nd) electron traps on the optical properties of Mn(2+)-doped diopside-based persistent luminescence nanoparticles. This work reveals that Pr(3+) is the most suitable Ln(3+) electron trap in the diopside lattice, providing optimal trap depth for room temperature afterglow and resulting in the most intense luminescence decay curve after X-ray irradiation. This dependence of the luminescence on the electron trap is maintained through additional doping with Eu(2+), allowing UV-light excitation, critical for bioimaging applications in living animals. We finally identify a novel composition (CaMgSi(2)O(6):Eu(2+),Mn(2+),Pr(3+)) for in vivo imaging, displaying a strong near-infrared afterglow centered on 685 nm, and present evidence that intravenous injection of such persistent luminescence nanoparticles in mice allows improved and highly sensitive detection through living tissues. PMID:21702453

  14. Calibrating remotely sensed river bathymetry in the absence of field measurements: Flow REsistance Equation-Based Imaging of River Depths (FREEBIRD)

    NASA Astrophysics Data System (ADS)

    Legleiter, Carl J.

    2015-04-01

    Remote sensing could enable high-resolution mapping of long river segments, but realizing this potential will require new methods for inferring channel bathymetry from passive optical image data without using field measurements for calibration. As an alternative to regression-based approaches, this study introduces a novel framework for Flow REsistance Equation-Based Imaging of River Depths (FREEBIRD). This technique allows for depth retrieval in the absence of field data by linking a linear relation between an image-derived quantity X and depth d to basic equations of open channel flow: continuity and flow resistance. One FREEBIRD algorithm takes as input an estimate of the channel aspect (width/depth) ratio A and a series of cross-sections extracted from the image and returns the coefficients of the X versus d relation. A second algorithm calibrates this relation so as to match a known discharge Q. As an initial test of FREEBIRD, these procedures were applied to panchromatic satellite imagery and publicly available aerial photography of a clear-flowing gravel-bed river. Accuracy assessment based on independent field surveys indicated that depth retrieval performance was comparable to that achieved by direct, field-based calibration methods. Sensitivity analyses suggested that FREEBIRD output was not heavily influenced by misspecification of A or Q, or by selection of other input parameters. By eliminating the need for simultaneous field data collection, these methods create new possibilities for large-scale river monitoring and analysis of channel change, subject to the important caveat that the underlying relationship between X and d must be reasonably strong.
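
    The abstract does not reproduce the underlying equations, but the calibration idea can be sketched under simple assumptions: take a linear, zero-intercept relation d = b·X between the image-derived quantity X and depth d, combine continuity (discharge as the integral of velocity times depth across the section) with a Manning-type flow-resistance law, and solve for the gain b that reproduces the known discharge Q. The Python sketch below is a hypothetical illustration of that second FREEBIRD algorithm, not the published implementation; the Manning coefficient, zero intercept, and function names are assumptions.

    import numpy as np

    def calibrate_x_vs_depth(x_section, width, discharge, slope, n_manning=0.035):
        """Find the gain b in d = b * X so that the implied discharge matches Q.

        x_section : image-derived quantity X sampled across one cross-section
        width     : channel width (m) of that cross-section
        discharge : known discharge Q (m^3/s)
        slope     : energy slope S (dimensionless)
        """
        dx = width / len(x_section)

        def q_of_gain(b):
            d = np.clip(b * x_section, 0.0, None)                      # depth profile (m)
            v = (1.0 / n_manning) * d ** (2.0 / 3.0) * np.sqrt(slope)  # Manning velocity
            return np.sum(v * d * dx)                                  # continuity: Q = sum(v * d * dx)

        # Q(b) increases monotonically with b, so a simple bisection suffices.
        lo, hi = 1e-6, 1e3
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if q_of_gain(mid) < discharge else (lo, mid)
        return 0.5 * (lo + hi)

    # Example with a synthetic cross-section where X peaks mid-channel.
    x = np.sin(np.linspace(0.0, np.pi, 50))
    b = calibrate_x_vs_depth(x, width=20.0, discharge=15.0, slope=0.002)
    depth_profile = b * x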

  15. Anterior segment biometry during accommodation imaged with ultra-long scan depth optical coherence tomography

    PubMed Central

    Du, Chixin; Shen, Meixiao; Li, Ming; Zhu, Dexi; Wang, Michael R.; Wang, Jianhua

    2012-01-01

    Purpose To measure by ultra-long scan depth optical coherence tomography (UL-OCT) dimensional changes in the anterior segment of human eyes during accommodation. Design Evaluation of diagnostic test or technology. Participants Forty-one right eyes of healthy subjects with a mean age of 34 years (range, 22–41 years) and a mean refraction of −2.5±2.6 diopters (D) were imaged in two repeated measurements at minimal and maximal accommodation. Methods A specially designed UL-OCT instrument was used to image from the front surface of the cornea to the back surface of the crystalline lens. Custom software corrected the optical distortion of the images and yielded the biometric measurements. The coefficient of repeatability (COR) and the intraclass correlation coefficient (ICC) were calculated to evaluate the repeatability and reliability. Main Outcome Measures Anterior segment parameters and associated repeatability and reliability upon accommodation. The dimensional results included central corneal thickness (CCT), anterior chamber depth and width (ACD, ACW), pupil diameter (PD), lens thickness (LT), anterior segment length (ASL=ACD+LT), lens central position (LCP=ACD+1/2LT) and horizontal radii of the lens anterior and posterior surface curvatures (LAC, LPC). Results Repeated measurements of each variable within each accommodative state did not differ significantly (P>0.05). The CORs and ICCs for CCT, ACW, ACD, LT, LCP, and ASL were excellent (1.2% to 3.59% and 0.998 to 0.877, respectively). They were higher for PD (18.90% to 21.63% and 0.880 to 0.874, respectively), and moderate for LAC and LPC (34.86% to 42.72% and 0.669 to 0.251, respectively) in the two accommodative states. Compared to minimal accommodation, PD, ACD, LAC, LPC, and LCP decreased and LT and ASL increased significantly at maximal accommodation (P<0.05), while CCT and ACW did not change (P>0.05). Conclusions UL-OCT measured changes in anterior segment dimensions during accommodation with good repeatability and reliability.

  16. Large field-of-view and depth-specific cortical microvascular imaging underlies regional differences in ischemic brain

    NASA Astrophysics Data System (ADS)

    Qin, Jia; Shi, Lei; Dziennis, Suzan; Wang, Ruikang K.

    2014-02-01

    The ability to non-invasively monitor and quantify blood flow, blood vessel morphology, oxygenation, and tissue morphology is important for improved diagnosis, treatment, and management of various neurovascular disorders, e.g., stroke. Currently, no imaging technique can satisfactorily extract these parameters from in vivo microcirculatory tissue beds with a large field of view and sufficient resolution at a defined depth, without any harm to the tissue. To enable more effective therapeutics, we need to determine the area of brain that is damaged but not yet dead after focal ischemia. Here we develop an integrated multi-functional imaging system in which SDW-LSCI (synchronized dual-wavelength laser speckle imaging) is used as a guiding tool for OMAG (optical microangiography) to investigate the fine detail of tissue hemodynamics, such as vessel flow, profile, and flow direction. We determine the utility of the integrated system for serial monitoring of the aforementioned parameters in experimental stroke, middle cerebral artery occlusion (MCAO), in mice. For 90 min MCAO, on site and 24 hours following reperfusion, we use SDW-LSCI to determine distinct flow and oxygenation variations that differentiate the infarction, peri-infarct, reduced-flow, and contralateral regions. The blood volumes are quantifiable and distinct in these regions. We also demonstrate that the behaviors of flow and flow direction in the arteries connected to the MCA play an important role in the time course of MCAO. These achievements may improve our understanding of vascular involvement under pathologic and physiological conditions, and ultimately facilitate clinical diagnosis, monitoring, and therapeutic intervention in neurovascular diseases such as ischemic stroke.

  17. Performance comparison between 8- and 14-bit-depth imaging in polarization-sensitive swept-source optical coherence tomography.

    PubMed

    Lu, Zenghai; Kasaragod, Deepa K; Matcher, Stephen J

    2011-01-01

    Recently the effects of reduced bit-depth acquisition on swept-source optical coherence tomography (SS-OCT) image quality have been evaluated by using simulations and empirical studies, showing that image acquisition at 8-bit depth allows high system sensitivity with only a minimal drop in the signal-to-noise ratio compared to higher bit-depth systems. However, in these studies the 8-bit data is actually 12- or 14-bit ADC data numerically truncated to 8 bits. In practice, a native 8-bit ADC could actually possess a true bit resolution lower than this due to the electronic jitter in the converter etc. We compare true 8- and 14-bit-depth imaging of SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of equine tendon indicate no significant differences between images acquired by the two DAQ boards suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. One possible disadvantage is a reduced imaging dynamic range which can manifest itself as an increase in image artifacts due to strong Fresnel reflection. PMID:21483604
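
    As a rough, self-contained illustration of the truncation comparison discussed above, the sketch below quantizes a synthetic spectral interferogram at 14 and 8 bits and compares the resulting A-scan signal-to-noise ratios; the fringe model, noise level, and noise-floor estimate are illustrative assumptions, not the authors' measurement conditions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2048
    k = np.arange(n)
    fringe = 0.4 * np.cos(2 * np.pi * 0.12 * k)              # single-reflector fringe
    signal = 0.5 + fringe + 0.01 * rng.standard_normal(n)    # DC + fringe + noise, roughly in [0, 1)

    def quantize(x, bits):
        levels = 2 ** bits
        return np.round(np.clip(x, 0.0, 1.0) * (levels - 1)) / (levels - 1)

    def ascan_snr_db(x):
        spectrum = np.abs(np.fft.rfft(x - x.mean()))
        peak = spectrum.max()                                 # reflector peak
        noise_floor = np.median(spectrum)                     # crude noise-floor estimate
        return 20.0 * np.log10(peak / noise_floor)

    print("14-bit SNR: %.1f dB" % ascan_snr_db(quantize(signal, 14)))
    print(" 8-bit SNR: %.1f dB" % ascan_snr_db(quantize(signal, 8)))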

  18. Imaging the Carboneras fault zone at depth: preliminary results from reflection/refraction seismic tomography

    NASA Astrophysics Data System (ADS)

    Nippress, S.; Rietbrock, A.; Faulkner, D. R.; Rutter, E.; Haberland, C. A.; Teixido, T.

    2009-12-01

    Understanding and characterizing fault zone structure at depth is vital to predicting the slip behaviour of faults in the brittle crust. We aim to combine detailed field mapping and laboratory velocity/physical property determinations with seismic measurements on the Carboneras fault zone (CFZ; S.E. Spain) to improve our knowledge of how fault zone structure affects seismic signals. The CFZ is a large offset (10s of km) strike-slip fault that constitutes part of the diffuse plate boundary between Africa and Iberia. It has been largely passively exhumed from ca. 4 to 6 km depth. The friable fault zone components are excellently preserved in the region's semi-arid climate, and consist of multiple strands of phyllosilicate-rich fault gouge ranging from 1 to 20 m in thickness. In May 2009 we acquired 4 high-resolution seismic reflection and refraction/first break tomography lines. Two of these lines (~1 km long) crossed the entire fault zone while the remaining lines (~150 and ~300 m long) concentrated on individual fault strands and associated damage zones. For each of the lines a 2 m geophone spacing was used with a combination of accelerated drop weight, sledgehammer and 100 g explosives as seismic sources. Initial seismic reflection processing has been carried out on each of the 4 lines. First breaks have been picked for each of the shot gathers and input into a 2D traveltime inversion and amplitude-modeling package (Zelt & Smith, 1992) to obtain first break tomography images. During this field campaign we also carried out numerous fault zone guided wave experiments on two of the dense seismic lines. At the larger offsets (~600-700 m) we observe low frequency guided waves. These experiments will capture the various length scales involved in a mature fault zone and will enable the surface mapping and petrophysical studies to be linked to the seismic field observations.

  19. Three-dimensional anterior segment imaging in patients with type 1 Boston Keratoprosthesis with switchable full depth range swept source optical coherence tomography

    PubMed Central

    Poddar, Raju; Cortés, Dennis E.; Werner, John S.; Mannis, Mark J.

    2013-01-01

    A high-speed (100 kHz, 10^5 A-scans/s) complex-conjugate-resolved 1 μm swept source optical coherence tomography (SS-OCT) system using coherence revival of the light source is suitable for dense three-dimensional (3-D) imaging of the anterior segment. The short acquisition time helps to minimize the influence of motion artifacts. The extended depth range of the SS-OCT system allows topographic analysis of clinically relevant images of the entire depth of the anterior segment of the eye. Patients with the type 1 Boston Keratoprosthesis (KPro) require evaluation of the full anterior segment depth. Current commercially available OCT systems are not suitable for this application due to limited acquisition speed, resolution, and axial imaging range. Moreover, most commonly used research-grade and some clinical OCT systems implement a commercially available swept source (Axsun) that offers only a 3.7 mm imaging range (in air) in its standard configuration. We describe implementation of a common swept laser with built-in k-clock to allow phase-stable imaging in both low range and high range, 3.7 and 11.5 mm in air, respectively, without the need to build an external MZI k-clock. As a result, the 3-D morphology of the KPro position with respect to the surrounding tissue could be investigated in vivo both at high resolution and with large depth range to achieve noninvasive and precise evaluation of the success of the surgical procedure. PMID:23912759

  20. Pre-stack depth migration for improved imaging under seafloor canyons: 2D case study of Browse Basin, Australia*

    NASA Astrophysics Data System (ADS)

    Debenham, Helen; Westlake, Shane

    2014-06-01

    In the Browse Basin, as in many areas of the world, complex seafloor topography can cause problems with seismic imaging, owing to complex ray paths and sharp lateral changes in velocity. This paper compares ways in which 2D Kirchhoff imaging can be improved below seafloor canyons, using both time- and depth-domain processing. In the time domain, to improve on standard pre-stack time migration (PSTM), we apply removable seafloor static time shifts in order to reduce the push-down effect under seafloor canyons before migration. This allows better event continuity in the seismic imaging. However, this approach does not fully solve the problem: imaging remains sub-optimal, with amplitude shadows and structural distortion. Only depth-domain processing, with a migration algorithm that honours the paths of the seismic energy and a detailed velocity model, can provide improved imaging under these seafloor canyons and give confidence in the structural components of the exploration targets in this area. We therefore performed depth velocity model building followed by pre-stack depth migration (PSDM), which provided a step-change improvement in the imaging and new insights into the area.

  1. Three-dimensional image cytometer based on widefield structured light microscopy and high-speed remote depth scanning.

    PubMed

    Choi, Heejin; Wadduwage, Dushan N; Tu, Ting Yuan; Matsudaira, Paul; So, Peter T C

    2015-01-01

    A high-throughput 3D image cytometer has been developed that improves imaging speed by an order of magnitude over current technologies. This imaging speed improvement was realized by combining several key components. First, a depth-resolved image can be rapidly generated using a structured light reconstruction algorithm that requires only two wide-field images, one with uniform illumination and the other with structured illumination. Second, depth scanning is implemented using high-speed remote depth scanning. Finally, the large field of view, high-NA objective lens and the high-pixelation, high-frame-rate sCMOS camera enable high-resolution, high-sensitivity imaging of a large cell population. This system can image at 800 cells/s in 3D at submicron resolution, corresponding to imaging 1 million cells in 20 min. The statistical accuracy of this instrument is verified by quantitatively measuring rare cell populations with ratios ranging from 1:1 to 1:10^5. PMID:25352187
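
    The two-image reconstruction mentioned above resembles HiLo-style optical sectioning, in which the local modulation contrast of the structured-illumination image selects the in-focus low spatial frequencies while the high frequencies come directly from the uniform-illumination image. The sketch below shows that generic idea with assumed filter sizes; the cytometer's actual reconstruction algorithm may differ in its details.

    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    def two_image_section(i_uniform, i_structured, sigma=8.0, eta=1.0):
        # Local modulation contrast of the structured image: in-focus regions retain
        # the illumination-grid contrast, out-of-focus regions wash it out.
        ratio = i_structured / (i_uniform + 1e-6)
        size = int(2 * sigma)
        local_mean = uniform_filter(ratio, size=size)
        local_var = uniform_filter(ratio ** 2, size=size) - local_mean ** 2
        contrast = np.sqrt(np.maximum(local_var, 0.0))
        # Low-pass sectioned component weighted by contrast; high-pass taken from the
        # uniform image, whose high spatial frequencies are inherently sectioned.
        lo = gaussian_filter(contrast * i_uniform, sigma)
        hi = i_uniform - gaussian_filter(i_uniform, sigma)
        return eta * lo + hi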

  2. Detailed imaging of flowing structures at depth using microseismicity: a tool for site investigation?

    NASA Astrophysics Data System (ADS)

    Pytharouli, S.; Lunn, R. J.; Shipton, Z. K.

    2011-12-01

    Field evidence shows that faults and fractures can act as focused pathways or barriers for fluid migration. This is an important property for modern engineering problems, e.g., CO2 sequestration, geological radioactive waste disposal, geothermal energy exploitation, land reclamation and remediation. For such applications the detailed characterization of the location, orientation and hydraulic properties of existing fractures is necessary. These investigations are costly, requiring the hire of expensive equipment (excavators or drill rigs) that incurs standing charges when not in use. In addition, they only provide information for discrete sample 'windows'. Non-intrusive methods have the ability to gather information across an entire area. Methods including electrical resistivity/conductivity and ground penetrating radar (GPR) have been used as tools for site investigation. Their imaging ability is often restricted by unfavourable on-site conditions; e.g., GPR is not useful in cases where a layer of clay or reinforced concrete is present. Our research has shown that high-quality seismic data can be successfully used in the detailed imaging of subsurface structures at depth; using induced microseismicity data recorded beneath the Açu reservoir in Brazil, we identified orientations and values of average permeability of open shear fractures at depths of up to 2.5 km. Could microseismicity also provide information on fracture width in terms of stress drops? First results from numerical simulations showed that higher stress-drop values correspond to narrower fractures. These results were consistent with geological field observations. This study highlights the great potential of using microseismicity data as a supplementary tool for site investigation. Individual large-scale shear fractures in large rock volumes cannot currently be identified by any other geophysical dataset. The resolution of the method is restricted by the detection threshold of the local seismic network.

  3. Trap depth optimization to improve optical properties of diopside-based nanophosphors for medical imaging

    NASA Astrophysics Data System (ADS)

    Maldiney, Thomas; Lecointre, Aurélie; Viana, Bruno; Bessière, Aurélie; Gourier, Didier; Bessodes, Michel; Richard, Cyrille; Scherman, Daniel

    2012-02-01

    Regarding its ability to circumvent the autofluorescence signal, persistent luminescence was recently shown to be a powerful tool for in vivo imaging and diagnosis applications in living animals. The concept was introduced with lanthanide-doped persistent luminescence nanoparticles (PLNP), from a lanthanide-doped silicate host Ca0.2Zn0.9Mg0.9Si2O6:Eu2+, Mn2+, Dy3+ emitting in the near-infrared window. In order to improve the behaviour of these probes in vivo and favour diagnosis applications, we showed that biodistribution could be controlled by varying not only the hydrodynamic diameter but also the surface charge and functional groups. Stealth PLNP, with a neutral surface charge obtained by polyethylene glycol (PEG) coating, can circulate longer in the mouse body before being taken up by the reticulo-endothelial system. However, the main drawback of this first generation of PLNP was that it did not permit long-term monitoring, mainly because the afterglow decays after several tens of minutes, highlighting the need for new materials with improved optical characteristics. We investigated a modified silicate host, diopside CaMgSi2O6, and increased its persistent luminescence properties by studying various Ln3+ dopants (for instance Ce, Pr, Nd, Tm, Ho). Such dopants create electron traps that control the long lasting phosphorescence (LLP). We showed that Pr3+ was the most suitable Ln3+ electron trap in the diopside lattice, providing optimal trap depth and resulting in the most intense luminescence decay curve after UV irradiation. A novel composition CaMgSi2O6:Eu2+,Mn2+,Pr3+ was obtained for in vivo imaging, displaying a strong near-infrared persistent luminescence centred on 685 nm, allowing improved and sensitive detection through living tissues.

  4. A method of extending the depth of focus of the high-resolution X-ray imaging system employing optical lens and scintillator: a phantom study

    PubMed Central

    2015-01-01

    Background The high-resolution X-ray imaging system employing a synchrotron radiation source, thin scintillator, optical lens, and advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. Based on this advantage, it can effectively image tissues, cells, and many other small samples, especially calcification in the vasculature or in the glomerulus. In general, the thickness of the scintillator should be a few micrometers or even in the nanometer range, because it strongly affects the resolution. However, it is difficult to make the scintillator so thin, and a thin scintillator may greatly reduce the efficiency of collecting photons. Methods In this paper, we propose an approach to extend the depth of focus (DOF) to solve these problems. We first derive equation sets relating the high-resolution image generated by the scintillator to the blurred image degraded by defocus, and then adopt projection onto convex sets (POCS) and a total variation algorithm to solve the equation sets and recover the blurred image. Results By replacing the 1 μm thick matched scintillator with a 20 μm thick mismatched one, we simulated a high-resolution X-ray imaging system and obtained a degraded, blurred image. Using the proposed algorithm, we recovered the blurred image, and the experimental results showed that the algorithm performs well in recovering image blur caused by a mismatched scintillator thickness. Conclusions The proposed method is shown to efficiently recover images degraded by defocus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded blurred image, so there is room for improvement, and the corresponding denoising algorithm is worth further study and discussion. PMID:25602532
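
    As a generic stand-in for the POCS plus total-variation scheme described above, the sketch below alternates a data-consistency step under an assumed disk-shaped defocus PSF with a total-variation denoising step; it is not the paper's exact projection scheme, and the PSF radius, step size, and TV weight are placeholder values.

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage.restoration import denoise_tv_chambolle

    def disk_psf(radius):
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        psf = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
        return psf / psf.sum()

    def deblur(blurred, psf, n_iter=50, step=1.0, tv_weight=0.02):
        x = blurred.astype(float).copy()
        psf_flipped = psf[::-1, ::-1]
        for _ in range(n_iter):
            # Data-consistency step: push conv(x, psf) toward the measured blurred image.
            residual = blurred - fftconvolve(x, psf, mode="same")
            x = x + step * fftconvolve(residual, psf_flipped, mode="same")
            # Regularization step: move toward piecewise-smooth (low total variation) images.
            x = denoise_tv_chambolle(x, weight=tv_weight)
        return np.clip(x, 0.0, None)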

  5. Multistep joint bilateral depth upsampling

    NASA Astrophysics Data System (ADS)

    Riemens, A. K.; Gangwal, O. P.; Barenbrug, B.; Berretty, R.-P. M.

    2009-01-01

    Depth maps are used in many applications, e.g. 3D television, stereo matching, segmentation, etc. Often, depth maps are available at a lower resolution compared to the corresponding image data. For these applications, depth maps must be upsampled to the image resolution. Recently, joint bilateral filters are proposed to upsample depth maps in a single step. In this solution, a high-resolution output depth is computed as a weighted average of surrounding low-resolution depth values, where the weight calculation depends on spatial distance function and intensity range function on the related image data. Compared to that, we present two novel ideas. Firstly, we apply anti-alias prefiltering on the high-resolution image to derive an image at the same low resolution as the input depth map. The upsample filter uses samples from both the high-resolution and the low-resolution images in the range term of the bilateral filter. Secondly, we propose to perform the upsampling in multiple stages, refining the resolution by a factor of 2×2 at each stage. We show experimental results on the consequences of the aliasing issue, and we apply our method to two use cases: a high quality ground-truth depth map and a real-time generated depth map of lower quality. For the first use case a relatively small filter footprint is applied; the second use case benefits from a substantially larger footprint. These experiments show that the dual image resolution range function alleviates the aliasing artifacts and therefore improves the temporal stability of the output depth map. On both use cases, we achieved comparable or better image quality with respect to upsampling with the joint bilateral filter in a single step. On the former use case, we feature a reduction of a factor of 5 in computational cost, whereas on the latter use case, the cost saving is a factor of 50.
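
    A simplified reading of the two ideas above can be sketched as a repeated 2x2 upsampling stage whose guide image is anti-alias prefiltered to the current depth resolution, with the bilateral range term comparing each high-resolution guide pixel against the low-resolution guide. The filter parameters, prefilter widths, and pyramid construction below are illustrative assumptions, and the image dimensions are assumed to be the depth-map dimensions times a power of two.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def upsample_2x(depth_lo, guide_hi, sigma_s=2.0, sigma_r=10.0, radius=2):
        """One 2x2 stage; guide_hi has twice the resolution of depth_lo."""
        # Anti-alias prefilter the guide down to the current depth resolution.
        guide_lo = zoom(gaussian_filter(guide_hi, 1.0), 0.5, order=1)
        h, w = depth_lo.shape
        out = np.zeros((2 * h, 2 * w))
        for y in range(2 * h):
            for x in range(2 * w):
                yl, xl = y // 2, x // 2
                num = den = 0.0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ys = min(max(yl + dy, 0), h - 1)
                        xs = min(max(xl + dx, 0), w - 1)
                        w_spatial = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        # Dual-resolution range term: high-res pixel vs low-res guide sample.
                        diff = float(guide_hi[y, x]) - float(guide_lo[ys, xs])
                        w_range = np.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        num += w_spatial * w_range * depth_lo[ys, xs]
                        den += w_spatial * w_range
                out[y, x] = num / den
        return out

    def multistep_upsample(depth_lo, image_full, steps):
        """Refine by 2x2 per stage until the full image resolution is reached."""
        depth = np.asarray(depth_lo, dtype=float)
        for s in range(steps, 0, -1):
            factor = 2 ** (s - 1)
            # Guide at this stage's target resolution, anti-aliased from the full image.
            guide = image_full if factor == 1 else zoom(
                gaussian_filter(image_full, factor / 2.0), 1.0 / factor, order=1)
            depth = upsample_2x(depth, guide)
        return depth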

  6. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine.

    PubMed

    Xia, Tian; Patel, Shriji N; Szirth, Ben C; Kolomeyer, Anton M; Khouri, Albert S

    2016-01-01

    Background. Software guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. 32 patients had mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic 0.29 ± 0.12. The difference between stereoscopic and DAS assisted was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD. PMID:27190507

  7. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine

    PubMed Central

    Xia, Tian; Patel, Shriji N.; Szirth, Ben C.

    2016-01-01

    Background. Software guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. 32 patients had mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic 0.29 ± 0.12. The difference between stereoscopic and DAS assisted was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD. PMID:27190507

  8. Choroidal changes observed with enhanced depth imaging optical coherence tomography in patients with mild Graves orbitopathy.

    PubMed

    Özkan, B; Koçer, Ç A; Altintaş, Ö; Karabaş, L; Acar, A Z; Yüksel, N

    2016-07-01

    Purpose To evaluate the choroidal thickness in patients with Graves orbitopathy (GO) using enhanced depth imaging optical coherence tomography (EDI-OCT). Methods Thirty-one patients with GO were evaluated prospectively. All subjects underwent ophthalmologic examination including best-corrected visual acuity, intraocular pressure measurement, biomicroscopic, and fundus examination. Choroidal thickness was measured at the central fovea. In addition, visual evoked potential measurement and visual field evaluation were performed. Results The mean choroidal thickness was 377.8±7.4 μm in the GO group and 334±13.7 μm in the control group (P=0.004). There was a significant correlation between the choroidal thickness and the clinical activity scores (CAS) of the patients (r=0.281, P=0.027). Additionally, there was a correlation between the choroidal thickness and the visual-evoked potential (VEP) P100 latency measurements of the patients (r=0.439, P=0.001). Conclusions The results of this study demonstrate that the choroid is thicker in patients with GO. The choroidal thickness is also correlated with the CAS and VEP P100 latency measurements in these patients. PMID:27315349

  9. Active probing of cloud thickness and optical depth using wide-angle imaging LIDAR.

    SciTech Connect

    Love, Steven P.; Davis, A. B.; Rohde, C. A.; Tellier, L. L.; Ho, Cheng,

    2002-01-01

    At most optical wavelengths, laser light in a cloud lidar experiment is not absorbed but merely scattered out of the beam, eventually escaping the cloud via multiple scattering. There is much information available in this light scattered far from the input beam, information ignored by traditional 'on-beam' lidar. Monitoring these off-beam returns in a fully space- and time-resolved manner is the essence of our unique instrument, Wide Angle Imaging Lidar (WAIL). In effect, WAIL produces wide-field (60° full-angle) 'movies' of the scattering process and records the cloud's radiative Green functions. A direct data product of WAIL is the distribution of photon path lengths resulting from multiple scattering in the cloud. Following insights from diffusion theory, we can use the measured Green functions to infer the physical thickness and optical depth of the cloud layer. WAIL is notable in that it is applicable to optically thick clouds, a regime in which traditional lidar is reduced to ceilometry. Section 2 covers the up-to-date evolution of the nighttime WAIL instrument at LANL. Section 3 reports our progress towards daytime capability for WAIL, an important extension to full diurnal cycle monitoring by means of an ultra-narrow magneto-optic atomic line filter. Section 4 describes briefly how the important cloud properties can be inferred from WAIL signals.

  10. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2008-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
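
    As a toy illustration of the modulation idea described above, the sketch below amplitude-modulates a few synthetic patterns onto distinct sinusoidal carriers along one image axis, sums them into a single composite, and recovers each pattern by mixing with the matching carrier and low-pass filtering. The carrier frequencies, pattern shapes, and box filter are illustrative assumptions, not the patented projection and demodulation method.

    import numpy as np
    from scipy.ndimage import uniform_filter1d

    h, w = 64, 256
    x = np.arange(w) / w
    carrier_freqs = [20, 35, 50]          # cycles across the image width (assumed)

    # Toy "structured light patterns", each varying only along y.
    patterns = [0.5 + 0.5 * np.cos(2 * np.pi * (k + 1) * np.linspace(0, 1, h))[:, None]
                * np.ones((1, w)) for k in range(len(carrier_freqs))]

    # Composite: each pattern amplitude-modulates its own (mutually uncorrelated) carrier.
    composite = sum(p * np.cos(2 * np.pi * f * x)[None, :]
                    for p, f in zip(patterns, carrier_freqs))

    # Demodulation: mix with the matching carrier, then low-pass along x;
    # the factor 2 compensates for the 1/2 that comes from averaging cos^2.
    recovered = [2.0 * uniform_filter1d(composite * np.cos(2 * np.pi * f * x)[None, :],
                                        size=31, axis=1)
                 for f in carrier_freqs]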

  11. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2010-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object, such as an anatomical feature. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the anatomical feature; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.

  12. Adaptive switching filter for noise removal in highly corrupted depth maps from Time-of-Flight image sensors

    NASA Astrophysics Data System (ADS)

    Lee, Seunghee; Bae, Kwanghyuk; Kyung, Kyu-min; Kim, Tae-Chan

    2012-03-01

    In this work, we present an adaptive switching filter for noise reduction and sharpness preservation in depth maps provided by Time-of-Flight (ToF) image sensors. The median filter and the bilateral filter are commonly used in cost-sensitive applications where low computational complexity is needed. However, the median filter blurs fine details and edges in the depth map, while the bilateral filter works poorly when impulse noise is present in the image. Since the variance of depth is inversely proportional to amplitude, we suggest an adaptive filter that switches between the median filter and the bilateral filter based on the amplitude level. If a region of interest has low amplitude, indicating a low confidence level of the measured depth data, the median filter is applied to the depth at that position, while regions with a high amplitude level are processed with a bilateral filter using a Gaussian kernel with adaptive weights. Results show that the suggested algorithm matches the surface smoothing of the median filter and the detail preservation of the bilateral filter. With the suggested algorithm, a significant gain in visual quality is obtained in depth maps while low computational cost is maintained.
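
    A minimal sketch of the amplitude-guided switching rule is given below: depth pixels with low ToF amplitude (low confidence) take a median-filtered value, while high-amplitude pixels take an edge-preserving bilateral value. The threshold, kernel sizes, and Gaussian widths are illustrative choices rather than the authors' settings.

    import numpy as np
    from scipy.ndimage import median_filter

    def bilateral(depth, sigma_s=2.0, sigma_r=30.0, radius=3):
        h, w = depth.shape
        out = np.empty((h, w), dtype=float)
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                patch = depth[y0:y1, x0:x1].astype(float)
                yy, xx = np.mgrid[y0:y1, x0:x1]
                w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
                w_r = np.exp(-((patch - float(depth[y, x])) ** 2) / (2 * sigma_r ** 2))
                out[y, x] = np.sum(w_s * w_r * patch) / np.sum(w_s * w_r)
        return out

    def adaptive_switching_filter(depth, amplitude, amp_threshold):
        low_confidence = amplitude < amp_threshold
        med = median_filter(depth, size=3)   # robust to impulse noise at low-confidence pixels
        bil = bilateral(depth)               # edge-preserving smoothing at high-confidence pixels
        return np.where(low_confidence, med, bil)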

  13. Depth-resolved imaging of colon tumor using optical coherence tomography and fluorescence laminar optical tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tang, Qinggong; Frank, Aaron; Wang, Jianting; Chen, Chao-wei; Jin, Lily; Lin, Jon; Chan, Joanne M.; Chen, Yu

    2016-03-01

    Early detection of neoplastic changes remains a critical challenge in clinical cancer diagnosis and treatment. Many cancers arise from epithelial layers such as those of the gastrointestinal (GI) tract. Current standard endoscopic technology is unable to detect those subsurface lesions. Since cancer development is associated with both morphological and molecular alterations, imaging technologies that can quantitatively image a tissue's morphological and molecular biomarkers and assess the depth extent of a lesion in real time, without the need for tissue excision, would be a major advance in GI cancer diagnostics and therapy. In this research, we investigated the feasibility of multi-modal optical imaging, combining high-resolution optical coherence tomography (OCT) and depth-resolved, high-sensitivity fluorescence laminar optical tomography (FLOT), for structural and molecular imaging. An APC (adenomatous polyposis coli) mouse model was imaged using OCT and FLOT, and the correlated histopathological diagnosis was obtained. Quantitative structural (the scattering coefficient) and molecular (fluorescence intensity) imaging parameters from OCT and FLOT images were developed for multi-parametric analysis. This multi-modal imaging method demonstrated the feasibility of more accurate diagnosis, with 87.4% (87.3%) sensitivity (specificity), which gives the best diagnostic performance (the largest area under the receiver operating characteristic (ROC) curve). This project results in a new non-invasive multi-modal imaging platform for improved GI cancer detection, which is expected to have a major impact on the detection, diagnosis, and characterization of GI cancers, as well as a wide range of epithelial cancers.

  14. Using computer aided system to determine the maximum depth of visualization of B-Mode diagnostic ultrasound image

    NASA Astrophysics Data System (ADS)

    Maslebu, G.; Adi, K.; Suryono

    2016-03-01

    In the radiology service unit, the ultrasound modality is widely used because it has advantages over other modalities: it is relatively inexpensive, non-invasive, does not use ionizing radiation, and is portable. Until now, the method for determining visualization depth in quality control programs has been visual observation of the ultrasound image on the monitor. The purpose of this study is to develop a computer-aided system to determine the maximum depth of visualization. Data acquisition was done using a B-Mode Diagnostic Ultrasound machine and a Multi-purpose Multi-tissue Ultrasound Phantom model 040GSE. The phantom was scanned at fixed frequencies of 1.8 MHz, 2.2 MHz, 3.6 MHz, and 5.0 MHz with gain settings of 30 dB, 45 dB, and 60 dB. Global thresholding and a Euclidean distance method were used to determine the maximum visualization depth. This study shows that the computer-aided approach provides deeper visualization depth than visual interpretation. The differences between expert verification and the image-processing results are <6%. Thus, the computer-aided system can be used for quality control purposes in determining the maximum visualization depth of B-Mode diagnostic ultrasound images.
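
    A minimal sketch of the thresholding-plus-distance idea is shown below: all pixels above a global threshold are treated as visualized signal, and the maximum Euclidean distance from the transducer position, converted with the image scale, gives the maximum visualization depth. The threshold choice, transducer position, and mm-per-pixel scale are placeholders, not the authors' calibration.

    import numpy as np

    def max_visualization_depth(bmode, mm_per_pixel, probe_row=0, probe_col=None, threshold=None):
        img = np.asarray(bmode, dtype=float)
        if threshold is None:
            threshold = img.mean() + img.std()    # simple global threshold (assumption)
        if probe_col is None:
            probe_col = img.shape[1] // 2         # transducer assumed centred at the top edge
        ys, xs = np.nonzero(img > threshold)
        if ys.size == 0:
            return 0.0
        dist = np.sqrt((ys - probe_row) ** 2 + (xs - probe_col) ** 2)
        return float(dist.max()) * mm_per_pixel   # deepest above-threshold echo, in mm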

  15. Application of Depth of Investigation index method to process resistivity imaging models from glacier forfield

    NASA Astrophysics Data System (ADS)

    Glazer, Michał; Dobinski, Wojciech; Grabiec, Mariusz

    2015-04-01

    At the end of August 2014, ERT measurements were carried out at the Storglaciären glacier forefield (Tarfala Valley, Northern Sweden) to study permafrost occurrence. This glacier has been retreating since 1910. It is one of the best-studied mountain glaciers in the world owing to the initiation of the first continuous glacier mass-balance research program. Near its frontal margin, three perpendicular and two parallel resistivity profile lines were located. They varied in the number of roll-along extensions and the electrode spacing used. At minimum, Schlumberger and dipole-dipole protocols were used at every measurement site. The surface of the glacier forefield is characterized by large moraine deposits consisting of rock blocks with air voids on the one hand and voids filled with clay material on the other. This caused large variations in electrode contact resistance along the profile lines. Furthermore, the fact that only weak currents could be used, together with the presence of high-resistivity-contrast structures in the geological medium, made the inversion process and the interpretation of the resulting resistivity models demanding. To stabilize the inversion process, efforts were made to remove the noisiest data and data affected by systematic errors. To assess the reliability of the resistivity models at depth and the presence of artifacts left by the inversion process, the Depth of Investigation (DOI) index was applied. It describes the accuracy of the resulting model with respect to variable inversion parameters. To prepare DOI maps, two inversions of the same data set using different reference models are necessary, and the results are compared with each other. In regions where the model depends strongly on the data, the DOI takes values near zero, while in regions where resistivity values depend more on the inversion parameters, the DOI rises. Additionally, several synthetic models were made, which led to a better understanding of the resistivity images of some geological structures observed in the field data.
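
    A common formulation of the DOI index compares, cell by cell, two inversions of the same data run with different uniform reference resistivities: values near zero mean the model is controlled by the data, larger values mean it is controlled by the inversion setup. The sketch below implements that simple comparison; the normalization and the cutoff mentioned in the comment are conventional choices, not values taken from this study.

    import numpy as np

    def doi_index(model_a, model_b, ref_a, ref_b, normalize=True):
        """model_a, model_b: inverted (log10) resistivity grids from the two runs;
        ref_a, ref_b: the (log10) reference resistivities used in those runs."""
        doi = (model_a - model_b) / (ref_a - ref_b)
        if normalize:
            doi = doi / np.max(np.abs(doi))       # scaled DOI, maximum value 1
        return doi

    # Cells with a scaled DOI below roughly 0.1-0.2 are often treated as well constrained;
    # above that, the model increasingly reflects the reference model rather than the data.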

  16. Head pose estimation from a 2D face image using 3D face morphing with depth parameters.

    PubMed

    Kong, Seong G; Mbouna, Ralph Oyini

    2015-06-01

    This paper presents estimation of head pose angles from a single 2D face image using a 3D face model morphed from a reference face model. A reference model refers to a 3D face of a person of the same ethnicity and gender as the query subject. The proposed scheme minimizes the disparity between the two sets of prominent facial features on the query face image and the corresponding points on the 3D face model to estimate the head pose angles. The 3D face model used is morphed from a reference model to be more specific to the query face in terms of the depth error at the feature points. The morphing process produces a 3D face model more specific to the query image when multiple 2D face images of the query subject are available for training. The proposed morphing process is computationally efficient since the depth of a 3D face model is adjusted by a scalar depth parameter at feature points. Optimal depth parameters are found by minimizing the disparity between the 2D features of the query face image and the corresponding features on the morphed 3D model projected onto 2D space. The proposed head pose estimation technique was evaluated on two benchmarking databases: 1) the USF Human-ID database for depth estimation and 2) the Pointing'04 database for head pose estimation. Experimental results demonstrate that head pose estimation errors in nodding and shaking angles are as low as 7.93° and 4.65° on average for a single 2D input face image. PMID:25706638
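
    A schematic version of the fitting idea follows: jointly optimize the head-pose angles and one scalar depth parameter per facial feature so that the projected 3D reference features match the 2D query features. The weak-perspective projection, parameterization, and optimizer settings below are illustrative assumptions, not the paper's exact formulation.

    import numpy as np
    from scipy.optimize import least_squares

    def rotation(yaw, pitch, roll):
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
        ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        return rz @ ry @ rx

    def fit_pose(points_2d, model_3d):
        """points_2d: (n, 2) query features; model_3d: (n, 3) reference-model features."""
        n = len(model_3d)

        def residual(p):
            yaw, pitch, roll, scale, tx, ty = p[:6]
            depth_params = p[6:]                     # one scalar depth parameter per feature
            m = model_3d.astype(float).copy()
            m[:, 2] *= depth_params                  # morph the model depth at feature points
            proj = scale * (rotation(yaw, pitch, roll) @ m.T).T[:, :2] + np.array([tx, ty])
            return (proj - points_2d).ravel()

        p0 = np.concatenate([[0.0, 0.0, 0.0, 1.0, 0.0, 0.0], np.ones(n)])
        sol = least_squares(residual, p0)
        yaw, pitch, roll = np.degrees(sol.x[:3])
        return (yaw, pitch, roll), sol.x[6:]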

  17. The Relationship between University Students' Academic Achievement and Perceived Organizational Image

    ERIC Educational Resources Information Center

    Polat, Soner

    2011-01-01

    The purpose of present study was to determine the relationship between university students' academic achievement and perceived organizational image. The sample of the study was the senior students at the faculties and vocational schools in Umuttepe Campus at Kocaeli University. Because the development of organizational image is a long process, the…

  18. Wavelet image processing applied to optical and digital holography: past achievements and future challenges

    NASA Astrophysics Data System (ADS)

    Jones, Katharine J.

    2005-08-01

    The link between wavelets and optics goes back to the work of Dennis Gabor, who both invented holography and developed Gabor decompositions. Holography involves 3-D images; Gabor decompositions involve 1-D signals. Gabor decompositions are the predecessors of wavelets. Wavelet image processing of holography, both optical and digital, will be examined with respect to past achievements and future challenges.

  19. Real-time depth image-based rendering with layered dis-occlusion compensation and aliasing-free composition

    NASA Astrophysics Data System (ADS)

    Smirnov, Sergey; Gotchev, Atanas

    2015-03-01

    Depth Image-based Rendering (DIBR) is a popular view synthesis technique which utilizes the RGB+D image format, also referred to as the view-plus-depth scene representation. Classical DIBR is prone to dis-occlusion artefacts, caused by the lack of information in areas behind foreground objects, which become visible in the synthesized images. A number of recently proposed compensation techniques have addressed the problem of hole filling. However, their computational complexity does not allow real-time view synthesis, and they may require additional user input. In this work, we propose a hole-compensation technique which works fully automatically and in a perceptually correct manner. The proposed technique applies a two-layer model of the given RGB+D imagery, specifically tailored for rendering with free viewpoint selection. The two main components of the proposed technique are an adaptive layering of depth into relative 'foreground' and 'background' layers, rendered separately, and an additional blending filter that creates a blending function for aliasing cancellation during view composition. The proposed real-time implementation turns ordinary view-plus-depth images into true 3D scene representations, which allow fly-around visualization.

  20. Performance comparison between 8 and 14 bit-depth imaging in polarization-sensitive swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.

    2011-03-01

    We compare true 8 and 14 bit-depth imaging of SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3 μm wavelength by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of an equine tendon sample indicate no significant differences between images acquired by the two DAQ boards, suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when the image data sampled with the 14-bit DAQ from human finger skin is artificially bit-reduced during post-processing. However, in agreement with previously reported results, we observe in our system that the real 8-bit image shows more artifacts than the image obtained by numerically truncating the raw 14-bit data to 8 bits, especially in low-intensity image areas. This is due to the higher noise floor and reduced dynamic range of the 8-bit DAQ. One possible disadvantage is a reduced imaging dynamic range which can manifest itself as an increase in image artefacts due to strong Fresnel reflection.

  1. Double depth-enhanced 3D integral imaging in projection-type system without diffuser

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Jiao, Xiao-xue; Sun, Yu; Xie, Yan; Liu, Shao-peng

    2015-05-01

    Integral imaging is a three-dimensional (3D) display technology that requires no additional viewing equipment. A new system is proposed in this paper that combines the elemental images of real images in real mode (RIRM) with those of virtual images in real mode (VIRM). The real images in real mode are the same as conventional integral images. The virtual images in real mode are obtained by changing the coordinates of the corresponding points in the elemental images, which can then be reconstructed by the lens array in virtual space. In order to reduce the spot size of the reconstructed images, the diffuser used in conventional integral imaging is omitted in the proposed method. The spot size is then nearly 1/20 of that in the conventional system. An optical integral imaging system is constructed to confirm that the proposed method opens a new way for the application of passive 3D display technology.

  2. Implementation of wireless 3D stereo image capture system and synthesizing the depth of region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Kwon, Hyeokjae; Badarch, Luubaatar

    2014-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images using a device driver for the CMOS camera interface and DirectDraw API functions. We send the raw captured image data to the host computer over a WiFi wireless link and then use GPU hardware and CUDA programming to implement real-time three-dimensional stereo imaging by synthesizing the depth of the ROI (region of interest). We also investigate the deblurring mechanism of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to verify the ROI emphasis effect.

  3. Fine Structure of Near-Surface Solar Wind Depth Profile by SNMS/SEM Imaging

    NASA Astrophysics Data System (ADS)

    Baryshev, S. V.; Zinovev, A. V.; Tripa, C. E.; Pellin, M. J.; Burnett, D. S.; Veryovkin, I. V.

    2012-03-01

    In this work, we report results of investigations of Genesis Si coupons conducted by laser post-ionization secondary neutral mass spectrometry (LPI SNMS), based on dual-beam depth profiling with low-energy normal-incidence sputtering (lenisDB).

  4. Novel dental dynamic depth profilometric imaging using simultaneous frequency-domain infrared photothermal radiometry and laser luminescence.

    PubMed

    Nicolaides, L; Mandelis, A; Abrams, S H

    2000-01-01

    A high-spatial-resolution dynamic experimental imaging setup, which can provide simultaneous measurements of laser-induced frequency-domain infrared photothermal radiometric and luminescence signals from defects in teeth, has been developed for the first time. The major findings of this work are (i) radiometric images are complementary to (anticorrelated with) luminescence images, as a result of the nature of the two physical signal generation processes; (ii) the radiometric amplitude exhibits much superior dynamic (signal resolution) range to luminescence in distinguishing between intact and cracked sub-surface structures in the enamel; (iii) the radiometric signal (amplitude and phase) produces dental images with much better defect localization, delineation, and resolution; (iv) radiometric images (amplitude and phase) at a fixed modulation frequency are depth profilometric, whereas luminescence images are not; and (v) luminescence frequency responses from enamel and hydroxyapatite exhibit two relaxation lifetimes, the longer of which (approximately ms) is common to all and is not sensitive to the defect state and overall quality of the enamel. Simultaneous radiometric and luminescence frequency scans for the purpose of depth profiling were performed and a quantitative theoretical two-lifetime rate model of dental luminescence was advanced. PMID:10938763

  5. Subsurface diffuse optical tomography can localize absorber and fluorescent objects but recovered image sensitivity is nonlinear with depth

    NASA Astrophysics Data System (ADS)

    Kepshire, Dax S.; Davis, Scott C.; Dehghani, Hamid; Paulsen, Keith D.; Pogue, Brian W.

    2007-04-01

    Subsurface tomography with diffuse light has been investigated with a noncontact approach to characterize the performance of absorption and fluorescence imaging. Using both simulations and experiments, the reconstruction of local subsurface heterogeneity is demonstrated, but the recovery of target size and fluorophore concentration is not linear when changes in depth occur, whereas the mean position of the object for experimental fluorescent and absorber targets is accurate to within 0.5 and 1.45 mm when located within the first 10 mm below the surface. Improvements in the linearity of the response with depth appear to remain challenging and may ultimately limit the approach to detection rather than characterization applications. However, increases in tissue curvature and/or the addition of prior information are expected to improve the linearity of the response. The potential for this type of imaging technique to serve as a surgical guide is highlighted.

  6. Utility of spatial frequency domain imaging (SFDI) and laser speckle imaging (LSI) to non-invasively diagnose burn depth in a porcine model.

    PubMed

    Burmeister, David M; Ponticorvo, Adrien; Yang, Bruce; Becerra, Sandra C; Choi, Bernard; Durkin, Anthony J; Christy, Robert J

    2015-09-01

    Surgical intervention of second degree burns is often delayed because of the difficulty in visual diagnosis, which increases the risk of scarring and infection. Non-invasive metrics have shown promise in accurately assessing burn depth. Here, we examine the use of spatial frequency domain imaging (SFDI) and laser speckle imaging (LSI) for predicting burn depth. Contact burn wounds of increasing severity were created on the dorsum of a Yorkshire pig, and wounds were imaged with SFDI/LSI starting immediately after the burn and then daily for the next 4 days. In addition, on each day the burn wounds were biopsied for histological analysis of burn depth, defined by collagen coagulation, apoptosis, and adnexal/vascular necrosis. Histological results show that collagen coagulation progressed from day 0 to day 1 and then stabilized. Non-invasive burn wound imaging produced metrics that correlate with different predictors of burn depth: collagen coagulation and apoptosis correlated with the SFDI scattering coefficient parameter [Formula: see text], and adnexal/vascular necrosis on the day of the burn correlated with blood flow determined by LSI. Therefore, incorporating the SFDI scattering coefficient and LSI-determined blood flow may provide an algorithm for accurate, real-time assessment of burn wound severity. PMID:26138371

  7. Enhanced depth imaging optical coherence tomography of choroidal osteoma with secondary neovascular membranes: report of two cases.

    PubMed

    Mello, Patrícia Correa de; Berensztejn, Patricia; Brasil, Oswaldo Ferreira Moura

    2016-01-01

    We report enhanced depth imaging optical coherence tomography (EDI-OCT) features based on clinical and imaging data from two newly diagnosed cases of choroidal osteoma presenting with recent visual loss secondary to choroidal neovascular membranes. The features described in the two cases, compression of the choriocapillaris and disorganization of the medium and large vessel layers, are consistent with those of previous reports. We noticed a sponge-like pattern previously reported, but it was subtle. Both lesions had multiple intralesional layers and a typical intrinsic transparency with visibility of the sclerochoroidal junction. PMID:27463635

  8. Penetration depth in tissue-mimicking phantoms from hyperspectral imaging in SWIR in transmission and reflection geometry

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Salo, Daniel; Kim, David M.; Berezin, Mikhail Y.

    2016-03-01

    We explored depth penetration in tissue-mimicking intralipid-based phantoms in the SWIR range (800-1650 nm) using a hyperspectral imaging system composed of a 2D CCD camera coupled to a microscope. Hyperspectral images in transmission and reflection geometries were collected with a spectral resolution of 5.27 nm and a total acquisition time of 3 minutes or less, which minimized artifacts from sample drying. Michelson spatial contrast was used as the metric to evaluate light penetration. Results from both transmission and reflection geometries consistently revealed the highest spatial contrast in the wavelength range of 1300 to 1350 nm.
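
    The Michelson spatial contrast used as the penetration metric above is simple to compute per spectral band; the sketch below assumes a hyperspectral cube indexed as (band, y, x) and evaluates the contrast of one image row crossing a resolution target, which are illustrative assumptions rather than the authors' pipeline.

    ```python
    import numpy as np

    def michelson_contrast(profile):
        """Michelson contrast C = (Imax - Imin) / (Imax + Imin) of a 1-D intensity profile."""
        i_max, i_min = profile.max(), profile.min()
        return (i_max - i_min) / (i_max + i_min)

    def contrast_per_band(cube, row):
        """Contrast of one image row in every spectral band of a (band, y, x) cube."""
        return np.array([michelson_contrast(cube[b, row, :]) for b in range(cube.shape[0])])

    # Hypothetical cube: 160 bands spanning roughly 800-1650 nm.
    cube = np.random.rand(160, 256, 256)
    contrast = contrast_per_band(cube, row=128)
    best_band = int(np.argmax(contrast))  # band with the highest spatial contrast
    ```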

  9. Multi-Mode Electromagnetic Ultrasonic Lamb Wave Tomography Imaging for Variable-Depth Defects in Metal Plates.

    PubMed

    Huang, Songling; Zhang, Yu; Wang, Shen; Zhao, Wei

    2016-01-01

    This paper proposes a new cross-hole tomography imaging (CTI) method for variable-depth defects in metal plates based on multi-mode electromagnetic ultrasonic Lamb waves (LWs). The dispersion characteristics determine that different modes of LWs are sensitive to different thicknesses of metal plates. In this work, the sensitivities to thickness variation of A0- and S0-mode LWs are theoretically studied. The principles and procedures for the cooperation of A0- and S0-mode LW CTI are proposed. Moreover, the experimental LW imaging system on an aluminum plate with a variable-depth defect is set up, based on A0- and S0-mode EMAT (electromagnetic acoustic transducer) arrays. For comparison, the traditional single-mode LW CTI method is used in the same experimental platform. The imaging results show that the computed thickness distribution by the proposed multi-mode method more accurately reflects the actual thickness variation of the defect, while neither the S0 nor the A0 single-mode method was able to distinguish thickness variation in the defect region. Moreover, the quantification of the defect's thickness variation is more accurate with the multi-mode method. Therefore, theoretical and practical results prove that the variable-depth defect in metal plates can be successfully quantified and visualized by the proposed multi-mode electromagnetic ultrasonic LW CTI method. PMID:27144571

  10. Multi-Mode Electromagnetic Ultrasonic Lamb Wave Tomography Imaging for Variable-Depth Defects in Metal Plates

    PubMed Central

    Huang, Songling; Zhang, Yu; Wang, Shen; Zhao, Wei

    2016-01-01

    This paper proposes a new cross-hole tomography imaging (CTI) method for variable-depth defects in metal plates based on multi-mode electromagnetic ultrasonic Lamb waves (LWs). The dispersion characteristics determine that different modes of LWs are sensitive to different thicknesses of metal plates. In this work, the sensitivities to thickness variation of A0- and S0-mode LWs are theoretically studied. The principles and procedures for the cooperation of A0- and S0-mode LW CTI are proposed. Moreover, the experimental LW imaging system on an aluminum plate with a variable-depth defect is set up, based on A0- and S0-mode EMAT (electromagnetic acoustic transducer) arrays. For comparison, the traditional single-mode LW CTI method is used in the same experimental platform. The imaging results show that the computed thickness distribution by the proposed multi-mode method more accurately reflects the actual thickness variation of the defect, while neither the S0 nor the A0 single-mode method was able to distinguish thickness variation in the defect region. Moreover, the quantification of the defect’s thickness variation is more accurate with the multi-mode method. Therefore, theoretical and practical results prove that the variable-depth defect in metal plates can be successfully quantified and visualized by the proposed multi-mode electromagnetic ultrasonic LW CTI method. PMID:27144571

  11. Imaging subducted high velocity slabs beneath the sea of Okhotsk using depth phases

    NASA Astrophysics Data System (ADS)

    Bai, K.; Li, D.; Helmberger, D. V.; Sun, D.; Wei, S.

    2014-12-01

    A recent study of a shallow Kuril subduction zone event displays significant waveform multi-pathing for paths propagating down the slab towards Europe (Zhan, Zhongwen, 2014). Relatively fast structures (5%) are invoked to simulate such observations, requiring numerical methods that can capture the resulting waveform distortions. Here, we present results for the reverse direction, that is, the effects on the depth phases of deep events propagating up the slab. In particular, the Mw 6.7 Sea of Okhotsk deep earthquake, which occurred at a depth of 640 km, is believed to be near the bottom of the slab structure and produced an abundance of depth phases. Differential travel time sP-P analysis shows a systematic decrease of up to 5 seconds from Europe to Australia and then to the Pacific, which is indicative of a dipping high-velocity layer above the source region. Multiple simulations using WKM (an upgraded variation of the traditional WKBJ method) and finite-difference methods were conducted to assess the effects of sharp structure on the whole wavefield. Results obtained from the analytical WKM code become questionable when compared against the finite-difference method because of WKM's inability to handle the diffracted phases that become crucial in complex structures. In this example, seismicity clusters within a 45-degree-dipping Benioff zone at shallow depth but becomes blurred beyond 400 km. Finite-difference simulations showed that a slab-shaped structure that follows the Benioff zone at shallow depth and steepens beyond 400 km produces a model that can account for the observed sP-P differential travel times of about 5 s for oceanic paths.

  12. Comparison of Coincident Multiangle Imaging Spectroradiometer and Moderate Resolution Imaging Spectroradiometer Aerosol Optical Depths over Land and Ocean Scenes Containing Aerosol Robotic Network Sites

    NASA Technical Reports Server (NTRS)

    Abdou, Wedad A.; Diner, David J.; Martonchik, John V.; Bruegge, Carol J.; Kahn, Ralph A.; Gaitley, Barbara J.; Crean, Kathleen A.; Remer, Lorraine A.; Holben, Brent

    2005-01-01

    The Multiangle Imaging Spectroradiometer (MISR) and the Moderate Resolution Imaging Spectroradiometer (MODIS), launched on 18 December 1999 aboard the Terra spacecraft, are making global observations of top-of-atmosphere (TOA) radiances. Aerosol optical depths and particle properties are independently retrieved from these radiances using methodologies and algorithms that make use of the instruments' respective designs. This paper compares instantaneous optical depths retrieved from simultaneous and collocated radiances measured by the two instruments at locations containing sites within the Aerosol Robotic Network (AERONET). A set of 318 MISR and MODIS images, obtained during the months of March, June, and September 2002 at 62 AERONET sites, were used in this study. The results show that over land, MODIS aerosol optical depths at 470 and 660 nm are larger than those retrieved from MISR by about 35% and 10% on average, respectively, when all land surface types are included in the regression. The differences decrease when coastal and desert areas are excluded. For optical depths retrieved over ocean, MISR is on average about 0.1 and 0.05 higher than MODIS in the 470 and 660 nm bands, respectively. Part of this difference is due to radiometric calibration and is reduced to about 0.01 and 0.03 when recently derived band-to-band adjustments in the MISR radiometry are incorporated. Comparisons with AERONET data show similar patterns.

  13. SU-E-I-11: Cascaded Linear System Model for Columnar CsI Flat Panel Imagers with Depth Dependent Gain and Blur

    SciTech Connect

    Peng, B; Lubinsky, A; Zheng, H; Zhao, W; Teymurazyan, A

    2014-06-01

    Purpose: To implement a depth-dependent gain and blur cascaded linear system model (CLSM) for optimizing columnar structured CsI indirect-conversion flat panel imagers (FPI) for advanced imaging applications. Methods: For experimental validation, the depth-dependent escape efficiency, e(z), was extracted from PHS measurements of different CsI scintillators (thickness, substrate and light output). The inherent MTF and DQE of the CsI was measured using a high-resolution CMOS sensor. For the CLSM, e(z) and the depth-dependent MTF(f,z) were estimated using Monte Carlo simulation (Geant4) of optical photon transport through columnar CsI. Previous work showed that Monte Carlo simulation for CsI was hindered by the non-ideality of its columnar structure. In the present work we allowed the column width to vary with depth, and assumed a diffusively reflective backing and columns. Monte Carlo simulation was performed with an optical point source placed at different depths in the CsI layer, from which MTF(f,z) and e(z) were computed. The resulting e(z), which matched experimental measurements well, was then applied to the CLSM, and the Monte Carlo simulation was repeated until the modeled MTF and DQE(f) also matched experimental measurements. Results: For a 150 micron FOS HL type CsI, e(z) varies between 0.56 and 0.45, and the MTF at 14 cycles/mm varies between 62.1% and 3.9%, from the front to the back of the scintillator. The overall MTF and DQE(f) are in excellent agreement with experimental measurements at all frequencies. Conclusion: We have developed a CLSM for columnar CsI scintillators with depth-dependent gain and MTF, which were estimated from Monte Carlo simulation with novel optical simulation settings. Preliminary results showed excellent agreement between simulation results and experimental measurements. Future work is aimed at extending this approach to optimize CsI screen optic design and sensor structure for achieving higher DQE(f) in cone-beam CT, which uses

  14. New Fast Fall Detection Method Based on Spatio-Temporal Context Tracking of Head by Using Depth Images

    PubMed Central

    Yang, Lei; Ren, Yanyun; Hu, Huosheng; Tian, Bo

    2015-01-01

    To deal with the projection problem that arises in fall detection with two-dimensional (2D) grey or color images, this paper proposes a robust fall detection method based on spatio-temporal context tracking over three-dimensional (3D) depth images captured by the Kinect sensor. In the pre-processing procedure, the parameters of the Single-Gauss-Model (SGM) are estimated and the coefficients of the floor plane equation are extracted from the background images. Once a human subject appears in the scene, the silhouette is extracted by the SGM and the coefficients of an ellipse fitted to the foreground are used to determine the head position. The dense spatio-temporal context (STC) algorithm is then applied to track the head position, and the distance from the head to the floor plane is calculated in every following frame of the depth image. When this distance falls below an adaptive threshold, the centroid height of the human is used as a second judgment criterion to decide whether a fall incident has happened. Lastly, four groups of experiments with different falling directions were performed. Experimental results show that the proposed method can detect fall incidents occurring in different orientations while requiring only low computational complexity. PMID:26378540
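
    The two-stage decision rule described above (head-to-floor distance below an adaptive threshold, then a centroid-height check) can be sketched as follows; the plane coefficients, threshold values, and function names are hypothetical, and the spatio-temporal context tracker itself is not reproduced here.

    ```python
    import numpy as np

    def point_to_plane_distance(point, plane):
        """Distance from a 3-D point to the plane a*x + b*y + c*z + d = 0."""
        a, b, c, d = plane
        return abs(a * point[0] + b * point[1] + c * point[2] + d) / np.sqrt(a**2 + b**2 + c**2)

    def is_fall(head_xyz, centroid_xyz, floor_plane, head_thresh, centroid_thresh):
        """Flag a fall when the tracked head is near the floor AND the body centroid is low."""
        head_low = point_to_plane_distance(head_xyz, floor_plane) < head_thresh
        centroid_low = point_to_plane_distance(centroid_xyz, floor_plane) < centroid_thresh
        return head_low and centroid_low

    # Hypothetical frame: floor plane from background calibration, positions (metres) from tracking.
    floor = (0.0, 1.0, 0.0, -0.05)
    print(is_fall((0.1, 0.30, 2.0), (0.1, 0.42, 2.0), floor, head_thresh=0.45, centroid_thresh=0.5))
    ```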

  15. Compton back scatter imaging for mild steel rebar detection and depth characterization embedded in concrete

    NASA Astrophysics Data System (ADS)

    Margret, M.; Menaka, M.; Venkatraman, B.; Chandrasekaran, S.

    2015-01-01

    A novel non-destructive Compton scattering technique for detecting reinforcing steel bars in concrete is described, and its feasibility, reliability, and applicability are demonstrated. The indigenously developed prototype system presented in this paper can detect reinforcement of varied diameters embedded in concrete at depths of up to 60 mm, with the aid of a Caesium-137 (137Cs) radioactive source and a high-resolution HPGe detector. The technique can also detect inhomogeneities in the test specimen by interpreting material density variations reflected in the count rate. The experimental results are correlated with established techniques such as radiography and rebar locators. The results obtained from its application to locating rebars are quite promising, and the method has also been used successfully for reinforcement mapping. It is especially applicable when the rebar lies beneath the concrete cover or at considerably larger depths, and where two-sided access is restricted.

  16. Examination of Optical Depth Effects on Fluorescence Imaging of Cardiac Propagation

    PubMed Central

    Bray, Mark-Anthony; Wikswo, John P.

    2003-01-01

    Optical mapping with voltage-sensitive dyes provides a high-resolution technique to observe cardiac electrodynamic behavior. Although most studies assume that the fluorescent signal is emitted from the surface layer of cells, the effects of signal attenuation with depth on signal interpretation are still unclear. This simulation study examines the effects of a depth-weighted signal on epicardial activation patterns and filament localization. We simulated filament behavior using a detailed cardiac model and compared the signal obtained from the top (epicardial) layer of the spatial domain with the calculated weighted signal. General observations included a prolongation of the action potential upstroke duration, early upstroke initiation, and a reduction in signal amplitude in the weighted signal. A shallow filament was found to produce a dual-humped action potential morphology consistent with previously reported observations. Simulated scroll wave breakup exhibited effects such as the false appearance of graded potentials, apparent supramaximal conduction velocities, and a spatially blurred signal with the local amplitude dependent upon the immediate subepicardial activity; the combination of these effects produced a corresponding change in the accuracy of filament localization. Our results indicate that the depth-dependent optical signal has significant consequences for the interpretation of epicardial activation dynamics. PMID:14645100

  17. Toward 1-mm depth precision with a solid state full-field range imaging system

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.

    2006-02-01

    Previously, we demonstrated a novel heterodyne based solid-state full-field range-finding imaging system. This system is comprised of modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided with 1 Hz difference between the LEDs and image intensifier. A sequence of images of the resulting beating intensifier output are captured and processed to determine phase and hence distance to the object for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision in the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
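
    The heterodyne principle above maps each pixel's phase of the 1 Hz beat signal to distance; a minimal sketch of that phase extraction and conversion, assuming an already-captured frame stack and ignoring calibration offsets, is shown below.

    ```python
    import numpy as np

    C = 299_792_458.0   # speed of light (m/s)
    F_MOD = 10e6        # LED / image-intensifier modulation frequency (Hz)
    F_BEAT = 1.0        # heterodyne beat frequency (Hz)

    def range_image(frames, frame_rate):
        """Per-pixel range from a stack (n_frames, H, W) of the beating intensifier output.

        The phase of the 1 Hz beat is recovered by correlating each pixel's time series with
        a complex exponential at the beat frequency, then converted to distance via the 10 MHz
        modulation (unambiguous range = C / (2 * F_MOD), about 15 m here).
        """
        n = frames.shape[0]
        t = np.arange(n) / frame_rate
        ref = np.exp(-2j * np.pi * F_BEAT * t)                    # reference at the beat frequency
        phase = np.angle(np.tensordot(ref, frames, axes=(0, 0)))  # (H, W) phase map
        phase = np.mod(phase, 2 * np.pi)
        return C * phase / (4 * np.pi * F_MOD)                    # metres

    # Hypothetical capture: 100 frames at 100 fps, i.e. one full beat cycle.
    frames = np.random.rand(100, 64, 64)
    distances = range_image(frames, frame_rate=100.0)
    ```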

  18. Tunable semiconductor laser at 1025-1095 nm range for OCT applications with an extended imaging depth

    NASA Astrophysics Data System (ADS)

    Shramenko, Mikhail V.; Chamorovskiy, Alexander; Lyu, Hong-Chou; Lobintsov, Andrei A.; Karnowski, Karol; Yakubovich, Sergei D.; Wojtkowski, Maciej

    2015-03-01

    A tunable semiconductor laser for the 1025-1095 nm spectral range is developed based on an InGaAs semiconductor optical amplifier and a narrow band-pass acousto-optic tunable filter in a fiber ring cavity. Mode-hop-free sweeping with tuning speeds of up to 10^4 nm/s was demonstrated. The instantaneous linewidth is in the range of 0.06-0.15 nm, side-mode suppression is up to 50 dB, and the polarization extinction ratio exceeds 18 dB. The optical power in the output single-mode fiber reaches 20 mW. The laser was used in an OCT system for imaging a contact lens immersed in a 0.5% intralipid solution. The cross-sectional image demonstrated an imaging depth of more than 5 mm.

  19. Increasing the imaging depth of coherent anti-Stokes Raman scattering microscopy with a miniature microscope objective

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Huff, Terry B.; Fu, Yan; Jia, Kevin Y.; Cheng, Ji-Xin

    2007-08-01

    A miniature objective lens with a tip diameter of 1.3 mm was used for extending the penetration depth of coherent anti-Stokes Raman scattering (CARS) microscopy. Its axial and lateral focal widths were determined to be 11.4 and 0.86 μm, respectively, by two-photon excitation fluorescence imaging of 200 nm beads at a 735 nm excitation wavelength. By inserting the lens tip into a soft gel sample, CARS images of 2 μm polystyrene beads 5 mm deep from the surface were acquired. The miniature objective was applied to CARS imaging of rat spinal cord white matter with a minimal requirement for surgery.

  20. Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: Analysis and comparison

    NASA Astrophysics Data System (ADS)

    Kazmi, Wajahat; Foix, Sergi; Alenyà, Guillem; Andersen, Hans Jørgen

    2014-02-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close-range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution but deliver accurate, high-frame-rate depth data under suitable conditions. We introduce metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of a leaf under indoor (room) and outdoor (shadow and sunlight) conditions by varying the exposure times of the sensors. The performance of three different ToF cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). The PMD CamCube has the best cancellation of sunlight, followed by the CamBoard, while the SwissRanger SR4000 performs poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high-resolution depth data but is constrained by the texture of the object and by computational cost. The graph-cut-based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive than local correlation. Finally, we propose a method to increase the dynamic range of ToF cameras for a scene involving both shadow and sunlight exposures at the same time by taking advantage of camera flags (PMD) or the confidence matrix (SwissRanger).

  1. Use of 2D images of depth and integrated reflectivity to represent the severity of demineralization in cross-polarization optical coherence tomography.

    PubMed

    Chan, Kenneth H; Chan, Andrew C; Fried, William A; Simon, Jacob C; Darling, Cynthia L; Fried, Daniel

    2015-01-01

    Several studies have demonstrated the potential of cross-polarization optical coherence tomography (CP-OCT) to quantify the severity of early caries lesions (tooth decay) on tooth surfaces. The purpose of this study is to show that 2D images of the lesion depth and the integrated reflectivity can be used to accurately represent the severity of early lesions. Simulated early lesions of varying severity were produced on tooth samples using simulated lesion models. Methods were developed to convert the 3D CP-OCT images of the samples to 2D images of the lesion depth and lesion integrated reflectivity. Calculated lesion depths from OCT were compared with lesion depths measured from histological sections examined using polarized light microscopy. The 2D images of the lesion depth and integrated reflectivity are well suited for visualization of early demineralization. PMID:24307350
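
    One simple way to collapse a 3-D CP-OCT volume into the 2-D lesion-depth and integrated-reflectivity maps described above is sketched here; the thresholding rule, array layout, and parameter values are illustrative assumptions, not the authors' published algorithm.

    ```python
    import numpy as np

    def lesion_maps(volume_db, surface_idx, threshold_db, dz_um):
        """2-D maps of lesion depth and integrated reflectivity from a CP-OCT volume.

        volume_db    : (Z, Y, X) reflectivity in dB
        surface_idx  : (Y, X) index of the tooth surface along Z for each A-scan
        threshold_db : reflectivity above which a voxel is counted as demineralized
        dz_um        : axial pixel size in micrometres
        """
        _, ny, nx = volume_db.shape
        depth = np.zeros((ny, nx))
        integ = np.zeros((ny, nx))
        for y in range(ny):
            for x in range(nx):
                ascan = volume_db[surface_idx[y, x]:, y, x]
                lesion = ascan > threshold_db
                depth[y, x] = lesion.sum() * dz_um         # lesion depth (um)
                integ[y, x] = ascan[lesion].sum() * dz_um  # integrated reflectivity (dB * um)
        return depth, integ

    # Hypothetical volume: 512 axial pixels of 4.4 um, surface found at index 20 everywhere.
    vol = np.random.rand(512, 64, 64) * 30.0
    surface = np.full((64, 64), 20, dtype=int)
    depth_map, reflectivity_map = lesion_maps(vol, surface, threshold_db=20.0, dz_um=4.4)
    ```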

  2. Common-path depth-filtered digital holography for high resolution imaging of buried semiconductor structures

    NASA Astrophysics Data System (ADS)

    Finkeldey, Markus; Schellenberg, Falk; Gerhardt, Nils C.; Paar, Christof; Hofmann, Martin R.

    2016-03-01

    We investigate digital holographic microscopy (DHM) in reflection geometry for non-destructive 3D imaging of semiconductor devices. This technique provides high-resolution information on the inner structure of a sample while maintaining its integrity. To illustrate the performance of the DHM, we use our setup to localize the precise spots for laser fault injection in the security-related field of side-channel attacks. While digital holographic microscopy techniques readily offer high-resolution phase images of surface structures in reflection geometry, they are typically incapable of providing high-quality phase images of buried structures because of the interference of waves reflected from different interfaces inside the structure. Our setup includes an sCMOS camera for image capture, arranged in a common-path interferometer to provide very high phase stability. As a proof of principle, we show sample images of the inner structure of a modern microcontroller. Finally, we compare our holographic method to classic optical beam induced current (OBIC) imaging to demonstrate its benefits.

  3. Multimodal adaptive optics for depth-enhanced high-resolution ophthalmic imaging

    NASA Astrophysics Data System (ADS)

    Hammer, Daniel X.; Mujat, Mircea; Iftimia, Nicusor V.; Lue, Niyom; Ferguson, R. Daniel

    2010-02-01

    We developed a multimodal adaptive optics (AO) retinal imager for diagnosis of retinal diseases, including glaucoma, diabetic retinopathy (DR), age-related macular degeneration (AMD), and retinitis pigmentosa (RP). The development represents the first ever high performance AO system constructed that combines AO-corrected scanning laser ophthalmoscopy (SLO) and swept source Fourier domain optical coherence tomography (SSOCT) imaging modes in a single compact clinical prototype platform. The SSOCT channel operates at a wavelength of 1 μm for increased penetration and visualization of the choriocapillaris and choroid, sites of major disease activity for DR and wet AMD. The system is designed to operate on a broad clinical population with a dual deformable mirror (DM) configuration that allows simultaneous low- and high-order aberration correction. The system also includes a wide field line scanning ophthalmoscope (LSO) for initial screening, target identification, and global orientation; an integrated retinal tracker (RT) to stabilize the SLO, OCT, and LSO imaging fields in the presence of rotational eye motion; and a high-resolution LCD-based fixation target for presentation to the subject of stimuli and other visual cues. The system was tested in a limited number of human subjects without retinal disease for performance optimization and validation. The system was able to resolve and quantify cone photoreceptors across the macula to within ~0.5 deg (~100-150 μm) of the fovea, image and delineate ten retinal layers, and penetrate to resolve targets deep into the choroid. In addition to instrument hardware development, analysis algorithms were developed for efficient information extraction from clinical imaging sessions, with functionality including automated image registration, photoreceptor counting, strip and montage stitching, and segmentation. The system provides clinicians and researchers with high-resolution, high performance adaptive optics imaging to help

  4. Real-Depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3-D, off-the-screen experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3-D graphics' technologies are actually flat on screen. Floating Images(TM) technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3-D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax (the ability to look around foreground objects to see previously hidden background objects, with each eye seeing a different view at all times) and accommodation (the need to re-focus one's eyes when shifting attention from a near object to a distant object) which coincides with convergence (the need to re-aim one's eyes when shifting attention from a near object to a distant object). Since accommodation coincides with convergence, viewing these images doesn't produce headaches, fatigue, or eye-strain, regardless of how long they are viewed (unlike stereoscopic and autostereoscopic displays). The imagery (video or computer generated) must either be formatted for the Floating Images(TM) platform when created, or existing software can be re-formatted without much difficulty.

  5. Vegetation Height Estimation Near Power transmission poles Via satellite Stereo Images using 3D Depth Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high-voltage power lines is a challenging problem for electricity distribution companies. The absence of proper monitoring could result in damage to the power lines and consequently cause blackouts, affecting the electric power supply to industries, businesses, and daily life. Therefore, to avoid blackouts, it is mandatory to monitor the vegetation/trees near power transmission lines. Unfortunately, existing approaches are time consuming and expensive. In this paper, we propose a novel approach to monitor the vegetation/trees near or under power transmission poles using satellite stereo images acquired with the Pleiades satellites. The 3D depth of the vegetation near the power transmission lines is measured using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometers. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles within this 100 km2 area. We compare the results obtained from the Pleiades satellite stereo images using dynamic programming and graph-cut algorithms, thereby comparing the imaging sensors and depth-estimation algorithms. Our results show that the graph-cut algorithm performs better than dynamic programming (DP) in terms of accuracy and speed.
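
    As a hedged illustration of the stereo-matching step (the paper applies dynamic programming and graph cuts to Pleiades stereo pairs; here OpenCV's semi-global matcher stands in, and the rectified geometry, base-to-height ratio, and ground sample distance are assumed values):

    ```python
    import cv2
    import numpy as np

    # Synthetic rectified pair: a textured patch shifted by 8 px stands in for a tree canopy.
    rng = np.random.default_rng(1)
    left = (rng.random((256, 256)) * 255).astype(np.uint8)
    right = np.roll(left, -8, axis=1)  # uniform 8-px disparity for illustration

    # Semi-global matching as a stand-in for DP / graph-cut stereo correspondence.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point x16

    # For a rectified pair, height above ground scales with disparity:
    # dh ~ (H / B) * d_disparity * GSD, with H/B the height-to-base ratio and GSD the pixel size.
    H_OVER_B, GSD = 4.0, 0.5          # assumed values (base-to-height ratio 0.25, 0.5 m pixels)
    valid = disparity > 0
    ground = np.percentile(disparity[valid], 5) if valid.any() else 0.0
    height = np.where(valid, (disparity - ground) * H_OVER_B * GSD, 0.0)
    ```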

  6. Illumination strategies to achieve effective indoor millimeter wave imaging for personnel screening applications

    NASA Astrophysics Data System (ADS)

    Doyle, Rory; Lyons, Brendan; Lettington, Alan; McEnroe, Tony; Walshe, John; McNaboe, John; Curtin, Peter

    2005-05-01

    The ability of millimetre waves (mm-waves) to penetrate obscurants, be they clothing, fog, etc., enables unique imaging applications in areas such as security screening of personnel and landing aids for aircraft. When used in an outdoor application, the natural thermal contrast provided by cold-sky reflections off objects allows direct imaging of a scene. Imaging at mm-wave frequencies in an indoor situation requires that a thermal contrast be generated in order to illuminate and detect objects of interest. In the case of a portal screening application the illumination needs to be provided over the imaged area in a uniform, omni-directional manner and at a sufficient level of contrast to achieve the desired signal-to-noise ratio at the sensor. The primary options are to generate this contrast by using active noise sources or to develop a passive thermally induced source of mm-wave energy. This paper describes the approaches taken to developing and implementing an indoor imaging configuration for a mm-wave camera that is to be used in people-screening applications. The camera uses a patented mechanical scanning method to directly generate a raster frame image of portal dimensions. Imaging has been conducted at a range of frequencies, with the main focus on 94 GHz operation. Experiences with both active and passive illumination schemes are described, with conclusions on the merits or otherwise of each. The results of imaging trials demonstrate the potential for using mm-wave imaging in an indoor situation, and example images illustrate the capability of the camera and the illumination methods when used for personnel screening.

  7. Imaging without lenses: achievements and remaining challenges of wide-field on-chip microscopy

    PubMed Central

    Greenbaum, Alon; Luo, Wei; Su, Ting-Wei; Göröcs, Zoltán; Xue, Liang; Isikman, Serhan O; Coskun, Ahmet F; Mudanyali, Onur; Ozcan, Aydogan

    2012-01-01

    We discuss unique features of lens-free computational imaging tools and report some of their emerging results for wide-field on-chip microscopy, such as the achievement of a numerical aperture (NA) of ~0.8–0.9 across a field of view (FOV) of more than 20 mm2 or an NA of ~0.1 across a FOV of ~18 cm2, which corresponds to an image with more than 1.5 gigapixels. We also discuss the current challenges that these computational on-chip microscopes face, shedding light on their future directions and applications. PMID:22936170

  8. Burn Depth Estimation Based on Infrared Imaging of Thermally Excited Tissue

    SciTech Connect

    Dickey, F.M.; Hoswade, S.C.; Yee, M.L.

    1999-03-05

    Accurate estimation of the depth of partial-thickness burns and early prediction of the need for surgical intervention are difficult. A non-invasive technique utilizing the difference in thermal relaxation time between burned and normal skin may be useful in this regard. In practice, a thermal camera would record the skin's response to heating or cooling by a small amount, roughly 5 C, for a short duration. The thermal stimulus would be provided by a heat lamp, hot or cold air, or other means. Processing of the thermal transients would reveal areas that returned to equilibrium at different rates, which should correspond to different burn depths. In deeper burns, the outer layer of skin is further removed from the constant-temperature region maintained through blood flow; deeper areas should thus return to equilibrium more slowly than other areas. Since the technique only records changes in the skin's temperature, it is not sensitive to room temperature, the burn's location, or the state of the patient. Preliminary results are presented for the analysis of a simulated burn, formed by applying a patch of biosynthetic wound dressing on top of normal skin tissue.
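
    The transient-processing step described above (finding areas that return to equilibrium at different rates) amounts to fitting a relaxation time constant per pixel; a minimal sketch, using a single-exponential model and synthetic data as assumptions, is given below.

    ```python
    import numpy as np

    def relaxation_time_map(stack, t):
        """Per-pixel thermal relaxation time from a post-stimulus image stack.

        stack : (n_frames, H, W) temperature-change images decaying toward zero
        t     : (n_frames,) acquisition times in seconds
        Assumes dT(t) = dT0 * exp(-t / tau); tau comes from a linear fit of log(dT) versus t.
        """
        eps = 1e-6
        log_dt = np.log(np.clip(stack, eps, None))
        t_mean, log_mean = t.mean(), log_dt.mean(axis=0)
        slope = ((t[:, None, None] - t_mean) * (log_dt - log_mean)).sum(axis=0) / ((t - t_mean) ** 2).sum()
        return -1.0 / slope  # tau map (s); larger tau = slower return to equilibrium

    # Synthetic example: a deeper burn (tau = 8 s) in the centre of normal skin (tau = 3 s).
    t = np.linspace(0.0, 20.0, 40)
    tau = np.full((64, 64), 3.0)
    tau[24:40, 24:40] = 8.0
    stack = 5.0 * np.exp(-t[:, None, None] / tau)
    tau_estimate = relaxation_time_map(stack, t)
    ```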

  9. Nanoscale β-nuclear magnetic resonance depth imaging of topological insulators

    PubMed Central

    Koumoulis, Dimitrios; Morris, Gerald D.; He, Liang; Kou, Xufeng; King, Danny; Wang, Dong; Hossain, Masrur D.; Wang, Kang L.; Fiete, Gregory A.; Kanatzidis, Mercouri G.; Bouchard, Louis-S.

    2015-01-01

    Considerable evidence suggests that variations in the properties of topological insulators (TIs) at the nanoscale and at interfaces can strongly affect the physics of topological materials. Therefore, a detailed understanding of surface states and interface coupling is crucial to the search for and applications of new topological phases of matter. Currently, no methods can provide depth profiling near surfaces or at interfaces of topologically inequivalent materials. Such a method could advance the study of interactions. Herein, we present a noninvasive depth-profiling technique based on β-detected NMR (β-NMR) spectroscopy of radioactive 8Li+ ions that can provide “one-dimensional imaging” in films of fixed thickness and generates nanoscale views of the electronic wavefunctions and magnetic order at topological surfaces and interfaces. By mapping the 8Li nuclear resonance near the surface and 10-nm deep into the bulk of pure and Cr-doped bismuth antimony telluride films, we provide signatures related to the TI properties and their topological nontrivial characteristics that affect the electron–nuclear hyperfine field, the metallic shift, and magnetic order. These nanoscale variations in β-NMR parameters reflect the unconventional properties of the topological materials under study, and understanding the role of heterogeneities is expected to lead to the discovery of novel phenomena involving quantum materials. PMID:26124141

  10. Lidar Waveform-Based Analysis of Depth Images Constructed Using Sparse Single-Photon Data

    NASA Astrophysics Data System (ADS)

    Altmann, Yoann; Ren, Ximing; McCarthy, Aongus; Buller, Gerald S.; McLaughlin, Steve

    2016-05-01

    This paper presents a new Bayesian model and algorithm for depth and intensity profiling using full waveforms from time-correlated single-photon counting (TCSPC) measurements in the limit of very low photon counts. The proposed model represents each Lidar waveform as a combination of a known impulse response, weighted by the target intensity, and an unknown constant background, corrupted by Poisson noise. Prior knowledge about the problem is embedded in a hierarchical model that describes the dependence structure between the model parameters and their constraints. In particular, a gamma Markov random field (MRF) is used to model the joint distribution of the target intensities, and a second MRF is used to model the distribution of the target depths, both of which are expected to exhibit significant spatial correlations. An adaptive Markov chain Monte Carlo algorithm is then proposed to compute the Bayesian estimates of interest and perform Bayesian inference. This algorithm is equipped with a stochastic optimization adaptation mechanism that automatically adjusts the parameters of the MRFs by maximum marginal likelihood estimation. Finally, the benefits of the proposed methodology are demonstrated through a series of experiments using real data.
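
    Written out in assumed notation, the observation model paraphrased above (a known impulse response scaled by the target intensity plus a constant background, under Poisson noise) is roughly:

    ```latex
    % y_{n,t}: photon count in pixel n and TCSPC time bin t
    y_{n,t} \sim \mathcal{P}\bigl( r_n \, g_0(t - t_n) + b_n \bigr)
    % g_0 : known instrumental impulse response
    % r_n : target intensity of pixel n (coupled to its neighbours by a gamma MRF prior)
    % t_n : target depth (time of flight) of pixel n (coupled by a second MRF prior)
    % b_n : unknown constant background; \mathcal{P} denotes the Poisson distribution
    ```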

  11. Thermal Images of Seeds Obtained at Different Depths by Photoacoustic Microscopy (PAM)

    NASA Astrophysics Data System (ADS)

    Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.

    2015-06-01

    The objective of the present study was to obtain thermal images of a broccoli seed (Brassica oleracea) by photoacoustic microscopy at different modulation frequencies of the incident light beam (0.5, 1, 5, and 20 Hz). The thermal images obtained from the amplitude of the photoacoustic signal vary with the applied frequency: at the lowest modulation frequency, the thermal wave penetrates deeper into the sample. Likewise, the photoacoustic signal is modified according to the structural characteristics of the sample and the modulation frequency of the incident light. Different structural components could be seen by photothermal techniques, as shown in the present study.
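
    The frequency dependence noted above follows from the thermal diffusion length, the textbook quantity that sets how deep the thermal wave probes at each modulation frequency (symbols assumed):

    ```latex
    \mu = \sqrt{\frac{\alpha}{\pi f}}
    % \alpha : thermal diffusivity of the sample (m^2 s^{-1})
    % f     : modulation frequency of the incident light beam (Hz)
    % Lower f gives a larger \mu, so the photoacoustic amplitude image samples deeper layers.
    ```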

  12. Single-pixel three-dimensional imaging with time-based depth resolution

    NASA Astrophysics Data System (ADS)

    Sun, Ming-Jie; Edgar, Matthew P.; Gibson, Graham M.; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J.

    2016-07-01

    Time-of-flight three-dimensional imaging is an important tool for applications such as object recognition and remote sensing. Conventional time-of-flight three-dimensional imaging systems frequently use a raster scanned laser to measure the range of each pixel in the scene sequentially. Here we show a modified time-of-flight three-dimensional imaging system, which can use compressed sensing techniques to reduce acquisition times, whilst distributing the optical illumination over the full field of view. Our system is based on a single-pixel camera using short-pulsed structured illumination and a high-speed photodiode, and is capable of reconstructing 128 × 128-pixel resolution three-dimensional scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, by using a compressive sampling strategy, we demonstrate continuous real-time three-dimensional video with a frame-rate up to 12 Hz. The simplicity of the system hardware could enable low-cost three-dimensional imaging devices for precision ranging at wavelengths beyond the visible spectrum.

  13. Matters of Light & Depth: Creating Memorable Images for Video, Film, & Stills through Lighting.

    ERIC Educational Resources Information Center

    Lowell, Ross

    Written for students, professionals with limited experience, and professionals who encounter lighting difficulties, this book encourages sensitivity to light in its myriad manifestations: it offers advice in creating memorable images for video, film, and stills through lighting. Chapters in the book are: (1) "Lights of Passage: Basic Theory and…

  14. Single-pixel three-dimensional imaging with time-based depth resolution.

    PubMed

    Sun, Ming-Jie; Edgar, Matthew P; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for applications such as object recognition and remote sensing. Conventional time-of-flight three-dimensional imaging systems frequently use a raster scanned laser to measure the range of each pixel in the scene sequentially. Here we show a modified time-of-flight three-dimensional imaging system, which can use compressed sensing techniques to reduce acquisition times, whilst distributing the optical illumination over the full field of view. Our system is based on a single-pixel camera using short-pulsed structured illumination and a high-speed photodiode, and is capable of reconstructing 128 × 128-pixel resolution three-dimensional scenes to an accuracy of ∼3 mm at a range of ∼5 m. Furthermore, by using a compressive sampling strategy, we demonstrate continuous real-time three-dimensional video with a frame-rate up to 12 Hz. The simplicity of the system hardware could enable low-cost three-dimensional imaging devices for precision ranging at wavelengths beyond the visible spectrum. PMID:27377197

  15. Photothermal optical coherence tomography for depth-resolved imaging of mesenchymal stem cells via single wall carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Subhash, Hrebesh M.; Connolly, Emma; Murphy, Mary; Barron, Valerie; Leahy, Martin

    2014-03-01

    The progress in stem cell research over the past decade holds promise and potential to address many unmet clinical therapeutic needs. Tracking stem cells with modern imaging modalities is critically needed for optimizing stem cell therapy, offering insight into underlying biological processes such as cell migration, engraftment, homing, differentiation, and function. In this study we report the feasibility of photothermal optical coherence tomography (PT-OCT) for imaging human mesenchymal stem cells (hMSCs) labeled with single-walled carbon nanotubes (SWNTs) for in vitro cell tracking in three-dimensional scaffolds. PT-OCT is a functional extension of conventional OCT with the extended capability of localized detection of absorbing targets against a scattering background, providing depth-resolved molecular contrast imaging. A 91 kHz line rate, spectral-domain PT-OCT system at 1310 nm was developed to detect the photothermal signal generated by an 800 nm excitation laser. In general, MSCs do not have obvious optical absorption properties and cannot be directly visualized using PT-OCT imaging. However, the optical absorption properties of hMSCs can be modified by labeling with SWNTs. Using this approach, MSCs were labeled with SWNTs and the cell distribution was imaged in a 3D polymer scaffold using PT-OCT.

  16. Quantitative, depth-resolved determination of particle motion using multi-exposure, spatial frequency domain laser speckle imaging

    PubMed Central

    Rice, Tyler B.; Kwan, Elliott; Hayakawa, Carole K.; Durkin, Anthony J.; Choi, Bernard; Tromberg, Bruce J.

    2013-01-01

    Laser Speckle Imaging (LSI) is a simple, noninvasive technique for rapid imaging of particle motion in scattering media such as biological tissue. LSI is generally used to derive only a qualitative index of relative blood flow because of the unknown impact of several variables that affect speckle contrast. These variables may include the optical absorption and scattering coefficients, multi-layer dynamics including static, non-ergodic regions, and systematic effects such as laser coherence length. In order to account for these effects and move toward quantitative, depth-resolved LSI, we have developed a method that combines Monte Carlo modeling, multi-exposure speckle imaging (MESI), spatial frequency domain imaging (SFDI), and careful instrument calibration. Monte Carlo models were used to generate total and layer-specific fractional momentum transfer distributions. This information was used to predict speckle contrast as a function of exposure time, spatial frequency, layer thickness, and layer dynamics. To verify with experimental data, controlled phantom experiments with characteristic tissue optical properties were performed using a structured-light speckle imaging system. Three main geometries were explored: 1) a diffusive dynamic layer beneath a static layer, 2) a static layer beneath a diffusive dynamic layer, and 3) directed flow (tube) submerged in a dynamic scattering layer. Data fits were performed using the Monte Carlo model, which accurately reconstructed the type of particle flow (diffusive or directed) in each layer, the layer thickness, and absolute flow speeds to within 15% or better. PMID:24409388
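
    The speckle contrast underlying LSI and MESI is computed over a small sliding window at each exposure time; a minimal sketch, assuming raw speckle frames keyed by exposure time, is shown below.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(image, window=7):
        """Local speckle contrast K = sigma / mean over a sliding window."""
        img = image.astype(np.float64)
        mean = uniform_filter(img, window)
        mean_sq = uniform_filter(img ** 2, window)
        var = np.clip(mean_sq - mean ** 2, 0.0, None)
        return np.sqrt(var) / (mean + 1e-12)

    def multi_exposure_contrast(frames_by_exposure, window=7):
        """Mean contrast of the field for each exposure time (the MESI measurement)."""
        return {t: speckle_contrast(img, window).mean() for t, img in frames_by_exposure.items()}

    # Hypothetical data: exposure time (s) -> raw speckle frame.
    frames = {t: np.random.rand(128, 128) for t in (0.5e-3, 1e-3, 5e-3, 20e-3)}
    k_vs_exposure = multi_exposure_contrast(frames)
    ```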

  17. Orientation and depth estimation for femoral components using image sensor, magnetometer and inertial sensors in THR surgeries.

    PubMed

    Jiyang Gao; Shaojie Su; Hong Chen; Zhihua Wang

    2015-08-01

    Malposition of the acetabular and femoral components has long been recognized as an important cause of dislocation after total hip replacement (THR) surgeries. In order to help surgeons improve the positioning accuracy of the components, a vision-aided system for THR surgeries that estimates the orientation and depth of the femoral component is proposed. The sensors are fixed inside the femoral prosthesis trial and checkerboard patterns are printed on the internal surface of the acetabular prosthesis trial. An extended Kalman filter is designed to fuse the data from the inertial sensors and the magnetometer for orientation estimation, and a novel image-processing algorithm for depth estimation is developed. The algorithms were evaluated in simulation with rotation quaternions and translation vectors, and the results show that the root mean square error (RMSE) of the orientation estimation is less than 0.05 degrees and the RMSE of the depth estimation is 1 mm. Finally, the femoral head is displayed in 3D graphics in real time to help surgeons with component positioning. PMID:26736858
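
    A generic predict/update skeleton of the kind of extended Kalman filter described above (gyroscope-driven prediction, accelerometer/magnetometer update) is sketched below; the state layout, noise levels, and measurement functions are assumptions, not the authors' filter.

    ```python
    import numpy as np

    class ExtendedKalmanFilter:
        """Generic EKF for x_{k+1} = f(x_k, u_k) + w,  z_k = h(x_k) + v."""

        def __init__(self, x0, P0, Q, R):
            self.x, self.P, self.Q, self.R = x0, P0, Q, R

        def predict(self, f, F_jac, u):
            self.x = f(self.x, u)              # e.g. integrate gyroscope rates into orientation
            F = F_jac(self.x, u)
            self.P = F @ self.P @ F.T + self.Q

        def update(self, z, h, H_jac):
            H = H_jac(self.x)
            y = z - h(self.x)                  # innovation, e.g. accelerometer/magnetometer residual
            S = H @ self.P @ H.T + self.R
            K = self.P @ H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(len(self.x)) - K @ H) @ self.P
    ```

    In the system described above, f would propagate the orientation state using the gyroscope rates, while h would predict the gravity and magnetic-field directions seen by the accelerometer and magnetometer; the depth estimate from the checkerboard images is produced separately by the image-processing algorithm.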

  18. Enhanced contrast and depth resolution in polarization imaging using elliptically polarized light.

    PubMed

    Sridhar, Susmita; Da Silva, Anabela

    2016-07-01

    Polarization gating is a popular and widely used technique in biomedical optics to sense superficial tissues (colinear detection), deeper volumes (crosslinear detection), and also selectively probe subsuperficial volumes (using elliptically polarized light). As opposed to the conventional linearly polarized illumination, we propose a new protocol of polarization gating that combines coelliptical and counter-elliptical measurements to selectively enhance the contrast of the images. This new method of eliminating multiple-scattered components from the images shows that it is possible to retrieve a greater signal and a better contrast for subsurface structures. In vivo experiments were performed on skin abnormalities of volunteers to confirm the results of the subtraction method and access subsurface information. PMID:26868614

  19. Depth-resolved optical imaging of transmural electrical propagation in perfused heart

    PubMed Central

    Hillman, Elizabeth M. C.; Bernus, Olivier; Pease, Emily; Bouchard, Matthew B.; Pertsov, Arkady

    2008-01-01

    We present a study of the 3-dimensional (3D) propagation of electrical waves in the heart wall using Laminar Optical Tomography (LOT). Optical imaging contrast is provided by a voltage-sensitive dye whose fluorescence reports changes in membrane potential. We examined the transmural propagation dynamics of electrical waves in the right ventricle of Langendorff-perfused rat hearts, initiated either by endocardial or epicardial pacing. 3D images were acquired at an effective frame rate of 667 Hz. We compare our experimental results to a mathematical model of electrical transmural propagation. We demonstrate that LOT can clearly resolve the direction of propagation of electrical waves within the cardiac wall, and that the observed dynamics agree well with the model of electrical propagation in rat ventricular tissue. PMID:18592044

  20. Thermal-wave radar: a novel subsurface imaging modality with extended depth-resolution dynamic range.

    PubMed

    Tabatabaei, Nima; Mandelis, Andreas

    2009-03-01

    Combining the ideas behind linear frequency modulated continuous wave radars and frequency domain photothermal radiometry (PTR), a novel PTR method is introduced. Analytical solutions to the heat diffusion problem for both opaque and transparent solids are provided. Simulations and experimental results suggest a significant improvement in the dynamic range when using the thermal-wave radar (TWR) instead of conventional PTR. A practical TWR image resolution augmentation method is proposed. PMID:19334943
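
    The radar analogy above amounts to cross-correlating the detected photothermal signal with the linear frequency-modulated excitation; a minimal matched-filtering sketch, with made-up chirp parameters and a synthetic delayed echo, is given below.

    ```python
    import numpy as np

    fs = 1_000.0                 # sampling rate (Hz)
    T = 2.0                      # chirp duration (s)
    f0, f1 = 1.0, 100.0          # start / end frequencies of the LFM chirp (Hz)
    t = np.arange(0.0, T, 1.0 / fs)
    chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t ** 2))

    # Synthetic photothermal response: an attenuated, delayed copy of the chirp plus noise.
    delay_s = 0.25
    delayed = np.concatenate([np.zeros(int(delay_s * fs)), chirp])[: t.size] * 0.3
    signal = delayed + 0.05 * np.random.randn(t.size)

    # Matched filter: cross-correlate against the reference chirp; the lag of the peak plays
    # the role of the depth-resolved delay in thermal-wave radar.
    xcorr = np.correlate(signal, chirp, mode="full")
    lags = np.arange(-t.size + 1, t.size) / fs
    estimated_delay = lags[np.argmax(xcorr)]   # ~0.25 s for this synthetic example
    ```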

  1. Multi-angle lensless digital holography for depth resolved imaging on a chip

    PubMed Central

    Su, Ting-Wei; Isikman, Serhan O.; Bishara, Waheb; Tseng, Derek; Erlinger, Anthony; Ozcan, Aydogan

    2010-01-01

    A multi-angle lensfree holographic imaging platform that can accurately characterize both the axial and lateral positions of cells located within multi-layered micro-channels is introduced. In this platform, lensfree digital holograms of the micro-objects on the chip are recorded at different illumination angles using partially coherent illumination. These digital holograms start to shift laterally on the sensor plane as the illumination angle of the source is tilted. Since the exact amount of this lateral shift of each object hologram can be calculated with an accuracy that beats the diffraction limit of light, the height of each cell from the substrate can be determined over a large field of view without the use of any lenses. We demonstrate the proof of concept of this multi-angle lensless imaging platform by using light emitting diodes to characterize various sized microparticles located on a chip with sub-micron axial and lateral localization over ~60 mm2 field of view. Furthermore, we successfully apply this lensless imaging approach to simultaneously characterize blood samples located at multi-layered micro-channels in terms of the counts, individual thicknesses and the volumes of the cells at each layer. Because this platform does not require any lenses, lasers or other bulky optical/mechanical components, it provides a compact and high-throughput alternative to conventional approaches for cytometry and diagnostics applications involving lab on a chip systems. PMID:20588819
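
    The height retrieval described above relies on the lateral shift of each object's hologram as the illumination is tilted; a minimal least-squares sketch, under the simplifying assumption of a single medium and ignoring refraction through the chamber layers, is shown below.

    ```python
    import numpy as np

    def height_from_shifts(shifts_um, angles_deg):
        """Estimate an object's height above the sensor from hologram shifts at several tilt angles.

        Assumes the simple single-medium relation shift ~ z * tan(theta); a least-squares fit
        over all illumination angles gives the height z.
        """
        tan_theta = np.tan(np.deg2rad(np.asarray(angles_deg, dtype=float)))
        shifts = np.asarray(shifts_um, dtype=float)
        return float(np.sum(shifts * tan_theta) / np.sum(tan_theta ** 2))

    # Hypothetical measurement: hologram shifts (um) of one cell at four illumination angles.
    angles = [-30.0, -15.0, 15.0, 30.0]
    shifts = [-58.0, -27.0, 27.5, 57.0]
    z_um = height_from_shifts(shifts, angles)   # ~100 um above the sensor plane
    ```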

  2. Learning the missing values in depth maps

    NASA Astrophysics Data System (ADS)

    Yin, Xuanwu; Wang, Guijin; Zhang, Chun; Liao, Qingmin

    2013-12-01

    In this paper, we consider the task of hole filling in depth maps with the help of an associated color image. We take a supervised learning approach to solve this problem. The model is learnt from the training set, which contains the pixels that have known depth values; supervised learning is then applied to predict the depth values in the holes. Our model uses a regional Markov Random Field (MRF) that incorporates multiscale absolute and relative features (computed from the color image), and models depth not only at individual points but also between adjacent points. The experiments show that the proposed approach is able to recover fairly accurate depth values and achieve a high-quality depth map.
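
    A simplified stand-in for the approach above is sketched below: the paper's regional MRF is replaced here by a per-pixel regressor on multiscale color features, purely to illustrate the supervised hole-filling idea; all names, scales, and parameters are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.ensemble import RandomForestRegressor

    def multiscale_features(color, sigmas=(0, 2, 4)):
        """Per-pixel color features blurred at several scales, stacked to shape (H, W, 3*len(sigmas))."""
        feats = [gaussian_filter(color, sigma=(s, s, 0)) for s in sigmas]
        return np.concatenate(feats, axis=2)

    def fill_depth_holes(depth, color):
        """Predict missing depth values (NaNs) from multiscale color features of the aligned image."""
        feats = multiscale_features(color.astype(np.float64))
        known = ~np.isnan(depth)
        model = RandomForestRegressor(n_estimators=50, random_state=0)
        model.fit(feats[known], depth[known])
        filled = depth.copy()
        filled[~known] = model.predict(feats[~known])
        return filled

    # Hypothetical aligned color image and depth map containing a hole.
    color = np.random.rand(120, 160, 3)
    depth = np.random.rand(120, 160)
    depth[40:60, 50:80] = np.nan
    completed = fill_depth_holes(depth, color)
    ```

    Unlike the regional MRF described above, this stand-in does not enforce consistency between adjacent pixels; that coupling is exactly what the MRF term contributes.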

  3. Achieving High Contrast for Exoplanet Imaging with a Kalman Filter and Stroke Minimization

    NASA Astrophysics Data System (ADS)

    Eldorado Riggs, A. J.; Groff, T. D.; Kasdin, N. J.; Carlotti, A.; Vanderbei, R. J.

    2014-01-01

    High contrast imaging requires focal plane wavefront control and estimation to correct aberrations in an optical system; non-common-path errors prevent the use of conventional estimation with a separate wavefront sensor. The High Contrast Imaging Laboratory (HCIL) at Princeton has led the development of several techniques for focal plane wavefront control and estimation. In recent years, we developed a Kalman filter for optimal wavefront estimation. Our Kalman filter algorithm is an improvement upon DM Diversity, which requires at least two image pairs per iteration and does not utilize any prior knowledge of the system. The Kalman filter is a recursive estimator, meaning that it uses the data from prior estimates along with as few as one new image pair per iteration to update the electric field estimate. Stroke minimization has proven to be a feasible controller for achieving high contrast. While similar to a variation of Electric Field Conjugation (EFC), stroke minimization achieves the same contrast with less stroke on the DMs. We recently utilized these algorithms to achieve high contrast for the first time in our experiment at the High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory (JPL). Our HCIT experiment was also the first demonstration of symmetric dark hole correction in the image plane using two DMs--this is a major milestone for future space missions. Our ongoing work includes upgrading our optimal estimator to include an estimate of the incoherent light in the system, which allows for simultaneous estimation of the light from a planet along with starlight. The two-DM experiment at the HCIT utilized a shaped pupil coronagraph. Those tests utilized ripple-style, free-standing masks etched out of silicon, but our current work is in designing 2-D optimized reflective shaped pupils. In particular, we have created several designs for the AFTA telescope, whose pupil presents major hurdles because of its atypical pupil obstructions. Our

  4. Direct Depth- and Lateral- Imaging of Nanoscale Magnets Generated by Ion Impact

    NASA Astrophysics Data System (ADS)

    Röder, Falk; Hlawacek, Gregor; Wintz, Sebastian; Hübner, René; Bischoff, Lothar; Lichte, Hannes; Potzger, Kay; Lindner, Jürgen; Fassbender, Jürgen; Bali, Rantej

    2015-11-01

    Nanomagnets form the building blocks for a variety of spin-transport, spin-wave and data storage devices. In this work we generated nanoscale magnets by exploiting the phenomenon of disorder-induced ferromagnetism; disorder was induced locally on a chemically ordered, initially non-ferromagnetic, Fe60Al40 precursor film using  nm diameter beam of Ne+ ions at 25 keV energy. The beam of energetic ions randomized the atomic arrangement locally, leading to the formation of ferromagnetism in the ion-affected regime. The interaction of a penetrating ion with host atoms is known to be spatially inhomogeneous, raising questions on the magnetic homogeneity of nanostructures caused by ion-induced collision cascades. Direct holographic observations of the flux-lines emergent from the disorder-induced magnetic nanostructures were made in order to measure the depth- and lateral- magnetization variation at ferromagnetic/non-ferromagnetic interfaces. Our results suggest that high-resolution nanomagnets of practically any desired 2-dimensional geometry can be directly written onto selected alloy thin films using a nano-focussed ion-beam stylus, thus enabling the rapid prototyping and testing of novel magnetization configurations for their magneto-coupling and spin-wave properties.

  5. Direct Depth- and Lateral- Imaging of Nanoscale Magnets Generated by Ion Impact

    PubMed Central

    Röder, Falk; Hlawacek, Gregor; Wintz, Sebastian; Hübner, René; Bischoff, Lothar; Lichte, Hannes; Potzger, Kay; Lindner, Jürgen; Fassbender, Jürgen; Bali, Rantej

    2015-01-01

    Nanomagnets form the building blocks for a variety of spin-transport, spin-wave and data storage devices. In this work we generated nanoscale magnets by exploiting the phenomenon of disorder-induced ferromagnetism; disorder was induced locally on a chemically ordered, initially non-ferromagnetic, Fe60Al40 precursor film using  nm diameter beam of Ne+ ions at 25 keV energy. The beam of energetic ions randomized the atomic arrangement locally, leading to the formation of ferromagnetism in the ion-affected regime. The interaction of a penetrating ion with host atoms is known to be spatially inhomogeneous, raising questions on the magnetic homogeneity of nanostructures caused by ion-induced collision cascades. Direct holographic observations of the flux-lines emergent from the disorder-induced magnetic nanostructures were made in order to measure the depth- and lateral- magnetization variation at ferromagnetic/non-ferromagnetic interfaces. Our results suggest that high-resolution nanomagnets of practically any desired 2-dimensional geometry can be directly written onto selected alloy thin films using a nano-focussed ion-beam stylus, thus enabling the rapid prototyping and testing of novel magnetization configurations for their magneto-coupling and spin-wave properties. PMID:26584789

  6. Direct Depth- and Lateral- Imaging of Nanoscale Magnets Generated by Ion Impact.

    PubMed

    Röder, Falk; Hlawacek, Gregor; Wintz, Sebastian; Hübner, René; Bischoff, Lothar; Lichte, Hannes; Potzger, Kay; Lindner, Jürgen; Fassbender, Jürgen; Bali, Rantej

    2015-01-01

    Nanomagnets form the building blocks for a variety of spin-transport, spin-wave and data storage devices. In this work we generated nanoscale magnets by exploiting the phenomenon of disorder-induced ferromagnetism; disorder was induced locally on a chemically ordered, initially non-ferromagnetic, Fe60Al40 precursor film using  nm diameter beam of Ne(+) ions at 25 keV energy. The beam of energetic ions randomized the atomic arrangement locally, leading to the formation of ferromagnetism in the ion-affected regime. The interaction of a penetrating ion with host atoms is known to be spatially inhomogeneous, raising questions on the magnetic homogeneity of nanostructures caused by ion-induced collision cascades. Direct holographic observations of the flux-lines emergent from the disorder-induced magnetic nanostructures were made in order to measure the depth- and lateral- magnetization variation at ferromagnetic/non-ferromagnetic interfaces. Our results suggest that high-resolution nanomagnets of practically any desired 2-dimensional geometry can be directly written onto selected alloy thin films using a nano-focussed ion-beam stylus, thus enabling the rapid prototyping and testing of novel magnetization configurations for their magneto-coupling and spin-wave properties. PMID:26584789

  7. Imaging widespread seismicity at midlower crustal depths beneath Long Beach, CA, with a dense seismic array: Evidence for a depth-dependent earthquake size distribution

    NASA Astrophysics Data System (ADS)

    Inbal, Asaf; Clayton, Robert W.; Ampuero, Jean-Paul

    2015-08-01

    We use a dense seismic array composed of 5200 vertical geophones to monitor microseismicity in Long Beach, California. Poor signal-to-noise ratio due to anthropogenic activity is mitigated via downward-continuation of the recorded wavefield. The downward-continued data are continuously back projected to search for coherent arrivals from sources beneath the array, which reveals numerous, previously undetected events. The spatial distribution of seismicity is uncorrelated with the mapped fault traces, or with activity in the nearby oil-fields. Many events are located at depths larger than 20 km, well below the commonly accepted seismogenic depth for that area. The seismicity exhibits temporal clustering consistent with Omori's law, and its size distribution obeys the Gutenberg-Richter relation above 20 km but falls off exponentially at larger depths. The dense array allows detection of earthquakes two magnitude units smaller than the permanent seismic network in the area. Because the event size distribution above 20 km depth obeys a power law whose exponent is near one, this improvement yields a hundred-fold decrease in the time needed for effective characterization of seismicity in Long Beach.
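
    The hundred-fold figure follows directly from the Gutenberg-Richter relation: if the size distribution is a power law with exponent (b-value) near one, lowering the detection threshold by ΔM magnitude units multiplies the number of observable events, and hence shortens the time needed to sample the same seismicity, by 10^(b·ΔM). A one-line sketch:

      def rate_gain(delta_magnitude, b_value=1.0):
          # Gutenberg-Richter: factor by which the detectable event rate grows
          # when the completeness magnitude drops by delta_magnitude units.
          return 10.0 ** (b_value * delta_magnitude)

      print(rate_gain(2.0))   # 100.0 -> the hundred-fold decrease quoted above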

  8. Real-time imaging systems' combination of methods to achieve automatic target recognition

    NASA Astrophysics Data System (ADS)

    Maraviglia, Carlos G.; Williams, Elmer F.; Pezzulich, Alan Z.

    1998-03-01

    Using a combination of strategies, real-time imaging weapons systems are achieving their goals of detecting their intended targets. Acquiring a target in a cluttered environment, in a timely manner and with a high degree of confidence, currently demands that compromises be made regarding a truly automatic system. Techniques such as dedicated image processing hardware, real-time operating systems, mixes of algorithmic methods, and multi-sensor detectors foreshadow the potential of future weapons systems and their incorporation into truly autonomous target acquisition. Elements such as position information, sensor gain controls, waymarks for mid-course correction, and augmentation with different imaging spectra, as well as future capabilities such as neural-net expert systems and decision processors overseeing a fusion matrix architecture, may be considered tools for a weapon system's achievement of its ultimate goal. For now it is necessary to include a human in the track decision loop, a system feature that may be long lived. Automatic track recognition will still be the desired goal in future systems because of the variability of military missions and the desirability of an expendable asset. Furthermore, with the increasing incorporation of multi-sensor information into the track decision, the human element's real-time contribution must be carefully engineered.

  9. On evaluation of depth accuracy in consumer depth sensors

    NASA Astrophysics Data System (ADS)

    Abd Aziz, Azim Zaliha; Wei, Hong; Ferryman, James

    2015-12-01

    This paper presents an experimental study of different depth sensors. The aim is to answer the question of whether these sensors give accurate data for general depth image analysis. The study examines the depth accuracy of three popularly used depth sensors: the ASUS Xtion Pro Live, the Kinect for Xbox 360 and the Kinect for Windows v2. The main attention is on the stability of pixels in the depth image captured at several different sensor-object distances, measured as the depth returned by the sensors within specified time intervals. The experimental results show that the fluctuation (mm) of randomly selected pixels within the target area increases with increasing distance to the sensor, especially for the Kinect for Xbox 360 and the ASUS Xtion Pro Live. Both of these sensors show pixel fluctuations between 20 mm and 30 mm at sensor-object distances beyond 1500 mm. The pixel stability of the Kinect for Windows v2, however, is not affected much by the distance between the sensor and the object: the maximum fluctuation across all selected pixels is approximately 5 mm at sensor-object distances between 800 mm and 3000 mm. The best stability is therefore achieved at the optimal distance.
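
    A minimal sketch of the stability measure described above, assuming the repeated depth frames of a static target are stacked into a single array (the array shape and names are illustrative): the per-pixel fluctuation is simply the max-min range of the returned depth over time.

      import numpy as np

      def pixel_fluctuation(depth_stack_mm):
          # depth_stack_mm: (n_frames, height, width) depths (mm) of a static
          # target captured repeatedly at a fixed sensor-object distance.
          return depth_stack_mm.max(axis=0) - depth_stack_mm.min(axis=0)

      # Example with simulated noise on a flat target at 1500 mm
      frames = 1500 + np.random.normal(0, 5, size=(100, 120, 160))
      print(np.median(pixel_fluctuation(frames)))   # typical per-pixel range (mm)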

  10. Nanoscopy—imaging life at the nanoscale: a Nobel Prize achievement with a bright future

    NASA Astrophysics Data System (ADS)

    Blom, Hans; Bates, Mark

    2015-10-01

    A grand scientific prize was awarded last year to three pioneering scientists, for their discovery and development of molecular ‘ON-OFF’ switching which, when combined with optical imaging, can be used to see the previously invisible with light microscopy. The Royal Swedish Academy of Science announced on October 8th their decision and explained that this achievement—rooted in physics and applied in biology and medicine—was awarded with the Nobel Prize in Chemistry for controlling fluorescent molecules to create images of specimens smaller than anything previously observed with light. The story of how this noble switch in optical microscopy was achieved and how it was engineered to visualize life at the nanoscale is highlighted in this invited comment.

  11. Using a depth-sensing infrared camera system to access and manipulate medical imaging from within the sterile operating field

    PubMed Central

    Strickland, Matt; Tremaine, Jamie; Brigley, Greg; Law, Calvin

    2013-01-01

    Background As surgical procedures become increasingly dependent on equipment and imaging, the need for sterile members of the surgical team to have unimpeded access to the nonsterile technology in their operating room (OR) is of growing importance. To our knowledge, our team is the first to use an inexpensive infrared depth-sensing camera (a component of the Microsoft Kinect) and software developed in-house to give surgeons a touchless, gestural interface with which to navigate their picture archiving and communication systems intraoperatively. Methods The system was designed and developed with feedback from surgeons and OR personnel and with consideration of the principles of aseptic technique and gestural controls in mind. Simulation was used for basic validation before trialing in a pilot series of 6 hepatobiliary-pancreatic surgeries. Results The interface was used extensively in 2 laparoscopic and 4 open procedures. Surgeons primarily used the system for anatomic correlation, real-time comparison of intraoperative ultrasound with preoperative computed tomography and magnetic resonance imaging scans and for teaching residents and fellows. Conclusion The system worked well in a wide range of lighting conditions and procedures. It led to a perceived increase in the use of intraoperative image consultation. Further research should be focused on investigating the usefulness of touchless gestural interfaces in different types of surgical procedures and its effects on operative time. PMID:23706851

  12. Linear Dispersion Relation and Depth Sensitivity to Swell Parameters: Application to Synthetic Aperture Radar Imaging and Bathymetry

    PubMed Central

    Boccia, Valentina; Renga, Alfredo; Rufino, Giancarlo; D'Errico, Marco; Moccia, Antonio; Aragno, Cesare; Zoffoli, Simona

    2015-01-01

    Long gravity waves, or swell, dominating the sea surface are known to be very useful for estimating seabed morphology in coastal areas. The paper reviews the main phenomena related to swell wave propagation that allow seabed morphology to be sensed. The linear dispersion relation is analysed and an error budget model is developed to assess the achievable depth accuracy when Synthetic Aperture Radar (SAR) data are used. The relevant issues and potentials of swell-based bathymetry by SAR are identified and discussed. This technique is of particular interest for characteristic regions of the Mediterranean Sea, such as gulfs and relatively closed areas, where traditional SAR-based bathymetric techniques, relying on strong tidal currents, are of limited practical utility. PMID:25789333
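
    The linear dispersion relation referred to above is ω² = g·k·tanh(k·h), which ties the swell wavenumber k (observable in SAR imagery) and angular frequency ω to the local depth h. A minimal inversion sketch, assuming the wavelength and period are already known and the swell is not in deep water (otherwise no finite depth satisfies the relation):

      import numpy as np
      from scipy.optimize import brentq

      G = 9.81  # gravitational acceleration, m/s^2

      def depth_from_swell(wavelength_m, period_s):
          # Solve g*k*tanh(k*h) - omega^2 = 0 for the water depth h.
          k = 2 * np.pi / wavelength_m
          omega = 2 * np.pi / period_s
          f = lambda h: G * k * np.tanh(k * h) - omega ** 2
          return brentq(f, 0.01, 1000.0)

      print(depth_from_swell(120.0, 10.0))   # ~19 m for a 120 m, 10 s swell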

  13. Linear dispersion relation and depth sensitivity to swell parameters: application to synthetic aperture radar imaging and bathymetry.

    PubMed

    Boccia, Valentina; Renga, Alfredo; Rufino, Giancarlo; D'Errico, Marco; Moccia, Antonio; Aragno, Cesare; Zoffoli, Simona

    2015-01-01

    Long gravity waves, or swell, dominating the sea surface are known to be very useful for estimating seabed morphology in coastal areas. The paper reviews the main phenomena related to swell wave propagation that allow seabed morphology to be sensed. The linear dispersion relation is analysed and an error budget model is developed to assess the achievable depth accuracy when Synthetic Aperture Radar (SAR) data are used. The relevant issues and potentials of swell-based bathymetry by SAR are identified and discussed. This technique is of particular interest for characteristic regions of the Mediterranean Sea, such as gulfs and relatively closed areas, where traditional SAR-based bathymetric techniques, relying on strong tidal currents, are of limited practical utility. PMID:25789333

  14. Click-assembled, oxygen sensing nanoconjugates for depth-resolved, near-infrared imaging in a 3D cancer model

    PubMed Central

    Nichols, Alexander J.; Roussakis, Emmanuel; Klein, Oliver J.

    2014-01-01

    Hypoxia is an important factor that contributes to the development of drug-resistant cancer, yet few non-perturbative tools exist for studying oxygen in tissue. While progress has been made in the development of chemical probes for optical oxygen mapping, penetration into poorly perfused or avascular tumor regions remains problematic. Here we report a Click-Assembled Oxygen Sensing (CAOS) nanoconjugate and demonstrate its properties in an in vitro 3D spheroid cancer model. Our synthesis relies on sequential click-based ligation of poly(amidoamine)-like subunits for rapid assembly. Using near-infrared confocal phosphorescence microscopy, we demonstrate the ability of CAOS nanoconjugates to penetrate hundreds of microns into spheroids within hours and show their sensitivity to oxygen changes throughout the nodule. This proof-of-concept study demonstrates a modular approach that is readily extensible to a wide variety of oxygen and cellular sensors for depth-resolved imaging in tissue and tissue models. PMID:24590700

  15. Seismic imaging of the Waltham Canyon fault, California: comparison of ray‐theoretical and Fresnel volume prestack depth migration

    USGS Publications Warehouse

    Bauer, Klaus; Ryberg, Trond; Fuis, Gary S.; Lüth, Stefan

    2013-01-01

    Near‐vertical faults can be imaged using reflected refractions identified in controlled‐source seismic data. Often these phases are observed on a few neighboring shot or receiver gathers, resulting in a low‐fold data set. Imaging can be carried out with Kirchhoff prestack depth migration in which migration noise is suppressed by constructive stacking of large amounts of multifold data. Fresnel volume migration can be used for low‐fold data without severe migration noise, as the smearing along isochrones is limited to the first Fresnel zone around the reflection point. We developed a modified Fresnel volume migration technique to enhance imaging of steep faults and to suppress noise and undesired coherent phases. The modifications include target‐oriented filters to separate reflected refractions from steep‐dipping faults and reflections with hyperbolic moveout. Undesired phases like multiple reflections, mode conversions, direct P and S waves, and surface waves are suppressed by these filters. As an alternative approach, we developed a new prestack line‐drawing migration method, which can be considered as a proxy to an infinite frequency approximation of the Fresnel volume migration. The line‐drawing migration does not consider waveform information but requires significantly shorter computational time. Target‐oriented filters were extended by dip filters in the line‐drawing migration method. The migration methods were tested with synthetic data and applied to real data from the Waltham Canyon fault, California. The two techniques are applied best in combination, to design filters and to generate complementary images of steep faults.

  16. Magnetic Resonance Imaging (MRI) Analysis of Fibroid Location in Women Achieving Pregnancy After Uterine Artery Embolization

    SciTech Connect

    Walker, Woodruff J.; Bratby, Mark John

    2007-09-15

    The purpose of this study was to evaluate the fibroid morphology in a cohort of women achieving pregnancy following treatment with uterine artery embolization (UAE) for symptomatic uterine fibroids. A retrospective review of magnetic resonance imaging (MRI) of the uterus was performed to assess pre-embolization fibroid morphology. Data were collected on fibroid size, type, and number and included analysis of follow-up imaging to assess response. There have been 67 pregnancies in 51 women, with 40 live births. Intramural fibroids were seen in 62.7% of the women (32/48). Of these the fibroids were multiple in 16. A further 12 women had submucosal fibroids, with equal numbers of types 1 and 2. Two of these women had coexistent intramural fibroids. In six women the fibroids could not be individually delineated and formed a complex mass. All subtypes of fibroid were represented in those subgroups of women achieving a live birth versus those who did not. These results demonstrate that the location of uterine fibroids did not adversely affect subsequent pregnancy in the patient population investigated. Although this is only a small qualitative study, it does suggest that all types of fibroids treated with UAE have the potential for future fertility.

  17. An Evaluation of Effects of Different Mydriatics on Choroidal Thickness by Examining Anterior Chamber Parameters: The Scheimpflug Imaging and Enhanced Depth Imaging-OCT Study

    PubMed Central

    Yuvacı, İsa; Pangal, Emine; Yuvacı, Sümeyra; Bayram, Nurettin; Ataş, Mustafa; Başkan, Burhan; Demircan, Süleyman; Akal, Ali

    2015-01-01

    Aim. To assess the effects of mydriatics commonly used in clinical practice on choroidal thickness and anterior chamber change. Methods. This was a prospective, randomized, controlled, double-blinded study including a single eye of the participants. The subjects were assigned into 4 groups to receive tropicamide 1%, phenylephrine 2.5%, cyclopentolate 1%, and artificial tears. At the baseline, anterior chamber parameters were assessed using a Pentacam Scheimpflug camera system, and choroidal thickness (CT) was measured using a spectral-domain OCT with Enhanced Depth Imaging (EDI) modality. All measurements were repeated again after drug administration. Results. Increases in pupil diameter, volume, and depth of anterior chamber were found to be significant (p = 0.000, p = 0.000, and p = 0.000, resp.), while decreases in the choroidal thickness were found to be significant in subjects receiving mydriatics (p < 0.05). Conclusions. The study has shown that while cyclopentolate, tropicamide, and phenylephrine cause a decrease in choroidal thickness, they also lead to an increase in the volume and depth of anterior chamber. However, no correlation was detected between anterior chamber parameters and choroidal changes after drug administration. These findings suggest that the mydriatics may affect the choroidal thickness regardless of anterior chamber parameters. This study was registered with trial registration number 2014/357. PMID:26509080

  18. Time-lapse imaging of fault properties at seismogenic depth using repeating earthquakes, active sources and seismic ambient noise

    NASA Astrophysics Data System (ADS)

    Cheng, Xin

    2009-12-01

    The time-varying stress field of fault systems at seismogenic depths plays the most important role in controlling the sequencing and nucleation of seismic events. Using seismic observations from repeating earthquakes, controlled active sources and seismic ambient noise, five studies at four different fault systems across North America, Central Japan, North and mid-West China are presented to describe our efforts to measure such time dependent structural properties. Repeating and similar earthquakes are hunted and analyzed to study the post-seismic fault relaxation at the aftershock zone of the 1984 M 6.8 western Nagano and the 1976 M 7.8 Tangshan earthquakes. The lack of observed repeating earthquakes at western Nagano is attributed to the absence of a well developed weak fault zone, suggesting that the fault damage zone has been almost completely healed. In contrast, the high percentage of similar and repeating events found at Tangshan suggests the existence of mature fault zones characterized by stable creep under steady tectonic loading. At the Parkfield region of the San Andreas Fault, repeating earthquake clusters and chemical explosions are used to construct a scatterer migration image based on the observation of systematic temporal variations in the seismic waveforms across the occurrence time of the 2004 M 6 Parkfield earthquake. Coseismic fluid charge or discharge in fractures caused by the Parkfield earthquake is used to explain the observed change in seismic scattering properties at depth. In the same region, a controlled source cross-well experiment conducted at the SAFOD pilot and main holes documents two large excursions in the travel time required for a shear wave to travel through the rock along a fixed pathway shortly before two rupture events, suggesting that they may be related to pre-rupture stress induced changes in crack properties. At central China, a tomographic inversion based on the theory of seismic ambient noise and coda wave interferometry

  19. Tracking Achievement Gaps and Assessing the Impact of NCLB on the Gaps: An In-Depth Look into National and State Reading and Math Outcome Trends

    ERIC Educational Resources Information Center

    Lee, Jaekyung

    2006-01-01

    This study offers systematic trend analyses of NAEP national and state-level public school fourth and eighth graders' reading and math achievement results during pre-NCLB (1990-2001) and post-NCLB (2002-2005) periods. It compares post-NCLB trends in reading and math achievement with pre-NCLB trends among different racial and socioeconomic groups…

  20. Coupling sky images with three-dimensional radiative transfer models: a new method to estimate cloud optical depth

    NASA Astrophysics Data System (ADS)

    Mejia, F. A.; Kurtz, B.; Murray, K.; Hinkelman, L. M.; Sengupta, M.; Xie, Y.; Kleissl, J.

    2015-10-01

    A method for retrieving cloud optical depth (τc) using a ground-based sky imager (USI) is presented. The Radiance Red-Blue Ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a 3-D Radiative Transfer Model (3DRTM). From these images the basic parameters affecting the radiance and RBR of a pixel are identified as the solar zenith angle (θ0), τc, solar pixel angle/scattering angle (ϑs), and pixel zenith angle/view angle (ϑz). The effects of these parameters are described and the functions for radiance, Iλ(τc, θ0, ϑs, ϑz), and the red-blue ratio, RBR(τc, θ0, ϑs, ϑz), are retrieved from the 3DRTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc, where RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured Iλmeas(ϑs, ϑz), in addition to RBRmeas(ϑs, ϑz), to obtain a unique solution for τc. The RRBR method is applied to images taken by a USI at the Oklahoma Atmospheric Radiation Measurement program (ARM) site over the course of 220 days and validated against measurements from a microwave radiometer (MWR), output from the Min method for overcast skies, and τc retrieved by Beer's law from direct normal irradiance (DNI) measurements. A τc RMSE of 5.6 between the Min method and the USI is observed. The MWR and USI have an RMSE of 2.3, which is well within the uncertainty of the MWR. An RMSE of 0.95 between the USI and DNI retrieved τc is observed. The procedure developed here provides a foundation to test and develop other cloud detection algorithms.
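
    One of the reference retrievals above is the Beer's-law estimate from DNI; a minimal sketch, assuming a clear-sky DNI reference value is available (the variable names are illustrative):

      import numpy as np

      def tau_from_dni(dni_measured, dni_clear, solar_zenith_deg):
          # Beer's law along the direct beam: DNI = DNI_clear * exp(-tau/cos(theta0)),
          # so tau = -cos(theta0) * ln(DNI / DNI_clear).
          mu0 = np.cos(np.radians(solar_zenith_deg))
          return -mu0 * np.log(dni_measured / dni_clear)

      print(tau_from_dni(200.0, 900.0, 30.0))   # ~1.3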

  1. Validation of snow depth reconstruction from lapse-rate webcam images against terrestrial laser scanner measurements in the central Pyrenees

    NASA Astrophysics Data System (ADS)

    Revuelto, Jesús; Jonas, Tobias; López-Moreno, Juan Ignacio

    2015-04-01

    Snow distribution in mountain areas plays a key role in many processes such as runoff dynamics, ecological cycles and erosion rates. Nevertheless, the acquisition of high resolution snow depth (SD) data in space and time is a complex task that requires remote sensing techniques such as Terrestrial Laser Scanning (TLS). Such techniques require intense field work to capture high quality snowpack evolution during a specific time period. Combining TLS data with other remote sensing techniques (satellite images, photogrammetry…) and in-situ measurements could improve the available information on a variable with rapid topographic changes. The aim of this study is to reconstruct daily SD distribution from lapse-rate images from a webcam and data from two to three TLS acquisitions during the snow melting periods of 2012, 2013 and 2014. This information is obtained at the Izas experimental catchment in the central Spanish Pyrenees, a catchment of 33 ha with an elevation ranging from 2050 to 2350 m a.s.l. The lapse-rate images provide the Snow Covered Area (SCA) evolution at the study site, while TLS provides high resolution information on the SD distribution. With ground control points, the lapse-rate images are georectified and their information is rasterized onto a 1-meter resolution Digital Elevation Model. Subsequently, for each snow season, the Melt-Out Date (MOD) of each pixel is obtained. The reconstruction adds the estimated SD loss for each time step (day) in a distributed manner, starting the reconstruction for each grid cell at the MOD (note the reverse time evolution). To do so, the reconstruction has been previously adjusted in time and space as follows. Firstly, the degree day factor (SD loss per unit of positive average temperature) is calculated from the information measured at an automatic weather station (AWS) located in the catchment. Afterwards, comparing the SD loss at the AWS during a specific time period (i.e. between two TLS
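
    A minimal sketch of the degree-day reconstruction step described above, assuming daily mean air temperatures and a constant degree-day factor (both illustrative): walking backwards from the melt-out date, each day's snow depth is the melt accumulated from that day until melt-out.

      import numpy as np

      def reconstruct_snow_depth(daily_mean_temp_c, degree_day_factor_mm):
          # Daily melt = factor * positive mean temperature; the depth on a given
          # day equals the total melt from that day until the melt-out date.
          melt = degree_day_factor_mm * np.maximum(daily_mean_temp_c, 0.0)
          return np.cumsum(melt[::-1])[::-1]

      temps = np.array([2.0, 5.0, 4.0, 6.0, 3.0])   # daily means up to melt-out
      print(reconstruct_snow_depth(temps, 4.0))      # [80. 72. 52. 36. 12.] mm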

  2. Adaptive Neuro-Fuzzy Inference System (ANFIS)-Based Models for Predicting the Weld Bead Width and Depth of Penetration from the Infrared Thermal Image of the Weld Pool

    NASA Astrophysics Data System (ADS)

    Subashini, L.; Vasudevan, M.

    2012-02-01

    Type 316 LN stainless steel is the major structural material used in the construction of nuclear reactors. Activated flux tungsten inert gas (A-TIG) welding has been developed to increase the depth of penetration because the depth of penetration achievable in single-pass TIG welding is limited. Real-time monitoring and control of weld processes is gaining importance because of the requirement of remote welding process technologies. Hence, it is essential to develop computational methodologies based on an adaptive neuro fuzzy inference system (ANFIS) or artificial neural network (ANN) for predicting and controlling the depth of penetration and weld bead width during A-TIG welding of type 316 LN stainless steel. In the current work, A-TIG welding experiments have been carried out on 6-mm-thick plates of 316 LN stainless steel by varying the welding current. During welding, infrared (IR) thermal images of the weld pool have been acquired in real time, and the features have been extracted from the IR thermal images of the weld pool. The welding current values, along with the extracted features such as length, width of the hot spot, thermal area determined from the Gaussian fit, and thermal bead width computed from the first derivative curve, were used as inputs, whereas the measured depth of penetration and weld bead width were used as outputs of the respective models. Accurate ANFIS models have been developed for predicting the depth of penetration and the weld bead width during TIG welding of 6-mm-thick 316 LN stainless steel plates. A good correlation between the measured and predicted values of weld bead width and depth of penetration was observed in the developed models. The performance of the ANFIS models is compared with that of the ANN models.

  3. Quantitative comparison of contrast and imaging depth of ultrahigh-resolution optical coherence tomography images in 800–1700 nm wavelength region

    PubMed Central

    Ishida, Shutaro; Nishizawa, Norihiko

    2012-01-01

    We investigated the wavelength dependence of imaging depth and clearness of structure in ultrahigh-resolution optical coherence tomography over a wide wavelength range. We quantitatively compared the optical properties of samples using supercontinuum sources at five wavelengths, 800 nm, 1060 nm, 1300 nm, 1550 nm, and 1700 nm, with the same system architecture. For samples of industrially used homogeneous materials with low water absorption, the attenuation coefficients of the samples were fitted using Rayleigh scattering theory. We confirmed that the systems with the longer-wavelength sources had lower scattering coefficients and less dependence on the sample materials. For a biomedical sample, we observed wavelength dependence of the attenuation coefficient, which can be explained by absorption by water and hemoglobin. PMID:22312581

  4. Analysis of the Resistivity Imaging Results Conducted Over Karst Voids in Klucze Using Depth of Investigation Index

    NASA Astrophysics Data System (ADS)

    Olga, Krajewska; Glazer, Michał; Jolanta, Pierwoła

    2014-09-01

    Exploratory works related to the Gieńkówka cave, conducted by the "Olkusz" Speleological Club, have led only to its partial opening. There are indications that this cave continues beyond its currently accessible parts. In order to verify those assumptions, the resistivity imaging method was used. During analysis of the resistivity models obtained from field measurements, a synthetic model simulating the intersection of the cave corridor was utilized. In order to assess the reliability of the resistivity cross sections with respect to artifacts left by the inversion process, the Depth of Investigation (DOI) index was applied. To prepare the DOI maps, two inversions of the same data set were carried out using different reference models, and the results were compared to each other. High resistivity anomalies revealed in the obtained models show a strong correlation with actual caves known in this area. In addition, similar anomalies were found at the location of the predicted continuation of the Gieńkówka cave, thus confirming the hypothesis made in this research. High DOI index values at the locations of the caves point to instability of the inversion process in those areas.
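
    A common formulation of the DOI index used in this kind of analysis compares two inversions of the same data obtained with different homogeneous reference models; a per-cell sketch (the normalization by the reference contrast is an assumption of this simple variant):

      import numpy as np

      def doi_index(model_a, model_b, ref_a, ref_b):
          # model_a, model_b: inverted resistivity models (e.g. log10 ohm.m) from
          # the same data but different reference values ref_a, ref_b.
          # ~0 where the data constrain the cell, ~1 where the cell just follows
          # the reference model and is therefore poorly resolved.
          return np.abs(model_a - model_b) / abs(ref_a - ref_b)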

  5. Active probing of cloud multiple scattering, optical depth, vertical thickness, and liquid water content using wide-angle imaging lidar

    NASA Astrophysics Data System (ADS)

    Love, Steven P.; Davis, Anthony B.; Rohde, Charles A.; Tellier, Larry; Ho, Cheng

    2002-09-01

    At most optical wavelengths, laser light in a cloud lidar experiment is not absorbed but merely scattered out of the beam, eventually escaping the cloud via multiple scattering. There is much information available in this light scattered far from the input beam, information ignored by traditional 'on-beam' lidar. Monitoring these off-beam returns in a fully space- and time-resolved manner is the essence of our unique instrument, Wide Angle Imaging Lidar (WAIL). In effect, WAIL produces wide-field (60-degree full-angle) 'movies' of the scattering process and records the cloud's radiative Green functions. A direct data product of WAIL is the distribution of photon path lengths resulting from multiple scattering in the cloud. Following insights from diffusion theory, we can use the measured Green functions to infer the physical thickness and optical depth of the cloud layer, and, from there, estimate the volume-averaged liquid water content. WAIL is notable in that it is applicable to optically thick clouds, a regime in which traditional lidar is reduced to ceilometry. Here we present recent WAIL data on various clouds and discuss the extension of WAIL to full diurnal monitoring by means of an ultra-narrow magneto-optic atomic line filter for daytime measurements.

  6. Determination of hydrogen diffusion coefficients in F82H by hydrogen depth profiling with a tritium imaging plate technique

    SciTech Connect

    Higaki, M.; Otsuka, T.; Hashizume, K.; Tokunaga, K.; Ezato, K.; Suzuki, S.; Enoeda, M.; Akiba, M.

    2015-03-15

    Hydrogen diffusion coefficients in a reduced activation ferritic/martensitic steel (F82H) and an oxide dispersion strengthened F82H (ODS-F82H) have been determined from depth profiles of plasma-loaded hydrogen with a tritium imaging plate technique (TIPT) in the temperature range from 298 K to 523 K. Data on hydrogen diffusion coefficients, D, in F82H are summarized as D [m^2 s^-1] = 1.1×10^-7 exp(-16 [kJ mol^-1]/RT). The present data indicate almost no trapping effect on hydrogen diffusion due to an excess entry of energetic hydrogen by the plasma loading, which results in saturation of the trapping sites at the surface and even in the bulk. In the case of ODS-F82H, the hydrogen diffusion coefficients are summarized as D [m^2 s^-1] = 2.2×10^-7 exp(-30 [kJ mol^-1]/RT), indicating a remarkable trapping effect on hydrogen diffusion caused by tiny oxide particles (Y2O3) in the bulk of F82H. Such oxide particles introduced in the bulk may play an effective role not only in the enhancement of mechanical strength but also in the suppression of hydrogen penetration by plasma loading.
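
    The two expressions above are ordinary Arrhenius laws, so evaluating them at a given temperature is straightforward; a small sketch (the temperature value is chosen arbitrarily within the reported 298-523 K range):

      import numpy as np

      R = 8.314  # gas constant, J mol^-1 K^-1

      def diffusion_coefficient(d0_m2_s, ea_kj_mol, temperature_k):
          # Arrhenius form D = D0 * exp(-Ea / (R*T)).
          return d0_m2_s * np.exp(-ea_kj_mol * 1e3 / (R * temperature_k))

      print(diffusion_coefficient(1.1e-7, 16.0, 400.0))   # F82H at 400 K
      print(diffusion_coefficient(2.2e-7, 30.0, 400.0))   # ODS-F82H at 400 K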

  7. Non-invasive depth profile imaging of the stratum corneum using confocal Raman microscopy: first insights into the method.

    PubMed

    Ashtikar, Mukul; Matthäus, Christian; Schmitt, Michael; Krafft, Christoph; Fahr, Alfred; Popp, Jürgen

    2013-12-18

    The stratum corneum is a strong barrier that must be overcome to achieve successful transdermal delivery of a pharmaceutical agent. Many strategies have been developed to enhance permeation through this barrier. Traditionally, drug penetration through the stratum corneum is evaluated by employing tape-stripping protocols and measuring the content of the analyte. Although effective, this method cannot provide detailed information regarding the penetration pathways. To address this issue, various microscopic techniques have been employed. Raman microscopy offers the advantage of label free imaging and provides spectral information regarding the chemical integrity of the drug as well as the tissue. In this paper we present a relatively simple method to obtain XZ-Raman profiles of human stratum corneum using confocal Raman microscopy on intact full thickness skin biopsies. The spectral datasets were analysed using a spectral unmixing algorithm. The spectral information obtained highlights the different components of the tissue and the presence of the drug. We present Raman images of untreated skin and diffusion patterns for deuterated water and beta-carotene after a Franz-cell diffusion experiment. PMID:23764946

  8. Femininity, Masculinity, and Body Image Issues among College-Age Women: An In-Depth and Written Interview Study of the Mind-Body Dichotomy

    ERIC Educational Resources Information Center

    Leavy, Patricia; Gnong, Andrea; Ross, Lauren Sardi

    2009-01-01

    In this article we investigate college-age women's body image issues in the context of dominant femininity and its polarization of the mind and body. We use original data collected through seven in-depth interviews and 32 qualitative written interviews with college-age women and men. We coded the data thematically applying feminist approaches to…

  9. Achievements in scientific photography. Volume 28 - The optical image and recording media

    NASA Astrophysics Data System (ADS)

    Chibisov, K. V.

    Papers are presented on such topics as the properties of optical data recording systems, image quality, image processing, and recording media. Particular consideration is given to mathematical models for the formation of optical images; trends in the development of quality criteria for photographic systems; hybrid optoelectronic systems of image processing; and photothermoplastic recording media.

  10. Exploring the effects of landscape structure on aerosol optical depth (AOD) patterns using GIS and HJ-1B images.

    PubMed

    Ye, Luping; Fang, Linchuan; Tan, Wenfeng; Wang, Yunqiang; Huang, Yu

    2016-02-01

    A GIS approach and HJ-1B images were employed to determine the effect of landscape structure on aerosol optical depth (AOD) patterns. Landscape metrics, fractal analysis and contribution analysis were proposed to quantitatively illustrate the impact of land use on AOD patterns. The high correlation between the mean AOD and landscape metrics indicates that both the landscape composition and spatial structure affect the AOD pattern. Additionally, the fractal analysis demonstrated that the densities of built-up areas and bare land decreased from the high AOD centers to the outer boundary, but those of water and forest increased. These results reveal that the built-up area is the main positive contributor to air pollution, followed by bare land. Although bare land had a high AOD, it made a limited contribution to regional air pollution due to its small spatial extent. The contribution analysis further elucidated that built-up areas and bare land can increase air pollution more strongly in spring than in autumn, whereas forest and water have a completely opposite effect. Based on fractal and contribution analyses, the different effects of cropland are ascribed to the greater vegetation coverage from farming activity in spring than in autumn. The opposite effect of cropland on air pollution reveals that green coverage and human activity also influence AOD patterns. Given that serious concerns have been raised regarding the effects of built-up areas, bare land and agricultural air pollutant emissions, this study will add fundamental knowledge of the understanding of the key factors influencing urban air quality. PMID:26766513

  11. Operational Retrieval of aerosol optical depth over Indian subcontinent and Indian Ocean using INSAT-3D/Imager product validation

    NASA Astrophysics Data System (ADS)

    Mishra, M. K.; Rastogi, G.; Chauhan, P.

    2014-11-01

    Aerosol optical depth (AOD) over the Indian subcontinent and Indian Ocean region is derived operationally for the first time from the geostationary earth orbit (GEO) satellite INSAT-3D Imager data at 0.65 μm wavelength. A single visible channel algorithm based on clear sky composites gives a larger retrieval error in AOD than multiple channel algorithms, due to errors in estimating surface reflectance and atmospheric properties. However, since the MIR channel signal is insensitive to the presence of most aerosols, in the present study the AOD retrieval algorithm employs both visible (centred at 0.65 μm) and mid-infrared (MIR) band (centred at 3.9 μm) measurements, which allows us to monitor the transport of aerosols at higher temporal resolution. Comparisons made between INSAT-3D derived AOD (τI) and MODIS derived AOD (τM), co-located in space (at 1° resolution) and time during January, February and March (JFM) 2014, encompass 1165, 1052 and 900 pixels, respectively. Good agreement is found between τI and τM during JFM 2014, with linear correlation coefficients (R) of 0.87, 0.81 and 0.76, respectively. The extensive validation made during JFM 2014 encompasses 215 AODs co-located in space and time, derived by INSAT-3D (τI) and 10 sun-photometers (τA), including 9 AERONET (Aerosol Robotic Network) sites and 1 handheld sun-photometer site. The INSAT-3D derived AOD, i.e. τI, is found within the retrieval errors of τI = ±0.07 ±0.15τA, with a linear correlation coefficient (R) of 0.90 and a root mean square error (RMSE) equal to 0.06. The present work shows that INSAT-3D aerosol products can be used quantitatively in many applications, with caution for possible residual cloud, snow/ice, and water contamination.
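
    A minimal sketch of the validation statistics quoted above (correlation, RMSE, and the fraction of retrievals falling inside the ±(0.07 + 0.15·τ) envelope), assuming the two AOD series are already collocated in space and time:

      import numpy as np

      def validation_stats(tau_insat, tau_sunphot):
          # Correlation, RMSE, and fraction within the expected error envelope.
          r = np.corrcoef(tau_insat, tau_sunphot)[0, 1]
          rmse = np.sqrt(np.mean((tau_insat - tau_sunphot) ** 2))
          envelope = 0.07 + 0.15 * tau_sunphot
          frac_in = np.mean(np.abs(tau_insat - tau_sunphot) <= envelope)
          return r, rmse, frac_in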

  12. Evaluation of choroidal thickness via enhanced depth-imaging optical coherence tomography in patients with systemic hypertension

    PubMed Central

    Gök, Mustafa; Karabaş, V Levent; Emre, Ender; Akşar, Arzu Toruk; Aslan, Mehmet Ş; Ural, Dilek

    2015-01-01

    Purpose: The purpose was to evaluate choroidal thickness via spectral domain optical coherence tomography (SD-OCT) and to compare the data with those of 24-h blood pressure monitoring, elastic features of the aorta, and left ventricle systolic functions, in patients with systemic hypertension. Materials and Methods: This was a case-control, cross-sectional prospective study. A total of 116 patients with systemic hypertension, and 116 healthy controls over 45 years of age, were included. Subfoveal choroidal thickness (SFCT) was measured using a Heidelberg SD-OCT platform operating in the enhanced depth imaging mode. Patients were also subjected to 24-h ambulatory blood pressure monitoring (ABPM) and standard transthoracic echocardiography (STTE). Patients were divided into dippers and nondippers using ABPM data and those with or without left ventricular hypertrophy (LVH+ and LVH-) based on STTE data. The elastic parameters of the aorta, thus aortic strain (AoS), the beta index (BI), aortic distensibility (AoD), and the left ventricular mass index (LVMI), were calculated from STTE data. Results: No significant difference in SFCT was evident between patients and controls (P ≤ 0.611). However, a significant negative correlation was evident between age and SFCT in both groups (r = −0.66/−0.56, P ≤ 0.00). No significant SFCT difference was evident between the dipper and nondipper groups (P ≤ 0.67), or the LVH (+) and LVH (-) groups (P ≤ 0.84). No significant correlation was evident between SFCT and any of AoS, BI, AoD, or LVMI. Discussion: The choroid is affected by atrophic changes associated with aging. Even in the presence of comorbid risk factors including LVH and arterial stiffness, systemic hypertension did not affect SFCT. PMID:25971169

  13. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-Optimized Co-adds Over 300 deg2 in Five Filters

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan; McGreer, Ian D.; Strauss, Michael A.; Annis, James; Buck, Zoë; Green, Richard; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-06-25

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg(2) on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg(2) of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
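
    A minimal sketch of weighted co-addition in the spirit described above, assuming the single-epoch frames are already registered, sky-subtracted and flux-scaled, and that per-pixel weight maps (reflecting seeing, transparency, background noise and masking) have been prepared; this is an illustration, not the SDSS pipeline's actual weighting scheme.

      import numpy as np

      def coadd(frames, weight_maps):
          # frames, weight_maps: arrays of shape (n_epochs, ny, nx); weights are
          # zero for masked pixels. Returns the co-added science image and the
          # total-weight image recording the relative weight at each pixel.
          num = (frames * weight_maps).sum(axis=0)
          wsum = weight_maps.sum(axis=0)
          science = np.divide(num, wsum, out=np.zeros_like(num), where=wsum > 0)
          return science, wsum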

  14. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg{sup 2} IN FIVE FILTERS

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.; Green, Richard; Bian, Fuyan; Strauss, Michael A.; Buck, Zoë; Annis, James; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-07-01

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg2 on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg2 of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  15. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-optimized Co-adds over 300 deg2 in Five Filters

    NASA Astrophysics Data System (ADS)

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan; McGreer, Ian D.; Strauss, Michael A.; Annis, James; Buck, Zoë; Green, Richard; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-07-01

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg2 on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg2 of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  16. HgCdTe Detectors for Space and Science Imaging: General Issues and Latest Achievements

    NASA Astrophysics Data System (ADS)

    Gravrand, O.; Rothman, J.; Cervera, C.; Baier, N.; Lobre, C.; Zanatta, J. P.; Boulade, O.; Moreau, V.; Fieque, B.

    2016-05-01

    HgCdTe (MCT) is a very versatile material system for infrared (IR) detection, suitable for high performance detection in a wide range of applications and spectral ranges. Indeed, the ability to tailor the cutoff frequency as close as possible to the needs makes it a perfect candidate for high performance detection. Moreover, the high quality material available today, grown either by molecular beam epitaxy or liquid phase epitaxy, allows for very low dark currents at low temperatures, suitable for low flux detection applications such as science imaging. MCT has also demonstrated robustness to the aggressive environment of space and faces, therefore, a large demand for space applications. A satellite may stare at the earth, in which case detection usually involves a lot of photons, called a high flux scenario. Alternatively, a satellite may stare at outer space for science purposes, in which case the detected photon number is very low, leading to low flux scenarios. This latter case induces very strong constraints onto the detector: low dark current, low noise, (very) large focal plane arrays. The classical structure used to fulfill those requirements are usually p/n MCT photodiodes. This type of structure has been deeply investigated in our laboratory for different spectral bands, in collaboration with the CEA Astrophysics lab. However, another alternative may also be investigated with low excess noise: MCT n/p avalanche photodiodes (APD). This paper reviews the latest achievements obtained on this matter at DEFIR (LETI and Sofradir common laboratory) from the short wave infrared (SWIR) band detection for classical astronomical needs, to long wave infrared (LWIR) band for exoplanet transit spectroscopy, up to very long wave infrared (VLWIR) bands. The different available diode architectures (n/p VHg or p/n, or even APDs) are reviewed, including different available ROIC architectures for low flux detection.

  17. HgCdTe Detectors for Space and Science Imaging: General Issues and Latest Achievements

    NASA Astrophysics Data System (ADS)

    Gravrand, O.; Rothman, J.; Cervera, C.; Baier, N.; Lobre, C.; Zanatta, J. P.; Boulade, O.; Moreau, V.; Fieque, B.

    2016-09-01

    HgCdTe (MCT) is a very versatile material system for infrared (IR) detection, suitable for high performance detection in a wide range of applications and spectral ranges. Indeed, the ability to tailor the cutoff frequency as close as possible to the needs makes it a perfect candidate for high performance detection. Moreover, the high quality material available today, grown either by molecular beam epitaxy or liquid phase epitaxy, allows for very low dark currents at low temperatures, suitable for low flux detection applications such as science imaging. MCT has also demonstrated robustness to the aggressive environment of space and faces, therefore, a large demand for space applications. A satellite may stare at the earth, in which case detection usually involves a lot of photons, called a high flux scenario. Alternatively, a satellite may stare at outer space for science purposes, in which case the detected photon number is very low, leading to low flux scenarios. This latter case induces very strong constraints onto the detector: low dark current, low noise, (very) large focal plane arrays. The classical structure used to fulfill those requirements are usually p/n MCT photodiodes. This type of structure has been deeply investigated in our laboratory for different spectral bands, in collaboration with the CEA Astrophysics lab. However, another alternative may also be investigated with low excess noise: MCT n/p avalanche photodiodes (APD). This paper reviews the latest achievements obtained on this matter at DEFIR (LETI and Sofradir common laboratory) from the short wave infrared (SWIR) band detection for classical astronomical needs, to long wave infrared (LWIR) band for exoplanet transit spectroscopy, up to very long wave infrared (VLWIR) bands. The different available diode architectures (n/p VHg or p/n, or even APDs) are reviewed, including different available ROIC architectures for low flux detection.

  18. Latest achievements on MCT IR detectors for space and science imaging

    NASA Astrophysics Data System (ADS)

    Gravrand, O.; Rothman, J.; Castelein, P.; Cervera, C.; Baier, N.; Lobre, C.; De Borniol, E.; Zanatta, J. P.; Boulade, O.; Moreau, V.; Fieque, B.; Chorier, P.

    2016-05-01

    HgCdTe (MCT) is a very versatile material for IR detection. Indeed, the ability to tailor the cutoff frequency as close as possible to the detection needs makes it a perfect candidate for high performance detection in a wide range of applications and spectral ranges. Moreover, the high quality material available today, grown either by liquid phase epitaxy (LPE) or molecular beam epitaxy (MBE), allows for very low dark currents at low temperatures and makes it suitable for very low flux detection applications such as science imaging. MCT has also demonstrated its robustness to the aggressive space environment and therefore faces a large demand for space applications such as staring at outer space for science purposes, in which case the detected photon number is very low. This induces very strong constraints on the detector: low dark current, low noise, low persistence, (very) large focal plane arrays. The MCT diode structure adapted to fulfill those requirements is naturally the p/n photodiode. Following the developments of this technology made at DEFIR and transferred to Sofradir in the MWIR and LWIR ranges for tactical applications, our laboratory has investigated its adaptation for ultra-low flux in different spectral bands, in collaboration with the CEA Astrophysics lab. Another alternative for ultra-low flux applications in the SWIR range has also been investigated: low excess noise MCT n/p avalanche photodiodes (APD). Those APDs may in some cases open the gate to sub-electron-noise IR detection. This paper reviews the latest achievements obtained on this matter at DEFIR (CEA-LETI and Sofradir common laboratory), from short wave (SWIR) band detection for classical astronomical needs, to the long wave (LWIR) band for exoplanet transit spectroscopy, up to the very long wave (VLWIR) band.

  19. In-depth imaging and quantification of degenerative changes associated with Achilles ruptured tendons by polarization-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bagnaninchi, P. O.; Yang, Y.; Bonesi, M.; Maffulli, G.; Phelan, C.; Meglinski, I.; El Haj, A.; Maffulli, N.

    2010-07-01

    The objective of this study was to develop a method based on polarization-sensitive optical coherent tomography (PSOCT) for the imaging and quantification of degenerative changes associated with Achilles tendon rupture. Ex vivo PSOCT examinations were performed in 24 patients. The study involved samples from 14 ruptured Achilles tendons, 4 tendinopathic Achilles tendons and 6 patellar tendons (collected during total knee replacement) as non-ruptured controls. The samples were imaged in both intensity and phase retardation modes within 24 h after surgery, and birefringence was quantified. The samples were fixed and processed for histology immediately after imaging. Slides were assessed twice in a blind manner to provide a semi-quantitative histological score of degeneration. In-depth microstructural imaging was demonstrated. Collagen disorganization and high cellularity were observable by PSOCT as the main markers associated with pathological features. Quantitative assessment of birefringence and penetration depth found significant differences between non-ruptured and ruptured tendons. Abnormalities were observed in the microstructure of two of the four tendinopathic samples. PSOCT has the potential to explore in situ and in-depth pathological change associated with Achilles tendon rupture, and could help to delineate abnormalities in tendinopathic samples in vivo.

  20. Image reconstruction for PET/CT scanners: past achievements and future challenges

    PubMed Central

    Tong, Shan; Alessio, Adam M; Kinahan, Paul E

    2011-01-01

    PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831
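
    As a hedged illustration of the iterative reconstruction family surveyed above, the following sketch implements a minimal MLEM (maximum-likelihood expectation-maximization) update for a toy system matrix; the matrix, counts, and iteration budget are illustrative assumptions, not any particular scanner's model.

        # Minimal MLEM sketch for emission tomography (illustrative only).
        # Assumes a dense system matrix A (n_bins x n_voxels) and measured counts y.
        import numpy as np

        def mlem(A, y, n_iter=20):
            """Return an MLEM image estimate x for counts y ~ Poisson(A @ x)."""
            x = np.ones(A.shape[1])              # flat initial image
            sens = A.sum(axis=0) + 1e-12         # sensitivity image, sum_i a_ij
            for _ in range(n_iter):
                proj = A @ x + 1e-12             # forward projection
                ratio = y / proj                 # measured / estimated counts
                x *= (A.T @ ratio) / sens        # multiplicative EM update
            return x

        # Toy example: 3 detector bins, 2 voxels.
        A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
        y = np.array([10.0, 15.0, 20.0])
        print(mlem(A, y))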

  1. Functional imaging using the retinal function imager: direct imaging of blood velocity, achieving fluorescein angiography-like images without any contrast agent, qualitative oximetry, and functional metabolic signals.

    PubMed

    Izhaky, David; Nelson, Darin A; Burgansky-Eliash, Zvia; Grinvald, Amiram

    2009-07-01

    The Retinal Function Imager (RFI; Optical Imaging, Rehovot, Israel) is a unique, noninvasive multiparameter functional imaging instrument that directly measures hemodynamic parameters such as retinal blood-flow velocity, oximetric state, and metabolic responses to photic activation. In addition, it allows capillary perfusion mapping without any contrast agent. These parameters of retinal function are degraded by retinal abnormalities. This review delineates the development of these parameters and demonstrates their clinical applicability for noninvasive detection of retinal function in several modalities. The results suggest multiple clinical applications for early diagnosis of retinal diseases and possible critical guidance of their treatment. PMID:19763751

  2. Developments in electronic imaging techniques; Proceedings of the Seminar-in-Depth, San Mateo, Calif., October 16, 17, 1972.

    NASA Technical Reports Server (NTRS)

    Zirkind, R. (Editor); Nudelman, S. S.; Schnitzler, A.

    1973-01-01

    Capabilities and limitations of infrared imaging systems are discussed, and a real-time simulator for image data systems is described. Ultrahigh resolution electronic imaging and storage with the return beam vidicon is treated, and a description is given of an electron-lens for opaque photocathodes. Ground surveillance with an active low light level TV, digital processing of Mariner 9 TV data, image enhancement by holography, and application of data compression techniques to spacecraft imaging systems are given attention. Individual items are announced in this issue.

  3. Thermal Coherence Tomography: Depth-Resolved Imaging in Parabolic Diffusion-Wave Fields Using the Thermal-Wave Radar

    NASA Astrophysics Data System (ADS)

    Tabatabaei, N.; Mandelis, A.

    2012-11-01

    Energy transport in diffusion-wave fields is gradient driven and therefore diffuse, yielding depth-integrated responses with poor axial resolution. Using matched filter principles, a methodology is proposed enabling these parabolic diffusion-wave energy fields to exhibit energy localization akin to propagating hyperbolic wave fields. This not only improves the axial resolution, but also allows for deconvolution of individual responses of superposed axially discrete sources, opening a new field of depth-resolved subsurface thermal coherence tomography using diffusion waves. The depth-resolved nature of the developed methodology is verified through experiments carried out on phantoms and biological samples. The results suggest that thermal coherence tomography can resolve deep structural changes in hard dental and bone tissues, allowing for remote detection of early dental caries and potentially early osteoporosis.
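
    The matched-filter principle invoked above can be illustrated with a hedged, simplified sketch: cross-correlating a measured chirped photothermal signal with the reference chirp compresses the diffuse response into a peak whose lag indicates the delay (and hence depth) of a subsurface source. The chirp band, sampling rate, and echo model below are illustrative assumptions only.

        # Matched-filter (cross-correlation) sketch for a chirped excitation signal.
        import numpy as np
        from scipy.signal import chirp, correlate

        fs = 1000.0                                     # sampling rate, Hz (assumed)
        t = np.arange(0, 2.0, 1.0 / fs)
        ref = chirp(t, f0=1.0, f1=10.0, t1=2.0)         # reference 1-10 Hz sweep
        delay = 0.3                                     # delayed, attenuated echo (s)
        echo = 0.2 * chirp(t - delay, f0=1.0, f1=10.0, t1=2.0) * (t >= delay)

        cc = correlate(echo, ref, mode="full")          # matched-filter output
        lags = np.arange(-len(t) + 1, len(t)) / fs
        print("estimated delay:", lags[np.argmax(cc)])  # peak near 0.3 s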

  4. Estimation of aerosol optical depth and additional atmospheric parameters for the calculation of apparent reflectance from radiance measured by the Airborne Visible/Infrared Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Conel, James E.; Roberts, Dar A.

    1993-01-01

    The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures spatial images of the total upwelling spectral radiance from 400 to 2500 nm through 10 nm spectral channels. Quantitative research and application objectives for surface investigations require inversion of the measured radiance to surface reflectance or surface-leaving radiance. To calculate apparent surface reflectance, estimates of atmospheric water vapor abundance, cirrus cloud effects, surface pressure elevation, and aerosol optical depth are required. Algorithms for the estimation of these atmospheric parameters from the AVIRIS data themselves are described. From these atmospheric parameters we show an example of the calculation of apparent surface reflectance from the AVIRIS-measured radiance using a radiative transfer code.

  5. Experiences and achievements in automated image sequence orientation for close-range photogrammetric projects

    NASA Astrophysics Data System (ADS)

    Barazzetti, Luigi; Forlani, Gianfranco; Remondino, Fabio; Roncella, Riccardo; Scaioni, Marco

    2011-07-01

    Automatic image orientation of close-range image blocks is becoming a task of increasing importance in the practice of photogrammetry. Although image orientation procedures based on interactive tie point measurements do not require any preferential block structure, the use of structured sequences can help to accomplish this task in an automated way. Automatic orientation of image sequences has been widely investigated in the Computer Vision community, where the method is generally named "Structure from Motion" (SfM) or "Structure and Motion". These terms refer to the simultaneous estimation of the image orientation parameters and 3D object points of a scene from a set of image correspondences. Such approaches, which generally disregard camera calibration data, do not ensure an accurate 3D reconstruction, which is a requirement for photogrammetric projects. The major contribution of SfM is therefore viewed in the photogrammetric community as a powerful tool to automatically provide a dense set of tie points as well as initial parameters for a final rigorous bundle adjustment. The paper, after a brief overview of automatic procedures for close-range image sequence orientation, will show some characteristic examples. Although powerful and reliable image orientation solutions are nowadays available at research level, certain questions are still open. Thus the paper will also report some open issues, such as the geometric characteristics of the sequences, the scene's texture and shape, ground constraints (control points and/or free-network adjustment), feature matching techniques, outlier rejection and bundle adjustment models.

  6. Nuclear imaging of the breast: Translating achievements in instrumentation into clinical use

    PubMed Central

    Hruska, Carrie B.; O'Connor, Michael K.

    2013-01-01

    Approaches to imaging the breast with nuclear medicine and/or molecular imaging methods have been under investigation since the late 1980s when a technique called scintimammography was first introduced. This review charts the progress of nuclear imaging of the breast over the last 20 years, covering the development of newer techniques such as breast specific gamma imaging, molecular breast imaging, and positron emission mammography. Key issues critical to the adoption of these technologies in the clinical environment are discussed, including the current status of clinical studies, the efforts at reducing the radiation dose from procedures associated with these technologies, and the relevant radiopharmaceuticals that are available or under development. The necessary steps required to move these technologies from bench to bedside are also discussed. PMID:23635248

  7. Nuclear imaging of the breast: Translating achievements in instrumentation into clinical use

    SciTech Connect

    Hruska, Carrie B.; O'Connor, Michael K.

    2013-05-15

    Approaches to imaging the breast with nuclear medicine and/or molecular imaging methods have been under investigation since the late 1980s when a technique called scintimammography was first introduced. This review charts the progress of nuclear imaging of the breast over the last 20 years, covering the development of newer techniques such as breast specific gamma imaging, molecular breast imaging, and positron emission mammography. Key issues critical to the adoption of these technologies in the clinical environment are discussed, including the current status of clinical studies, the efforts at reducing the radiation dose from procedures associated with these technologies, and the relevant radiopharmaceuticals that are available or under development. The necessary steps required to move these technologies from bench to bedside are also discussed.

  8. Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens.

    PubMed

    Wang, Yu-Jen; Shen, Xin; Lin, Yi-Hsin; Javidi, Bahram

    2015-08-01

    Conventional synthetic-aperture integral imaging uses a lens array to sense the three-dimensional (3D) object or scene that can then be reconstructed digitally or optically. However, integral imaging generally suffers from a fixed and limited range of depth of field (DOF). In this Letter, we experimentally demonstrate a 3D integral-imaging endoscopy with tunable DOF by using a single large-aperture focal-length-tunable liquid crystal (LC) lens. The proposed system can provide high spatial resolution and an extended DOF in a synthetic-aperture integral-imaging 3D endoscope. In our experiments, the image plane in the integral imaging pickup process can be tuned from 18 to 38 mm continuously using a large-aperture LC lens, and the total DOF is extended from 12 to 51 mm. To the best of our knowledge, this is the first report on synthetic-aperture integral-imaging 3D endoscopy with a large-aperture LC lens that can provide high spatial resolution 3D imaging with an extended DOF. PMID:26258358

  9. Improving depth maps with limited user input

    NASA Astrophysics Data System (ADS)

    Vandewalle, Patrick; Klein Gunnewiek, René; Varekamp, Chris

    2010-02-01

    A rapidly growing number of productions from the entertainment industry are aiming at 3D movie theaters. These productions use a two-view format, primarily intended for eye-wear-assisted viewing in a well-defined environment. To get this 3D content into the home environment, where a large variety of 3D viewing conditions exists (e.g. different display sizes, display types, viewing distances), we need a flexible 3D format that can adjust the depth effect. This can be provided by the image plus depth format, in which a video frame is enriched with depth information for all pixels in the video frame. This format can be extended with additional layers, such as an occlusion layer or a transparency layer. The occlusion layer contains information on the data that is behind objects, and is also referred to as occluded video. The transparency layer, on the other hand, contains information on the opacity of the foreground layer. This allows rendering of semi-transparencies such as haze, smoke, windows, etc., as well as transitions from foreground to background. These additional layers are only beneficial if the quality of the depth information is high. High quality depth information can currently only be achieved with user assistance. In this paper, we discuss an interactive method for depth map enhancement that allows adjustments during the propagation over time. Furthermore, we will elaborate on the automatic generation of the transparency layer, using the depth maps generated with an interactive depth map generation tool.

  10. Apparent Depth.

    ERIC Educational Resources Information Center

    Nassar, Antonio B.

    1994-01-01

    Discusses a well-known optical refraction problem where the depth of an object in a liquid is determined. Proposes that many texts incorrectly solve the problem. Provides theory, equations, and diagrams. (MVL)
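
    For reference, the textbook paraxial result that such treatments start from (and which the cited article scrutinizes) can be sketched as follows, with theta_1 the ray angle in the liquid (index n_1) and theta_2 the angle in air (index n_2, roughly 1):

        % Small-angle (paraxial) apparent-depth relation, viewing from air into a liquid.
        % Snell's law: n_1 \sin\theta_1 = n_2 \sin\theta_2; small angles: \sin\theta \approx \tan\theta.
        \frac{d_{\text{apparent}}}{d_{\text{real}}}
          \approx \frac{\tan\theta_1}{\tan\theta_2}
          \approx \frac{\sin\theta_1}{\sin\theta_2}
          = \frac{n_2}{n_1}
        \quad\Rightarrow\quad
        d_{\text{apparent}} \approx \frac{d_{\text{real}}}{n_{\text{liquid}}}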

  11. Imaging the SE1 reflector near the Continental Deep Drilling Site (KTB, Germany) with coherence-based prestack-depth migration

    NASA Astrophysics Data System (ADS)

    Hellwig, O.; Hlousek, F.; Buske, S.

    2013-12-01

    Kirchhoff prestack depth migration algorithms are widely used to image geological structures. There are a variety of Kirchhoff-type methods, such as Fresnel-Volume-Migration (FVM), that try to overcome the incapability of standard Kirchhoff migration to image steeply dipping reflectors or to produce clear and artifact-free seismic images if only a small number of seismic traces is available. All of these modified Kirchhoff migration algorithms employ additional weighting factors to confine the migration operator and to limit the seismic image to the actual position along the two-way travel time isochrone where diffractions and reflections originate. Coherence-based prestack-depth migration (CBM) uses a weighting factor obtained directly from the input data by evaluating a normalized coherence measure defined over neighboring traces and a time window around the particular time sample to be imaged. This coherence measure and the corresponding weighting factor are high if the differences in the arrival times of a coherent event at nearby receivers can be explained by the differences in the travel times along the ray paths from the source position to a certain image point on the two-way travel time isochrone, and from there to the receiver group. In turn, a small weighting factor is obtained if the travel time differences cannot be explained by a certain combination of source, image point and the selected receiver group. Thereby it is possible to suppress random noise and to obtain artifact-free seismic images even with a small number of seismic traces. This method is applied to a single shot from the Instruct-93 data recorded at the Continental Deep Drilling Site (KTB) near Windischeschenbach (Germany). This seismic experiment was designed to illuminate the steeply dipping SE1-reflector, that was known from earlier seismic investigations, at a target depth of about 8 to 9 km. For this purpose the shot point and the 120 receivers were placed approximately 10 km away
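
    To make the weighting idea concrete, a hedged sketch of a semblance-type coherence weight over a small receiver group is given below; the array shapes, window length, and the choice of semblance as the specific coherence measure are assumptions for illustration rather than the exact operator used in the study.

        # Hedged sketch: semblance coherence weight for one image point,
        # evaluated at the predicted two-way travel times of a receiver group.
        import numpy as np

        def coherence_weight(traces, t_pred, dt, half_win=5):
            """traces: (n_rec, n_samples) seismograms of one shot.
            t_pred: (n_rec,) predicted source-image-receiver travel times [s].
            Returns a normalized semblance in [0, 1]."""
            n_rec, n_samp = traces.shape
            win = np.arange(-half_win, half_win + 1)
            idx = np.clip((t_pred / dt).astype(int)[:, None] + win, 0, n_samp - 1)
            gather = np.take_along_axis(traces, idx, axis=1)   # aligned windows
            num = np.sum(np.sum(gather, axis=0) ** 2)          # stacked energy
            den = n_rec * np.sum(gather ** 2) + 1e-12          # total energy
            return num / den

        # A coherent event aligned with t_pred gives a weight near 1,
        # while random noise gives roughly 1 / n_rec.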

  12. Clear-cornea cataract surgery: pupil size and shape changes, along with anterior chamber volume and depth changes. A Scheimpflug imaging study

    PubMed Central

    Kanellopoulos, Anastasios John; Asimellis, George

    2014-01-01

    Purpose: To investigate, by high-precision digital analysis of data provided by Scheimpflug imaging, changes in pupil size and shape and anterior chamber (AC) parameters following cataract surgery. Patients and methods: The study group (86 eyes, patient age 70.58±10.33 years) was subjected to cataract removal surgery with in-the-bag intraocular lens implantation (pseudophakic). A control group of 75 healthy eyes (patient age 51.14±16.27 years) was employed for comparison. Scheimpflug imaging (preoperatively and 3 months postoperatively) was employed to investigate central corneal thickness, AC depth, and AC volume. In addition, by digitally analyzing the black-and-white dotted line pupil edge marking in the Scheimpflug “large maps,” the horizontal and vertical pupil diameters were individually measured and the pupil eccentricity was calculated. The correlations between AC depth and pupil shape parameters versus patient age, as well as the postoperative AC and pupil size and shape changes, were investigated. Results: Compared to preoperative measurements, AC depth and AC volume of the pseudophakic eyes increased by 0.99±0.46 mm (39%; P<0.001) and 43.57±24.59 mm³ (36%; P<0.001), respectively. Pupil size analysis showed that the horizontal pupil diameter was reduced by 0.27±0.22 mm (9.7%; P=0.001) and the vertical pupil diameter was reduced by 0.32±0.24 mm (11%; P<0.001). Pupil eccentricity was reduced by 39.56% (P<0.001). Conclusion: Cataract extraction surgery appears to affect pupil size and shape, possibly in correlation with the AC depth increase. This novel investigation based on digital analysis of Scheimpflug imaging data suggests that the cataract postoperative photopic pupil is reduced and more circular. These changes appear to be more significant with increasing patient age. PMID:25368512

  13. High-resolution 1050 nm spectral domain retinal optical coherence tomography at 120 kHz A-scan rate with 6.1 mm imaging depth

    PubMed Central

    An, Lin; Li, Peng; Lan, Gongpu; Malchow, Doug; Wang, Ruikang K.

    2013-01-01

    We report a newly developed high-speed 1050 nm spectral domain optical coherence tomography (SD-OCT) system for imaging the posterior segment of the human eye. The system is capable of an axial resolution of ~10 µm in air, an imaging depth of 6.1 mm in air, a system sensitivity fall-off of ~6 dB/3 mm and an imaging speed of 120,000 A-scans per second. We experimentally demonstrate the system’s capability to perform phase-resolved imaging of dynamic blood flow within the retina, indicating high phase stability of the SD-OCT system. Finally, we show an example that uses this newly developed system to image the posterior segment of the human eye with a large field of view (10 × 9 mm²), providing detailed visualization of microstructural features from the anterior retina to the posterior choroid. The demonstrated system parameters and imaging performance are comparable to those that a typical 1 µm swept-source OCT would deliver for retinal imaging. PMID:23411636

  14. Increasing depth penetration in biological tissue imaging using 808-nm excited Nd3+/Yb3+/Er3+-doped upconverting nanoparticles.

    PubMed

    Söderlund, Hugo; Mousavi, Monirehalsadat; Liu, Haichun; Andersson-Engels, Stefan

    2015-08-01

    Ytterbium (Yb3+)-sensitized upconverting nanoparticles (UCNPs) are excited at 975 nm, causing relatively high absorption in tissue. A new type of UCNPs with neodymium (Nd3+) and Yb3+ codoping is excitable at an 808-nm wavelength. At this wavelength, the tissue absorption is lower. Here we quantify, both experimentally and theoretically, to what extent Nd3+-doped UCNPs will provide an increased signal at larger depths in tissue compared to conventional 975-nm excited UCNPs. PMID:26271054

  15. Depth-varying density and organization of chondrocytes in immature and mature bovine articular cartilage assessed by 3D imaging and analysis

    NASA Technical Reports Server (NTRS)

    Jadin, Kyle D.; Wong, Benjamin L.; Bae, Won C.; Li, Kelvin W.; Williamson, Amanda K.; Schumacher, Barbara L.; Price, Jeffrey H.; Sah, Robert L.

    2005-01-01

    Articular cartilage is a heterogeneous tissue, with cell density and organization varying with depth from the surface. The objectives of the present study were to establish a method for localizing individual cells in three-dimensional (3D) images of cartilage and quantifying depth-associated variation in cellularity and cell organization at different stages of growth. Accuracy of nucleus localization was high, with 99% sensitivity relative to manual localization. Cellularity (million cells per cm³) decreased from 290, 310, and 150 near the articular surface in fetal, calf, and adult samples, respectively, to 120, 110, and 50 at a depth of 1.0 mm. The distance/angle to the nearest neighboring cell was 7.9 µm/31°, 7.1 µm/31°, and 9.1 µm/31° for cells at the articular surface of fetal, calf, and adult samples, respectively, and increased/decreased to 11.6 µm/31°, 12.0 µm/30°, and 19.2 µm/25° at a depth of 0.7 mm. The methodologies described here may be useful for analyzing the 3D cellular organization of cartilage during growth, maturation, aging, degeneration, and regeneration.

  16. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications that had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD that is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tone-mapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits of being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by being able to monitor processes. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. [3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.

  17. Extended imaging depth to 12 mm for 1050-nm spectral domain optical coherence tomography for imaging the whole anterior segment of the human eye at 120-kHz A-scan rate

    PubMed Central

    Li, Peng; An, Lin; Lan, Gongpu; Johnstone, Murray; Malchow, Doug

    2013-01-01

    We demonstrate a 1050-nm spectral domain optical coherence tomography (OCT) system with a 12 mm imaging depth in air, a 120 kHz A-scan rate and a 10 μm axial resolution for anterior-segment imaging of the human eye, in which a new prototype InGaAs linescan camera with 2048 active-pixel photodiodes is employed to record OCT spectral interferograms in parallel. Combined with the full-range complex technique, we show that the system delivers comparable imaging performance to that of a swept-source OCT with similar system specifications. PMID:23334687

  18. Monocular catadioptric panoramic depth estimation via caustics-based virtual scene transition.

    PubMed

    He, Yu; Wang, Lingxue; Cai, Yi; Xue, Wei

    2016-09-01

    Existing catadioptric panoramic depth estimation systems usually require two panoramic imaging subsystems to achieve binocular disparity. The system structures are complicated and only sparse depth maps can be obtained. We present a novel monocular catadioptric panoramic depth estimation method that achieves dense depth maps of panoramic scenes using a single unmodified conventional catadioptric panoramic imaging system. Caustics model the reflection of the curved mirror and establish the distance relationship between the virtual and real panoramic scenes to overcome the nonlinear problem of the curved mirror. Virtual scene depth is then obtained by applying our structure classification regularization to depth from defocus. Finally, real panoramic scene depth is recovered using the distance relationship. Our method's effectiveness is demonstrated in experiments. PMID:27607512

  19. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    PubMed Central

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-01-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology. PMID:27358000

  20. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues.

    PubMed

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-01-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology. PMID:27358000

  1. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    NASA Astrophysics Data System (ADS)

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.
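
    The eigenspectra model itself accounts for depth- and wavelength-dependent fluence; as a much simpler, hedged illustration of the unmixing step that eMSOT improves upon, the sketch below performs plain linear spectral unmixing of an optoacoustic spectrum into oxy- and deoxyhemoglobin fractions. The wavelength grid and absorption values are placeholders, and the fluence correction that is the papers' actual contribution is deliberately not modeled.

        # Hedged sketch: naive linear unmixing of an optoacoustic spectrum into
        # HbO2/Hb fractions. Placeholder spectra; no fluence correction (unlike eMSOT).
        import numpy as np

        wavelengths = np.array([700, 730, 760, 800, 850])     # nm (assumed grid)
        eps_hbo2 = np.array([0.29, 0.39, 0.58, 0.82, 1.06])   # placeholder values
        eps_hb   = np.array([1.79, 1.10, 1.55, 0.82, 0.69])   # placeholder values

        def unmix_so2(p_spectrum):
            """Least-squares fit p ~ c1*eps_hbo2 + c2*eps_hb; return sO2 = c1/(c1+c2)."""
            E = np.column_stack([eps_hbo2, eps_hb])
            c, *_ = np.linalg.lstsq(E, p_spectrum, rcond=None)
            c = np.clip(c, 0.0, None)
            return c[0] / (c[0] + c[1] + 1e-12)

        # A synthetic 70% sO2 spectrum is recovered exactly in this noise-free toy case.
        print(unmix_so2(0.7 * eps_hbo2 + 0.3 * eps_hb))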

  2. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging.

    PubMed

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-01-01

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models. PMID:27297211

  3. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging

    PubMed Central

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-01-01

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models. PMID:27297211

  4. Framework of a Contour Based Depth Map Coding Method

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; He, Xun; Jin, Xin; Goto, Satoshi

    Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. The Depth Image Based Rendering (DIBR) system has been developed to improve Multiview Video Coding (MVC). In this system, depth images are introduced to synthesize virtual views on the decoder side. A depth image is a piecewise image, consisting of smooth interior regions separated by sharp contours. In the view synthesis process, contours in a depth image are more important than the interior. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour-based coding strategy is proposed. First, the depth image is divided into layers by different depth value intervals. Then regions, which are defined as the basic coding unit in this work, are segmented from each layer. Each region is further divided into its contour and its interior, and two different procedures are employed to code them. A vector-based strategy is applied to code the contour lines: straight segments in contours cost few bits since they are represented as vectors, while pixels that fall outside straight segments are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula whose coefficients are retrieved by regression; this process is called interior painting. Unlike conventional block-based coding methods, the residue between the original frame and the reconstructed frame (built from the contours and interior painting) is not sent to the decoder. In this proposal, contours are coded in a lossless way whereas interiors are coded in a lossy way. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves a better performance than JMVC (the reference software of MVC) in high-quality scenarios.
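
    As a hedged sketch of the "interior painting" step described above, the snippet below fits a first-order (planar) depth model to the interior pixels of one region by least squares; the region mask and the choice of a linear model are illustrative assumptions.

        # Hedged sketch of "interior painting": approximate a region's interior with
        # a fitted plane d(x, y) ~ a*x + b*y + c (first-order model assumed).
        import numpy as np

        def paint_interior(depth, mask):
            """depth: (H, W) depth map; mask: boolean interior of one region.
            Returns the repainted depth map and the three model coefficients."""
            ys, xs = np.nonzero(mask)
            A = np.column_stack([xs, ys, np.ones_like(xs)])
            coeffs, *_ = np.linalg.lstsq(A, depth[ys, xs], rcond=None)
            recon = depth.astype(float)
            recon[ys, xs] = A @ coeffs        # replace interior by the fitted plane
            return recon, coeffs

        # Only the coefficients (plus the losslessly coded contour) would be transmitted.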

  5. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to imaging with non-laser-based mass spectrometers and to various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  6. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    NASA Astrophysics Data System (ADS)

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to imaging with non-laser-based mass spectrometers and to various other experiments in laser physics, physical chemistry, and surface science.

  7. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to imaging with non-laser-based mass spectrometers and to various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  8. Underwater camera with depth measurement

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined by variations of the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results of the structured-light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete re-design of the light source component. The ToF camera system, instead, allows an arbitrary placement of the light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array configuration, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
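
    As background to the ToF option discussed above, a hedged sketch of how a continuous-wave ToF camera converts a measured phase shift into distance is given below; the modulation frequency and the use of the refractive index of water are illustrative assumptions, not the parameters of the prototype.

        # Hedged sketch: continuous-wave ToF range from phase shift,
        # d = c_medium * delta_phi / (4 * pi * f_mod).
        import numpy as np

        C_VACUUM = 299_792_458.0      # speed of light in vacuum, m/s
        N_WATER = 1.33                # approximate refractive index of water (assumed)

        def tof_distance(phase_shift_rad, f_mod_hz, in_water=True):
            c = C_VACUUM / (N_WATER if in_water else 1.0)
            return c * phase_shift_rad / (4.0 * np.pi * f_mod_hz)

        # Example: a pi/2 phase shift at 20 MHz modulation in water -> about 1.4 m.
        print(tof_distance(np.pi / 2, 20e6))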

  9. Differences in the Properties of the Radial Artery between Cun, Guan, Chi, and Nearby Segments Using Ultrasonographic Imaging: A Pilot Study on Arterial Depth, Diameter, and Blood Flow

    PubMed Central

    Kim, Jaeuk U.; Lee, Yu Jung; Kim, Jong Yeol

    2015-01-01

    Aim of the Study. The three conventional pulse-diagnostic palpation locations (PLs) on both wrists are Cun, Guan, and Chi, and each location reveals different clinical information. To identify anatomical or hemodynamic specificity, we used ultrasonographic imaging to determine the arterial diameter, radial artery depth, and arterial blood flow velocity at the three PLs and at nearby non-PL segments. Methods. We applied an ultrasound scanner to 44 subjects and studied the changes in the arterial diameter and depth as well as in the average/maximum blood flow velocities along the radial artery at three PLs and three non-PLs located more proximally than Chi. Results. All of the measurements at all of the PLs were significantly different (P < 0.01). Artery depth was significantly different among the non-PLs; however, this difference became insignificant after normalization to the arm circumference. Conclusions. Substantial changes in the hemodynamic and anatomical properties of the radial artery around the three PLs were insignificant at the nearby non-PLs segments. This finding may provide a partial explanation for the diagnostic use of “Cun, Guan, and Chi.” PMID:25763090

  10. Depth from water reflection.

    PubMed

    Linjie Yang; Jianzhuang Liu; Xiaoou Tang

    2015-04-01

    The scene in a water reflection image often exhibits bilateral symmetry. In this paper, we design a framework to reconstruct the depth from a single water reflection image. This problem can be regarded as a special case of two-view stereo vision. It is challenging to obtain correspondences from the real scene and the mirror scene due to their large appearance difference. We first propose an appearance adaptation method to transform the appearance of the mirror scene so that it is much closer to the real scene. We then present a stereo matching algorithm to obtain the disparity map of the real scene. Compared with other depth-from-symmetry work that deals with man-made objects, our algorithm can recover the depth maps of a variety of scenes, where both natural and man-made objects may exist. PMID:25643408

  11. Reconstruction of Indoor Models Using Point Clouds Generated from Single-Lens Reflex Cameras and Depth Images

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Wu, T.-S.; Lee, I.-C.; Chang, H.; Su, A. Y. S.

    2015-05-01

    This paper presents a data acquisition system consisting of multiple RGB-D sensors and digital single-lens reflex (DSLR) cameras. A systematic data processing procedure for integrating these two kinds of devices to generate three-dimensional point clouds of indoor environments is also developed and described. In the developed system, DSLR cameras are used to bridge the Kinects and provide a more accurate ray intersection condition, which takes advantage of the higher resolution and image quality of the DSLR cameras. Structure from Motion (SFM) reconstruction is used to link and merge multiple Kinect point clouds and dense point clouds (from DSLR color images) to generate initial integrated point clouds. Then, bundle adjustment is used to resolve the exterior orientation (EO) of all images. Those exterior orientations are used as the initial values to combine these point clouds at each frame into the same coordinate system using a Helmert (seven-parameter) transformation. Experimental results demonstrate that the design of the data acquisition system and the data processing procedure can generate dense and fully colored point clouds of indoor environments successfully even in featureless areas. The accuracy of the generated point clouds was evaluated by comparing the widths and heights of identified objects as well as coordinates of pre-set independent check points against in situ measurements. Based on the generated point clouds, complete and accurate three-dimensional models of indoor environments can be constructed effectively.
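
    A hedged sketch of the Helmert (seven-parameter) step mentioned above is shown below: applying a scale, rotation, and translation to one point cloud. The rotation-angle convention is only one common choice, and the estimation of the seven parameters from corresponding points (which the bundle-adjusted orientations initialize) is omitted.

        # Hedged sketch: apply a Helmert (7-parameter) similarity transform to points.
        # Parameters: scale s, rotation angles (rx, ry, rz), translation (tx, ty, tz).
        import numpy as np

        def rotation_matrix(rx, ry, rz):
            cx, sx = np.cos(rx), np.sin(rx)
            cy, sy = np.cos(ry), np.sin(ry)
            cz, sz = np.cos(rz), np.sin(rz)
            Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
            Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx                  # one common rotation order (assumed)

        def helmert(points, s, angles, t):
            """points: (N, 3); returns s * R @ p + t for every point p."""
            R = rotation_matrix(*angles)
            return s * points @ R.T + np.asarray(t)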

  12. Quantitative estimation of Secchi disk depth using the HJ-1B CCD image and in situ observations in Sishili Bay, China

    NASA Astrophysics Data System (ADS)

    Yu, Dingfeng; Zhou, Bin; Fan, Yanguo; Li, Tantan; Liang, Shouzhen; Sun, Xiaoling

    2014-11-01

    Secchi disk depth (SDD) is an important optical property of water related to water quality and primary production. The traditional sampling method is not only time-consuming and labor-intensive but also limited in terms of temporal and spatial coverage, while remote sensing technology can deal with these limitations. In this study, models estimating SDD have been proposed based on regression analysis between the HJ-1 satellite CCD image and synchronous in situ water quality measurements. The results illustrate that the B3/B1 band-ratio model of the CCD could be used to estimate Secchi depth in this region, with a mean relative error (MRE) of 8.6% and a root mean square error (RMSE) of 0.1 m. This model was applied to one HJ-1 satellite CCD image to generate a water transparency map for June 23, 2009, which will be of immense value for environmental monitoring. In addition, SDD was deeper in offshore waters than in inshore waters. River runoff, hydrodynamic environments, and marine aquaculture are the main factors influencing SDD in this area.
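
    A hedged sketch of the band-ratio workflow described above: regress in situ SDD against the B3/B1 ratio and report the MRE and RMSE of the fit. The linear model form and the sample values are assumptions for illustration, not the paper's fitted coefficients or data.

        # Hedged sketch: band-ratio (B3/B1) regression for Secchi disk depth (SDD),
        # with mean relative error (MRE) and RMSE. Values below are synthetic.
        import numpy as np

        b3_over_b1 = np.array([0.80, 0.95, 1.10, 1.25, 1.40])   # CCD band ratio (toy)
        sdd_insitu = np.array([1.2, 1.6, 2.1, 2.5, 3.0])         # measured SDD, m (toy)

        a, b = np.polyfit(b3_over_b1, sdd_insitu, 1)   # SDD = a*(B3/B1) + b (assumed form)
        sdd_pred = a * b3_over_b1 + b

        mre = np.mean(np.abs(sdd_pred - sdd_insitu) / sdd_insitu) * 100.0   # percent
        rmse = np.sqrt(np.mean((sdd_pred - sdd_insitu) ** 2))               # meters
        print(f"a={a:.2f}, b={b:.2f}, MRE={mre:.1f}%, RMSE={rmse:.2f} m")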

  13. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacements on the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task since the intended and/or unintended side effects from 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the 3D cameras' depth capture capabilities.
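
    As a hedged illustration of why a depth transfer curve is useful, the sketch below evaluates the ideal pinhole relation between scene depth and image disparity for a parallel stereo rig, disparity = f * B / Z; the focal length and baseline are assumptions, and a real 3D camera's measured curve will deviate from this ideal, which is exactly what the proposed characterization quantifies.

        # Hedged sketch: ideal disparity-versus-depth curve for a parallel stereo rig.
        import numpy as np

        f_px = 1400.0        # focal length in pixels (assumed)
        baseline_m = 0.065   # camera separation in meters (assumed)

        depths_m = np.linspace(0.5, 10.0, 20)
        disparity_px = f_px * baseline_m / depths_m      # d = f * B / Z
        for z, d in zip(depths_m[:3], disparity_px[:3]):
            print(f"Z = {z:4.2f} m -> disparity = {d:6.1f} px")
        # Comparing a device's measured disparity curve against this ideal one
        # yields a depth-transfer-curve style characterization.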

  14. Depth-based computational photography

    NASA Astrophysics Data System (ADS)

    Liu, Ziwei; Xu, Tingfa; Liu, Jingdan; Li, Xiangmin; Zhao, Peng

    2015-05-01

    A depth-based computational photography model is proposed for all-in-focus image capture. A decomposition function, a defocus matrix, and a depth matrix are introduced to construct the photography model. The original image acquired from a camera can be decomposed into several sub-images on the basis of depth information. The defocus matrix can be deduced from the depth matrix according to the sensor defocus geometry for a thin-lens model. The depth matrix itself is reconstructed using an axial binocular stereo vision algorithm. This photography model adopts an energy-functional minimization method to acquire the sharpest image pieces separately. The implementation of the photography method is described in detail. Experimental results for an actual scene demonstrate that our model is effective.
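
    As a hedged sketch of how a defocus matrix can be deduced from a depth matrix under a thin-lens model, the snippet below computes the circle-of-confusion diameter c = A * f * |S - S_f| / (S * (S_f - f)) for each depth; the lens focal length and f-number are illustrative assumptions.

        # Hedged sketch: per-depth defocus blur diameter from the thin-lens model.
        import numpy as np

        def blur_diameter(depth_m, focus_m, focal_mm=50.0, f_number=2.8):
            f = focal_mm / 1000.0
            aperture = f / f_number                       # aperture diameter A
            S = np.asarray(depth_m, dtype=float)
            return aperture * f * np.abs(S - focus_m) / (S * (focus_m - f))

        # Mapping every pixel's depth through this function gives a defocus matrix;
        # the sub-image with the smallest blur at each depth is kept for the
        # all-in-focus composite.
        print(blur_diameter([1.0, 2.0, 5.0], focus_m=2.0) * 1e6, "micrometers")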

  15. Depth measurement using structured light and spatial frequency.

    PubMed

    Chan, Shih-Yu; Shih, Hsi-Fu; Chen, Jenq-Shyong

    2016-07-01

    This paper proposes a novel design of an optical system for depth measurement, adopting a computer-generated hologram to project a periodic line pattern from which a coaxial triangulation is performed. The spatial periodicity of diffraction images captured in the system is converted to the frequency domain, and the relative depth of the plane of interest is acquired. The experimental results show that the system could achieve resolution in the range of 1 mm over a relative depth range of ∼300-600 mm from the camera. The standard deviations are 0.71 and 0.46 mm for two experiments. PMID:27409192
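
    A hedged sketch of the frequency-domain step described above: the dominant spatial frequency of the projected line pattern is extracted from one image row by FFT and mapped to relative depth through a calibration curve. The synthetic fringe pattern and the placeholder linear calibration are assumptions, not the paper's actual mapping.

        # Hedged sketch: dominant fringe frequency via FFT, then a placeholder
        # frequency-to-depth calibration (assumed linear for illustration).
        import numpy as np

        def dominant_frequency(row, pixel_pitch=1.0):
            spectrum = np.abs(np.fft.rfft(row - row.mean()))
            freqs = np.fft.rfftfreq(row.size, d=pixel_pitch)   # cycles per pixel
            return freqs[np.argmax(spectrum)]

        def depth_from_frequency(f, a=-5000.0, b=900.0):
            return a * f + b          # placeholder calibration, depth in mm (assumed)

        row = 0.5 + 0.5 * np.cos(2 * np.pi * 0.06 * np.arange(512))   # synthetic fringes
        f = dominant_frequency(row)
        print(f, depth_from_frequency(f))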

  16. Static stereo vision depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, D. B.; Von Sydow, M.

    1988-01-01

    A major problem in high-precision teleoperation is the high-resolution presentation of depth information. Stereo television has so far proved to be only a partial solution, due to an inherent trade-off among depth resolution, depth distortion and the alignment of the stereo image pair. Converged cameras can guarantee image alignment but suffer significant depth distortion when configured for high depth resolution. Moving the stereo camera rig to scan the work space further distorts depth. The 'dynamic' (camera-motion induced) depth distortion problem was solved by Diner and Von Sydow (1987), who have quantified the 'static' (camera-configuration induced) depth distortion. In this paper, a stereo image presentation technique which yields aligned images, high depth resolution and low depth distortion is demonstrated, thus solving the trade-off problem.

  17. Multiangle Imaging Spectroradiometer (MISR) Global Aerosol Optical Depth Validation Based on 2 Years of Coincident Aerosol Robotic Network (AERONET) Observations

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph A.; Gaitley, Barbara J.; Martonchik, John V.; Diner, David J.; Crean, Kathleen A.; Holben, Brent

    2005-01-01

    Performance of the Multiangle Imaging Spectroradiometer (MISR) early postlaunch aerosol optical thickness (AOT) retrieval algorithm is assessed quantitatively over land and ocean by comparison with a 2-year measurement record of globally distributed AERONET Sun photometers. There are sufficient coincident observations to stratify the data set by season and expected aerosol type. In addition to reporting uncertainty envelopes, we identify trends and outliers, and investigate their likely causes, with the aim of refining algorithm performance. Overall, about 2/3 of the MISR-retrieved AOT values fall within [0.05 or 20% × AOT] of the Aerosol Robotic Network (AERONET) values. More than a third are within [0.03 or 10% × AOT]. Correlation coefficients are highest for maritime stations (approximately 0.9) and lowest for dusty sites (still above approximately 0.7). Retrieved spectral slopes closely match Sun photometer values for biomass burning and continental aerosol types. Detailed comparisons suggest that adding to the algorithm climatology more absorbing spherical particles, more realistic dust analogs, and a richer selection of multimodal aerosol mixtures would reduce the remaining discrepancies for MISR retrievals over land; in addition, refining instrument low-light-level calibration could reduce or eliminate a small but systematic offset in maritime AOT values. On the basis of cases for which current particle models are representative, a second-generation MISR aerosol retrieval algorithm incorporating these improvements could provide AOT accuracy unprecedented for a spaceborne technique.
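
    A hedged sketch of the agreement statistic quoted above: the fraction of coincident retrievals whose MISR-AERONET difference falls within the envelope max(0.05, 0.20 × AOT). The toy value pairs are synthetic placeholders, not MISR or AERONET data.

        # Hedged sketch: fraction of retrievals within an AOT error envelope.
        import numpy as np

        def fraction_within(misr_aot, aeronet_aot, abs_thr=0.05, rel_thr=0.20):
            misr_aot, aeronet_aot = np.asarray(misr_aot), np.asarray(aeronet_aot)
            envelope = np.maximum(abs_thr, rel_thr * aeronet_aot)
            return np.mean(np.abs(misr_aot - aeronet_aot) <= envelope)

        misr = [0.12, 0.35, 0.08, 0.50, 0.21]        # synthetic coincident pairs
        aeronet = [0.10, 0.30, 0.15, 0.46, 0.20]
        print(fraction_within(misr, aeronet))                 # [0.05 or 20% x AOT]
        print(fraction_within(misr, aeronet, 0.03, 0.10))     # [0.03 or 10% x AOT]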

  18. Depth keying

    NASA Astrophysics Data System (ADS)

    Gvili, Ronen; Kaplan, Amir; Ofek, Eyal; Yahav, Giora

    2003-05-01

    We present a new solution to the known problem of video keying in a natural environment. We segment foreground objects from background objects using their relative distance from the camera, which makes it possible to do away with the use of color for keying. To do so, we developed and built a novel depth video camera, capable of producing RGB and D signals, where D stands for the distance to each pixel. The new RGBD camera enables the creation of a whole new gallery of effects and applications such as multi-layer background substitutions. This new modality makes the production of real time mixed reality video possible, as well as post-production manipulation of recorded video. We address the problem of color spill -- in which the color of the foreground object is mixed, along its boundary, with the background color. This problem prevents an accurate separation of the foreground object from its background, and it is most visible when compositing the foreground objects onto a new background. Most existing techniques are limited to the use of a constant background color. We offer a novel general approach to the problem, enabling the use of the natural background, based upon the D channel generated by the camera.
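
    A hedged sketch of the basic keying operation enabled by the D channel: pixels nearer than a chosen distance are kept as foreground and everything else is replaced by a new background. The threshold is an illustrative assumption, and the color-spill handling discussed above would require additional blending at the foreground boundary.

        # Hedged sketch: depth keying with an RGBD frame (threshold assumed).
        import numpy as np

        def depth_key(rgb, depth, new_background, max_depth_m=1.5):
            """rgb, new_background: (H, W, 3) arrays; depth: (H, W) in meters."""
            foreground = depth < max_depth_m       # near pixels form the key
            out = new_background.copy()
            out[foreground] = rgb[foreground]
            return out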

  19. Flexible depth of field photography.

    PubMed

    Kuthirummal, Sujit; Nagahara, Hajime; Zhou, Changyin; Nayar, Shree K

    2011-01-01

    The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which causes the DOF to reduce. Also, today's cameras have DOFs that correspond to a single slab that is perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured, both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate four applications of flexible DOF. First, we describe extended DOF where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Deconvolving a captured image with a single blur kernel gives an image with extended DOF and high SNR. Next, we show the capture of images with discontinuous DOFs. For instance, near and far objects can be imaged with sharpness, while objects in between are severely blurred. Third, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. Finally, we demonstrate how our camera can be used to realize nonplanar DOFs. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics. PMID:21088319

  20. Teaching image-processing concepts in junior high school: boys' and girls' achievements and attitudes towards technology

    NASA Astrophysics Data System (ADS)

    Barak, Moshe; Asad, Khaled

    2012-04-01

    Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these subjects to the children's world and to the digital culture characterizing society today. Sample: The participants were 60 junior high-school students (9th grade). Design and method: Data collection included observations in the classes, administering an attitude questionnaire before and after the course, giving an achievement exam and analyzing the students' final projects. Results and conclusions: The findings indicated that boys' and girls' achievements were similar throughout the course, and all managed to handle the mathematical knowledge without any particular difficulties. Learners' motivation to engage in the subject was high in the project-based learning part of the course in which they dealt, for instance, with editing their own pictures and experimenting with a facial recognition method. However, the students were less interested in learning the theory at the beginning of the course. The course increased the girls', more than the boys', interest in learning scientific-technological subjects in school, and the gender gap in this regard was bridged.

  1. A depth video processing algorithm for high encoding and rendering performance

    NASA Astrophysics Data System (ADS)

    Guo, Mingsong; Chen, Fen; Sheng, Chengkai; Peng, Zongju; Jiang, Gangyi

    2014-11-01

    In a free viewpoint video system, the color video and the corresponding depth video are utilized to synthesize virtual views by the depth image based rendering (DIBR) technique. Hence, high quality depth videos are a prerequisite for high quality virtual views. However, depth variation, caused by scene variance and limited depth capturing technologies, may increase the encoding bitrate of depth videos and decrease the quality of virtual views. To tackle these problems, a depth preprocessing method based on smoothing the texture and abrupt changes of depth videos is proposed in this paper to increase the accuracy of depth videos. Firstly, a bilateral filter is adopted to smooth the whole depth video while protecting the edges of the depth maps at the same time. Secondly, abrupt variations are detected by a threshold calculated according to the camera parameters of each video sequence. Holes in virtual views occur when the depth values of the left view change abruptly from low to high in the horizontal direction, or when the depth values of the right view change abruptly from high to low. So, for the left view, depth value differences on the left side are gradually reduced wherever they exceed the threshold, and the right side of the right view is then processed likewise. Experimental results show that the proposed method can reduce the encoding bitrate by 25% on average, while the quality of the synthesized virtual views can be improved by 0.39 dB on average compared with using the original depth videos. A subjective quality improvement is also achieved.
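
    A hedged sketch of the first, edge-preserving smoothing step is given below as a brute-force bilateral filter on the depth map; the window radius and the spatial/range sigmas are illustrative assumptions, and an optimized implementation would normally be used in practice.

        # Hedged sketch: brute-force bilateral filter for a depth map
        # (smooths texture while preserving depth edges). Slow but explicit.
        import numpy as np

        def bilateral_depth(depth, radius=3, sigma_s=2.0, sigma_r=5.0):
            H, W = depth.shape
            out = np.empty((H, W), dtype=float)
            for y in range(H):
                for x in range(W):
                    y0, y1 = max(0, y - radius), min(H, y + radius + 1)
                    x0, x1 = max(0, x - radius), min(W, x + radius + 1)
                    patch = depth[y0:y1, x0:x1].astype(float)
                    yy, xx = np.mgrid[y0:y1, x0:x1]
                    w = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2)
                               - (patch - float(depth[y, x])) ** 2 / (2 * sigma_r ** 2))
                    out[y, x] = np.sum(w * patch) / np.sum(w)
            return out

        # The abrupt-variation adjustment near hole-prone regions would follow as a
        # second pass using the threshold derived from the camera parameters.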

  2. Dust deflation by dust devils on Mars derived from optical depth measurements using the shadow method in HiRISE images

    NASA Astrophysics Data System (ADS)

    Reiss, D.; Hoekzema, N. M.; Stenzel, O. J.

    2014-04-01

    We measured the optical depth of three separate dust devils and their surroundings with the so-called "shadow method" in HiRISE images. The calculated optical depths of the dust devils range from 0.29±0.18 to 1.20±0.38. Conservative calculations of the minimum and maximum dust loads are in the range of 4-122 mg m⁻³. Assuming reliable upper and lower boundary values of vertical speeds within the dust devils between 0.1 and 10 m s⁻¹, based on terrestrial and Martian studies, we derived dust fluxes in the range of 6.3-1221 mg m⁻² s⁻¹ (PSP_004285_1375), 0.38-162 mg m⁻² s⁻¹ (ESP_013545_1110), and 3.2-581 mg m⁻² s⁻¹ (ESP_016306_2410) for the three dust devils. Our dust load and dust flux calculations for the three dust devils are in good agreement with previous studies. Two of the analyzed dust devils left continuous dark tracks on the surface. For these dust devils we could calculate how much dust was removed by using the minimum and maximum dust fluxes in combination with the measured horizontal speeds of these dust devils. Our results indicate that removal of an equivalent dust layer of less than 2 μm (or less than one monolayer) is sufficient for the formation of dust devil tracks on Mars. This value might be used in future studies to estimate the contribution of dust devils to the global dust entrainment into the atmosphere on Mars.

  3. Cost-effective instrumentation for quantitative depth measurement of optic nerve head using stereo fundus image pair and image cross correlation techniques

    NASA Astrophysics Data System (ADS)

    de Carvalho, Luis Alberto V.; Carvalho, Valeria

    2014-02-01

    One of the main problems with glaucoma throughout the world is that there are typically no symptoms in the early stages. Many people who have the disease do not know they have it, and by the time one finds out, the disease is usually in an advanced stage. Most retinal cameras on the market today use sophisticated optics and have several other features and capabilities (wide-angle optics, red-free and angiography filters, etc.) that make them expensive for general practice or for screening purposes. Therefore, it is important to develop instrumentation that is fast, effective and economical, in order to reach the mass public in general eye-care centers. In this work, we have constructed the hardware and software of a cost-effective and non-mydriatic prototype device that allows fast capturing and plotting of high-resolution quantitative 3D images and videos of the optic nerve head and the neighboring region (30° field of view). The main application of this device is glaucoma screening, although it may also be useful for the diagnosis of other pathologies related to the optic nerve.

  4. Predefined Redundant Dictionary for Effective Depth Maps Representation

    NASA Astrophysics Data System (ADS)

    Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi

    2016-01-01

    The multi-view video plus depth (MVD) video format consists of two components, texture and depth map, whose combination enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires compression for storage and especially for transmission. Conventional codecs are efficient for compressing texture images but are not well suited to the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at object boundaries. Preserving these characteristics is important to enable high-quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used to approximate these types of images with greedy selection strategies. Experiments confirm the effectiveness of the approach at producing sparse representations and its competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and offers an advantage over the ongoing 3D High Efficiency Video Coding compression standard, particularly at medium and high bitrates.
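    To make the greedy-selection idea concrete, the sketch below approximates a toy piecewise-constant depth profile with orthogonal matching pursuit over a mixed dictionary (DCT atoms for smooth regions plus spike atoms for sharp discontinuities). The dictionary, signal, and sparsity level are illustrative assumptions, not the dictionaries proposed in the paper.

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def dct_dictionary(n):
        """Unit-norm DCT-II atoms as columns."""
        k = np.arange(n)
        D = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
        return D / np.linalg.norm(D, axis=0)

    n = 64
    dictionary = np.hstack([dct_dictionary(n), np.eye(n)])            # mixed: smooth atoms + spike atoms
    profile = np.concatenate([np.full(32, 10.0), np.full(32, 25.0)])  # depth profile with one sharp edge

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8).fit(dictionary, profile)
    approximation = omp.predict(dictionary)
    print("residual norm:", np.linalg.norm(profile - approximation))
    ```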

  5. Climatology of the aerosol optical depth by components from the Multiangle Imaging SpectroRadiometer (MISR) and a high-resolution chemistry transport model

    NASA Astrophysics Data System (ADS)

    Lee, H.; Kalashnikova, O. V.; Suzuki, K.; Braverman, A.; Garay, M. J.; Kahn, R. A.

    2015-12-01

    The Multi-angle Imaging SpectroRadiometer (MISR) Joint Aerosol (JOINT_AS) Level 3 product provides a global, descriptive summary of MISR Level 2 aerosol optical depth (AOD) and aerosol type information for each month between March 2000 and the present. Using Version 1 of JOINT_AS, which is based on the operational (Version 22) MISR Level 2 aerosol product, this study analyzes, for the first time, characteristics of observed and simulated distributions of AOD for three broad classes of aerosols: non-absorbing, absorbing, and non-spherical - near or downwind of their major source regions. The statistical moments (means, standard deviations, and skewnesses) and distributions of AOD by components derived from the JOINT_AS are compared with results from the SPectral RadIatioN-TrAnSport (SPRINTARS) model, a chemistry transport model (CTM) with very high spatial and temporal resolution. Overall, the AOD distributions of combined MISR aerosol types show good agreement with those from SPRINTARS. Marginal distributions of AOD for each aerosol type in both MISR and SPRINTARS show considerably high positive skewness, which indicates the importance of including extreme AOD events when comparing satellite retrievals with models. The MISR JOINT_AS product will greatly facilitate comparisons between satellite observations and model simulations of aerosols by type.

  6. Click-assembled, oxygen-sensing nanoconjugates for depth-resolved, near-infrared imaging in a 3D cancer model.

    PubMed

    Nichols, Alexander J; Roussakis, Emmanuel; Klein, Oliver J; Evans, Conor L

    2014-04-01

    Hypoxia is an important contributing factor to the development of drug-resistant cancer, yet few nonperturbative tools exist for studying oxygenation in tissues. While progress has been made in the development of chemical probes for optical oxygen mapping, penetration of such molecules into poorly perfused or avascular tumor regions remains problematic. A click-assembled oxygen-sensing (CAOS) nanoconjugate is reported and its properties demonstrated in an in vitro 3D spheroid cancer model. The synthesis relies on the sequential click-based ligation of poly(amidoamine)-like subunits for rapid assembly. Near-infrared confocal phosphorescence microscopy was used to demonstrate the ability of the CAOS nanoconjugates to penetrate hundreds of micrometers into spheroids within hours and to show their sensitivity to oxygen changes throughout the nodule. This proof-of-concept study demonstrates a modular approach that is readily extensible to a wide variety of oxygen and cellular sensors for depth-resolved imaging in tissue and tissue models. PMID:24590700

  7. Climatology of the aerosol optical depth by components from the Multi-angle Imaging SpectroRadiometer (MISR) and chemistry transport models

    NASA Astrophysics Data System (ADS)

    Lee, Huikyo; Kalashnikova, Olga V.; Suzuki, Kentaroh; Braverman, Amy; Garay, Michael J.; Kahn, Ralph A.

    2016-06-01

    The Multi-angle Imaging SpectroRadiometer (MISR) Joint Aerosol (JOINT_AS) Level 3 product has provided a global, descriptive summary of MISR Level 2 aerosol optical depth (AOD) and aerosol type information for each month over 16+ years since March 2000. Using Version 1 of JOINT_AS, which is based on the operational (Version 22) MISR Level 2 aerosol product, this study analyzes, for the first time, characteristics of observed and simulated distributions of AOD for three broad classes of aerosols: spherical nonabsorbing, spherical absorbing, and nonspherical - near or downwind of their major source regions. The statistical moments (means, standard deviations, and skewnesses) and distributions of AOD by components derived from the JOINT_AS are compared with results from two chemistry transport models (CTMs), the Goddard Chemistry Aerosol Radiation and Transport (GOCART) and SPectral RadIatioN-TrAnSport (SPRINTARS). Overall, the AOD distributions retrieved from MISR and modeled by GOCART and SPRINTARS agree with each other in a qualitative sense. Marginal distributions of AOD for each aerosol type in both MISR and models show considerably high positive skewness, which indicates the importance of including extreme AOD events when comparing satellite retrievals with models. The MISR JOINT_AS product will greatly facilitate comparisons between satellite observations and model simulations of aerosols by type.

  8. TOF-SIMS Analysis of Sea Salt Particles: Imaging and Depth Profiling in the Discovery of an Unrecognized Mechanism for pH Buffering

    SciTech Connect

    Gaspar, Dan J.; Laskin, Alexander; Wang, Weihong; Hunt, Sherri W.; Finlayson-Pitts, Barbara J.

    2004-06-15

    As part of a broader effort at understanding the chemistry of sea salt particles, we have performed time-of-flight secondary ion mass spectrometry (TOF-SIMS) analysis of individual sea salt particles deposited on a transmission electron microscopy (TEM) grid. Environmental scanning electron microscopy (ESEM) and TOF-SIMS analysis have, in conjunction with OH exposure studies, led to the discovery of an unrecognized buffering mechanism in the uptake and oxidation of SO2 in sea salt particles in the marine boundary layer. This chemistry may resolve several discrepancies in the atmospheric chemistry literature. Several challenges during the acquisition and interpretation of both imaging and depth profiling data on specific particles on the TEM grid identified by the ESEM were overcome. A description of the analysis challenges and the solutions ultimately developed for them is presented here, along with an account of how the TOF-SIMS data were incorporated into the overall research effort. Several issues unique to the analysis of high aspect ratio particles are addressed.

  9. TOF-SIMS Analysis of Sea Salt Particles: Imaging and Depth Profiling in the Discovery of an Unrecognized Mechanism for pH Buffering

    SciTech Connect

    Gaspar, Dan J.; Laskin, Alexander; Wang, Weihong; Hunt, Sherri W.; Finlayson-Pitts, Barbara J.

    2004-06-15

    As part of a broader effort at understanding the chemistry of sea salt particles, we have performed time-of-flight secondary ion mass spectrometry (TOF-SIMS) analysis of individual sea salt particles deposited on a transmission electron microscopy (TEM) grid. Environmental scanning electron microscopy (ESEM) and TOF-SIMS analysis have, in conjunction with OH exposure studies, led to the discovery of an unrecognized buffering mechanism in the uptake and oxidation of SO2 in sea salt particles in the marine boundary layer. This chemistry may resolve several discrepancies in the atmospheric chemistry literature. Several challenges during the acquisition and interpretation of both imaging and depth profiling data on specific particles on the TEM grid identified by the ESEM were overcome. A description of the analysis challenges and the solutions ultimately developed for them is presented here, along with an account of how the TOF-SIMS data were incorporated into the overall research effort. Several issues unique to the analysis of high aspect ratio particles are addressed.

  10. Learning Sparse Representations of Depth

    NASA Astrophysics Data System (ADS)

    Tosic, Ivana; Olshausen, Bruno A.; Culpepper, Benjamin J.

    2011-09-01

    This paper introduces a new method for learning and inferring sparse representations of depth (disparity) maps. The proposed algorithm relaxes the usual assumption of the stationary noise model in sparse coding. This enables learning from data corrupted with spatially varying noise or uncertainty, typically obtained by laser range scanners or structured light depth cameras. Sparse representations are learned from the Middlebury database disparity maps and then exploited in a two-layer graphical model for inferring depth from stereo, by including a sparsity prior on the learned features. Since they capture higher-order dependencies in the depth structure, these priors can complement smoothness priors commonly used in depth inference based on Markov Random Field (MRF) models. Inference on the proposed graph is achieved using an alternating iterative optimization technique, where the first layer is solved using an existing MRF-based stereo matching algorithm, then held fixed as the second layer is solved using the proposed non-stationary sparse coding algorithm. This leads to a general method for improving solutions of state of the art MRF-based depth estimation algorithms. Our experimental results first show that depth inference using learned representations leads to state of the art denoising of depth maps obtained from laser range scanners and a time of flight camera. Furthermore, we show that adding sparse priors improves the results of two depth estimation methods: the classical graph cut algorithm by Boykov et al. and the more recent algorithm of Woodford et al.

  11. Large viewing angle three-dimensional display with smooth motion parallax and accurate depth cues.

    PubMed

    Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Chen, Zhidong; Chen, Duo; Duan, Wei; Yan, Binbin; Yu, Chongxiu; Xu, Daxiong

    2015-10-01

    A three-dimensional (3D) display with smooth motion parallax and a large viewing angle is demonstrated, based on a microlens array and a coded two-dimensional (2D) image on a 50-inch liquid crystal display (LCD) panel with a resolution of 3840 × 2160. By combining this coding with accurate depth-cue expression, the flipping images of traditional integral imaging (II) are eliminated and smooth motion parallax is achieved. The image on the LCD panel is coded as an elemental image repeated periodically, and the depth cue is determined by the repetition period of the elemental image. To construct a 3D image with a complex depth structure, a varying elemental-image period is required. Here, the detailed principle and coding method are presented. The shape and the texture of a target 3D image are defined by a structure image and an elemental image, respectively. In the experiment, two groups of structure images and their corresponding elemental images are used to construct a 3D scene of a football in a green net. The constructed 3D image exhibits clearly enhanced 3D perception and smooth motion parallax. The viewing angle is 60°, which is much larger than that of traditional II.

  12. Intermediate depth seismicity - a reflection seismic approach

    NASA Astrophysics Data System (ADS)

    Haberland, C.; Rietbrock, A.

    2004-12-01

    During subduction the descending oceanic lithosphere is subject to metamorphic reactions, some of them associated with the release of fluids. It is now widely accepted that these reactions and the associated dehydration processes are directly related to the generation of intermediate-depth earthquakes (dehydration embrittlement). However, the structure of the layered oceanic plate at depth and the location of the earthquakes relative to the structural units of the subducting plate (sources within the oceanic crust and/or in the upper oceanic mantle lithosphere?) are still not resolved. This is mainly due to the fact that the observational resolution needed to address these questions (on the order of only a few kilometers) is hardly achieved in field experiments and related studies. Here we study the wavefields of intermediate-depth earthquakes typically observed by temporary networks in order to assess their high-resolution potential for resolving the structure of the downgoing slab and the locus of seismicity. In particular, we study whether the subducted oceanic Moho can be detected by analyzing secondary phases of local earthquakes (near-vertical reflections). Due to the irregular geometry of sources and receivers, we apply an imaging technique similar to diffraction-stack migration. The method is tested using synthetic data based both on 2-D finite difference simulations and on 3-D kinematic ray tracing. The accuracy of the hypocenter locations and onset times, crucial for the successful application of stacking techniques (coherency), was achieved by using relatively relocated intermediate-depth seismicity. Additionally, we simulate the propagation of the wavefields at larger distances (wide angle), indicating the development of guided waves traveling in the low-velocity waveguide associated with the modeled oceanic crust. We also present an application to local earthquake data from the South American subduction zone.
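    A minimal sketch of the diffraction-stack idea for an irregular source-receiver geometry, assuming a constant velocity and straight-ray traveltimes; an actual application would use a 3-D velocity model and the relatively relocated hypocenters mentioned above.

    ```python
    import numpy as np

    def diffraction_stack(traces, src_xyz, rcv_xyz, t0, dt, image_pts, velocity):
        """Sum trace amplitudes along source-to-point-to-receiver traveltime curves (constant velocity)."""
        image = np.zeros(len(image_pts))
        for trace, s, r in zip(traces, src_xyz, rcv_xyz):
            t = (np.linalg.norm(image_pts - s, axis=1) +
                 np.linalg.norm(image_pts - r, axis=1)) / velocity
            idx = np.round((t - t0) / dt).astype(int)
            valid = (idx >= 0) & (idx < len(trace))
            image[valid] += trace[idx[valid]]
        return image
    ```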

  13. Imaging the Alpine Fault to depths of more than 2 km - Initial results from the 2011 WhataDUSIE seismic reflection profile, Whataroa Valley, New Zealand

    NASA Astrophysics Data System (ADS)

    Kovacs, A.; Gorman, A. R.; Buske, S.; Schmitt, D. R.; Eccles, J. D.; Toy, V. G.; Sutherland, R.; Townend, J.; Norris, R.; Pooley, B.; Cooper, J.; Bruce, C.; Smillie, M.; Bain, S.; Hellwig, O.; Hlousek, F.; Hellmich, J.; Riedel, M.; Schijns, H. M.

    2011-12-01

    The Alpine Fault is a major plate-bounding fault that is thought to fail in large earthquakes (Mw~7.9) every 200-400 years and to have last ruptured in AD 1717. It is the principal geological structure accommodating transpressional motion between the Australian and Pacific plates on the South Island, with a long-term horizontal motion over the last 1-2 million years of 21-27 mm/yr. Determining the Alpine Fault zone structure at depths of several kilometres beneath the Earth's surface is crucial for understanding not only what conditions govern earthquake rupture but also how ongoing faulting produces mountain ranges such as the Southern Alps. The valley of the Whataroa River, in the central sector of the Alpine Fault, provides rare access to the SE (hanging wall) side of the fault for the purpose of a seismic survey. During January and February 2011, a ~5-km-long seismic reflection line was collected that aimed to image the Alpine Fault at depth. The acquisition was undertaken with the use of 21 Geode seismographs and two Seistronix seismographs with a total capacity of 552 channels. Geophone spacing varied from 4 m in the north (close to the surface trace of the fault) to 8 m in the south (farther from the surface trace.) Sources were 400-g Pentex charges buried in 1.5-2.0 m deep holes of which ~100 were dug by an excavator and ~100 were dug by hand tools where heavy equipment could not access shot locations. Single shots had a nominal separation of 25 m at the north end of the line. At the south end of the line, shots were deployed in patterns of five with a nominal spacing of 125 m. Acquisition system requirements and surface morphology (meanders in the Whataroa River) required five separate acquisition systems. Timing of shots for these systems was accomplished with a radio-controlled firing system, GPS clocks linked to co-located Reftek seismographs, and overlapping traces between acquisition systems. Shot records have been merged and processed through to

  14. Jupiter Clouds in Depth

    NASA Technical Reports Server (NTRS)

    2000-01-01

    [Figures removed for brevity; see original site for the 619 nm, 727 nm and 890 nm filter images.]

    Images from NASA's Cassini spacecraft using three different filters reveal cloud structures and movements at different depths in the atmosphere around Jupiter's south pole.

    Cassini's cameras come equipped with filters that sample three wavelengths where methane gas absorbs light. These are in the red at a 619 nanometer (nm) wavelength and in the near-infrared at 727 nm and 890 nm. Absorption in the 619 nm filter is weak. It is stronger in the 727 nm band and very strong in the 890 nm band, where 90 percent of the light is absorbed by methane gas. Light in the weakest band can penetrate the deepest into Jupiter's atmosphere. It is sensitive to the amount of cloud and haze down to the pressure of the water cloud, which lies at a depth where the pressure is about 6 times the atmospheric pressure at sea level on Earth. Light in the strongest methane band is absorbed at high altitude and is sensitive only to the ammonia cloud level and higher (pressures less than about one-half of Earth's atmospheric pressure), while the middle methane band is sensitive to the ammonia and ammonium hydrosulfide cloud layers, as deep as two times Earth's atmospheric pressure.

    The images shown here demonstrate the power of these filters in studies of cloud stratigraphy. The images cover latitudes from about 15 degrees north at the top down to the southern polar region at the bottom. The left and middle images are ratios, the image in the methane filter divided by the image at a nearby wavelength outside the methane band. Using ratios emphasizes where contrast is due to methane absorption and not to other factors, such as the absorptive properties of the cloud particles, which influence contrast at all wavelengths.

    The most prominent feature seen in all three filters is the polar stratospheric haze that makes Jupiter

  15. Perceived depth from shading boundaries.

    PubMed

    Kim, Juno; Anstis, Stuart

    2016-01-01

    Shading is well known to provide information that the visual system uses to recover the three-dimensional shape of objects. We examined conditions under which patterns in shading promote the experience of a change in depth at contour boundaries, rather than a change in reflectance. In Experiment 1, we used image manipulation to illuminate different regions of a smooth surface from different directions. This manipulation imposed local differences in shading direction across edge contours (delta shading). We found that increasing the angle of delta shading from 0° to 180° monotonically increased perceived depth across the edge. Experiment 2 found that the perceptual splitting of shading into separate foreground and background surfaces depended on an assumed light-from-above prior. Image regions perceived as foreground structures in upright images appeared farther in depth when the same images were inverted. We also found that the experienced break in surface continuity could promote the experience of amodal completion of colored contours that were ambiguous as to their depth order (Experiment 3). These findings suggest that the visual system can identify occlusion relationships based on monocular variations in local shading direction, but interprets this information according to a light-from-above prior at the mid-level of visual processing. PMID:27271807

  16. Enhanced up/down-conversion luminescence and heat: Simultaneously achieving in one single core-shell structure for multimodal imaging guided therapy.

    PubMed

    He, Fei; Feng, Lili; Yang, Piaoping; Liu, Bin; Gai, Shili; Yang, Guixin; Dai, Yunlu; Lin, Jun

    2016-10-01

    Upon near-infrared (NIR) light irradiation, the Nd(3+)-doping-derived down-conversion luminescence (DCL) in the NIR region and the associated thermal effect are highly attractive for bio-imaging and photothermal therapy (PTT). However, the opposite trends of these two properties induced by concentration quenching make it difficult to obtain the desired DCL and thermal effect together in a single particle. In this study, we first designed a unique NaGdF4:0.3%Nd@NaGdF4@NaGdF4:10%Yb/1%Er@NaGdF4:10%Yb@NaNdF4:10%Yb multiple core-shell structure. Here the two inert layers (NaGdF4 and NaGdF4:10%Yb) substantially eliminate the quenching effects, thus simultaneously achieving markedly enhanced NIR-to-NIR DCL, NIR-to-Vis up-conversion luminescence (UCL), and thermal effect under single 808 nm excitation. The UCL excites the attached photosensitive drug (Au25 nanoclusters) to generate singlet oxygen ((1)O2) for photodynamic therapy (PDT), while the DCL with strong NIR emission serves as a probe for sensitive deep-tissue imaging. The in vitro and in vivo experimental results demonstrate the excellent cancer inhibition efficacy of this platform, owing to a synergistic effect arising from the combined PTT and PDT. Furthermore, multimodal imaging including fluorescence imaging (FI), photothermal imaging (PTI), and photoacoustic imaging (PAI) is obtained and used to monitor the drug delivery process, the internal structure of the tumor, and the photo-therapeutic process, thereby achieving imaging-guided cancer therapy. PMID:27512942

  17. Dose reduction of up to 89% while maintaining image quality in cardiovascular CT achieved with prospective ECG gating

    NASA Astrophysics Data System (ADS)

    Londt, John H.; Shreter, Uri; Vass, Melissa; Hsieh, Jiang; Ge, Zhanyu; Adda, Olivier; Dowe, David A.; Sabllayrolles, Jean-Louis

    2007-03-01

    We present the results of dose and image quality performance evaluation of a novel, prospective ECG-gated Coronary CT Angiography acquisition mode (SnapShot Pulse, LightSpeed VCT-XT scanner, GE Healthcare, Waukesha, WI), and compare it to conventional retrospective ECG gated helical acquisition in clinical and phantom studies. Image quality phantoms were used to measure noise, slice sensitivity profile, in-plane resolution, low contrast detectability and dose, using the two acquisition modes. Clinical image quality and diagnostic confidence were evaluated in a study of 31 patients scanned with the two acquisition modes. Radiation dose reduction in clinical practice was evaluated by tracking 120 consecutive patients scanned with the prospectively gated scan mode. In the phantom measurements, the prospectively gated mode resulted in equivalent or better image quality measures at dose reductions of up to 89% compared to non-ECG modulated conventional helical scans. In the clinical study, image quality was rated excellent by expert radiologist reviewing the cases, with pathology being identical using the two acquisition modes. The average dose to patients in the clinical practice study was 5.6 mSv, representing 50% reduction compared to a similar patient population scanned with the conventional helical mode.

  18. Water depth estimation with ERTS-1 imagery

    NASA Technical Reports Server (NTRS)

    Ross, D. S.

    1973-01-01

    Contrast-enhanced 9.5 inch ERTS-1 images were produced for an investigation on ocean water color. Such images lend themselves to water depth estimation by photographic and electronic density contouring. MSS-4 and -5 images of the Great Bahama Bank were density sliced by both methods. Correlation was found between the MSS-4 image and a hydrographic chart at 1:467,000 scale, in a number of areas corresponding to water depth of less than 2 meters, 5 to 10 meters and 10 to about 20 meters. The MSS-5 image was restricted to depths of about 2 meters. Where reflective bottom and clear water are found, ERTS-1 MSS-4 images can be used with density contouring by electronic or photographic methods for estimating depths to 5 meters within about one meter.

  19. Validation of MODIS Aerosol Optical Depth Retrieval Over Land

    NASA Technical Reports Server (NTRS)

    Chu, D. A.; Kaufman, Y. J.; Ichoku, C.; Remer, L. A.; Tanre, D.; Holben, B. N.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Aerosol optical depths are derived operationally for the first time over land in the visible wavelengths by MODIS (Moderate Resolution Imaging Spectroradiometer) onboard the EOS-Terra spacecraft. More than 300 Sun photometer data points from more than 30 AERONET (Aerosol Robotic Network) sites globally were used to validate the aerosol optical depths obtained during July - September 2000. Excellent agreement is found, with retrieval errors within Δτ = ±0.05 ± 0.20τ, as predicted, over (partially) vegetated surfaces, consistent with pre-launch theoretical analysis and aircraft field experiments. In coastal and semi-arid regions, larger errors are caused predominantly by the uncertainty in evaluating the surface reflectance. The excellent fit was achieved despite ongoing improvements in instrument characterization and calibration. These results show that MODIS-derived aerosol optical depths can be used quantitatively in many applications, with caution for residual cloud, snow/ice, and water contamination.
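    The quoted error envelope can be applied directly when screening individual retrievals against collocated Sun-photometer values; a minimal sketch (function name and sample numbers are illustrative):

    ```python
    def within_expected_error(tau_modis, tau_sunphot):
        """True if a MODIS AOD retrieval lies inside the envelope delta-tau = +/-(0.05 + 0.20*tau)."""
        return abs(tau_modis - tau_sunphot) <= 0.05 + 0.20 * tau_sunphot

    print(within_expected_error(0.32, 0.28))  # True: |0.04| <= 0.05 + 0.056
    ```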

  20. Phase-correction algorithm of deformed grating images in the depth measurement of weld pool surface in gas tungsten arc welding

    NASA Astrophysics Data System (ADS)

    Wei, Yiqing; Liu, Nansheng; Hu, Xian; Ai, Xiaopu

    2011-05-01

    The principle and system structure of the depth measurement of the weld pool surface in tungsten inert gas (TIG) welding are first introduced in the paper; then the problem of common phase lines is studied. We analyze the causes and characteristics of the phase lines and propose a phase-correction method based on line ratio. The paper presents the principle and detailed processing steps of this phase-correction algorithm, and the effectiveness and processing characteristics of the algorithm are then verified by simulation. Finally, the algorithm is applied to phase processing in the depth measurement of the TIG weld pool surface and obtains satisfactory results.

  1. Simultaneous depth-resolved imaging of sub-nanometer scale ossicular vibrations and morphological features of the human-cadaver middle ear with spectral-domain phase-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Subhash, Hrebesh M.; Nguyen-Huynh, Anh; Wang, Ruikang K.; Jacques, Steven L.; Nuttall, Alfred L.

    2012-02-01

    We describe a novel method for detecting the tiny motions of the middle ear (ME) ossicles and their morphological features with spectral-domain phase-sensitive optical coherence tomography (PS-OCT). Laser Doppler vibrometry (LDV) and its variations are the most extensively used methods for studying the vibrational modes of the ME. However, most techniques are limited to single-point analysis and cannot provide depth-resolved, simultaneous imaging of multiple points on the ossicles, especially with the eardrum intact. Consequently, these methods have limited ability to provide relative vibration information at such points. In this study, we demonstrate the feasibility of using PS-OCT for simultaneous, depth-resolved imaging of both vibration and morphological features in a human cadaver middle ear with high sensitivity and resolution. The technique can provide meaningful ossicular vibration measurements with a sensitivity of ~0.5 nm at 1 kHz acoustic stimulation. To the best of our knowledge, this is the first demonstration of depth-resolved vibration imaging of the ossicles with a PS-OCT system at the sub-nanometer scale.
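    For context, phase-sensitive OCT converts the phase change between successive A-scans into axial displacement through the standard relation Δz = λΔφ/(4πn). The wavelength and refractive index in the sketch below are assumed typical values, not parameters reported in this study.

    ```python
    import numpy as np

    def displacement_from_phase(delta_phi_rad, wavelength_m=1.31e-6, n_medium=1.38):
        """Axial displacement from the inter-A-scan phase change (standard PS-OCT relation)."""
        return wavelength_m * delta_phi_rad / (4.0 * np.pi * n_medium)

    # A phase change of a few milliradians corresponds to sub-nanometer motion:
    print(displacement_from_phase(6.6e-3))  # ~5e-10 m, i.e. ~0.5 nm
    ```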

  2. Dual-band Fourier domain optical coherence tomography with depth-related compensations

    PubMed Central

    Zhang, Miao; Ma, Lixin; Yu, Ping

    2013-01-01

    Dual-band Fourier domain optical coherence tomography (FD-OCT) provides depth-resolved spectroscopic imaging that enhances tissue contrast and reduces image speckle. However, previous dual-band FD-OCT systems could not correctly give the tissue spectroscopic contrast due to depth-related discrepancy in the imaging method and attenuation in biological tissue samples. We designed a new dual-band full-range FD-OCT imaging system and developed an algorithm to compensate depth-related fall-off and light attenuation. In our imaging system, the images from two wavelength bands were intrinsically overlapped and their intensities were balanced. The processing time of dual-band OCT image reconstruction and depth-related compensations were minimized by using multiple threads that execute in parallel. Using the newly developed system, we studied tissue phantoms and human cancer xenografts and muscle tissues dissected from severely compromised immune deficient mice. Improved spectroscopic contrast and sensitivity were achieved, benefiting from the depth-related compensations. PMID:24466485
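    A rough sketch of the kind of depth-related compensation described above: each A-scan is divided by the system's measured sensitivity fall-off and rescaled for single-scattering attenuation. The attenuation coefficient and the fall-off curve here are assumed inputs, not the calibration used in the paper.

    ```python
    import numpy as np

    def compensate_ascan(intensity, depth_mm, falloff_curve, mu_per_mm=1.5):
        """Correct an OCT A-scan for depth-dependent fall-off and tissue attenuation (illustrative model)."""
        return intensity / falloff_curve * np.exp(2.0 * mu_per_mm * depth_mm)

    depth_mm = np.linspace(0.0, 2.0, 512)            # imaging depth axis
    falloff = np.exp(-depth_mm / 1.5)                # placeholder for a measured fall-off curve
    raw = np.exp(-2.0 * 1.5 * depth_mm) * falloff    # toy A-scan: attenuation times fall-off
    flat = compensate_ascan(raw, depth_mm, falloff)  # recovers a flat profile for this toy case
    ```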

  3. Chemical analysis of solid materials by a LIMS instrument designed for space research: 2D elemental imaging, sub-nm depth profiling and molecular surface analysis

    NASA Astrophysics Data System (ADS)

    Moreno-García, Pavel; Grimaudo, Valentine; Riedo, Andreas; Neuland, Maike B.; Tulej, Marek; Broekmann, Peter; Wurz, Peter

    2016-04-01

    Direct quantitative chemical analysis with high lateral and vertical resolution of solid materials is of prime importance for the development of a wide variety of research fields, including, e.g., astrobiology, archeology, mineralogy and electronics, among many others. Nowadays, studies carried out by complementary state-of-the-art analytical techniques such as Auger Electron Spectroscopy (AES), X-ray Photoelectron Spectroscopy (XPS), Secondary Ion Mass Spectrometry (SIMS), Glow Discharge Time-of-Flight Mass Spectrometry (GD-TOF-MS) or Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) provide extensive insight into the chemical composition and allow for a deep understanding of processes that might have fashioned the outermost layers of an analyte due to its interaction with the surrounding environment. Nonetheless, these investigations typically employ equipment that is not suitable for implementation on spacecraft, where requirements concerning weight, size and power consumption are very strict. In recent years Laser Ablation/Ionization Mass Spectrometry (LIMS) has re-emerged as a powerful analytical technique suitable not only for laboratory but also for space applications.[1-3] Its improved performance and measurement capabilities result from the use of cutting-edge ultra-short femtosecond laser sources, improved vacuum technology and fast electronics. Because of its ultimate compactness, simplicity and robustness it has already proven to be a very suitable analytical tool for elemental and isotope investigations in space research.[4] In this contribution we demonstrate extended capabilities of our LMS instrument by means of three case studies: i) 2D chemical imaging performed on an Allende meteorite sample,[5] ii) depth profiling with unprecedented sub-nm vertical resolution on Cu electrodeposited interconnects[6,7] and iii) preliminary molecular desorption of polymers without assistance of matrix or functionalized substrates.[8] On the whole

  4. Academic Achievement and the Self-Image of Adolescents with Diabetes Mellitus Type-1 And Rheumatoid Arthritis.

    ERIC Educational Resources Information Center

    Erkolahti, Ritva; Ilonen, Tuula

    2005-01-01

    A total of 69 adolescents, 21 with diabetes mellitus type-1 (DM), 24 with rheumatoid arthritis (RA), and 24 controls matched for sex, age, social background, and living environment, were compared by means of their school grades and the Offer Self-Image Questionnaire. The ages of the children at the time of the diagnosis of the disease and its…

  5. Teaching Image-Processing Concepts in Junior High School: Boys' and Girls' Achievements and Attitudes towards Technology

    ERIC Educational Resources Information Center

    Barak, Moshe; Asad, Khaled

    2012-01-01

    Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these…

  6. Depth perception in autostereograms: 1/f noise is best

    NASA Astrophysics Data System (ADS)

    Yankelevsky, Yael; Shvartz, Ishai; Avraham, Tamar; Bruckstein, Alfred M.

    2016-02-01

    An autostereogram is a single image that encodes depth information that pops out when looking at it. The trick is achieved by replicating a vertical strip that sets a basic two-dimensional pattern with disparity shifts that encode a three-dimensional scene. It is of interest to explore the dependency between the ease of perceiving depth in autostereograms and the choice of the basic pattern used for generating them. In this work we confirm a theory proposed by Bruckstein et al. to explain the process of autostereographic depth perception, providing a measure for the ease of "locking into" the depth profile, based on the spectral properties of the basic pattern used. We report the results of three sets of psychophysical experiments using autostereograms generated from two-dimensional random noise patterns having power spectra of the form $1/f^\beta$. The experiments were designed to test the ability of human subjects to identify smooth, low resolution surfaces, as well as detail, in the form of higher resolution objects in the depth profile, and to determine limits in identifying small objects as a function of their size. In accordance with the theory, we discover a significant advantage of the $1/f$ noise pattern (pink noise) for fast depth lock-in and fine detail detection, showing that such patterns are optimal choices for autostereogram design. Validating the theoretical model predictions strengthens its underlying assumptions, and contributes to a better understanding of the visual system's binocular disparity mechanisms.
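    A small sketch of how a base pattern with a $1/f^\beta$ power spectrum can be synthesized (FFT amplitude shaping with random phases) before being tiled and disparity-shifted into an autostereogram; the normalization and seed are incidental choices.

    ```python
    import numpy as np

    def noise_1_over_f(n, beta=1.0, seed=0):
        """2D random pattern whose isotropic power spectrum is proportional to 1/f**beta."""
        rng = np.random.default_rng(seed)
        fx, fy = np.fft.fftfreq(n), np.fft.fftfreq(n)
        f = np.hypot(fx[None, :], fy[:, None])
        f[0, 0] = f[0, 1]                       # avoid dividing by zero at the DC component
        amplitude = f ** (-beta / 2.0)          # amplitude ~ f^(-beta/2)  =>  power ~ f^(-beta)
        phase = np.exp(2j * np.pi * rng.random((n, n)))
        pattern = np.fft.ifft2(amplitude * phase).real
        return (pattern - pattern.min()) / (pattern.max() - pattern.min())

    base_pattern = noise_1_over_f(256, beta=1.0)  # beta = 1 gives the pink-noise pattern favored above
    ```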

  7. Depth reconstruction from sparse samples: representation, algorithm, and sampling.

    PubMed

    Liu, Lee-Kang; Chan, Stanley H; Nguyen, Truong Q

    2015-06-01

    The rapid development of 3D technology and computer vision applications has motivated a thrust of methodologies for depth acquisition and estimation. However, existing hardware and software acquisition methods have limited performance due to poor depth precision, low resolution, and high computational cost. In this paper, we present a computationally efficient method to estimate dense depth maps from sparse measurements. There are three main contributions. First, we provide empirical evidence that depth maps can be encoded much more sparsely than natural images using common dictionaries, such as wavelets and contourlets. We also show that a combined wavelet-contourlet dictionary achieves better performance than using either dictionary alone. Second, we propose an alternating direction method of multipliers (ADMM) for depth map reconstruction. A multiscale warm start procedure is proposed to speed up the convergence. Third, we propose a two-stage randomized sampling scheme to optimally choose the sampling locations, thus maximizing the reconstruction performance for a given sampling budget. Experimental results show that the proposed method produces high-quality dense depth estimates, and is robust to noisy measurements. Applications to real data in stereo matching are demonstrated. PMID:25769151
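    The claim that depth maps admit much sparser wavelet representations than natural images can be checked with a few lines of analysis; the toy comparison below (synthetic piecewise-constant "depth" versus random texture) only illustrates the kind of measurement, not the paper's experiments.

    ```python
    import numpy as np
    import pywt

    def coeff_fraction_for_energy(img, wavelet="db4", level=3, energy=0.99):
        """Fraction of wavelet coefficients needed to capture the given share of the signal energy."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        arr, _ = pywt.coeffs_to_array(coeffs)
        mag2 = np.sort(np.abs(arr).ravel())[::-1] ** 2
        cumulative = np.cumsum(mag2) / mag2.sum()
        return (np.searchsorted(cumulative, energy) + 1) / mag2.size

    depth_map = np.zeros((128, 128)); depth_map[:, 64:] = 5.0          # piecewise-constant "depth"
    texture = np.random.default_rng(0).standard_normal((128, 128))     # stand-in natural texture
    print(coeff_fraction_for_energy(depth_map), coeff_fraction_for_energy(texture))
    ```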

  8. SU-E-T-387: Achieving Optimal Patient Setup Imaging and Treatment Workflow Configurations in Multi-Room Proton Centers

    SciTech Connect

    Zhang, H; Prado, K; Langen, K; Yi, B; Mehta, M; Regine, W; D'Souza, W

    2014-06-01

    Purpose: To simulate patient flow in a proton treatment center under uncertainty and to explore the feasibility of treatment preparation rooms for improving patient throughput and cyclotron utilization. Methods: Three center layout scenarios were modeled: (S1: in-treatment-room imaging) patient setup and imaging (planar/volumetric) performed in the treatment room; (S2: patient setup in preparation room) each treatment room assigned preparation room(s) equipped with lasers only, for patient setup and gross alignment; and (S3: patient setup and imaging in preparation room) preparation room(s) equipped with lasers and volumetric imaging for patient setup and gross and fine alignment, with a 'snap' image acquired in the treatment room. For each scenario, the number of treatment rooms and the number of preparation rooms serving each treatment room were varied. We examined our results (averages over 100 16-hour, two-shift working days) by evaluating patient throughput and cyclotron utilization. Results: As the number of treatment rooms increased from 1 to 5, daily patient throughput increased from 32 to 161, from 29 to 184, and from 27 to 184, and cyclotron utilization increased from 13% to 85%, from 12% to 98%, and from 11% to 98% for scenarios S1, S2 and S3, respectively. However, both measures plateaued after 4 rooms. With the preparation rooms, throughput and cyclotron utilization increased by 14% and 15%, respectively. Three preparation rooms were optimal for serving 1-3 treatment rooms, and two preparation rooms were optimal for serving 4 or 5 treatment rooms. Conclusion: Preparation rooms for patient setup may increase throughput and decrease the need for additional treatment rooms (cost effective). The optimal number of preparation rooms serving each gantry room varies as a function of the number of treatment rooms and the patient setup scenario. A 5th treatment room may not be justified by throughput or utilization.

  9. Remote sensing of stream depths with hydraulically assisted bathymetry (HAB) models

    NASA Astrophysics Data System (ADS)

    Fonstad, Mark A.; Marcus, W. Andrew

    2005-12-01

    This article introduces a technique for using a combination of remote sensing imagery and open-channel flow principles to estimate depths for each pixel in an imaged river. This technique, which we term hydraulically assisted bathymetry (HAB), uses a combination of local stream gage information on discharge, image brightness data, and Manning-based estimates of stream resistance to calculate water depth. The HAB technique does not require ground-truth depth information at the time of flight. HAB can be accomplished with multispectral or hyperspectral data, and therefore can be applied over entire watersheds using standard high spatial resolution satellite or aerial images. HAB also has the potential to be applied retroactively to historic imagery, allowing researchers to map temporal changes in depth. We present two versions of the technique, HAB-1 and HAB-2. HAB-1 is based primarily on the geometry, discharge and velocity relationships of river channels. Manning's equation (assuming average depth approximates the hydraulic radius), the discharge equation, and the assumption that the frequency distribution of depths within a cross-section approximates that of a triangle are combined with discharge data from a local station, width measurements from imagery, and slope measurements from maps to estimate minimum, average and maximum depths at multiple cross-sections. These depths are assigned to pixels of maximum, average, and minimum brightness within the cross-sections to develop a brightness-depth relation to estimate depths throughout the remainder of the river. HAB-2 is similar to HAB-1 in operation, but the assumption that the distribution of depths approximates that of a triangle is replaced by an optical Beer-Lambert law of light absorbance. In this case, the flow equations and the optical equations are used to iteratively scale the river pixel values until their depths produce a discharge that matches that of a nearby gage. R2 values for measured depths
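    A minimal sketch of the HAB-1 hydraulic step, assuming a wide channel so the hydraulic radius is approximated by the mean depth; the Manning roughness coefficient is a placeholder value, not one from the article.

    ```python
    def hab1_mean_depth(discharge_m3s, width_m, slope, manning_n=0.035):
        """Mean depth from continuity plus Manning's equation:
        Q = (1/n) * width * d**(5/3) * sqrt(slope), solved for d."""
        return (discharge_m3s * manning_n / (width_m * slope ** 0.5)) ** 0.6

    # Example: a 30 m wide reach with a slope of 0.002 carrying 25 m^3/s
    print(hab1_mean_depth(discharge_m3s=25.0, width_m=30.0, slope=0.002))  # mean depth in meters (~0.8)
    ```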

  10. A high-stability scanning tunneling microscope achieved by an isolated tiny scanner with low voltage imaging capability

    SciTech Connect

    Wang, Qi; Wang, Junting; Lu, Qingyou; Hou, Yubin

    2013-11-15

    We present a novel homebuilt scanning tunneling microscope (STM) with high quality atomic resolution. It is equipped with a small but powerful GeckoDrive piezoelectric motor which drives a miniature and detachable scanning part to implement coarse approach. The scanning part is a tiny piezoelectric tube scanner (industry type: PZT-8, whose d{sub 31} coefficient is one of the lowest) housed in a slightly bigger polished sapphire tube, which is riding on and spring clamped against the knife edges of a tungsten slot. The STM so constructed shows low back-lashing and drifting and high repeatability and immunity to external vibrations. These are confirmed by its low imaging voltages, low distortions in the spiral scanned images, and high atomic resolution quality even when the STM is placed on the ground of the fifth floor without any external or internal vibration isolation devices.

  11. A high-stability scanning tunneling microscope achieved by an isolated tiny scanner with low voltage imaging capability

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Hou, Yubin; Wang, Junting; Lu, Qingyou

    2013-11-01

    We present a novel homebuilt scanning tunneling microscope (STM) with high quality atomic resolution. It is equipped with a small but powerful GeckoDrive piezoelectric motor which drives a miniature and detachable scanning part to implement coarse approach. The scanning part is a tiny piezoelectric tube scanner (industry type: PZT-8, whose d31 coefficient is one of the lowest) housed in a slightly bigger polished sapphire tube, which is riding on and spring clamped against the knife edges of a tungsten slot. The STM so constructed shows low back-lashing and drifting and high repeatability and immunity to external vibrations. These are confirmed by its low imaging voltages, low distortions in the spiral scanned images, and high atomic resolution quality even when the STM is placed on the ground of the fifth floor without any external or internal vibration isolation devices.

  12. True-Depth: a new type of true 3D volumetric display system suitable for CAD, medical imaging, and air-traffic control

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Floating Images, Inc. is developing a new type of volumetric monitor capable of producing a high-density set of points in 3D space. Since the points of light actually exist in space, the resulting image can be viewed with continuous parallax, both vertically and horizontally, with no headache or eyestrain. These 'real' points in space are always viewed with a perfect match between accommodation and convergence. All scanned points appear to the viewer simultaneously, making this display especially suitable for CAD, medical imaging, air-traffic control, and various military applications. This system has the potential to display imagery so accurately that a ruler could be placed within the aerial image to provide precise measurement in any direction. A special virtual imaging arrangement allows the user to superimpose 3D images on a solid object, making the object look transparent. This is particularly useful for minimally invasive surgery in which the internal structure of a patient is visible to a surgeon in 3D. Surgical procedures can be carried out through the smallest possible hole while the surgeon watches the procedure from outside the body as if the patient were transparent. Unlike other attempts to produce volumetric imaging, this system uses no massive rotating screen or any screen at all, eliminating down time due to breakage and possible danger due to potential mechanical failure. Additionally, it is also capable of displaying very large images.

  13. Achieving high precision photometry for transiting exoplanets with a low cost robotic DSLR-based imaging system

    NASA Astrophysics Data System (ADS)

    Guyon, Olivier; Martinache, Frantz

    2012-09-01

    We describe a low-cost, high-precision photometric imaging system that has been in robotic operation for one and a half years at the Mauna Loa Observatory (Hawaii). The system, which can be easily duplicated, is composed of commercially available components, offers a 150 sq deg field with two 70 mm entrance apertures, and provides 6-band simultaneous photometry at 0.01 Hz sampling. The detectors are low-cost commercial 3-color CMOS arrays, which we show are an attractive, cost-effective choice for high-precision transit photometry. We describe the design of the system and show early results. A new data processing technique was developed to overcome pixelization and color errors. We show that this technique, which can also be applied to non-color imaging systems, essentially removes pixelization errors in the photometric signal, and we demonstrate on-sky photometric precision approaching the fundamental error sources (photon noise and atmospheric scintillation). We conclude that our approach is ideally suited for exoplanet transit surveys with multiple units. We show that in this scenario the success metric is purely cost per etendue, which is less than $10,000 per square meter square degree for our system.

  14. Correlation Plenoptic Imaging.

    PubMed

    D'Angelo, Milena; Pepe, Francesco V; Garuccio, Augusto; Scarcelli, Giuliano

    2016-06-01

    Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging. PMID:27314718

  15. Correlation Plenoptic Imaging

    NASA Astrophysics Data System (ADS)

    D'Angelo, Milena; Pepe, Francesco V.; Garuccio, Augusto; Scarcelli, Giuliano

    2016-06-01

    Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.

  16. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-01

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second. PMID:24104293

  17. The Depths from Skin to the Major Organs at Chest Acupoints of Pediatric Patients

    PubMed Central

    Ma, Yi-Chun; Peng, Ching-Tien; Huang, Yu-Chuen; Lin, Hung-Yi; Lin, Jaung-Geng

    2015-01-01

    Background. Acupuncture is applied to treat numerous diseases in pediatric patients. Few reports have been published on the depth to which it is safe to insert needle acupoints in pediatric patients. We evaluated the depths to which acupuncture needles can be inserted safely in chest acupoints in pediatric patients and the variations in safe depth according to sex, age, body weight, and body mass index (BMI). Methods. We retrospectively studied computed tomography (CT) images of pediatric patients aged 4 to 18 years who had undergone chest CT at China Medical University Hospital from December 2004 to May 2013. The safe depth of chest acupoints was directly measured from the CT images. The relationships between the safe depth of these acupoints and sex, age, body weight, and BMI were analyzed. Results. The results demonstrated significant differences in depth among boys and girls at KI25 (kidney meridian), ST16 (stomach meridian), ST18, SP17 (spleen meridian), SP19, SP20, PC1 (pericardium meridian), LU2 (lung meridian), and GB22 (gallbladder meridian). Safe depth significantly differed among the age groups (P < 0.001), weight groups (P < 0.05), and BMI groups (P < 0.05). Conclusion. Physicians should focus on large variations in needle depth during acupuncture for achieving optimal therapeutic effect and preventing complications. PMID:26457105

  18. Learning in Depth: Students as Experts

    ERIC Educational Resources Information Center

    Egan, Kieran; Madej, Krystina

    2009-01-01

    Nearly everyone who has tried to describe an image of the educated person, from Plato to the present, includes at least two requirements: first, educated people must be widely knowledgeable and, second, they must know something in depth. The authors would like to advocate a somewhat novel approach to "learning in depth" (LiD) that seems likely to…

  19. Temporal and Spatial Denoising of Depth Maps

    PubMed Central

    Lin, Bor-Shing; Su, Mei-Ju; Cheng, Po-Hsun; Tseng, Po-Jui; Chen, Sao-Jie

    2015-01-01

    This work presents a procedure for refining depth maps acquired using RGB-D (depth) cameras. With the many new structured-light RGB-D cameras, acquiring high-resolution depth maps has become easy. However, these cameras suffer from problems such as undesired occlusion, inaccurate depth values, and temporal variation of pixel values. In this paper, a method based on exemplar-based inpainting is proposed to remove artefacts in depth maps obtained using RGB-D cameras. Exemplar-based inpainting has previously been used to repair images after object removal, and the underlying concept is similar to that of padding the occlusions in RGB-D depth data. Therefore, our method enhances and adapts the inpainting approach to refine the quality of RGB-D depth maps. For evaluation, the proposed method was tested on the Tsukuba Stereo Dataset, which contains a 3D video with ground-truth depth maps, occlusion maps, and RGB images, using the peak signal-to-noise ratio and the computation time as evaluation metrics. Moreover, a set of self-recorded RGB-D depth maps and their refined versions are presented to show the effectiveness of the proposed method. PMID:26230696

  20. Temporal and Spatial Denoising of Depth Maps.

    PubMed

    Lin, Bor-Shing; Su, Mei-Ju; Cheng, Po-Hsun; Tseng, Po-Jui; Chen, Sao-Jie

    2015-01-01

    This work presents a procedure for refining depth maps acquired using RGB-D (depth) cameras. With the many new structured-light RGB-D cameras, acquiring high-resolution depth maps has become easy. However, these cameras suffer from problems such as undesired occlusion, inaccurate depth values, and temporal variation of pixel values. In this paper, a method based on exemplar-based inpainting is proposed to remove artefacts in depth maps obtained using RGB-D cameras. Exemplar-based inpainting has previously been used to repair images after object removal, and the underlying concept is similar to that of padding the occlusions in RGB-D depth data. Therefore, our method enhances and adapts the inpainting approach to refine the quality of RGB-D depth maps. For evaluation, the proposed method was tested on the Tsukuba Stereo Dataset, which contains a 3D video with ground-truth depth maps, occlusion maps, and RGB images, using the peak signal-to-noise ratio and the computation time as evaluation metrics. Moreover, a set of self-recorded RGB-D depth maps and their refined versions are presented to show the effectiveness of the proposed method. PMID:26230696

  1. Effects of magnification and zooming on depth perception in digital stereomammography: an observer performance study.

    PubMed

    Chan, Heang-Ping; Goodsitt, Mitchell M; Hadjiiski, Lubomir M; Bailey, Janet E; Klein, Katherine; Darner, Katie L; Sahiner, Berkman

    2003-11-21

    We are evaluating the application of stereoscopic imaging to digital mammography. In the current study, we investigated the effects of magnification and zooming on depth perception. A modular phantom was designed which contained six layers of 1-mm-thick Lexan plates, each spaced 1 mm apart. Eight to nine small, thin nylon fibrils were pasted on each plate in horizontal or vertical orientations such that they formed 25 crossing fibril pairs in a projected image. The depth separation between each fibril pair ranged from 2 to 10 mm. A change in the order of the Lexan plates changed the depth separation of the two fibrils in a pair. Stereoscopic image pairs of the phantom were acquired with a GE full-field digital mammography system. Three different phantom configurations were imaged. All images were obtained using a Rh target/Rh filter spectrum at 30 kVp tube potential and a +/- 3 stereo shift angle. Images were acquired in both contact and 1.8X magnification geometry and an exposure range of 4 to 63 mAs was employed. The images were displayed on a Barco monitor driven by a Metheus stereo graphics board and viewed with LCD stereo glasses. Five observers participated in the study. Each observer visually judged whether the vertical fibril was in front of or behind the horizontal fibril in each fibril pair. It was found that the accuracy of depth discrimination increased with increasing fibril depth separation and x-ray exposure. The accuracy was not improved by electronic display zooming of the contact stereo images by 2X. Under conditions of high noise (low mAs) and small depth separation between the fibrils, the observers' depth discrimination ability was significantly better in stereo images acquired with geometric magnification than in images acquired with a contact technique and displayed with or without zooming. Under our experimental conditions, a 2 mm depth discrimination was achieved with over 60% accuracy on contact images with and without zooming, and with

  2. 7.0-T magnetic resonance imaging characterization of acute blood-brain-barrier disruption achieved with intracranial irreversible electroporation.

    PubMed

    Garcia, Paulo A; Rossmeisl, John H; Robertson, John L; Olson, John D; Johnson, Annette J; Ellis, Thomas L; Davalos, Rafael V

    2012-01-01

    The blood-brain-barrier (BBB) presents a significant obstacle to the delivery of systemically administered chemotherapeutics for the treatment of brain cancer. Irreversible electroporation (IRE) is an emerging technology that uses pulsed electric fields for the non-thermal ablation of tumors. We hypothesized that there is a minimal electric field at which BBB disruption occurs surrounding an IRE-induced zone of ablation and that this transient response can be measured using gadolinium (Gd) uptake as a surrogate marker for BBB disruption. The study was performed in a Good Laboratory Practices (GLP) compliant facility and had Institutional Animal Care and Use Committee (IACUC) approval. IRE ablations were performed in vivo in normal rat brain (n = 21) with 1-mm electrodes (0.45 mm diameter) separated by an edge-to-edge distance of 4 mm. We used an ECM830 pulse generator to deliver ninety 50-μs pulse treatments (0, 200, 400, 600, 800, and 1000 V/cm) at 1 Hz. The effects of applied electric fields and timing of Gd administration (-5, +5, +15, and +30 min) were assessed by systematically characterizing IRE-induced regions of cell death and BBB disruption with 7.0-T magnetic resonance imaging (MRI) and histopathologic evaluations. Statistical analysis on the effect of applied electric field and Gd timing was conducted via Fit of Least Squares with α = 0.05 and linear regression analysis. The focal nature of IRE treatment was confirmed with 3D MRI reconstructions with linear correlations between volume of ablation and electric field. Our results also demonstrated that IRE is an ablation technique that kills brain tissue in a focal manner depicted by MRI (n = 16) and transiently disrupts the BBB adjacent to the ablated area in a voltage-dependent manner as seen with Evans blue (n = 5) and Gd administration. PMID:23226293

  3. 7.0-T Magnetic Resonance Imaging Characterization of Acute Blood-Brain-Barrier Disruption Achieved with Intracranial Irreversible Electroporation

    PubMed Central

    Garcia, Paulo A.; Rossmeisl, John H.; Robertson, John L.; Olson, John D.; Johnson, Annette J.; Ellis, Thomas L.; Davalos, Rafael V.

    2012-01-01

    The blood-brain-barrier (BBB) presents a significant obstacle to the delivery of systemically administered chemotherapeutics for the treatment of brain cancer. Irreversible electroporation (IRE) is an emerging technology that uses pulsed electric fields for the non-thermal ablation of tumors. We hypothesized that there is a minimal electric field at which BBB disruption occurs surrounding an IRE-induced zone of ablation and that this transient response can be measured using gadolinium (Gd) uptake as a surrogate marker for BBB disruption. The study was performed in a Good Laboratory Practices (GLP) compliant facility and had Institutional Animal Care and Use Committee (IACUC) approval. IRE ablations were performed in vivo in normal rat brain (n = 21) with 1-mm electrodes (0.45 mm diameter) separated by an edge-to-edge distance of 4 mm. We used an ECM830 pulse generator to deliver ninety 50-μs pulse treatments (0, 200, 400, 600, 800, and 1000 V/cm) at 1 Hz. The effects of applied electric fields and timing of Gd administration (−5, +5, +15, and +30 min) were assessed by systematically characterizing IRE-induced regions of cell death and BBB disruption with 7.0-T magnetic resonance imaging (MRI) and histopathologic evaluations. Statistical analysis on the effect of applied electric field and Gd timing was conducted via Fit of Least Squares with α = 0.05 and linear regression analysis. The focal nature of IRE treatment was confirmed with 3D MRI reconstructions with linear correlations between volume of ablation and electric field. Our results also demonstrated that IRE is an ablation technique that kills brain tissue in a focal manner depicted by MRI (n = 16) and transiently disrupts the BBB adjacent to the ablated area in a voltage-dependent manner as seen with Evans blue (n = 5) and Gd administration. PMID:23226293

  4. Automatic exposure control in multichannel CT with tube current modulation to achieve a constant level of image noise: Experimental assessment on pediatric phantoms

    SciTech Connect

    Brisse, Herve J.; Madec, Ludovic; Gaboriaud, Genevieve; Lemoine, Thomas; Savignoni, Alexia; Neuenschwander, Sylvia; Aubert, Bernard; Rosenwald, Jean-Claude

    2007-07-15

    Automatic exposure control (AEC) systems have been developed by computed tomography (CT) manufacturers to improve the consistency of image quality among patients and to control the absorbed dose. Since multichannel helical CT scanning may easily increase individual radiation doses, this technical improvement is of special interest in children, who are particularly sensitive to ionizing radiation, but little information is currently available regarding the precise performance of these systems on small patients. Our objective was to assess an AEC system on pediatric dose phantoms by studying the impact of phantom transmission and acquisition parameters on tube current modulation, on the resulting absorbed dose and on image quality. We used a four-channel CT scanner working with a patient-size and z-axis-based AEC system designed to achieve a constant noise within the reconstructed images by automatically adjusting the tube current during acquisition. The study was performed with six cylindrical poly(methylmethacrylate) (PMMA) phantoms of variable diameters (10-32 cm) and one 5-year-old-equivalent pediatric anthropomorphic phantom. After a single scan projection radiograph (SPR), helical acquisitions were performed and images were reconstructed with a standard convolution kernel. Tube current modulation was studied with variable SPR settings (tube angle, mA, kVp) and helical parameters (6-20 HU noise indices, 80-140 kVp tube potential, 0.8-4 s tube rotation time, 5-20 mm x-ray beam thickness, 0.75-1.5 pitch, 1.25-10 mm image thickness, variable acquisition and reconstruction fields of view). CT dose indices (CTDIvol) were measured, and the image quality criterion used was the standard deviation of the CT number measured in reconstructed images of PMMA material. Observed tube current levels were compared to the expected values from Brooks and Di Chiro's [R.A. Brooks and G. Di Chiro, Med. Phys. 3, 237-240 (1976)] model and calculated values (product of a reference value
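
    The constant-noise goal behind such AEC systems can be illustrated with elementary CT physics: image noise scales roughly as the inverse square root of the detected photons, and the detected photons fall off roughly exponentially with attenuating thickness, so the tube-current-time product must grow roughly exponentially with phantom diameter to hold noise constant. The sketch below is a minimal illustration under that simplifying assumption, not the manufacturer's AEC algorithm; the attenuation coefficient is illustrative.

        # Minimal sketch of the constant-noise idea behind AEC (not the vendor algorithm):
        # noise ~ 1/sqrt(detected photons), detected photons ~ mAs * exp(-mu * diameter),
        # so constant noise requires mAs ~ exp(mu * diameter).
        import numpy as np

        MU_PMMA = 0.02   # assumed effective attenuation coefficient, 1/mm (illustrative)

        def relative_mas_for_constant_noise(diameter_mm, reference_diameter_mm=160.0):
            """Tube-current-time product relative to the reference phantom diameter."""
            return np.exp(MU_PMMA * (np.asarray(diameter_mm) - reference_diameter_mm))

        print(relative_mas_for_constant_noise([100, 160, 240, 320]))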

  5. RGB-D depth-map restoration using smooth depth neighborhood supports

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Xue, Haoyang; Yu, Zhongjie; Wu, Qiang; Yang, Jie

    2015-05-01

    A method to restore the depth map of an RGB-D image using smooth depth neighborhood (SDN) supports is presented. The SDN supports are computed based on the corresponding color image of the depth map. Compared with the most widely used square supports, the proposed SDN supports capture the local structure of the object well. Only pixels with similar depth values are allowed to be included in the support. We combine our SDN supports with the joint bilateral filter (JBF) to form the SDN-JBF and use it to restore depth maps. Experimental results show that our SDN-JBF can not only rectify the misaligned depth pixels but also preserve sharp depth discontinuities.
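
    For orientation, a compact joint bilateral filter over a plain square window is sketched below; the range weights come from the registered color (gray) image, as in the JBF used above, but the smooth-depth-neighborhood support selection of the paper is not reproduced, and the parameter values are illustrative.

        # Minimal joint bilateral filter for depth restoration (square-window version;
        # the paper's smooth-depth-neighborhood supports are not reproduced here).
        import numpy as np

        def joint_bilateral_depth(depth, gray, radius=4, sigma_s=3.0, sigma_r=0.1):
            """depth, gray: float arrays in [0, 1]; invalid depth marked as 0."""
            h, w = depth.shape
            out = np.zeros_like(depth)
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
            d = np.pad(depth, radius, mode='edge')
            g = np.pad(gray, radius, mode='edge')
            for i in range(h):
                for j in range(w):
                    dwin = d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    gwin = g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    rng = np.exp(-((gwin - gray[i, j])**2) / (2 * sigma_r**2))
                    wgt = spatial * rng * (dwin > 0)      # ignore invalid depth samples
                    out[i, j] = (wgt * dwin).sum() / max(wgt.sum(), 1e-8)
            return out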

  6. ToF-SIMS depth profiling of cells: z-correction, 3D imaging, and sputter rate of individual NIH/3T3 fibroblasts.

    PubMed

    Robinson, Michael A; Graham, Daniel J; Castner, David G

    2012-06-01

    Proper display of three-dimensional time-of-flight secondary ion mass spectrometry (ToF-SIMS) imaging data of complex, nonflat samples requires a correction of the data in the z-direction. Inaccuracies in displaying three-dimensional ToF-SIMS data arise from projecting data from a nonflat surface onto a 2D image plane, as well as possible variations in the sputter rate of the sample being probed. The current study builds on previous studies by creating software written in MATLAB, the ZCorrectorGUI (available at http://mvsa.nb.uw.edu/), to apply the z-correction to entire 3D data sets. Three-dimensional image data sets were acquired from NIH/3T3 fibroblasts by collecting ToF-SIMS images, using a dual beam approach (25 keV Bi₃⁺ for analysis cycles and 20 keV C₆₀²⁺ for sputter cycles). The entire data cube was then corrected by using the new ZCorrectorGUI software, producing accurate chemical information from single cells in 3D. For the first time, a three-dimensional corrected view of a lipid-rich subcellular region, possibly the nuclear membrane, is presented. Additionally, the key assumption of a constant sputter rate throughout the data acquisition was tested by using ToF-SIMS and atomic force microscopy (AFM) analysis of the same cells. For the dried NIH/3T3 fibroblasts examined in this study, the sputter rate was found to not change appreciably in x, y, or z, and the cellular material was sputtered at a rate of approximately 10 nm per 1.25 × 10¹³ ions C₆₀²⁺/cm². PMID:22530745
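
    The z-correction idea can be summarized as shifting every (x, y) column of the 3D data cube so that voxels are referenced to the measured sample surface rather than the flat analysis plane, assuming a constant sputter rate. The sketch below illustrates that column shift only; it is not the ZCorrectorGUI implementation, and the variable names are placeholders.

        # Sketch of the z-correction idea: shift each (x, y) column of the 3D data cube
        # so voxels line up with the sample surface instead of the analysis plane.
        # Assumes a constant sputter rate; not the ZCorrectorGUI implementation.
        import numpy as np

        def z_correct(cube, surface_height_nm, nm_per_layer):
            """cube: (nz, ny, nx) intensities; surface_height_nm: (ny, nx) topography."""
            nz, ny, nx = cube.shape
            shifts = np.round(surface_height_nm / nm_per_layer).astype(int)
            max_shift = shifts.max()
            corrected = np.zeros((nz + max_shift, ny, nx), dtype=cube.dtype)
            for y in range(ny):
                for x in range(nx):
                    s = max_shift - shifts[y, x]   # taller points start nearer the top
                    corrected[s:s + nz, y, x] = cube[:, y, x]
            return corrected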

  7. Noninvasive Optical Imaging and In Vivo Cell Tracking of Indocyanine Green Labeled Human Stem Cells Transplanted at Superficial or In-Depth Tissue of SCID Mice

    PubMed Central

    Sabapathy, Vikram; Mentam, Jyothsna; Jacob, Paul Mazhuvanchary; Kumar, Sanjay

    2015-01-01

    Stem cell based therapies hold great promise for the treatment of human diseases; however, results from several recent clinical studies have not shown a level of efficacy required for their use as a first-line therapy, because more often than not the fate of the transplanted cells in these studies is unknown. Thus monitoring the real-time fate of transplanted cells in vivo is essential to validate the full potential of stem cell based therapy. Recent studies have shown how real-time in vivo molecular imaging has helped in identifying hurdles towards clinical translation and designing potential strategies that may contribute to successful transplantation of stem cells and improved outcomes. At present, there are no cost-effective and efficient labeling techniques for tracking the cells under in vivo conditions. Indocyanine green (ICG) is a safe, economical, and superior labelling technique for in vivo optical imaging. ICG is an FDA-approved agent, and decades of usage have clearly established the effectiveness of ICG for human clinical applications. In this study, we have optimized the ICG labelling conditions for noninvasive optical imaging and demonstrated that ICG-labelled cells can be successfully used for in vivo cell tracking applications in SCID mouse injury models. PMID:26240573

  8. Rifting-to-drifting transition of the South China Sea: early Cenozoic syn-rifting deposition imaged with prestack depth migration

    NASA Astrophysics Data System (ADS)

    Song, T.; Li, C.; Li, J.

    2012-12-01

    One of the major unsolved questions of the opening of the South China Sea (SCS) is its opening sequences and episodes. It has been suggested, for example, that the opening of the East and Northwest Sub-basins predated, or at least was synchronous with, that of the Southwest Sub-basin, a model contrasting with others in which an earlier opening of the Southwest Sub-basin is preferred. Difficulties in understanding the perplexing relationships between different sub-basins are often compounded by contradictory evidence leading to different interpretations. Here we carry out pre-stack depth migration of a recently acquired multichannel reflection seismic profile from the Southwest Sub-basin of the SCS in order to reveal complicated subsurface structures and strong lateral velocity variations associated with a thick syn-rifting sequence on the southern margin of the Southwest Sub-basin. Combined with gravimetric and magnetic inversion and modeling, this depth section helps us understand the complicated transitional processes from continental rifting to seafloor spreading. This syn-rifting sequence is found to be extremely thick, over 2 seconds in two-way travel time, and is located directly within the continent-ocean transition zone. It is bounded landward by a seaward-dipping fault, and tapers out seaward. The top of this sequence is an erosional truncation, representing mainly the Oligocene-Miocene unconformity landward but a slightly older unconformity on the seaward side. Stronger erosion of this sequence is found toward the ocean basin. The sequence itself is severely faulted by a group of seaward-dipping faults developed mainly within the sequence. The overall deformation style suggests a successive episode of rifting, faulting, compression, tilting, and erosion, prior to seafloor spreading. Integrating information from gravity anomalies and seismic velocities, we interpret that this sequence represents a syn-rifting sequence developed during a long period

  9. Oxygen depth profiling with subnanometre depth resolution

    NASA Astrophysics Data System (ADS)

    Kosmata, Marcel; Munnik, Frans; Hanf, Daniel; Grötzschel, Rainer; Crocoll, Sonja; Möller, Wolfhard

    2014-10-01

    A High-depth Resolution Elastic Recoil Detection (HR-ERD) set-up using a magnetic spectrometer has been taken into operation at the Helmholtz-Zentrum Dresden-Rossendorf for the first time. This instrument allows the investigation of light elements in ultra-thin layers and their interfaces with a depth resolution of less than 1 nm near the surface. As the depth resolution is highly influenced by the experimental measurement parameters, sophisticated optimisation procedures have been implemented. Effects of surface roughness and sample damage caused by high fluences need to be quantified for each kind of material. Corrections are also essential for non-equilibrium charge-state distributions that exist very close to the surface. Using the example of a high-k multilayer SiO₂/Si₃N₄Oₓ/SiO₂/Si, it is demonstrated that oxygen in ultra-thin films of a few nanometres thickness can be investigated by HR-ERD.

  10. Assessment of imaging with extended depth-of-field by means of the light sword lens in terms of visual acuity scale

    PubMed Central

    Kakarenko, Karol; Ducin, Izabela; Grabowiecki, Krzysztof; Jaroszewicz, Zbigniew; Kolodziejczyk, Andrzej; Mira-Agudelo, Alejandro; Petelczyc, Krzysztof; Składowska, Aleksandra; Sypek, Maciej

    2015-01-01

    We present outcomes of an imaging experiment using the refractive light sword lens (LSL) as a contact lens in an optical system that serves as a simplified model of the presbyopic eye. The results show that the LSL produces significant improvements in visual acuity of the simplified presbyopic eye model over a wide range of defocus. Therefore, this element can be an interesting alternative for the multifocal contact and intraocular lenses currently used in ophthalmology. The second part of the article discusses possible modifications of the LSL profile in order to render it more suitable for fabrication and ophthalmological applications. PMID:26137376

  11. Assessment of imaging with extended depth-of-field by means of the light sword lens in terms of visual acuity scale.

    PubMed

    Kakarenko, Karol; Ducin, Izabela; Grabowiecki, Krzysztof; Jaroszewicz, Zbigniew; Kolodziejczyk, Andrzej; Mira-Agudelo, Alejandro; Petelczyc, Krzysztof; Składowska, Aleksandra; Sypek, Maciej

    2015-05-01

    We present outcomes of an imaging experiment using the refractive light sword lens (LSL) as a contact lens in an optical system that serves as a simplified model of the presbyopic eye. The results show that the LSL produces significant improvements in visual acuity of the simplified presbyopic eye model over a wide range of defocus. Therefore, this element can be an interesting alternative for the multifocal contact and intraocular lenses currently used in ophthalmology. The second part of the article discusses possible modifications of the LSL profile in order to render it more suitable for fabrication and ophthalmological applications. PMID:26137376

  12. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. This multidirectional depth estimation enables effective high-dynamic-range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method to perform high-quality 3D imaging for surfaces of both high and low reflectivity. PMID:27607639
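
    The depth encoding relies on standard fringe analysis: with N phase-shifted fringe patterns, the wrapped phase that carries the depth modulation follows from the classic N-step relation. The snippet below shows only that generic step; the ray-based light-field calibration and multidirectional estimation of the paper are not reproduced.

        # Standard N-step phase-shifting step used in fringe projection: recover the
        # wrapped phase that encodes scene depth.
        import numpy as np

        def wrapped_phase(images):
            """images: array-like of N fringe images with phase shifts 2*pi*n/N."""
            imgs = np.asarray(images, dtype=np.float64)
            n = imgs.shape[0]
            deltas = 2 * np.pi * np.arange(n) / n
            num = np.tensordot(np.sin(deltas), imgs, axes=(0, 0))
            den = np.tensordot(np.cos(deltas), imgs, axes=(0, 0))
            return np.arctan2(-num, den)   # wrapped phase in (-pi, pi]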

  13. Mosquitofish (Gambusia affinis) Preference and Behavioral Response to Animated Images of Conspecifics Altered in Their Color, Aspect Ratio, and Swimming Depth

    PubMed Central

    Polverino, Giovanni; Liao, Jian Cong; Porfiri, Maurizio

    2013-01-01

    Mosquitofish (Gambusia affinis) is an example of a freshwater fish species whose remarkable diffusion outside its native range has led to it being placed on the list of the world’s hundred worst invasive alien species (International Union for Conservation of Nature). Here, we investigate mosquitofish shoaling tendency using a dichotomous choice test in which computer-animated images of their conspecifics are altered in color, aspect ratio, and swimming level in the water column. Pairs of virtual stimuli are systematically presented to focal subjects to evaluate their attractiveness and the effect on fish behavior. Mosquitofish respond differentially to some of these stimuli showing preference for conspecifics with enhanced yellow pigmentation while exhibiting highly varying locomotory patterns. Our results suggest that computer-animated images can be used to understand the factors that regulate the social dynamics of shoals of Gambusia affinis. Such knowledge may inform the design of control plans and open new avenues in conservation and protection of endangered animal species. PMID:23342131

  14. Sampling Depths, Depth Shifts, and Depth Resolutions for Bi(n)(+) Ion Analysis in Argon Gas Cluster Depth Profiles.

    PubMed

    Havelund, R; Seah, M P; Gilmore, I S

    2016-03-10

    Gas cluster sputter depth profiling is increasingly used for the spatially resolved chemical analysis and imaging of organic materials. Here, a study is reported of the sampling depth in secondary ion mass spectrometry depth profiling. It is shown that effects of the sampling depth lead to apparent shifts in depth profiles of Irganox 3114 delta layers in Irganox 1010 sputtered, in the dual beam mode, using 5 keV Ar₂₀₀₀⁺ ions and analyzed with Bi(q+), Bi₃(q+) and Bi₅(q+) ions (q = 1 or 2) with energies between 13 and 50 keV. The profiles show sharp delta layers, broadened from their intrinsic 1 nm thickness to full widths at half-maxima (fwhm's) of 8-12 nm. For different secondary ions, the centroids of the measured delta layers are shifted deeper or shallower by up to 3 nm from the position measured for the large, 564.36 Da (C₃₃H₄₆N₃O₅⁻) characteristic ion for Irganox 3114 used to define a reference position. The shifts are linear with the Bi(n)(q+) beam energy and are greatest for Bi₃(q+), slightly less for Bi₅(q+) with its wider or less deep craters, and significantly less for Bi(q+), where the sputtering yield is very low and the primary ion penetrates more deeply. The shifts increase the fwhm's of the delta layers in a manner consistent with a linearly falling generation and escape depth distribution function (GEDDF) for the emitted secondary ions, relevant for a paraboloid-shaped crater. The total depth of this GEDDF is 3.7 times the delta layer shifts. The greatest effect is for the peaks with the greatest shifts, i.e., Bi₃(q+) at the highest energy, and for the smaller fragments. It is recommended that low energies be used for the analysis beam and that carefully selected, large secondary ion fragments be used for measuring depth distributions, or that the analysis be made in the single beam mode using the sputtering Ar cluster ions also for analysis. PMID:26883085

  15. Combination of an optical parametric oscillator and quantum-dots 655 to improve imaging depth of vasculature by intravital multicolor two-photon microscopy.

    PubMed

    Ricard, Clément; Lamasse, Lisa; Jaouen, Alexandre; Rougon, Geneviève; Debarbieux, Franck

    2016-06-01

    Simultaneous imaging of different cell types and structures in the mouse central nervous system (CNS) by intravital two-photon microscopy requires the characterization of fluorophores and advances in approaches to visualize them. We describe the use of a two-photon infrared illumination generated by an optical parametric oscillator (OPO) on quantum-dots 655 (QD655) nanocrystals to improve resolution of the vasculature deeper in the mouse brain both in healthy and pathological conditions. Moreover, QD655 signal can be unmixed from the DsRed2, CFP, EGFP and EYFP fluorescent proteins, which enhances the panel of multi-parametric correlative investigations both in the cortex and the spinal cord. PMID:27375951

  16. Combination of an optical parametric oscillator and quantum-dots 655 to improve imaging depth of vasculature by intravital multicolor two-photon microscopy

    PubMed Central

    Ricard, Clément; Lamasse, Lisa; Jaouen, Alexandre; Rougon, Geneviève; Debarbieux, Franck

    2016-01-01

    Simultaneous imaging of different cell types and structures in the mouse central nervous system (CNS) by intravital two-photon microscopy requires the characterization of fluorophores and advances in approaches to visualize them. We describe the use of a two-photon infrared illumination generated by an optical parametric oscillator (OPO) on quantum-dots 655 (QD655) nanocrystals to improve resolution of the vasculature deeper in the mouse brain both in healthy and pathological conditions. Moreover, QD655 signal can be unmixed from the DsRed2, CFP, EGFP and EYFP fluorescent proteins, which enhances the panel of multi-parametric correlative investigations both in the cortex and the spinal cord. PMID:27375951

  17. Real-time structured light depth extraction

    NASA Astrophysics Data System (ADS)

    Keller, Kurtis; Ackerman, Jeremy D.

    2000-03-01

    Gathering depth data using structured light is an established procedure in many different environments and applications. Many of these systems are used instead of laser line scanning because of their speed. However, for some applications, in our case laparoscopic surgery, depth extraction must run in real time. We have developed an apparatus that speeds up the raw image display and grabbing in structured light depth extraction from 30 frames per second to 60 and 180 frames per second. This yields depth and texture map updates about 15 times per second versus about 3. This increased update rate allows real-time depth extraction for use in augmented medical/surgical applications. Our miniature, fist-sized projector utilizes an internal ferro-reflective LCD display that is illuminated with cold light from a flexible light pipe. The miniature projector, attachable to a laparoscope, displays inverted pairs of structured light into the body, where these images are then viewed by a high-speed camera set slightly off axis from the projector that grabs images synchronously. The images from the camera are ported to a graphics-processing card where six frames are worked on simultaneously to extract depth and create mapped textures from these images. This information is then sent to the host computer with 3D coordinate information of the projector/camera and the associated textures. The surgeon is then able to view body images in real time from different locations without physically moving the laparoscope imager/projector, thereby reducing the trauma of moving laparoscopes in the patient.
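
    The depth itself follows from ordinary projector-camera triangulation: a decoded correspondence offset d (in pixels) maps to depth Z = f·B/d for a focal length f (in pixels) and baseline B. The sketch below uses illustrative placeholder values, not the calibration of the laparoscopic system described above.

        # Basic triangulation relation underlying structured-light depth extraction.
        # focal_px and baseline_mm are illustrative placeholders, not calibrated values.
        import numpy as np

        def depth_from_correspondence(disparity_px, focal_px=1400.0, baseline_mm=20.0):
            d = np.asarray(disparity_px, dtype=np.float64)
            with np.errstate(divide='ignore'):
                z = focal_px * baseline_mm / d
            return np.where(d > 0, z, np.inf)    # depth in mm; inf where no correspondence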

  18. Stereoscopic depth constancy.

    PubMed

    Guan, Phillip; Banks, Martin S

    2016-06-19

    Depth constancy is the ability to perceive a fixed depth interval in the world as constant despite changes in viewing distance and the spatial scale of depth variation. It is well known that the spatial frequency of depth variation has a large effect on threshold. In the first experiment, we determined that the visual system compensates for this differential sensitivity when the change in disparity is suprathreshold, thereby attaining constancy similar to contrast constancy in the luminance domain. In a second experiment, we examined the ability to perceive constant depth when the spatial frequency and viewing distance both changed. To attain constancy in this situation, the visual system has to estimate distance. We investigated this ability when vergence, accommodation and vertical disparity are all presented accurately and therefore provided veridical information about viewing distance. We found that constancy is nearly complete across changes in viewing distance. Depth constancy is most complete when the scale of the depth relief is constant in the world rather than when it is constant in angular units at the retina. These results bear on the efficacy of algorithms for creating stereo content.This article is part of the themed issue 'Vision in our three-dimensional world'. PMID:27269596

  19. Stereoscopic depth constancy

    PubMed Central

    Guan, Phillip

    2016-01-01

    Depth constancy is the ability to perceive a fixed depth interval in the world as constant despite changes in viewing distance and the spatial scale of depth variation. It is well known that the spatial frequency of depth variation has a large effect on threshold. In the first experiment, we determined that the visual system compensates for this differential sensitivity when the change in disparity is suprathreshold, thereby attaining constancy similar to contrast constancy in the luminance domain. In a second experiment, we examined the ability to perceive constant depth when the spatial frequency and viewing distance both changed. To attain constancy in this situation, the visual system has to estimate distance. We investigated this ability when vergence, accommodation and vertical disparity are all presented accurately and therefore provided veridical information about viewing distance. We found that constancy is nearly complete across changes in viewing distance. Depth constancy is most complete when the scale of the depth relief is constant in the world rather than when it is constant in angular units at the retina. These results bear on the efficacy of algorithms for creating stereo content. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269596

  20. A new method to enlarge a range of continuously perceived depth in DFD (depth-fused 3D) display

    NASA Astrophysics Data System (ADS)

    Tsunakawa, Atsuhiro; Soumiya, Tomoki; Horikawa, Yuta; Yamamoto, Hirotsugu; Suyama, Shiro

    2013-03-01

    We address the problem in DFD displays that the maximum depth difference between the front and rear planes is limited, because when the separation becomes too large the front and rear images can no longer be fused into a single 3-D image. The range of continuously perceived depth was estimated as the depth difference between the front and rear planes was increased. When the distance was large enough, perceived depth was near the front plane at 0-40% of rear luminance and near the rear plane at 60-100% of rear luminance. This maximum depth range can be successfully enlarged by spatial-frequency modulation of the front and rear images. The change in the perceived-depth dependence was evaluated when the high-frequency components of the front and rear images were cut off using Fourier transformation at front-rear plane distances of 5 and 10 cm (4.9 and 9.4 minutes of arc). When the high-frequency components were not cut off sufficiently at the 5 cm distance, perceived depth separated toward either the front plane or the rear plane. However, when the images were blurred enough by cutting the high-frequency components, perceived depth showed a linear dependence on luminance ratio. When the images were not blurred at the 10 cm distance, perceived depth separated to near the front plane at 0-30% of rear luminance, near the rear plane at 80-100%, and near the midpoint at 40-70%. However, when the images were blurred enough, perceived depth again showed a linear dependence on luminance ratio.
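
    The linear luminance-ratio dependence reported above can be written as a simple interpolation between the two planes; the following is a descriptive sketch of that dependence, not the authors' model of depth fusion.

        # Descriptive sketch of the linear luminance-ratio dependence reported above:
        # when the planes fuse, perceived depth lies between the front and rear planes
        # in proportion to the rear image's share of the total luminance.
        def perceived_depth(front_cm, rear_cm, rear_luminance_fraction):
            r = min(max(rear_luminance_fraction, 0.0), 1.0)
            return front_cm + r * (rear_cm - front_cm)

        print(perceived_depth(0.0, 5.0, 0.5))   # midway between planes at 50% rear luminance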

  1. Deep depth undex simulator

    SciTech Connect

    Higginbotham, R. R.; Malakhoff, A.

    1985-01-29

    A deep depth underwater simulator is illustrated for determining the dual effects of nuclear type underwater explosion shockwaves and hydrostatic pressures on a test vessel while simulating, hydrostatically, that the test vessel is located at deep depths. The test vessel is positioned within a specially designed pressure vessel followed by pressurizing a fluid contained between the test and pressure vessels. The pressure vessel, with the test vessel suspended therein, is then placed in a body of water at a relatively shallow depth, and an explosive charge is detonated at a predetermined distance from the pressure vessel. The resulting shockwave is transmitted through the pressure vessel wall so that the shockwave impinging on the test vessel is representative of nuclear type explosive shockwaves transmitted to an underwater structure at great depths.

  2. Depth-encoded all-fiber swept source polarization sensitive OCT

    PubMed Central

    Wang, Zhao; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Lee, ByungKun; Choi, WooJhon; Potsaid, Benjamin; Liu, Jonathan; Jayaraman, Vijaysekhar; Cable, Alex; Kraus, Martin F.; Liang, Kaicheng; Hornegger, Joachim; Fujimoto, James G.

    2014-01-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of conventional OCT and can assess depth-resolved tissue birefringence in addition to intensity. Most existing PS-OCT systems are relatively complex and their clinical translation remains difficult. We present a simple and robust all-fiber PS-OCT system based on swept source technology and polarization depth-encoding. Polarization multiplexing was achieved using a polarization maintaining fiber. Polarization sensitive signals were detected using fiber based polarization beam splitters and polarization controllers were used to remove the polarization ambiguity. A simplified post-processing algorithm was proposed for speckle noise reduction relaxing the demand for phase stability. We demonstrated systems design for both ophthalmic and catheter-based PS-OCT. For ophthalmic imaging, we used an optical clock frequency doubling method to extend the imaging range of a commercially available short cavity light source to improve polarization depth-encoding. For catheter based imaging, we demonstrated 200 kHz PS-OCT imaging using a MEMS-tunable vertical cavity surface emitting laser (VCSEL) and a high speed micromotor imaging catheter. The system was demonstrated in human retina, finger and lip imaging, as well as ex vivo swine esophagus and cardiovascular imaging. The all-fiber PS-OCT is easier to implement and maintain compared to previous PS-OCT systems and can be more easily translated to clinical applications due to its robust design. PMID:25401008

  3. Imaging medical imaging

    NASA Astrophysics Data System (ADS)

    Journeau, P.

    2015-03-01

    This paper presents progress on imaging the research field of Imaging Informatics, mapped as the clustering of its communities together with their main results by applying a process to produce a dynamical image of the interactions between their results and their common object(s) of research. The basic side draws from a fundamental research on the concept of dimensions and projective space spanning several streams of research about three-dimensional perceptivity and re-cognition and on their relation and reduction to spatial dimensionality. The application results in an N-dimensional mapping in Bio-Medical Imaging, with dimensions such as inflammatory activity, MRI acquisition sequencing, spatial resolution (voxel size), spatiotemporal dimension inferred, toxicity, depth penetration, sensitivity, temporal resolution, wave length, imaging duration, etc. Each field is represented through the projection of papers' and projects' `discriminating' quantitative results onto the specific N-dimensional hypercube of relevant measurement axes, such as listed above and before reduction. Past published differentiating results are represented as red stars, achieved unpublished results as purple spots and projects at diverse progress advancement levels as blue pie slices. The goal of the mapping is to show the dynamics of the trajectories of the field in its own experimental frame and their direction, speed and other characteristics. We conclude with an invitation to participate and show a sample mapping of the dynamics of the community and a tentative predictive model from community contribution.

  4. Depth Optimization Study

    DOE Data Explorer

    Kawase, Mitsuhiro

    2009-11-22

    The zipped file contains a directory of data and routines used in the NNMREC turbine depth optimization study (Kawase et al., 2011), and calculation results thereof. For further info, please contact Mitsuhiro Kawase at kawase@uw.edu. Reference: Mitsuhiro Kawase, Patricia Beba, and Brian Fabien (2011), Finding an Optimal Placement Depth for a Tidal In-Stream Conversion Device in an Energetic, Baroclinic Tidal Channel, NNMREC Technical Report.

  5. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived-depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard cinematic storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes differ between individual stereoscopic displays and that viewers can cope with a much larger perceived depth range when viewing stereoscopic cinematography than with static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography.
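
    The fixed depth-mapping approach evaluated above amounts to linearly compressing the scene depth range into a predefined, comfortable display depth budget; the dynamic variant adjusts that budget per shot. The sketch below shows only the generic fixed mapping, with all ranges as illustrative parameters.

        # Generic fixed depth-mapping step of the kind evaluated above: scene depths are
        # linearly compressed into a comfortable display depth budget. The paper's dynamic
        # method adjusts (display_near, display_far) per shot; this sketch keeps them fixed.
        import numpy as np

        def map_scene_depth(z_scene, scene_near, scene_far, display_near, display_far):
            t = (np.asarray(z_scene, dtype=np.float64) - scene_near) / (scene_far - scene_near)
            t = np.clip(t, 0.0, 1.0)
            return display_near + t * (display_far - display_near)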

  6. Capturing Motion and Depth Before Cinematography.

    PubMed

    Wade, Nicholas J

    2016-01-01

    Visual representations of biological states have traditionally faced two problems: they lacked motion and depth. Attempts were made to supply these wants over many centuries, but the major advances were made in the early-nineteenth century. Motion was synthesized by sequences of slightly different images presented in rapid succession and depth was added by presenting slightly different images to each eye. Apparent motion and depth were combined some years later, but they tended to be applied separately. The major figures in this early period were Wheatstone, Plateau, Horner, Duboscq, Claudet, and Purkinje. Others later in the century, like Marey and Muybridge, were stimulated to extend the uses to which apparent motion and photography could be applied to examining body movements. These developments occurred before the birth of cinematography, and significant insights were derived from attempts to combine motion and depth. PMID:26684420

  7. Learning joint intensity-depth sparse representations.

    PubMed

    Tosic, Ivana; Drewes, Sarah

    2014-05-01

    This paper presents a method for learning overcomplete dictionaries of atoms composed of two modalities that describe a 3D scene: 1) image intensity and 2) scene depth. We propose a novel joint basis pursuit (JBP) algorithm that finds related sparse features in two modalities using conic programming and we integrate it into a two-step dictionary learning algorithm. The JBP differs from related convex algorithms because it finds joint sparsity models with different atoms and different coefficient values for intensity and depth. This is crucial for recovering generative models where the same sparse underlying causes (3D features) give rise to different signals (intensity and depth). We give a bound for recovery error of sparse coefficients obtained by JBP, and show numerically that JBP is superior to the group lasso algorithm. When applied to the Middlebury depth-intensity database, our learning algorithm converges to a set of related features, such as pairs of depth and intensity edges or image textures and depth slants. Finally, we show that JBP outperforms state of the art methods on depth inpainting for time-of-flight and Microsoft Kinect 3D data. PMID:24723574

  8. Radon depth migration

    SciTech Connect

    Hildebrand, S.T. ); Carroll, R.J. )

    1993-02-01

    A depth migration method is presented that uses Radon-transformed common-source seismograms as input. It is shown that the Radon depth migration method can be extended to spatially varying velocity depth models by using asymptotic ray theory (ART) to construct wavefield continuation operators. These operators downward continue an incident receiver-array plane wave and an assumed point-source wavefield into the subsurface. The migration velocity model is constrained to have longer characteristic wavelengths than the dominant source wavelength such that the ART approximations for the continuation operators are valid. This method is used successfully to migrate two synthetic data examples: (1) a point diffractor, and (2) a dipping layer and syncline interface model. It is shown that the Radon migration method has a computational advantage over the standard Kirchhoff migration method in that fewer rays are computed in a main-memory implementation.
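
    The input to such a scheme is the Radon (slant-stack, tau-p) transform of each common-source gather. A minimal nearest-sample slant stack is sketched below for orientation; it is not the migration operator itself, and the units and parameter names are illustrative.

        # Minimal slant-stack (linear Radon, tau-p) transform of a common-source gather,
        # the kind of input the migration above starts from. Nearest-sample shifts only;
        # offsets in metres, slownesses in s/m, dt in seconds.
        import numpy as np

        def slant_stack(gather, offsets, slownesses, dt):
            """gather: (n_traces, n_samples); returns (n_slowness, n_samples) tau-p panel."""
            n_tr, n_t = gather.shape
            panel = np.zeros((len(slownesses), n_t))
            for ip, p in enumerate(slownesses):
                for tr in range(n_tr):
                    shift = int(round(p * offsets[tr] / dt))
                    if 0 <= shift < n_t:
                        panel[ip, : n_t - shift] += gather[tr, shift:]
            return panel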

  9. Polarization lidar for shallow water depth measurement.

    PubMed

    Mitchell, Steven; Thayer, Jeffrey P; Hayman, Matthew

    2010-12-20

    A bathymetric, polarization lidar system transmitting at 532 nm and using a single photomultiplier tube is employed for applications of shallow water depth measurement. The technique exploits polarization attributes of the probed water body to isolate surface and floor returns, enabling constant fraction detection schemes to determine depth. The minimum resolvable water depth is no longer dictated by the system's laser or detector pulse width and can achieve better than an order of magnitude improvement over current water depth determination techniques. In laboratory tests, an Nd:YAG microchip laser coupled with polarization optics, a photomultiplier tube, a constant fraction discriminator, and a time-to-digital converter are used to target various water depths with an ice floor to simulate a glacial meltpond. Measurements of 1 cm water depths with an uncertainty of ±3 mm are demonstrated using the technique. This novel approach enables new designs of laser bathymetry systems for shallow depth determination from remote platforms while not compromising deep water depth measurement. PMID:21173834
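
    The depth measurement itself reduces to the time separation between the surface and floor returns travelling down and back through water, d = c·Δt/(2n); a 1 cm depth corresponds to roughly 90 ps of separation, which is why timing precision rather than pulse width sets the resolution. The numbers below are illustrative.

        # Depth from the time separation between surface and floor returns:
        # d = c * dt / (2 * n_water).
        C = 2.998e8          # speed of light in vacuum, m/s
        N_WATER = 1.33       # refractive index of water

        def water_depth_m(delta_t_s):
            return C * delta_t_s / (2.0 * N_WATER)

        print(water_depth_m(89e-12))   # ~0.01 m, i.e. about 1 cm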

  10. Improved Boundary Layer Depth Retrievals from MPLNET

    NASA Technical Reports Server (NTRS)

    Lewis, Jasper R.; Welton, Ellsworth J.; Molod, Andrea M.; Joseph, Everette

    2013-01-01

    Continuous lidar observations of the planetary boundary layer (PBL) depth have been made at the Micropulse Lidar Network (MPLNET) site in Greenbelt, MD since April 2001. However, because of issues with the operational PBL depth algorithm, the data are not reliable for determining seasonal and diurnal trends. Therefore, an improved PBL depth algorithm has been developed which uses a combination of the wavelet technique and image processing. The new algorithm is less susceptible to contamination by clouds and residual layers, and in general, produces lower PBL depths. A 2010 comparison shows the operational algorithm overestimates the daily mean PBL depth when compared to the improved algorithm (1.85 and 1.07 km, respectively). The improved MPLNET PBL depths are validated using radiosonde comparisons, which suggest that the algorithm performs well in determining the depth of a fully developed PBL. A comparison with the Goddard Earth Observing System-version 5 (GEOS-5) model suggests that the model may underestimate the maximum daytime PBL depth by 410 m during the spring and summer. The best agreement between MPLNET and GEOS-5 occurred during the fall, and they differed the most in the winter.
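
    The core of many lidar PBL-depth retrievals, including the wavelet part of the algorithm described above, is the Haar wavelet covariance transform, which peaks where the range-corrected backscatter drops sharply (the PBL top). The sketch below shows only that step for a fixed dilation on a uniform altitude grid; the cloud and residual-layer screening and the image-processing refinements of the improved algorithm are not included.

        # Haar wavelet covariance transform often used for lidar PBL-depth retrieval:
        # the covariance peaks where backscatter drops sharply, i.e. near the PBL top.
        import numpy as np

        def haar_covariance(profile, altitudes, dilation_m):
            """profile: range-corrected backscatter; altitudes: uniform grid, metres."""
            dz = altitudes[1] - altitudes[0]            # assume a uniform altitude grid
            wct = np.zeros_like(profile, dtype=float)
            for i, b in enumerate(altitudes):
                haar = np.where(np.abs(altitudes - b) > dilation_m / 2, 0.0,
                                np.where(altitudes <= b, 1.0, -1.0))
                wct[i] = np.sum(profile * haar) * dz / dilation_m
            return wct    # candidate PBL top ~ altitude of the maximum of wct

        # pbl_height = altitudes[np.argmax(haar_covariance(profile, altitudes, 300.0))]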

  11. A neural representation of depth from motion parallax in macaque visual cortex.

    PubMed

    Nadler, Jacob W; Angelaki, Dora E; DeAngelis, Gregory C

    2008-04-01

    Perception of depth is a fundamental challenge for the visual system, particularly for observers moving through their environment. The brain makes use of multiple visual cues to reconstruct the three-dimensional structure of a scene. One potent cue, motion parallax, frequently arises during translation of the observer because the images of objects at different distances move across the retina with different velocities. Human psychophysical studies have demonstrated that motion parallax can be a powerful depth cue, and motion parallax seems to be heavily exploited by animal species that lack highly developed binocular vision. However, little is known about the neural mechanisms that underlie this capacity. Here we show, by using a virtual-reality system to translate macaque monkeys (Macaca mulatta) while they viewed motion parallax displays that simulated objects at different depths, that many neurons in the middle temporal area (area MT) signal the sign of depth (near versus far) from motion parallax in the absence of other depth cues. To achieve this, neurons must combine visual motion with extra-retinal (non-visual) signals related to the animal's movement. Our findings suggest a new neural substrate for depth perception and demonstrate a robust interaction of visual and non-visual cues in area MT. Combined with previous studies that implicate area MT in depth perception based on binocular disparities, our results suggest that area MT contains a more general representation of three-dimensional space that makes use of multiple cues. PMID:18344979

  12. Real-time depth monitoring and control of laser machining through scanning beam delivery system

    NASA Astrophysics Data System (ADS)

    Ji, Yang; Grindal, Alexander W.; Webster, Paul J. L.; Fraser, James M.

    2015-04-01

    Scanning optics enable many laser applications in manufacturing because their low inertia allows rapid movement of the process beam across the sample. We describe our method of inline coherent imaging for real-time (up to 230 kHz) micron-scale (7-8 µm axial resolution) tracking and control of laser machining depth through a scanning galvo-telecentric beam delivery system. For 1 cm trench etching in stainless steel, we collect high speed intrapulse and interpulse morphology which is useful for further understanding underlying mechanisms or comparison with numerical models. We also collect overall sweep-to-sweep depth penetration which can be used for feedback depth control. For trench etching in silicon, we show the relationship of etch rate with average power and scan speed by computer processing of depth information without destructive sample post-processing. We also achieve three-dimensional infrared continuous wave (modulated) laser machining of a 3.96 × 3.96 × 0.5 mm3 (length × width × maximum depth) pattern on steel with depth feedback. To the best of our knowledge, this is the first successful demonstration of direct real-time depth monitoring and control of laser machining with scanning optics.

  13. Depth-encoded synthetic aperture optical coherence tomography of biological tissues with extended focal depth.

    PubMed

    Mo, Jianhua; de Groot, Mattijs; de Boer, Johannes F

    2015-02-23

    Optical coherence tomography (OCT) has proven to be able to provide three-dimensional (3D) volumetric images of scattering biological tissues for in vivo medical diagnostics. Unlike conventional optical microscopy, its depth-resolving ability (axial resolution) is exclusively determined by the laser source and is therefore invariant over the full imaging depth. In contrast, its transverse resolution is determined by the objective's numerical aperture and the wavelength, and is only approximately maintained over twice the Rayleigh range. However, the prevailing laser sources for OCT allow image depths of more than 5 mm, which is considerably longer than the Rayleigh range. This limits high transverse resolution imaging with OCT. Previously, we reported a novel method to extend the depth-of-focus (DOF) of OCT imaging [Mo et al., Opt. Express 21, 10048 (2013)]. The approach is to create three different optical apertures via pupil segmentation with an annular phase plate. These three optical apertures produce three OCT images from the same sample, which are encoded to different depth positions in a single OCT B-scan. This allows for correcting the defocus-induced wavefront curvature in the pupil so as to improve the focus. As a consequence, the three images originating from those three optical apertures can be used to reconstruct a new image with an extended DOF. In this study, we successfully applied this method for the first time to both an artificial phantom and biological tissues over a four times larger depth range. The results demonstrate a significant DOF improvement, paving the way for 3D high resolution OCT imaging beyond the conventional Rayleigh range. PMID:25836528
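
    For reference, the conventional focal depth that this pupil-segmentation approach extends is set by the Rayleigh range of the focused beam; the standard Gaussian-beam relations (general optics, not a result of this paper) are

    \[
    z_R = \frac{\pi w_0^2}{\lambda}, \qquad \mathrm{DOF} \approx 2 z_R \approx \frac{2\lambda}{\pi\,\mathrm{NA}^2},
    \]

    where \(w_0\) is the beam-waist radius and NA the effective numerical aperture of the sample arm, which is why tight transverse focusing ordinarily comes at the cost of a short in-focus depth range.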

  14. Crack depth determination with inductive thermography

    NASA Astrophysics Data System (ADS)

    Oswald-Tranta, B.; Schmidt, R.

    2015-05-01

    Castings, forgings and other steel products are nowadays usually tested with magnetic particle inspection in order to detect surface cracks. An alternative method is active thermography with inductive heating, which is quicker, can be well automated and, as presented in this paper, even allows the depth of a crack to be estimated. The induced eddy current, due to its very small penetration depth in ferromagnetic materials, flows around a surface crack, heating it selectively. The surface temperature is recorded during and after the short inductive heating pulse with an infrared camera. Using Fourier transformation, the whole IR image sequence is evaluated and the phase image is processed to detect surface cracks. The level and the local distribution of the phase around a crack correspond to its depth. Analytical calculations were used to model the signal distribution around cracks of different depths, and a relationship has been derived between the depth of a crack and its phase value. Additionally, the influence of the heating pulse duration has also been investigated. Samples with artificial and with natural cracks have been tested. Results are presented comparing the calculated and measured phase values depending on the crack depth. Keywords: inductive heating, eddy current, infrared
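
    The selective heating of surface-breaking cracks relies on the small electromagnetic skin depth of ferromagnetic steel at the induction frequency; the standard relation (general electromagnetism, not a result of this paper) is

    \[
    \delta = \frac{1}{\sqrt{\pi f \mu_0 \mu_r \sigma}},
    \]

    so that, for example, at frequencies around 100 kHz with a relative permeability of order 100 and a conductivity of a few MS/m, \(\delta\) is only on the order of tens of micrometres, forcing the induced eddy current to flow around surface cracks rather than underneath them.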

  15. Motion parallax thresholds for unambiguous depth perception.

    PubMed

    Holmin, Jessica; Nawrot, Mark

    2015-10-01

    The perception of unambiguous depth from motion parallax arises from the neural integration of retinal image motion and extra-retinal eye movement signals. It is only recently that these parameters have been articulated in the form of the motion/pursuit ratio. In the current study, we explored the lower limits of the parameter space in which observers could accurately perform near/far relative depth-sign discriminations for a translating random-dot stimulus. Stationary observers pursued a translating random dot stimulus containing relative image motion. Their task was to indicate the location of the peak in an approximate square-wave stimulus. We measured thresholds for depth from motion parallax, quantified as motion/pursuit ratios, as well as lower motion thresholds and pursuit accuracy. Depth thresholds were relatively stable at pursuit velocities 5-20 deg/s, and increased at lower and higher velocities. The pattern of results indicates that minimum motion/pursuit ratios are limited by motion and pursuit signals, both independently and in combination with each other. At low and high pursuit velocities, depth thresholds were limited by inaccurate pursuit signals. At moderate pursuit velocities, depth thresholds were limited by motion signals. PMID:26232612
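
    The motion/pursuit ratio referred to above is commonly written as the ratio of retinal image velocity to pursuit eye velocity, which to first order equals the relative depth of the target scaled by the fixation distance:

    \[
    \frac{M}{P} \;=\; \frac{d\theta/dt}{d\alpha/dt} \;\approx\; \frac{d}{f},
    \]

    where \(d\theta/dt\) is the local retinal image motion, \(d\alpha/dt\) the smooth-pursuit velocity, \(d\) the depth of the target relative to fixation, and \(f\) the viewing distance. This is a standard statement of the motion/pursuit law; the exact form used in the experiments above may differ.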

  16. The construction of landslides achieves by using 1969 CORONA (KH-4B) image and aerial photos- A case study of the catchment of Te-chi reservoir

    NASA Astrophysics Data System (ADS)

    Jen, Chia-Hung; Dirk, Wenske; Lin, Jiun-Chuan; Böse, Margot

    2010-05-01

    Landslides are a common phenomenon in Taiwan owing to the extreme climate, intensive tectonic movement and highly fractured bedrock. In the study of landslides, building a historical archive is critical for both long-term monitoring and landform evolution research. For the first three decades after the 1950s, only few maps and written documents are available for the high mountain areas, so historical remote sensing data can be a viable way to obtain detailed information about human activities and landscape reaction in terms of increasing denudation. In this study, we use different kinds of data to identify landslides, including CORONA imagery of 1969, an ortho-rectified aerial photo map of 1980 and ortho-rectified aerial photos of 2004. The historical CORONA imagery can be orthorectified and georeferenced and can therefore be used as a source of data for landslide identification and landslide archive construction. The study area is in the upper catchment of the Ta-chia River. This area is the homeland of the Taiyal aboriginal tribe. The Ta-chia River is "Taiwan's TVA" in terms of its vast hydroelectric power potential. The rough terrain makes accessibility very difficult, isolating the upper Ta-chia basin from the rest of Taiwan's densely populated areas. The construction of the Central Cross-Island Highway officially started in July 1956 and was completed in May 1960. It connects the towns of Tong-shi in the west and Taroko in the east, across the upper Ta-chia basin. There are branches off to the town of Pu-li in the south and I-lan in the north, so the upper Ta-chia basin becomes the pivotal node for cross-island traffic in four directions. Apart from its military purposes, the Central Cross-Island Highway has had a substantial impact on the mountainous areas of the upper Ta-chia basin, the most important aspect being the increase of population and farming. The rough terrain makes human accessibility very low, so the upper Ta-chia basin is isolated from the rest of the densely populated

  17. Variable depth core sampler

    DOEpatents

    Bourgeois, Peter M.; Reger, Robert J.

    1996-01-01

    A variable depth core sampler apparatus comprising a first circular hole saw member, having longitudinal sections that collapse to form a point and capture a sample, and a second circular hole saw member residing inside said first hole saw member to support the longitudinal sections of said first hole saw member and prevent them from collapsing to form a point. The second hole saw member may be raised and lowered inside said first hole saw member.

  18. Variable depth core sampler

    DOEpatents

    Bourgeois, P.M.; Reger, R.J.

    1996-02-20

    A variable depth core sampler apparatus is described comprising a first circular hole saw member, having longitudinal sections that collapse to form a point and capture a sample, and a second circular hole saw member residing inside said first hole saw member to support the longitudinal sections of said first hole saw member and prevent them from collapsing to form a point. The second hole saw member may be raised and lowered inside said first hole saw member. 7 figs.

  19. Burn Depth Monitor

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Supra Medical Systems is successfully marketing a device that detects the depth of burn wounds in human skin. To develop the product, the company used technology developed by NASA Langley physicists looking for better ultrasonic detection of small air bubbles and cracks in metal. The device is being marketed to burn wound analysis and treatment centers. Through a Space Act agreement, NASA and the company are also working to further develop ultrasonic instruments for new medical applications.

  20. Burn Depth Monitor

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Supra Medical Systems is successfully marketing a device that detects the depth of burn wounds in human skin. To develop the product, the company used technology developed by NASA Langley physicists looking for better ultrasonic detection of small air bubbles and cracks in metal. The device is being marketed to burn wound analysis and treatment centers. Through a Space Act agreement, NASA and the company are also working to further develop ultrasonic instruments for new medical applications.

  1. Burn Depth Monitor

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Supra Medical Systems is successfully marketing a device that detects the depth of burn wounds in human skin. To develop the product, the company used technology developed by NASA Langley physicists looking for better ultrasonic detection of small air bubbles and cracks in metal. The device is being marketed to burn wound analysis and treatment centers. Through a Space Act agreement, NASA and the company are also working to further develop ultrasonic instruments for new medical applications.

  2. SIMS depth profiling of polymer blends with protein based drugs

    NASA Astrophysics Data System (ADS)

    Mahoney, Christine M.; Yu, Jinxiang; Fahey, Albert; Gardella, Joseph A.

    2006-07-01

    We report the results of the surface and in-depth characterization of two-component blend films of poly(L-lactic acid) (PLLA) and Pluronic surfactant [poly(ethylene oxide) (A) poly(propylene oxide) (B) ABA block copolymer]. These blend systems are of particular importance for protein drug delivery, where it is expected that the Pluronic surfactant will retain the activity of the protein drug and enhance the biocompatibility of the device. Angle-dependent X-ray photoelectron spectroscopy (XPS) and time-of-flight secondary ion mass spectrometry (ToF-SIMS) employing an SF₅⁺ polyatomic primary ion source were both used for monitoring the surfactant's concentration as a function of depth. The results show an increased concentration of surfactant at the surface, where the surface segregation initially increases with increasing bulk concentration and then remains constant above 5% (w/w) Pluronic. This surface-segregated region is immediately followed by a depletion region, with a homogeneous mixture in the bulk of the film. These results suggest that the surfactant bulk concentration of the thin-film matrices for drug/protein delivery should be selected to achieve a relatively homogeneous distribution of stabilizer/protein in the PLLA matrix. Three-component blends of PLLA, Pluronic and insulin are also investigated. In the three-component blends, ToF-SIMS imaging shows the spatial distribution of surfactant/protein mixtures. These data are also reported as depth profiles.

  3. Variable depth core sampler

    SciTech Connect

    Bourgeois, P.M.; Reger, R.J.

    1994-12-31

    This invention relates to a sampling means, more particularly to a device to sample hard surfaces at varying depths. Often it is desirable to take samples of a hard surface wherein the samples are of the same diameter but of varying depths. Current practice requires that a full top-to-bottom sample of the material be taken, using a hole saw, and boring a hole from one end of the material to the other. The sample thus taken is removed from the hole saw and the middle of said sample is then subjected to further investigation. This paper describes a variable depth core sampler comprising a circular hole saw member, having longitudinal sections that collapse to form a point and capture a sample, and a second hole saw member residing inside the first hole saw member to support the longitudinal sections of the first member and prevent them from collapsing to form a point. The second hole saw member may be raised and lowered inside the first hole saw member.

  4. Focus cues affect perceived depth

    PubMed Central

    Watt, Simon J.; Akeley, Kurt; Ernst, Marc O.; Banks, Martin S.

    2007-01-01

    Depth information from focus cues—accommodation and the gradient of retinal blur—is typically incorrect in three-dimensional (3-D) displays because the light comes from a planar display surface. If the visual system incorporates information from focus cues into its calculation of 3-D scene parameters, this could cause distortions in perceived depth even when the 2-D retinal images are geometrically correct. In Experiment 1 we measured the direct contribution of focus cues to perceived slant by varying independently the physical slant of the display surface and the slant of a simulated surface specified by binocular disparity (binocular viewing) or perspective/texture (monocular viewing). In the binocular condition, slant estimates were unaffected by display slant. In the monocular condition, display slant had a systematic effect on slant estimates. Estimates were consistent with a weighted average of slant from focus cues and slant from disparity/texture, where the cue weights are determined by the reliability of each cue. In Experiment 2, we examined whether focus cues also have an indirect effect on perceived slant via the distance estimate used in disparity scaling. We varied independently the simulated distance and the focal distance to a disparity-defined 3-D stimulus. Perceived slant was systematically affected by changes in focal distance. Accordingly, depth constancy (with respect to simulated distance) was significantly reduced when focal distance was held constant compared to when it varied appropriately with the simulated distance to the stimulus. The results of both experiments show that focus cues can contribute to estimates of 3-D scene parameters. Inappropriate focus cues in typical 3-D displays may therefore contribute to distortions in perceived space. PMID:16441189

  5. Real-time viewpoint image synthesis using strips of multi-camera images

    NASA Astrophysics Data System (ADS)

    Date, Munekazu; Takada, Hideaki; Kojima, Akira

    2015-03-01

    We present a real-time viewpoint image generation method. Video communications with a high sense of reality are needed to make natural connections between users at different places. One of the key technologies for achieving a high sense of reality is image generation corresponding to an individual user's viewpoint. However, generating viewpoint images requires advanced image processing, which is usually too computationally heavy for real-time and low-latency use. In this paper we propose a real-time viewpoint image generation method using simple blending of multiple camera images taken at equal horizontal intervals and convergence obtained by using approximate information of an object's depth. An image generated from the nearest camera images is visually perceived as an intermediate viewpoint image due to the visual effect of depth-fused 3D (DFD). If the viewpoint is not on the line of the camera array, a viewpoint image can be generated by region splitting. We made a prototype viewpoint image generation system and achieved real-time full-frame operation for stereo HD videos. Users can see their individual viewpoint image for left-and-right and back-and-forth movement toward the screen. Our algorithm is very simple and promising as a means for achieving video communication with high reality.
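
    As a rough illustration of the blending idea described above, the sketch below cross-fades the two camera images nearest to the requested viewpoint, assuming cameras placed at equal horizontal intervals; the function and parameter names are illustrative assumptions, not the authors' implementation.

      # Hypothetical sketch: approximate an intermediate viewpoint by blending
      # the two nearest camera images (cameras at equal horizontal spacing).
      import numpy as np

      def intermediate_view(cameras, viewpoint_x, spacing):
          """cameras: list of HxWx3 float arrays ordered left to right.
          viewpoint_x: desired viewpoint position along the camera baseline.
          spacing: horizontal distance between adjacent cameras."""
          pos = viewpoint_x / spacing                       # position in camera-index units
          i = int(np.clip(np.floor(pos), 0, len(cameras) - 2))
          w = float(np.clip(pos - i, 0.0, 1.0))             # fractional distance to camera i+1
          # Simple cross-fade; the depth-fused 3D (DFD) effect makes this blend
          # appear as an intermediate viewpoint to the viewer.
          return (1.0 - w) * cameras[i] + w * cameras[i + 1]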

  6. Investigating the San Andreas Fault System in the Northern Salton Trough by a Combination of Seismic Tomography and Pre-stack Depth Migration: Results from the Salton Seismic Imaging Project (SSIP)

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Ryberg, T.; Fuis, G. S.; Goldman, M.; Catchings, R.; Rymer, M. J.; Hole, J. A.; Stock, J. M.

    2013-12-01

    The Salton Trough in southern California is a tectonically active pull-apart basin which was formed in migrating step-overs between strike-slip faults, of which the San Andreas fault (SAF) and the Imperial fault are current examples. It is located within the large-scale transition between the onshore SAF strike-slip system to the north and the marine rift system of the Gulf of California to the south. Crustal stretching and sinking formed the distinct topographic features and sedimentary successions of the Salton Trough. The active SAF and related fault systems can produce potentially large damaging earthquakes. The Salton Seismic Imaging Project (SSIP), funded by NSF and USGS, was undertaken to generate seismic data and images to improve the knowledge of fault geometry and seismic velocities within the sedimentary basins and underlying crystalline crust around the SAF in this key region. The results from these studies are required as input for modeling of earthquake scenarios and prediction of strong ground motion in the surrounding populated areas and cities. We present seismic data analysis and results from tomography and pre-stack depth migration for a number of seismic profiles (Lines 1, 4-7) covering mainly the northern Salton Trough. The controlled-source seismic data were acquired in 2011. The seismic lines have lengths ranging from 37 to 72 km. On each profile, 9-17 explosion sources with charges of 110-460 kg were recorded by 100-m spaced vertical component receivers. On Line 7, additional OBS data were acquired within the Salton Sea. Travel times of first arrivals were picked and inverted for initial 1D velocity models. Alternatively, the starting models were derived from the crustal-scale velocity models developed by the Southern California Earthquake Center. The final 2D velocity models were obtained using the algorithm of Hole (1992; JGR). We have also tested the tomography packages FAST and SIMUL2000, resulting in similar velocity structures. An

  7. Achieving Success in Small Business. A Self-Instruction Program for Small Business Owner-Managers. Creating an Effective Business Image.

    ERIC Educational Resources Information Center

    Virginia Polytechnic Inst. and State Univ., Blacksburg. Div. of Vocational-Technical Education.

    This self-instructional module on creating an effective business image is the fourth in a set of twelve modules designed for small business owner-managers. Competencies for this module are (1) identify the key factors which contribute to formation of a business image and (2) assess your current image and determine if it communicates the…

  8. Boundary Depth Information Using Hopfield Neural Network

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Wang, Ruisheng

    2016-06-01

    Depth information is widely used for representation, reconstruction and modeling of 3D scenes. Generally, two kinds of methods can obtain depth information. One uses the distance cues from a depth camera, but the results depend heavily on the device, and the accuracy degrades greatly as the distance to the object increases. The other uses binocular cues from matching to obtain depth information. Stereo matching methods have become an increasingly mature and convenient way to collect depth information for different scenes. In the objective function, the data term ensures that the difference between matched pixels is small, and the smoothness term smooths neighbors with different disparities. Nonetheless, the smoothness term blurs the boundary depth information of the object, which is a bottleneck of stereo matching. This paper proposes a novel energy function for the boundary to keep the discontinuities and uses a Hopfield neural network to solve the optimization. We first extract the regions of interest, which are the boundary pixels in the original images. Then, we develop the boundary energy function to calculate the matching cost. Finally, we solve the optimization globally with the Hopfield neural network. The Middlebury stereo benchmark is used to test the proposed method, and results show that our boundary depth information is more accurate than other state-of-the-art methods and can be used to optimize the results of other stereo matching methods.
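
    For reference, a minimal sketch of the generic stereo matching energy described above (a data term plus a disparity-smoothness term) is given below; it is not the paper's Hopfield or boundary formulation, and the weight lam is an assumed trade-off parameter.

      # Generic stereo matching energy: data term + smoothness term.
      import numpy as np

      def matching_energy(left, right, disparity, lam=0.1):
          """left, right: HxW grayscale float images; disparity: HxW integer map."""
          h, w = left.shape
          cols = np.arange(w)
          data = 0.0
          for y in range(h):
              matched = np.clip(cols - disparity[y], 0, w - 1)     # matched pixel positions
              data += np.sum(np.abs(left[y] - right[y, matched]))  # data term
          # Smoothness term: penalize disparity jumps between neighbouring pixels.
          smooth = np.sum(np.abs(np.diff(disparity, axis=0))) + \
                   np.sum(np.abs(np.diff(disparity, axis=1)))
          return data + lam * smooth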

  9. Design of an optical system with large depth of field using in the micro-assembly

    NASA Astrophysics Data System (ADS)

    Li, Rong; Chang, Jun; Zhang, Zhi-jing; Ye, Xin; Zheng, Hai-jing

    2013-08-01

    Micro systems are currently a mainstream application and demand in the field of micro fabrication for civilian and national defense use. Compared with macro assembly, the requirements on the positioning accuracy of a micro-assembly system are much higher. Usually the dimensions of the components in micro-assembly are between a few microns and several hundred microns, and the required assembly precision is generally at the sub-micron level. Micro system assembly is currently the bottleneck of micro fabrication. The optical stereo microscope used in micro assembly can achieve high-resolution imaging, but the depth of field of the optical imaging system is too small. This is not conducive to three-dimensional observation during micro-assembly. This paper first summarizes the development of micro system assembly domestically and abroad. Based on the study of the core features of the technology, a scheme is proposed which uses wavefront coding technology to increase the depth of field of the optical imaging system. In wavefront coding, by combining traditional optical design with digital image processing, the depth of field can be greatly increased; moreover, all defocus-related aberrations, such as spherical aberration, chromatic aberration, astigmatism, Petzval (field) curvature, distortion, and other defocus induced by assembly errors and temperature change, can be corrected or minimized. In this paper, based on the study of theory, a set of optical microscopy imaging systems is designed. The system is designed and optimized with the optical design software CODE V and ZEMAX. Finally, the imaging results of the traditional optical stereo microscope and the optical stereo microscope with wavefront coding technology are compared. The results show that the method is practically operable and that the optimized phase plate has a good effect on improving the imaging quality and increasing the

  10. Depth profiling of gold nanoparticles and characterization of point spread functions in reconstructed and human skin using multiphoton microscopy.

    PubMed

    Labouta, Hagar I; Hampel, Martina; Thude, Sibylle; Reutlinger, Katharina; Kostka, Karl-Heinz; Schneider, Marc

    2012-01-01

    Multiphoton microscopy has become popular in studying dermal nanoparticle penetration. This necessitates studying the imaging parameters of multiphoton microscopy in skin as an imaging medium, in terms of achievable detection depths and the resolution limit. This would simulate real-case scenarios rather than depending on theoretical values determined under ideal conditions. This study has focused on depth profiling of sub-resolution gold nanoparticles (AuNP) in reconstructed (fixed and unfixed) and human skin using multiphoton microscopy. Point spread functions (PSF) were determined for the used water-immersion objective of 63×/NA = 1.2. Factors such as skin-tissue compactness and the presence of wrinkles were found to deteriorate the accuracy of depth profiling. A broad range of AuNP detectable depths (20-100 μm) in reconstructed skin was observed. AuNP could only be detected up to ∼14 μm depth in human skin. Lateral (0.5 ± 0.1 μm) and axial (1.0 ± 0.3 μm) PSF in reconstructed and human specimens were determined. Skin cells and intercellular components didn't degrade the PSF with depth. In summary, the imaging parameters of multiphoton microscopy in skin and practical limitations encountered in tracking nanoparticle penetration using this approach were investigated. PMID:22147676

  11. Prestack depth migration for complex 2D structure using phase-screen propagators

    SciTech Connect

    Roberts, P.; Huang, Lian-Jie; Burch, C.; Fehler, M.; Hildebrand, S.

    1997-11-01

    We present results for the phase-screen propagator method applied to prestack depth migration of the Marmousi synthetic data set. The data were migrated as individual common-shot records and the resulting partial images were superposed to obtain the final complete image. Tests were performed to determine the minimum number of frequency components required to achieve the best quality image, and this in turn provided estimates of the minimum computing time. Running on a single-processor SUN SPARC Ultra I, high quality images were obtained in as little as 8.7 CPU hours and adequate images were obtained in as little as 4.4 CPU hours. Different methods were tested for choosing the reference velocity used for the background phase-shift operation and for defining the slowness perturbation screens. Although the depths of some of the steeply dipping, high-contrast features were shifted slightly, the overall image quality was fairly insensitive to the choice of the reference velocity. Our tests show the phase-screen method to be a reliable and fast algorithm for imaging complex geologic structures, at least for complex 2D synthetic data where the velocity model is known.
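
    A minimal sketch of a single split-step phase-screen extrapolation step for a monochromatic wavefield is shown below; the function and variable names are illustrative assumptions and do not reproduce the authors' migration code.

      # One split-step phase-screen step: background phase shift in the wavenumber
      # domain, followed by a slowness-perturbation screen in the space domain.
      import numpy as np

      def phase_screen_step(u, dx, dz, omega, c0, c_local):
          """u: complex wavefield sampled along x at the current depth.
          c0: reference (background) velocity; c_local: velocity along x at this depth."""
          nx = u.size
          kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
          kz = np.sqrt(((omega / c0) ** 2 - kx ** 2).astype(complex))    # evanescent parts decay
          u_bg = np.fft.ifft(np.fft.fft(u) * np.exp(1j * kz * dz))       # background phase shift
          screen = np.exp(1j * omega * (1.0 / c_local - 1.0 / c0) * dz)  # perturbation screen
          return u_bg * screen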

  12. A modified model of the just noticeable depth difference and its application to depth sensation enhancement.

    PubMed

    Jung, Seung-Won

    2013-10-01

    The just noticeable depth difference (JNDD) describes the threshold of human perception of differences in depth. In flat-panel-based three-dimensional (3-D) displays, the JNDD is typically measured by changing the depth difference between displayed image objects until the difference is perceivable. However, not only the depth but also the perceived size changes when the depth difference increases. In this paper, we present a modified JNDD measurement method that adjusts the physical size of the object such that the perceived size of the object is maintained. We then apply the proposed JNDD measurement method to depth sensation enhancement. When the depth difference between objects is increased to enable the viewer to perceive the depth difference, the size of the objects is adjusted to maintain their perceived size. In addition, since the size change of the objects can produce a hole region, a depth-adaptive hole-inpainting technique is proposed to compensate for the hole region with high accuracy. The experimental results demonstrate the effectiveness of the proposed method. PMID:23686954
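
    One simple way to reason about the size adjustment described above is through size constancy: perceived physical size is roughly the retinal size scaled by perceived distance, so the on-screen size can be rescaled by the ratio of the old and new perceived distances. The sketch below illustrates that relation only; it is an assumption for illustration, not the paper's actual adjustment rule.

      # Size-constancy sketch: keep perceived size constant when perceived distance changes.
      def screen_scale_factor(d_original, d_adjusted):
          """Perceived size ~ retinal size * perceived distance, so the drawn object
          is shrunk as it is pushed to a larger perceived distance."""
          return d_original / d_adjusted

      # Example: moving an object from 2.0 m to 2.5 m of perceived distance
      # means drawing it at 80% of its original on-screen size.
      scale = screen_scale_factor(2.0, 2.5)   # -> 0.8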

  13. Binocular disparity magnitude affects perceived depth magnitude despite inversion of depth order.

    PubMed

    Matthews, Harold; Hill, Harold; Palmisano, Stephen

    2011-01-01

    The hollow-face illusion involves a misperception of depth order: our perception follows our top-down knowledge that faces are convex, even though bottom-up depth information reflects the actual concave surface structure. While pictorial cues can be ambiguous, stereopsis should unambiguously indicate the actual depth order. We used computer-generated stereo images to investigate how, if at all, the sign and magnitude of binocular disparities affect the perceived depth of the illusory convex face. In experiment 1 participants adjusted the disparity of a convex comparison face until it matched a reference face. The reference face was either convex or hollow and had binocular disparities consistent with an average face or had disparities exaggerated, consistent with a face stretched in depth. We observed that apparent depth increased with disparity magnitude, even when the hollow faces were seen as convex (ie when perceived depth order was inconsistent with disparity sign). As expected, concave faces appeared flatter than convex faces, suggesting that disparity sign also affects perceived depth. In experiment 2, participants were presented with pairs of real and illusory convex faces. In each case, their task was to judge which of the two stimuli appeared to have the greater depth. Hollow faces with exaggerated disparities were again perceived as deeper. PMID:22132512

  14. Extended depth of field system for long distance iris acquisition

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Lin; Hsieh, Sheng-Hsun; Hung, Kuo-En; Yang, Shi-Wen; Li, Yung-Hui; Tien, Chung-Hao

    2012-10-01

    Using biometric signatures for identity recognition has been practiced for centuries. Recently, iris recognition systems have attracted much attention due to their high accuracy and high stability. The texture of the iris provides a signature that is unique for each subject. Currently most commercial iris recognition systems acquire images at distances of less than 50 cm, a serious constraint that needs to be overcome if they are to be used for airport access or entrances that require a high turnover rate. In order to capture iris patterns from a distance, in this study we developed a telephoto imaging system with image processing techniques. By using a cubic phase mask positioned in front of the camera, the point spread function was kept constant over a wide range of defocus. With an adequate decoding filter, the blurred image was restored, and a working distance between the subject and the camera of over 3 m was achieved with a 500 mm focal length and an aperture of F/6.3. The simulation and experimental results validated the proposed scheme: the depth of focus of the iris camera was extended threefold over traditional optics while keeping sufficient recognition accuracy.
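
    The sketch below illustrates the underlying wavefront-coding idea: a cubic phase term added to the pupil makes the simulated point spread function (PSF) change little with defocus. The aperture sampling, the phase strength alpha, and the quadratic defocus term are illustrative assumptions, not the actual system parameters.

      # Cubic phase mask PSF: |FFT of the pupil function|^2.
      import numpy as np

      def cubic_phase_psf(n=256, alpha=20.0, defocus=0.0):
          x = np.linspace(-1.0, 1.0, n)
          X, Y = np.meshgrid(x, x)
          aperture = (X**2 + Y**2) <= 1.0
          phase = alpha * (X**3 + Y**3) + defocus * (X**2 + Y**2)  # cubic mask + defocus
          pupil = aperture * np.exp(1j * phase)
          psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
          return psf / psf.sum()

      # Comparing cubic_phase_psf(defocus=0.0) with cubic_phase_psf(defocus=5.0)
      # shows a much smaller change than the same comparison with alpha = 0.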

  15. Real-mode depth-fused display with viewer tracking.

    PubMed

    Park, Soon-gi; Hong, Jong-Young; Lee, Chang-Kun; Lee, Byoungho

    2015-10-01

    A real-mode depth-fused display is proposed by employing an integral imaging method in a depth-fused display system with viewer tracking. By producing the depth-fusing effect between a transparent display and a floated planar two-dimensional image generated by the real-mode integral imaging method, a three-dimensional image is generated in front of the display plane, unlike conventional depth-fused displays. The viewing angle of the system is expanded with a viewer tracking method. In addition, dynamic vertical and horizontal motion parallax can be given according to the tracked position of the viewer. As the depth-fusing effect does not depend on the viewing distance, accommodation cues and motion parallax are provided for a wide range of viewing positions. We demonstrate the feasibility of the proposed method with an experimental system. PMID:26480184

  16. An iterative trilateral filter algorithm for depth map

    NASA Astrophysics Data System (ADS)

    Gao, Kai; Piao, Yan; Zhang, Jing-he

    2015-03-01

    The depth map is critical in a Free-viewpoint Television (FTV) system, and the quality of the reconstructed depth map affects the quality of the rendered view. A depth map obtained from a TOF camera not only contains large flat areas and sharp edges, but also contains a lot of noise. In order to decrease the noise while keeping the edges of the depth map accurate, an iterative trilateral filter is proposed in this paper by combining a bilateral filter with an introduced illumination-normal factor. The experimental results show that the proposed method can reduce the noise effectively and preserve the edges of the TOF depth map well.
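
    As a rough illustration of the trilateral idea (a bilateral filter on the depth map with one extra weighting term), the sketch below uses a generic guidance image as the third term; the guidance channel stands in for the paper's illumination-normal factor and all parameter values are assumptions.

      # Bilateral-style depth filter with an extra guidance weight ("trilateral").
      import numpy as np

      def trilateral_filter(depth, guide, radius=3, sigma_s=2.0, sigma_d=10.0, sigma_g=10.0):
          h, w = depth.shape
          out = np.zeros((h, w), dtype=np.float64)
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # spatial weight
          dpad = np.pad(depth.astype(np.float64), radius, mode='edge')
          gpad = np.pad(guide.astype(np.float64), radius, mode='edge')
          for y in range(h):
              for x in range(w):
                  dwin = dpad[y:y + 2*radius + 1, x:x + 2*radius + 1]
                  gwin = gpad[y:y + 2*radius + 1, x:x + 2*radius + 1]
                  wgt = spatial \
                      * np.exp(-(dwin - depth[y, x])**2 / (2 * sigma_d**2)) \
                      * np.exp(-(gwin - guide[y, x])**2 / (2 * sigma_g**2))
                  out[y, x] = np.sum(wgt * dwin) / np.sum(wgt)
          return out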

  17. A semi-automatic multi-view depth estimation method

    NASA Astrophysics Data System (ADS)

    Wildeboer, Meindert Onno; Fukushima, Norishige; Yendo, Tomohiro; Panahpour Tehrani, Mehrdad; Fujii, Toshiaki; Tanimoto, Masayuki

    2010-07-01

    In this paper, we propose a semi-automatic depth estimation algorithm whereby the user defines object depth boundaries and disparity initialization. Automatic depth estimation methods generally have difficulty obtaining good depth results around object edges and in areas with low texture. The goal of our method is to improve the depth in these areas and reduce view synthesis artifacts in Depth Image Based Rendering. Good view synthesis quality is very important in applications such as 3DTV and Free-viewpoint Television (FTV). In our proposed method, initial disparity values for smooth areas can be input through a so-called manual disparity map, and depth boundaries are defined by a manually created edge map which can be supplied for one or multiple frames. For evaluation we used MPEG multi-view videos, and we demonstrate that our algorithm can significantly improve the depth maps and reduce view synthesis artifacts.

  18. Molecular Depth Profiling by Wedged Crater Beveling

    PubMed Central

    Mao, Dan; Lu, Caiyan; Winograd, Nicholas; Wucher, Andreas

    2011-01-01

    Time-of-flight secondary ion mass spectrometry and atomic force microscopy are employed to characterize a wedge-shaped crater eroded by a 40 keV C60+ cluster ion beam on an organic film of Irganox 1010 doped with Irganox 3114 delta layers. From an examination of the resulting surface, information about depth resolution, topography and erosion rate can be obtained as a function of crater depth in a single experiment. It is shown that when measurements are performed at liquid nitrogen temperature, a constant erosion rate and reduced bombardment-induced surface roughness are observed. At room temperature, however, the erosion rate drops by ~1/3 during the removal of the 400 nm Irganox film and the roughness gradually increases from ~1 nm to ~4 nm. From SIMS lateral images of the beveled crater and AFM topography results, depth resolution was further improved by employing glancing angles of incidence and lower primary ion beam energy. Sub-10 nm depth resolution was observed under the optimized conditions on a routine basis. In general, we show that wedge-crater beveling is an important tool for elucidating the factors that govern molecular depth profiling experiments. PMID:21744861

  19. Bessel beam Grueneisen photoacoustic microscopy with extended depth of field

    NASA Astrophysics Data System (ADS)

    Shi, Junhui; Wang, Lidai; Noordam, Cedric; Wang, Lihong V.

    2016-03-01

    The short focal depth of a Gaussian beam limits the volumetric imaging speed of optical resolution photoacoustic microscopy (OR-PAM). A Bessel beam, which is diffraction-free, provides a long focal depth, but its side lobes may deteriorate image quality when the Bessel beam is directly employed to excite photoacoustic signals in OR-PAM. Here, we present a nonlinear approach based on the Grueneisen relaxation effect to suppress the side-lobe artifacts in photoacoustic imaging. This method extends the focal depth of OR-PAM and speeds up volumetric imaging. We experimentally demonstrated a 1-mm focal depth with a 7-μm lateral resolution and volumetrically imaged carbon fiber and red blood cell samples.

  20. Depth-resolved soft x-ray photoelectron emission microscopy in nanostructures via standing-wave excited photoemission

    SciTech Connect

    Kronast, F.; Ovsyannikov, R.; Kaiser, A.; Wiemann, C.; Yang, S.-H.; Locatelli, A.; Burgler, D.E.; Schreiber, R.; Salmassi, F.; Fischer, P.; Durr, H.A.; Schneider, C.M.; Eberhardt, W.; Fadley, C.S.

    2008-11-24

    We present an extension of conventional laterally resolved soft x-ray photoelectron emission microscopy. A depth resolution along the surface normal down to a few Å can be achieved by setting up standing x-ray wave fields in a multilayer substrate. The sample is an Ag/Co/Au trilayer, whose first layer has a wedge profile, grown on a Si/MoSi2 multilayer mirror. By tuning the incident x-rays to the mirror's Bragg angle, we set up standing x-ray wave fields. We demonstrate the resulting depth resolution by imaging the standing wave fields as they move through the trilayer wedge structure.

  1. Effect of fundamental depth resolution and cardboard effect to perceived depth resolution on multi-view display.

    PubMed

    Jung, Jae-Hyun; Yeom, Jiwoon; Hong, Jisoo; Hong, Keehoon; Min, Sung-Wook; Lee, Byoungho

    2011-10-10

    In three-dimensional television (3D TV) broadcasting, the effects of the fundamental depth resolution and the cardboard effect on the perceived depth resolution of a multi-view display are important. The observer distance and the specifications of the multi-view display quantize the expressible depth range, which affects the observer's perception of depth resolution. In addition, multi-view 3D TV needs a view synthesis process using depth image-based rendering, which induces the cardboard effect through the relation among the stereo pickup, the multi-view synthesis and the multi-view display. In this paper, we analyze the fundamental depth resolution and the cardboard effect arising from the synthesis process in multi-view 3D TV broadcasting. After the analysis, a numerical comparison and subjective tests with 20 participants are performed to find the effect of the fundamental depth resolution and the cardboard effect on the perceived depth resolution. PMID:21997055
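
    The quantization of the expressible depth range mentioned above can be illustrated with simple stereoscopic geometry: screen disparity changes in steps of one (sub)pixel pitch, so perceived depth takes discrete levels. The relation and all numbers below are illustrative assumptions, not the paper's model or parameters.

      # Perceived depth behind the screen for a given on-screen disparity.
      def perceived_depth(disparity, viewing_distance, eye_separation=0.065):
          """disparity: uncrossed screen disparity in metres (negative = crossed);
          returns depth behind the screen plane in metres (negative = in front)."""
          return viewing_distance * disparity / (eye_separation - disparity)

      # Depth levels for the first few pixel-pitch steps of disparity.
      pitch = 0.0003          # assumed 0.3 mm pixel pitch
      V = 2.0                 # assumed 2 m viewing distance
      levels = [perceived_depth(k * pitch, V) for k in range(1, 5)]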

  2. Photoacoustic molecular imaging

    NASA Astrophysics Data System (ADS)

    Kiser, William L., Jr.; Reinecke, Daniel; DeGrado, Timothy; Bhattacharyya, Sibaprasad; Kruger, Robert A.

    2007-02-01

    It is well documented that photoacoustic imaging has the capability to differentiate tissue based on the spectral characteristics of tissue in the optical regime. The imaging depth in tissue exceeds standard optical imaging techniques, and systems can be designed to achieve excellent spatial resolution. A natural extension of imaging the intrinsic optical contrast of tissue is to demonstrate the ability of photoacoustic imaging to detect contrast agents based on optically absorbing dyes that exhibit well defined absorption peaks in the infrared. The ultimate goal of this project is to implement molecular imaging, in which Herceptin(TM), a monoclonal antibody that is used as a therapeutic agent in breast cancer patients who overexpress the HER2 gene, is labeled with an IR-absorbing dye, and the resulting in vivo bio-distribution is mapped using multi-spectral infrared stimulation and subsequent photoacoustic detection. To lay the groundwork for this goal and establish system sensitivity, images were collected in tissue-mimicking phantoms to determine the maximum detection depth and minimum detectable concentration of Indocyanine Green (ICG), a common IR-absorbing dye, for a single-angle photoacoustic acquisition. A breast-mimicking phantom was constructed and spectra were also collected for hemoglobin and methanol. An imaging scheme was developed that made it possible to separate the ICG from the other tissue-mimicking components in a multiple-component phantom. We present the results of these experiments and define the path forward for the detection of dye-labeled Herceptin(TM) in cell cultures and mouse models.

  3. Depth inpainting by tensor voting.

    PubMed

    Kulkarni, Mandar; Rajagopalan, Ambasamudram N

    2013-06-01

    Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data. PMID:24323102

  4. Phase II dose escalation study of image-guided adaptive radiotherapy for prostate cancer: Use of dose-volume constraints to achieve rectal isotoxicity

    SciTech Connect

    Vargas, Carlos; Yan Di; Kestin, Larry L.; Krauss, Daniel; Lockman, David M.; Brabbins, Donald S.; Martinez, Alvaro A. . E-mail: amartinez@beaumont.edu

    2005-09-01

    Purpose: In our Phase II prostate cancer Adaptive Radiation Therapy (ART) study, the highest possible dose was selected on the basis of normal tissue tolerance constraints. We analyzed rectal toxicity rates in different dose levels and treatment groups to determine whether equivalent toxicity rates were achieved as hypothesized when the protocol was started. Methods and Materials: From 1999 to 2002, 331 patients with clinical stage T1 to T3, node-negative prostate cancer were prospectively treated with three-dimensional conformal adaptive RT. A patient-specific confidence-limited planning target volume was constructed on the basis of 5 CT scans and 4 sets of electronic portal images after the first 4 days of treatment. For each case, the rectum (rectal solid) was contoured in its entirety. The rectal wall was defined by use of a 3-mm wall thickness (median volume: 29.8 cc). The prescribed dose level was chosen using the following rectal wall dose constraints: (1) Less than 30% of the rectal wall volume can receive more than 75.6 Gy. (2) Less than 5% of the rectal wall can receive more than 82 Gy. Low-risk patients (PSA < 10, Stage ≤ T2a, Gleason score < 7) were treated to the prostate alone (Group 1). All other patients, intermediate and high risk, were treated to the prostate and seminal vesicles (Group 2). The risk of chronic toxicity (NCI Common Toxicity Criteria 2.0) was assessed for the different dose levels prescribed. HIC approval was acquired for all patients. Median follow-up was 1.6 years. Results: Grade 2 chronic rectal toxicity was experienced by 34 patients (10%) (9% experienced rectal bleeding, 6% experienced proctitis, 3% experienced diarrhea, and 1% experienced rectal pain) at a median interval of 1.1 year. Nine patients (3%) experienced grade 3 or higher chronic rectal toxicity (1 Grade 4) at a median interval of 1.2 years. The 2-year rates of Grade 2 or higher and Grade 3 or higher chronic rectal toxicity were 17% and 3%, respectively. No
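
    The two rectal-wall dose-volume constraints quoted above can be expressed as a simple check against a per-voxel dose array; the sketch below is only an illustration of the stated constraints, with an assumed input format.

      # Check the two rectal-wall dose-volume constraints from the protocol.
      import numpy as np

      def meets_rectal_constraints(wall_dose_gy):
          """wall_dose_gy: 1-D array of dose values (Gy), one per rectal-wall voxel."""
          frac_over_75_6 = np.mean(wall_dose_gy > 75.6)   # volume fraction above 75.6 Gy
          frac_over_82_0 = np.mean(wall_dose_gy > 82.0)   # volume fraction above 82 Gy
          return (frac_over_75_6 < 0.30) and (frac_over_82_0 < 0.05)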

  5. Depth Perception Not Found in Human Observers for Static or Dynamic Anti-Correlated Random Dot Stereograms

    PubMed Central

    Hibbard, Paul B.; Scott-Brown, Kenneth C.; Haigh, Emma C.; Adrain, Melanie

    2014-01-01

    One of the greatest challenges in visual neuroscience is that of linking neural activity with perceptual experience. In the case of binocular depth perception, important insights have been achieved through comparing neural responses and the perception of depth, for carefully selected stimuli. One of the most important types of stimulus that has been used here is the anti-correlated random dot stereogram (ACRDS). In these stimuli, the contrast polarity of one half of a stereoscopic image is reversed. While neurons in cortical area V1 respond reliably to the binocular disparities in ACRDS, they do not create a sensation of depth. This discrepancy has been used to argue that depth perception must rely on neural activity elsewhere in the brain. Currently, the psychophysical results on which this argument rests are not clear-cut. While it is generally assumed that ACRDS do not support the perception of depth, some studies have reported that some people, some of the time, perceive depth in some types of these stimuli. Given the importance of these results for understanding the neural correlates of stereopsis, we studied depth perception in ACRDS using a large number of observers, in order to provide an unambiguous conclusion about the extent to which these stimuli support the perception of depth. We presented observers with random dot stereograms in which correlated dots were presented in a surrounding annulus and correlated or anti-correlated dots were presented in a central circular region. While observers could reliably report the depth of the central region for correlated stimuli, we found no evidence for depth perception in static or dynamic anti-correlated stimuli. Confidence ratings for stereoscopic perception were uniformly low for anti-correlated stimuli, but showed normal variation with disparity for correlated stimuli. These results establish that the inability of observers to perceive depth in ACRDS is a robust phenomenon. PMID:24416195

  6. Neutron depth profiling by large angle coincidence spectroscopy

    SciTech Connect

    Vacik, J.; Cervena, J.; Hnatowicz, V.; Havranek, V.; Fink, D.

    1995-12-31

    Extremely low concentrations of several technologically important elements (mainly lithium and boron) have been studied by a modified neutron depth profiling technique. Large angle coincidence spectroscopy using neutrons to probe solids with a thickness not exceeding several micrometers has proved to be a powerful analytical method with an excellent detection sensitivity. Depth profiles in the ppb atomic range are accessible for any solid material. A depth resolution of about 20 nanometers can be achieved.

  7. High accuracy hole filling for Kinect depth maps

    NASA Astrophysics Data System (ADS)

    Wang, Jianxin; An, Ping; Zuo, Yifan; You, Zhixiang; Zhang, Zhaoyang

    2014-10-01

    Hole filling of depth maps is a core technology of Kinect-based visual systems. In this paper, we propose a hole filling algorithm for Kinect depth maps based on separately repairing the foreground and background. The proposed algorithm consists of two parts. First, a fast pre-processing of the Kinect depth map holes is performed. In this part, we fill the background holes of Kinect depth maps with a deepest-depth image which is constructed by combining the spatio-temporal information of the pixels in the Kinect depth map with the corresponding color information in the Kinect color image. The second step is the enhancement of the pre-processed depth maps. We propose a depth enhancement algorithm based on the joint information of geometry and color. Since the geometry information is more robust than the color, we correct the depth by affine transform prior to utilizing the color cues. Then we determine the filter parameters adaptively based on the local features of the color image, which solves the texture copy problem and protects fine structures. Since L1 norm optimization is more robust to data outliers than L2 norm optimization, we force the filtered value to be the solution of an L1 norm optimization. Experimental results show that the proposed algorithm can protect the intact foreground depth, improve the accuracy of depth at object edges, and eliminate the flashing phenomenon of depth at object edges. In addition, the proposed algorithm can effectively fill the big depth map holes generated by optical reflection.
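
    A minimal sketch of the first stage described above is given below: a "deepest depth" reference is built from a short temporal window and used to fill background holes (zero-valued pixels) in the current frame. It omits the color information and the later enhancement stage, and the function name is an assumption.

      # Fill background holes with the farthest depth observed at each pixel.
      import numpy as np

      def fill_background_holes(depth_frames, current):
          """depth_frames: list of HxW depth maps (0 = hole); current: frame to repair."""
          stack = np.stack(depth_frames).astype(np.float64)
          stack[stack == 0] = -np.inf                 # ignore holes when taking the max
          deepest = stack.max(axis=0)                 # farthest observed depth per pixel
          filled = current.astype(np.float64).copy()
          holes = (current == 0) & np.isfinite(deepest)
          filled[holes] = deepest[holes]              # fill holes with background depth
          return filled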

  8. An improved edge detection algorithm for depth map inpainting

    NASA Astrophysics Data System (ADS)

    Chen, Weihai; Yue, Haosong; Wang, Jianhua; Wu, Xingming

    2014-04-01

    Three-dimensional (3D) measurement technology has been widely used in many scientific and engineering areas. The emergence of the Kinect sensor makes 3D measurement much easier. However, the depth map captured by the Kinect sensor has some invalid regions, especially at object boundaries. These missing regions must be filled first. This paper proposes a depth-assisted edge detection algorithm and improves an existing depth map inpainting algorithm using the extracted edges. In the proposed algorithm, both the color image and the raw depth data are used to extract initial edges. Then the edges are optimized and utilized to assist depth map inpainting. Comparative experiments demonstrate that the proposed edge detection algorithm can extract object boundaries and inhibit non-boundary edges caused by textures on object surfaces. The proposed depth inpainting algorithm can predict missing depth values successfully and performs better than the existing algorithm around object boundaries.

  9. THEMIS Observations of Atmospheric Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Smith, Michael D.; Bandfield, Joshua L.; Christensen, Philip R.; Richardson, Mark I.

    2003-01-01

    The Mars Odyssey spacecraft entered into Martian orbit in October 2001 and after successful aerobraking began mapping in February 2002 (approximately Ls=330 deg.). Images taken by the Thermal Emission Imaging System (THEMIS) on-board the Odyssey spacecraft allow the quantitative retrieval of atmospheric dust and water-ice aerosol optical depth. Atmospheric quantities retrieved from THEMIS build upon existing datasets returned by Mariner 9, Viking, and Mars Global Surveyor (MGS). Data from THEMIS complements the concurrent MGS Thermal Emission Spectrometer (TES) data by offering a later local time (approx. 2:00 for TES vs. approx. 4:00 - 5:30 for THEMIS) and much higher spatial resolution.

  10. Higher resolution stimulus facilitates depth perception: MT+ plays a significant role in monocular depth perception.

    PubMed

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Hiruma, Nobuyuki

    2014-01-01

    Today, we are confronted with high-quality virtual worlds of a completely new nature. For example, digital displays now offer resolutions high enough that we cannot distinguish their content from the real world. However, little is known about how such high-quality representation contributes to the sense of realness, especially to depth perception. What is the neural mechanism for processing such fine but virtual representations? Here, we psychophysically and physiologically examined the relationship between stimulus resolution and depth perception, using luminance contrast (shading) as a monocular depth cue. We found that a higher resolution stimulus facilitates depth perception even when the difference in stimulus resolution is undetectable. This finding runs against the traditional cognitive hierarchy of visual information processing, in which visual input is processed continuously in a bottom-up cascade of cortical regions that analyze increasingly complex information such as depth. In addition, functional magnetic resonance imaging (fMRI) results reveal that the human middle temporal area (MT+) plays a significant role in monocular depth perception. These results may provide not only new insight into the neural mechanisms of depth perception but also a view of the future progress of our neural system accompanied by state-of-the-art technologies. PMID:25327168

  11. The effect of crosstalk on depth magnitude in thin structures

    NASA Astrophysics Data System (ADS)

    Tsirlin, Inna; Wilcox, Laurie M.; Allison, Robert S.

    2011-03-01

    Stereoscopic displays must present separate images to the viewer's left and right eyes. Crosstalk is the unwanted contamination of one eye's image from the image of the other eye. It has been shown to cause distortions, reduce image quality and visual comfort and increase perceived workload when performing visual tasks. Crosstalk also affects one's ability to perceive stereoscopic depth although little consideration has been given to the perception of depth magnitude in the presence of crosstalk. In this paper we extend a previous study (Tsirlin, Allison & Wilcox, 2010, submitted) on the perception of depth magnitude in stereoscopic occluding and non-occluding surfaces to the special case of crosstalk in thin structures. Crosstalk in thin structures differs qualitatively from that in larger objects due to the separation of the ghost and real images and thus theoretically could have distinct perceptual consequences. To address this question we used a psychophysical paradigm, where observers estimated the perceived depth difference between two thin vertical bars using a measurement scale. Our data show that crosstalk degrades perceived depth. As crosstalk levels increased the magnitude of perceived depth decreased, especially for stimuli with larger relative disparities. In contrast to the effect of crosstalk on depth magnitude in larger objects, in thin structures, a significant detrimental effect was found at all disparities. Our findings, when considered with the other perceptual consequences of crosstalk, suggest that its presence in S3D media even in modest amounts will reduce observers' satisfaction.

  12. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    PubMed Central

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-01-01

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating life logs. The life-logging system is divided into two processes. First, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Second, after training, the recognition engine starts to recognize the learned activities and produce life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared with conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital. PMID:24991942

  13. Depth propagation and surface construction in 3-D vision.

    PubMed

    Georgeson, Mark A; Yates, Tim A; Schofield, Andrew J

    2009-01-01

    In stereo vision, regions with ambiguous or unspecified disparity can acquire perceived depth from unambiguous regions. This has been called stereo capture, depth interpolation or surface completion. We studied some striking induced depth effects suggesting that depth interpolation and surface completion are distinct stages of visual processing. An inducing texture (2-D Gaussian noise) had sinusoidal modulation of disparity, creating a smooth horizontal corrugation. The central region of this surface was replaced by various test patterns whose perceived corrugation was measured. When the test image was horizontal 1-D noise, shown to one eye or to both eyes without disparity, it appeared corrugated in much the same way as the disparity-modulated (DM) flanking regions. But when the test image was 2-D noise, or vertical 1-D noise, little or no depth was induced. This suggests that horizontal orientation was a key factor. For a horizontal sine-wave luminance grating, strong depth was induced, but for a square-wave grating, depth was induced only when its edges were aligned with the peaks and troughs of the DM flanking surface. These and related results suggest that disparity (or local depth) propagates along horizontal 1-D features, and then a 3-D surface is constructed from the depth samples acquired. The shape of the constructed surface can be different from the inducer, and so surface construction appears to operate on the results of a more local depth propagation process. PMID:18977239

  14. Improving depth estimation from a plenoptic camera by patterned illumination

    NASA Astrophysics Data System (ADS)

    Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.

    2015-05-01

    Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device, and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, manipulate the aperture synthetically and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration, and in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.

  15. Computational imaging for miniature cameras

    NASA Astrophysics Data System (ADS)

    Salahieh, Basel

    Miniature cameras play a key role in numerous imaging applications ranging from endoscopy and metrology inspection devices to smartphones and head-mount acquisition systems. However, due to the physical constraints, the imaging conditions, and the low quality of small optics, their imaging capabilities are limited in terms of the delivered resolution, the acquired depth of field, and the captured dynamic range. Computational imaging jointly addresses the imaging system and the reconstructing algorithms to bypass the traditional limits of optical systems and deliver better restorations for various applications. The scene is encoded into a set of efficient measurements which could then be computationally decoded to output a richer estimate of the scene as compared with the raw images captured by conventional imagers. In this dissertation, three task-based computational imaging techniques are developed to make low-quality miniature cameras capable of delivering realistic high-resolution reconstructions, providing full-focus imaging, and acquiring depth information for high dynamic range objects. For the superresolution task, a non-regularized direct superresolution algorithm is developed to achieve realistic restorations without being penalized by improper assumptions (e.g., optimizers, priors, and regularizers) made in the inverse problem. An adaptive frequency-based filtering scheme is introduced to upper bound the reconstruction errors while still producing more fine details as compared with previous methods under realistic imaging conditions. For the full-focus imaging task, a computational depth-based deconvolution technique is proposed to bring a scene captured by an ordinary fixed-focus camera to a full-focus based on a depth-variant point spread function prior. The ringing artifacts are suppressed on three levels: block tiling to eliminate boundary artifacts, adaptive reference maps to reduce ringing initiated by sharp edges, and block-wise deconvolution or

  16. Noncontact depth-resolved micro-scale optical coherence elastography of the cornea

    PubMed Central

    Wang, Shang; Larin, Kirill V.

    2014-01-01

    High-resolution elastographic assessment of the cornea can greatly assist clinical diagnosis and treatment of various ocular diseases. Here, we report on the first noncontact depth-resolved micro-scale optical coherence elastography of the cornea achieved using shear wave imaging optical coherence tomography (SWI-OCT) combined with the spectral analysis of the corneal Lamb wave propagation. This imaging method relies on a focused air-puff device to load the cornea with highly-localized low-pressure short-duration air stream and applies phase-resolved OCT detection to capture the low-amplitude deformation with nano-scale sensitivity. The SWI-OCT system is used here to image the corneal Lamb wave propagation with the frame rate the same as the OCT A-line acquisition speed. Based on the spectral analysis of the corneal temporal deformation profiles, the phase velocity of the Lamb wave is obtained at different depths for the major frequency components, which shows the depthwise distribution of the corneal stiffness related to its structural features. Our pilot experiments on ex vivo rabbit eyes demonstrate the feasibility of this method in depth-resolved micro-scale elastography of the cornea. The assessment of the Lamb wave dispersion is also presented, suggesting the potential for the quantitative measurement of corneal viscoelasticity. PMID:25426312

  17. Holographic coherent anti-Stokes Raman scattering bio-imaging

    PubMed Central

    Shi, Kebin; Edwards, Perry S.; Hu, Jing; Xu, Qian; Wang, Yanming; Psaltis, Demetri; Liu, Zhiwen

    2012-01-01

    CARS holography captures both the amplitude and the phase of a complex anti-Stokes field, and can perform three-dimensional imaging by digitally focusing onto different depths inside a specimen. The application of CARS holography for bio-imaging is demonstrated. It is shown that holographic CARS imaging of sub-cellular components in live HeLa cells can be achieved. PMID:22808443

  18. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede the development of 3D technology. In this paper we propose several factors affecting human perception of depth as new quality metrics. These factors cover three aspects of 3D video: spatial characteristics, temporal characteristics and scene movement characteristics. They play important roles in the viewer's visual perception: if many objects move with a certain velocity and the scene changes quickly, viewers will feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The MSE (Mean Square Error) of different blocks is computed within frames and between frames of 3D stereoscopic videos. The depth frame is divided into a number of blocks, with overlapping shared pixels (half a block) in the horizontal and vertical directions, so that edge information of objects in the image is not ignored. The distribution of these data is then characterized by kurtosis, with emphasis on regions the human eye is likely to gaze at, and weight values are obtained from the normalized kurtosis. When the method is applied to an individual depth frame, the spatial variation is obtained. When it is applied between the current and previous frames, the temporal variation and scene movement variation are obtained. The three factors above are combined linearly to give the objective assessment value of a 3D video directly; the coefficients of the three factors are estimated by linear regression. Finally, the experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
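
    The block statistics described above can be sketched as follows: per-block MSE between two depth frames with half-block overlap, summarized by kurtosis and normalized into a weight. The block size, the use of scipy's kurtosis, and the normalization are assumptions for illustration, not the paper's exact procedure.

      # Per-block MSE with half-block overlap, summarized by a kurtosis-based weight.
      import numpy as np
      from scipy.stats import kurtosis

      def block_mse(a, b, block=16):
          step = block // 2                      # blocks overlap by half a block
          vals = []
          for y in range(0, a.shape[0] - block + 1, step):
              for x in range(0, a.shape[1] - block + 1, step):
                  d = a[y:y+block, x:x+block].astype(np.float64) \
                      - b[y:y+block, x:x+block].astype(np.float64)
                  vals.append(np.mean(d ** 2))
          return np.array(vals)

      def kurtosis_weight(mse_values):
          k = kurtosis(mse_values, fisher=False)   # heavy-tailed block MSE -> larger k
          return k / (1.0 + k)                     # simple normalization to (0, 1)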

  19. Passenger flow statistics across the field of view based on the depth map of the double Xtion sensors

    NASA Astrophysics Data System (ADS)

    Yin, Zhang-qin; Gu, Guo-hua; Bai, Xiao-feng; Zhao, Tie-kun; Chen, Hai-xin

    2013-08-01

    This paper introduces a new method for passenger flow statistics in stereo vision based on the original depth images output by monocular Xtion sensors, addressing the large data volumes of stereo-vision algorithms and the realization of a single field of view with two cameras. Two Xtion sensors are used to expand the viewing angle, because a single Xtion sensor covers only 45°×58°, and this small transverse range cannot meet the needs of passenger flow statistics. Because physical space dimensions remain constant, an improved SIFT (Scale Invariant Feature Transform) algorithm is used to automatically stitch the two sensors' original depth images. First, feature points of the reference image (the image to be matched) and the subsequent image (the image to be matched with the reference image) are obtained by the SIFT algorithm, giving their location, scale and direction, and each feature point is described by a 128-dimensional vector. Second, the feature points of the two images are matched using the nearest-neighbor method to determine the overlapping area. Finally, image stitching is completed with a multi-resolution wavelet transform. The stitched depth image contains three-dimensional spatial information of the human body and is analyzed comprehensively for detection and tracking using features such as head shape, head area, and the spatial relation of the head and shoulders. The experimental results show that this method improves detection accuracy and efficiency, reduces the amount of data to be processed, keeps the system structure simple, and solves many problems of video-stream-based passenger flow statistics, reaching an accuracy of up to 93%, which gives it high practical value.
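
    A rough OpenCV sketch of the stitching idea is given below: SIFT keypoints are matched between the two sensors' depth maps, a homography is estimated from the matches, and the second map is warped into the first map's frame. The simple maximum used for blending is an assumption standing in for the paper's multi-resolution wavelet fusion, and the canvas size is illustrative.

      # SIFT-based stitching of two overlapping depth maps (OpenCV >= 4.4).
      import cv2
      import numpy as np

      def stitch_depth_maps(ref, sub):
          ref8 = cv2.normalize(ref, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
          sub8 = cv2.normalize(sub, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(ref8, None)
          k2, d2 = sift.detectAndCompute(sub8, None)
          matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
          src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # sub -> ref mapping
          h, w = ref.shape
          warped = cv2.warpPerspective(sub.astype(np.float32), H, (2 * w, h))
          canvas = np.zeros((h, 2 * w), np.float32)
          canvas[:, :w] = ref
          return np.maximum(canvas, warped)                      # naive blend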

  1. Biomedical photoacoustic imaging.

    PubMed

    Beard, Paul

    2011-08-01

    Photoacoustic (PA) imaging, also called optoacoustic imaging, is a new biomedical imaging modality based on the use of laser-generated ultrasound that has emerged over the last decade. It is a hybrid modality, combining the high-contrast and spectroscopic-based specificity of optical imaging with the high spatial resolution of ultrasound imaging. In essence, a PA image can be regarded as an ultrasound image in which the contrast depends not on the mechanical and elastic properties of the tissue, but on its optical properties, specifically optical absorption. As a consequence, it offers greater specificity than conventional ultrasound imaging with the ability to detect haemoglobin, lipids, water and other light-absorbing chromophores, but with greater penetration depth than purely optical imaging modalities that rely on ballistic photons. As well as visualizing anatomical structures such as the microvasculature, it can also provide functional information in the form of blood oxygenation, blood flow and temperature. All of this can be achieved over a wide range of length scales from micrometres to centimetres with scalable spatial resolution. These attributes lend PA imaging to a wide variety of applications in clinical medicine, preclinical research and basic biology for studying cancer, cardiovascular disease, abnormalities of the microcirculation and other conditions. With the emergence of a variety of truly compelling in vivo images obtained by a number of groups around the world in the last 2-3 years, the technique has come of age and the promise of PA imaging is now beginning to be realized. Recent highlights include the demonstration of whole-body small-animal imaging, the first demonstrations of molecular imaging, the introduction of new microscopy modes and the first steps towards clinical breast imaging being taken as well as a myriad of in vivo preclinical imaging studies. In this article, the underlying physical principles of the technique, its practical

  2. Extended depth of field in an intrinsically wavefront-encoded biometric iris camera

    NASA Astrophysics Data System (ADS)

    Bergkoetter, Matthew D.; Bentley, Julie L.

    2014-12-01

    This work describes a design process which greatly increases the depth of field of a simple three-element lens system intended for biometric iris recognition. The system is optimized to produce a point spread function which is insensitive to defocus, so that recorded images may be deconvolved without knowledge of the exact object distance. This is essentially a variation on the technique of wavefront encoding; however, the desired encoding effect is achieved by aberrations intrinsic to the lens system itself, without the need for a pupil phase mask.
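
    The decoding step such systems rely on is a deconvolution of the recorded image with the (defocus-insensitive) system PSF. The sketch below shows a generic Wiener deconvolution under that assumption; the synthetic Gaussian PSF and noise-to-signal constant are placeholders, not the engineered PSF of the authors' lens.

```python
# Generic Wiener deconvolution sketch for a wavefront-encoded image, assuming the
# system PSF is known and roughly constant over the depth range of interest.
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-3):
    """Restore an image given a PSF array of the same shape (centred)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # move PSF centre to the origin
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + noise_to_signal)   # Wiener filter
    return np.real(np.fft.ifft2(F_hat))

# Toy usage with a synthetic Gaussian PSF standing in for the encoded PSF.
img = np.zeros((128, 128)); img[60:68, 60:68] = 1.0
yy, xx = np.mgrid[-64:64, -64:64]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2)); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```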

  3. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis

    PubMed Central

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-01-01

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687
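
    The rate estimation described above reduces to finding the dominant frequency of a region-of-interest intensity time series. The following sketch shows that generic spectral-analysis step only; the frame rate, search band and synthetic signal are assumptions, not the study's data or its full processing pipeline.

```python
# Sketch: estimate a breathing (or heart) rate as the dominant spectral peak of a
# region-of-interest intensity trace. Frame rate, band limits and the synthetic
# signal below are placeholders for illustration.
import numpy as np

def dominant_rate_bpm(signal, fs, f_lo, f_hi):
    """Return the strongest spectral peak in [f_lo, f_hi] Hz, in cycles per minute."""
    signal = signal - np.mean(signal)                 # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 30 fps ROI trace: 0.25 Hz breathing (15 breaths/min) plus noise.
fs = 30.0
t = np.arange(0, 60, 1.0 / fs)
roi_mean = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)
print(dominant_rate_bpm(roi_mean, fs, f_lo=0.1, f_hi=0.7))   # ~15 breaths/min
```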

  4. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis.

    PubMed

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-01-01

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction. PMID:27367687

  5. Z-depth integration: a new technique for manipulating z-depth properties in composited scenes

    NASA Astrophysics Data System (ADS)

    Steckel, Kayla; Whittinghill, David

    2014-02-01

    This paper presents a new technique in the production pipeline of asset creation for virtual environments called Z-Depth Integration (ZeDI). ZeDI is intended to reduce the time required to place elements at the appropriate z-depth within a scene. Though ZeDI is intended for use primarily in two-dimensional scene composition, depth-dependent "flat" animated objects are often critical elements of augmented and virtual reality applications (AR/VR). ZeDI is derived from "deep image compositing", a capacity implemented within the OpenEXR file format. In order to trick the human eye into perceiving overlapping scene elements as being in front of or behind one another, the developer must manually manipulate which pixels of an element are visible in relation to other objects embedded within the environment's image sequence. ZeDI improves on this process by providing a means for interacting with procedurally extracted z-depth data from a virtual environment scene. By streamlining the process of defining objects' depth characteristics, it is expected that the time and energy required for developers to create compelling AR/VR scenes will be reduced. In the proof of concept presented in this manuscript, ZeDI is implemented for pre-rendered virtual scene construction via an AfterEffects software plug-in.
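
    The decision ZeDI automates is, per pixel, whether an element's assigned depth is nearer than the scene behind it. The sketch below shows that depth test combined with a standard "over" composite; the function and array layout are assumptions for illustration and do not represent the AfterEffects plug-in's actual interface.

```python
# Minimal per-pixel z-depth composite: show a "flat" element only where its depth
# is nearer than the background's z-buffer, then blend with its alpha matte.
import numpy as np

def composite_by_depth(bg_rgb, bg_depth, elem_rgb, elem_alpha, elem_depth):
    """Composite an element over a background wherever it is closer to the camera.

    bg_depth and elem_depth hold per-pixel distances (smaller = nearer).
    """
    visible = (elem_depth < bg_depth) & (elem_alpha > 0)     # depth + matte test
    a = (elem_alpha * visible)[..., None]                    # broadcast alpha to RGB
    out_rgb = a * elem_rgb + (1.0 - a) * bg_rgb              # standard "over" blend
    out_depth = np.where(visible, elem_depth, bg_depth)      # updated z-buffer
    return out_rgb, out_depth
```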

  6. Depth-fused three-dimensional display using polarization distribution

    NASA Astrophysics Data System (ADS)

    Park, Soon-gi; Min, Sung-Wook

    2010-11-01

    We propose a novel depth-fused three-dimensional (DFD) method using polarization distribution, a kind of multifocal-plane display that provides an autostereoscopic image with little visual fatigue. The DFD method is based on the characteristics of human depth perception when two luminance-modulated two-dimensional (2D) images are overlapped: the perceived depth position is determined by the luminance ratio of the two planes. The proposed system comprises polarization-selective scattering films and a polarization-modulating device. A polarization-selective scattering film partially scatters light according to its polarization state and transmits the rest. When the films are stacked with their scattering axes rotated, each layer provides a different scattering ratio according to the incident polarization. Consequently, appropriate modulation of the polarization can produce a DFD image through the system. The depth map provides the depth information of each pixel as a grayscale image; thus, when a depth map is displayed on a polarization-modulating device, it is converted into a polarization-distributed depth map. A conventional twisted-nematic liquid crystal display can be used as the polarization-modulating device without complicated modification. We demonstrate the proposed system with a simple experiment and compare its characteristics with simulated results.
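
    The luminance-ratio principle can be illustrated with a simple split of each pixel's luminance between a front and a rear plane according to its normalized depth; the fused percept then lies between the two planes. The linear weighting below is the usual DFD assumption for illustration, not necessarily the exact mapping realized with the polarization-selective films.

```python
# Sketch of the depth-fused display principle: divide each pixel's luminance between
# front and rear planes in proportion to normalized depth (a common DFD assumption).
import numpy as np

def dfd_split(image, depth_map):
    """image: luminance in [0, 1]; depth_map: 0 = rear plane, 1 = front plane."""
    w = np.clip(depth_map, 0.0, 1.0)
    front_plane = image * w            # more luminance here pulls the percept forward
    rear_plane = image * (1.0 - w)     # more luminance here pushes the percept back
    return front_plane, rear_plane     # front + rear reproduces the original luminance
```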

  7. Magnetic depths to basalts: extension of spectral depths method

    NASA Astrophysics Data System (ADS)

    Clifton, Roger

    2015-11-01

    Although spectral depth determination has played a role in magnetic interpretation for over four decades, automating the procedure has been inhibited by the need for manual intervention. This paper introduces the concept of a slope spectrum of an equivalent layer, to be used in an automated depth interpretation algorithm suitable for application to very large datasets such as the complete Northern Territory aeromagnetic grid. In order to trace the extensive basalts across the Northern Territory, profiles of spectral depths have been obtained at 5 km intervals across the NT stitched grid of total magnetic intensity (TMI). Each profile is a graph from 0 to 1000 m of the probability of a magnetic layer occurring at each depth. Automating the collection of the 50 000 profiles required the development of a formula that relates slopes along the power spectrum to depths to an equivalent magnetic layer. Model slabs were populated with a large number of randomly located dipoles and their power spectra correlated with modelled depth to provide the formula. Depth profiles are too noisy to be used singly, but when a series of depth profiles are lined up side-by-side as a transect, significant magnetic layers can be traced for large distances. Transects frequently show a second layer. The formula is quite general in its derivation and would apply to any mid-latitude area where significant magnetic bodies can be modelled as extensive layers. Because the method requires a radial power spectrum, it fails to provide signal at depths much shallower than the flight line spacing. The method is convenient for a fast first pass at depth estimation, but its horizontal resolution is rather coarse and errors can be quite large.
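
    For orientation, the classical spectral-depth idea is that the log of the radially averaged power spectrum of TMI falls off roughly linearly with wavenumber, and the slope scales with source depth: with k in cycles per unit distance, ln P(k) ≈ const − 4πhk, so h ≈ −slope/(4π). The sketch below implements only that textbook relation as a sanity check; the paper's calibrated slope-spectrum formula for an equivalent layer, derived from dipole-populated model slabs, is not reproduced here.

```python
# Classical spectral-depth sketch: fit a line to ln(power) of a TMI tile over a
# chosen wavenumber band and convert the slope to an equivalent-source depth.
# With k in cycles per unit distance, ln P ~ -4*pi*h*k, so h = -slope / (4*pi).
import numpy as np

def spectral_depth(tmi_grid, cell_size, k_min, k_max):
    """Estimate a source depth (same units as cell_size) from a gridded TMI tile."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(tmi_grid))) ** 2
    ny, nx = tmi_grid.shape
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=cell_size))
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=cell_size))
    kr = np.hypot(*np.meshgrid(kx, ky))            # radial wavenumber, cycles/unit

    # Fit ln(power) against radial wavenumber within the chosen band.
    band = (kr >= k_min) & (kr <= k_max)
    slope, _ = np.polyfit(kr[band], np.log(power[band] + 1e-12), 1)
    return -slope / (4.0 * np.pi)
```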

  8. Method and apparatus to measure the depth of skin burns

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.

    2002-01-01

    A new device for measuring the depth of surface tissue burns based on the rate at which the skin temperature responds to a sudden differential temperature stimulus. This technique can be performed without physical contact with the burned tissue. In one implementation, time-dependent surface temperature data is taken from subsequent frames of a video signal from an infrared-sensitive video camera. When a thermal transient is created, e.g., by turning off a heat lamp directed at the skin surface, the following time-dependent surface temperature data can be used to determine the skin burn depth. Imaging and non-imaging versions of this device can be implemented, thereby enabling laboratory-quality skin burn depth imagers for hospitals as well as hand-held skin burn depth sensors the size of a small pocket flashlight for field use and triage.
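
    The measurable quantity in this approach is how fast each pixel's surface temperature relaxes after the stimulus is removed. The sketch below fits a per-pixel exponential cooling time constant from successive IR frames; how a time constant is then calibrated to a burn depth is specific to the patented device and is not reproduced, so that mapping should be treated as hypothetical here.

```python
# Sketch of the transient analysis: after the heat lamp is switched off, fit a
# per-pixel cooling time constant T(t) = ambient + A*exp(-t/tau) from IR frames.
# Converting tau to burn depth requires the device's calibration (not shown).
import numpy as np

def cooling_time_constant(frames, times, ambient):
    """frames: (n, h, w) surface temperatures; times: (n,) seconds after stimulus off."""
    excess = np.clip(frames - ambient, 1e-6, None)          # avoid log of nonpositive
    log_excess = np.log(excess).reshape(len(times), -1)     # (n, h*w)
    # Least-squares slope of ln(T - ambient) against time, for every pixel at once.
    t = times - times.mean()
    slopes = (t[:, None] * (log_excess - log_excess.mean(axis=0))).sum(axis=0) / (t ** 2).sum()
    tau = -1.0 / slopes                                     # per-pixel time constant, seconds
    return tau.reshape(frames.shape[1:])
```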

  9. Objective methods for achieving an early prediction of the effectiveness of regional block anesthesia using thermography and hyper-spectral imaging

    NASA Astrophysics Data System (ADS)

    Klaessens, John H. G. M.; Landman, Mattijs; de Roode, Rowland; Noordmans, Herke J.; Verdaasdonk, Rudolf M.

    2011-03-01

    An objective method to measure the effectiveness of regional anesthesia can reduce time and unintended pain inflicted on the patient. A prospective observational study was performed on 22 patients receiving local anesthesia before undergoing hand surgery. Two non-invasive techniques, thermal and oxygenation imaging, were applied to observe the region affected by the peripheral block, and the results were compared with the standard cold sensation test. The supraclavicular block was placed under ultrasound guidance around the brachial plexus by injecting 20 cc of Ropivacaine. The anesthetic causes relaxation of the muscles around the blood vessels, resulting in dilatation and hence an increase in blood perfusion, skin temperature and skin oxygenation in the lower arm and hand. Temperatures were acquired with an IR thermal camera (FLIR ThermoCAM SC640). The data were recorded and analyzed with ThermaCAM Researcher and Matlab software. Narrow-band spectral images were acquired at selected wavelengths with a CCD camera combined with either a Liquid Crystal Tunable Filter (420-730 nm) or a tunable hyper-wavelength LED light source (450-880 nm). Concentration changes of oxygenated and deoxygenated hemoglobin in the dermis of the skin were calculated using the modified Lambert-Beer equation. Both imaging methods showed distinct oxygenation and temperature differences at the surface of the skin of the hand, with a good correlation to the anesthetized areas. A temperature response was visible within 5 minutes, compared with the standard 30 minutes. Both non-contact methods proved to be more objective and provide an earlier prediction of the effectiveness of the anesthetic block.
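
    The modified Lambert-Beer step amounts to solving the attenuation changes measured at a few wavelengths for the concentration changes of oxy- and deoxyhaemoglobin via the known extinction coefficients. The sketch below shows that least-squares inversion; the wavelength set, extinction coefficients and pathlength factor are placeholder values, not those used in the study.

```python
# Sketch of the modified Lambert-Beer inversion: delta_A = (E * pathlength) @ [dHbO2, dHb].
# Extinction coefficients, wavelengths and pathlength below are placeholders.
import numpy as np

wavelengths = [660, 730, 850]        # nm, example narrow-band channels
E = np.array([                       # extinction coefficients [HbO2, Hb] per channel
    [0.32, 3.22],                    # placeholder values
    [0.39, 1.10],
    [1.06, 0.78],
])
pathlength = 1.0                     # effective optical pathlength factor (assumed)

def hemoglobin_changes(delta_attenuation):
    """delta_attenuation: change in -log(I/I0) at each wavelength for one pixel."""
    dHbO2, dHb = np.linalg.lstsq(E * pathlength, delta_attenuation, rcond=None)[0]
    return dHbO2, dHb

print(hemoglobin_changes(np.array([0.05, 0.04, 0.08])))
```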

  10. The neural mechanism of binocular depth discrimination

    PubMed Central

    Barlow, H. B.; Blakemore, C.; Pettigrew, J. D.

    1967-01-01

    1. Binocularly driven units were investigated in the cat's primary visual cortex. 2. It was found that a stimulus located correctly in the visual fields of both eyes was more effective in driving the units than a monocular stimulus, and much more effective than a binocular stimulus which was correctly positioned in only one eye: the response to the correctly located image in one eye is vetoed if the image is incorrectly located in the other eye. 3. The vertical and horizontal disparities of the paired retinal images that yielded the maximum response were measured in 87 units from seven cats: the range of horizontal disparities was 6·6°, of vertical disparities 2·2°. 4. With fixed convergence, different units will be optimally excited by objects lying at different distances. This may be the basic mechanism underlying depth discrimination in the cat. PMID:6065881

  11. Spatial resolution of MFM measurements of penetration depth

    NASA Astrophysics Data System (ADS)

    Spanton, Eric; Luan, Lan; Kirtley, John; Moler, Kathryn

    2012-02-01

    The penetration depth and its temperature dependence are key ways to characterize superconductors. Measurements of the local Meissner response of a superconductor can determine the local penetration depth. To quantify the spatial resolution of such measurements, we seek to characterize the point spread function of magnetic force microscope (MFM) measurements of the penetration depth both numerically and experimentally. Modeling various geometries of MFM tips (pyramid, dipole, and long thin cylinder) in the presence of various geometries of spatial variation in the penetration depth (point variation, columnar defects, and planar defects or twin boundaries) shows the importance of the MFM tip geometry to achieving both excellent spatial resolution and quantitatively interpretable results. We compare these models to experimental data on pnictides and cuprates to set upper limits on the sub-micron-scale variation of the penetration depth. These results demonstrate both the feasibility and the technical challenges of submicron penetration depth mapping.

  12. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2 cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gave high stereo depth resolution