Science.gov

Sample records for achievable imaging depth

  1. Depth-aware image seam carving.

    PubMed

    Shen, Jianbing; Wang, Dapeng; Li, Xuelong

    2013-10-01

    An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, while not removing the secondary objects in the scene. However, it is still difficult to identify the important and salient objects and to avoid distorting them after resizing the input image. In this paper, we develop a novel depth-aware single image seam carving approach by taking advantage of modern depth cameras such as the Kinect sensor, which captures an RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph cut based energy optimization. Our method achieves better seam carving performance by cutting fewer seams through near objects and more seams through distant objects. To the best of our knowledge, our algorithm is the first to use the true depth map captured by a Kinect depth camera for single image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods.
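
    A minimal sketch of the general idea, assuming a coregistered depth map is available: fold depth into the seam energy so that dynamic-programming seams avoid near objects. The JND model and multiscale graph cut optimization from the paper are not reproduced; the function name and the depth_weight parameter are illustrative.

```python
import numpy as np

def carve_one_seam(gray, depth, depth_weight=2.0):
    """Remove one vertical seam, preferring paths through distant, low-gradient pixels.

    gray  : 2-D float array, luminance in [0, 1]
    depth : 2-D float array, depth map (larger = farther), same shape as gray
    """
    # Gradient-magnitude energy of the intensity image.
    gy, gx = np.gradient(gray)
    energy = np.hypot(gx, gy)

    # Near objects (small depth) get extra energy so seams avoid them.
    nearness = 1.0 - (depth - depth.min()) / (np.ptp(depth) + 1e-9)
    energy = energy + depth_weight * nearness

    # Dynamic programming: cumulative minimum energy of any seam ending at (i, j).
    h, w = energy.shape
    cum = energy.copy()
    for i in range(1, h):
        left = np.roll(cum[i - 1], 1);   left[0] = np.inf
        right = np.roll(cum[i - 1], -1); right[-1] = np.inf
        cum[i] += np.minimum(np.minimum(left, cum[i - 1]), right)

    # Backtrack the minimal seam and delete it from both images.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cum[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam[i] = lo + int(np.argmin(cum[i, lo:hi]))

    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return gray[keep].reshape(h, w - 1), depth[keep].reshape(h, w - 1)
```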

  2. Extended depth of field imaging for high speed object analysis

    NASA Technical Reports Server (NTRS)

    Ortyn, William (Inventor); Basiji, David (Inventor); Frost, Keith (Inventor); Liang, Luchuan (Inventor); Bauer, Richard (Inventor); Hall, Brian (Inventor); Perry, David (Inventor)

    2011-01-01

    A high-speed, high-resolution flow imaging system is modified to achieve extended depth of field imaging. An optical distortion element is introduced into the flow imaging system. Light from an object, such as a cell, is distorted by the distortion element, such that the point spread function (PSF) of the imaging system is invariant across an extended depth of field. The distorted light is spectrally dispersed, and the dispersed light is used to simultaneously generate a plurality of images. The images are detected, and image processing is used to enhance the detected images by compensating for the distortion, to achieve extended depth of field images of the object. The post-detection image processing preferably involves deconvolution and requires knowledge of the PSF of the imaging system, as modified by the optical distortion element.
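
    The deconvolution step can be illustrated with a standard Wiener filter applied with the known, depth-invariant PSF. This is a generic sketch, not the inventors' actual processing chain; the nsr regularization value is an assumption.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Restore an image blurred by a known, shift-invariant PSF.

    blurred : 2-D float array, the detected (distorted) image
    psf     : 2-D float array, the depth-invariant point spread function
    nsr     : assumed noise-to-signal power ratio (regularization)
    """
    # Pad the PSF to the image size and centre it at the origin.
    psf_pad = np.zeros_like(blurred)
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf / psf.sum()
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    # Wiener filter in the frequency domain: H* / (|H|^2 + NSR).
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```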

  3. PSF engineering in multifocus microscopy for increased depth volumetric imaging

    PubMed Central

    Hajj, Bassam; El Beheiry, Mohamed; Dahan, Maxime

    2016-01-01

    Imaging and localizing single molecules with high accuracy in a 3D volume is a challenging task. Here we combine multifocal microscopy, a recently developed volumetric imaging technique, with point spread function engineering to achieve an increased depth for single molecule imaging. Application to 3D single-molecule localization-based super-resolution imaging is shown over an axial depth of 4 µm, as well as to the tracking of diffusing beads in a fluid environment over 8 µm. PMID:27231584

  4. Single image defogging by multiscale depth fusion.

    PubMed

    Wang, Yuan-Kai; Fan, Ching-Tang

    2014-11-01

    Restoration of fog images is important for the deweathering problem in computer vision. The problem is ill-posed and can be regularized within a Bayesian context using a probabilistic fusion model. This paper presents a multiscale depth fusion (MDF) method for defogging a single image. A linear model representing the stochastic residual of nonlinear filtering is first proposed. Multiscale filtering results are probabilistically blended into a fused depth map based on this model. The fusion is formulated as an energy minimization problem that incorporates spatial Markov dependence. An inhomogeneous Laplacian-Markov random field for the multiscale fusion, regularized with smoothing and edge-preserving constraints, is developed. A nonconvex potential, the adaptive truncated Laplacian, is devised to account for spatially variant characteristics such as edges and depth discontinuities. Defogging is solved by an alternating optimization algorithm that searches for the depth map by minimizing the nonconvex potential in the random field. The MDF method is experimentally verified on real-world fog images, including cluttered-depth scenes that are challenging to defog at finer details. The fog-free images are restored with improved contrast and vivid colors but without over-saturation. Quantitative assessment of image quality is applied to compare various defog methods. Experimental results demonstrate that accurate estimation of the depth map by the proposed edge-preserved multiscale fusion can recover high-quality images with sharp details.
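
    Once a depth map has been fused, a fog-free image can be obtained by inverting the standard atmospheric scattering model I = J*t + A*(1 - t) with t = exp(-beta*d). The sketch below illustrates only this final step under that assumption; the MRF-based multiscale fusion itself is not reproduced, and the airlight, beta, and t_min values are illustrative.

```python
import numpy as np

def defog_from_depth(foggy, depth, airlight, beta=1.0, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    foggy    : H x W x 3 float array in [0, 1]
    depth    : H x W float array, the (fused) scene depth estimate
    airlight : length-3 array, estimated atmospheric light A
    beta     : assumed scattering coefficient
    """
    # Transmission follows Beer-Lambert attenuation along the line of sight.
    t = np.exp(-beta * depth)
    t = np.clip(t, t_min, 1.0)[..., None]     # avoid division blow-up in dense fog
    J = (foggy - airlight) / t + airlight     # recover the scene radiance
    return np.clip(J, 0.0, 1.0)
```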

  5. Simulating Kinect Infrared and Depth Images.

    PubMed

    Landau, Michael J; Choo, Benjamin Y; Beling, Peter A

    2016-12-01

    With the emergence of the Microsoft Kinect sensor, many developer communities and research groups have found countless uses and have already published a wide variety of papers that utilize the raw depth images for their specific goals. New methods and applications that use the device generally require an appropriately large ensemble of data sets with accompanying ground truth for testing purposes, as well as accurate models that account for the various systematic and stochastic contributors to Kinect errors. Current error models, however, overlook the intermediate infrared (IR) images that directly contribute to noisy depth estimates. We, therefore, propose a high fidelity Kinect IR and depth image predictor and simulator that models the physics of the transmitter/receiver system, unique IR dot pattern, disparity/depth processing technology, and random intensity speckle and IR noise in the detectors. The model accounts for important characteristics of Kinect's stereo triangulation system, including depth shadowing, IR dot splitting, spreading, and occlusions, correlation-based disparity estimation between windows of measured and reference IR images, and subpixel refinement. Results show that the simulator accurately produces axial depth error from imaged flat surfaces with various tilt angles, as well as the bias and standard lateral error of an object's horizontal and vertical edge.
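
    The triangulation geometry the simulator models can be summarized in two steps: correlate a window of the measured IR image against the reference dot pattern along the epipolar line, then convert the resulting disparity (shift relative to the reference plane) into depth. The sketch below is a simplified illustration only; the focal length, baseline, reference distance, and disparity sign convention are assumptions, not values from the paper.

```python
import numpy as np

# Illustrative (not the paper's) geometry for a structured-light depth camera.
FOCAL_PX  = 580.0    # IR camera focal length in pixels (assumed)
BASELINE  = 0.075    # projector-camera baseline in metres (assumed)
REF_DEPTH = 1.0      # distance at which the reference dot pattern was recorded

def window_disparity(measured, reference, row, col, win=9, search=60):
    """Correlate one window of the measured IR image against the reference
    pattern along the (horizontal) epipolar line; return the best pixel shift.
    Assumes the window lies inside both images."""
    half = win // 2
    patch = measured[row - half:row + half + 1, col - half:col + half + 1]
    best_d, best_score = 0, -np.inf
    for d in range(-search, search + 1):
        c0, c1 = col - half + d, col + half + 1 + d
        if c0 < 0 or c1 > reference.shape[1]:
            continue
        ref = reference[row - half:row + half + 1, c0:c1]
        score = np.sum((patch - patch.mean()) * (ref - ref.mean()))  # correlation
        if score > best_score:
            best_d, best_score = d, score
    return best_d

def disparity_to_depth(disp_px):
    """Stereo triangulation: a shift relative to the reference plane maps to
    depth via 1/Z = 1/Z0 - d/(f*b); the sign convention for d is an assumption."""
    return 1.0 / (1.0 / REF_DEPTH - disp_px / (FOCAL_PX * BASELINE))
```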

  6. Simulating Kinect Infrared and Depth Images.

    PubMed

    Landau, Michael J; Choo, Benjamin Y; Beling, Peter A

    2015-11-13

    With the emergence of the Microsoft Kinect sensor, many developer communities and research groups have found countless uses and have already published a wide variety of papers that utilize the raw depth images for their specific goals. New methods and applications that use the device generally require an appropriately large ensemble of data sets with accompanying ground truth for testing purposes, as well as accurate models that account for the various systematic and stochastic contributors to Kinect errors. Current error models, however, overlook the intermediate infrared (IR) images that directly contribute to noisy depth estimates. We, therefore, propose a high fidelity Kinect IR and depth image predictor and simulator that models the physics of the transmitter/receiver system, unique IR dot pattern, disparity/depth processing technology, and random intensity speckle and IR noise in the detectors. The model accounts for important characteristics of Kinect's stereo triangulation system, including depth shadowing, IR dot splitting, spreading, and occlusions, correlation-based disparity estimation between windows of measured and reference IR images, and subpixel refinement. Results show that the simulator accurately produces axial depth error from imaged flat surfaces with various tilt angles, as well as the bias and standard lateral error of an object's horizontal and vertical edge.

  7. mEdgeBoxes: objectness estimation for depth image

    NASA Astrophysics Data System (ADS)

    Fang, Zhiwen; Cao, Zhiguo; Xiao, Yang; Zhu, Lei; Lu, Hao

    2015-12-01

    Object detection is one of the most important research topics in computer vision. Recently, category-independent objectness in RGB images has become an active field owing to its generalization ability and its efficiency as a pre-filtering procedure for object detection. Many traditional applications have been transferred from RGB images to depth images since economical depth sensors, such as the Kinect, became popular. Depth data represent distance information. Because of this special characteristic, methods for objectness evaluation in RGB images are often invalid in depth images. In this study, we propose mEdgeBoxes to evaluate objectness in depth images. Aside from detecting edges from the raw depth information, we extract another edge map from the orientation information based on the normal vectors. The two kinds of edge map are integrated and fed to EdgeBoxes [1] to produce the object proposals. Experimental results on two challenging datasets demonstrate that the detection rate of the proposed objectness estimation method can exceed 90% with 1000 windows. It is worth noting that our approach generally outperforms the state-of-the-art methods in detection rate.

  8. Analysis on enhanced depth of field for integral imaging microscope.

    PubMed

    Lim, Young-Tae; Park, Jae-Hyeung; Kwon, Ki-Chul; Kim, Nam

    2012-10-08

    The depth of field of the integral imaging microscope is studied. In the integral imaging microscope, 3-D information is encoded in the form of elemental images. The distance between the intermediate plane and an object point determines the number of elemental images and the depth of field of the integral imaging microscope. From the analysis, it is found that the depth of field of the depth plane image reconstructed by computational integral imaging reconstruction is longer than the depth of field of the optical microscope. Based on the analyzed relationship, an experiment using integral imaging microscopy and conventional microscopy is also performed to confirm the enhanced depth of field of integral imaging microscopy.

  9. Coding depth perception from image defocus.

    PubMed

    Supèr, Hans; Romeo, August

    2014-12-01

    As a result of the spider experiments in Nagata et al. (2012), it was hypothesized that the depth perception mechanisms of these animals should be based on how much images are defocused. In the present paper, assuming that relative chromatic aberrations or blur radii values are known, we develop a formulation relating the values of these cues to the actual depth distance. Taking into account the form of the resulting signals, we propose the use of latency coding from a spiking neuron obeying Izhikevich's 'simple model'. If spider jumps can be viewed as approximately parabolic, some estimates allow for a sensory-motor relation between the time to the first spike and the magnitude of the initial velocity of the jump.

  10. Color and depth priors in natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2013-06-01

    Natural scene statistics have played an increasingly important role in both our understanding of the function and evolution of the human vision system, and in the development of modern image processing applications. Because range (egocentric distance) is arguably the most important thing a visual system must compute (from an evolutionary perspective), the joint statistics between image information (color and luminance) and range information are of particular interest. It seems obvious that where there is a depth discontinuity, there must be a higher probability of a brightness or color discontinuity too. This is true, but the more interesting case is in the other direction--because image information is much more easily computed than range information, the key conditional probabilities are those of finding a range discontinuity given an image discontinuity. Here, the intuition is much weaker; the plethora of shadows and textures in the natural environment imply that many image discontinuities must exist without corresponding changes in range. In this paper, we extend previous work in two ways--we use as our starting point a very high quality data set of coregistered color and range values collected specifically for this purpose, and we evaluate the statistics of perceptually relevant chromatic information in addition to luminance, range, and binocular disparity information. The most fundamental finding is that the probabilities of finding range changes do in fact depend in a useful and systematic way on color and luminance changes; larger range changes are associated with larger image changes. Second, we are able to parametrically model the prior marginal and conditional distributions of luminance, color, range, and (computed) binocular disparity. Finally, we provide a proof of principle that this information is useful by showing that our distribution models improve the performance of a Bayesian stereo algorithm on an independent set of input images. To summarize
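
    The key conditional statistic described here, the probability of a range discontinuity given an image discontinuity, can be estimated directly from coregistered luminance and range maps. A minimal sketch follows; the bin count and the gradient threshold used to declare a range discontinuity are illustrative assumptions, not values from the study.

```python
import numpy as np

def range_edge_given_image_edge(luminance, rng, n_bins=20, range_edge_thresh=0.1):
    """Estimate P(range discontinuity | image gradient magnitude) from
    coregistered luminance and range maps of the same scene."""
    gy, gx = np.gradient(luminance)
    img_grad = np.hypot(gx, gy).ravel()
    ry, rx = np.gradient(rng)
    range_edge = (np.hypot(rx, ry) > range_edge_thresh).ravel()

    # Bin pixels by image-gradient quantile so every bin holds ~equal samples.
    bins = np.quantile(img_grad, np.linspace(0, 1, n_bins + 1))
    which = np.clip(np.digitize(img_grad, bins) - 1, 0, n_bins - 1)

    # Fraction of pixels in each bin that sit on a range discontinuity.
    p_cond = np.array([range_edge[which == b].mean() if np.any(which == b) else np.nan
                       for b in range(n_bins)])
    return bins, p_cond
```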

  11. Predictive depth coding of wavelet transformed images

    NASA Astrophysics Data System (ADS)

    Lehtinen, Joonas

    1999-10-01

    In this paper, a new prediction based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid decomposition by predicting the number of significant bits in each wavelet coefficient quantized by universal scalar quantization and then coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients on the lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW and context based algorithms. Even though the algorithm is very simple and does not require any extra memory, the compression results are relatively good.

  12. Nanometric depth resolution from multi-focal images in microscopy

    PubMed Central

    Dalgarno, Heather I. C.; Dalgarno, Paul A.; Dada, Adetunmise C.; Towers, Catherine E.; Gibson, Gavin J.; Parton, Richard M.; Davis, Ilan; Warburton, Richard J.; Greenaway, Alan H.

    2011-01-01

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels. PMID:21247948

  13. A computationally efficient denoising and hole-filling method for depth image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Soulan; Chen, Chen; Kehtarnavaz, Nasser

    2016-04-01

    Depth maps captured by Kinect depth cameras are being widely used for 3D action recognition. However, such images often appear noisy and contain missing pixels or black holes. This paper presents a computationally efficient method for both denoising and hole-filling in depth images. The denoising is achieved by utilizing a combination of Gaussian kernel filtering and anisotropic filtering. The hole-filling is achieved by utilizing a combination of morphological filtering and zero block filtering. Experimental results using the publicly available datasets are provided indicating the superiority of the developed method in terms of both depth error and computational efficiency compared to three existing methods.
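
    A rough stand-in for the two stages described, denoising and hole-filling of a Kinect depth map, using SciPy primitives (nearest-valid-neighbour fill plus Gaussian smoothing). This is not the paper's exact combination of Gaussian/anisotropic and morphological/zero-block filtering; parameter values are assumptions.

```python
import numpy as np
from scipy import ndimage

def enhance_depth(depth_mm, sigma=1.0, hole_value=0):
    """Denoise a Kinect-style depth map and fill its zero-valued holes."""
    depth = depth_mm.astype(float)
    holes = depth == hole_value

    # Hole filling: copy each missing pixel from its nearest valid neighbour.
    idx = ndimage.distance_transform_edt(holes, return_distances=False,
                                         return_indices=True)
    filled = depth[tuple(idx)]

    # Denoising: Gaussian smoothing of the filled map (the paper additionally
    # applies edge-preserving anisotropic filtering, omitted here).
    return ndimage.gaussian_filter(filled, sigma=sigma)
```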

  14. Navigating from a Depth Image Converted into Sound

    PubMed Central

    Stoll, Chloé; Palluel-Germain, Richard; Fristot, Vincent; Pellerin, Denis; Alleysson, David; Graff, Christian

    2015-01-01

    Background. Common manufactured depth sensors generate depth images that humans normally obtain from their eyes and hands. Various designs converting spatial data into sound have recently been proposed, speculating on their applicability as sensory substitution devices (SSDs). Objective. We tested such a design as a travel aid in a navigation task. Methods. Our portable device (MeloSee) converted the 2D array of a depth image into a melody in real time. Distance from the sensor was translated into sound intensity, stereo-modulated laterally, and pitch represented verticality. Twenty-one blindfolded young adults navigated along four different paths during two sessions separated by a one-week interval. In some instances, a dual task required them to recognize a temporal pattern applied through a tactile vibrator while they navigated. Results. Participants learned how to use the system both on new paths and on those they had already navigated. Based on travel time and errors, performance improved from one week to the next. The dual task was achieved successfully, slightly affecting but not preventing effective navigation. Conclusions. The use of Kinect-type sensors to implement SSDs is promising, but it is restricted to indoor use and is inefficient at very short range. PMID:27019586
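
    A hypothetical illustration of the mapping described (distance to intensity, lateral position to stereo panning, verticality to pitch). The frequency range, burst duration, and column-by-column scheme below are assumptions for illustration, not MeloSee's actual parameters.

```python
import numpy as np

def depth_column_to_tone(depth_col, col_frac, t, f_low=200.0, f_high=2000.0,
                         max_range_m=4.0):
    """Sonify one column of a depth image as a stereo tone burst.

    depth_col : 1-D array of depths (metres) for a single image column
    col_frac  : horizontal position in [0, 1], used for stereo panning
    t         : time axis in seconds (np.arange(n)/sr)
    """
    rows = len(depth_col)
    # Pitch encodes verticality: top rows get high notes, bottom rows low notes.
    freqs = np.geomspace(f_high, f_low, rows)
    # Loudness encodes nearness: closer surfaces sound louder.
    amps = np.clip(1.0 - depth_col / max_range_m, 0.0, 1.0)
    mono = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t[None, :])).sum(0)
    mono /= max(rows, 1)
    # Stereo panning encodes the column's lateral position.
    return np.stack([(1.0 - col_frac) * mono, col_frac * mono], axis=0)

# Example: sonify the centre column of a synthetic 48 x 64 depth map.
sr = 44100
t = np.arange(int(0.05 * sr)) / sr
depth = np.full((48, 64), 3.0)
depth[10:30, 30:40] = 1.0                      # a near object in the middle
stereo = depth_column_to_tone(depth[:, 32], 32 / 64, t)
```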

  15. Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system

    NASA Astrophysics Data System (ADS)

    Zheng, Yipeng; Tan, Wenjiang; Si, Jinhai; Ren, YuHu; Xu, Shichao; Tong, Junyi; Hou, Xun

    2016-09-01

    We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.

  16. Imaging depth and multiple scattering in laser speckle contrast imaging

    PubMed Central

    Davis, Mitchell A.; Kazmi, S. M. Shams; Dunn, Andrew K.

    2014-01-01

    Abstract. Laser speckle contrast imaging (LSCI) is a powerful and simple method for full field imaging of blood flow. However, the depth dependence and the degree of multiple scattering have not been thoroughly investigated. We employ three-dimensional Monte Carlo simulations of photon propagation combined with high resolution vascular anatomy to investigate these two issues. We found that 95% of the detected signal comes from the top 700 μm of tissue. Additionally, we observed that single-intravascular scattering is an accurate description of photon sampling dynamics, but that regions of interest (ROIs) in areas free of obvious surface vessels had fewer intravascular scattering events than ROI over resolved surface vessels. Furthermore, we observed that the local vascular anatomy can strongly affect the depth dependence of LSCI. We performed simulations over a wide range of intravascular and extravascular scattering properties to confirm the applicability of these results to LSCI imaging over a wide range of visible and near-infrared wavelengths. PMID:25089945
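
    The depth-sampling question addressed here can be illustrated with a deliberately simplified Monte Carlo toy: photons take exponentially distributed steps along a single axis, and the maximum depth reached by each photon that re-emerges through the surface is recorded. The paper's simulations are fully three-dimensional and use real vascular anatomy; the optical properties below are placeholders.

```python
import numpy as np

def backscattered_depths(n_photons=20000, mu_s=10.0, mu_a=0.02, seed=0):
    """Toy 1-D Monte Carlo: return the maximum depth (mm) reached by each
    photon that re-emerges through the tissue surface."""
    rng = np.random.default_rng(seed)
    depths = []
    for _ in range(n_photons):
        z, max_z, direction = 0.0, 0.0, 1.0        # launch into the tissue
        for _ in range(1000):
            step = rng.exponential(1.0 / (mu_s + mu_a))
            z += direction * step
            if z <= 0.0:                           # escaped back to the detector
                depths.append(max_z)
                break
            max_z = max(max_z, z)
            direction = rng.choice([-1.0, 1.0])    # crude isotropic scattering
    return np.array(depths)

d = backscattered_depths()
print("95% of detected photons stay above", np.quantile(d, 0.95), "mm")
```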

  17. Objective, comparative assessment of the penetration depth of temporal-focusing microscopy for imaging various organs

    NASA Astrophysics Data System (ADS)

    Rowlands, Christopher J.; Bruns, Oliver T.; Bawendi, Moungi G.; So, Peter T. C.

    2015-06-01

    Temporal focusing is a technique for performing axially resolved widefield multiphoton microscopy with a large field of view. Despite significant advantages over conventional point-scanning multiphoton microscopy in terms of imaging speed, the need to collect the whole image simultaneously means that it is expected to achieve a lower penetration depth in common biological samples compared to point-scanning. We assess the penetration depth using a rigorous objective criterion based on the modulation transfer function, comparing it to point-scanning multiphoton microscopy. Measurements are performed in a variety of mouse organs in order to provide practical guidance as to the achievable penetration depth for both imaging techniques. It is found that two-photon scanning microscopy has approximately twice the penetration depth of temporal-focusing microscopy, and that penetration depth is organ-specific; the heart has the lowest penetration depth, followed by the liver, lungs, and kidneys, then the spleen, and finally white adipose tissue.

  18. Objective, comparative assessment of the penetration depth of temporal-focusing microscopy for imaging various organs

    PubMed Central

    Rowlands, Christopher J.; Bruns, Oliver T.; Bawendi, Moungi G.; So, Peter T. C.

    2015-01-01

    Abstract. Temporal focusing is a technique for performing axially resolved widefield multiphoton microscopy with a large field of view. Despite significant advantages over conventional point-scanning multiphoton microscopy in terms of imaging speed, the need to collect the whole image simultaneously means that it is expected to achieve a lower penetration depth in common biological samples compared to point-scanning. We assess the penetration depth using a rigorous objective criterion based on the modulation transfer function, comparing it to point-scanning multiphoton microscopy. Measurements are performed in a variety of mouse organs in order to provide practical guidance as to the achievable penetration depth for both imaging techniques. It is found that two-photon scanning microscopy has approximately twice the penetration depth of temporal-focusing microscopy, and that penetration depth is organ-specific; the heart has the lowest penetration depth, followed by the liver, lungs, and kidneys, then the spleen, and finally white adipose tissue. PMID:25844509

  19. Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina

    NASA Astrophysics Data System (ADS)

    An, Lin; Shen, Tueng T.; Wang, Ruikang K.

    2011-10-01

    This paper presents comprehensive and depth-resolved retinal microvasculature images of the human retina achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Because of its high flow sensitivity, UHS-OMAG is much more sensitive than the traditional OMAG system to tissue motion caused by involuntary movement of the human eye and head. To mitigate these motion artifacts in the final imaging results, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerance, critical for the UHS-OMAG system to achieve retinal microvasculature images of high quality. Furthermore, the new UHS-OMAG system employs a high speed line scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first has low lateral resolution (16 μm) and a wide field of view (4 × 3 mm2 with a single scan and 7 × 8 mm2 for multiple scans), while the second has high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm2 with a single scan). The imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.

  20. Piezoelectric annular array for large depth of field photoacoustic imaging

    PubMed Central

    Passler, K.; Nuster, R.; Gratt, S.; Burgholzer, P.; Paltauf, G.

    2011-01-01

    A piezoelectric detection system consisting of an annular array is investigated for large depth of field photoacoustic imaging. In comparison to a single-ring detection system, X-shaped imaging artifacts are suppressed. Sensitivity and image resolution studies are performed in simulations and in experiments and compared to a simulated spherical detector. In experiments, an eight-ring detection system offers an extended depth of field over a range of 16 mm with almost constant lateral resolution. PMID:21991555

  1. Retrospective sputter depth profiling using 3D mass spectral imaging.

    PubMed

    Zheng, Leiliang; Wucher, Andreas; Winograd, Nicholas

    2011-02-01

    A molecular multilayer stack composed of alternating Langmuir-Blodgett films was analyzed by ToF-SIMS imaging in combination with intermediate sputter erosion using a focused C60(+) cluster ion beam. From the resulting dataset, depth profiles of any desired lateral portion of the analyzed field-of-view can be extracted in retrospect, allowing the influence of the gating area on the apparent depth resolution to be assessed. In a similar way, the observed degradation of depth resolution with increasing depth of the analyzed interface can be analyzed in order to determine the 'intrinsic' depth resolution of the method.

  2. Increasing the imaging depth through computational scattering correction (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Koberstein-Schwarz, Benno; Omlor, Lars; Schmitt-Manderbach, Tobias; Mappes, Timo; Ntziachristos, Vasilis

    2016-03-01

    Imaging depth is one of the most prominent limitations in light microscopy. The depth to which we are still able to resolve biological structures is limited by the scattering of light within the sample. We have developed an algorithm to compensate for the influence of scattering. The potential of the algorithm is demonstrated on a 3D image stack of a zebrafish embryo captured with a selective plane illumination microscope (SPIM). With our algorithm we were able to shift the point in depth where scattering starts to blur the image and affect image quality by around 30 µm. For the reconstruction, the algorithm uses only information from within the image stack. Therefore, the algorithm can be applied to the image data from any SPIM system without further hardware adaptation. There is also no need for multiple scans from different views to perform the reconstruction. The underlying model treats the recorded image as a convolution between the distribution of fluorophores and a point spread function that describes the blur due to scattering. Our algorithm performs a space-variant blind deconvolution on the image. To account for the increasing amount of scattering in deeper tissue, we introduce a new regularizer that models the increasing width of the point spread function in order to improve the image quality deep in the sample. Since the assumptions the algorithm is based on are not limited to SPIM images, it should also be applicable to other imaging techniques that provide a 3D image volume.

  3. Enhanced optical clearing of skin in vivo and optical coherence tomography in-depth imaging

    NASA Astrophysics Data System (ADS)

    Wen, Xiang; Jacques, Steven L.; Tuchin, Valery V.; Zhu, Dan

    2012-06-01

    The strong optical scattering of skin tissue makes it very difficult for optical coherence tomography (OCT) to achieve deep imaging in skin. Significant optical clearing of in vivo rat skin sites was achieved within 15 min by topical application of an optical clearing agent PEG-400, a chemical enhancer (thiazone or propanediol), and physical massage. Only when all three components were applied together could a 15 min treatment achieve a threefold increase in the OCT reflectance from a 300 μm depth and a 31% enhancement in the imaging depth Z_threshold.

  4. Calibrating river bathymetry via image to depth quantile transformation

    NASA Astrophysics Data System (ADS)

    Legleiter, C. J.

    2015-12-01

    Remote sensing has emerged as a powerful means of measuring river depths, but standard algorithms such as Optimal Band Ratio Analysis (OBRA) require field measurements to calibrate image-derived estimates. Such reliance upon field-based calibration undermines the advantages of remote sensing. This study introduces an alternative approach based on the probability distribution of depths d within a reach. Provided a quantity X related to d can be derived from a remotely sensed data set, image-to-depth quantile transformation (IDQT) infers depths throughout the image by linking the cumulative distribution function (CDF) of X to that of d. The algorithm involves determining, for each pixel in the image, the CDF value for that particular value of X/X̄ and then inferring the depth at that location from the inverse CDF of the scaled depths d/d̄, where the overbar denotes a reach mean. For X/X̄, an empirical CDF can be derived directly from pixel values, or a probability distribution can be fitted. Similarly, the CDF of d/d̄ can be obtained from field data or from a theoretical model of the frequency distribution of d within a reach; gamma distributions have been used for this purpose. In essence, the probability distributions calibrate X to d while the image provides the spatial distribution of depths. IDQT offers a number of advantages: 1) direct field measurements of d during image acquisition are not absolutely necessary; 2) because the X vs. d relation need not be linear, negative depth estimates along channel margins and shallow bias in pools are avoided; and 3) because individual pixels are not linked to specific depth measurements, accurate geo-referencing of field and image data sets is not critical. Application of OBRA and IDQT to a gravel-bed river indicated that the new, probabilistic algorithm was as accurate as the standard, regression-based approach and led to more hydraulically reasonable bathymetric maps.
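
    A conceptual sketch of the quantile transformation described above: compute each pixel's empirical quantile within the reach, then map it through the inverse CDF of depth. The function name and arguments are illustrative; the depth CDF may come from field data or a fitted (e.g. gamma) distribution, as the abstract notes.

```python
import numpy as np

def idqt(x_image, depth_samples, valid_mask=None):
    """Image-to-Depth Quantile Transformation (conceptual sketch).

    x_image       : 2-D array of an image-derived quantity X related to depth
    depth_samples : 1-D array defining the depth CDF (field survey, or draws
                    from a theoretical distribution of depths within the reach)
    """
    x = x_image if valid_mask is None else np.where(valid_mask, x_image, np.nan)
    finite = np.isfinite(x)

    # Empirical CDF value (quantile) of each pixel's X within the reach.
    ranks = x[finite].argsort().argsort()
    q = np.full(x.shape, np.nan)
    q[finite] = (ranks + 0.5) / finite.sum()

    # Map each quantile through the inverse CDF of depth.
    depth = np.full(x.shape, np.nan)
    depth[finite] = np.quantile(np.asarray(depth_samples), q[finite])
    return depth
```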

  5. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    NASA Astrophysics Data System (ADS)

    Legleiter, Carl J.

    2016-05-01

    Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R2 = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.

  6. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    USGS Publications Warehouse

    Legleiter, Carl

    2016-01-01

    Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R2 = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.

  7. Total variation based image deconvolution for extended depth-of-field microscopy images

    NASA Astrophysics Data System (ADS)

    Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.

    2015-03-01

    One approach to a detailed understanding of dynamical cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with high lateral resolution in the range of a few 100 nm. In a previous study, extended depth-of-field microscopy (EDF-microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was proved in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of the light. Hence the focal ellipsoid is smeared out and images at first appear blurred. Image restoration by deconvolution using the known point-spread-function (PSF) of the optical system is necessary to achieve sharp microscopic images with an extended depth of field. This work is focused on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. The inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and because the PSF used for deconvolution is exact in only one plane within the object. We use nonlinear Total Variation based image restoration techniques, in which different types of noise can be treated properly. Various algorithms are evaluated for artificially generated 3D images as well as for fluorescence measurements of BPAE cells.

  8. Increasing the depth of field in Multiview 3D images

    NASA Astrophysics Data System (ADS)

    Lee, Beom-Ryeol; Son, Jung-Young; Yano, Sumio; Jung, Ilkwon

    2016-06-01

    A super-multiview condition simulator that can project up to four different view images to each eye is introduced. With images carrying both disparity and perspective, the simulator shows that the depth of field (DOF) is extended beyond the default DOF values as the number of simultaneously but separately projected view images to each eye increases. The DOF range can be extended to nearly 2 diopters with four simultaneous view images. However, the DOF increments for the image with both disparity and perspective are not as prominent as for the image with disparity only.

  9. Depth-resolved imaging by using volume holograms

    NASA Astrophysics Data System (ADS)

    Xu, Zhiqiang; Jiang, Zhuqing; Yang, Jing; Tao, Shiquan

    2009-07-01

    In this paper, the reconstructed images of a tiny object formed with a volume hologram are investigated by examining the effect of Bragg mismatch on imaging quality. The imaging depth resolutions of volume holograms with different radii are compared. Furthermore, the ability of volume holographic gratings to simultaneously image different depths of the object space is demonstrated experimentally by recording two holographic gratings in the same material. The results show that the depth resolution of the VHI system is 2.1 mm in our experiments, in which a volume hologram is recorded in a 2-mm-thick LiNbO3:Fe:Cu crystal with two recording beams interfering at a wavelength of 532 nm and is located at a working distance of f = 75 mm from the objective lens.

  10. Recovering depth from focus using iterative image estimation techniques

    SciTech Connect

    Vitria, J.; Llacer, J.

    1993-09-01

    In this report we examine the possibility of using linear and nonlinear image estimation techniques to build a depth map of a three dimensional scene from a sequence of partially focused images. In particular, the techniques proposed to solve the problem of construction of a depth map are: (1) linear methods based on regularization procedures and (2) nonlinear methods based on statistical modeling. In the first case, we have implemented a matrix-oriented method to recover the point spread function (PSF) of a sequence of partially defocused images. In the second case, the chosen method has been a procedure based on image estimation by means of the EM algorithm, a well known technique in image reconstruction in medical applications. This method has been generalized to deal with optically defocused image sequences.

  11. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. With the reliance of such systems on visual image characteristics, a need to overcome image degradations, such as random image-capture noise, motion, and quantization effects, is clearly necessary. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields allows a means of incorporating the temporal

  12. Visually preserving stereoscopic image retargeting using depth carving

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Ma, Huadong; Liu, Liang

    2016-03-01

    This paper presents a method for retargeting a pair of stereoscopic images. Previous works have leveraged seam carving and image warping methods for two-dimensional image editing to address this issue. However, they did not consider the full advantages of the properties of stereoscopic images. Our approach offers substantial performance improvements over the state-of-the-art; the key insights driving the approach are that the input image pair can be decomposed into different depth layers according to the disparity and image segmentation, and the depth cues allow us to address the problem in a three-dimensional (3-D) space domain for best preserving objects. We propose depth carving that extends seam carving in a single image to resize the stereo image pair with disparity consistency. Our method minimizes the shape distortion and preserves object boundaries by creating new occlusions. As a result, the retargeted image pair preserves the stereoscopic quality and protects the original 3-D scene structure. Experimental results demonstrate that our method outperforms the previous methods.

  13. Depth perception from image defocus in a jumping spider.

    PubMed

    Nagata, Takashi; Koyanagi, Mitsumasa; Tsukamoto, Hisao; Saeki, Shinjiro; Isono, Kunio; Shichida, Yoshinori; Tokunaga, Fumio; Kinoshita, Michiyo; Arikawa, Kentaro; Terakita, Akihisa

    2012-01-27

    The principal eyes of jumping spiders have a unique retina with four tiered photoreceptor layers, on each of which light of different wavelengths is focused by a lens with appreciable chromatic aberration. We found that all photoreceptors in both the deepest and second-deepest layers contain a green-sensitive visual pigment, although green light is only focused on the deepest layer. This mismatch indicates that the second-deepest layer always receives defocused images, which contain depth information of the scene in optical theory. Behavioral experiments revealed that depth perception in the spider was affected by the wavelength of the illuminating light, which affects the amount of defocus in the images resulting from chromatic aberration. Therefore, we propose a depth perception mechanism based on how much the retinal image is defocused.

  14. Computational multi-depth single-photon imaging.

    PubMed

    Shin, Dongeek; Xu, Feihu; Wong, Franco N C; Shapiro, Jeffrey H; Goyal, Vivek K

    2016-02-08

    We present an imaging framework that is able to accurately reconstruct multiple depths at individual pixels from single-photon observations. Our active imaging method models the single-photon detection statistics from multiple reflectors within a pixel, and it also exploits the fact that a multi-depth profile at each pixel can be expressed as a sparse signal. We interpret the multi-depth reconstruction problem as a sparse deconvolution problem using single-photon observations, create a convex problem through discretization and relaxation, and use a modified iterative shrinkage-thresholding algorithm to efficiently solve for the optimal multi-depth solution. We experimentally demonstrate that the proposed framework is able to accurately reconstruct the depth features of an object that is behind a partially-reflecting scatterer and 4 m away from the imager with root mean-square error of 11 cm, using only 19 signal photon detections per pixel in the presence of moderate background light. In terms of root mean-square error, this is a factor of 4.2 improvement over the conventional method of Gaussian-mixture fitting for multi-depth recovery.
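
    The sparse-deconvolution idea can be illustrated with a plain ISTA loop on a photon-count histogram; the paper itself uses a modified iterative shrinkage-thresholding algorithm on a Poisson-based convex relaxation, so this least-squares sketch and its lam, n_iter values are simplifying assumptions.

```python
import numpy as np

def ista_multidepth(histogram, irf, lam=0.1, n_iter=200):
    """Recover a sparse multi-depth reflectivity profile from a photon-count
    histogram by l1-regularized deconvolution (plain ISTA sketch).

    histogram : 1-D array of photon counts per time bin
    irf       : 1-D instrument/pulse response on the same time base
                (assumed no longer than the histogram)
    """
    n = len(histogram)
    # Circular convolution matrix A so that histogram ~= A @ profile.
    A = np.array([np.roll(np.pad(irf, (0, n - len(irf))), k) for k in range(n)]).T
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = A.T @ (A @ x - histogram)
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
        x = np.maximum(x, 0.0)                      # reflectivity is nonnegative
    return x                                        # peaks mark depths (time bins)
```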

  15. Noise removal in extended depth of field microscope images through nonlinear signal processing.

    PubMed

    Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J

    2013-04-01

    Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.

  16. Demineralization Depth Using QLF and a Novel Image Processing Software

    PubMed Central

    Wu, Jun; Donly, Zachary R.; Donly, Kevin J.; Hackmyer, Steven

    2010-01-01

    Quantitative light-induced fluorescence (QLF) has been widely used to detect tooth demineralization, indicated by fluorescence loss with respect to the surrounding sound enamel. The correlation between fluorescence loss and demineralization depth is not fully understood. The purpose of this project was to study this correlation to estimate demineralization depth. Extracted teeth were collected. Artificial caries-like lesions were created and imaged with QLF. Novel image processing software was developed to measure the largest percent of fluorescence loss in the region of interest. All teeth were then sectioned and imaged by polarized light microscopy. The largest depth of demineralization was measured with NIH ImageJ software. The statistical linear regression method was applied to analyze these data. The linear regression model was Y = 0.32X + 0.17, where X was the percent loss of fluorescence and Y was the depth of demineralization. The correlation coefficient was 0.9696. The two-tailed t statistic for the coefficient was 7.93 (P = .0014), and the F statistic for the entire model was 62.86 (P = .0013). The results indicated a statistically significant linear correlation between the percent loss of fluorescence and the depth of enamel demineralization. PMID:20445755
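
    A trivial application of the reported fit; the abstract does not state the depth unit, so the output unit is left unspecified here.

```python
def demineralization_depth(percent_fluorescence_loss):
    """Reported linear fit Y = 0.32 * X + 0.17, where X is the largest percent
    fluorescence loss and Y the largest lesion depth (unit not stated in the
    abstract)."""
    return 0.32 * percent_fluorescence_loss + 0.17

print(demineralization_depth(25.0))   # 25% fluorescence loss -> 8.17
```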

  17. Maximum imaging depth of two-photon autofluorescence microscopy in epithelial tissues.

    PubMed

    Durr, Nicholas J; Weisspfennig, Christian T; Holfeld, Benjamin A; Ben-Yakar, Adela

    2011-02-01

    Endogenous fluorescence provides morphological, spectral, and lifetime contrast that can indicate disease states in tissues. Previous studies have demonstrated that two-photon autofluorescence microscopy (2PAM) can be used for noninvasive, three-dimensional imaging of epithelial tissues down to approximately 150 μm beneath the skin surface. We report ex-vivo 2PAM images of epithelial tissue from a human tongue biopsy down to 370 μm below the surface. At greater than 320 μm deep, the fluorescence generated outside the focal volume degrades the image contrast to below one. We demonstrate that these imaging depths can be reached with 160 mW of laser power (2-nJ per pulse) from a conventional 80-MHz repetition rate ultrafast laser oscillator. To better understand the maximum imaging depths that we can achieve in epithelial tissues, we studied image contrast as a function of depth in tissue phantoms with a range of relevant optical properties. The phantom data agree well with the estimated contrast decays from time-resolved Monte Carlo simulations and show maximum imaging depths similar to that found in human biopsy results. This work demonstrates that the low staining inhomogeneity (∼ 20) and large scattering coefficient (∼ 10 mm(-1)) associated with conventional 2PAM limit the maximum imaging depth to 3 to 5 mean free scattering lengths deep in epithelial tissue.

  18. Extended depth of field imaging at 94 GHz

    NASA Astrophysics Data System (ADS)

    Mait, Joseph N.; Wikner, David A.; Mirotznik, Mark S.; van der Gracht, Joseph; Behrmann, Gregory P.; Good, Brandon L.; Mathews, Scott A.

    2008-04-01

    We describe a computational imaging technique to extend the depth of field of a 94-GHz imaging system. The technique uses a cubic phase element in the pupil plane of the system to render system operation relatively insensitive to object distance. However, the cubic phase element also introduces aberrations; since these are fixed and known, we remove them using post-detection signal processing. We present experimental results that validate system performance and indicate a greater than fourfold increase in depth of field, from 17" to greater than 68".

  19. Enhanced seismic depth imaging of complex fault-fold structures

    NASA Astrophysics Data System (ADS)

    Kirtland Grech, Maria Graziella

    Synthetic seismic data were acquired over numerical and physical models, representing fault-fold structures encountered in the Canadian Rocky Mountain Foothills, to investigate which migration algorithm produces the best image in such complex environments. Results showed that pre-stack depth migration from topography with the known velocity model yielded the optimum migrated image. Errors in the positioning of a target underneath a dipping anisotropic overburden were also studied using multicomponent data. The largest error was observed on P-wave data, where anisotropy was highest at 18%. For an overburden thickness of 1500 m, the target was imaged 300 m updip from its true location. Field data from a two-dimensional surface seismic line and a multioffset vertical seismic profile (VSP) from the Foothills of southern Alberta, Canada, were processed using a flow designed to yield an optimum depth image. Traveltime inversion of the first arrivals from all the shots of the multioffset VSP revealed that the Mesozoic shale strata in the area exhibit seismic velocity anisotropy. The anisotropy parameters ε and δ were calculated to be 0.1 and 0.05, respectively. Anisotropic pre-stack depth migration code for VSP and surface seismic data, which uses a modified version of a raytracer developed in this thesis for the computation of traveltime tables, was also developed. The algorithm was then used in a new method for integrated VSP and surface seismic depth imaging. Results from the migration of synthetic and field data show that the resulting integrated image is superior to that obtained from the migration of either data set alone or to that obtained from the conventional "splicing" approach. The combination of borehole and surface seismic data for anisotropy analysis, velocity model building, and depth migration yielded a robust image even when the geology was complex, thus permitting a more accurate interpretation of the exploration target.

  20. Compressive Passive Millimeter Wave Imaging with Extended Depth of Field

    DTIC Science & Technology

    2012-01-01

    Over the past several years, imaging using millimeter wave (mmW) and terahertz technology has gained a lot of interest [1], [2], [3]. This interest...weapons are clearly detected in the mmW image. Recently, in [3], Mait et al. presented a computational imaging method to extend the depth-of-field of a...passive mmW imaging system. The method uses a cubic phase element in the pupil plane of the system to render system operation relatively insensitive

  1. Achievements and challenges of EUV mask imaging

    NASA Astrophysics Data System (ADS)

    Davydova, Natalia; van Setten, Eelco; de Kruif, Robert; Connolly, Brid; Fukugami, Norihito; Kodera, Yutaka; Morimoto, Hiroaki; Sakata, Yo; Kotani, Jun; Kondo, Shinpei; Imoto, Tomohiro; Rolff, Haiko; Ullrich, Albrecht; Lammers, Ad; Schiffelers, Guido; van Dijk, Joep

    2014-07-01

    The impact of various mask parameters on CDU, combined in a total mask budget, is presented for 22 nm lines on reticles used for NXE:3300 qualification. Apart from standard mask CD measurements, actinic spectrometry of the multilayer is used to qualify reflectance uniformity over the image field; advanced 3D metrology is applied for absorber profile characterization, including absorber height and side wall angle. The predicted mask impact on CDU is verified using actual exposure data collected on multiple NXE:3300 scanners. Mask 3D effects are addressed, manifesting themselves in best focus shifts for different structures exposed with off-axis illumination. Experimental NXE:3300 results for 16 nm dense lines and 20 nm (semi-)isolated spaces are shown: the best focus range reaches 24 nm. A mitigation strategy based on absorber height optimization is proposed, supported by experimental results from a special mask with varying absorber heights. Further development of a black image border for EUV masks is considered. The image border is a pattern-free area surrounding the image field that prevents exposure of the image field's neighborhood on the wafer. A normal EUV absorber is not suitable for this purpose as it has 1-3% EUV reflectance. A current solution is etching the multilayer (ML) down to the substrate, reducing EUV reflectance to <0.05%. The next step in the development of the black border is the reduction of DUV out-of-band reflectance (<1.5%) in order to cope with DUV light present in EUV scanners. Promising results achieved in this direction are shown.

  2. Effects of the "Auditory Discrimination in Depth Program" on Auditory Conceptualization and Reading Achievement.

    ERIC Educational Resources Information Center

    Roberts, Timothy Gerald

    Statistically significant differences were not found between the treatment and non-treatment groups in a study designed to investigate the effectiveness of the Auditory Discrimination in Depth (A.D.D.) Program. The treatment group involved thirty-nine normally achieving and educationally handicapped students who were given the A.D.D. Program…

  3. Spatial Filter Based Bessel-Like Beam for Improved Penetration Depth Imaging in Fluorescence Microscopy

    NASA Astrophysics Data System (ADS)

    Purnapatra, Subhajit B.; Bera, Sampa; Mondal, Partha Pratim

    2012-09-01

    Monitoring and visualizing specimens at a large penetration depth is a challenge. At depths of hundreds of microns, several physical effects (such as scattering, PSF distortion, and noise) deteriorate the image quality and prohibit a detailed study of key biological phenomena. In this study, we use a Bessel-like beam in conjunction with an orthogonal detection system to achieve depth imaging. A Bessel-like penetrating diffractionless beam is generated by engineering the back-aperture of the excitation objective. The proposed excitation scheme allows continuous scanning by simply translating the detection PSF. This type of imaging system is beneficial for obtaining depth information from any desired specimen layer, including nano-particle tracking in thick tissue. As demonstrated by imaging fluorescent polymer-tagged CaCO3 particles and yeast cells in a tissue-like gel matrix, the system offers a penetration depth that extends up to 650 µm. This achievement will advance the field of fluorescence imaging and deep nano-particle tracking.

  4. Cell depth imaging by point laser scanning fluorescence microscopy with an optical disk pickup head

    NASA Astrophysics Data System (ADS)

    Tsai, Rung-Ywan; Chen, Jung-Po; Lee, Yuan-Chin; Chiang, Hung-Chih; Cheng, Chih-Ming; Huang, Chun-Chieh; Huang, Tai-Ting; Cheng, Chung-Ta; Tiao, Golden

    2015-09-01

    A compact, cost-effective, and position-addressable digital laser scanning microscopy (DLSM) instrument is built using a commercially available Blu-ray disc read-only memory (BD-ROM) pickup head. Fluorescent cell images captured by DLSM have a resolution of 0.38 µm. Because of the position-addressable function, multispectral fluorescence cell images are captured from the same sample slide with different excitation laser sources. Specially designed objective lenses with the same working distance as the image-capturing beam are used for the different excitation laser sources. By accurately controlling the tilt angle of the sample slide or by moving the collimator lens of the image-capturing beam, fluorescence cell images at different depth positions in the sample are obtained. Thus, z-section images with micrometer depth resolution are achievable.

  5. Comparison of curricular breadth, depth, and recurrence and physics achievement of TIMSS Population 3 countries

    NASA Astrophysics Data System (ADS)

    Murdock, John

    This study is a secondary analysis of data from the 1995 administration of the Third International Mathematics and Science Study (TIMSS). The purpose is to compare the breadth, depth, and recurrence of the typical physics curriculum in the United States with the typical curricula of other countries and to determine whether there are associations between these three curricular constructs and physics achievement. The first data analysis consisted of descriptive statistics (means, standard deviations, and standardized scores) for each of the three curricular variables; it was used to compare the curricular profile in physics of the United States with the profiles of the other countries in the sample. The second data analysis consisted of six sets of correlations relating the three curricular variables with achievement: five for the five physics content areas and a sixth for all of physics. This analysis was used to determine whether any associations exist between the three curricular constructs and achievement. The results show that the U.S. curriculum has low breadth, low depth, and high recurrence, and that the U.S. curricular profile is unique when compared with the profiles of the other countries in the sample. The only statistically significant correlation is a positive one between achievement and depth; the correlations between breadth and achievement and between recurrence and achievement were not statistically significant. Based on the results of this study, depth of curriculum is the only curricular variable that is closely related to physics achievement for the TIMSS sample. Recurrence of curriculum is not related to physics achievement in TIMSS Population 3 countries. The results show no relationship between breadth and achievement, but the physics topics in the TIMSS content framework do not give a complete picture of the breadth of the physics curriculum in the participating countries. The unique curricular

  6. Measurement depth enhancement in terahertz imaging of biological tissues.

    PubMed

    Oh, Seung Jae; Kim, Sang-Hoon; Jeong, Kiyoung; Park, Yeonji; Huh, Yong-Min; Son, Joo-Hiuk; Suh, Jin-Suck

    2013-09-09

    We demonstrate the use of a THz penetration-enhancing agent (THz-PEA) to enhance the terahertz (THz) wave penetration depth in tissues. The THz-PEA is a biocompatible material with absorption lower than that of water, and it is easily absorbed into tissues. When glycerol was used as a THz-PEA, the peak value of the THz signal, which was transmitted through the fresh tissue and reflected by a metal target, was almost double that of tissue without glycerol. THz time-of-flight imaging (B-scan) was used to display the sequential glycerol delivery images. Enhancement of the penetration depth was confirmed after an artificial tumor was placed below fresh skin. We thus conclude that the THz-PEA technique can potentially be employed to enhance the image contrast of abnormal lesions below the skin.

  7. Robust image, depth, and occlusion generation from uncalibrated stereo

    NASA Astrophysics Data System (ADS)

    Barenbrug, B.; Berretty, R.-P. M.; Klein Gunnewiek, R.

    2008-02-01

    Philips is developing a product line of multi-view auto-stereoscopic 3D displays [1]. For interfacing, the image-plus-depth format is used [2, 3]. Being independent of specific display properties, such as the number of views, the view mapping on the pixel grid, etc., this interface format allows optimal multi-view visualisation of content from many different sources, while maintaining interoperability between display types. A vastly growing number of productions from the entertainment industry are aiming at 3D movie theatres. These productions use a two-view format, primarily intended for eye-wear-assisted viewing. It has been shown [4] how to convert these sequences into the image-plus-depth format. This results in a single-layer depth profile, lacking information about areas that are occluded and can be revealed by the stereoscopic parallax. Recently, it has been shown how to compute intermediate views for a stereo pair [4, 5]. Unfortunately, these approaches are not compatible with the image-plus-depth format, which might hamper their applicability for broadcast 3D television [3].

  8. Depth.

    PubMed

    Koenderink, Jan J; van Doorn, Andrea J; Wagemans, Johan

    2011-01-01

    Depth is the feeling of remoteness, or separateness, that accompanies awareness in human modalities like vision and audition. In specific cases depths can be graded on an ordinal scale, or even measured quantitatively on an interval scale. In the case of pictorial vision this is complicated by the fact that human observers often appear to apply mental transformations that involve depths in distinct visual directions. This implies that a comparison of empirically determined depths between observers involves pictorial space as an integral entity, whereas comparing pictorial depths as such is meaningless. We describe the formal structure of pictorial space purely in the phenomenological domain, without taking recourse to the theories of optics, which properly apply to physical space, a distinct ontological domain. We introduce a number of general ways to design and implement methods of geodesy in pictorial space, and discuss some basic problems associated with such measurements. We deal mainly with conceptual issues.

  9. Obtaining anisotropic velocity data for proper depth seismic imaging

    SciTech Connect

    Egerev, Sergey; Yushin, Victor; Ovchinnikov, Oleg; Dubinsky, Vladimir; Patterson, Doug

    2012-05-24

    The paper deals with the problem of obtaining anisotropic velocity data from continuous acoustic impedance-based measurements made while scanning in the axial direction along the walls of the borehole. Full-conductivity diagrams of the piezoceramic transducer were used to derive anisotropy parameters of the rock sample. The measurements are intended to support accurate depth imaging of seismic data. Understanding these common anisotropy effects is important when interpreting data where anisotropy is present.

  10. High Bit-Depth Medical Image Compression with HEVC.

    PubMed

    Parikh, Saurin; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2017-01-27

    Efficient storage and retrieval of medical images has a direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, newer formats such as HEVC can provide better compression efficiency than JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and the complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes an acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. In addition, a new method for reducing the computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with a negligible increase in file size.

  11. Quantitative determination of maximal imaging depth in all-NIR multiphoton microscopy images of thick tissues

    NASA Astrophysics Data System (ADS)

    Sarder, Pinaki; Akers, Walter J.; Sudlow, Gail P.; Yazdanfar, Siavash; Achilefu, Samuel

    2014-02-01

    We report two methods for quantitatively determining the maximal imaging depth from thick-tissue images captured using all-near-infrared (NIR) multiphoton microscopy (MPM). All-NIR MPM is performed using 1550 nm laser excitation with NIR detection. This method enables imaging more than five-fold deeper in thick tissues compared with other NIR excitation microscopy methods. In this study, we show a correlation between the multiphoton signal along the depth of tissue samples and the shape of the corresponding empirical probability density function (pdf) of the photon counts. Histograms from this analysis become increasingly symmetric with imaging depth, and the distribution transitions toward the background distribution at greater depths, where the signal strength is expected to be weak and similar to the background. Motivated by these observations, we propose two independent methods for automatically determining the maximal imaging depth in all-NIR MPM images of thick tissues. The first method takes the maximal imaging depth to be the deepest image plane where the ratio between the mean and median of the empirical photon-count pdf is outside the vicinity of 1. The second method takes it to be the deepest image plane where the squared distance between the empirical photon-count mean obtained from the object and the mean obtained from the background is greater than a threshold. We demonstrate the application of these methods to all-NIR MPM images of mouse kidney tissues to study maximal depth penetration in such tissues.
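
    The two stopping rules described above reduce to simple per-plane statistics. Below is a minimal sketch (assuming the stack is a numpy array of photon counts ordered from shallow to deep, and using illustrative thresholds and synthetic data rather than values from the paper) that reports the deepest plane satisfying each criterion.

```python
# Hedged sketch of the two depth-stopping criteria; thresholds and data are illustrative.
import numpy as np

def max_depth_mean_median(stack, tol=0.1):
    """Deepest plane whose photon-count mean/median ratio is still outside the vicinity of 1."""
    deepest = -1
    for z, plane in enumerate(stack):
        median = np.median(plane)
        if median > 0 and abs(plane.mean() / median - 1.0) > tol:
            deepest = z
    return deepest

def max_depth_mean_distance(stack, background, thresh=0.25):
    """Deepest plane whose squared mean-distance from the background exceeds a threshold."""
    bg_mean = background.mean()
    deepest = -1
    for z, plane in enumerate(stack):
        if (plane.mean() - bg_mean) ** 2 > thresh:
            deepest = z
    return deepest

# Synthetic example: sparse bright structures whose signal decays with depth into noise.
rng = np.random.default_rng(0)
depths, size = 60, (64, 64)
sparse = rng.random(size) < 0.05
stack = np.stack([rng.poisson(2.0 + sparse * 80.0 * np.exp(-z / 15.0)).astype(float)
                  for z in range(depths)])
background = rng.poisson(2.0, size=size).astype(float)
print(max_depth_mean_median(stack), max_depth_mean_distance(stack, background))
```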

  12. Dual-imaging system for burn depth diagnosis.

    PubMed

    Ganapathy, Priya; Tamminedi, Tejaswi; Qin, Yi; Nanney, Lillian; Cardwell, Nancy; Pollins, Alonda; Sexton, Kevin; Yadegar, Jacob

    2014-02-01

    Currently, determination of burn depth and healing outcomes has been limited to subjective assessment or a single modality, e.g., laser Doppler imaging. Such measures have proven less than ideal. Recent developments in other non-contact technologies such as optical coherence tomography (OCT) and pulse speckle imaging (PSI) offer the promise that an intelligent fusion of information across these modalities can improve visualization of burn regions, thereby increasing the sensitivity of the diagnosis. In this work, we combined OCT and PSI images to classify the degree of burn (superficial, partial-thickness, and full-thickness burns). Algorithms were developed to integrate and visualize skin structure (with and without burns) from the two modalities. We completed the proposed initiatives by employing a porcine burn model and compiled results that attest to the utility of our dual-modal fusion approach. Computer-derived data indicating the varying burn depths were validated through immunohistochemical analysis performed on burned skin tissue. The combined performance of the OCT and PSI modalities provided an overall ROC-AUC of 0.87 (significant at p<0.001) in classifying different burn types measured 1 h after the burn wounds were created. Porcine model studies to assess the feasibility of this dual-imaging system for wound tracking are underway.

  13. A Bayesian framework for human body pose tracking from depth image sequences.

    PubMed

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body poses can be estimated through model fitting using dense correspondences between depth data and an articulated human model (the local optimization method). Although this usually achieves high accuracy thanks to the dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this key-point based method is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by the key-point based and local optimization methods. Experimental results and a performance comparison are presented to demonstrate the effectiveness of the proposed approach.

  14. Review of mesoscopic optical tomography for depth-resolved imaging of hemodynamic changes and neural activities.

    PubMed

    Tang, Qinggong; Lin, Jonathan; Tsytsarev, Vassiliy; Erzurumlu, Reha S; Liu, Yi; Chen, Yu

    2017-01-01

    Understanding the functional wiring of neural circuits and their patterns of activation following sensory stimulation is a fundamental task in the field of neuroscience. Furthermore, charting the activity patterns is undoubtedly important to elucidate how neural networks operate in the living brain. However, optical imaging must overcome the effects of light scattering in tissue, which limit the light penetration depth and affect both imaging quantitation and sensitivity. Laminar optical tomography (LOT) is a three-dimensional (3-D) in-vivo optical imaging technique that can be used for functional imaging. LOT can achieve a resolution of 100 to [Formula: see text] and a penetration depth of 2 to 3 mm, based on either absorption or fluorescence contrast, together with a large field of view and high acquisition speed. These advantages make LOT suitable for 3-D depth-resolved functional imaging of neural functions in the brain and spinal cord. We review the basic principles and instrumentation of representative LOT systems, followed by recent applications of LOT to 3-D imaging of neural activities in the rat forepaw stimulation model and the mouse whisker-barrel system.

  15. Assessing burn depth in tattooed burn lesions with LASCA Imaging

    PubMed Central

    Krezdorn, N.; Limbourg, A.; Paprottka, F.J.; Könneker; Ipaktchi, R.; Vogt, P.M

    2016-01-01

    Tattoos are on the rise, and so are patients with tattooed burn lesions. A proper assessment with regard to burn depth is often impeded by the tattoo dye. Laser speckle contrast analysis (LASCA) is a technique that evaluates burn lesions via relative perfusion analysis. We assessed the effect of tattoo skin pigmentation on LASCA perfusion imaging in a multicolour tattooed patient. The depth of burn lesions in multi-coloured tattooed and untattooed skin was assessed using LASCA. Relative perfusion was measured in perfusion units (PU) and compared across the various pigment colours, then correlated with the clinical evaluation of the lesion. Superficial partial-thickness burn (SPTB) lesions showed significantly elevated PU compared to normal skin; deep partial-thickness burns showed decreased PU levels. Compared to normal skin, various tattoo pigments showed either significantly lower PU values (blue, red, pink) or significantly increased values (black), whereas orange and yellow pigment showed values comparable to normal skin. In SPTB, black and blue pigment showed reduced perfusion; yellow pigment was similar to normal SPTB burn. Deep partial-thickness burn (DPTB) lesions in tattoos did not show significant differences from normal DPTB lesions for black, green and red. Tattoo pigments alter the perfusion patterns assessed with LASCA in both normal and burned skin. Yellow pigments do not seem to interfere with LASCA assessment. However, proper determination of burn depth in both SPTB and DPTB by LASCA is limited by the heterogeneous alterations caused by the various pigment colours. PMID:28149254

  16. Assessing burn depth in tattooed burn lesions with LASCA Imaging.

    PubMed

    Krezdorn, N; Limbourg, A; Paprottka, F J; Könneker; Ipaktchi, R; Vogt, P M

    2016-09-30

    Tattoos are on the rise, and so are patients with tattooed burn lesions. A proper assessment with regard to burn depth is often impeded by the tattoo dye. Laser speckle contrast analysis (LASCA) is a technique that evaluates burn lesions via relative perfusion analysis. We assessed the effect of tattoo skin pigmentation on LASCA perfusion imaging in a multicolour tattooed patient. The depth of burn lesions in multi-coloured tattooed and untattooed skin was assessed using LASCA. Relative perfusion was measured in perfusion units (PU) and compared across the various pigment colours, then correlated with the clinical evaluation of the lesion. Superficial partial-thickness burn (SPTB) lesions showed significantly elevated PU compared to normal skin; deep partial-thickness burns showed decreased PU levels. Compared to normal skin, various tattoo pigments showed either significantly lower PU values (blue, red, pink) or significantly increased values (black), whereas orange and yellow pigment showed values comparable to normal skin. In SPTB, black and blue pigment showed reduced perfusion; yellow pigment was similar to normal SPTB burn. Deep partial-thickness burn (DPTB) lesions in tattoos did not show significant differences from normal DPTB lesions for black, green and red. Tattoo pigments alter the perfusion patterns assessed with LASCA in both normal and burned skin. Yellow pigments do not seem to interfere with LASCA assessment. However, proper determination of burn depth in both SPTB and DPTB by LASCA is limited by the heterogeneous alterations caused by the various pigment colours.

  17. Multi-layer 3D imaging using a few viewpoint images and depth map

    NASA Astrophysics Data System (ADS)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that generates multi-layer images from a few viewpoint images for display of a 3D image on an autostereoscopic display with multiple screens stacked in the depth direction. We iterate simple "shift and subtraction" processes to generate each layer image alternately. An image made in accordance with the depth map, resembling volume slicing by gradation, is used as the initial solution of the iteration. Through experiments using a prototype of two stacked LCDs, we confirmed that three viewpoint images are sufficient to generate multi-layer images for displaying a 3D image. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a +/- 20 degree area. In addition, we render pseudo multiple-viewpoint images using the depth map, so that motion parallax is generated at the same time.

  18. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers

    PubMed Central

    Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer

    2015-01-01

    The clarification of biological feedstocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feedstock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m−2 when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m−2 with turbidities of ~100 NTU using bulk plant extracts and, in contrast to the other depth filters, did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037

  19. Enhancing imaging depth by multi-angle imaging of embryonic structures

    NASA Astrophysics Data System (ADS)

    Sudheendran, Narendran; Wu, Chen; Dickinson, Mary E.; Larina, Irina V.; Larin, Kirill V.

    2014-03-01

    Because of the ease of generating transgenic/gene knock-out models and the accessibility of early stages of embryogenesis, mouse and rat models have become invaluable for studying the mechanisms that underlie human birth defects. To study precisely how structural birth defects arise, ultrasound, MRI, microCT, optical projection tomography (OPT), optical coherence tomography (OCT) and histological methods have all been used for imaging mouse/rat embryos. Of these methods, however, only OCT enables live, functional imaging with high spatial and temporal resolution. One of the major limitations of conventional OCT imaging is the limited depth penetration of light, which prevents acquisition of structural information from the whole embryo. Here we introduce a new imaging scheme, OCT imaging from different sides of the embryo, that extends the depth penetration of OCT and permits high-resolution imaging of 3D and 4D volumes.

  20. Hybrid Imaging for Extended Depth of Field Microscopy

    NASA Astrophysics Data System (ADS)

    Zahreddine, Ramzi Nicholas

    An inverse relationship exists in optical systems between the depth of field (DOF) and the minimum resolvable feature size. This trade-off is especially detrimental in high numerical aperture microscopy systems, where resolution is pushed to the diffraction limit, resulting in a DOF on the order of 500 nm. Many biological structures and processes of interest span micron scales, resulting in significant blurring during imaging. This thesis explores a two-step computational imaging technique known as hybrid imaging to create extended DOF (EDF) microscopy systems with minimal sacrifice in resolution. In the first step, a mask is inserted at the pupil plane of the microscope to create a focus-invariant system over 10 times the traditional DOF, albeit with reduced contrast. In the second step, the contrast is restored via deconvolution. Several EDF pupil masks from the literature are quantitatively compared in the context of biological microscopy. From this analysis a new mask is proposed, the incoherently partitioned pupil with binary phase modulation (IPP-BPM), which combines the most advantageous properties from the literature. Total-variation-regularized deconvolution models are derived for the various noise conditions and detectors commonly used in biological microscopy. State-of-the-art algorithms for efficiently solving the deconvolution problem are analyzed for speed, accuracy, and ease of use. The IPP-BPM mask is compared with the literature and shown to have the highest signal-to-noise ratio and lowest mean square error after post-processing. A prototype of the IPP-BPM mask is fabricated using a combination of 3D femtosecond glass etching and standard lithography techniques. The mask is compared against theory and demonstrated in biological imaging applications.
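
    As an illustration of the second (computational) step, the sketch below implements a generic gradient-descent solver for total-variation-regularized deconvolution with periodic boundary conditions. It is not the thesis' specific noise models or algorithms; the PSF handling, step size and regularization weight are assumptions.

```python
# Generic gradient-descent TV-regularised deconvolution (not the thesis' specific models).
# Assumes periodic boundaries and a centred PSF of the same shape as the blurred image.
import numpy as np

def tv_deconvolve(blurred, psf, lam=0.01, step=0.5, n_iter=200, eps=1e-6):
    """Minimise 0.5*||h * x - y||^2 + lam * TV_eps(x) by gradient descent."""
    psf = psf / psf.sum()                          # unit-sum PSF keeps the step size safe
    H = np.fft.fft2(np.fft.ifftshift(psf))
    x = blurred.astype(float).copy()
    for _ in range(n_iter):
        residual = np.real(np.fft.ifft2(H * np.fft.fft2(x))) - blurred
        grad_data = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(residual)))
        # Smoothed-TV gradient: -div( grad(x) / |grad(x)|_eps ), forward/backward differences
        gx = np.roll(x, -1, axis=1) - x
        gy = np.roll(x, -1, axis=0) - x
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
        div = (gx / norm - np.roll(gx / norm, 1, axis=1)) \
            + (gy / norm - np.roll(gy / norm, 1, axis=0))
        x -= step * (grad_data - lam * div)
    return x
```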

  1. Depth resolution improvement of streak tube imaging lidar using optimal signal width

    NASA Astrophysics Data System (ADS)

    Ye, Guangchao; Fan, Rongwei; Lu, Wei; Dong, Zhiwei; Li, Xudong; He, Ping; Chen, Deying

    2016-10-01

    Streak tube imaging lidar (STIL) is an active imaging system that achieves high depth resolution by using a pulsed laser transmitter and a streak tube receiver to produce three-dimensional (3-D) range images. This work investigates the optimal signal width of the lidar system, which helps improve the depth resolution obtained with the centroid algorithm. Theoretical analysis indicates that the signal width has a significant effect on the depth resolution and that an optimal signal width can be determined for a given STIL system, which is verified by both simulation and experimental results. An indoor experiment with a planar target was carried out to validate the relation that the range error first decreases and then increases with signal width, yielding an optimal signal width of 8.6 pixels. Finer 3-D range images of a cartoon model were acquired using the optimal signal width, and a minimum range error of 5.5 mm was achieved in a daylight environment.
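
    For reference, the centroid range estimate at the heart of this processing can be sketched as follows; the bin width, background handling and synthetic Gaussian return pulse are illustrative assumptions, not parameters from the paper.

```python
# Illustrative centroid range estimate for one streak-axis waveform; units and values assumed.
import numpy as np

C = 3.0e8  # speed of light, m/s

def centroid_range(waveform, bin_time_s, background=0.0):
    """Range from the intensity-weighted centroid (sub-pixel arrival time) of the return pulse."""
    signal = np.clip(waveform - background, 0.0, None)
    bins = np.arange(signal.size)
    t_centroid = (bins * signal).sum() / signal.sum() * bin_time_s
    return 0.5 * C * t_centroid                        # round-trip time -> one-way range

# Example: Gaussian return pulse centred at bin 120.4 with an ~8.6-pixel FWHM, 50 ps bins.
bins = np.arange(512)
pulse = np.exp(-0.5 * ((bins - 120.4) / (8.6 / 2.355)) ** 2)
print(centroid_range(pulse, bin_time_s=50e-12))        # ~0.9 m
```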

  2. Optics optimization in high-resolution imaging module with extended depth of field

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Bakin, Dmitry; Liu, Changmeng; George, Nicholas

    2008-08-01

    The standard imaging lens for a high-resolution sensor was modified to achieve an extended depth of field (EDoF) from 300 mm to infinity. In the module, the raw sensor outputs are digitally processed to obtain high-contrast images. The overall module is considered an integrated computational imaging system (ICIS). Simulation results for illustrative designs with different amounts of spherical aberration are provided and compared. Based on the simulation results, we introduced a limiting value of the PSF Strehl ratio as an integral threshold criterion to be used during EDoF lens optimization. A four-element standard lens was modified within the design constraints to achieve the EDoF performance. Two EDoF designs created with different design methods are presented. The imaging modules were compared in terms of Strehl ratios, limiting resolution, modulation frequencies at 50% contrast, and SNR. The output images were simulated for the EDoF modules, passed through the image processing pipeline, and compared against images obtained with the standard lens module.
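
    A minimal sketch of the Strehl-ratio criterion is given below: the ratio of the peak of the aberrated PSF to the peak of the diffraction-limited PSF for the same pupil, here computed by Fourier propagation of a circular pupil with an assumed amount of spherical aberration.

```python
# Strehl ratio from Fourier propagation of a circular pupil; sampling and aberration assumed.
import numpy as np

def psf_from_pupil(pupil_phase, aperture):
    """Incoherent PSF as |FFT of the complex pupil|^2, normalised to unit energy."""
    pupil = aperture * np.exp(1j * pupil_phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

def strehl_ratio(pupil_phase, aperture):
    """Peak of the aberrated PSF over the peak of the aberration-free PSF."""
    ideal = psf_from_pupil(np.zeros_like(pupil_phase), aperture)
    return psf_from_pupil(pupil_phase, aperture).max() / ideal.max()

# Example: circular pupil with 0.5 waves of spherical aberration (W040 * rho^4 term).
n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rho2 = x ** 2 + y ** 2
aperture = (rho2 <= 1.0).astype(float)
print(strehl_ratio(2 * np.pi * 0.5 * rho2 ** 2, aperture))
```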

  3. Ice Cloud Optical Depth Retrievals from CRISM Multispectral Images

    NASA Astrophysics Data System (ADS)

    Klassen, David R.

    2014-11-01

    …cubes. Presented here are the results of this PCA/TT work to find the singular set of spectral endmembers and their use in recovering ice cloud optical depth from the MRO-CRISM multispectral image cubes.

  4. Integral imaging microscopy with enhanced depth-of-field using a spatial multiplexing.

    PubMed

    Kwon, Ki-Chul; Erdenebat, Munkh-Uchral; Alam, Md Ashraful; Lim, Young-Tae; Kim, Kwang Gi; Kim, Nam

    2016-02-08

    A depth-of-field enhancement method is proposed for an integral imaging microscopy system, using a spatial multiplexing structure consisting of a beamsplitter with dual video channels and micro-lens arrays. A computational integral imaging reconstruction algorithm generates two sets of depth-sliced images from the depth information of the captured elemental image arrays, and the well-focused depth slices of the two image sets, each focused on a different depth plane of the specimen, are combined. A prototype is implemented, and the experimental results demonstrate that the depth of field of the reconstructed images in the proposed integral imaging microscopy is significantly increased compared with conventional integral imaging microscopy systems.
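
    The final fusion step can be pictured with the hedged sketch below, which merges two reconstructed depth-slice images by keeping, at each pixel, whichever slice has higher local sharpness (local intensity variance). The sharpness measure and window size are illustrative choices, not necessarily those of the paper.

```python
# Pixel-wise fusion of two depth slices by local sharpness (local variance); choices assumed.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=9):
    mean = uniform_filter(img.astype(float), size)
    mean_sq = uniform_filter(img.astype(float) ** 2, size)
    return mean_sq - mean ** 2

def fuse_by_sharpness(slice_a, slice_b, size=9):
    """Keep, at each pixel, whichever slice is locally sharper."""
    a_sharper = local_variance(slice_a, size) >= local_variance(slice_b, size)
    return np.where(a_sharper, slice_a, slice_b)
```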

  5. Laser speckle contrast imaging with extended depth of field for in-vivo tissue imaging

    PubMed Central

    Sigal, Iliya; Gad, Raanan; Caravaca-Aguirre, Antonio M.; Atchia, Yaaseen; Conkey, Donald B.; Piestun, Rafael; Levi, Ofer

    2013-01-01

    This work presents, to our knowledge, the first demonstration of the Laser Speckle Contrast Imaging (LSCI) technique with extended depth of field (DOF). We employ wavefront coding on the detected beam to gain quantitative information on flow speeds through a DOF extended two-fold compared to the traditional system. We characterize the system in-vitro using controlled microfluidic experiments, and apply it in-vivo to imaging the somatosensory cortex of a rat, showing improved ability to image flow in a larger number of vessels simultaneously. PMID:24466481
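
    For context, the basic LSCI quantity is the spatial speckle contrast K = σ/μ computed over a small sliding window of the raw speckle image; a minimal sketch follows (the wavefront-coding optics and flow-speed calibration described above are not modelled, and the window size is an illustrative choice).

```python
# Spatial speckle contrast K = sigma / mean in a sliding window; window size is illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    img = raw.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    sigma = np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))
    return sigma / np.maximum(mean, 1e-12)     # lower K where flow blurs the speckle
```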

  6. Laser speckle contrast imaging with extended depth of field for in-vivo tissue imaging.

    PubMed

    Sigal, Iliya; Gad, Raanan; Caravaca-Aguirre, Antonio M; Atchia, Yaaseen; Conkey, Donald B; Piestun, Rafael; Levi, Ofer

    2013-12-06

    This work presents, to our knowledge, the first demonstration of the Laser Speckle Contrast Imaging (LSCI) technique with extended depth of field (DOF). We employ wavefront coding on the detected beam to gain quantitative information on flow speeds through a DOF extended two-fold compared to the traditional system. We characterize the system in-vitro using controlled microfluidic experiments, and apply it in-vivo to imaging the somatosensory cortex of a rat, showing improved ability to image flow in a larger number of vessels simultaneously.

  7. Diffraction enhanced kinetic depth X-ray imaging

    NASA Astrophysics Data System (ADS)

    Dicken, A.

    An increasing number of fields would benefit from a single analytical probe that can characterise bulk objects that vary in morphology and/or material composition. These fields include security screening, medicine and materials science. In this study the X-ray region is shown to be an effective probe for the characterisation of materials. The most prominent analytical techniques that utilise X-radiation are reviewed. The study then focuses on methods of amalgamating the three-dimensional power of kinetic depth X-ray (KDEX) imaging with the materials discrimination of angular dispersive X-ray diffraction (ADXRD), thus providing KDEX with a much needed material-specific counterpart. A knowledge of the sample position is essential for the correct interpretation of diffraction signatures. Two different sensor geometries (i.e. circumferential and linear) that are able to collect and interpret multiple unknown material diffraction patterns and attribute them to their respective loci within an inspection volume are investigated. The circumferential and linear detector geometries are hypothesised, simulated and then tested in an experimental setting, with the latter demonstrating a greater ability to discern between mixed diffraction patterns produced by differing materials. Factors known to confound the linear diffraction method, such as sample thickness and radiation energy, have been explored and quantified, with a possible means of mitigation being identified (i.e. increasing the sample-to-detector distance). A series of diffraction patterns (following the linear diffraction approach) were obtained from a single phantom object that was simultaneously interrogated via KDEX imaging. Areas containing diffraction signatures matched from a threat library have been highlighted in the KDEX imagery via colour encoding, with the match index indicated by intensity. This union is the first example of its kind and is called diffraction enhanced KDEX imagery. Finally an additional

  8. Passive Millimeter-Wave Imaging with Extended Depth of Field and Sparse Data

    DTIC Science & Technology

    2012-05-01

    Keywords (from the report): …imaging, extended depth-of-field, image reconstruction, sparsity. From the introduction: Over the past several years, imaging using millimeter wave (mmW) and … two people with various weapons concealed under clothing. Note that concealed weapons are clearly detected in the mmW image. Recently, in [3], Mait et al. presented a computational imaging method to extend the depth-of-field of a passive mmW imaging system. The method uses a cubic phase element in …

  9. Undetectable Changes in Image Resolution of Luminance-Contrast Gradients Affect Depth Perception.

    PubMed

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Morita, Toshiya

    2016-01-01

    A great number of studies have suggested a variety of ways to obtain depth information from two-dimensional images, such as binocular disparity, shape-from-shading, size gradient/foreshortening, aerial perspective, and so on. Are there any other factors affecting depth perception? A recent psychophysical study investigated the correlation between image resolution and the depth sensation produced by Cylinder images (rectangles containing gradual luminance-contrast changes). It reported that higher-resolution images facilitate depth perception. However, it is still not clear whether this finding generalizes to other kinds of visual stimuli, because there are more appropriate visual stimuli for exploring depth perception from luminance-contrast changes, such as the Gabor patch. Here, we further examined the relationship between image resolution and depth perception by conducting a series of psychophysical experiments with not only Cylinders but also Gabor patches, which have smoother luminance-contrast gradients. As a result, higher-resolution images produced a stronger depth sensation with both kinds of images. This finding suggests that image resolution affects depth perception of simple luminance-contrast differences (Gabor patch) as well as shape-from-shading (Cylinder). In addition, this phenomenon was found even when the resolution difference was undetectable. This indicates the existence of consciously available and unavailable information in our visual system. These findings further support the view that image resolution is a previously overlooked cue for depth perception. It partially explains the unparalleled viewing experience of novel high-resolution displays.

  10. Depth-Resolved Multispectral Sub-Surface Imaging Using Multifunctional Upconversion Phosphors with Paramagnetic Properties

    PubMed Central

    Ovanesyan, Zaven; Mimun, L. Christopher; Kumar, Gangadharan Ajith; Yust, Brian G.; Dannangoda, Chamath; Martirosyan, Karen S.; Sardar, Dhiraj K.

    2015-01-01

    Molecular imaging is a very promising technique for surgical guidance, but it requires advancements in the properties of imaging agents and in the methods used to retrieve data from measured multispectral images. In this article, an upconversion material is introduced for subsurface near-infrared imaging and for recovering the depth of the material embedded below biological tissue. The results confirm a significant correlation between the analytical depth estimate of the material under the tissue and the measured ratio of light emitted by the material at two different wavelengths. Experiments with biological tissue samples demonstrate depth-resolved imaging using the rare-earth-doped multifunctional phosphors. In vitro tests reveal no significant toxicity, whereas magnetic measurements of the phosphors show that the particles are suitable as magnetic resonance imaging agents. Confocal imaging of fibroblast cells with these phosphors reveals their potential for in vivo imaging. The depth-resolved imaging technique with such phosphors has broad implications for real-time intraoperative surgical guidance. PMID:26322519
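
    The ratiometric depth estimate can be illustrated with a simple single-path Beer-Lambert model: if the two emission bands are attenuated with different effective coefficients, their measured ratio decays exponentially with depth and can be inverted. The coefficients and zero-depth ratio in the sketch below are made-up values, not the calibration reported in the article.

```python
# Ratiometric depth under a single-path Beer-Lambert model; all coefficients are made up.
import numpy as np

def depth_from_ratio(intensity_1, intensity_2, mu1_per_mm, mu2_per_mm, r0):
    """Depth (mm) from the two-band intensity ratio; r0 is the ratio at zero depth."""
    ratio = np.asarray(intensity_1, dtype=float) / np.asarray(intensity_2, dtype=float)
    return np.log(r0 / ratio) / (mu1_per_mm - mu2_per_mm)

# Example: band 1 attenuated more strongly than band 2.
print(depth_from_ratio(0.8, 1.0, mu1_per_mm=0.9, mu2_per_mm=0.3, r0=1.5))   # ~1.05 mm
```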

  11. Very high frame rate volumetric integration of depth images on mobile devices.

    PubMed

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, which would give users freedom of movement and instantaneous reconstruction feedback, remains challenging, however. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system achieves frame rates of up to 47 Hz on an Nvidia Shield Tablet and 910 Hz on an Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
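
    The core volumetric-integration update can be sketched as a plain dense truncated signed distance function (TSDF) fusion, shown below; this is not the paper's voxel block hashing, raycasting or IMU-assisted tracking, and the camera intrinsics, truncation band and volume layout are assumed.

```python
# Dense projective TSDF fusion of one depth image into a voxel grid; layout and units assumed.
import numpy as np

def integrate_depth(tsdf, weights, depth, K, cam_T_world, voxel_size, origin, trunc=0.04):
    """Fuse one depth image (metres) into a dense TSDF volume; returns updated copies."""
    nx, ny, nz = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
    pts_w = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)
    pts_c = (cam_T_world[:3, :3] @ pts_w.T + cam_T_world[:3, 3:4]).T    # world -> camera
    z = pts_c[:, 2]
    safe_z = np.where(z > 0, z, 1.0)
    u = np.round(K[0, 0] * pts_c[:, 0] / safe_z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_c[:, 1] / safe_z + K[1, 2]).astype(int)
    valid = (z > 0) & (u >= 0) & (u < depth.shape[1]) & (v >= 0) & (v < depth.shape[0])
    d_meas = np.zeros_like(z)
    d_meas[valid] = depth[v[valid], u[valid]]
    valid &= d_meas > 0                                 # pixels with a depth measurement
    sdf = np.clip(d_meas - z, -trunc, trunc) / trunc    # projective signed distance in [-1, 1]
    upd = valid & (d_meas - z > -trunc)                 # ignore voxels far behind the surface
    t, w = tsdf.reshape(-1).copy(), weights.reshape(-1).copy()
    t[upd] = (t[upd] * w[upd] + sdf[upd]) / (w[upd] + 1.0)   # running weighted average
    w[upd] += 1.0
    return t.reshape(tsdf.shape), w.reshape(weights.shape)
```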

  12. Instantaneous three-dimensional sensing using spatial light modulator illumination with extended depth of field imaging

    PubMed Central

    Quirin, Sean; Peterka, Darcy S.; Yuste, Rafael

    2013-01-01

    Imaging three-dimensional structures represents a major challenge for conventional microscopies. Here we describe a Spatial Light Modulator (SLM) microscope that can simultaneously address and image multiple targets in three dimensions. A wavefront coding element and computational image processing enable extended depth-of-field imaging. High-resolution, multi-site three-dimensional targeting and sensing are demonstrated in both transparent and scattering media over a depth range of 300-1,000 microns. PMID:23842387

  13. The Effects of Multimedia Learning on Thai Primary Pupils' Achievement in Size and Depth of Vocabulary Knowledge

    ERIC Educational Resources Information Center

    Jingjit, Mathukorn

    2015-01-01

    This study aims to obtain more insight into the effect of multimedia learning on third-grade Thai primary pupils' achievement in size and depth of English vocabulary knowledge. A quasi-experiment is applied using a "one group pretest-posttest design" combined with a "time series design," as well as data triangulation. The sample…

  14. Comparison of Curricular Breadth, Depth, and Recurrence and Physics Achievement of TIMSS Population 3 Countries

    ERIC Educational Resources Information Center

    Murdock, John

    2008-01-01

    This study is a secondary analysis of data from the 1995 administration of the Third International Mathematics and Science Study (TIMSS). The purpose is to compare the breadth, depth, and recurrence of the typical physics curriculum in the United States with the typical curricula in different countries and to determine whether there are…

  15. Tripling the maximum imaging depth with third-harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Yildirim, Murat; Durr, Nicholas; Ben-Yakar, Adela

    2015-09-01

    The growing interest in performing high-resolution, deep-tissue imaging has galvanized the use of longer excitation wavelengths and three-photon-based techniques in nonlinear imaging modalities. This study presents a threefold improvement in maximum imaging depth of ex vivo porcine vocal folds using third-harmonic generation (THG) microscopy at 1552-nm excitation wavelength compared to two-photon microscopy (TPM) at 776-nm excitation wavelength. The experimental, analytical, and Monte Carlo simulation results reveal that THG improves the maximum imaging depth observed in TPM significantly from 140 to 420 μm in a highly scattered medium, reaching the expected theoretical imaging depth of seven extinction lengths. This value almost doubles the previously reported normalized imaging depths of 3.5 to 4.5 extinction lengths using three-photon-based imaging modalities. Since tissue absorption is substantial at the excitation wavelength of 1552 nm, this study assesses the tissue thermal damage during imaging by obtaining the depth-resolved temperature distribution through a numerical simulation incorporating an experimentally obtained thermal relaxation time (τ). By shuttering the laser for a period of 2τ, the numerical algorithm estimates a maximum temperature increase of ˜2°C at the maximum imaging depth of 420 μm. The paper demonstrates that THG imaging using 1552 nm as an illumination wavelength with effective thermal management proves to be a powerful deep imaging modality for highly scattering and absorbing tissues, such as scarred vocal folds.

  16. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. The cameras obtaining the images for a stereo view are converged at a point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of the cameras' zoom lenses are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning the stereo view-collecting camera system about a circle that passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.

  17. Material depth reconstruction method of multi-energy X-ray images using neural network.

    PubMed

    Lee, Woo-Jin; Kim, Dae-Seung; Kang, Sung-Won; Yi, Won-Jin

    2012-01-01

    With recent advances in technology, multi-energy X-ray imaging is a promising technique that can reduce the patient's dose and provide functional imaging. A two-dimensional photon-counting detector that provides multi-energy imaging is under development. In this work, we present a material decomposition method using multi-energy images. To acquire multi-energy images, a Monte Carlo simulation was performed; the X-ray spectrum was modeled and the ripple effect was considered. Using the dissimilar energy-dependent X-ray attenuation characteristics of each material, multi-energy X-ray images were decomposed into material depth images. A feedforward neural network was used to map the multi-energy images to the material depth images. To train the network, step-wedge phantom images were used. Finally, the neural network decomposed multi-energy X-ray images into material depth images. To demonstrate the concept of this method, we applied it to simulated images of a 3D head phantom. The results show that the neural network method performed material depth reconstruction effectively.
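
    A hedged sketch of the decomposition step is shown below: a small feedforward network is fitted on step-wedge-style calibration data (known material thicknesses and their multi-energy attenuation) and then applied per pixel. The network size, materials and synthetic training data are assumptions for illustration only.

```python
# Feedforward-network material decomposition on synthetic calibration data; all values assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic "step wedge": thicknesses (cm) of two materials and their attenuation in three
# energy bins, -ln(I/I0) = mu1(E)*t1 + mu2(E)*t2 plus measurement noise.
mu = np.array([[0.50, 0.30, 0.20],      # material 1 at energy bins E1, E2, E3
               [0.80, 0.45, 0.25]])     # material 2
t_train = rng.uniform(0.0, 5.0, size=(2000, 2))
x_train = t_train @ mu + rng.normal(0.0, 0.01, size=(2000, 3))

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(x_train, t_train)               # attenuation -> material thicknesses

# Apply to per-pixel multi-energy attenuation values, shaped (n_pixels, n_energy_bins).
x_pixels = np.array([[2.0, 1.2, 0.75]])
print(net.predict(x_pixels))            # estimated depths of the two materials
```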

  18. Burn Depth Estimation Using Thermal Excitation and Imaging

    SciTech Connect

    Dickey, F.M.; Holswade, S.C.; Yee, M.L.

    1998-12-17

    Accurate estimation of the depth of partial-thickness burns and early prediction of the need for surgical intervention are difficult. A non-invasive technique utilizing the difference in thermal relaxation time between burned and normal skin may be useful in this regard. In practice, a thermal camera would record the skin's response to heating or cooling by a small amount, roughly 5 degrees Celsius, for a short duration. The thermal stimulus would be provided by a heat lamp, hot or cold air, or other means. Processing of the thermal transients would reveal areas that return to equilibrium at different rates, which should correspond to different burn depths. In deeper burns, the outside layer of skin is further removed from the constant-temperature region maintained by blood flow. Deeper burn areas should thus return to equilibrium more slowly than other areas. Since the technique only records changes in the skin's temperature, it is not sensitive to room temperature, the burn's location, or the state of the patient. Preliminary results are presented for analysis of a simulated burn, formed by applying a patch of biosynthetic wound dressing on top of normal skin tissue.
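
    One way such thermal transients could be reduced to a burn-depth-related map is sketched below: each pixel's return to equilibrium is fitted with a single exponential to obtain a relaxation-time image. The single-exponential model and the array layout are assumptions, not details from the report.

```python
# Per-pixel single-exponential fit of the post-stimulus thermal transient; model assumed.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, delta_T0, tau, T_eq):
    return T_eq + delta_T0 * np.exp(-t / tau)

def tau_map(frames, times):
    """frames: (n_frames, rows, cols) temperature stack; times: (n_frames,) seconds."""
    rows, cols = frames.shape[1:]
    taus = np.full((rows, cols), np.nan)
    for r in range(rows):
        for c in range(cols):
            trace = frames[:, r, c]
            p0 = (trace[0] - trace[-1], times[-1] / 3.0, trace[-1])
            try:
                popt, _ = curve_fit(relaxation, times, trace, p0=p0, maxfev=2000)
                taus[r, c] = popt[1]                  # slower relaxation -> deeper burn
            except RuntimeError:                      # fit failed for this pixel
                pass
    return taus
```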

  19. Exploring High-Achieving Students' Images of Mathematicians

    ERIC Educational Resources Information Center

    Aguilar, Mario Sánchez; Rosas, Alejandro; Zavaleta, Juan Gabriel Molina; Romo-Vázquez, Avenilde

    2016-01-01

    The aim of this study is to describe the images that a group of high-achieving Mexican students hold of mathematicians. For this investigation, we used a research method based on the Draw-A-Scientist Test (DAST) with a sample of 63 Mexican high school students. The group of students' pictorial and written descriptions of mathematicians assisted us…

  20. Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images.

    PubMed

    Tian, Jing; Marziliano, Pina; Baskaran, Mani; Tun, Tin Aung; Aung, Tin

    2013-03-01

    Enhanced Depth Imaging (EDI) optical coherence tomography (OCT) provides high-definition cross-sectional images of the choroid in vivo, and hence is used in many clinical studies. However, quantification of the choroid depends on manual labeling of two boundaries, Bruch's membrane and the choroidal-scleral interface. This labeling process is tedious and subject to inter-observer differences; hence, automatic segmentation of the choroid layer is highly desirable. In this paper, we present a fast and accurate algorithm that segments the choroid automatically. Bruch's membrane is detected by searching for the pixel with the largest gradient value above the retinal pigment epithelium (RPE), and the choroidal-scleral interface is delineated by finding the shortest path in the graph formed by valley pixels using Dijkstra's algorithm. Experiments comparing the automatic segmentation results with manual labelings were conducted on 45 EDI-OCT images, and the average Dice's coefficient is 90.5%, which shows good consistency of the algorithm with the manual labelings. The processing time for each image is about 1.25 seconds.
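
    The agreement metric quoted above, Dice's coefficient between the automatic and manual choroid masks, can be computed as in the short sketch below (variable names are illustrative; both inputs are boolean masks of equal size).

```python
# Dice's coefficient between automatic and manual segmentation masks (boolean arrays).
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    auto_mask = np.asarray(auto_mask, dtype=bool)
    manual_mask = np.asarray(manual_mask, dtype=bool)
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    total = auto_mask.sum() + manual_mask.sum()
    return 2.0 * intersection / total if total > 0 else 1.0
```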

  1. Can the perception of depth in stereoscopic images be influenced by 3D sound?

    NASA Astrophysics Data System (ADS)

    Turner, Amy; Berry, Jonathan; Holliman, Nick

    2011-03-01

    The creation of binocular images for stereoscopic display has benefited from significant research and commercial development in recent years. However, perhaps surprisingly, the effect of adding 3D sound to stereoscopic images has rarely been studied. If auditory depth information can enhance or extend the visual depth experience, it could become an important way to extend the limited depth budget on all 3D displays and reduce the potential for fatigue from excessive use of disparity. Objective: As there is limited research in this area, our objective was to ask two preliminary questions. First, what is the smallest difference in forward depth that can be reliably detected using 3D sound alone? Second, does the addition of auditory depth information influence the visual perception of depth in a stereoscopic image? Method: To investigate auditory depth cues, we used a simple sound system to test the experimental hypothesis that participants will perform better than chance at judging the depth difference between two speakers a set distance apart. In our second experiment, investigating both auditory and visual depth cues, we set up a sound system and a stereoscopic display to test the experimental hypothesis that participants judge a visual stimulus to be closer if they hear a closer sound while viewing the stimulus. Results: In the auditory depth cue trial, every depth difference tested gave significant results, demonstrating that the human ear can hear depth differences between physical sources as small as 0.25 m at a distance of 1 m. In the trial investigating whether audio information can influence the visual perception of depth, we found that participants did report visually perceiving an object to be closer when the sound was played closer to them, even though the image depth remained unchanged. Conclusion: The positive results in the two trials show that we can hear small differences in forward depth between sound sources and suggest that it could be practical to extend the apparent

  2. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images

    PubMed Central

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J.

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed. PMID:26447793

  3. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images.

    PubMed

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer's own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.

  4. Self-Motion and Depth Estimation from Image Sequences

    NASA Technical Reports Server (NTRS)

    Perrone, John

    1999-01-01

    An image-based version of a computational model of human self-motion perception (developed in collaboration with Dr. Leland S. Stone at NASA Ames Research Center) has been generated and tested. The research included in the grant proposal sought to extend the utility of the self-motion model so that it could be used for explaining and predicting human performance in a greater variety of aerospace applications. The model can now be tested with video input sequences (including computer generated imagery) which enables simulation of human self-motion estimation in a variety of applied settings.

  5. Real object-based integral imaging system using a depth camera and a polygon model

    NASA Astrophysics Data System (ADS)

    Jeong, Ji-Seong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Lim, Byung-Muk; Jang, Ho-Wook; Kim, Nam; Yoo, Kwan-Hee

    2017-01-01

    An integral imaging system using a polygon model for a real object is proposed. After depth and color data of the real object are acquired by a depth camera, the initially reconstructed point-cloud model is converted into a polygon-model grid. The elemental image array is generated from the polygon model and directly reconstructed. The polygon model eliminates the failed picking areas between the points of a point-cloud model, so the quality of the reconstructed 3-D image is significantly improved. The theory is verified experimentally, and higher-quality images are obtained.

  6. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    PubMed

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
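
    For orientation only, the sketch below shows a heavily simplified single-image depth regressor: a tiny fully convolutional encoder-decoder trained with an L1 loss on log depth. It omits the paper's continuous CRF, pairwise potentials and superpixel pooling entirely; the architecture and the random tensors standing in for training data are placeholders.

```python
# Toy fully convolutional depth regressor (no CRF, no superpixel pooling); placeholders only.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 1, 3, padding=1),           # predicts log depth
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One illustrative training step on random tensors standing in for (image, depth) pairs.
model = TinyDepthNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.rand(4, 3, 128, 160)
log_depth_gt = torch.log(1.0 + 10.0 * torch.rand(4, 1, 128, 160))
optimiser.zero_grad()
loss = nn.functional.l1_loss(model(images), log_depth_gt)
loss.backward()
optimiser.step()
```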

  7. Extending the fundamental imaging-depth limit of multi-photon microscopy by imaging with photo-activatable fluorophores.

    PubMed

    Chen, Zhixing; Wei, Lu; Zhu, Xinxin; Min, Wei

    2012-08-13

    It is highly desirable to be able to optically probe biological activities deep inside live organisms. By employing a spatially confined excitation via a nonlinear transition, multiphoton fluorescence microscopy has become indispensable for imaging scattering samples. However, as the incident laser power drops exponentially with imaging depth due to scattering loss, the out-of-focus fluorescence eventually overwhelms the in-focal signal. The resulting loss of imaging contrast defines a fundamental imaging-depth limit, which cannot be overcome by increasing excitation intensity. Herein we propose to significantly extend this depth limit by multiphoton activation and imaging (MPAI) of photo-activatable fluorophores. The imaging contrast is drastically improved due to the created disparity of bright-dark quantum states in space. We demonstrate this new principle by both analytical theory and experiments on tissue phantoms labeled with synthetic caged fluorescein dye or genetically encodable photoactivatable GFP.

  8. 3D planar representation of stereo depth images for 3DTV applications.

    PubMed

    Özkalaycı, Burak O; Alatan, A Aydın

    2014-12-01

    The depth modality of the multiview video plus depth (MVD) format is an active research area, whose main objective is to develop depth image based rendering friendly efficient compression methods. As a part of this research, a novel 3D planar-based depth representation is proposed. The planar approximation of multiple depth images are formulated as an energy-based co-segmentation problem by a Markov random field model. The energy terms of this problem are designed to mimic the rate-distortion tradeoff for a depth compression application. A novel algorithm is developed for practical utilization of the proposed planar approximations in stereo depth compression. The co-segmented regions are also represented as layered planar structures forming a novel single-reference MVD format. The ability of the proposed layered planar MVD representation in decoupling the texture and geometric distortions make it a promising approach. Proposed 3D planar depth compression approaches are compared against the state-of-the-art image/video coding standards by objective and visual evaluation and yielded competitive performance.

  9. High performance computational integral imaging system using multi-view video plus depth representation

    NASA Astrophysics Data System (ADS)

    Shi, Shasha; Gioia, Patrick; Madec, Gérard

    2012-12-01

    Integral imaging is an attractive auto-stereoscopic three-dimensional (3D) technology for next-generation 3DTV. But its application is obstructed by poor image quality, huge data volume and high processing complexity. In this paper, a new computational integral imaging (CII) system using multi-view video plus depth (MVD) representation is proposed to solve these problems. The originality of this system lies in three aspects. Firstly, a particular depth-image-based rendering (DIBR) technique is used in the encoding process to exploit the inter-view correlation between different sub-images (SIs). Thereafter, the same DIBR method is applied on the display side to interpolate virtual SIs and improve the reconstructed 3D image quality. Finally, a novel parallel group projection (PGP) technique is proposed to simplify the reconstruction process. According to experimental results, the proposed CII system improves compression efficiency and displayed image quality, while reducing calculation complexity.

  10. Depth extraction of three-dimensional objects using block matching for slice images in synthetic aperture integral imaging.

    PubMed

    Lee, Joon-Jae; Lee, Byung-Gook; Yoo, Hoon

    2011-10-10

    We describe a computational method for depth extraction of three-dimensional (3D) objects using block matching for slice images in synthetic aperture integral imaging (SAII). SAII is capable of providing high-resolution 3D slice images for 3D objects because the picked-up elemental images are high-resolution ones. In the proposed method, the high-resolution elemental images are recorded by moving a camera; a computational reconstruction algorithm based on ray backprojection generates a set of 3D slice images from the recorded elemental images. To extract depth information of the 3D objects, we propose a new block-matching algorithm between a reference elemental image and a set of 3D slice images. A property of the slice images is that focused areas correspond to the correct object locations, whereas blurred areas are considered to be empty space; thus, robust and accurate depth information of the 3D objects can be extracted. To demonstrate our method, we carry out preliminary experiments on 3D objects; the results indicate that our method is superior to a conventional method in terms of depth-map quality.
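
    A minimal sketch of the block-matching idea described above, assuming a precomputed stack of reconstructed slice images and a reference elemental image; the block size and the sum-of-squared-differences similarity are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def depth_from_slices(reference, slices, depths, block=16):
    """Assign each block the depth whose slice image best matches the reference.

    reference : (H, W) reference elemental image
    slices    : (D, H, W) computationally reconstructed slice images (one per depth)
    depths    : (D,) physical depth of each slice
    Returns a coarse (H//block, W//block) depth map.
    """
    H, W = reference.shape
    depth_map = np.zeros((H // block, W // block))
    for by in range(H // block):
        for bx in range(W // block):
            ys = slice(by * block, (by + 1) * block)
            xs = slice(bx * block, (bx + 1) * block)
            ref_blk = reference[ys, xs].astype(float)
            # SSD between the reference block and the same block in every slice;
            # the focused (correct-depth) slice gives the smallest difference.
            ssd = ((slices[:, ys, xs].astype(float) - ref_blk) ** 2).sum(axis=(1, 2))
            depth_map[by, bx] = depths[np.argmin(ssd)]
    return depth_map
```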

  11. Penetration depth measurement of near-infrared hyperspectral imaging light for milk powder

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The increasingly common application of near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying penetration depth of NIR hyperspectral imaging ligh...

  12. Real-time planar segmentation of depth images: from three-dimensional edges to segmented planes

    NASA Astrophysics Data System (ADS)

    Javan Hemmat, Hani; Bondarev, Egor; de With, Peter H. N.

    2015-09-01

    Real-time execution of processing algorithms for handling depth images in a three-dimensional (3-D) data framework is a major challenge. More specifically, considering depth images as point clouds and performing planar segmentation requires heavy computation, because available planar segmentation algorithms are mostly based on surface normals and/or curvatures, and, consequently, do not provide real-time performance. Since we aim at the reconstruction of indoor environments, which consist mainly of planar surfaces, such a 3-D application would strongly benefit from a real-time algorithm. We introduce a real-time planar segmentation method for depth images avoiding any surface normal calculation. First, we detect 3-D edges in a depth image and generate line segments between the identified edges. Second, we fuse all the points on each pair of intersecting line segments into a plane candidate. Third and finally, we implement a validation phase to select planes from the candidates. Furthermore, various enhancements are applied to improve the segmentation quality. The GPU implementation of the proposed algorithm segments depth images into planes at the rate of 58 fps. Our pipeline-interleaving technique increases this rate up to 100 fps. With this throughput rate improvement, the application benefit of our algorithm may be further exploited in terms of quality and enhancing the localization.
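
    The plane-candidate step can be sketched as follows: given the 3-D points lying on two intersecting line segments, a plane is fit by least squares (SVD) and then validated against supporting points. The thresholds and the synthetic segments are illustrative, not the paper's exact parameters.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3-D points; returns (normal, d) with n.x + d = 0."""
    centroid = points.mean(axis=0)
    # The smallest singular vector of the centered points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid

def validate_plane(normal, d, points, dist_thresh=0.01, min_inlier_ratio=0.8):
    """Accept a candidate plane if enough points lie within dist_thresh (meters, illustrative)."""
    dist = np.abs(points @ normal + d)
    return (dist < dist_thresh).mean() >= min_inlier_ratio

# Plane candidate from points sampled along two intersecting line segments (synthetic example).
seg_a = np.linspace([0.0, 0.0, 1.0], [1.0, 0.0, 1.0], 20)
seg_b = np.linspace([0.0, 0.0, 1.0], [0.0, 1.0, 1.0], 20)
pts = np.vstack([seg_a, seg_b])
n, d = fit_plane(pts)
print(n, d, validate_plane(n, d, pts))
```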

  13. Extending the effective imaging depth in spectral domain optical coherence tomography by dual spatial frequency encoding

    NASA Astrophysics Data System (ADS)

    Wu, Tong; Wang, Qingqing; Liu, Youwen; Wang, Jiming

    2016-03-01

    We present a spatial frequency domain multiplexing method for extending the imaging depth range of an SDOCT system without any expensive additional device. This method uses two reference arms with different round-trip optical delays to probe different depth regions within the sample. Two galvo scanners with different pivot-offset distances in the reference arms are used for spatial frequency modulation and multiplexing. While simultaneously driving the galvo scanners in the reference arms and the sample arm, the spatial spectra of the acquired two-dimensional OCT spectral interferogram corresponding to the shallow and deep depths of the sample are shifted to different frequency bands in the spatial frequency domain. After data filtering, image reconstruction, and fusion, the spatial frequency multiplexing SDOCT system can provide an approximately 1.9-fold increase in the effective ranging depth compared with that of a conventional single-reference-arm full-range SDOCT system.
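
    Demultiplexing of the two depth regions can be sketched as band-pass filtering in the lateral spatial-frequency domain before the usual Fourier-domain OCT reconstruction. The carrier frequencies, bandwidths, and fringe data below are placeholders; in the real system they are set by the galvo pivot offsets, so this is only an illustrative sketch of the processing chain.

```python
import numpy as np

def demultiplex_sdoct(fringes, f1, f2, bw):
    """Separate two spatial-frequency-multiplexed OCT signals.

    fringes : (n_k, n_x) spectral interferogram (wavenumber x lateral position)
    f1, f2  : carrier spatial frequencies (cycles per A-line) of the two reference arms
    bw      : half-bandwidth kept around each carrier
    Returns two complex B-scans, one per depth region.
    """
    n_x = fringes.shape[1]
    fx = np.fft.fftfreq(n_x)                      # lateral spatial-frequency axis
    spec = np.fft.fft(fringes, axis=1)            # transform along the lateral (x) direction
    bscans = []
    for fc in (f1, f2):
        mask = np.abs(fx - fc) < bw               # keep one sideband around this carrier
        band = np.fft.ifft(spec * mask, axis=1)   # back to the x domain (complex signal)
        bscans.append(np.fft.fft(band, axis=0))   # standard FD-OCT transform: wavenumber -> depth
    return bscans
```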

  14. Study and application of body shape recognition based on depth image

    NASA Astrophysics Data System (ADS)

    Han, Yu-chong; Qin, Jun; Li, Yu-nong; Tao, Jun-jun; Fei, Qin

    2014-02-01

    Depth images have the advantages of simple processing, fog penetration, and low sensitivity to ambient light; thus, a body shape detection algorithm based on depth images was proposed to support the assessment of personnel evacuation. This study started by building a body shape dataset using a depth sensor and then extracting the HOG-depth feature. The best parameters were found, including the range of gradient directions and the number of bins. The next step was to train and classify the body shape dataset using different classifiers, and the Gentle AdaBoost algorithm based on CART weak classifiers achieved the best result. We then discussed the effect of the traversal method of the sliding window and determined a suitable step size, in pixels, for each move. Finally, the intelligent control method for an actual personnel evacuation situation was completed from the viewpoint of software implementation.
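
    A rough sketch of the feature-and-classifier pipeline, using skimage's HOG on depth patches and scikit-learn's AdaBoost over shallow decision trees as a stand-in (scikit-learn ships discrete/real AdaBoost rather than Gentle AdaBoost, and the HOG parameters below are only placeholders for the ones tuned in the paper).

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def hog_depth_feature(depth_patch, orientations=9):
    """HOG descriptor computed directly on a depth patch (the 'HOG-depth' feature)."""
    return hog(depth_patch, orientations=orientations,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_body_detector(patches, labels):
    """patches: iterable of fixed-size depth windows; labels: 1 = body shape, 0 = background."""
    X = np.array([hog_depth_feature(p) for p in patches])
    # Boosted shallow trees (CART-style weak classifiers); AdaBoost variant differs from the paper.
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2), n_estimators=200)
    return clf.fit(X, labels)
```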

  15. Structured light 3D depth map enhancement and gesture recognition using image content adaptive filtering

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Nash, James; Atanassov, Kalin; Goma, Sergio

    2013-03-01

    A structured-light system for depth estimation is a type of 3D active sensor that consists of a structured-light projector that projects an illumination pattern on the scene (e.g. a mask with vertical stripes) and a camera which captures the illuminated scene. Based on the received patterns, depths of different regions in the scene can be inferred. In this paper, we use side information in the form of image structure to enhance the depth map. This side information is obtained from the received light pattern image reflected by the scene itself. The processing steps run in real time. This post-processing stage in the form of depth map enhancement can be used for better hand gesture recognition, as is illustrated in this paper.
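
    One way to realize image-content-adaptive filtering of the depth map is a joint (cross) bilateral filter whose range weights come from the received pattern image. This pure-NumPy sketch is a plausible stand-in rather than the paper's exact filter, and all parameters are illustrative.

```python
import numpy as np

def joint_bilateral_depth(depth, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Smooth a depth map while preserving edges present in the guide (received pattern) image.

    depth : (H, W) noisy depth map
    guide : (H, W) received structured-light image, scaled to [0, 1]
    """
    H, W = depth.shape
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    pad_d = np.pad(depth, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    for y in range(H):
        for x in range(W):
            d_win = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight comes from the guide image, so depth edges follow image structure.
            w = spatial * np.exp(-((g_win - guide[y, x]) ** 2) / (2 * sigma_r ** 2))
            out[y, x] = (w * d_win).sum() / w.sum()
    return out
```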

  16. Chirp Z transform based enhanced frequency resolution for depth resolvable non stationary thermal wave imaging

    NASA Astrophysics Data System (ADS)

    Suresh, B.; Subhani, Sk.; Vijayalakshmi, A.; Vardhan, V. H.; Ghali, V. S.

    2017-01-01

    This paper proposes a novel post-processing modality to enhance depth resolution in frequency modulated thermal wave imaging using the chirp Z transform. It explores the spectral zooming feature of the proposed modality to enhance depth resolution and validates it through experimentation carried out on carbon fiber reinforced plastic and mild steel specimens. Further, the defect detection capability of the proposed modality has been compared with that of other contemporary modalities by taking the defect signal-to-noise ratio into consideration.
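
    The spectral-zooming idea can be illustrated with a direct chirp Z transform evaluated over a narrow frequency band of a per-pixel thermal response; the band limits, sampling rate, and test signal below are arbitrary placeholders rather than the paper's experimental settings.

```python
import numpy as np

def czt_zoom(x, fs, f_lo, f_hi, m):
    """Evaluate the chirp Z transform of x on m points of the unit circle between f_lo and f_hi.

    Direct O(N*m) evaluation of X_k = sum_n x[n] * A^(-n) * W^(n*k), with
    A = exp(j*2*pi*f_lo/fs) and W = exp(-j*2*pi*(f_hi - f_lo)/((m - 1)*fs)),
    which zooms the spectrum into [f_lo, f_hi] with finer resolution than a plain FFT.
    """
    n = np.arange(len(x))
    k = np.arange(m)
    A = np.exp(2j * np.pi * f_lo / fs)
    W = np.exp(-2j * np.pi * (f_hi - f_lo) / ((m - 1) * fs))
    return (x * A ** (-n)) @ (W ** np.outer(n, k))

# Example: zoom into 0.0-0.2 Hz of a slowly chirped response sampled at 25 Hz (placeholder data).
fs = 25.0
t = np.arange(0, 100, 1 / fs)
signal = np.cos(2 * np.pi * (0.01 + 0.001 * t) * t)
zoomed = np.abs(czt_zoom(signal, fs, 0.0, 0.2, 512))
```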

  17. Chirp Z transform based enhanced frequency resolution for depth resolvable non stationary thermal wave imaging.

    PubMed

    Suresh, B; Subhani, Sk; Vijayalakshmi, A; Vardhan, V H; Ghali, V S

    2017-01-01

    This paper proposes a novel post-processing modality to enhance depth resolution in frequency modulated thermal wave imaging using the chirp Z transform. It explores the spectral zooming feature of the proposed modality to enhance depth resolution and validates it through experimentation carried out on carbon fiber reinforced plastic and mild steel specimens. Further, the defect detection capability of the proposed modality has been compared with that of other contemporary modalities by taking the defect signal-to-noise ratio into consideration.

  18. Depth-Enhanced Integral Imaging with a Stepped Lens Array or a Composite Lens Array for Three-Dimensional Display

    NASA Astrophysics Data System (ADS)

    Choi, Heejin; Park, Jae-Hyeung; Hong, Jisoo; Lee, Byoungho

    2004-08-01

    In spite of the many advantages of integral imaging, the depth of the reconstructed three-dimensional (3D) image is limited to the vicinity of a single image plane. Here, we propose a novel method for increasing the depth of a reconstructed image using a stepped lens array (SLA) or a composite lens array (CLA). We confirm our idea by fabricating an SLA and a CLA with two image planes each. By using an SLA or a CLA, it is possible to form the 3D image around several image planes and to increase the depth of the reconstructed 3D image.

  19. Depth-resolved incoherent and coherent wide-field high-content imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    So, Peter T.

    2016-03-01

    Recent advances in depth-resolved wide-field imaging techniques have enabled many high throughput applications in biology and medicine. Depth-resolved imaging of incoherent signals can be readily accomplished with structured light illumination or nonlinear temporal focusing. The integration of these high throughput systems with novel spectroscopic resolving elements further enables high-content information extraction. We will introduce a novel near common-path interferometer and demonstrate its uses in toxicology and cancer biology applications. The extension of incoherent depth-resolved wide-field imaging to coherent modalities is non-trivial. Here, we will cover recent advances in wide-field 3D resolved mapping of refractive index, absorbance, and vibronic components in biological specimens.

  20. Multispectral upconversion luminescence intensity ratios for ascertaining the tissue imaging depth

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Wang, Yu; Kong, Xianggui; Liu, Xiaomin; Zhang, Youlin; Tu, Langping; Ding, Yadan; Aalders, Maurice C. G.; Buma, Wybren Jan; Zhang, Hong

    2014-07-01

    Upconversion nanoparticles (UCNPs) have in recent years emerged as excellent contrast agents for in vivo luminescence imaging of deep tissues. But the information extracted from these images is in most cases restricted to two dimensions, without depth information. In this work, a simple method has been developed to accurately ascertain the tissue imaging depth based on the relative luminescence intensity ratio of multispectral NaYF4:Yb3+,Er3+ UCNPs. A theoretical model was set up, where the parameters in the quantitative relation between the relative intensities of the upconversion luminescence spectra and the depth of the UCNPs were determined using tissue mimicking liquid phantoms. The 540 nm and 650 nm luminescence intensity ratios (G/R ratio) of NaYF4:Yb3+,Er3+ UCNPs were monitored following excitation path (Ex mode) and emission path (Em mode) schemes, respectively. The model was validated by embedding NaYF4:Yb3+,Er3+ UCNPs in layered pork muscle, which demonstrated a very high measurement accuracy for thicknesses of up to a centimeter. This approach should significantly increase the power of nanotechnology in medical optical imaging by expanding the imaging information from two dimensions to real three dimensions.
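
    The depth-from-ratio calibration can be sketched as a linear fit between imaging depth and the logarithm of the green-to-red intensity ratio, in the spirit of the phantom calibration described above; all numbers here are synthetic placeholders, not the paper's measured coefficients.

```python
import numpy as np

# Synthetic phantom calibration: known UCNP depths (mm) and measured 540/650 nm intensity ratios.
depths_mm = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
gr_ratio = np.array([1.80, 1.35, 0.80, 0.47, 0.28, 0.17])   # placeholder G/R values

# Wavelength-dependent attenuation makes ln(G/R) approximately linear in depth,
# so a straight-line fit gives the calibration curve.
slope, intercept = np.polyfit(np.log(gr_ratio), depths_mm, 1)

def estimate_depth(gr):
    """Estimate tissue depth (mm) of embedded UCNPs from a measured G/R luminescence ratio."""
    return slope * np.log(gr) + intercept

print(estimate_depth(0.6))   # depth predicted for an intermediate ratio
```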

  1. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high precision and well-structured measurements in (industrial) photogrammetry to fully-automated non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, among others: physical representation of object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologue features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems and object scale. The paper discusses the above mentioned parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples.

  2. Penetration Depth Measurement of Near-Infrared Hyperspectral Imaging Light for Milk Powder.

    PubMed

    Huang, Min; Kim, Moon S; Chao, Kuanglin; Qin, Jianwei; Mo, Changyeun; Esquerre, Carlos; Delwiche, Stephen; Zhu, Qibing

    2016-03-25

    The increasingly common application of the near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying the penetration depth of NIR hyperspectral imaging light for milk powder. Hyperspectral NIR reflectance images were collected for eight different milk powder products that included five brands of non-fat milk powder and three brands of whole milk powder. For each milk powder, five different powder depths ranging from 1 mm to 5 mm were prepared on top of a base layer of melamine, to test spectral-based detection of the melamine through the milk. The relationship between the NIR reflectance spectra (937.5-1653.7 nm) and the penetration depth was investigated by means of the partial least squares-discriminant analysis (PLS-DA) technique to classify pixels as being milk-only or a mixture of milk and melamine. With increasing milk depth, classification model accuracy gradually decreased. The results from the 1-mm, 2-mm and 3-mm models showed that the average classification accuracy of the validation set for milk-melamine samples was reduced from 99.86% down to 94.93% as the milk depth increased from 1 mm to 3 mm. As the milk depth increased to 4 mm and 5 mm, model performance deteriorated further to accuracies as low as 81.83% and 58.26%, respectively. The results suggest that a 2-mm sample depth is recommended for the screening/evaluation of milk powders using an online NIR hyperspectral imaging system similar to that used in this study.
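
    Pixel-wise PLS-DA can be approximated with scikit-learn's PLS regression on one-hot class labels followed by an argmax, as sketched below; the number of latent variables and the data shapes are placeholders rather than the study's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def train_plsda(spectra, labels, n_components=10):
    """PLS-DA: regress one-hot class labels on pixel spectra, classify by the largest predicted score.

    spectra : (n_pixels, n_bands) NIR reflectance spectra (e.g. bands spanning 937.5-1653.7 nm)
    labels  : (n_pixels,) integer classes, e.g. 0 = milk-only, 1 = milk-melamine mixture
    """
    onehot = np.eye(labels.max() + 1)[labels]
    return PLSRegression(n_components=n_components).fit(spectra, onehot)

def classify(model, spectra):
    """Predicted class per pixel (index of the largest PLS-predicted score)."""
    return model.predict(spectra).argmax(axis=1)
```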

  3. Penetration Depth Measurement of Near-Infrared Hyperspectral Imaging Light for Milk Powder

    PubMed Central

    Huang, Min; Kim, Moon S.; Chao, Kuanglin; Qin, Jianwei; Mo, Changyeun; Esquerre, Carlos; Delwiche, Stephen; Zhu, Qibing

    2016-01-01

    The increasingly common application of the near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying the penetration depth of NIR hyperspectral imaging light for milk powder. Hyperspectral NIR reflectance images were collected for eight different milk powder products that included five brands of non-fat milk powder and three brands of whole milk powder. For each milk powder, five different powder depths ranging from 1 mm to 5 mm were prepared on top of a base layer of melamine, to test spectral-based detection of the melamine through the milk. The relationship between the NIR reflectance spectra (937.5–1653.7 nm) and the penetration depth was investigated by means of the partial least squares-discriminant analysis (PLS-DA) technique to classify pixels as being milk-only or a mixture of milk and melamine. With increasing milk depth, classification model accuracy gradually decreased. The results from the 1-mm, 2-mm and 3-mm models showed that the average classification accuracy of the validation set for milk-melamine samples was reduced from 99.86% down to 94.93% as the milk depth increased from 1 mm to 3 mm. As the milk depth increased to 4 mm and 5 mm, model performance deteriorated further to accuracies as low as 81.83% and 58.26%, respectively. The results suggest that a 2-mm sample depth is recommended for the screening/evaluation of milk powders using an online NIR hyperspectral imaging system similar to that used in this study. PMID:27023555

  4. Large depth-high resolution full 3D imaging of the anterior segments of the eye using high speed optical frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Kerbage, C.; Lim, H.; Sun, W.; Mujat, M.; de Boer, J. F.

    2007-06-01

    Three dimensional rapid large depth range imaging of the anterior segments of the human eye by an optical frequency domain imaging system is presented. The tunable source spans from 1217 to 1356 nm with an average output power of 60 mW, providing a measured axial resolution of 10 μm in air based on the coherence envelope. The effective depth range is 4 mm, defined as the distance over which the sensitivity drops by 6 dB, achieved by frequency shifting the optical signal using acousto-optic modulators. The measured maximum sensitivity is 109 dB at a sample arm power of 14.7 mW and an A-line rate of 43,900 per second. Images consisting of 512 depth profiles are acquired at an acquisition rate of 85 frames per second. We demonstrate an optical frequency domain imaging system capable of mapping in vivo the entire area of the human anterior segment (13.4 x 12 x 4.2 mm) in 1.4 seconds.

  5. Evaluation of optical imaging and spectroscopy approaches for cardiac tissue depth assessment

    SciTech Connect

    Lin, B; Matthews, D; Chernomordik, V; Gandjbakhche, A; Lane, S; Demos, S G

    2008-02-13

    NIR light scattering from ex vivo porcine cardiac tissue was investigated to understand how imaging or point measurement approaches may assist the development of methods for tissue depth assessment. Our results indicate an increase of average image intensity as thickness increases up to approximately 2 mm. In a dual-fiber spectroscopy configuration, sensitivity extended to approximately 3 mm, increasing to 6 mm when the spectral ratio between selected wavelengths was used. Preliminary Monte Carlo results provided a reasonable fit to the experimental data.

  6. Multiple-image-depth modeling for hotspot and AF printing detections

    NASA Astrophysics Data System (ADS)

    Tang, Y. P.; Chou, C. S.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2012-03-01

    Typical OPC models focus on predicting wafer contour or CD; therefore, the modeling approach emphasizes careful determination of feature and edge locations in the photo-resist (PR) as well as the exposure threshold, so that the 'cut' model image matches the wafer SEM contours or cut-line CDs most closely. This is an exquisite approach with regard to contour-based OPC, for the model is calibrated directly from wafer CDs. However, for other applications such as hotspot detection or assist feature (AF) printing prediction that might occur at the top or the bottom of the PR, the typical OPC model approach may not be accurate enough. Usually, these kinds of phenomena can only be properly described by rigorous simulation, which is very time-consuming and hence not suitable for OPC. In this paper, the approach of building the OPC model with multiple image depths will be discussed. This approach references the images at the bottom and/or the top of the PR. This way, the behavior of the images which are not shown at the normal image depth can be predicted more accurately without distorting the optical model. This compromised OPC modeling approach is beneficial for runtime reduction compared to the rigorous simulation, and for better accuracy compared to the conventional model. The applications for AF printing and hotspot predictions using the multiple image depth approach will be demonstrated.

  7. Augmented depth perception visualization in 2D/3D image fusion.

    PubMed

    Wang, Jian; Kreiser, Matthias; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal

    2014-12-01

    2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-the-art visualization software are usually related to the strong compromise between 2D and 3D visibility or the lack of depth perception. In this paper, we investigate several concepts enabling improvement of current image fusion visualization found in the operating room. First, a contour enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique, both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire which included 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray. Further, when integrating an RGB or RB color-depth encoding in the image fusion, both perception and intuitiveness are improved.

  8. Two-photon instant structured illumination microscopy improves the depth penetration of super-resolution imaging in thick scattering samples.

    PubMed

    Winter, Peter W; York, Andrew G; Nogare, Damian Dalle; Ingaramo, Maria; Christensen, Ryan; Chitnis, Ajay; Patterson, George H; Shroff, Hari

    2014-09-20

    Fluorescence imaging methods that achieve spatial resolution beyond the diffraction limit (super-resolution) are of great interest in biology. We describe a super-resolution method that combines two-photon excitation with structured illumination microscopy (SIM), enabling three-dimensional interrogation of live organisms with ~150 nm lateral and ~400 nm axial resolution, at frame rates of ~1 Hz. By performing optical rather than digital processing operations to improve resolution, our microscope permits super-resolution imaging with no additional cost in acquisition time or phototoxicity relative to the point-scanning two-photon microscope upon which it is based. Our method provides better depth penetration and inherent optical sectioning than all previously reported super-resolution SIM implementations, enabling super-resolution imaging at depths exceeding 100 μm from the coverslip surface. The capability of our system for interrogating thick live specimens at high resolution is demonstrated by imaging whole nematode embryos and larvae, and tissues and organs inside zebrafish embryos.

  9. Analysis of airborne imaging spectrometer data for the Ruby Mountains, Montana, by use of absorption-band-depth images

    NASA Technical Reports Server (NTRS)

    Brickey, David W.; Crowley, James K.; Rowan, Lawrence C.

    1987-01-01

    Airborne Imaging Spectrometer-1 (AIS-1) data were obtained for an area of amphibolite grade metamorphic rocks that have moderate rangeland vegetation cover. Although rock exposures are sparse and patchy at this site, soils are visible through the vegetation and typically comprise 20 to 30 percent of the surface area. Channel-averaged band-depth images were produced for diagnostic soil and rock absorption bands. Sets of three such images were combined to produce color composite band-depth images. This relatively simple approach did not require extensive calibration efforts and was effective for discerning a number of spectrally distinctive rocks and soils, including soils having high talc concentrations. The results show that the high spectral and spatial resolution of AIS-1 and future sensors holds considerable promise for mapping mineral variations in soil, even in moderately vegetated areas.
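
    The band-depth calculation underlying such images is the continuum-removed depth at an absorption feature. The sketch below computes it for a hyperspectral cube and stacks three features into a color composite; the shoulder/center wavelengths are left as placeholders rather than the bands used in the study.

```python
import numpy as np

def band_depth(cube, wl, left, center, right):
    """Continuum-removed absorption-band depth per pixel.

    cube : (H, W, B) reflectance cube; wl : (B,) band wavelengths
    left, center, right : wavelengths of the continuum shoulders and the band center
    Depth = 1 - R(center) / R_continuum(center), with the continuum interpolated
    linearly between the two shoulder reflectances.
    """
    il, ic, ir = (np.argmin(np.abs(wl - w)) for w in (left, center, right))
    frac = (wl[ic] - wl[il]) / (wl[ir] - wl[il])
    continuum = (1 - frac) * cube[..., il] + frac * cube[..., ir]
    return 1.0 - cube[..., ic] / continuum

def band_depth_composite(cube, wl, features):
    """Stack three band-depth images (one per (left, center, right) feature) into an RGB composite."""
    return np.dstack([band_depth(cube, wl, *f) for f in features])
```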

  10. Improving Resolution and Depth of Astronomical Observations via Modern Mathematical Methods for Image Analysis

    NASA Astrophysics Data System (ADS)

    Castellano, M.; Ottaviani, D.; Fontana, A.; Merlin, E.; Pilo, S.; Falcone, M.

    2015-09-01

    In the past years modern mathematical methods for image analysis have led to a revolution in many fields, from computer vision to scientific imaging. However, some recently developed image processing techniques successfully exploited by other sectors have been rarely, if ever, experimented on astronomical observations. We present here tests of two classes of variational image enhancement techniques: "structure-texture decomposition" and "super-resolution", showing that they are effective in improving the quality of observations. Structure-texture decomposition makes it possible to recover faint sources previously hidden by the background noise, effectively increasing the depth of available observations. Super-resolution yields a higher-resolution and better sampled image out of a set of low resolution frames, thus mitigating problems in data analysis arising from the difference in resolution/sampling between different instruments, as in the case of the EUCLID VIS and NIR imagers.

  11. Development of Extended-Depth Swept Source Optical Coherence Tomography for Applications in Ophthalmic Imaging of the Anterior and Posterior Eye

    NASA Astrophysics Data System (ADS)

    Dhalla, Al-Hafeez Zahir

    Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides micron-scale resolution of tissue micro-structure over depth ranges of several millimeters. This imaging technique has had a profound effect on the field of ophthalmology, wherein it has become the standard of care for the diagnosis of many retinal pathologies. Applications of OCT in the anterior eye, as well as for imaging of coronary arteries and the gastro-intestinal tract, have also shown promise, but have not yet achieved widespread clinical use. The usable imaging depth of OCT systems is most often limited by one of three factors: optical attenuation, inherent imaging range, or depth-of-focus. The first of these, optical attenuation, stems from the limitation that OCT only detects singly-scattered light. Thus, beyond a certain penetration depth into turbid media, essentially all of the incident light will have been multiply scattered, and can no longer be used for OCT imaging. For many applications (especially retinal imaging), optical attenuation is the most restrictive of the three imaging depth limitations. However, for some applications, especially anterior segment, cardiovascular (catheter-based) and GI (endoscopic) imaging, the usable imaging depth is often not limited by optical attenuation, but rather by the inherent imaging depth of the OCT systems. This inherent imaging depth, which is specific to only Fourier Domain OCT, arises due to two factors: sensitivity fall-off and the complex conjugate ambiguity. Finally, due to the trade-off between lateral resolution and axial depth-of-focus inherent in diffractive optical systems, additional depth limitations sometimes arise in either high lateral resolution or extended depth OCT imaging systems. The depth-of-focus limitation is most apparent in applications such as adaptive optics (AO-) OCT imaging of the retina, and extended depth imaging of the ocular anterior segment. In this dissertation, techniques for

  12. Analytic expression of fluorescence ratio detection correlates with depth in multi-spectral sub-surface imaging

    PubMed Central

    Leblond, F; Ovanesyan, Z; Davis, S C; Valdés, P A; Kim, A; Hartov, A; Wilson, B C; Pogue, B W; Paulsen, K D; Roberts, D W

    2016-01-01

    Here we derived analytical solutions to diffuse light transport in biological tissue based on spectral deformation of diffused near-infrared measurements. These solutions provide a closed-form mathematical expression which predicts that the depth of a fluorescent molecule distribution is linearly related to the logarithm of the ratio of fluorescence at two different wavelengths. The slope and intercept values of the equation depend on the intrinsic values of absorption and reduced scattering of tissue. This linear behavior occurs if the following two conditions are satisfied: the depth is beyond a few millimeters, and the tissue is relatively homogenous. We present experimental measurements acquired with a broad-beam non-contact multi-spectral fluorescence imaging system using a hemoglobin-containing diffusive phantom. Preliminary results confirm that a significant correlation exists between the predicted depth of a distribution of protoporphyrin IX (PpIX) molecules and the measured ratio of fluorescence at two different wavelengths. These results suggest that depth assessment of fluorescence contrast can be achieved in fluorescence-guided surgery to allow improved intra-operative delineation of tumor margins. PMID:21971201
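
    In a minimal form, the linear relation described above can be written as follows, where the slope and intercept are set by the tissue's absorption and reduced scattering coefficients (the symbols are illustrative, not the paper's notation):

```latex
d \;\approx\; a \,\ln\!\frac{F_{\lambda_1}}{F_{\lambda_2}} \;+\; b,
\qquad a = a(\mu_a, \mu_s'), \quad b = b(\mu_a, \mu_s'),
```

    where F_{λ1} and F_{λ2} are the detected fluorescence intensities at the two wavelengths and d is the depth of the fluorophore distribution; a and b are obtained from phantom calibration as in the experiments above.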

  13. Combining depth and gray images for fast 3D object recognition

    NASA Astrophysics Data System (ADS)

    Pan, Wang; Zhu, Feng; Hao, Yingming

    2016-10-01

    Reliable and stable visual perception systems are needed for humanoid robotic assistants to perform complex grasping and manipulation tasks. The recognition of the object and its precise 6D pose are required. This paper addresses the challenge of detecting and positioning a textureless known object, by estimating its complete 6D pose in cluttered scenes. A 3D perception system is proposed in this paper, which can robustly recognize CAD models in cluttered scenes for the purpose of grasping with a mobile manipulator. Our approach uses a powerful combination of two different camera technologies, Time-Of-Flight (TOF) and RGB, to segment the scene and extract objects. The depth image and gray image are combined to recognize instances of a 3D object in the world and to estimate their 3D poses. The full pose estimation process is based on depth image segmentation and an efficient shape-based matching. At first, the depth image is used to separate the supporting plane of objects from the cluttered background. Thus, cluttered backgrounds are circumvented and the search space is extremely reduced. A hierarchical model based on the geometry information of an a priori CAD model of the object is generated in the offline stage. Then, using the hierarchical model, we perform shape-based matching in 2D gray images. Finally, we validate the proposed method in a number of experiments. The results show that utilizing depth and gray images together can meet the demands of a time-critical application and reduce the error rate of object recognition significantly.

  14. Chromatic confocal microscopy for multi-depth imaging of epithelial tissue.

    PubMed

    Olsovsky, Cory; Shelton, Ryan; Carrasco-Zevallos, Oscar; Applegate, Brian E; Maitland, Kristen C

    2013-05-01

    We present a novel chromatic confocal microscope capable of volumetric reflectance imaging of microstructure in non-transparent tissue. Our design takes advantage of the chromatic aberration of aspheric lenses that are otherwise well corrected. Strong chromatic aberration, generated by multiple aspheres, longitudinally disperses supercontinuum light onto the sample. The backscattered light detected with a spectrometer is therefore wavelength encoded and each spectrum corresponds to a line image. This approach obviates the need for traditional axial mechanical scanning techniques that are difficult to implement for endoscopy and susceptible to motion artifact. A wavelength range of 590-775 nm yielded a >150 µm imaging depth with ~3 µm axial resolution. The system was further demonstrated by capturing volumetric images of buccal mucosa. We believe these represent the first microstructural images in non-transparent biological tissue using chromatic confocal microscopy that exhibit long imaging depth while maintaining acceptable resolution for resolving cell morphology. Miniaturization of this optical system could bring enhanced speed and accuracy to endomicroscopic in vivo volumetric imaging of epithelial tissue.

  15. Digital approximation to extended depth of field in non-telecentric imaging systems

    NASA Astrophysics Data System (ADS)

    Meneses, J. E.; Contreras, C. R.

    2011-01-01

    A method used to digitally extend the depth of field of an imaging system consists of moving the object of study along the optical axis of the system so that different images contain different areas that are sharp; those images are stored and processed digitally to obtain a fused image in which all regions of the object are sharp. The implementation of this method, although widely used, imposes certain experimental conditions that should be evaluated to study the degree of validity of the final image obtained. One experimental condition is related to the conservation of the geometric magnification factor when there is relative movement between the object and the observation system; this implies that the system must be telecentric, which leads to a reduction of the observation field and the use of expensive systems if the application includes microscopic observation. This paper presents a technique that makes it possible to extend the depth of field of a non-telecentric imaging system; this system is used for applications in Optical Metrology with systems that have a large observation field.

  16. Spatial frequencies from human periosteum at different depths using two-photon microscopic images

    NASA Astrophysics Data System (ADS)

    Sordillo, Laura A.; Shi, Lingyan; Bhagroo, Stephen; Nguyen, Theinan; Lubicz, Stephanie; Pu, Yang; Budansky, Yuri; Hatak, Noella; Alfano, R. R.

    2014-03-01

    The outer layer of human bone, the periosteum, was studied using two-photon (2P) fluorescence microscopy. This layer of the periosteum is composed mostly of fibrous collagen. The inner cambium layer has less collagen and contains osteoblasts necessary for bone remodeling. The spatial frequencies from the layers of the periosteum of human bone at different depths were investigated using images acquired with two-photon excitation microscopy. This 2P spectroscopic method offers deeper penetration into samples, high fluorescence collection efficiency, and a reduction in photobleaching and photodamage. Using 130 femtosecond pulses at an 800 nm excitation wavelength, a 40× microscope objective, and a photomultiplier tube (PMT) detector, high contrast images of the collagen present in the periosteum at various micrometer-scale depths from the surface were obtained. Fourier transform analysis of the 2P images was used to assess the structure of the periosteum at different depths in terms of spatial frequencies. The spatial frequency spectra from the outer and inner periosteal regions show significant spectral peak differences which can provide information on the structure of the layers of the periosteum. One may be able to use spatial frequency spectra for optical detection of abnormalities of the periosteum which can occur in disease.

  17. Depth-resolved holographic optical coherence imaging using a high-sensitivity photorefractive polymer device

    NASA Astrophysics Data System (ADS)

    Salvador, M.; Prauzner, J.; Köber, S.; Meerholz, K.; Jeong, K.; Nolte, D. D.

    2008-12-01

    We present coherence-gated holographic imaging using a highly sensitive photorefractive (PR) polymer composite as the recording medium. Due to the high sensitivity of the composite, holographic recording at intensities as low as 5 mW/cm2 allowed a frame exposure time of only 500 ms. Motivated by regenerative medical applications, we demonstrate optical depth sectioning of a polymer foam for use as a cell culture matrix. An axial resolution of 18 μm and a transverse resolution of 30 μm up to a depth of 600 μm were obtained using an off-axis recording geometry.

  18. Full-range imaging of eye accommodation by high-speed long-depth range optical frequency domain imaging

    PubMed Central

    Furukawa, Hiroyuki; Hiro-Oka, Hideaki; Satoh, Nobuyuki; Yoshimura, Reiko; Choi, Donghak; Nakanishi, Motoi; Igarashi, Akihito; Ishikawa, Hitoshi; Ohbayashi, Kohji; Shimizu, Kimiya

    2010-01-01

    We describe a high-speed long-depth range optical frequency domain imaging (OFDI) system employing a long-coherence-length tunable source and demonstrate dynamic full-range imaging of the anterior segment of the eye, from the cornea surface to the posterior capsule of the crystalline lens, with a depth range of 12 mm without removing the complex conjugate image ambiguity. The tunable source spanned from 1260 to 1360 nm with an average output power of 15.8 mW. The fast A-scan rate of 20,000 per second provided dynamic OFDI of the time course of whole-anterior-segment changes following abrupt relaxation from the accommodated to the relaxed state, which was measured for a healthy eye and for an eye with an intraocular lens. PMID:21258564

  19. High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM

    PubMed Central

    Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne

    2015-01-01

    Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids. PMID:26576666

  20. High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM

    NASA Astrophysics Data System (ADS)

    Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne

    2015-11-01

    Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids.

  1. Depth-section imaging of swine kidney by spectrally encoded microscopy

    NASA Astrophysics Data System (ADS)

    Liao, Jiuling; Gao, Wanrong

    2016-10-01

    The kidneys are essential regulatory organs whose main function is to regulate the balance of electrolytes in the blood, along with maintaining pH homeostasis. The study of the microscopic structure of the kidney will help identify kidney diseases associated with specific renal histology changes. Spectrally encoded microscopy (SEM) is a new reflectance microscopic imaging technique in which a grating is used to illuminate different positions along a line on the sample with different wavelengths, reducing the size of the system and the imaging time. In this paper, an SEM device is described which is based on a superluminescent diode source and a home-built spectrometer. The lateral resolution was measured by imaging a USAF resolution target. The axial response curve was obtained as a reflecting mirror was scanned through the focal plane axially. In order to test the feasibility of using SEM for depth-section imaging of excised swine kidney tissue, images of the samples were acquired by scanning the sample at 10 μm per step along the depth direction. Architectural features of the kidney tissue could be clearly visualized in the SEM images, including glomeruli and blood vessels. Results from this study suggest that SEM may be useful for locating regions with a likelihood of kidney disease or cancer.

  2. Large area and depth-profiling dislocation imaging and strain analysis in Si/SiGe/Si heterostructures.

    PubMed

    Chen, Xin; Zuo, Daniel; Kim, Seongwon; Mabon, James; Sardela, Mauro; Wen, Jianguo; Zuo, Jian-Min

    2014-10-01

    We demonstrate the combined use of large area depth-profiling dislocation imaging and quantitative composition and strain measurement for a strained Si/SiGe/Si sample based on nondestructive techniques of electron beam-induced current (EBIC) and X-ray diffraction reciprocal space mapping (XRD RSM). Depth and improved spatial resolution is achieved for dislocation imaging in EBIC by using different electron beam energies at a low temperature of ~7 K. Images recorded clearly show dislocations distributed in three regions of the sample: deep dislocation networks concentrated in the "strained" SiGe region, shallow misfit dislocations at the top Si/SiGe interface, and threading dislocations connecting the two regions. Dislocation densities at the top of the sample can be measured directly from the EBIC results. XRD RSM reveals separated peaks, allowing a quantitative measurement of composition and strain corresponding to different layers of different composition ratios. High-resolution scanning transmission electron microscopy cross-section analysis clearly shows the individual composition layers and the dislocation lines in the layers, which supports the EBIC and XRD RSM results.

  3. Large Area and Depth-Profiling Dislocation Imaging and Strain Analysis in Si/SiGe/Si Heterostructures

    SciTech Connect

    Chen, Xin; Zuo, Daniel; Kim, Seongwon; Mabon, James; Sardela, Mauro; Wen, Jianguo; Zuo, Jian-Min

    2014-08-27

    We demonstrate the combined use of large area depth-profiling dislocation imaging and quantitative composition and strain measurement for a strained Si/SiGe/Si sample based on nondestructive techniques of electron beam-induced current (EBIC) and X-ray diffraction reciprocal space mapping (XRD RSM). Depth and improved spatial resolution is achieved for dislocation imaging in EBIC by using different electron beam energies at a low temperature of ~7 K. Images recorded clearly show dislocations distributed in three regions of the sample: deep dislocation networks concentrated in the “strained” SiGe region, shallow misfit dislocations at the top Si/SiGe interface, and threading dislocations connecting the two regions. Dislocation densities at the top of the sample can be measured directly from the EBIC results. XRD RSM reveals separated peaks, allowing a quantitative measurement of composition and strain corresponding to different layers of different composition ratios. High-resolution scanning transmission electron microscopy cross-section analysis clearly shows the individual composition layers and the dislocation lines in the layers, which supports the EBIC and XRD RSM results.

  4. Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models

    PubMed Central

    Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir

    2016-01-01

    Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. PMID:27775570

  5. Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models.

    PubMed

    Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir

    2016-10-19

    Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors.

  6. Robust Depth Estimation and Image Fusion Based on Optimal Area Selection

    PubMed Central

    Lee, Ik-Hyun; Mahmood, Muhammad Tariq; Choi, Tae-Sun

    2013-01-01

    Most 3D cameras with depth-sensing capabilities employ active depth estimation techniques, such as stereo, the triangulation method or time-of-flight. However, these methods are expensive. The cost can be reduced by applying passive optical methods, as they are inexpensive and efficient. In this paper, we suggest the use of one of the passive optical methods, named shape from focus (SFF), for 3D cameras. In the proposed scheme, first, an adaptive window is computed through an iterative process using a criterion. Then, the window is divided into four regions. In the next step, the best focused area among the four regions is selected based on variation in the data. The effectiveness of the proposed scheme is validated using image sequences of synthetic and real objects. Comparative analysis based on the statistical metrics correlation, mean square error (MSE), universal image quality index (UIQI) and structural similarity (SSIM) shows the effectiveness of the proposed scheme. PMID:24008281
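
    The shape-from-focus core can be sketched as a per-pixel focus measure evaluated across the focal stack, taking the best-focused frame as depth; the sum-modified-Laplacian measure and the fixed window below stand in for the paper's adaptive window and region-selection steps.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def modified_laplacian(img):
    """Sum-modified-Laplacian focus measure: |2I - shift_x| + |2I - shift_y| per pixel."""
    lx = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    ly = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return lx + ly

def shape_from_focus(stack, step_um, window=9):
    """Depth map from a focal stack.

    stack   : (F, H, W) frames taken at increasing focus positions
    step_um : focus step between consecutive frames (micrometers)
    """
    focus = np.array([uniform_filter(modified_laplacian(f.astype(float)), size=window)
                      for f in stack])
    return focus.argmax(axis=0) * step_um   # index of best-focused frame -> depth
```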

  7. Principal component analysis of TOF-SIMS spectra, images and depth profiles: an industrial perspective

    NASA Astrophysics Data System (ADS)

    Pacholski, Michaeleen L.

    2004-06-01

    Principal component analysis (PCA) has been successfully applied to time-of-flight secondary ion mass spectrometry (TOF-SIMS) spectra, images and depth profiles. Although SIMS spectral data sets can be small (in comparison to datasets typically discussed in literature from other analytical techniques such as gas or liquid chromatography), each spectrum has thousands of ions, resulting in what can be a difficult comparison of samples. Analysis of industrially-derived samples means the identities of most surface species are unknown a priori and samples must be analyzed rapidly to satisfy customer demands. PCA enables rapid assessment of spectral differences (or lack thereof) between samples and identification of chemically different areas on sample surfaces for images. Depth profile analysis helps define interfaces and identify low-level components in the system.
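
    The spectral PCA workflow can be illustrated in a few lines with scikit-learn; the preprocessing (simple mean-centering after normalization to total counts) is one common choice rather than the article's exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_spectra(spectra, n_components=3):
    """PCA of TOF-SIMS spectra arranged as a (n_samples, n_peaks) matrix of peak intensities.

    Returns per-sample scores (for spotting groupings and outliers), per-peak loadings
    (for identifying which ions drive the differences), and the explained variance ratios.
    """
    X = spectra / spectra.sum(axis=1, keepdims=True)   # normalize to total counts
    X = X - X.mean(axis=0)                             # mean-center each peak
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)
    return scores, pca.components_, pca.explained_variance_ratio_
```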

  8. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  9. Depth Perception and the History of Three-Dimensional Art: Who Produced the First Stereoscopic Images?

    PubMed Central

    2017-01-01

    The history of the expression of three-dimensional structure in art can be traced from the use of occlusion in Palaeolithic cave paintings, through the use of shadow in classical art, to the development of perspective during the Renaissance. However, the history of the use of stereoscopic techniques is controversial. Although the first undisputed stereoscopic images were presented by Wheatstone in 1838, it has been claimed that two sketches by Jacopo Chimenti da Empoli (c. 1600) can be fused to yield an impression of stereoscopic depth, while others suggest that Leonardo da Vinci’s Mona Lisa is the world’s first stereogram. Here, we report the first quantitative study of perceived depth in these works, in addition to more recent works by Salvador Dalí. To control for the contribution of monocular depth cues, ratings of the magnitude and coherence of depth were recorded for both stereoscopic and pseudoscopic presentations, with a genuine contribution of stereoscopic cues revealed by a difference between these scores. Although effects were clear for Wheatstone and Dalí’s images, no such effects could be found for works produced earlier. As such, we have no evidence to reject the conventional view that the first producer of stereoscopic imagery was Sir Charles Wheatstone. PMID:28203349

  10. Depth Perception and the History of Three-Dimensional Art: Who Produced the First Stereoscopic Images?

    PubMed

    Brooks, Kevin R

    2017-01-01

    The history of the expression of three-dimensional structure in art can be traced from the use of occlusion in Palaeolithic cave paintings, through the use of shadow in classical art, to the development of perspective during the Renaissance. However, the history of the use of stereoscopic techniques is controversial. Although the first undisputed stereoscopic images were presented by Wheatstone in 1838, it has been claimed that two sketches by Jacopo Chimenti da Empoli (c. 1600) can be fused to yield an impression of stereoscopic depth, while others suggest that Leonardo da Vinci's Mona Lisa is the world's first stereogram. Here, we report the first quantitative study of perceived depth in these works, in addition to more recent works by Salvador Dalí. To control for the contribution of monocular depth cues, ratings of the magnitude and coherence of depth were recorded for both stereoscopic and pseudoscopic presentations, with a genuine contribution of stereoscopic cues revealed by a difference between these scores. Although effects were clear for Wheatstone and Dalí's images, no such effects could be found for works produced earlier. As such, we have no evidence to reject the conventional view that the first producer of stereoscopic imagery was Sir Charles Wheatstone.

  11. Depth imaging in highly scattering underwater environments using time-correlated single-photon counting

    NASA Astrophysics Data System (ADS)

    Maccarone, Aurora; McCarthy, Aongus; Halimi, Abderrahim; Tobin, Rachael; Wallace, Andy M.; Petillot, Yvan; McLaughlin, Steve; Buller, Gerald S.

    2016-10-01

    This paper presents an optical depth imaging system optimized for highly scattering environments such as underwater. The system is based on the time-correlated single-photon counting (TCSPC) technique and the time-of-flight approach. Laboratory-based measurements demonstrate the potential of underwater depth imaging, with specific attention given to environments with a high level of scattering. The optical system comprised a monostatic transceiver unit, a fiber-coupled supercontinuum laser source with a wavelength-tunable acousto-optic filter (AOTF), and a fiber-coupled single-element silicon single-photon avalanche diode (SPAD) detector. In the optical system, the transmit and receive channels in the transceiver unit were overlapped in a coaxial optical configuration. The targets were placed in a 1.75 meter long tank and raster scanned using two galvo-mirrors. Laboratory-based experiments demonstrate depth profiling performed with up to nine attenuation lengths between the transceiver and target. All of the measurements were taken with an average laser power of less than 1 mW. Initially, the data were processed using a straightforward pixel-wise cross-correlation of the return timing signal with the system instrumental timing response. More advanced algorithms were then used to process these cross-correlation results. These results illustrate the potential for reconstructing images in highly scattering environments and for investigating much shorter acquisition times. These algorithms take advantage of the sparseness of the data under the Discrete Cosine Transform (DCT) and the correlation between adjacent pixels to restore the depth and reflectivity images.
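
    A minimal sketch of the pixel-wise cross-correlation step described above, assuming a single pixel's photon-timing histogram and a measured instrument response; the bin width, refractive index, and toy data are illustrative assumptions rather than values from the paper.

      import numpy as np

      def depth_from_histogram(hist, irf, bin_width_ps, n_medium=1.33):
          """Estimate target range from one pixel's photon-timing histogram.

          Cross-correlates the histogram with the instrument response function
          (IRF) and converts the peak lag into a one-way distance in the medium.
          """
          xcorr = np.correlate(hist, irf, mode="full")
          lag_bins = np.argmax(xcorr) - (len(irf) - 1)      # delay of hist relative to irf
          tof_s = lag_bins * bin_width_ps * 1e-12
          c_medium = 2.998e8 / n_medium                     # speed of light in water
          return 0.5 * c_medium * tof_s                     # half the round-trip distance

      # Toy usage: a Gaussian IRF and a histogram that is the IRF delayed by 200 bins.
      irf = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0) ** 2)
      hist = np.zeros(4096)
      hist[200:264] = irf
      print(depth_from_histogram(hist, irf, bin_width_ps=2.0))   # about 0.045 m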

  12. Influences of the pickup process on the depth of field of integral imaging display

    NASA Astrophysics Data System (ADS)

    Yang, Shenwu; Sang, Xinzhu; Gao, Xin; Yu, Xunbo; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru

    2017-03-01

    A typical integral imaging (InIm) display consists of a pickup process and a display process. In the display process, the facet braiding phenomenon influences the depth of field (DOF). In fact, the DOF of the InIm system depends not only on facet braiding in the display process but also strongly on the pickup process. Only objects within a certain region are captured as clear elemental images, and blurry elemental images are obtained outside this region. The region in which clear elemental images can be recorded is called the pickup DOF of InIm, and it influences the total DOF of the InIm display. A formula for calculating the pickup DOF of InIm is presented and compared with the DOF caused by facet braiding. Experimental results agree with the theoretical analysis, which is beneficial for designing InIm display systems.

  13. Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy.

    PubMed

    Quirin, Sean; Vladimirov, Nikita; Yang, Chao-Tsung; Peterka, Darcy S; Yuste, Rafael; Ahrens, Misha B

    2016-03-01

    Increasing the volumetric imaging speed of light-sheet microscopy will improve its ability to detect fast changes in neural activity. Here, a system is introduced for brain-wide imaging of neural activity in the larval zebrafish by coupling structured illumination with cubic phase extended depth-of-field (EDoF) pupil encoding. This microscope enables faster light-sheet imaging and facilitates arbitrary plane scanning, removing constraints on acquisition speed, alignment tolerances, and physical motion near the sample. The usefulness of this method is demonstrated by performing multi-plane calcium imaging in the fish brain with a 416 × 832 × 160 μm field of view at 33 Hz. The optomotor response behavior of the zebrafish is monitored at high speeds, and time-locked correlations of neuronal activity are resolved across its brain.

  14. Realtime hand detection system using convex shape detector in sequential depth images

    NASA Astrophysics Data System (ADS)

    Tai, Chung-Li; Li, Chia-Chang; Liao, Duan-Li

    2013-12-01

    In this paper, a real-time hand detection and tracking system is proposed. A calibrated stereo vision system is used to obtain disparity images, and real-world coordinates are obtained by geometric transformation. Unlike other pixel-based shape detectors, which require edge information, the proposed convex shape detector, based on real-world coordinates, is applied directly to depth images to detect hands regardless of distance. Waving-gesture recognition and simple hand tracking are also implemented in this work. The acceptable accuracy of the proposed system is examined in a verification process. Experimental results of hand detection and tracking demonstrate the robustness and feasibility of the proposed method.

  15. Enhanced depth resolution in terahertz imaging using phase-shift interferometry

    NASA Astrophysics Data System (ADS)

    Johnson, Jon L.; Dorney, Timothy D.; Mittleman, Daniel M.

    2001-02-01

    We describe an imaging technique for few-cycle optical pulses. An object to be imaged is placed at the focus of a lens in one arm of a Michelson interferometer. This introduces a phase shift of approximately π between the two arms of the interferometer, via the Gouy phase shift. The resulting destructive interference provides a nearly background-free measurement, and a dramatic enhancement in depth resolution. We demonstrate this using single-cycle pulses of terahertz radiation, and show that it is possible to resolve features thinner than 2% of the coherence length of the radiation. This technique could have important applications in low-coherence optical tomographic measurements.

  16. Improved depth of field in the scanning electron microscope derived from through-focus image stacks.

    PubMed

    Boyde, Alan

    2004-01-01

    The depth of field limit in the scanning electron microscope (SEM) can be overcome by recording stacks of through-focus images (as in conventional and confocal optical microscopy) which are postprocessed to generate an all-in-focus image. Images are recorded under constant electron optical conditions by mechanical Z-axis movement of the sample. This gives rise to a change in magnification through the stack due to the perspective projection of the SEM image. Calculation of the necessary scaling, the derivation of best-focus information at every patch in the image, and a contour-map function derived from the selected patch depths are incorporated in a new software package (Auto-Montage Pro). The utility of these procedures is demonstrated with examples from the study of human osteoporotic bone, where results show uncoupling of resorption and formation. The procedure can be combined with pseudo-colour coding for the direction of apparent illumination when using backscattered electron (BSE) detectors in contrasting positions.

  17. Noninvasive determination of burn depth in children by digital infrared thermal imaging

    NASA Astrophysics Data System (ADS)

    Medina-Preciado, Jose David; Kolosovas-Machuca, Eleazar Samuel; Velez-Gomez, Ezequiel; Miranda-Altamirano, Ariel; González, Francisco Javier

    2013-06-01

    Digital infrared thermal imaging is used to assess noninvasively the severity of burn wounds in 13 pediatric patients. A delta-T (ΔT) parameter obtained by subtracting the temperature of a healthy contralateral region from the temperature of the burn wound is compared with the burn depth measured histopathologically. Thermal imaging results show that superficial dermal burns (IIa) show increased temperature compared with their contralateral healthy region, while deep dermal burns (IIb) show a lower temperature than their contralateral healthy region. This difference in temperature is statistically significant (p<0.0001) and provides a way of distinguishing deep dermal from superficial dermal burns. These results show that digital infrared thermal imaging could be used as a noninvasive procedure to assess burn wounds. An additional advantage of using thermal imaging, which can image a large skin surface area, is that it can be used to identify regions with different burn depths and estimate the size of the grafts needed for deep dermal burns.

  18. 3D Sorghum Reconstructions from Depth Images Identify QTL Regulating Shoot Architecture

    PubMed Central

    2016-01-01

    Dissecting the genetic basis of complex traits is aided by frequent and nondestructive measurements. Advances in range imaging technologies enable the rapid acquisition of three-dimensional (3D) data from an imaged scene. A depth camera was used to acquire images of sorghum (Sorghum bicolor), an important grain, forage, and bioenergy crop, at multiple developmental time points from a greenhouse-grown recombinant inbred line population. A semiautomated software pipeline was developed and used to generate segmented, 3D plant reconstructions from the images. Automated measurements made from 3D plant reconstructions identified quantitative trait loci for standard measures of shoot architecture, such as shoot height, leaf angle, and leaf length, and for novel composite traits, such as shoot compactness. The phenotypic variability associated with some of the quantitative trait loci displayed differences in temporal prevalence; for example, alleles closely linked with the sorghum Dwarf3 gene, an auxin transporter and pleiotropic regulator of both leaf inclination angle and shoot height, influence leaf angle prior to an effect on shoot height. Furthermore, variability in composite phenotypes that measure overall shoot architecture, such as shoot compactness, is regulated by loci underlying component phenotypes like leaf angle. As such, depth imaging is an economical and rapid method to acquire shoot architecture phenotypes in agriculturally important plants like sorghum to study the genetic basis of complex traits. PMID:27528244

  19. Single image depth estimation based on convolutional neural network and sparse connected conditional random field

    NASA Astrophysics Data System (ADS)

    Zhu, Leqing; Wang, Xun; Wang, Dadong; Wang, Huiyan

    2016-10-01

    Deep convolutional neural networks (DCNNs) have attracted significant interest in the computer vision community in recent years and have exhibited high performance in resolving many computer vision problems, such as image classification. We address pixel-level depth prediction from a single image by combining a DCNN and a sparse connected conditional random field (CRF). Owing to the invariance properties of DCNNs that make them suitable for high-level tasks, their outputs are generally not localized enough for detailed pixel-level regression. A multiscale DCNN and sparse connected CRF are combined to overcome this localization weakness. We have evaluated our framework using the well-known NYU V2 depth dataset, and the results show that the proposed method can improve the depth prediction accuracy both qualitatively and quantitatively, as compared to previous works. This finding shows the potential use of the proposed method in three-dimensional (3-D) modeling or 3-D video production from the given two-dimensional (2-D) images or 2-D videos.

  20. Upconversion fluorescent nanoparticles as a potential tool for in-depth imaging

    NASA Astrophysics Data System (ADS)

    Nagarajan, Sounderya; Zhang, Yong

    2011-09-01

    Upconversion nanoparticles (UCNs) are nanoparticles that are excited in the near infrared (NIR) region with emission in the visible or NIR regions. This makes these particles attractive for use in biological imaging, as NIR light can penetrate tissue better with minimal absorption/scattering. This paper discusses the study of the depth to which cells can be imaged using these nanoparticles. UCNs with NaYF4 nanocrystals doped with Yb3+/Er3+ (visible emission) or Yb3+/Tm3+ (NIR emission) were synthesized and modified with silica, enabling their dispersion in water and conjugation of biomolecules to their surface. The size of the sample was characterized using transmission electron microscopy, and the fluorescence was measured using a fluorescence spectrometer at an excitation of 980 nm. Tissue phantoms were prepared by reported methods to mimic skin/muscle tissue, and it was observed that the cells could be imaged up to a depth of 3 mm using the NIR-emitting UCNs. Further, the depth of detection was evaluated for UCNs targeted to gap junctions formed between cardiac cells.

  1. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    PubMed

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics, such as area and volume, can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques, but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high precision commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which once corrected can yield a usable thigh volume measurement.
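
    For reference, the repeatability statistic quoted above can be computed from paired repeat measurements with the standard anthropometric definition sketched below; this is a generic formula, not code from the study.

      import numpy as np

      def technical_error_of_measurement(trial1, trial2):
          """Absolute and relative (%) TEM for paired repeat measurements.

          Standard anthropometric definition: TEM = sqrt(sum(d_i^2) / (2 n))
          over the n pairwise differences d_i, with %TEM = 100 * TEM / grand mean.
          """
          t1, t2 = np.asarray(trial1, float), np.asarray(trial2, float)
          d = t1 - t2
          tem = np.sqrt(np.sum(d ** 2) / (2 * d.size))
          rel_tem = 100.0 * tem / np.mean(np.concatenate([t1, t2]))
          return tem, rel_tem

      # e.g. two repeat thigh-volume measurements (litres) on five participants
      print(technical_error_of_measurement([5.1, 6.3, 4.8, 5.9, 6.0],
                                           [5.0, 6.4, 4.9, 5.8, 6.2]))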

  2. X-ray imaging using avalanche multiplication in amorphous selenium: Investigation of depth dependent avalanche noise

    SciTech Connect

    Hunt, D. C.; Tanioka, Kenkichi; Rowlands, J. A.

    2007-03-15

    The past decade has seen the swift development of the flat-panel detector (FPD), also known as the active matrix flat-panel imager, for digital radiography. This new technology is applicable to other modalities, such as fluoroscopy, that require the acquisition of multiple images, but could benefit from some improvements. In such applications, where more than one image is acquired, less radiation is available to form each image and amplifier noise becomes a serious problem. Avalanche multiplication in amorphous selenium (a-Se) can provide the necessary amplification prior to readout so as to reduce the effect of the electronic noise of the FPD. However, in direct conversion detectors avalanche multiplication can lead to a new source of gain fluctuation noise called depth-dependent avalanche noise. A theoretical model was developed to understand depth-dependent avalanche noise. Experiments were performed on a direct imaging system implementing avalanche multiplication in a layer of a-Se to validate the theory. For parameters appropriate for a diagnostic imaging FPD for fluoroscopy, the detective quantum efficiency (DQE) was found to drop by as much as 50% with increasing electric field, as predicted by the theoretical model. This drop in DQE can be eliminated by separating the collection and avalanche regions, for example by having a region of low electric field where x rays are absorbed and converted into charge, which then drifts into a region of high electric field where it undergoes avalanche multiplication. This means that quantum-noise-limited direct-conversion FPDs for low-exposure imaging techniques are a possibility.

  3. A range/depth modulation transfer function (RMTF) framework for characterizing 3D imaging LADAR performance

    NASA Astrophysics Data System (ADS)

    Staple, Bevan; Earhart, R. P.; Slaymaker, Philip A.; Drouillard, Thomas F., II; Mahony, Thomas

    2005-05-01

    3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in 3-dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined, is improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system's ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework utilizes only those terminologies proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new, Range-Based, Slanted Knife-Edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enable improved system standardization.

  4. Airborne imaging spectrometer data of the Ruby Mountains, Montana: Mineral discrimination using relative absorption band-depth images

    USGS Publications Warehouse

    Crowley, J.K.; Brickey, D.W.; Rowan, L.C.

    1989-01-01

    Airborne imaging spectrometer data collected in the near-infrared (1.2–2.4 µm) wavelength range were used to study the spectral expression of metamorphic minerals and rocks in the Ruby Mountains of southwestern Montana. The data were analyzed by using a new data enhancement procedure: the construction of relative absorption band-depth (RBD) images. RBD images, like band-ratio images, are designed to detect diagnostic mineral absorption features, while minimizing reflectance variations related to topographic slope and albedo differences. To produce an RBD image, several data channels near an absorption band shoulder are summed and then divided by the sum of several channels located near the band minimum. RBD images are both highly specific and sensitive to the presence of particular mineral absorption features. Further, the technique does not distort or subdue spectral features as sometimes occurs when using other data normalization methods. By using RBD images, a number of rock and soil units were distinguished in the Ruby Mountains, including weathered quartz-feldspar pegmatites, marbles of several compositions, and soils developed over poorly exposed mica schists. The RBD technique is especially well suited for detecting weak near-infrared spectral features produced by soils, which may permit improved mapping of subtle lithologic and structural details in semiarid terrains. The observation of soils rich in talc, an important industrial commodity in the study area, also indicates that RBD images may be useful for mineral exploration.
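
    The RBD construction described above reduces to a simple ratio of channel sums; a minimal sketch for a generic image cube is given below, with the band indices chosen purely for illustration.

      import numpy as np

      def relative_band_depth(cube, shoulder_bands, minimum_bands):
          """Relative absorption band-depth (RBD) image from an imaging-spectrometer cube.

          cube has shape (rows, cols, bands); the RBD is the sum of channels near
          the absorption-band shoulder divided by the sum of channels near the
          band minimum, so slope and albedo variations largely cancel.
          """
          shoulder = cube[:, :, shoulder_bands].sum(axis=2)
          minimum = cube[:, :, minimum_bands].sum(axis=2)
          return shoulder / np.maximum(minimum, 1e-12)   # guard against divide-by-zero

      # Hypothetical usage: channels 20-22 on the shoulder, 26-28 near the band minimum.
      cube = np.random.rand(100, 100, 64).astype(np.float32)
      rbd = relative_band_depth(cube, shoulder_bands=[20, 21, 22], minimum_bands=[26, 27, 28])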

  5. Three-Dimensional Image Cytometer Based on Widefield Structured Light Microscopy and High-Speed Remote Depth Scanning

    PubMed Central

    Choi, Heejin; Wadduwage, Dushan N.; Tu, Ting Yuan; Matsudaira, Paul; So, Peter T. C.

    2014-01-01

    A high-throughput 3D image cytometer has been developed that improves imaging speed by an order of magnitude over current technologies. This imaging speed improvement was realized by combining several key components. First, a depth-resolved image can be rapidly generated using a structured light reconstruction algorithm that requires only two wide field images, one with uniform illumination and the other with structured illumination. Second, depth scanning is implemented using high-speed remote depth scanning. Finally, the large field of view, high-NA objective lens and the high-pixelation, high-frame-rate sCMOS camera enable high-resolution, high-sensitivity imaging of a large cell population. This system can image 800 cells/s in 3D at submicron resolution, corresponding to imaging 1 million cells in 20 min. The statistical accuracy of this instrument is verified by quantitatively measuring rare cell populations with ratios ranging from 1:1 to 1:10^5. PMID:25352187

  6. Depth-of-focus (DoF) analysis of a 193nm superlens imaging structure.

    PubMed

    Shi, Zhong; Kochergin, Vladimir; Wang, Fei

    2009-10-26

    We present a design of a 193 nm superlens imaging structure to enable the printing of 20 nm features. Optical image simulations indicate that 20 nm resolution is feasible for both the periodic grating feature and the two-slit feature. The nominal depth-of-focus (DoF) position for both features is identified through image contrast calculations. Simulations show that the two features have a common nominal dose at the nominal DoF to resolve a 20 nm critical dimension when a suitable dielectric material is placed between the mask and the superlens layer. A DoF of ~8 nm is shown to be obtainable for the 20 nm half-pitch grating feature, while the respective DoF for the two-slit feature is less than 8 nm, which potentially can be enhanced by employing existing lithographic resolution enhancement techniques.

  7. Dual-depth SSOCT for simultaneous complex resolved anterior segment and conventional retinal imaging

    NASA Astrophysics Data System (ADS)

    Dhalla, Al-Hafeez; Bustamante, Theresa; Nanikivil, Derek; Hendargo, Hansford; McNabb, Ryan; Kuo, Anthony; Izatt, Joseph A.

    2012-01-01

    We present a novel optical coherence tomography (OCT) system design that employs coherence revival-based heterodyning and polarization encoding to simultaneously image the ocular anterior segment and the retina. Coherence revival heterodyning allows for multiple depths within a sample to be simultaneously imaged and frequency encoded by carefully controlling the optical pathlength of each sample path. A polarization-encoded sample arm was used to direct orthogonal polarizations to the anterior segment and retina. This design is a significant step toward realizing whole-eye OCT, which would enable customized ray-traced modeling of patient eyes to improve refractive surgical interventions, as well as the elimination of optical artifacts in retinal OCT diagnostics. We demonstrated the feasibility of this system by acquiring images of the anterior segments and retinas of healthy human volunteers.

  8. Conductivity depth imaging of Airborne Electromagnetic data with double pulse transmitting current based on model fusion

    NASA Astrophysics Data System (ADS)

    Li, Jing; Dou, Mei; Lu, Yiming; Peng, Cong; Yu, Zining; Zhu, Kaiguang

    2017-01-01

    Airborne electromagnetic (AEM) systems have traditionally been used in mineral exploration. Typically, the system transmits a single-pulse waveform to detect conductive anomalies. Conductivity-depth imaging (CDI) of the data is generally applied to identify conductive targets. A CDI algorithm with a double-pulse transmitting current based on model fusion is developed. The double pulse is made up of a half-sine pulse of high power and a trapezoid pulse of low power. This CDI algorithm recovers more near-surface information than traditional single-pulse CDI. The electromagnetic response to the double-pulse transmitting current is calculated by linear convolution based on forward modeling. The CDI results for the half-sine and trapezoid pulses are obtained by a look-up table method, and the two results are fused to form a double-pulse conductivity-depth imaging result. This makes it possible to obtain accurate conductivity and depth. Tests on synthetic data demonstrate that the double-pulse, model-fusion CDI algorithm maps a wider range of conductivities and better reflects the overall geological conductivity variations than CDI with a single-pulse transmitting current.
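
    As a hedged sketch of the forward-modeling step, the response of a linear earth to an arbitrary transmitter current can be computed by convolving the step response with the time derivative of the current; the waveforms below are placeholders and do not reproduce the paper's half-sine and trapezoid pulses.

      import numpy as np

      def response_to_waveform(step_response, current, dt):
          """Receiver response to an arbitrary transmitter current (linear earth).

          Uses the linear-system identity: response = step_response convolved
          with dI/dt, where dI/dt is the time derivative of the current.
          """
          didt = np.gradient(current, dt)
          return np.convolve(step_response, didt)[: len(step_response)] * dt

      # Placeholder waveforms: a half-sine transmitter pulse sampled at dt = 1e-5 s.
      dt = 1e-5
      t = np.arange(0, 2e-3, dt)
      current = np.where(t < 1e-3, np.sin(np.pi * t / 1e-3), 0.0)
      step_resp = np.exp(-t / 3e-4)                 # toy earth step response
      v = response_to_waveform(step_resp, current, dt)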

  9. Imaging with depth extension: where are the limits in fixed- focus cameras?

    NASA Astrophysics Data System (ADS)

    Bakin, Dmitry; Keelan, Brian

    2008-08-01

    The integration of novel optics designs, miniature CMOS sensors, and powerful digital processing into a single imaging module package is driving progress in handset camera systems in terms of performance, size (thinness) and cost. Miniature cameras incorporating high-resolution sensors and fixed-focus Extended Depth of Field (EDOF) optics allow close-range reading of printed material (barcode patterns, business cards), while providing high-quality imaging in more traditional applications. These cameras incorporate modified optics and digital processing to recover the soft-focus images and restore sharpness over a wide range of object distances. The effects of a variety of imaging module parameters on the EDOF range were analyzed for a family of high-resolution CMOS modules. The parameters include various optical properties of the imaging lens and the characteristics of the sensor. The extension factors for the EDOF imaging module were defined in terms of an improved absolute resolution in object space while maintaining focus at infinity. This definition was applied for the purpose of identifying the minimally resolvable object details in mobile cameras with a bar-code reading feature.

  10. Subjective quality and depth assessment in stereoscopic viewing of volume-rendered medical images

    NASA Astrophysics Data System (ADS)

    Rousson, Johanna; Couturou, Jeanne; Vetsuypens, Arnout; Platisa, Ljiljana; Kumcu, Asli; Kimpe, Tom; Philips, Wilfried

    2014-03-01

    No study to date has explored the relationship between perceived image quality (IQ) and perceived depth (DP) in stereoscopic medical images. However, this is crucial for designing objective quality metrics suitable for stereoscopic medical images. This study examined this relationship using volume-rendered stereoscopic medical images for both dual- and single-view distortions. The reference image was modified to simulate common alterations occurring during the image acquisition stage or at the display side: added white Gaussian noise, Gaussian filtering, changes in luminance, brightness and contrast. We followed a double stimulus five-point quality scale methodology to conduct subjective tests with eight non-expert human observers. The results suggested that DP was very robust to luminance, contrast and brightness alterations and insensitive to noise distortions up to a standard deviation of σ=20 and crosstalk rates of 7%. In contrast, IQ seemed sensitive to all distortions. Finally, for both DP and IQ, the Friedman test indicated that the quality scores for dual-view distortions were significantly worse than the scores for single-view distortions for multiple blur levels and crosstalk impairments. No differences were found for most levels of brightness, contrast and noise distortions. Thus, DP and IQ did not react equivalently to identical impairments, and both depended on whether dual- or single-view distortions were applied.

  11. Reflectivity and depth images based on time-correlated single photon counting technique

    NASA Astrophysics Data System (ADS)

    Duan, Xuejie; Ma, Lin; Kang, Yan; Zhang, Tongyi

    2016-10-01

    We present three-dimensional images, comprising reflectivity and depth images, of a target acquired with two traditional optical imaging systems based on the time-correlated single-photon counting (TCSPC) technique, with illumination from a MHz-repetition-rate pulsed laser source. The first is a bistatic system in which the transmitted and received beam paths are separated. The second is a monostatic system whose transmit and receive channels are coaxial, so it is also called a transceiver system. Experimental results produced by both systems showed that the monostatic system has the advantages of less noise from ambient light and no limitation on the field of view. In practical applications, however, the target may be far away, so few photons return, which makes it difficult to build 3D images with a traditional imaging system. Thus an advanced system, termed the first-photon system, is presented. Its hardware structure is also monostatic, but its control structure differs from the traditional transceiver system described in this paper: only the first returned photon per pixel is recorded, instead of all returned photons per pixel. That is to say, only one detected return photon is needed per pixel to rebuild 3D images of the target with less energy and time.

  12. Enhanced depth imaging OCT and indocyanine green angiography changes in acute macular neuroretinopathy.

    PubMed

    Sanjari, Nasrin; Moein, Hamid-Reza; Soheilian, Roham; Soheilian, Masoud; Peyman, Gholam A

    2013-01-01

    The authors describe indocyanine green angiography (ICGA) and enhanced depth imaging optical coherence tomography (EDI-OCT) findings in a 46-year-old male patient with acute macular neuroretinopathy (AMN). The chief complaint was decreasing visual acuity and metamorphopsia in both eyes of 1-month duration. Visual field assessment, fluorescein angiography, OCT, ICGA, and EDI-OCT were performed initially and at 3 months. ICGA showed choroidal vascular hyperpermeability and punctate choroidal hyperfluorescent spots, especially in the left eye. EDI-OCT showed increased choroidal macular thickness, with inner and outer retinal layers affected. EDI-OCT and ICGA reveal that both the choroid and retina can be affected in AMN; however, the primary pathology and localization of the depth of involvement in AMN remain unclear.

  13. Depth profiling and imaging capabilities of an ultrashort pulse laser ablation time of flight mass spectrometer

    PubMed Central

    Cui, Yang; Moore, Jerry F.; Milasinovic, Slobodan; Liu, Yaoming; Gordon, Robert J.; Hanley, Luke

    2012-01-01

    An ultrafast laser ablation time-of-flight mass spectrometer (AToF-MS) and associated data acquisition software that permits imaging at micron-scale resolution and sub-micron-scale depth profiling are described. The ion funnel-based source of this instrument can be operated at pressures ranging from 10−8 to ∼0.3 mbar. Mass spectra may be collected and stored at a rate of 1 kHz by the data acquisition system, allowing the instrument to be coupled with standard commercial Ti:sapphire lasers. The capabilities of the AToF-MS instrument are demonstrated on metal foils and semiconductor wafers using a Ti:sapphire laser emitting 800 nm, ∼75 fs pulses at 1 kHz. Results show that elemental quantification and depth profiling are feasible with this instrument. PMID:23020378

  14. 3-D resistivity imaging of buried concrete infrastructure with application to unknown bridge foundation depth determination

    NASA Astrophysics Data System (ADS)

    Everett, M. E.; Arjwech, R.; Briaud, J.; Hurlebaus, S.; Medina-Cetina, Z.; Tucker, S.; Yousefpour, N.

    2010-12-01

    Bridges are always vulnerable to scour, and older bridges with unknown foundations in particular constitute a significant risk to public safety. Geophysical testing of bridge foundations using 3-D resistivity imaging is a promising non-destructive technology, but its execution and reliable interpretation remain challenging. A major difficulty in diagnosing foundation depth is that a single linear electrode profile generally does not provide adequate 3-D illumination to yield a useful image of the bottom of the foundation. To further explore the capabilities of resistivity tomography, we conducted a 3-D resistivity survey at a geotechnical test area that includes groups of buried, steel-reinforced concrete structures, such as slabs and piles, with cylindrical and square cross-sections that serve as proxies for bridge foundations. By constructing a number of 3-D tomograms using selected data subsets and comparing the resulting images, we have identified efficient combinations of data acquired in the vicinity of a given foundation that enable the most cost-effective and reliable depth determination. The numerous issues involved in adapting this methodology to actual bridge sites are discussed.

  15. Noninvasive measurement of burn wound depth applying infrared thermal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jaspers, Mariëlle E.; Maltha, Ilse M.; Klaessens, John H.; Vet, Henrica C.; Verdaasdonk, Rudolf M.; Zuijlen, Paul P.

    2016-02-01

    In burn wounds, early discrimination between the different depths plays an important role in the treatment strategy. The remaining vasculature in the wound determines its healing potential. Non-invasive measurement tools that can identify the vascularization are therefore considered to be of high diagnostic importance. Thermography is a non-invasive technique that can accurately measure the temperature distribution over a large skin or tissue area; the temperature is a measure of the perfusion of that area. The aim of this study was to investigate the clinimetric properties (i.e., reliability and validity) of thermography for measuring burn wound depth. In a cross-sectional study with 50 burn wounds of 35 patients, the inter-observer reliability and the validity between thermography and Laser Doppler Imaging were studied. With ROC curve analyses, the ΔT cut-off points for different burn wound depths were determined. The inter-observer reliability, expressed by an intra-class correlation coefficient of 0.99, was found to be excellent. In terms of validity, a ΔT cut-off point of 0.96°C (sensitivity 71%; specificity 79%) differentiates between a superficial partial-thickness and a deep partial-thickness burn. A ΔT cut-off point of -0.80°C (sensitivity 70%; specificity 74%) could differentiate between a deep partial-thickness and a full-thickness burn wound. This study demonstrates that thermography is a reliable method in the assessment of burn wound depths. In addition, thermography was reasonably able to discriminate among different burn wound depths, indicating its potential use as a diagnostic tool in clinical burn practice.

  16. Full-depth Coadds of the WISE and First-year NEOWISE-reactivation Images

    NASA Astrophysics Data System (ADS)

    Meisner, Aaron M.; Lang, Dustin; Schlegel, David J.

    2017-01-01

    The Near Earth Object Wide-field Infrared Survey Explorer (NEOWISE) Reactivation mission released data from its first full year of observations in 2015. This data set includes ∼2.5 million exposures in each of W1 and W2, effectively doubling the amount of WISE imaging available at 3.4 μm and 4.6 μm relative to the AllWISE release. We have created the first ever full-sky set of coadds combining all publicly available W1 and W2 exposures from both the AllWISE and NEOWISE-Reactivation (NEOWISER) mission phases. We employ an adaptation of the unWISE image coaddition framework, which preserves the native WISE angular resolution and is optimized for forced photometry. By incorporating two additional scans of the entire sky, we not only improve the W1/W2 depths, but also largely eliminate time-dependent artifacts such as off-axis scattered moonlight. We anticipate that our new coadds will have a broad range of applications, including target selection for upcoming spectroscopic cosmology surveys, identification of distant/massive galaxy clusters, and discovery of high-redshift quasars. In particular, our full-depth AllWISE+NEOWISER coadds will be an important input for the Dark Energy Spectroscopic Instrument selection of luminous red galaxy and quasar targets. Our full-depth W1/W2 coadds are already in use within the DECam Legacy Survey (DECaLS) and Mayall z-band Legacy Survey (MzLS) reduction pipelines. Much more work still remains in order to fully leverage NEOWISER imaging for astrophysical applications beyond the solar system.

  17. Benchmarking of depth of field for large out-of-plane deformations with single camera digital image correlation

    NASA Astrophysics Data System (ADS)

    Van Mieghem, Bart; Ivens, Jan; Van Bael, Albert

    2017-04-01

    A problem that arises when performing stereo digital image correlation in applications with large out-of-plane displacements is that the images may become unfocused. This unfocusing could result in correlation instabilities or inaccuracies. When performing DIC measurements and expecting large out-of-plane displacements, researchers either rely on their experience or use the equations from photography to estimate the parameters affecting the depth of field (DOF) of the camera. A limitation of the latter approach is that the definition of sharpness is a human-defined parameter and that it does not reflect the performance of the digital image correlation system. To get a more representative DOF value for DIC applications, a standardised testing method is presented here, making use of real camera and lens combinations as well as actual image correlation results. The method is based on experimental single camera DIC measurements of a backwards moving target. Correlation results from focused and unfocused images are compared, and a threshold value defines whether or not the correlation results are acceptable even if the images are (slightly) unfocused. By following the proposed approach, the complete DOF of a specific camera/lens combination as a function of the aperture setting and distance from the camera to the target can be defined. The comparison between the theoretical and the experimental DOF results shows that the achievable DOF for DIC applications is larger than what theoretical calculations predict. Practically, this means that the cameras can be positioned closer to the target than expected from the theoretical approach. This leads to a gain in resolution and measurement accuracy.
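
    The "equations from photography" referred to above are, in their standard thin-lens form, straightforward to evaluate; the sketch below computes the near and far focus limits from the hyperfocal distance. These are textbook approximations, not the experimental benchmark proposed in the paper.

      def photographic_dof(f_mm, N, c_mm, s_mm):
          """Textbook photographic depth-of-field limits (thin-lens approximation).

          f_mm: focal length, N: f-number, c_mm: acceptable circle of confusion,
          s_mm: focus distance (all in millimetres). Returns (near, far, dof) in mm;
          far is infinite when the focus distance exceeds the hyperfocal distance.
          """
          H = f_mm ** 2 / (N * c_mm) + f_mm          # hyperfocal distance
          near = H * s_mm / (H + (s_mm - f_mm))
          far = float("inf") if s_mm >= H else H * s_mm / (H - (s_mm - f_mm))
          return near, far, far - near

      # e.g. a 50 mm lens at f/8 with a 0.03 mm circle of confusion, focused at 2 m:
      print(photographic_dof(50.0, 8.0, 0.03, 2000.0))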

  18. The effect of aberrations on objectively assessed image quality and depth of focus.

    PubMed

    Águila-Carrasco, Antonio J Del; Read, Scott A; Montés-Micó, Robert; Iskander, D Robert

    2017-02-01

    The effects of aberrations on image quality and the objectively assessed depth of focus (DoF) were studied. Aberrometry data from 80 young subjects with a range of refractive errors was used for computing the visual Strehl ratio based on the optical transfer function (VSOTF), and then, through-focus simulations were performed in order to calculate the objective DoF (using two different relative thresholds of 50% and 80%; and two different pupil diameters) and the image quality (the peak VSOTF). Both lower order astigmatism and higher order aberration (HOA) terms up to the fifth radial order were considered. The results revealed that, of the HOAs, the comatic terms (third and fifth order) explained most of the variations of the DoF and the image quality in this population of subjects. Furthermore, computer simulations demonstrated that the removal of these terms also had a significant impact on both DoF and the peak VSOTF. Knowledge about the relationship between aberrations, DoF, image quality, and their interactions is essential in optical designs aiming to produce large values of DoF while maintaining an acceptable level of image quality. Comatic aberration terms appear to contribute strongly towards the configuration of both of these visually important parameters.

  19. Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters

    NASA Technical Reports Server (NTRS)

    Bos, Brent; Memarsadeghi, Nargess; Kizhner, Semion; Antonille, Scott

    2013-01-01

    A large depth-of-field particle image velocimeter (PIV) is designed to characterize dynamic dust environments on planetary surfaces. This instrument detects lofted dust particles, and senses the number of particles per unit volume, measuring their sizes, velocities (both speed and direction), and shape factors when the particles are large. To measure these particle characteristics in-flight, the instrument gathers two-dimensional image data at a high frame rate, typically >4,000 Hz, generating large amounts of data for every second of operation, approximately 6 GB/s. To characterize a planetary dust environment that is dynamic, the instrument would have to operate for at least several minutes during an observation period, easily producing more than a terabyte of data per observation. Given current technology, this amount of data would be very difficult to store onboard a spacecraft, and downlink to Earth. Since 2007, innovators have been developing an autonomous image analysis algorithm architecture for the PIV instrument to greatly reduce the amount of data that it has to store and downlink. The algorithm analyzes PIV images and automatically reduces the image information down to only the particle measurement data that is of interest, reducing the amount of data that is handled by a factor of more than 10^3. The state of development for this innovation is now fairly mature, with a functional algorithm architecture, along with several key pieces of algorithm logic, that has been proven through field test data acquired with a proof-of-concept PIV instrument.

  20. Assessment of Optic Nerve Head Drusen Using Enhanced Depth Imaging and Swept Source Optical Coherence Tomography

    PubMed Central

    Silverman, Anna L.; Tatham, Andrew J.; Medeiros, Felipe A.; Weinreb, Robert N.

    2015-01-01

    Background Optic nerve head drusen (ONHD) are calcific deposits buried or at the surface of the optic disc. Although ONHD may be associated with progressive visual field defects, the mechanism of drusen-related field loss is poorly understood. Methods for detecting and imaging disc drusen include B-scan ultrasonography, fundus autofluorescence, and optical coherence tomography (OCT). These modalities are useful for drusen detection but are limited by low resolution or poor penetration of deep structures. This review was designed to assess the potential role of new OCT technologies in imaging ONHD. Evidence Acquisition Critical appraisal of published literature and comparison of new imaging devices to established technology. Results The new imaging modalities of enhanced depth imaging optical coherence tomography (EDI-OCT) and swept source optical coherence tomography (SS-OCT) are able to provide unprecedented in vivo detail of ONHD. Using these devices it is now possible to quantify optic disc drusen dimensions and assess integrity of neighboring retinal structures, including the retinal nerve fiber layer. Conclusions EDI-OCT and SS-OCT have the potential to allow better detection of longitudinal changes in drusen and neural retina and improve our understanding of drusen-related visual field loss. PMID:24662838

  1. Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding

    NASA Astrophysics Data System (ADS)

    Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito

    2015-02-01

    Optical transfer functions (OTFs) along various directional spatial frequency axes are investigated for a cubic phase mask (CPM) with circular and square apertures. Although the OTF has no zero points, its value comes very close to zero for a circular aperture at low frequencies on the diagonal axis, which results in degradation of the restored images. The reason for this close-to-zero value in the OTF is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid the close-to-zero condition, a square aperture combined with the CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent deblurred images over a large depth of field.

  2. Probing depth and dynamic response of speckles in near infrared region for spectroscopic blood flow imaging

    NASA Astrophysics Data System (ADS)

    Yokoi, Naomichi; Aizu, Yoshihisa

    2016-04-01

    Imaging methods based on bio-speckles are a useful means of visualizing blood flow in living bodies and have been utilized for analyzing their condition or health state. The sensitivity to blood flow is influenced by the tissue optical properties, which depend on the wavelength of the illuminating laser light. In the present study, we experimentally investigate the characteristics of blood flow images obtained at two wavelengths, 780 nm and 830 nm, in the near-infrared region. Experiments are conducted on sample models consisting of a pork layer, a horse blood layer and a mirror, and on a human wrist and finger, to investigate the optical penetration depth and the dynamic response of the speckles to blood flow velocity at the two wavelengths.

  3. Depth and all-in-focus images obtained by multi-line-scan light-field approach

    NASA Astrophysics Data System (ADS)

    Štolc, Svorad; Huber-Mörk, Reinhold; Holländer, Branislav; Soukup, Daniel

    2014-03-01

    We present a light-field multi-line-scan image acquisition and processing system intended for the 2.5/3-D inspection of fine surface structures, such as small parts, security print, etc. in an industrial environment. The system consists of an area-scan camera, that allows for a small number of sensor lines to be extracted at high frame rates, and a mechanism for transporting the inspected object at a constant speed. During the acquisition, the object is moved orthogonally to the camera's optical axis as well as the orientation of the sensor lines. In each time step, a predefined subset of lines is read out from the sensor and stored. Afterward, by collecting all corresponding lines acquired over time, a 3-D light field is generated, which consists of multiple views of the object observed from different viewing angles while transported w.r.t. the acquisition device. This structure allows for the construction of so-called epipolar plane images (EPIs) and subsequent EPI-based analysis in order to achieve two main goals: (i) the reliable estimation of a dense depth model and (ii) the construction of an all-in-focus intensity image. Besides the specifics of our hardware setup, we also provide a detailed description of algorithmic solutions for the mentioned tasks. Two alternative methods for EPI-based analysis are compared based on artificial and real-world data.

  4. Imaging the Juan de Fuca subduction plate using 3D Kirchhoff Prestack Depth Migration

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Bodin, T.; Allen, R. M.; Tauzin, B.

    2014-12-01

    We propose a new Receiver Function migration method to image the subducting plate in the western US that utilizes the US array and regional network data. While the well-developed CCP (common conversion point) poststack migration is commonly used for such imaging, our method applies a 3D prestack depth migration approach. The traditional CCP and post-stack depth mapping approaches implement the ray tracing and moveout correction for the incoming teleseismic plane wave based on a 1D earth reference model and the assumption of horizontal discontinuities. Although this works well in mapping the reflection position of relatively flat discontinuities (such as the Moho or the LAB), CCP is known to give poor results in the presence of lateral volumetric velocity variations and dipping layers. Instead of making the flat layer assumption and 1D moveout correction, seismic rays are traced in a 3D tomographic model with the Fast Marching Method. With travel time information stored, our Kirchhoff migration is done where the amplitude of the receiver function at a given time is distributed over all possible conversion points (i.e. along a semi-ellipse) on the output migrated depth section. The migrated reflectors will appear where the semi-ellipses constructively interfere, whereas destructive interference will cancel out noise. Synthetic tests show that in the case of a horizontal discontinuity, the prestack Kirchhoff migration gives similar results to CCP, but without spurious multiples as this energy is stacked destructively and cancels out. For 45 degree and 60 degree dipping discontinuities, it also performs better in terms of imaging at the right boundary and dip angle. This is especially useful in the Western US case, beneath which the Juan de Fuca plate subducted to ~450 km with a dipping angle that may exceed 50 degrees. While the traditional CCP method will underestimate the dipping angle, our proposed imaging method will provide an accurate 3D subducting plate image without
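
    The spread-and-stack idea can be illustrated with a toy constant-velocity, active-source diffraction stack in 2-D; this is only a sketch of the principle, not the authors' 3-D teleseismic receiver-function implementation, which traces rays through a tomographic model with the Fast Marching Method.

      import numpy as np

      def diffraction_stack(traces, rec_x, src_x, dt, v, xs, zs):
          """Toy constant-velocity, 2-D Kirchhoff-style (diffraction-stack) migration.

          traces: (n_receivers, n_samples) recorded amplitudes; rec_x: receiver x
          positions; src_x: source x position (all at the surface z = 0).
          Each recorded sample is spread over every subsurface point whose total
          source-to-point-to-receiver travel time matches its arrival time, so
          true scatterers stack constructively and noise cancels.
          """
          image = np.zeros((len(zs), len(xs)))
          X, Z = np.meshgrid(xs, zs)
          for trace, rx in zip(traces, rec_x):
              t = (np.hypot(X - src_x, Z) + np.hypot(X - rx, Z)) / v   # two-way time
              idx = np.round(t / dt).astype(int)
              valid = idx < trace.size
              image[valid] += trace[idx[valid]]
          return image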

  5. Perceived depth in natural images reflects encoding of low-level luminance statistics.

    PubMed

    Cooper, Emily A; Norcia, Anthony M

    2014-08-27

    Sighted animals must survive in an environment that is diverse yet highly structured. Neural-coding models predict that the visual system should allocate its computational resources to exploit regularities in the environment, and that this allocation should facilitate perceptual judgments. Here we use three approaches (natural scenes statistical analysis, a reanalysis of single-unit data from alert behaving macaque, and a behavioral experiment in humans) to address the question of how the visual system maximizes behavioral success by taking advantage of low-level regularities in the environment. An analysis of natural scene statistics reveals that the probability distributions for light increments and decrements are biased in a way that could be exploited by the visual system to estimate depth from relative luminance. A reanalysis of neurophysiology data from Samonds et al. (2012) shows that the previously reported joint tuning of V1 cells for relative luminance and binocular disparity is well matched to a predicted distribution of binocular disparities produced by natural scenes. Finally, we show that a percept of added depth can be elicited in images by exaggerating the correlation between luminance and depth. Together, the results from these three approaches provide further evidence that the visual system allocates its processing resources in a way that is driven by the statistics of the natural environment.

  6. Numerical simulation of phase images and depth reconstruction in pulsed phase thermography

    NASA Astrophysics Data System (ADS)

    Hernandez-Valle, Saul; Peters, Kara

    2015-11-01

    In this work we apply the finite element (FE) method to simulate the results of pulsed phase thermography experiments on laminated composite plates. Specifically, the goal is to simulate the phase component of reflected thermal waves and therefore verify the calculation of defect depth through the identification of the defect blind frequency. The calculation of phase components requires a higher spatial and temporal resolution than that of the calculation of the reflected temperature. An FE modeling strategy is presented, including the estimation of the thermal properties of the defect, which in this case is a foam insert impregnated with epoxy resin. A comparison of meshing strategies using tetrahedral and hexahedral elements reveals that temperature errors in the tetrahedral results are amplified in the calculation of phase images and blind frequencies. Finally, we investigate the linearity of the measured diffusion length (based on the blind frequency) as a function of defect depth. The simulations demonstrate a nonlinear relationship between the defect depth and the diffusion length calculated from the blind frequency, consistent with previous experimental observations.
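
    For orientation, phase images in pulsed phase thermography are obtained from the per-pixel Fourier transform of the cooling sequence, and the blind frequency is converted to a depth through the thermal diffusion length; the sketch below uses a common empirical proportionality constant of about 1.8, which is an assumption and not a value from the paper.

      import numpy as np

      def phase_images(thermogram_stack, dt):
          """Per-pixel phase images from a pulsed-thermography sequence.

          thermogram_stack: (n_frames, rows, cols) surface temperatures after the flash.
          Returns the phase of each FFT bin and the corresponding frequencies.
          """
          spec = np.fft.rfft(thermogram_stack, axis=0)
          freqs = np.fft.rfftfreq(thermogram_stack.shape[0], d=dt)
          return np.angle(spec), freqs

      def depth_from_blind_frequency(f_blind, alpha, c1=1.8):
          """Defect depth from the blind frequency via the thermal diffusion length.

          mu = sqrt(alpha / (pi * f_blind)); depth ~ c1 * mu, with c1 of roughly
          1.5 to 2 reported empirically (assumed here, not taken from the paper).
          """
          mu = np.sqrt(alpha / (np.pi * f_blind))
          return c1 * mu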

  7. Depth-dependent phosphor blur in indirect x-ray imaging sensors

    NASA Astrophysics Data System (ADS)

    Badano, Aldo; Leimbach, Rachel

    2002-05-01

    The influence of phosphor screens on the digital system image quality has been studied in a number of papers. However, there has been no detailed description of the effect of depth of x-ray interaction on the blur characteristics of the phosphor and on optical collection efficiency for both powder and structured screens. We present an analysis based on optical Monte Carlo simulations of the depth-dependent phosphor blur of two classes of single-layer phosphor screens: homogeneous and columnar. The spectral sensitivity of the optical sensor is modeled according to a typical a-Si:H photodiode absorption profile. We used Gd2O2S:Tb and CsI:Tl emission spectra respectively for the powder and columnar phosphor models. We present line-spread (LSF) and modulation transfer (MTF) functions associated with the spread of signal in the phosphor, and optical collection efficiencies. We find good agreement between the Monte Carlo estimates of the MTF and the analytical solutions available in the literature. Our optical collection efficiency results show depth dependence only for the screens with highly scattering and absorptive phosphor with reflective backing, and for the case of scattering phosphor with absorptive backing.

  8. Imaging photoplethysmography for clinical assessment of cutaneous microcirculation at two different depths

    NASA Astrophysics Data System (ADS)

    Marcinkevics, Zbignevs; Rubins, Uldis; Zaharans, Janis; Miscuks, Aleksejs; Urtane, Evelina; Ozolina-Moll, Liga

    2016-03-01

    A bispectral imaging photoplethysmography (iPPG) system is proposed for clinical assessment of cutaneous microcirculation at two different depths. The iPPG system has been developed and evaluated under in vivo conditions during various tests: (1) topical application of vasodilatory liniment on the skin, (2) local skin heating, (3) arterial occlusion, and (4) regional anesthesia. The device has been validated against measurements from a laser Doppler imager (LDI) as a reference. The hardware comprises four bispectral light sources (530 and 810 nm) for uniform illumination of the skin, a video camera, and a control unit for triggering the system. The PPG signals were calculated and the changes of perfusion index (PI) were obtained during the tests. The results showed convincing correlations between PI obtained by iPPG and LDI in the (1) topical liniment (r=0.98) and (2) heating (r=0.98) tests. The topical liniment and local heating tests revealed good selectivity of the system for superficial microcirculation monitoring. It is confirmed that the iPPG system could be used for assessment of cutaneous perfusion at two different depths, corresponding to morphologically and functionally different vascular networks, and thus utilized in clinics as a cost-effective alternative to the LDI.

  9. Quantum dot imaging in the second near-infrared optical window: studies on reflectance fluorescence imaging depths by effective fluence rate and multiple image acquisition

    NASA Astrophysics Data System (ADS)

    Jung, Yebin; Jeong, Sanghwa; Nayoun, Won; Ahn, Boeun; Kwag, Jungheon; Geol Kim, Sang; Kim, Sungjee

    2015-04-01

    Quantum dot (QD) imaging capability was investigated in terms of imaging depth in the near-infrared second optical window (SOW; 1000 to 1400 nm), using time-modulated pulsed laser excitations to control the effective fluence rate. Various media, such as liquid phantoms, tissues, and in vivo small animals, were used and the imaging depths were compared with our predicted values. The QD imaging depth under excitation by a continuous 20 mW/cm2 laser was determined to be 10.3 mm for a 2 wt% hemoglobin phantom medium and 5.85 mm for a 1 wt% intralipid phantom, which were extended by more than two times on increasing the effective fluence rate to 2000 mW/cm2. Bovine liver and porcine skin tissues also showed similar enhancement in the contrast-to-noise ratio (CNR) values. A QD sample was inserted into the abdomen of a mouse. With a higher effective fluence rate, the CNR increased more than twofold and the QD sample became clearly visualized, whereas it was completely undetectable under continuous excitation. Multiple acquisitions of QD images and pixel-by-pixel averaging were performed to overcome the thermal noise of the detector in the SOW, which yielded significant enhancement in imaging capability, showing up to a 1.5 times increase in the CNR.
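
    The CNR figure used throughout the abstract can be defined in several ways; one common definition (assumed here, not necessarily the authors' exact formula) is sketched below.

      import numpy as np

      def contrast_to_noise_ratio(image, signal_mask, background_mask):
          """One common CNR definition: (mean_signal - mean_background) / std_background."""
          signal = image[signal_mask].mean()
          background = image[background_mask]
          return (signal - background.mean()) / background.std()

      # Toy usage: a bright 20x20 patch on a noisy background.
      img = np.random.normal(100.0, 5.0, (256, 256))
      sig = np.zeros_like(img, dtype=bool)
      sig[100:120, 100:120] = True
      img[sig] += 30.0
      print(contrast_to_noise_ratio(img, sig, ~sig))   # roughly 6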

  10. Broadband optical mammography instrument for depth-resolved imaging and local dynamic measurements

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Nishanth; Kainerstorfer, Jana M.; Sassaroli, Angelo; Anderson, Pamela G.; Fantini, Sergio

    2016-02-01

    We present a continuous-wave instrument for non-invasive diffuse optical imaging of the breast in a parallel-plate transmission geometry. The instrument measures continuous spectra in the wavelength range 650-1000 nm, with an intensity noise level <1.5% and a spatial sampling rate of 5 points/cm in the x- and y-directions. We collect the optical transmission at four locations, one collinear and three offset with respect to the illumination optical fiber, to recover the depth of optical inhomogeneities in the tissue. We imaged a tissue-like, breast shaped, silicone phantom (6 cm thick) with two embedded absorbing structures: a black circle (1.7 cm in diameter) and a black stripe (3 mm wide), designed to mimic a tumor and a blood vessel, respectively. The use of a spatially multiplexed detection scheme allows for the generation of on-axis and off-axis projection images simultaneously, as opposed to requiring multiple scans, thus decreasing scan-time and motion artifacts. This technique localizes detected inhomogeneities in 3D and accurately assigns their depth to within 1 mm in the ideal conditions of otherwise homogeneous tissue-like phantoms. We also measured induced hemodynamic changes in the breast of a healthy human subject at a selected location (no scanning). We applied a cyclic, arterial blood pressure perturbation by alternating inflation (to a pressure of 200 mmHg) and deflation of a pneumatic cuff around the subject's thigh at a frequency of 0.05 Hz, and measured oscillations with amplitudes up to 1 μM and 0.2 μM in the tissue concentrations of oxyhemoglobin and deoxyhemoglobin, respectively. These hemodynamic oscillations provide information about the vascular structure and functional integrity in tissue, and may be used to assess healthy or abnormal perfusion in a clinical setting.

  11. Estimation of the depth of the thoracic epidural space in children using magnetic resonance imaging

    PubMed Central

    Wani, Tariq M; Rafiq, Mahmood; Nazir, Arif; Azzam, Hatem A; Al Zuraigi, Usama; Tobias, Joseph D

    2017-01-01

    Background The estimation of the distance from the skin to the thoracic epidural space, or skin-to-epidural depth (SED), may increase the success rate and decrease the incidence of complications during placement of a thoracic epidural catheter. Magnetic resonance imaging (MRI) is the most comprehensive imaging modality of the spine, allowing for the accurate determination of tissue spaces and distances. The present study uses MRI-derived measurements to determine the SED and to define the ratio between the straight and inclined SEDs at two thoracic levels (T6–7 and T9–10) in children. Methods The T2-weighted sagittal MRI images of 109 children, ranging in age from 1 month to 8 years, undergoing radiological evaluation unrelated to spine pathology were assessed. The SEDs (inclined and straight) were determined, and a comparison between the SEDs at the two thoracic levels (T6–7 and T9–10) was made. Univariate and multivariate linear regression models were used to assess the relationship of the inclined thoracic T6–7 and T9–10 SED measurements with age, height, and weight. Results Body weight demonstrated a stronger association with the SED than did age or height, with R2 values of 0.6 for T6–7 and 0.5 for T9–10. The formulae describing the relationship between weight and the inclined SED were T6–7 inclined (mm) = 7 + 0.9 × kg and T9–10 inclined (mm) = 7 + 0.8 × kg. Conclusion The depth of the pediatric thoracic epidural space shows a stronger correlation with weight than with age or height. Based on the MRI data, the predictive weight-based formulas can serve as a guide to clinicians for placement of thoracic epidural catheters.
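
    A worked example of the weight-based formulas reported above; the helper function and its names are ours, but the coefficients are those quoted in the record.

```python
def thoracic_sed_mm(weight_kg, level="T6-7"):
    """Inclined skin-to-epidural depth (mm) from the record's weight-based formulas:
    T6-7: 7 + 0.9 x weight(kg);  T9-10: 7 + 0.8 x weight(kg)."""
    slope = {"T6-7": 0.9, "T9-10": 0.8}[level]
    return 7.0 + slope * weight_kg

# For a 20 kg child: roughly 25 mm at T6-7 and 23 mm at T9-10.
print(thoracic_sed_mm(20, "T6-7"), thoracic_sed_mm(20, "T9-10"))
```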

  12. Optimized non-integer order phase mask to extend the depth of field of an imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Jiang; Miao, Erlong; Sui, Yongxin; Yang, Huaijiang

    2016-09-01

    Wavefront coding is an effective optical technique used to extend the depth of field of an incoherent imaging system. By introducing an optimized phase mask at the pupil plane, the modulated optical transfer function becomes defocus-invariant. In this paper, we propose a new form of phase mask, based on a non-integer order and the signum function, to extend the depth of field. The performance of the phase mask is evaluated by comparing its defocused modulation transfer function invariance and Fisher information with those of other phase masks. Defocused imaging simulations are also carried out. The results demonstrate the advantages of the non-integer order phase mask and its effectiveness in extending the depth of field.
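
    The abstract does not give the exact functional form of the mask, so the sketch below uses a generalized odd-symmetric pupil phase, phi(x, y) = alpha (sign(x)|x|^beta + sign(y)|y|^beta), which reduces to the classical cubic mask for beta = 3 and has a non-integer order otherwise; it then compares how much the MTF changes under defocus with and without the mask, which is the kind of defocus-invariance comparison the record describes.

```python
import numpy as np

def defocused_mtf(alpha=30.0, beta=2.7, w20_waves=2.0, n=256):
    """MTF of a wavefront-coded pupil with an assumed generalized (non-integer
    order) odd phase mask under w20 waves of defocus."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2) <= 1.0
    mask = alpha * (np.sign(X) * np.abs(X)**beta + np.sign(Y) * np.abs(Y)**beta)
    defocus = 2.0 * np.pi * w20_waves * (X**2 + Y**2)
    pupil = aperture * np.exp(1j * (mask + defocus))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2     # incoherent PSF
    otf = np.fft.fft2(psf)
    return np.abs(otf) / np.abs(otf).flat[0]                 # MTF, normalized to DC

# The coded pupil's MTF should change far less between 0 and 2 waves of defocus
# than the clear aperture (alpha = 0), illustrating the extended depth of field.
for a in (0.0, 30.0):
    change = np.abs(defocused_mtf(alpha=a, w20_waves=0.0) - defocused_mtf(alpha=a, w20_waves=2.0))
    print(f"alpha = {a:5.1f}  mean |MTF change| = {change.mean():.4f}")
```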

  13. Anatomy of the western Java plate interface from depth-migrated seismic images

    USGS Publications Warehouse

    Kopp, H.; Hindle, D.; Klaeschen, D.; Oncken, O.; Reichert, C.; Scholl, D.

    2009-01-01

    New pre-stack depth-migrated seismic images resolve the structural details of the western Java forearc and plate interface. The structural segmentation of the forearc into discrete mechanical domains correlates with distinct deformation styles. Approximately 2/3 of the trench sediment fill is detached and incorporated into frontal prism imbricates, while the floor sequence is underthrust beneath the décollement. Western Java, however, differs markedly from margins such as Nankai or Barbados, where a uniform, continuous décollement reflector has been imaged. In our study area, the plate interface reveals a spatially irregular, nonlinear pattern characterized by the morphological relief of subducted seamounts and thicker than average patches of underthrust sediment. The underthrust sediment is associated with a low velocity zone as determined from wide-angle data. Active underplating is not resolved, but likely contributes to the uplift of the large bivergent wedge that constitutes the forearc high. Our profile is located 100 km west of the 2006 Java tsunami earthquake. The heterogeneous décollement zone regulates the friction behavior of the shallow subduction environment where the earthquake occurred. The alternating pattern of enhanced frictional contact zones associated with oceanic basement relief and weak material patches of underthrust sediment influences seismic coupling and possibly contributed to the heterogeneous slip distribution. Our seismic images resolve a steeply dipping splay fault, which originates at the décollement and terminates at the sea floor and which potentially contributes to tsunami generation during co-seismic activity. © 2009 Elsevier B.V.

  14. Forest Walk Methods for Localizing Body Joints from Single Depth Image

    PubMed Central

    Jung, Ho Yub; Lee, Soochahn; Heo, Yong Seok; Yun, Il Dong

    2015-01-01

    We present multiple random forest methods for human pose estimation from single depth images that can operate at very high frame rates. We introduce four algorithms: random forest walk, greedy forest walk, random forest jumps, and greedy forest jumps. The proposed approaches can accurately infer the 3D positions of body joints without additional information such as a temporal prior. A regression forest is trained to estimate the probability distribution of the direction or offset toward a particular joint, relative to the current position. During pose estimation, the new position is chosen from a set of representative directions or offsets. The distribution for the next position is found by traversing the regression tree from the new position. The continual position sampling through 3D space eventually produces an expectation of the sample positions, which we take as the joint position estimate. The experiments show that the accuracy is higher than that of current state-of-the-art pose estimation methods, with an additional advantage in computation time. PMID:26402029
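
    A minimal sketch of the "forest walk" inference loop described above. The trained regression forest is replaced by a stand-in function that returns a handful of candidate step vectors for the current position, so everything apart from the walk-and-average logic is an assumption for illustration.

```python
import numpy as np

def forest_walk(features, start, predict_steps, n_steps=200, rng=None):
    """Random-walk inference: at each iteration ask the (stand-in) forest for a set
    of representative steps from the current position, pick one at random, move,
    and finally return the mean of the visited positions as the joint estimate."""
    rng = rng or np.random.default_rng(0)
    pos = np.asarray(start, dtype=float)
    visited = []
    for _ in range(n_steps):
        candidates = predict_steps(features, pos)          # leaf distribution of offsets
        pos = pos + candidates[rng.integers(len(candidates))]
        visited.append(pos.copy())
    return np.mean(visited, axis=0)                        # expectation of sampled positions

# Toy stand-in "forest": noisy bounded steps toward a known joint position.
true_joint = np.array([0.3, 0.5, 2.0])
toy_rng = np.random.default_rng(1)

def toy_forest(_features, pos, k=8):
    d = true_joint - pos
    step = d / (np.linalg.norm(d) + 1e-9) * min(0.05, np.linalg.norm(d))
    return step + toy_rng.normal(0.0, 0.01, (k, 3))        # k candidate offsets

print(forest_walk(None, start=[0.0, 0.0, 1.5], predict_steps=toy_forest))
```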

  15. Nanoscale β-nuclear magnetic resonance depth imaging of topological insulators.

    PubMed

    Koumoulis, Dimitrios; Morris, Gerald D; He, Liang; Kou, Xufeng; King, Danny; Wang, Dong; Hossain, Masrur D; Wang, Kang L; Fiete, Gregory A; Kanatzidis, Mercouri G; Bouchard, Louis-S

    2015-07-14

    Considerable evidence suggests that variations in the properties of topological insulators (TIs) at the nanoscale and at interfaces can strongly affect the physics of topological materials. Therefore, a detailed understanding of surface states and interface coupling is crucial to the search for and applications of new topological phases of matter. Currently, no methods can provide depth profiling near surfaces or at interfaces of topologically inequivalent materials. Such a method could advance the study of interactions. Herein, we present a noninvasive depth-profiling technique based on β-detected NMR (β-NMR) spectroscopy of radioactive (8)Li(+) ions that can provide "one-dimensional imaging" in films of fixed thickness and generates nanoscale views of the electronic wavefunctions and magnetic order at topological surfaces and interfaces. By mapping the (8)Li nuclear resonance near the surface and 10-nm deep into the bulk of pure and Cr-doped bismuth antimony telluride films, we provide signatures related to the TI properties and their topological nontrivial characteristics that affect the electron-nuclear hyperfine field, the metallic shift, and magnetic order. These nanoscale variations in β-NMR parameters reflect the unconventional properties of the topological materials under study, and understanding the role of heterogeneities is expected to lead to the discovery of novel phenomena involving quantum materials.

  16. Lidar Waveform-Based Analysis of Depth Images Constructed Using Sparse Single-Photon Data.

    PubMed

    Altmann, Yoann; Ren, Ximing; McCarthy, Aongus; Buller, Gerald S; McLaughlin, Steve

    2016-05-01

    This paper presents a new Bayesian model and algorithm used for depth and reflectivity profiling using full waveforms from the time-correlated single-photon counting measurement in the limit of very low photon counts. The proposed model represents each Lidar waveform as a combination of a known impulse response, weighted by the target reflectivity, and an unknown constant background, corrupted by Poisson noise. Prior knowledge about the problem is embedded through prior distributions that account for the different parameter constraints and their spatial correlation among the image pixels. In particular, a gamma Markov random field (MRF) is used to model the joint distribution of the target reflectivity, and a second MRF is used to model the distribution of the target depth, which are both expected to exhibit significant spatial correlations. An adaptive Markov chain Monte Carlo algorithm is then proposed to perform Bayesian inference. This algorithm is equipped with a stochastic optimization adaptation mechanism that automatically adjusts the parameters of the MRFs by maximum marginal likelihood estimation. Finally, the benefits of the proposed methodology are demonstrated through a series of experiments using real data.
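
    A per-pixel sketch of the likelihood at the heart of this model, with a brute-force maximum-likelihood search standing in for the paper's adaptive MCMC and gamma-MRF spatial priors (both omitted here); the impulse response, grids, and noise levels are synthetic.

```python
import numpy as np

def neg_log_likelihood(counts, impulse, depth_bin, reflectivity, background):
    """Poisson negative log-likelihood of one Lidar waveform under the model
    counts[t] ~ Poisson(reflectivity * impulse[t - depth_bin] + background),
    dropping constant log-factorial terms."""
    t = np.arange(len(counts))
    shifted = np.interp(t - depth_bin, np.arange(len(impulse)), impulse, left=0.0, right=0.0)
    lam = np.maximum(reflectivity * shifted + background, 1e-12)
    return np.sum(lam - counts * np.log(lam))

def ml_depth(counts, impulse, depths, reflectivities, background=0.05):
    """Grid-search ML estimate of depth and reflectivity for a single pixel."""
    scores = [(neg_log_likelihood(counts, impulse, d, r, background), d, r)
              for d in depths for r in reflectivities]
    _, d_hat, r_hat = min(scores)
    return d_hat, r_hat

# Photon-starved synthetic waveform: true depth bin 42, reflectivity 0.8.
rng = np.random.default_rng(0)
t = np.arange(100)
impulse = np.exp(-0.5 * (t / 2.0) ** 2)                 # assumed instrument response
lam_true = 0.8 * np.interp(t - 42, t, impulse, left=0.0, right=0.0) + 0.05
counts = rng.poisson(lam_true)
# With only a handful of photons the estimate is noisy, but typically lands near bin 42.
print(ml_depth(counts, impulse, depths=t, reflectivities=np.linspace(0.1, 2.0, 20)))
```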

  17. Beneath the surface: profiling blubber depth in pinnipeds with infrared imaging.

    PubMed

    Mellish, J; Nienaber, J; Polasek, L; Horning, M

    2013-01-01

    Infrared thermography (IRT) was assessed as a non-invasive tool to evaluate body condition in juvenile female harbor seals (Phoca vitulina; n=6) and adult female Steller sea lions (Eumetopias jubatus; n=2). Surface temperature determined by IRT and blubber depth assessed with portable imaging ultrasound were monitored concurrently at eight body sites over the course of a year in long-term captive individuals under controlled conditions. Site-specific differences in surface temperature were noted between winter and summer in both species. Overall, surface temperature was slightly higher and more variable in harbor seals (9.8±0.6°C) than in Steller sea lions (9.1±0.5°C). Limited site-specific relationships were found between surface temperature and blubber thickness; however, insulation level alone explained a very small portion of the variance. Therefore, while validated IRT data collection can potentially provide valuable information on the health, condition and metabolic state of an animal, it cannot provide a generalized proxy for blubber depth.

  18. Estimating Lunar Pyroclastic Deposit Depth from Imaging Radar Data: Applications to Lunar Resource Assessment

    NASA Technical Reports Server (NTRS)

    Campbell, B. A.; Stacy, N. J.; Campbell, D. B.; Zisk, S. H.; Thompson, T. W.; Hawke, B. R.

    1992-01-01

    Lunar pyroclastic deposits represent one of the primary anticipated sources of raw materials for future human settlements. These deposits are fine-grained volcanic debris layers produced by explosive volcanism contemporaneous with the early stage of mare infilling. There are several large regional pyroclastic units on the Moon (for example, the Aristarchus Plateau, Rima Bode, and Sulpicius Gallus formations), and numerous localized examples, which often occur as dark-halo deposits around endogenic craters (such as in the floor of Alphonsus Crater). Several regional pyroclastic deposits were studied with spectral reflectance techniques: the Aristarchus Plateau materials were found to be a relatively homogeneous blanket of iron-rich glasses. One such deposit was sampled at the Apollo 17 landing site, and was found to have ferrous oxide and titanium dioxide contents of 12 percent and 5 percent, respectively. While the areal extent of these deposits is relatively well defined from orbital photographs, their depths have been constrained only by a few studies of partially filled impact craters and by imaging radar data. A model for radar backscatter from mantled units applicable to both 70-cm and 12.6-cm wavelength radar data is presented. Depth estimates from such radar observations may be useful in planning future utilization of lunar pyroclastic deposits.

  19. Improving visibility depth in passive underwater imaging by use of polarization

    NASA Astrophysics Data System (ADS)

    Chang, Peter C. Y.; Flitton, Jonathan C.; Hopcraft, Keith I.; Jakeman, Eric; Jordan, David L.; Walker, John G.

    2003-05-01

    Results are presented that demonstrate the effectiveness of using polarization discrimination to improve visibility when imaging in a scattering medium. The study is motivated by the desire to improve visibility depth in turbid environments, such as the sea. Most previous research in this area has concentrated on the active illumination of objects with polarized light. We consider passive or ambient illumination, such as that deriving from sunlight or a cloudy sky. The basis for the improvements in visibility observed is that single scattering by small particles introduces a significant amount of polarization into light at scattering angles near 90°: This light can then be distinguished from light scattered by an object that remains almost completely unpolarized. Results were obtained from a Monte Carlo simulation and from a small-scale experiment in which an object was immersed in a cell filled with polystyrene latex spheres suspended in water. In both cases, the results showed an improvement in contrast and visibility depth for obscuration that was due to Rayleigh particles, but less improvement was obtained for larger scatterers.

  20. Improving visibility depth in passive underwater imaging by use of polarization.

    PubMed

    Chang, Peter C Y; Flitton, Jonathan C; Hopcraft, Keith I; Jakeman, Eric; Jordan, David L; Walker, John G

    2003-05-20

    Results are presented that demonstrate the effectiveness of using polarization discrimination to improve visibility when imaging in a scattering medium. The study is motivated by the desire to improve visibility depth in turbid environments, such as the sea. Most previous research in this area has concentrated on the active illumination of objects with polarized light. We consider passive or ambient illumination, such as that deriving from sunlight or a cloudy sky. The basis for the improvements in visibility observed is that single scattering by small particles introduces a significant amount of polarization into light at scattering angles near 90 degrees: This light can then be distinguished from light scattered by an object that remains almost completely unpolarized. Results were obtained from a Monte Carlo simulation and from a small-scale experiment in which an object was immersed in a cell filled with polystyrene latex spheres suspended in water. In both cases, the results showed an improvement in contrast and visibility depth for obscuration that was due to Rayleigh particles, but less improvement was obtained for larger scatterers.

  1. Achievement as a Function of Worksheet Type: Application of a Depth of Processing Model of Memory to the Classroom.

    ERIC Educational Resources Information Center

    Redfield, D. L.; And Others

    A study examined the efficacy of using various types of worksheets (representative of those typically used in instruction) that had been specifically designed to elicit differing achievement effects and to promote cognitive processing at the semantic level. Fifth grade students from five classrooms were divided into groups of high, middle, and low…

  2. Low-Achieving Readers, High Expectations: Image Theatre Encourages Critical Literacy

    ERIC Educational Resources Information Center

    Rozansky, Carol Lloyd; Aagesen, Colleen

    2010-01-01

    Students in an eighth-grade, urban, low-achieving reading class were introduced to critical literacy through engagement in Image Theatre. Developed by liberatory dramatist Augusto Boal, Image Theatre gives participants the opportunity to examine texts in the triple role of interpreter, artist, and sculptor (i.e., image creator). The researchers…

  3. In vivo volumetric depth-resolved vasculature imaging of human limbus and sclera with 1 μm swept source phase-variance optical coherence angiography

    NASA Astrophysics Data System (ADS)

    Poddar, Raju; Zawadzki, Robert J.; Cortés, Dennis E.; Mannis, Mark J.; Werner, John S.

    2015-06-01

    We present in vivo volumetric depth-resolved vasculature images of the anterior segment of the human eye acquired with phase-variance based motion contrast using a high-speed (100 kHz, 10^5 A-scans/s) swept source optical coherence tomography system (SSOCT). High phase stability SSOCT imaging was achieved by using a computationally efficient phase stabilization approach. The human corneo-scleral junction and sclera were imaged with swept source phase-variance optical coherence angiography and compared with slit lamp images from the same eyes of normal subjects. Different features of the rich vascular system in the conjunctiva and episclera were visualized and described. This system can be used as a potential tool for ophthalmological research to determine changes in the outflow system, which may be helpful for identification of abnormalities that lead to glaucoma.

  4. In vivo volumetric depth-resolved vasculature imaging of human limbus and sclera with 1μm swept source phase-variance optical coherence angiography

    PubMed Central

    Poddar, Raju; Zawadzki, Robert J; Cortés, Dennis E; Mannis, Mark J; Werner, John S

    2015-01-01

    We present in vivo volumetric depth-resolved vasculature images of the anterior segment of the human eye acquired with phase-variance based motion contrast using a high-speed (100 kHz, 10^5 A-scans/s) swept source optical coherence tomography system (SSOCT). High phase stability SSOCT imaging was achieved by using a computationally efficient phase stabilization approach. The human corneo–scleral junction and sclera were imaged with swept source phase-variance optical coherence angiography and compared with slit lamp images from the same eyes of normal subjects. Different features of the rich vascular system in the conjunctiva and episclera were visualized and described. This system can be used as a potential tool for ophthalmological research to determine changes in the outflow system, which may be helpful for identification of abnormalities that lead to glaucoma. PMID:25984290

  5. Enhancement of image quality and imaging depth with Airy light-sheet microscopy in cleared and non-cleared neural tissue

    PubMed Central

    Nylk, Jonathan; McCluskey, Kaley; Aggarwal, Sanya; Tello, Javier A.; Dholakia, Kishan

    2016-01-01

    We have investigated the effect of Airy illumination on the image quality and depth penetration of digitally scanned light-sheet microscopy in turbid neural tissue. We used Fourier analysis of images acquired using Gaussian and Airy light-sheets to assess their respective image quality versus penetration into the tissue. We observed a three-fold average improvement in image quality at 50 μm depth with the Airy light-sheet. We also used optical clearing to tune the scattering properties of the tissue and found that the improvement when using an Airy light-sheet is greater in the presence of stronger sample-induced aberrations. Finally, we used homogeneous resolution probes in these tissues to quantify absolute depth penetration in cleared samples with each beam type. The Airy light-sheet method extended depth penetration by 30% compared to a Gaussian light-sheet. PMID:27867712

  6. Peripapillary choroidal thickness in Chinese children using enhanced depth imaging optical coherence tomography

    PubMed Central

    Wu, Xi-Shi; Shen, Li-Jun; Chen, Ru-Ru; Lyu, Zhe

    2016-01-01

    AIM To evaluate the peripapillary choroidal thickness (PPCT) in Chinese children, and to analyze the influencing factors. METHODS PPCT was measured with enhanced depth imaging optical coherence tomography (EDI-OCT) in 70 children (53 myopes and 17 non-myopes) aged 7 to 18y, with spherical equivalent refractive errors between 0.50 and −5.87 diopters (D). Peripapillary choroidal imaging was performed using circular scans of a diameter of 3.4 mm around the optic disc. PPCT was measured by EDI-OCT in six sectors: nasal (N), superonasal (SN), superotemporal (ST), temporal (T), inferotemporal (IT) and inferonasal (IN), as well as globally (G). RESULTS The mean global PPCT was 165.49±33.76 µm. The temporal, inferonasal and inferotemporal PPCT were significantly thinner than the nasal, superonasal and superotemporal segments. PPCT was significantly thinner in the myopic group at the temporal, superotemporal and inferotemporal segments. The axial length was significantly associated with the average global (β=−0.419, P=0.014), superonasal (β=−2.009, P=0.049) and inferonasal (β=−2.000, P=0.049) PPCT. The other factors (gender, age, SE) were not significantly associated with PPCT. CONCLUSION PPCT was thinner in the myopic group at the temporal, superotemporal and inferotemporal segments. The axial length was found to be negatively correlated with PPCT. Further studies are needed on the relationship between PPCT and myopia. PMID:27803863

  7. A preliminary investigation: the impact of microscopic condenser on depth of field in cytogenetic imaging

    NASA Astrophysics Data System (ADS)

    Ren, Liqiang; Qiu, Yuchen; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Wei R.; Liu, Hong

    2013-02-01

    As one of the important components of optical microscopes, the condenser has a considerable impact on system performance, especially on the depth of field (DOF). DOF is a critical technical feature in cytogenetic imaging that may affect the efficiency and accuracy of clinical diagnosis. The purpose of this study is to investigate the influence of the microscope condenser on DOF using a prototype transmitted-light optical microscope, based on objective and subjective evaluations. After a description of the relationship between the condenser and the objective lens and a theoretical analysis of the condenser's impact on the system numerical aperture and DOF, a standard resolution pattern and several cytogenetic samples are used for the objective and subjective assessments, respectively, of the condenser's impact on DOF. The experimental results of these objective and subjective evaluations are in agreement with the theoretical analysis and show that, within a specific intermediate range of condenser numerical aperture (NAcond), the DOF value decreases as NAcond increases. Although the above qualitative results are obtained under the experimental conditions of a specific prototype system, the methods presented in this preliminary investigation could offer useful guidelines for optimizing operational parameters in cytogenetic imaging.

  8. Calibrating remotely sensed river bathymetry in the absence of field measurements: Flow REsistance Equation-Based Imaging of River Depths (FREEBIRD)

    NASA Astrophysics Data System (ADS)

    Legleiter, Carl J.

    2015-04-01

    Remote sensing could enable high-resolution mapping of long river segments, but realizing this potential will require new methods for inferring channel bathymetry from passive optical image data without using field measurements for calibration. As an alternative to regression-based approaches, this study introduces a novel framework for Flow REsistance Equation-Based Imaging of River Depths (FREEBIRD). This technique allows for depth retrieval in the absence of field data by linking a linear relation between an image-derived quantity X and depth d to basic equations of open channel flow: continuity and flow resistance. One FREEBIRD algorithm takes as input an estimate of the channel aspect (width/depth) ratio A and a series of cross-sections extracted from the image and returns the coefficients of the X versus d relation. A second algorithm calibrates this relation so as to match a known discharge Q. As an initial test of FREEBIRD, these procedures were applied to panchromatic satellite imagery and publicly available aerial photography of a clear-flowing gravel-bed river. Accuracy assessment based on independent field surveys indicated that depth retrieval performance was comparable to that achieved by direct, field-based calibration methods. Sensitivity analyses suggested that FREEBIRD output was not heavily influenced by misspecification of A or Q, or by selection of other input parameters. By eliminating the need for simultaneous field data collection, these methods create new possibilities for large-scale river monitoring and analysis of channel change, subject to the important caveat that the underlying relationship between X and d must be reasonably strong.
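
    A hedged sketch of the discharge-calibrated flavour of this idea: pick the coefficient of a linear image-depth relation so that the discharge implied by continuity and a flow-resistance law matches a known Q. Manning's equation, the fixed zero intercept, and all numbers below are assumptions for illustration; the record's own formulation (including its aspect-ratio-based algorithm) differs in detail.

```python
import numpy as np
from scipy.optimize import brentq

def calibrate_discharge(X, width, slope, Q, manning_n=0.035):
    """Find b in d = b*X so that continuity plus a Manning flow-resistance law
    reproduces the known discharge Q across one image-derived cross-section."""
    dw = width / len(X)                                           # pixel width across the channel

    def discharge(b):
        d = np.maximum(b * np.asarray(X), 0.0)                    # depth at each pixel
        u = (1.0 / manning_n) * d ** (2.0 / 3.0) * np.sqrt(slope) # Manning velocity
        return np.sum(u * d) * dw                                 # Q = sum of u*d over the width

    return brentq(lambda b: discharge(b) - Q, 1e-6, 100.0)

# Synthetic cross-section: X is an image-derived quantity that increases with depth.
X = np.linspace(0.1, 1.0, 50)
b = calibrate_discharge(X, width=20.0, slope=0.002, Q=15.0)
print("calibrated coefficient:", b, " max depth (m):", b * X.max())
```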

  9. Large field-of-view and depth-specific cortical microvascular imaging underlies regional differences in ischemic brain

    NASA Astrophysics Data System (ADS)

    Qin, Jia; Shi, Lei; Dziennis, Suzan; Wang, Ruikang K.

    2014-02-01

    The ability to non-invasively monitor and quantify blood flow, blood vessel morphology, oxygenation and tissue morphology is important for improved diagnosis, treatment and management of various neurovascular disorders, e.g., stroke. Currently, no imaging technique is available that can satisfactorily extract these parameters from in vivo microcirculatory tissue beds with a large field of view and sufficient resolution at a defined depth, without any harm to the tissue. For more effective therapeutics, we need to determine the area of brain that is damaged but not yet dead after focal ischemia. Here we develop an integrated multi-functional imaging system, in which SDW-LSCI (synchronized dual wavelength laser speckle imaging) is used as a guiding tool for OMAG (optical microangiography) to investigate the fine detail of tissue hemodynamics, such as vessel flow, profile, and flow direction. We determine the utility of the integrated system for serial monitoring of the aforementioned parameters in experimental stroke, middle cerebral artery occlusion (MCAO) in mice. For 90 min MCAO, on-site and 24 hours following reperfusion, we use SDW-LSCI to determine distinct flow and oxygenation variations for differentiation of the infarction, peri-infarct, reduced-flow and contralateral regions. The blood volumes are quantifiable and distinct in the aforementioned regions. We also demonstrate that the behaviors of flow and flow direction in the arteries connected to the MCA play an important role in the time course of MCAO. These achievements may improve our understanding of vascular involvement under pathological and physiological conditions, and ultimately facilitate clinical diagnosis, monitoring and therapeutic interventions for neurovascular diseases, such as ischemic stroke.

  10. Coupling sky images with radiative transfer models: a new method to estimate cloud optical depth

    NASA Astrophysics Data System (ADS)

    Mejia, Felipe A.; Kurtz, Ben; Murray, Keenan; Hinkelman, Laura M.; Sengupta, Manajit; Xie, Yu; Kleissl, Jan

    2016-08-01

    A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based sky imager (USI) is presented. The radiance red-blue ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a radiative transfer model (RTM). From these images, the basic parameters affecting the radiance and red-blue ratio (RBR) of a pixel are identified as the solar zenith angle (θ0), τc, the solar pixel angle/scattering angle (ϑs), and the pixel zenith angle/view angle (ϑz). The effects of these parameters are described, and the functions for radiance, Iλ(τc, θ0, ϑs, ϑz), and red-blue ratio, RBR(τc, θ0, ϑs, ϑz), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc: RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured radiance, Iλ,meas(ϑs, ϑz), in addition to RBRmeas(ϑs, ϑz), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement (ARM) program site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min et al. (2003) method for overcast skies. τc values ranged from 0 to 80, with values over 80 being capped and registered as 80. A τc RMSE of 2.5 between the Min et al. (2003) method and the USI is observed. The MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. The procedure developed here provides a foundation to test and develop other cloud detection algorithms.
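
    A toy version of the disambiguation step: because the RBR curve is non-monotonic in τc, the retrieval combines the radiance mismatch with the RBR mismatch and picks the τc that minimizes both. The model curves below are simple stand-ins for the RTM lookup tables described in the record, and the weights are arbitrary.

```python
import numpy as np

def retrieve_tau(i_meas, rbr_meas, tau_grid, i_model, rbr_model, w_i=1.0, w_rbr=1.0):
    """Pick the tau_c whose modeled radiance and RBR jointly best match the
    measured pixel values (stand-in for the RRBR lookup for one geometry)."""
    cost = (w_i * ((i_model - i_meas) / i_model.max()) ** 2
            + w_rbr * (rbr_model - rbr_meas) ** 2)
    return tau_grid[np.argmin(cost)]

# Stand-in curves: RBR rises then falls with tau_c (hence non-unique), while the
# radiance keeps decreasing, which is what breaks the ambiguity.
tau_grid = np.linspace(0.0, 80.0, 801)
rbr_model = 0.7 + 0.3 * tau_grid / (1.0 + tau_grid) - 0.002 * tau_grid
i_model = 120.0 * np.exp(-0.08 * tau_grid) + 30.0

tau_true = 12.0
i_meas = np.interp(tau_true, tau_grid, i_model)
rbr_meas = np.interp(tau_true, tau_grid, rbr_model)
print("retrieved tau_c =", retrieve_tau(i_meas, rbr_meas, tau_grid, i_model, rbr_model))
```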

  11. Three-dimensional anterior segment imaging in patients with type 1 Boston Keratoprosthesis with switchable full depth range swept source optical coherence tomography

    PubMed Central

    Poddar, Raju; Cortés, Dennis E.; Werner, John S.; Mannis, Mark J.

    2013-01-01

    Abstract. A high-speed (100 kHz A-scan rate) complex-conjugate-resolved 1 μm swept source optical coherence tomography (SS-OCT) system using coherence revival of the light source is suitable for dense three-dimensional (3-D) imaging of the anterior segment. The short acquisition time helps to minimize the influence of motion artifacts. The extended depth range of the SS-OCT system allows topographic analysis of clinically relevant images of the entire depth of the anterior segment of the eye. Patients with the type 1 Boston Keratoprosthesis (KPro) require evaluation of the full anterior segment depth. Current commercially available OCT systems are not suitable for this application due to limited acquisition speed, resolution, and axial imaging range. Moreover, most commonly used research grade and some clinical OCT systems implement a commercially available SS (Axsun) that offers only 3.7 mm imaging range (in air) in its standard configuration. We describe implementation of a common swept laser with built-in k-clock to allow phase-stable imaging in both low range and high range, 3.7 and 11.5 mm in air, respectively, without the need to build an external MZI k-clock. As a result, 3-D morphology of the KPro position with respect to the surrounding tissue could be investigated in vivo both at high resolution and with large depth range to achieve noninvasive and precise evaluation of the success of the surgical procedure. PMID:23912759

  12. Pre-stack depth migration for improved imaging under seafloor canyons: 2D case study of Browse Basin, Australia*

    NASA Astrophysics Data System (ADS)

    Debenham, Helen; Westlake, Shane

    2014-06-01

    In the Browse Basin, as in many areas of the world, complex seafloor topography can cause problems with seismic imaging. This is related to complex ray paths and sharp lateral changes in velocity. This paper compares ways in which 2D Kirchhoff imaging can be improved below seafloor canyons, using both time- and depth-domain processing. In the time domain, to improve on standard pre-stack time migration (PSTM), we apply removable seafloor static time shifts in order to reduce the push-down effect under seafloor canyons before migration. This allows for better event continuity in the seismic imaging. However, this approach does not fully solve the problem, still giving sub-optimal imaging and leaving amplitude shadows and structural distortion. Only depth-domain processing, with a migration algorithm that honours the paths of the seismic energy as well as a detailed velocity model, can provide improved imaging under these seafloor canyons and give confidence in the structural components of the exploration targets in this area. We therefore performed depth velocity model building followed by pre-stack depth migration (PSDM), the result of which provided a step-change improvement in the imaging and provided new insights into the area.

  13. Extended depth-of-focus 3D micro integral imaging display using a bifocal liquid crystal lens.

    PubMed

    Shen, Xin; Wang, Yu-Jen; Chen, Hung-Shan; Xiao, Xiao; Lin, Yi-Hsin; Javidi, Bahram

    2015-02-15

    We present a three dimensional (3D) micro integral imaging display system with extended depth of focus by using a polarized bifocal liquid crystal lens. This lens and other optical components are combined as the relay optical element. The focal length of the relay optical element can be controlled to project an elemental image array in multiple positions with various lenslet image planes, by applying different voltages to the liquid crystal lens. The depth of focus of the proposed system can therefore be extended. The feasibility of our proposed system is experimentally demonstrated. In our experiments, the depth of focus of the display system is extended from 3.82 to 109.43 mm.

  14. Detailed imaging of flowing structures at depth using microseismicity: a tool for site investigation?

    NASA Astrophysics Data System (ADS)

    Pytharouli, S.; Lunn, R. J.; Shipton, Z. K.

    2011-12-01

    Field evidence shows that faults and fractures can act as focused pathways or barriers for fluid migration. This is an important property for modern engineering problems, e.g., CO2 sequestration, geological radioactive waste disposal, geothermal energy exploitation, land reclamation and remediation. For such applications the detailed characterization of the location, orientation and hydraulic properties of existing fractures is necessary. These investigations are expensive, requiring the hire of costly equipment (excavators or drill rigs) that incurs standing charges when not in use. In addition, they only provide information for discrete sample 'windows'. Non-intrusive methods have the ability to gather information across an entire area. Methods including electrical resistivity/conductivity and ground penetrating radar (GPR) have been used as tools for site investigations. Their imaging ability is often restricted by unfavourable on-site conditions; e.g., GPR is not useful in cases where a layer of clay or reinforced concrete is present. Our research has shown that high-quality seismic data can be successfully used for the detailed imaging of sub-surface structures at depth; using induced microseismicity data recorded beneath the Açu reservoir in Brazil, we identified orientations and values of average permeability of open shear fractures at depths up to 2.5 km. Could microseismicity also provide information on fracture width in terms of stress drops? First results from numerical simulations showed that higher stress drop values correspond to narrower fractures. These results were consistent with geological field observations. This study highlights the great potential of using microseismicity data as a supplementary tool for site investigation. Individual large-scale shear fractures in large rock volumes cannot currently be identified by any other geophysical dataset. The resolution of the method is restricted by the detection threshold of the local

  15. Controlling depth of focus in 3D image reconstructions by flexible and adaptive deformation of digital holograms.

    PubMed

    Ferraro, P; Paturzo, M; Memmolo, P; Finizio, A

    2009-09-15

    We show here that through an adaptive deformation of digital holograms it is possible to manage the depth of focus in 3D image reconstruction. The deformation is applied to the original hologram with the aim of simultaneously bringing into focus, in a single reconstructed image plane, different objects lying at different distances from the hologram plane (i.e., the CCD sensor). In the same way, by adapting the deformation it is possible to extend the depth of field so that a tilted object is entirely in focus. We demonstrate the method in both lensless and microscope configurations.

  16. Trap depth optimization to improve optical properties of diopside-based nanophosphors for medical imaging

    NASA Astrophysics Data System (ADS)

    Maldiney, Thomas; Lecointre, Aurélie; Viana, Bruno; Bessière, Aurélie; Gourier, Didier; Bessodes, Michel; Richard, Cyrille; Scherman, Daniel

    2012-02-01

    Owing to its ability to circumvent the autofluorescence signal, persistent luminescence was recently shown to be a powerful tool for in vivo imaging and diagnosis applications in living animals. The concept was introduced with lanthanide-doped persistent luminescence nanoparticles (PLNP), based on a lanthanide-doped silicate host, Ca0.2Zn0.9Mg0.9Si2O6:Eu2+, Mn2+, Dy3+, emitting in the near-infrared window. In order to improve the behaviour of these probes in vivo and favour diagnosis applications, we showed that biodistribution could be controlled by varying not only the hydrodynamic diameter but also the surface charge and functional groups. Stealth PLNP, with a neutral surface charge obtained by polyethylene glycol (PEG) coating, can circulate for a longer time in the mouse body before being taken up by the reticulo-endothelial system. However, the main drawback of this first generation of PLNP was that they did not permit long-term monitoring, mainly because the luminescence decays within a few tens of minutes, revealing the need for new materials with improved optical characteristics. We investigated a modified silicate host, diopside CaMgSi2O6, and increased its persistent luminescence properties by studying various Ln3+ dopants (for instance Ce, Pr, Nd, Tm, Ho). Such dopants create electron traps that control the long lasting phosphorescence (LLP). We showed that Pr3+ was the most suitable Ln3+ electron trap in the diopside lattice, providing the optimal trap depth and resulting in the most intense luminescence decay curve after UV irradiation. A novel composition, CaMgSi2O6:Eu2+,Mn2+,Pr3+, was obtained for in vivo imaging, displaying strong near-infrared persistent luminescence centred on 685 nm and allowing improved and sensitive detection through living tissues.

  17. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2008-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.

  18. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2010-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object, such as an anatomical feature. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the anatomical feature; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.
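
    The two patent records above both hinge on modulating each structured light pattern onto its own uncorrelated carrier and recovering it from a single reflected image by demodulation. The sketch below is one plausible reading of that idea, not the inventors' exact scheme: each fringe pattern amplitude-modulates a distinct carrier along the orthogonal axis, and a pattern is recovered by band-pass filtering around its carrier followed by envelope detection.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def make_composite(patterns, carrier_freqs, ny):
    """Sum of patterns, each amplitude-modulated onto its own carrier along y."""
    y = np.arange(ny)[:, None]
    comp = np.zeros((ny, patterns.shape[1]))
    for p, f in zip(patterns, carrier_freqs):
        comp += p[None, :] * (0.5 + 0.5 * np.cos(2 * np.pi * f * y / ny))
    return comp / len(patterns)

def demodulate(composite, carrier_freq, ny, half_band=3.0):
    """Recover one pattern: band-pass around its carrier along y, then take the
    analytic-signal envelope and average it over y."""
    lo = (carrier_freq - half_band) / (ny / 2)
    hi = (carrier_freq + half_band) / (ny / 2)
    b, a = butter(3, [lo, hi], btype="band")
    filtered = filtfilt(b, a, composite, axis=0)
    return np.abs(hilbert(filtered, axis=0)).mean(axis=0)

# Three phase-shifted fringe patterns along x, carriers of 20, 35 and 50 cycles along y.
nx = ny = 256
x = np.arange(nx)
patterns = np.array([0.5 + 0.5 * np.cos(2 * np.pi * 8 * x / nx + k * 2 * np.pi / 3)
                     for k in range(3)])
composite = make_composite(patterns, [20, 35, 50], ny)
recovered = demodulate(composite, 35, ny)
print("correlation with pattern 2:", np.corrcoef(recovered, patterns[1])[0, 1])
```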

  19. Subpixel shift with Fourier transform to achieve efficient and high-quality image interpolation

    NASA Astrophysics Data System (ADS)

    Chen, Qin-Sheng; Weinhous, Martin S.

    1999-05-01

    A new approach to image interpolation is proposed. Unlike conventional schemes, the interpolation of a digital image is achieved with a sub-unity coordinate-shift technique. In this approach, the original image is first shifted by sub-unity distances matching the locations where the image values need to be restored. The original and the shifted images are then interspersed, yielding an interpolated image. The high-quality sub-unity image shift, which is crucial to the approach, is accomplished by implementing the shift theorem of the Fourier transform. It is well known that, under the Nyquist sampling criterion, the most accurate image interpolation can be achieved with the interpolating (sinc) function. A major drawback is its computational cost. The present approach can achieve an interpolation quality as good as that of the sinc function, since a sub-unity shift in the Fourier domain is equivalent to shifting the sinc function in the spatial domain, while the efficiency, thanks to the fast Fourier transform, is very much improved. In comparison with conventional interpolation techniques such as linear or cubic B-spline interpolation, the interpolation accuracy is significantly enhanced. In order to compensate for the under-sampling effects in the interpolation of 3D medical images owing to larger inter-slice distances, proper window functions were recommended. The application of the approach to 2D and 3D CT and MRI images produced satisfactory interpolation results.
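
    A minimal sketch of the scheme described above: each half-pixel-shifted copy of the image is produced with the Fourier shift theorem (a linear phase ramp applied to the spectrum), and the copies are interspersed with the original to double the sampling grid. The test image is periodic and band-limited, so the result matches the analytic values to machine precision; the window functions recommended for under-sampled data are not included here.

```python
import numpy as np

def resample_shifted(img, dy, dx):
    """Evaluate the image at (y + dy, x + dx) for sub-unity shifts via the
    Fourier shift theorem (assumes the image is treated as periodic)."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    spectrum = np.fft.fft2(img) * np.exp(2j * np.pi * (ky * dy + kx * dx))
    return np.real(np.fft.ifft2(spectrum))

def interpolate_2x(img):
    """2x interpolation by interspersing the original image with half-pixel-shifted copies."""
    h, w = img.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = img
    out[0::2, 1::2] = resample_shifted(img, 0.0, 0.5)
    out[1::2, 0::2] = resample_shifted(img, 0.5, 0.0)
    out[1::2, 1::2] = resample_shifted(img, 0.5, 0.5)
    return out

# Periodic, band-limited test image: the interpolated grid matches the analytic values.
y, x = np.mgrid[0:64, 0:64]
img = np.sin(2 * np.pi * 5 * x / 64) * np.cos(2 * np.pi * 3 * y / 64)
yy, xx = np.mgrid[0:128, 0:128] / 2.0
exact = np.sin(2 * np.pi * 5 * xx / 64) * np.cos(2 * np.pi * 3 * yy / 64)
print("max error:", np.abs(interpolate_2x(img) - exact).max())   # ~1e-13
```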

  20. Choroidal changes observed with enhanced depth imaging optical coherence tomography in patients with mild Graves orbitopathy.

    PubMed

    Özkan, B; Koçer, Ç A; Altintaş, Ö; Karabaş, L; Acar, A Z; Yüksel, N

    2016-07-01

    Purpose To evaluate the choroidal thickness in patients with Graves orbitopathy (GO) using enhanced depth imaging optical coherence tomography (EDI-OCT). Methods Thirty-one patients with GO were evaluated prospectively. All subjects underwent an ophthalmologic examination including best-corrected visual acuity, intraocular pressure measurement, and biomicroscopic and fundus examinations. Choroidal thickness was measured at the central fovea. In addition, visual evoked potential measurement and visual field evaluation were performed. Results The mean choroidal thickness was 377.8±7.4 μm in the GO group and 334±13.7 μm in the control group (P=0.004). There was a correlation between the choroidal thickness and the clinical activity scores (CAS) of the patients (r=0.281, P=0.027). Additionally, there was a correlation between the choroidal thickness and the visual evoked potential (VEP) P100 latency measurements of the patients (r=0.439, P=0.001). Conclusions The results of this study demonstrate that the choroid is thicker in patients with GO. The choroidal thickness is also correlated with the CAS and VEP P100 latency measurements in these patients.

  1. The Relationship between University Students' Academic Achievement and Perceived Organizational Image

    ERIC Educational Resources Information Center

    Polat, Soner

    2011-01-01

    The purpose of present study was to determine the relationship between university students' academic achievement and perceived organizational image. The sample of the study was the senior students at the faculties and vocational schools in Umuttepe Campus at Kocaeli University. Because the development of organizational image is a long process, the…

  2. Wavelet image processing applied to optical and digital holography: past achievements and future challenges

    NASA Astrophysics Data System (ADS)

    Jones, Katharine J.

    2005-08-01

    The link between wavelets and optics goes back to the work of Dennis Gabor, who both invented holography and developed Gabor decompositions. Holography involves 3-D images; Gabor decompositions involve 1-D signals. Gabor decompositions are the predecessors of wavelets. Wavelet image processing of holography, both optical and digital, will be examined with respect to past achievements and future challenges.

  3. Planelets-A Piecewise Linear Fractional Model for Preserving Scene Geometry in Intra-Coding of Indoor Depth Images.

    PubMed

    Kiani, Vahid; Harati, Ahad; Vahedian, Abedin

    2017-02-01

    Geometrical wavelets have already proved their strength in approximation, compression, and denoising of piecewise constant and piecewise linear images. In this paper, we extend this family by introducing planelets toward an effective representation of indoor depth images. It uses a linear fractional model to capture non-linearity of depth values in the planar regions of the output images of Kinect-like sensors. A block-based compression framework based on planelet approximation is then presented, which uses quadtree decomposition along with spatial predictions as an effective intra-coding scheme. Compared with both classical geometric wavelets and some state-of-the-art image coding algorithms, our method provides desirable quality by explicitly representing edges and planar patches.
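
    The reason a linear fractional form suits planar regions of a depth image is that, for a plane seen by a pinhole camera, inverse depth is affine in the pixel coordinates, so depth itself is 1/(a*u + b*v + c). The sketch below fits that model to a synthetic planar patch by least squares on inverse depth; the parametrization is an illustrative assumption and may differ from the planelet construction in the paper.

```python
import numpy as np

def fit_linear_fractional(u, v, z):
    """Fit z(u, v) ~ 1 / (a*u + b*v + c) by linear least squares on inverse depth."""
    A = np.column_stack([u, v, np.ones_like(u)])
    coeffs, *_ = np.linalg.lstsq(A, 1.0 / z, rcond=None)
    return coeffs                                          # (a, b, c)

def eval_linear_fractional(coeffs, u, v):
    a, b, c = coeffs
    return 1.0 / (a * u + b * v + c)

# Synthetic planar patch: plane n.X = d imaged by a pinhole camera with focal length f.
f, n, d = 500.0, np.array([0.2, -0.1, 1.0]), 2.0
u, v = np.meshgrid(np.arange(-40.0, 40.0), np.arange(-40.0, 40.0))
z = d / (n[0] * u / f + n[1] * v / f + n[2])               # exact depth of the plane
coeffs = fit_linear_fractional(u.ravel(), v.ravel(), z.ravel())
recon = eval_linear_fractional(coeffs, u, v)
print("max abs depth error:", np.abs(recon - z).max())     # ~0 for an ideal plane
```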

  4. Planelets--A Piecewise Linear Fractional Model for Preserving Scene Geometry in Intra-coding of Indoor Depth Images.

    PubMed

    Kiani, Vahid; Harati, Ahad; Vahedian, Abedin

    2016-10-26

    Geometrical wavelets have already proved their strength in the approximation, compression, and denoising of piecewise constant and piecewise linear images. In this paper, we extend this family by introducing planelets toward an effective representation of indoor depth images. It uses a linear fractional model to capture the non-linearity of depth values in the planar regions of the output images of Kinect-like sensors. A block-based compression framework based on planelet approximation is then presented, which uses quadtree decomposition along with spatial predictions as an effective intra-coding scheme. Compared with both classical geometric wavelets and some state-of-the-art image coding algorithms, our method provides desirable quality by explicitly representing edges and planar patches.

  5. Advanced imaging with dynamic focus and extended depth using integrated FR4 platform.

    PubMed

    Isikman, Serhan O; Varghese, Samuel; Abdullah, Fahd; Augustine, Robin; Sprague, Randy B; Andron, Voytek; Urey, Hakan

    2009-09-14

    A two-degrees-of-freedom scanned-beam imaging system with a large dynamic range and dynamic focusing is demonstrated. The laser diode, photo-detector and optical components are integrated on a moving platform made of FR4 (Flame-Retardant 4), a common polymeric substrate used in printed circuit boards. A scan angle of 52 degrees is demonstrated at a 60 Hz resonant frequency, while the laser is moved 250 μm in the out-of-plane direction to achieve dynamic focusing. The laser is scanned by physically rotating the laser diode and the collection optics to achieve a high signal-to-noise ratio and good ambient light rejection. The collection optics is engineered such that the collection efficiency decreases when collecting light from close distances, to avoid detector saturation. The detection range is extended from contact distance up to 600 mm, while the collected power level varies only by a factor of 30 within this long range. Slight modifications will allow increasing the detection range up to one meter. This is the first demonstration of a laser scan engine with such a high degree of integration of electronics, optoelectronics, optics and micromechanics on the same platform.

  6. Depth-resolved imaging of colon tumor using optical coherence tomography and fluorescence laminar optical tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tang, Qinggong; Frank, Aaron; Wang, Jianting; Chen, Chao-wei; Jin, Lily; Lin, Jon; Chan, Joanne M.; Chen, Yu

    2016-03-01

    Early detection of neoplastic changes remains a critical challenge in clinical cancer diagnosis and treatment. Many cancers arise from epithelial layers such as those of the gastrointestinal (GI) tract. Current standard endoscopic technology is unable to detect those subsurface lesions. Since cancer development is associated with both morphological and molecular alterations, imaging technologies that can quantitatively image a tissue's morphological and molecular biomarkers and assess the depth extent of a lesion in real time, without the need for tissue excision, would be a major advance in GI cancer diagnostics and therapy. In this research, we investigated the feasibility of multi-modal optical imaging, combining high-resolution optical coherence tomography (OCT) and depth-resolved high-sensitivity fluorescence laminar optical tomography (FLOT), for structural and molecular imaging. An APC (adenomatous polyposis coli) mouse model was imaged using OCT and FLOT, and the correlated histopathological diagnosis was obtained. Quantitative structural (the scattering coefficient) and molecular (fluorescence intensity) imaging parameters from OCT and FLOT images were developed for multi-parametric analysis. This multi-modal imaging method demonstrated the feasibility of more accurate diagnosis, with 87.4% (87.3%) sensitivity (specificity), which gives the optimal diagnosis (the largest area under the receiver operating characteristic (ROC) curve). This project results in a new non-invasive multi-modal imaging platform for improved GI cancer detection, which is expected to have a major impact on the detection, diagnosis, and characterization of GI cancers, as well as a wide range of epithelial cancers.

  7. Thin film MRI-high resolution depth imaging with a local surface coil and spin echo SPI.

    PubMed

    Ouriadov, Alexei V; MacGregor, Rodney P; Balcom, Bruce J

    2004-07-01

    A multiple-echo, single-point imaging technique, employing a local surface coil probe, is presented for the examination of thin film samples. Depth images with a nominal resolution of 5 μm were acquired with acquisition times on the order of 10 min. The method may be used to observe dynamic phenomena such as polymerization, wetting, and drying in thin film samples. It is readily adapted to spatially resolved diffusion coefficient and T2 relaxation time mapping.

  8. Application of Depth of Investigation index method to process resistivity imaging models from glacier forefield

    NASA Astrophysics Data System (ADS)

    Glazer, Michał; Dobinski, Wojciech; Grabiec, Mariusz

    2015-04-01

    At the end of August 2014, ERT measurements were carried out at the Storglaciären glacier forefield (Tarfala Valley, Northern Sweden) to study permafrost occurrence. This glacier has been retreating since 1910. It is one of the most well-studied mountain glaciers in the world, owing to the initiation there of the first continuous glacier mass-balance research program. Near its frontal margin, three perpendicular and two parallel resistivity profile lines were located. They varied in the number of roll-along extensions and in the electrode spacing used. At a minimum, Schlumberger and dipole-dipole protocols were utilized at every measurement site. The surface of the glacier forefield is characterized by large moraine deposits consisting of rock blocks with air voids on the one hand and voids filled with clay material on the other. This caused large variations in electrode contact resistance along the profile lines. Furthermore, the possibility of using only weak currents and the presence of high-resistivity-contrast structures in the geological medium made the inversion process and the interpretation of the resulting resistivity models demanding. To stabilize the inversion process, efforts were made to remove the noisiest and systematically erroneous data. In order to assess the reliability of the resistivity models at depth, and to identify artifacts left by the inversion process, the Depth of Investigation (DOI) index was applied. It describes the accuracy of the model with respect to variable inversion parameters. To prepare DOI maps, two inversions of the same data set using different reference models are necessary; the results are then compared with each other. In regions where the model depends strongly on the data, the DOI takes values near zero, while in regions where the resistivity values depend more on the inversion parameters, the DOI rises. Additionally, several synthetic models were made, which led to a better understanding of the resistivity images of some geological structures observed on the
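
    The record does not spell out its DOI formula; a common formulation (of the Oldenburg and Li type, applied here to log resistivity) compares, cell by cell, two inversions of the same data that used different homogeneous reference resistivities. The sketch below is that assumed formulation with entirely synthetic numbers.

```python
import numpy as np

def doi_index(model_a, model_b, ref_a, ref_b):
    """Depth-of-investigation index per cell: near 0 where the two inversions agree
    (data-constrained), approaching 1 where each drifts to its reference model."""
    return (np.log10(model_a) - np.log10(model_b)) / (np.log10(ref_a) - np.log10(ref_b))

# Toy example: two inversion results that agree in the shallow cells and drift
# toward their respective reference resistivities in the poorly constrained deep cells.
ref_a, ref_b = 1000.0, 100.0                      # reference resistivities (ohm*m)
w = np.linspace(0.0, 1.0, 10)                     # 0 = surface cell, 1 = deepest cell
model_a = 300.0 * (1 - w) + ref_a * w
model_b = 300.0 * (1 - w) + ref_b * w
print(np.round(doi_index(model_a, model_b, ref_a, ref_b), 2))
```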

  9. Neural substrates for depth perception of the Necker cube; a functional magnetic resonance imaging study in human subjects.

    PubMed

    Inui, T; Tanaka, S; Okada, T; Nishizawa, S; Katayama, M; Konishi, J

    2000-03-24

    We have studied the cerebral activity for the depth perception of the Necker cube by functional magnetic resonance imaging. Three types of line drawing figures were used as stimuli, the Necker cube, hidden line elimination cube and overlapping squares. Subjects were instructed to perceive both orientations of the depth of the Necker cube. They were instructed to shift their attention voluntarily during viewing overlapping squares to obtain a control for the attentional shift in perceiving the Necker cube. A hidden line elimination cube was used as a control for monocular stereopsis. The results showed a clear symmetrical activation in premotor and parietal areas during the Necker cube perception compared with other conditions. The present result suggests that a neural process similar to mental image manipulation occurs during depth perception of the Necker cube.

  10. Passive seismic imaging at reservoir depths using ambient seismic noise recorded at the Otway CO2 geological storage research facility

    NASA Astrophysics Data System (ADS)

    Issa, Nader A.; Lumley, David; Pevzner, Roman

    2017-03-01

    We demonstrate rapid convergence (<60 min) of passive seismic images down to reservoir depths (∼2.0 km) at the CO2CRC Otway CO2 geological storage research facility, Australia, using ambient seismic noise recorded continuously with a buried geophone array. Our passive seismic images are created by applying seismic data processing and interferometry techniques, and show we can recover both surface and body waves from the ambient noise data. Using a recording time interval in which body waves dominate the ambient seismic noise, we generate passive seismic images that correlate well with the major reflectors imaged by conventional active-source 3D seismic data at the site. We present a mathematical model for image convergence, where the variance converges inversely proportional to recording time, and show for the first time an excellent agreement between a mathematical model and the observed convergence rate of interferometric images made from ambient seismic noise.
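    The convergence behaviour quoted above (stack variance falling inversely with recording time) can be illustrated with a toy numerical experiment. The sketch below, with made-up parameters, stacks synthetic noise cross-correlation windows and prints how the noise variance of the stack drops roughly as 1/N; it illustrates the statistical argument only and is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_windows, n_lags = 2000, 512

# hypothetical noise cross-correlations: a weak coherent arrival plus random noise
coherent = np.exp(-0.5 * ((np.arange(n_lags) - 200) / 5.0) ** 2)
ccfs = 0.05 * coherent + rng.normal(size=(n_windows, n_lags))

# variance of the stacked correlation versus total recording time (number of windows)
for n in (10, 100, 1000, 2000):
    stack = ccfs[:n].mean(axis=0)
    noise_var = stack[300:].var()          # lag window containing only noise
    print(f"{n:5d} windows  ->  noise variance {noise_var:.2e}")
# the printed variance drops roughly as 1/n, i.e. inversely with recording time
```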

  11. Review of spectral imaging technology in biomedical engineering: achievements and challenges.

    PubMed

    Li, Qingli; He, Xiaofu; Wang, Yiting; Liu, Hongying; Xu, Dongrong; Guo, Fangmin

    2013-10-01

    Spectral imaging is a technology that integrates conventional imaging and spectroscopy to get both spatial and spectral information from an object. Although this technology was originally developed for remote sensing, it has been extended to the biomedical engineering field as a powerful analytical tool for biological and biomedical research. This review introduces the basics of spectral imaging, imaging methods, current equipment, and recent advances in biomedical applications. The performance and analytical capabilities of spectral imaging systems for biological and biomedical imaging are discussed. In particular, the current achievements and limitations of this technology in biomedical engineering are presented. The benefits and development trends of biomedical spectral imaging are highlighted to provide the reader with an insight into the current technological advances and its potential for biomedical research.

  12. Three-dimensional surface reconstruction by combining a pico-digital projector for structured light illumination and an imaging system with high magnification and high depth of field

    NASA Astrophysics Data System (ADS)

    Leong-Hoï, A.; Serio, B.; Twardowski, P.; Montgomery, P.

    2014-05-01

    Based on a miniature digital light projector (pico-DLP), a prototype of a Structured Illumination Microscope (SIM) has been developed. The pico-DLP projects fringes onto the sample; by applying the three-step phase-shifting algorithm together with an absolute phase retrieval method, the 3D shape of the object surface is extracted. By using a specific optical system instead of a conventional microscope objective, the device allows 3D reconstruction of surfaces with both a 10× magnification and a high depth of field, obtained thanks to a small numerical aperture of 0.06 that still offers an acceptable lateral resolution of 6.2 μm. An image processing algorithm has been developed to reduce the noise in the acquired images before applying the reconstruction algorithm, thereby optimizing the reconstruction method. Compared with interference microscopy and confocal microscopy, which have a shallower depth of field per XY image, the microscope developed achieves a depth of field of about 700 μm and requires no vertical scanning, which greatly reduces the acquisition time. Although the system at this stage does not have the same resolution performance as interference microscopy, it is nonetheless faster and cheaper. One possible application of this SIM technique would be to first reconstruct parts of an object in real time before performing higher-resolution 3D measurements with interference microscopy. As with all classical optical instruments, the lateral resolution is limited by diffraction. Work is being carried out with the prototype SIM system to exceed the lateral resolution limits and thus achieve super-resolution.
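    The fringe analysis step can be sketched compactly. The snippet below implements the standard three-step phase-shifting formula for fringes shifted by 2π/3; the image names, the unwrapping call, and the phase-to-height scale factor are illustrative assumptions, not the prototype's actual calibration.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images shifted by 2*pi/3.

    Standard three-step formula: phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3).
    The result is wrapped to (-pi, pi] and still has to be unwrapped (the paper
    uses an absolute phase retrieval method; a simple row-wise unwrap is shown here).
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# hypothetical usage with three captured fringe images i1, i2, i3 (float arrays):
# wrapped = three_step_phase(i1, i2, i3)
# unwrapped = np.unwrap(wrapped, axis=1)                 # row-wise unwrapping
# height = unwrapped * fringe_period / (4 * np.pi * np.tan(theta))  # geometry-dependent scale
```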

  13. Depth-resolved imaging and detection of micro-retroreflectors within biological tissue using Optical Coherence Tomography

    PubMed Central

    Ivers, Steven N.; Baranov, Stephan A.; Sherlock, Tim; Kourentzi, Katerina; Ruchhoeft, Paul; Willson, Richard; Larin, Kirill V.

    2010-01-01

    A new approach to in vivo biosensor design is introduced, based on the use of an implantable micron-sized retroreflector-based platform and non-invasive imaging of its surface reflectivity by Optical Coherence Tomography (OCT). The possibility of using OCT for the depth-resolved imaging and detection of micro-retroreflectors in highly turbid media, including tissue, is demonstrated. The maximum imaging depth for the detection of the micro-retroreflector-based platform within the surrounding media was found to be 0.91 mm for porcine tissue and 1.65 mm for whole milk. With further development, it may be possible to utilize OCT and micro-retroreflectors as a tool for continuous monitoring of analytes in the subcutaneous tissue. PMID:21258473

  14. Implementation of wireless 3D stereo image capture system and synthesizing the depth of region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Kwon, Hyeokjae; Badarch, Luubaatar

    2014-05-01

    In this paper, we introduce a mobile embedded system for capturing stereo images with two CMOS camera modules. We use WinCE as the operating system and capture the stereo images using a device driver for the CMOS camera interface and DirectDraw API functions. The raw captured image data are sent to a host computer over a WiFi wireless link, where GPU hardware and CUDA programming are used to implement real-time three-dimensional stereo imaging by synthesizing the depth of the ROI (region of interest). We also investigate the deblurring mechanism of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glass type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to validate the ROI-emphasis effect.
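    The depth synthesis for the ROI follows the usual stereo geometry. A minimal NumPy sketch is given below, assuming a rectified pair and hypothetical focal length, baseline, and ROI values; the paper's CUDA kernels are not reproduced.

```python
import numpy as np

def roi_depth_map(disparity, focal_px, baseline_m, roi):
    """Convert a disparity map (pixels) to metric depth inside a region of interest.

    Uses the standard pinhole stereo relation Z = f * B / d.  Parameter names and
    values are illustrative only.
    """
    r0, r1, c0, c1 = roi
    d = disparity[r0:r1, c0:c1].astype(np.float64)
    depth = np.full_like(d, np.inf)        # invalid (zero) disparities map to infinity
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# e.g. depth = roi_depth_map(disp, focal_px=700.0, baseline_m=0.06, roi=(100, 300, 200, 440))
```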

  15. Novel dental dynamic depth profilometric imaging using simultaneous frequency-domain infrared photothermal radiometry and laser luminescence

    NASA Astrophysics Data System (ADS)

    Nicolaides, Lena; Mandelis, Andreas

    2000-01-01

    A high-spatial-resolution dynamic experimental imaging setup, which can provide simultaneous measurements of laser-induced frequency-domain infrared photothermal radiometric and luminescence signals from defects in teeth, has been developed for the first time. The major findings of this work are: (1) radiometric images are complementary to (anticorrelated with) luminescence images, as a result of the nature of the two physical signal generation processes; (2) the radiometric amplitude exhibits a much wider dynamic (signal resolution) range than luminescence in distinguishing between intact and cracked sub-surface structures in the enamel; (3) the radiometric signal (amplitude and phase) produces dental images with much better defect localization, delineation, and resolution; (4) radiometric images (amplitude and phase) at a fixed modulation frequency are depth profilometric, whereas luminescence images are not; and (5) luminescence frequency responses from enamel and hydroxyapatite exhibit two relaxation lifetimes, the longer of which (approximately ms) is common to all samples and is not sensitive to the defect state and overall quality of the enamel. Simultaneous radiometric and luminescence frequency scans for the purpose of depth profiling were performed, and a quantitative theoretical two-lifetime rate model of dental luminescence was advanced.

  16. Utility of spatial frequency domain imaging (SFDI) and laser speckle imaging (LSI) to non-invasively diagnose burn depth in a porcine model

    PubMed Central

    Burmeister, David M.; Ponticorvo, Adrien; Yang, Bruce; Becerra, Sandra C.; Choi, Bernard; Durkin, Anthony J.; Christy, Robert J.

    2015-01-01

    Surgical intervention of second-degree burns is often delayed because of the difficulty of visual diagnosis, which increases the risk of scarring and infection. Non-invasive metrics have shown promise in accurately assessing burn depth. Here, we examine the use of spatial frequency domain imaging (SFDI) and laser speckle imaging (LSI) for predicting burn depth. Contact burn wounds of increasing severity were created on the dorsum of a Yorkshire pig, and wounds were imaged with SFDI/LSI starting immediately after burn and then daily for the next 4 days. In addition, on each day the burn wounds were biopsied for histological analysis of burn depth, defined by collagen coagulation, apoptosis, and adnexal/vascular necrosis. Histological results show that collagen coagulation progressed from day 0 to day 1, and then stabilized. Imaging of the burn wounds with these non-invasive techniques produced metrics that correlate with different predictors of burn depth: collagen coagulation and apoptosis correlated with the SFDI scattering coefficient parameter (μs′), and adnexal/vascular necrosis on the day of burn correlated with blood flow determined by LSI. Therefore, incorporation of the SFDI scattering coefficient and blood flow determined by LSI may provide an algorithm for accurate assessment of the severity of burn wounds in real time. PMID:26138371

  17. Utility of spatial frequency domain imaging (SFDI) and laser speckle imaging (LSI) to non-invasively diagnose burn depth in a porcine model.

    PubMed

    Burmeister, David M; Ponticorvo, Adrien; Yang, Bruce; Becerra, Sandra C; Choi, Bernard; Durkin, Anthony J; Christy, Robert J

    2015-09-01

    Surgical intervention of second-degree burns is often delayed because of the difficulty of visual diagnosis, which increases the risk of scarring and infection. Non-invasive metrics have shown promise in accurately assessing burn depth. Here, we examine the use of spatial frequency domain imaging (SFDI) and laser speckle imaging (LSI) for predicting burn depth. Contact burn wounds of increasing severity were created on the dorsum of a Yorkshire pig, and wounds were imaged with SFDI/LSI starting immediately after burn and then daily for the next 4 days. In addition, on each day the burn wounds were biopsied for histological analysis of burn depth, defined by collagen coagulation, apoptosis, and adnexal/vascular necrosis. Histological results show that collagen coagulation progressed from day 0 to day 1, and then stabilized. Imaging of the burn wounds with these non-invasive techniques produced metrics that correlate with different predictors of burn depth: collagen coagulation and apoptosis correlated with the SFDI scattering coefficient parameter (μs′), and adnexal/vascular necrosis on the day of burn correlated with blood flow determined by LSI. Therefore, incorporation of the SFDI scattering coefficient and blood flow determined by LSI may provide an algorithm for accurate assessment of the severity of burn wounds in real time.

  18. Depth-resolved birefringence imaging of the primate retinal nerve fiber layer using polarization-sensitive OCT

    NASA Astrophysics Data System (ADS)

    Kemp, Nathaniel J.; Park, Jesung; Marsack, Jason D.; Dave, Digant P.; Parekh, Sapun H.; Milner, Thomas E.; Rylander, Henry G., III

    2002-06-01

    Imaging the optical phase retardation per unit depth (OPR/UD) in the retinal nerve fiber layer (RNFL) may aid in glaucoma diagnosis. Polarization-Sensitive Optical Coherence Tomography (PSOCT) was used to record in vivo high-resolution images of the RNFL in two cynomolgus monkeys. The depth variation in the Stokes vector of reflected light was used to calculate the OPR/UD as a function of RNFL position. OPR/UD decreased from 35°/100 μm near the optic nerve to 5°/100 μm at a location 600 μm superior to the optic nerve. Variation of OPR/UD in the RNFL with retinal position demonstrates a change in birefringence for different densities of ganglion cell axons. PSOCT may be useful for noninvasive determination of RNFL thickness and fiber density.

  19. Large Area and Depth-Profiling Dislocation Imaging and Strain Analysis in Si/SiGe/Si Heterostructures

    DTIC Science & Technology

    2014-01-01

    We demonstrate the combined use of large-area and depth-profiling dislocation imaging and strain analysis in Si/SiGe/Si heterostructures. Such heterostructures offer the combined advantage of Si semiconductor technology and band-gap engineering (Kittler et al., 1995), with the strain state accessible by high-resolution X-ray diffraction (see Characterization of Semiconductor Heterostructures and Nanostructures, C. Lamberti (Ed.), pp. 93-132).

  20. Penetration depth in tissue-mimicking phantoms from hyperspectral imaging in SWIR in transmission and reflection geometry

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Salo, Daniel; Kim, David M.; Berezin, Mikhail Y.

    2016-03-01

    We explored the depth penetration in tissue-mimicking intralipid-based phantoms in the SWIR (800-1650 nm) using a hyperspectral imaging system composed of a 2D CCD camera coupled to a microscope. Hyperspectral images in transmission and reflection geometries were collected with a spectral resolution of 5.27 nm and a total acquisition time of 3 minutes or less, which minimized artifacts from sample drying. Michelson spatial contrast was used as the metric to evaluate light penetration. Results from both transmission and reflection geometries consistently revealed the highest spatial contrast in the wavelength range of 1300 to 1350 nm.
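    Michelson spatial contrast is straightforward to compute per wavelength band. A short sketch is shown below, assuming a hyperspectral cube indexed as (rows, cols, bands) and a hypothetical target row.

```python
import numpy as np

def michelson_contrast(profile):
    """Michelson contrast C = (Imax - Imin) / (Imax + Imin) of an intensity profile."""
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    return (i_max - i_min) / (i_max + i_min)

# hypothetical use: evaluate the contrast of a line profile across the target
# at each band of a hyperspectral cube `cube` with shape (rows, cols, bands)
# contrast_vs_wavelength = [michelson_contrast(cube[row_of_target, :, b])
#                           for b in range(cube.shape[2])]
```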

  1. Enhanced depth imaging optical coherence tomography of choroidal osteoma with secondary neovascular membranes: report of two cases.

    PubMed

    Mello, Patrícia Correa de; Berensztejn, Patricia; Brasil, Oswaldo Ferreira Moura

    2016-01-01

    We report enhanced depth imaging optical coherence tomography (EDI-OCT) features based on clinical and imaging data from two newly diagnosed cases of choroidal osteoma presenting with recent visual loss secondary to choroidal neovascular membranes. The features described in the two cases, compression of the choriocapillaris and disorganization of the medium and large vessel layers, are consistent with those of previous reports. We noticed a sponge-like pattern previously reported, but it was subtle. Both lesions had multiple intralesional layers and a typical intrinsic transparency with visibility of the sclerochoroidal junction.

  2. Depth-correction algorithm that improves optical quantification of large breast lesions imaged by diffuse optical tomography

    PubMed Central

    Tavakoli, Behnoosh; Zhu, Quing

    2011-01-01

    Optical quantification of large lesions imaged with diffuse optical tomography in reflection geometry is depth dependent because of the exponential decay of photon density waves. We introduce a depth-correction method that incorporates the target depth information provided by coregistered ultrasound. It is based on balancing the weight matrix, using the maximum singular values of the target layers in depth, without changing the forward model. The performance of the method is evaluated using phantom targets and 10 clinical cases of larger malignant and benign lesions. The results for the homogeneous targets demonstrate that the location error of the reconstructed maximum absorption coefficient is reduced to the range of the reconstruction mesh size for phantom targets. Furthermore, the uniformity of the absorption distribution inside the lesions improves by about a factor of two, and the median of the absorption increases from 60 to 85% of its maximum compared with no depth correction. In addition, non-homogeneous phantoms are characterized more accurately. Clinical examples show a similar trend to the phantom results and demonstrate the utility of the correction method for improving lesion quantification. PMID:21639570
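    The balancing idea can be sketched as a per-layer rescaling of the sensitivity (weight) matrix by its maximum singular value. The snippet below is a simplified illustration with hypothetical layer indexing, not the authors' exact implementation.

```python
import numpy as np

def balance_weight_matrix(W, layer_cols):
    """Depth-compensate a DOT weight (sensitivity) matrix.

    W          : measurements x voxels sensitivity matrix.
    layer_cols : list of column-index arrays, one per depth layer spanned by the
                 target (depth range taken from the coregistered ultrasound).
    Each layer block is rescaled so its largest singular value matches that of
    the first (shallowest) target layer -- a sketch of the balancing idea only.
    """
    Wb = W.copy()
    s_ref = None
    for cols in layer_cols:
        s_max = np.linalg.svd(W[:, cols], compute_uv=False)[0]
        if s_ref is None:
            s_ref = s_max
        Wb[:, cols] *= s_ref / s_max
    return Wb
```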

  3. Enhanced 3D prestack depth imaging of broadband data from the South China Sea: a case study

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Xu, Jincheng; Li, Jinbo

    2016-08-01

    We present a case study of prestack depth imaging for data from the South China Sea using an enhanced workflow with cutting-edge technologies. In the survey area, the presence of complex geologies such as carbonate pinnacles and gas pockets creates challenges for processing and imaging: the complex geometry of the carbonates introduces 3D effects in wave propagation; deriving velocities inside the carbonates and gas pockets is difficult and laborious; and localised strong attenuation from the gas pockets may lead to absorption and dispersion problems. In the course of developing the enhanced workflow to tackle these issues, the following processing steps had the most significant impact on improving the imaging quality: (1) 3D ghost wavefield attenuation, in particular to remove the ghost energy associated with complex structures; (2) 3D surface-related multiple elimination (SRME) to remove multiples, in particular multiples related to complex carbonate structures; (3) full waveform inversion (FWI) and tomography-based velocity model building, to derive a geologically plausible velocity model for imaging; (4) Q-tomography to estimate the Q model describing the intrinsic attenuation of the subsurface media; (5) de-absorption prestack depth migration (Q-PSDM) to compensate for earth absorption and dispersion effects during imaging, especially for the area below the gas pockets. The case study with the data from the South China Sea shows that the enhanced workflow consisting of cutting-edge technologies is effective when complex geologies are present.

  4. Multi-Mode Electromagnetic Ultrasonic Lamb Wave Tomography Imaging for Variable-Depth Defects in Metal Plates

    PubMed Central

    Huang, Songling; Zhang, Yu; Wang, Shen; Zhao, Wei

    2016-01-01

    This paper proposes a new cross-hole tomography imaging (CTI) method for variable-depth defects in metal plates based on multi-mode electromagnetic ultrasonic Lamb waves (LWs). The dispersion characteristics determine that different modes of LWs are sensitive to different thicknesses of metal plates. In this work, the sensitivities to thickness variation of A0- and S0-mode LWs are theoretically studied. The principles and procedures for the cooperation of A0- and S0-mode LW CTI are proposed. Moreover, the experimental LW imaging system on an aluminum plate with a variable-depth defect is set up, based on A0- and S0-mode EMAT (electromagnetic acoustic transducer) arrays. For comparison, the traditional single-mode LW CTI method is used in the same experimental platform. The imaging results show that the computed thickness distribution by the proposed multi-mode method more accurately reflects the actual thickness variation of the defect, while neither the S0 nor the A0 single-mode method was able to distinguish thickness variation in the defect region. Moreover, the quantification of the defect’s thickness variation is more accurate with the multi-mode method. Therefore, theoretical and practical results prove that the variable-depth defect in metal plates can be successfully quantified and visualized by the proposed multi-mode electromagnetic ultrasonic LW CTI method. PMID:27144571

  5. Ultra-deep Large Binocular Camera U-band Imaging of the GOODS-North Field: Depth vs. Resolution

    NASA Astrophysics Data System (ADS)

    Ashcraft, Teresa; Windhorst, Rogier A.; Jansen, Rolf A.; Cohen, Seth H.; Grazian, Andrea; Boutsia, Konstantina; Fontana, Adriano; Giallongo, Emanuele; O'Connell, Robert W.; Paris, Diego; Rutkowski, Michael J.; Scarlata, Claudia; Testa, Vincenzo

    2017-01-01

    We present a study of the trade-off between depth and resolution using a large number of U-band images in the GOODS-North field obtained with the Large Binocular Camera (LBC) on the Large Binocular Telescope (LBT). Having acquired over 30 hours of total exposure time (315 images, each 5-6 min), we generated multiple image mosaics, starting with the subset of images with the best (FWHM < 0.8″) atmospheric seeing (~10% of the total data set). For subsequent mosaics, we added in data with larger seeing values until the final, deepest mosaic included all images with FWHM < 1.8″ (~94% of the total data set). For each mosaic, we created object catalogs to compare the optimal-resolution, yet shallower, image to the lower-resolution but deeper image, and found the number counts for both images to be ~90% complete to AB = 26 mag. In the optimal-resolution image, object counts start to drop off dramatically fainter than AB ~ 27 mag, while in the deepest image the drop is more gradual because of the better surface-brightness sensitivity (SB ~ 32 mag arcsec^-2). We conclude that for studies of brighter galaxies and features within them, the optimal-resolution image should be used; however, to fully explore and understand the faintest objects, the deeper imaging with lower resolution is also required. We also discuss how high-resolution F336W HST data complement our LBT mosaics. For 220 brighter galaxies with U < 24 mag, we find only marginal differences (< 0.07 mag in total integrated flux) between the optimal-resolution and low-resolution light profiles down to SB ~ 32 mag arcsec^-2. This helps constrain how much flux can be missed in galaxy outskirts, which is important for studies of the Extragalactic Background Light. In the future, we will expand our analysis of the GOODS-N field to ~26 hours of LBT/LBC R-band surface photometry to similar depths.

  6. New Fast Fall Detection Method Based on Spatio-Temporal Context Tracking of Head by Using Depth Images

    PubMed Central

    Yang, Lei; Ren, Yanyun; Hu, Huosheng; Tian, Bo

    2015-01-01

    In order to deal with the problem of projection that occurs in fall detection with two-dimensional (2D) grey or color images, this paper proposes a robust fall detection method based on spatio-temporal context tracking over three-dimensional (3D) depth images captured by the Kinect sensor. In the pre-processing procedure, the parameters of the Single-Gauss-Model (SGM) are estimated and the coefficients of the floor plane equation are extracted from the background images. Once a human subject appears in the scene, the silhouette is extracted by the SGM and ellipse fitting of the foreground is used to determine the head position. The dense spatio-temporal context (STC) algorithm is then applied to track the head position, and the distance from the head to the floor plane is calculated in every following frame of the depth image. When the distance is lower than an adaptive threshold, the centroid height of the human is used as a second judgment criterion to decide whether a fall has occurred. Lastly, four groups of experiments with different falling directions were performed. Experimental results show that the proposed method can detect fall incidents occurring in different orientations while requiring only low computational complexity. PMID:26378540
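    The first fall criterion reduces to a point-to-plane distance test. A minimal sketch is given below, with hypothetical threshold values and function names.

```python
import numpy as np

def head_to_floor_distance(head_xyz, plane):
    """Point-to-plane distance used as the first fall criterion.

    plane = (a, b, c, d) are the floor-plane coefficients a*x + b*y + c*z + d = 0
    extracted from the background depth images; head_xyz is the tracked 3D head
    position from the Kinect (metres, camera coordinates).
    """
    a, b, c, d = plane
    x, y, z = head_xyz
    return abs(a * x + b * y + c * z + d) / np.sqrt(a * a + b * b + c * c)

# sketch of the two-stage decision (threshold values and report_fall() are hypothetical):
# if head_to_floor_distance(head, plane) < adaptive_threshold and centroid_height < 0.4:
#     report_fall()
```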

  7. Fluorescent microthermal imaging-theory and methodology for achieving high thermal resolution images

    SciTech Connect

    Barton, D.L.; Tangyunyong, P.

    1995-09-01

    The fluorescent microthermal imaging technique (FMI) involves coating a sample surface with an inorganic-based thin film that, upon exposure to UV light, emits temperature-dependent fluorescence. FMI offers the ability to create thermal maps of integrated circuits with a thermal resolution theoretically limited to 1 m°C and a spatial resolution which is diffraction-limited to 0.3 μm. Even though the fluorescent microthermal imaging (FMI) technique has been around for more than a decade, many factors that can significantly affect the thermal image quality have not been systematically studied and characterized. After a brief review of FMI theory, we will present our recent results demonstrating for the first time three important factors that have a dramatic impact on the thermal quality and sensitivity of FMI. First, the limitations imparted by photon shot noise and improvement in the signal-to-noise ratio realized through signal averaging will be discussed. Second, ultraviolet bleaching, an unavoidable problem with FMI as it currently is performed, will be characterized to identify ways to minimize its effect. Finally, the impact of film dilution on thermal sensitivity will be discussed.

  8. Compton back scatter imaging for mild steel rebar detection and depth characterization embedded in concrete

    NASA Astrophysics Data System (ADS)

    Margret, M.; Menaka, M.; Venkatraman, B.; Chandrasekaran, S.

    2015-01-01

    A novel non-destructive Compton scattering technique is described, and its feasibility, reliability and applicability for detecting reinforcing steel bars in concrete are demonstrated. The indigenously developed prototype system presented in this paper is capable of detecting reinforcement of varied diameters embedded in concrete at depths of up to 60 mm, with the aid of a Caesium-137 (137Cs) radioactive source and a high-resolution HPGe detector. The technique can also detect inhomogeneities present in the test specimen by interpreting material density variations reflected in the count rate. The experimental results are correlated with those of established techniques such as radiography and rebar locators. The results obtained from its application to locate rebars are quite promising, and the method has also been successfully used for reinforcement mapping. It can be applied especially when the reinforcement lies beneath the concrete cover or at considerably larger depths, and where two-sided access is restricted.

  9. Increasing the imaging depth of coherent anti-Stokes Raman scattering microscopy with a miniature microscope objective

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Huff, Terry B.; Fu, Yan; Jia, Kevin Y.; Cheng, Ji-Xin

    2007-08-01

    A miniature objective lens with a tip diameter of 1.3 mm was used for extending the penetration depth of coherent anti-Stokes Raman scattering (CARS) microscopy. Its axial and lateral focal widths were determined to be 11.4 and 0.86 μm, respectively, by two-photon excitation fluorescence imaging of 200 nm beads at a 735 nm excitation wavelength. By inserting the lens tip into a soft gel sample, CARS images of 2 μm polystyrene beads 5 mm deep from the surface were acquired. The miniature objective was applied to CARS imaging of rat spinal cord white matter with a minimal requirement for surgery.

  10. Examination of Optical Depth Effects on Fluorescence Imaging of Cardiac Propagation

    PubMed Central

    Bray, Mark-Anthony; Wikswo, John P.

    2003-01-01

    Optical mapping with voltage-sensitive dyes provides a high-resolution technique to observe cardiac electrodynamic behavior. Although most studies assume that the fluorescent signal is emitted from the surface layer of cells, the effects of signal attenuation with depth on signal interpretation are still unclear. This simulation study examines the effects of a depth-weighted signal on epicardial activation patterns and filament localization. We simulated filament behavior using a detailed cardiac model, and compared the signal obtained from the top (epicardial) layer of the spatial domain with the calculated weighted signal. General observations included a prolongation of the action potential upstroke duration, early upstroke initiation, and a reduction in signal amplitude in the weighted signal. A shallow filament was found to produce a dual-humped action potential morphology consistent with previously reported observations. Simulated scroll wave breakup exhibited effects such as the false appearance of graded potentials, apparent supramaximal conduction velocities, and a spatially blurred signal with the local amplitude dependent upon the immediate subepicardial activity; the combination of these effects produced a corresponding change in the accuracy of filament localization. Our results indicate that the depth-dependent optical signal has significant consequences for the interpretation of epicardial activation dynamics. PMID:14645100

  11. Ambient molecular imaging and depth profiling of live tissue by infrared laser ablation electrospray ionization mass spectrometry.

    PubMed

    Nemes, Peter; Barton, Alexis A; Li, Yue; Vertes, Akos

    2008-06-15

    Mass spectrometry in conjunction with atmospheric pressure ionization methods enables the in vivo investigation of biochemical changes with high specificity and sensitivity. Laser ablation electrospray ionization (LAESI) is a recently introduced ambient ionization method suited for the analysis of biological samples with sufficient water content. With LAESI mass spectrometric analysis of chimeric Aphelandra squarrosa leaf tissue, we identify the metabolites characteristic of the green and yellow sectors of variegation. Significant parts of the related biosynthetic pathways (e.g., kaempferol biosynthesis) are ascertained from the detected metabolites and metabolomic databases. Scanning electron microscopy of the ablated areas indicates the feasibility of both two-dimensional imaging and depth profiling with an approximately 350 μm lateral and approximately 50 μm depth resolution. Molecular distributions of some endogenous metabolites show chemical contrast between the sectors of variegation and quantitative changes as the ablation reaches the epidermal and mesophyll layers. Our results demonstrate that LAESI mass spectrometry opens a new way for ambient molecular imaging and depth profiling of metabolites in biological tissues and live organisms.

  12. Speckle imaging of Titan at 2 microns: surface albedo, haze optical depth, and tropospheric clouds 1996-1998

    NASA Astrophysics Data System (ADS)

    Gibbard, S. G.; Macintosh, B.; Gavel, D.; Max, C. E.; de Pater, I.; Roe, H. G.; Ghez, A. M.; Young, E. F.; McKay, C. P.

    2004-06-01

    We present results from 14 nights of observations of Titan in 1996-1998 using near-infrared (centered at 2.1 microns) speckle imaging at the 10-meter W.M. Keck Telescope. The observations have a spatial resolution of 0.06 arcseconds. We detect bright clouds on three days in October 1998, with a brightness about 0.5% of the brightness of Titan. Using a 16-stream radiative transfer model (DISORT) to model the central equatorial longitude of each image, we construct a suite of surface albedo models parameterized by the optical depth of Titan's hydrocarbon haze layer. From this we conclude that Titan's equatorial surface albedo has plausible values in the range of 0-0.20. Titan's minimum haze optical depth cannot be constrained from this modeling, but an upper limit of 0.3 at this wavelength range is found. More accurate determination of Titan's surface albedo and haze optical depth, especially at higher latitudes, will require a model that fully considers the 3-dimensional nature of Titan's atmosphere.

  13. Endoscopic diagnosis of invasion depth for early colorectal carcinomas: a prospective comparative study of narrow-band imaging, acetic acid, and crystal violet.

    PubMed

    Zhang, Jing-Jing; Gu, Li-Yang; Chen, Xiao-Yu; Gao, Yun-Jie; Ge, Zhi-Zheng; Li, Xiao-Bo

    2015-02-01

    Several studies have validated the effectiveness of narrow-band imaging (NBI) in estimating the invasion depth of early colorectal cancers. However, the comparative diagnostic accuracy of NBI and chromoendoscopy remains unclear. Other than crystal violet, the use of acetic acid as a new staining method to diagnose deep submucosal invasive (SM-d) carcinomas has not been extensively evaluated. We aimed to assess the diagnostic accuracy and interobserver agreement of NBI, acetic acid enhancement, and crystal violet staining in predicting the invasion depth of early colorectal cancers. A total of 112 early colorectal cancers were prospectively observed by NBI, acetic acid, and crystal violet staining, in sequence, by 1 expert colonoscopist. All endoscopic images of each technique were stored and reassessed. Finally, 294 images of 98 lesions were selected for evaluation by 3 less experienced endoscopists. The accuracy of NBI, acetic acid, and crystal violet for real-time diagnosis was 85.7%, 86.6%, and 92.9%, respectively. For image evaluation by novices, NBI achieved the highest accuracy of 80.6%, compared with 72.4% for acetic acid and 75.8% for crystal violet. The kappa values of NBI, acetic acid, and crystal violet among the 3 trainees were 0.74 (95% CI 0.65-0.83), 0.68 (95% CI 0.59-0.77), and 0.70 (95% CI 0.61-0.79), respectively. For diagnosis of SM-d carcinoma, NBI was slightly inferior to crystal violet staining when performed by the expert endoscopist. However, NBI yielded higher accuracy than crystal violet staining when used by the less experienced endoscopists. Acetic acid enhancement with pit pattern analysis was capable of predicting SM-d carcinoma, comparably to traditional crystal violet staining.

  14. Automated wavelet denoising of photoacoustic signals for burn-depth image reconstruction

    NASA Astrophysics Data System (ADS)

    Holan, Scott H.; Viator, John A.

    2007-02-01

    Photoacoustic image reconstruction involves dozens or perhaps hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a sample with laser light are used to produce an image of the acoustic source. Each of these point measurements must undergo some signal processing, such as denoising and system deconvolution. In order to efficiently process the numerous signals acquired for photoacoustic imaging, we have developed an automated wavelet algorithm for processing signals generated in a burn injury phantom. We used the discrete wavelet transform to denoise photoacoustic signals generated in an optically turbid phantom containing whole blood. The denoising used universal, level-independent thresholding, as developed by Donoho and Johnstone. The entire signal processing technique was automated so that no user intervention was needed to reconstruct the images. The signals were backprojected using the automated wavelet processing software, and reconstruction using the denoised signals improved image quality by 21%, as measured by a relative 2-norm difference scheme.
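    A denoising step of this kind can be reproduced with a wavelet library. The sketch below applies the Donoho–Johnstone universal threshold using PyWavelets; the wavelet family, decomposition level, and soft thresholding are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
import pywt

def denoise_photoacoustic(signal, wavelet="db4", level=6):
    """Universal-threshold (Donoho-Johnstone) wavelet denoising of one photoacoustic trace."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # noise level estimated from the finest detail coefficients (MAD / 0.6745)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# e.g. clean = denoise_photoacoustic(raw_trace) before backprojection
```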

  15. 3D Pre-stack depth imaging of the Nankai Trough accretionary prism off Shikoku Island, Japan

    NASA Astrophysics Data System (ADS)

    Costa Pisani, P.; Ike, T.; Moore, G.; Reshef, M.; Bangs, N.; Gulick, S.; Shipley, T.; Kuramoto, S.

    2003-12-01

    During 1999 we acquired an 8x90 km 3D seismic dataset across the toe of the Nankai Trough accretionary prism south of Shikoku. Previous processing steps have focused on 3D pre-stack time migration of the entire survey and 2D pre-stack depth migration (PSDM) of two in-lines that cross the Leg 190/196 drill sites. In this study, we conducted 3D PSDM of the seaward half of the data set to improve structural images and to derive the velocity structure of the underthrust sedimentary section in order to better understand its 3D compaction and dewatering history. Velocities derived from pre-stack depth migration are considered to most accurately reflect actual in-situ formation velocities. Our processing procedure started with pre-stack time migration in the cross-line direction to image the data into 2D inlines, allowing us to use 2D migration velocity analysis (MVA) techniques to update the velocity field. 3D imaging of target volumes of data around the Leg 190/196 drill holes, using several distinctive reflections as depth marker horizons, provided constraints for the migration input velocity model. We then performed 2D MVA on every 5th inline (32 lines in total), using a top-down, layer-stripping technique with Residual Move Out picking to iteratively update the velocity model and flatten the Common Reflection Point (CRP) gathers. We also compared CRP gathers with image gathers in order to detect dipping events and velocity anisotropy. We then used the resulting 3D velocity field as input to a full 3D PSDM of the entire data set. The depth image clarified the accretionary prism's structure, including the numerous thrust faults, the basal décollement, and the underthrusting Shikoku Basin sedimentary unit. The thickness of the underthrust section decreases landward because of compaction. The velocity model shows that the underthrust section's velocity increases by about 20% over the first 15 km landward. Along-strike variations in velocity are generally less than about 5-10%.

  16. Achieving High Spatial Resolution Surface Plasmon Resonance Microscopy with Image Reconstruction.

    PubMed

    Yu, Hui; Shan, Xiaonan; Wang, Shaopeng; Tao, Nongjian

    2017-03-07

    Surface plasmon resonance microscopy (SPRM) is a powerful platform for biomedical imaging and molecular binding kinetics analysis. However, the spatial resolution of SPRM along the plasmon propagation direction (longitudinal) is determined by the decay length of the plasmonic wave, which can be as large as tens of microns. Different methods have been proposed to improve the spatial resolution, but each at the expense of decreased sensitivity or temporal resolution. Here we present a method to achieve high spatial resolution SPRM based on deconvolution of the complex field. The method does not require an additional optical setup and improves the spatial resolution in the longitudinal direction. We applied the method to image nanoparticles and achieved close-to-diffraction-limit resolution in both the longitudinal and transverse directions.
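    As a rough illustration of resolution recovery by deconvolution, the sketch below applies a generic 1-D Wiener filter to an intensity line profile along the propagation direction with an exponential-decay PSF model. This is a stand-in illustration only, not a reproduction of the authors' complex-field deconvolution.

```python
import numpy as np

def wiener_deconvolve(profile, psf, nsr=1e-2):
    """Generic 1-D Wiener deconvolution along the plasmon propagation direction."""
    n = len(profile)
    H = np.fft.fft(psf, n)                       # transfer function of the assumed PSF
    Y = np.fft.fft(profile)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter with constant noise-to-signal ratio
    return np.real(np.fft.ifft(X))

# e.g. sharpen a line cut through an SPR image using an exponential-decay PSF model:
# x = np.arange(256); psf = np.exp(-x / 40.0); psf /= psf.sum()
# restored = wiener_deconvolve(line_profile, psf)
```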

  17. Multimodal adaptive optics for depth-enhanced high-resolution ophthalmic imaging

    NASA Astrophysics Data System (ADS)

    Hammer, Daniel X.; Mujat, Mircea; Iftimia, Nicusor V.; Lue, Niyom; Ferguson, R. Daniel

    2010-02-01

    We developed a multimodal adaptive optics (AO) retinal imager for diagnosis of retinal diseases, including glaucoma, diabetic retinopathy (DR), age-related macular degeneration (AMD), and retinitis pigmentosa (RP). The development represents the first ever high performance AO system constructed that combines AO-corrected scanning laser ophthalmoscopy (SLO) and swept source Fourier domain optical coherence tomography (SSOCT) imaging modes in a single compact clinical prototype platform. The SSOCT channel operates at a wavelength of 1 μm for increased penetration and visualization of the choriocapillaris and choroid, sites of major disease activity for DR and wet AMD. The system is designed to operate on a broad clinical population with a dual deformable mirror (DM) configuration that allows simultaneous low- and high-order aberration correction. The system also includes a wide field line scanning ophthalmoscope (LSO) for initial screening, target identification, and global orientation; an integrated retinal tracker (RT) to stabilize the SLO, OCT, and LSO imaging fields in the presence of rotational eye motion; and a high-resolution LCD-based fixation target for presentation to the subject of stimuli and other visual cues. The system was tested in a limited number of human subjects without retinal disease for performance optimization and validation. The system was able to resolve and quantify cone photoreceptors across the macula to within ~0.5 deg (~100-150 μm) of the fovea, image and delineate ten retinal layers, and penetrate to resolve targets deep into the choroid. In addition to instrument hardware development, analysis algorithms were developed for efficient information extraction from clinical imaging sessions, with functionality including automated image registration, photoreceptor counting, strip and montage stitching, and segmentation. The system provides clinicians and researchers with high-resolution, high performance adaptive optics imaging to help

  18. Vegetation Height Estimation Near Power transmission poles Via satellite Stereo Images using 3D Depth Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high-voltage power lines is a challenging problem for electricity distribution companies. Absence of proper monitoring could result in damage to the power lines and consequently cause blackouts, affecting the electric power supply to industries, businesses, and daily life. Therefore, to avoid blackouts, it is mandatory to monitor the vegetation/trees near power transmission lines. Unfortunately, the existing approaches are time consuming and expensive. In this paper, we propose a novel approach to monitor the vegetation/trees near or under power transmission poles using satellite stereo images acquired by the Pleiades satellites. The 3D depth of the vegetation near the power transmission lines is measured using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometers. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles within the 100 km2 area. We compare the results obtained from the Pleiades satellite stereo images using dynamic programming and Graph-Cut algorithms, thereby comparing the satellite imaging sensors and depth-estimation algorithms. Our results show that the Graph-Cut algorithm performs better than dynamic programming (DP) in terms of accuracy and speed.
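    For readers who want to experiment with disparity estimation on such stereo pairs, the snippet below uses OpenCV's semi-global matcher as a readily available baseline; SGBM is related to, but not identical to, the dynamic-programming and graph-cut methods compared in the paper, and the file names and parameters are placeholders.

```python
import cv2
import numpy as np

# placeholder file names for a rectified satellite stereo pair
left = cv2.imread("pleiades_left.tif", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("pleiades_right.tif", cv2.IMREAD_GRAYSCALE)

# semi-global block matching; numDisparities must be a multiple of 16
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point values

# relative vegetation height then follows from the sensor geometry, e.g.
# height ~ disparity * ground_sampling_distance / base_to_height_ratio (satellite-specific)
```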

  19. Penetration depth of photons in biological tissues from hyperspectral imaging in shortwave infrared in transmission and reflection geometries

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Salo, Daniel; Kim, David M.; Komarov, Sergey; Tai, Yuan-Chuan; Berezin, Mikhail Y.

    2016-12-01

    Measurement of photon penetration in biological tissues is a central theme in optical imaging. A great number of endogenous tissue factors such as absorption, scattering, and anisotropy affect the path of photons in tissue, making it difficult to predict the penetration depth at different wavelengths. Traditional studies evaluating photon penetration at different wavelengths are focused on tissue spectroscopy, which does not take into account the heterogeneity within the sample. This is especially critical in the shortwave infrared, where the individual vibration-based absorption properties of the tissue molecules are affected by nearby tissue components. We have explored the depth penetration in biological tissues from 900 to 1650 nm using Monte-Carlo simulation and a hyperspectral imaging system, with Michelson spatial contrast as the metric of light penetration. Chromatic-aberration-free hyperspectral images in transmission and reflection geometries were collected with a spectral resolution of 5.27 nm and a total acquisition time of 3 min. The relatively short recording time minimized artifacts from sample drying. Results from both transmission and reflection geometries consistently revealed that the highest spatial contrast for deep tissue lies within 1300 to 1375 nm; however, in heavily pigmented tissue such as the liver, the range 1550 to 1600 nm is also prominent.

  20. Improved logarithmic phase mask to extend the depth of field of an incoherent imaging system.

    PubMed

    Zhao, Hui; Li, Qi; Feng, Huajun

    2008-06-01

    A logarithmic phase mask was proposed in 2001, and its depth-extension effect was demonstrated at that time; further research on this type of mask followed in 2004. This work can be found in two papers [Proc. SPIE 4471, 272 (2001) and Appl. Opt. 43, 2709 (2004)]. We reviewed these papers carefully and made simple modifications to the mask. The modified phase mask retains the logarithmic form, but simulation results demonstrate that it is superior to the original one.

  1. Depth-resolved imaging of colon tumor using optical coherence tomography and fluorescence laminar optical tomography

    PubMed Central

    Tang, Qinggong; Wang, Jianting; Frank, Aaron; Lin, Jonathan; Li, Zhifang; Chen, Chao-wei; Jin, Lily; Wu, Tongtong; Greenwald, Bruce D.; Mashimo, Hiroshi; Chen, Yu

    2016-01-01

    Early detection of neoplastic changes remains a critical challenge in clinical cancer diagnosis and treatment. Many cancers arise from epithelial layers such as those of the gastrointestinal (GI) tract. With current standard endoscopic technology it is difficult to detect subsurface lesions. In this research, we investigated the feasibility of a novel multi-modal optical imaging approach combining high-resolution optical coherence tomography (OCT) and high-sensitivity fluorescence laminar optical tomography (FLOT) for structural and molecular imaging. C57BL/6J-ApcMin/J mice were imaged using OCT and FLOT, and the correlated histopathological diagnosis was obtained. Quantitative structural (scattering coefficient) and molecular (relative enzyme activity) parameters were obtained from the OCT and FLOT images for multi-parametric analysis. This multi-modal imaging method demonstrated the feasibility of more accurate diagnosis, with 88.23% (82.35%) sensitivity (specificity), compared with either modality alone. This study suggests that combining OCT and FLOT is promising for subsurface cancer detection, diagnosis, and characterization. PMID:28018738

  2. Nanoscale β-nuclear magnetic resonance depth imaging of topological insulators

    PubMed Central

    Koumoulis, Dimitrios; Morris, Gerald D.; He, Liang; Kou, Xufeng; King, Danny; Wang, Dong; Hossain, Masrur D.; Wang, Kang L.; Fiete, Gregory A.; Kanatzidis, Mercouri G.; Bouchard, Louis-S.

    2015-01-01

    Considerable evidence suggests that variations in the properties of topological insulators (TIs) at the nanoscale and at interfaces can strongly affect the physics of topological materials. Therefore, a detailed understanding of surface states and interface coupling is crucial to the search for and applications of new topological phases of matter. Currently, no methods can provide depth profiling near surfaces or at interfaces of topologically inequivalent materials. Such a method could advance the study of interactions. Herein, we present a noninvasive depth-profiling technique based on β-detected NMR (β-NMR) spectroscopy of radioactive 8Li+ ions that can provide “one-dimensional imaging” in films of fixed thickness and generates nanoscale views of the electronic wavefunctions and magnetic order at topological surfaces and interfaces. By mapping the 8Li nuclear resonance near the surface and 10-nm deep into the bulk of pure and Cr-doped bismuth antimony telluride films, we provide signatures related to the TI properties and their topological nontrivial characteristics that affect the electron–nuclear hyperfine field, the metallic shift, and magnetic order. These nanoscale variations in β-NMR parameters reflect the unconventional properties of the topological materials under study, and understanding the role of heterogeneities is expected to lead to the discovery of novel phenomena involving quantum materials. PMID:26124141

  3. Burn Depth Estimation Based on Infrared Imaging of Thermally Excited Tissue

    SciTech Connect

    Dickey, F.M.; Hoswade, S.C.; Yee, M.L.

    1999-03-05

    Accurate estimation of the depth of partial-thickness burns and the early prediction of a need for surgical intervention are difficult. A non-invasive technique utilizing the difference in thermal relaxation time between burned and normal skin may be useful in this regard. In practice, a thermal camera would record the skin's response to heating or cooling by a small amount, roughly 5 °C, for a short duration. The thermal stimulus would be provided by a heat lamp, hot or cold air, or other means. Processing of the thermal transients would reveal areas that return to equilibrium at different rates, which should correspond to different burn depths. In deeper burns, the outside layer of skin is further removed from the constant-temperature region maintained through blood flow. Deeper-burned areas should thus return to equilibrium more slowly than other areas. Since the technique only records changes in the skin's temperature, it is not sensitive to room temperature, the burn's location, or the state of the patient. Preliminary results are presented for the analysis of a simulated burn, formed by applying a patch of biosynthetic wound dressing on top of normal skin tissue.
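    The processing of the thermal transients amounts to fitting a relaxation curve at every pixel. A sketch using SciPy is shown below, assuming a single-exponential return to equilibrium and hypothetical array names; longer fitted time constants would be expected over deeper burns.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, t_eq, delta_t, tau):
    """Single-exponential return to equilibrium after the thermal stimulus ends."""
    return t_eq + delta_t * np.exp(-t / tau)

def relaxation_time_map(frames, times):
    """Fit the relaxation time tau pixel-by-pixel from a thermal image stack.

    frames : array (n_frames, rows, cols) of temperatures after the stimulus ends.
    times  : array (n_frames,) of acquisition times in seconds.
    """
    _, rows, cols = frames.shape
    tau_map = np.full((rows, cols), np.nan)
    for r in range(rows):
        for c in range(cols):
            y = frames[:, r, c]
            try:
                p0 = (y[-1], y[0] - y[-1], times[-1] / 3.0)   # crude initial guess
                popt, _ = curve_fit(relaxation, times, y, p0=p0, maxfev=2000)
                tau_map[r, c] = popt[2]
            except RuntimeError:
                pass  # leave unconverged pixels as NaN
    return tau_map
```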

  4. Thermal Images of Seeds Obtained at Different Depths by Photoacoustic Microscopy (PAM)

    NASA Astrophysics Data System (ADS)

    Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.

    2015-06-01

    The objective of the present study was to obtain thermal images of a broccoli seed (Brassica oleracea) by photoacoustic microscopy at different modulation frequencies of the incident light beam (0.5, 1, 5, and 20 Hz). The thermal images obtained from the amplitude of the photoacoustic signal vary with the applied frequency: at the lowest modulation frequency, the thermal wave penetrates deeper into the sample. Likewise, the photoacoustic signal is modified according to the structural characteristics of the sample and the modulation frequency of the incident light. Different structural components could be seen by photothermal techniques, as shown in the present study.

  5. Photothermal optical coherence tomography for depth-resolved imaging of mesenchymal stem cells via single wall carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Subhash, Hrebesh M.; Connolly, Emma; Murphy, Mary; Barron, Valerie; Leahy, Martin

    2014-03-01

    The progress in stem cell research over the past decade holds promise and potential to address many unmet clinical therapeutic needs. Tracking stem cells with modern imaging modalities is critically needed for optimizing stem cell therapy, offering insight into underlying biological processes such as cell migration, engraftment, homing, differentiation, and function. In this study we report the feasibility of photothermal optical coherence tomography (PT-OCT) for imaging human mesenchymal stem cells (hMSCs) labeled with single-walled carbon nanotubes (SWNTs) for in vitro cell tracking in three-dimensional scaffolds. PT-OCT is a functional extension of conventional OCT with the extended capability of localized detection of absorbing targets against a scattering background, providing depth-resolved molecular contrast imaging. A 91 kHz line rate, spectral-domain PT-OCT system at 1310 nm was developed to detect the photothermal signal generated by an 800 nm excitation laser. In general, MSCs do not have obvious optical absorption properties and cannot be directly visualized using PT-OCT imaging. However, the optical absorption properties of hMSCs can be modified by labeling with SWNTs. Using this approach, MSCs were labeled with SWNTs and the cell distribution was imaged in a 3D polymer scaffold using PT-OCT.

  6. Single-pixel three-dimensional imaging with time-based depth resolution

    NASA Astrophysics Data System (ADS)

    Sun, Ming-Jie; Edgar, Matthew P.; Gibson, Graham M.; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J.

    2016-07-01

    Time-of-flight three-dimensional imaging is an important tool for applications such as object recognition and remote sensing. Conventional time-of-flight three-dimensional imaging systems frequently use a raster scanned laser to measure the range of each pixel in the scene sequentially. Here we show a modified time-of-flight three-dimensional imaging system, which can use compressed sensing techniques to reduce acquisition times, whilst distributing the optical illumination over the full field of view. Our system is based on a single-pixel camera using short-pulsed structured illumination and a high-speed photodiode, and is capable of reconstructing 128 × 128-pixel resolution three-dimensional scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, by using a compressive sampling strategy, we demonstrate continuous real-time three-dimensional video with a frame-rate up to 12 Hz. The simplicity of the system hardware could enable low-cost three-dimensional imaging devices for precision ranging at wavelengths beyond the visible spectrum.

  7. Achieving High Contrast for Exoplanet Imaging with a Kalman Filter and Stroke Minimization

    NASA Astrophysics Data System (ADS)

    Eldorado Riggs, A. J.; Groff, T. D.; Kasdin, N. J.; Carlotti, A.; Vanderbei, R. J.

    2014-01-01

    High contrast imaging requires focal plane wavefront control and estimation to correct aberrations in an optical system; non-common path errors prevent the use of conventional estimation with a separate wavefront sensor. The High Contrast Imaging Laboratory (HCIL) at Princeton has led the development of several techniques for focal plane wavefront control and estimation. In recent years, we developed a Kalman filter for optimal wavefront estimation. Our Kalman filter algorithm is an improvement upon DM Diversity, which requires at least two image pairs each iteration and does not utilize any prior knowledge of the system. The Kalman filter is a recursive estimator, meaning that it uses the data from prior estimates along with as few as one new image pair per iteration to update the electric field estimate. Stroke minimization has proven to be a feasible controller for achieving high contrast. While similar to a variation of Electric Field Conjugation (EFC), stroke minimization achieves the same contrast with less stroke on the DMs. We recently utilized these algorithms to achieve high contrast for the first time in our experiment at the High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory (JPL). Our HCIT experiment was also the first demonstration of symmetric dark hole correction in the image plane using two DMs--this is a major milestone for future space missions. Our ongoing work includes upgrading our optimal estimator to include an estimate of the incoherent light in the system, which allows for simultaneous estimation of the light from a planet along with starlight. The two-DM experiment at the HCIT utilized a shaped pupil coronagraph. Those tests utilized ripple-style, free-standing masks etched out of silicon, but our current work is in designing 2-D optimized reflective shaped pupils. In particular, we have created several designs for the AFTA telescope, whose pupil presents major hurdles because of its atypical pupil obstructions. Our

  8. Orientation and depth estimation for femoral components using image sensor, magnetometer and inertial sensors in THR surgeries.

    PubMed

    Jiyang Gao; Shaojie Su; Hong Chen; Zhihua Wang

    2015-08-01

    Malposition of the acetabular and femoral components has long been recognized as an important cause of dislocation after total hip replacement (THR) surgery. In order to help surgeons improve the positioning accuracy of the components, a vision-aided system for THR surgery that estimates the orientation and depth of the femoral component is proposed. The sensors are fixed inside the femoral prosthesis trial, and checkerboard patterns are printed on the internal surface of the acetabular prosthesis trial. An extended Kalman filter is designed to fuse the data from the inertial sensors and the magnetometer for orientation estimation. A novel image processing algorithm for depth estimation is developed. The algorithms were evaluated in simulations with known rotation quaternions and translation vectors, and the experimental results show that the root mean square error (RMSE) of the orientation estimate is less than 0.05 degree and the RMSE of the depth estimate is 1 mm. Finally, the femoral head is displayed in 3D graphics in real time to help surgeons with component positioning.
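    One simple way to obtain pose and depth from a known checkerboard in a single camera image is a PnP solve; the sketch below uses OpenCV for illustration, with hypothetical pattern size, square size, and camera intrinsics, and is not the paper's own algorithm.

```python
import cv2
import numpy as np

PATTERN = (7, 5)          # inner corner count of the printed checkerboard (assumed)
SQUARE_MM = 1.0           # square size in mm (assumed)
K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])  # hypothetical intrinsics
DIST = np.zeros(5)        # assume negligible lens distortion

# 3D board coordinates of the inner corners in the board frame
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

def board_depth(gray):
    """Return camera-to-checkerboard depth (mm), or None if the board is not found."""
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, DIST)
    return float(tvec[2]) if ok else None
```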

  9. Enhanced contrast and depth resolution in polarization imaging using elliptically polarized light

    NASA Astrophysics Data System (ADS)

    Sridhar, Susmita; Da Silva, Anabela

    2016-07-01

    Polarization gating is a popular and widely used technique in biomedical optics to sense superficial tissues (colinear detection), deeper volumes (crosslinear detection), and also selectively probe subsuperficial volumes (using elliptically polarized light). As opposed to the conventional linearly polarized illumination, we propose a new protocol of polarization gating that combines coelliptical and counter-elliptical measurements to selectively enhance the contrast of the images. This new method of eliminating multiple-scattered components from the images shows that it is possible to retrieve a greater signal and a better contrast for subsurface structures. In vivo experiments were performed on skin abnormalities of volunteers to confirm the results of the subtraction method and access subsurface information.
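
    The subtraction protocol itself is not spelled out in the abstract; the fragment below is a minimal sketch of the underlying idea, assuming the co- and counter-elliptical images have already been acquired and co-registered. The function name and the simple clipping step are illustrative choices, not the authors' exact processing.

```python
import numpy as np

def polarization_gated(co_elliptical, counter_elliptical):
    """Subtract the counter-elliptical channel from the co-elliptical one.

    Multiply scattered light contributes to both channels and largely
    cancels in the difference, leaving the weakly scattered component
    that carries subsurface contrast. Inputs are float images of the
    same shape.
    """
    gated = co_elliptical - counter_elliptical
    return np.clip(gated, 0.0, None)          # negative residuals treated as noise
```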

  10. Depth-resolved imaging of functional activation in the rat cerebral cortex using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Aguirre, A. D.; Chen, Y.; Fujimoto, J. G.; Ruvinskaya, L.; Devor, A.; Boas, D. A.

    2006-12-01

    Co-registered optical coherence tomography (OCT) and video microscopy of the rat somatosensory cortex were acquired simultaneously through a thinned skull during forepaw electrical stimulation. Fractional signal change measured by OCT revealed a functional signal time course corresponding to the hemodynamic signal measurement made with video microscopy. OCT can provide high-resolution, cross-sectional images of functional neurovascular activation and may offer a new tool for basic neuroscience research in the important rat cerebral cortex model.

  11. Depth-resolved optical imaging of hemodynamic response in mouse brain with microcirculatory beds

    NASA Astrophysics Data System (ADS)

    Jia, Yali; Nettleton, Rosemary; Rosenberg, Mara; Boudreau, Eilis; Wang, Ruikang K.

    2011-03-01

    Optical hemodynamic imaging with high spatial and temporal resolution is important in pre-clinical studies for unveiling the functional activities of the brain and the mechanisms of internal or external stimulus effects in diverse pathological conditions and treatments. Most current optical systems either resolve hemodynamic changes only within superficial macrocirculatory beds (e.g., laser speckle contrast imaging) or provide only vascular structural information within microcirculatory beds (e.g., multi-photon microscopy). In this study, we introduce a hemodynamic imaging system based on Optical Micro-angiography (OMAG) which is capable of resolving and quantifying 3D dynamic blood perfusion down to the microcirculatory level. This system measures the optical phase shifts caused by moving blood cells in the microcirculation. Here, the utility of OMAG was demonstrated by monitoring the hemodynamic response to alcohol administration in the mouse prefrontal cortex. Our preliminary results suggest that spatiotemporal tracking of cerebral micro-hemodynamics using OMAG can be successfully applied to the mouse brain and reliably distinguishes between vehicle and alcohol stimulation experiments.

  12. Multi-angle lensless digital holography for depth resolved imaging on a chip

    PubMed Central

    Su, Ting-Wei; Isikman, Serhan O.; Bishara, Waheb; Tseng, Derek; Erlinger, Anthony; Ozcan, Aydogan

    2010-01-01

    A multi-angle lensfree holographic imaging platform that can accurately characterize both the axial and lateral positions of cells located within multi-layered micro-channels is introduced. In this platform, lensfree digital holograms of the micro-objects on the chip are recorded at different illumination angles using partially coherent illumination. These digital holograms shift laterally on the sensor plane as the illumination angle of the source is tilted. Since the exact amount of this lateral shift of each object hologram can be calculated with an accuracy that beats the diffraction limit of light, the height of each cell above the substrate can be determined over a large field of view without the use of any lenses. We demonstrate the proof of concept of this multi-angle lensless imaging platform by using light-emitting diodes to characterize microparticles of various sizes located on a chip with sub-micron axial and lateral localization over an ~60 mm2 field of view. Furthermore, we successfully apply this lensless imaging approach to simultaneously characterize blood samples located in multi-layered micro-channels in terms of the counts, individual thicknesses and volumes of the cells at each layer. Because this platform does not require any lenses, lasers or other bulky optical/mechanical components, it provides a compact and high-throughput alternative to conventional approaches for cytometry and diagnostics applications involving lab-on-a-chip systems. PMID:20588819
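
    The height retrieval rests on simple shadow geometry: tilting the illumination by an angle θ shifts the hologram of an object sitting a height h above the sensor by roughly h·tan(θ). The sketch below encodes that relation; it ignores refraction in the channel layers, which the actual platform accounts for, and the function name and units are illustrative.

```python
import numpy as np

def object_height_um(shift_px, pixel_pitch_um, tilt_deg):
    """Estimate object height above the sensor from the lateral shift of
    its hologram between normal and tilted illumination.

    shift_px       : measured hologram shift (pixels)
    pixel_pitch_um : sensor pixel pitch (micrometres)
    tilt_deg       : illumination tilt angle (degrees)
    Assumes the simple geometry dx = h * tan(theta), with no refraction.
    """
    dx_um = shift_px * pixel_pitch_um
    return dx_um / np.tan(np.radians(tilt_deg))

# e.g. a 10-pixel shift on a 2.2 um pitch sensor at a 30 degree tilt
# corresponds to a height of roughly 38 micrometres.
```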

  13. Changes in choroidal thickness after prophylactic iridectomy in primary angle closure suspect eyes using enhanced depth imaging optical coherence tomography

    PubMed Central

    Wang, Wei; Zhou, Minwen; Huang, Wenbin; Gao, Xinbo; Zhang, Xiulan

    2015-01-01

    Purpose: The aim of the present study was to evaluate the effect of surgical peripheral iridectomy (SPI) on choroidal thickness in primary angle-closure suspect (PACS) eyes. Materials and Methods: This was a prospective observational case series of 30 subjects with PACS. Ocular biometry was performed before SPI (baseline) and then 1 week later. The choroid was imaged by enhanced depth imaging optical coherence tomography (EDI-OCT). Choroidal thickness was determined in the subfoveal area and at 1 and 3 mm diameters around the fovea. Central anterior chamber depth (ACD), lens thickness (LT), vitreous chamber depth (VCD), and axial length (AL) were measured by A-scan ultrasound. Parameters were compared before SPI (baseline) and 1 week later. Results: Thirty eyes of 30 patients with a mean age of 61.53 ± 7.98 years were studied. There was no significant difference in choroidal thickness at any macular location before and after SPI (all P > 0.05). Mean subfoveal choroidal thickness was 279.61 ± 65.50 μm before and 274.54 ± 63.36 μm after SPI (P = 0.308). There was also no significant change in central ACD, LT, VCD, and AL after SPI (all P > 0.05). Conclusions: SPI does not appear to alter choroidal thickness in PACS eyes, as assessed using EDI-OCT. Long-term follow-up of PACS eyes treated with SPI may provide further insight into the effects of this treatment modality on the choroid. PMID:26654999

  14. Direct Depth- and Lateral- Imaging of Nanoscale Magnets Generated by Ion Impact

    PubMed Central

    Röder, Falk; Hlawacek, Gregor; Wintz, Sebastian; Hübner, René; Bischoff, Lothar; Lichte, Hannes; Potzger, Kay; Lindner, Jürgen; Fassbender, Jürgen; Bali, Rantej

    2015-01-01

    Nanomagnets form the building blocks for a variety of spin-transport, spin-wave and data storage devices. In this work we generated nanoscale magnets by exploiting the phenomenon of disorder-induced ferromagnetism; disorder was induced locally on a chemically ordered, initially non-ferromagnetic, Fe60Al40 precursor film using  nm diameter beam of Ne+ ions at 25 keV energy. The beam of energetic ions randomized the atomic arrangement locally, leading to the formation of ferromagnetism in the ion-affected regime. The interaction of a penetrating ion with host atoms is known to be spatially inhomogeneous, raising questions on the magnetic homogeneity of nanostructures caused by ion-induced collision cascades. Direct holographic observations of the flux-lines emergent from the disorder-induced magnetic nanostructures were made in order to measure the depth- and lateral- magnetization variation at ferromagnetic/non-ferromagnetic interfaces. Our results suggest that high-resolution nanomagnets of practically any desired 2-dimensional geometry can be directly written onto selected alloy thin films using a nano-focussed ion-beam stylus, thus enabling the rapid prototyping and testing of novel magnetization configurations for their magneto-coupling and spin-wave properties. PMID:26584789

  15. High-resolution three-dimensional images from confocal scanning laser microscopy. Quantitative study and mathematical correction of the effects from bleaching and fluorescence attenuation in depth.

    PubMed

    Rigaut, J P; Vassy, J

    1991-08-01

    Three-dimensional images can be assembled by piling up consecutive confocal fluorescent images obtained by confocal scanning laser microscopy. The present work was based on three-dimensional (50-microns-deep) images at high (x, y) resolution obtained with an MRC-500 after en bloc staining of thick slices of rat liver by chromomycin A3 for nuclear DNA. The results of studies on bleaching, fluorescence excitation and emission intensities at various depths of histologic preparations are described. These effects could be evaluated separately by acquiring piled-up ("brick-stepping") and non-piled-up ("side-stepping") (x, y) images at consecutive depths and also (x, z) images. Empirical equations allowed the fitting of experimental plots of bleaching versus time, at different laser intensities and at different depths, and of fluorescence emission intensity versus depth. The main conclusions were that under our experimental conditions: (1) there was no attenuation by depth of the fluorochrome penetration, (2) there was no attenuation of the exciting beam intensity up to at least 50 microns deep, (3) there was an attenuation of the fluorescence emission intensity by depth, (4) bleaching happened equally on all planes above and below any confocal plane being studied, and (5) the fluorescence bleaching half-life was independent of depth. A mathematical correction scheme designed to compensate for bleaching and for attenuation of fluorescence emission in depth is presented. This correction is required for obtaining three-dimensional images of better quality, for optimal three-dimensional image segmentation and for any quantitative analysis based upon voxel-discretized emission intensities (gray levels)--e.g., estimating, by confocal image cytometry, textural chromatin parameters and nuclear DNA amounts.
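
    The paper derives its correction from empirically fitted curves; the sketch below shows only the generic form of the depth-attenuation compensation, multiplying each optical section by the inverse of an assumed exponential decay. The attenuation coefficient and function name are placeholders for values that would come from the fitted intensity-versus-depth plot.

```python
import numpy as np

def correct_depth_attenuation(stack, dz_um, alpha_per_um):
    """Compensate exponential attenuation of fluorescence emission with depth.

    stack        : 3-D confocal series indexed (z, y, x)
    dz_um        : axial step between sections (micrometres)
    alpha_per_um : attenuation coefficient fitted from the measured
                   intensity-versus-depth curve (assumed here)
    """
    z = np.arange(stack.shape[0]) * dz_um
    gain = np.exp(alpha_per_um * z)           # inverse of the exp(-alpha*z) decay
    return stack * gain[:, None, None]
```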

  16. Optical full-depth refocusing of 3-D objects based on subdivided-elemental images and local periodic δ-functions in integral imaging.

    PubMed

    Ai, Ling-Yu; Dong, Xiao-Bin; Jang, Jae-Young; Kim, Eun-Soo

    2016-05-16

    We propose a new approach for optical refocusing of three-dimensional (3-D) objects at their real depth, without a pickup-range limitation, based on subdivided-elemental image arrays (sub-EIAs) and local periodic δ-function arrays (L-PDFAs). The EIA captured from 3-D objects located outside the pickup range is divided into a number of sub-EIAs depending on the object distance from the lens array. Then, by convolving these sub-EIAs with an L-PDFA whose spatial period corresponds to the specific object's depth and whose size matches that of the sub-EIA, arrays of spatially-filtered sub-EIAs (SF-sub-EIAs) for each object depth can be uniquely extracted. From these arrays of SF-sub-EIAs, 3-D objects can be optically reconstructed and refocused at their real depth. The operational principle of the proposed method is analyzed based on ray optics. In addition, to confirm the feasibility of the proposed method in practical applications, experiments with test objects are carried out and the results are comparatively discussed with those of the conventional method.

  17. Developments in molecular SIMS depth profiling and 3D imaging of biological systems using polyatomic primary ions.

    PubMed

    Fletcher, John S; Lockyer, Nicholas P; Vickerman, John C

    2011-01-01

    In principle, mass spectral imaging has enormous potential for discovery applications in biology. The chemical specificity of mass spectrometry, combined with the spatial analysis capabilities of liquid metal cluster beams and the high yields of polyatomic ion beams, should present an unprecedented ability to spatially locate molecular chemistry in the 100 nm range. However, although metal cluster ion beams have greatly increased yields in the m/z range up to 1000, they still have to be operated under the static limit, and even in the most favorable cases maximum yields for molecular species from 1 µm pixels are frequently below 20 counts. Some very impressive molecular imaging analyses have nevertheless been accomplished under these conditions. Yet although molecular ions of lipids have been detected and correlated with the biology, signal levels are such that lateral resolution must be sacrificed to provide sufficient signal to image; useful detection below 1 µm is almost impossible. Too few ions are generated! The review shows that the application of polyatomic primary ions, with their low damage cross-sections, offers hope of a new approach to molecular SIMS imaging: by accessing voxels rather than pixels, the dynamic signal range in 2D imaging is increased and the analysis can be extended to depth profiling and 3D imaging. Recent data on cell and tissue analysis suggest that, in consequence, a wider chemistry might be accessible within a sub-micron area and as a function of depth. However, these advances are compromised by the pulsed nature of current ToF-SIMS instruments: the duty cycle is very low, resulting in excessive analysis times, and maximum mass resolution is incompatible with maximum spatial resolution. New instrumental directions are described that enable a dc primary beam to be used, which promises to take full advantage of all the capabilities of the polyatomic ion beam. Some new

  18. Imaging widespread seismicity at mid-lower crustal depths beneath Long Beach, CA, with a dense seismic array: Evidence for a depth-dependent earthquake size distribution

    NASA Astrophysics Data System (ADS)

    Inbal, Asaf; Clayton, Robert W.; Ampuero, Jean-Paul

    2015-08-01

    We use a dense seismic array composed of 5200 vertical geophones to monitor microseismicity in Long Beach, California. Poor signal-to-noise ratio due to anthropogenic activity is mitigated via downward-continuation of the recorded wavefield. The downward-continued data are continuously back projected to search for coherent arrivals from sources beneath the array, which reveals numerous, previously undetected events. The spatial distribution of seismicity is uncorrelated with the mapped fault traces, or with activity in the nearby oil-fields. Many events are located at depths larger than 20 km, well below the commonly accepted seismogenic depth for that area. The seismicity exhibits temporal clustering consistent with Omori's law, and its size distribution obeys the Gutenberg-Richter relation above 20 km but falls off exponentially at larger depths. The dense array allows detection of earthquakes two magnitude units smaller than the permanent seismic network in the area. Because the event size distribution above 20 km depth obeys a power law whose exponent is near one, this improvement yields a hundred-fold decrease in the time needed for effective characterization of seismicity in Long Beach.

  19. Nanoscopy—imaging life at the nanoscale: a Nobel Prize achievement with a bright future

    NASA Astrophysics Data System (ADS)

    Blom, Hans; Bates, Mark

    2015-10-01

    A grand scientific prize was awarded last year to three pioneering scientists for their discovery and development of molecular ‘ON-OFF’ switching which, when combined with optical imaging, can be used to see the previously invisible with light microscopy. The Royal Swedish Academy of Sciences announced its decision on October 8th and explained that this achievement—rooted in physics and applied in biology and medicine—was awarded the Nobel Prize in Chemistry for controlling fluorescent molecules to create images of specimens smaller than anything previously observed with light. The story of how this noble switch in optical microscopy was achieved and how it was engineered to visualize life at the nanoscale is highlighted in this invited comment.

  20. On evaluation of depth accuracy in consumer depth sensors

    NASA Astrophysics Data System (ADS)

    Abd Aziz, Azim Zaliha; Wei, Hong; Ferryman, James

    2015-12-01

    This paper presents an experimental study of different depth sensors. The aim is to answer the question of whether these sensors give accurate data for general depth image analysis. The study examines the depth accuracy of three popularly used depth sensors: the ASUS Xtion Pro Live, the Kinect for Xbox 360 and the Kinect for Windows v2. The main focus is the stability of pixels in depth images captured at several different sensor-object distances, assessed by measuring the depth returned by the sensors within specified time intervals. The experimental results show that the fluctuation (mm) of randomly selected pixels within the target area increases with increasing distance to the sensor, especially for the Kinect for Xbox 360 and the ASUS Xtion Pro Live. Both of these sensors exhibit pixel fluctuations between 20 mm and 30 mm at sensor-object distances beyond 1500 mm. However, the pixel stability of the Kinect for Windows v2 is not much affected by the distance between the sensor and the object: the maximum fluctuation of all its selected pixels is approximately 5 mm at sensor-object distances between 800 mm and 3000 mm, within which the best stability is achieved.
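
    The abstract reports pixel "fluctuation" in millimetres without defining the statistic; a peak-to-peak range over the recording is one plausible reading, and the snippet below computes it for a set of selected pixels. The function name and the choice of statistic (range rather than, say, standard deviation) are assumptions.

```python
import numpy as np

def pixel_fluctuation_mm(depth_frames, pixels):
    """Temporal fluctuation of selected depth pixels.

    depth_frames : array (n_frames, height, width) of depth values in mm
    pixels       : list of (row, col) coordinates inside the target area
    Returns the peak-to-peak range (mm) of each pixel over the recording.
    """
    series = np.stack([depth_frames[:, r, c] for r, c in pixels], axis=1)
    return series.max(axis=0) - series.min(axis=0)
```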

  1. Magnetic Resonance Imaging (MRI) Analysis of Fibroid Location in Women Achieving Pregnancy After Uterine Artery Embolization

    SciTech Connect

    Walker, Woodruff J.; Bratby, Mark John

    2007-09-15

    The purpose of this study was to evaluate the fibroid morphology in a cohort of women achieving pregnancy following treatment with uterine artery embolization (UAE) for symptomatic uterine fibroids. A retrospective review of magnetic resonance imaging (MRI) of the uterus was performed to assess pre-embolization fibroid morphology. Data were collected on fibroid size, type, and number and included analysis of follow-up imaging to assess response. There have been 67 pregnancies in 51 women, with 40 live births. Intramural fibroids were seen in 62.7% of the women (32/48). Of these the fibroids were multiple in 16. A further 12 women had submucosal fibroids, with equal numbers of types 1 and 2. Two of these women had coexistent intramural fibroids. In six women the fibroids could not be individually delineated and formed a complex mass. All subtypes of fibroid were represented in those subgroups of women achieving a live birth versus those who did not. These results demonstrate that the location of uterine fibroids did not adversely affect subsequent pregnancy in the patient population investigated. Although this is only a small qualitative study, it does suggest that all types of fibroids treated with UAE have the potential for future fertility.

  2. Depth-image-based rendering (DIBR), compression, and transmission for a new approach on 3D-TV

    NASA Astrophysics Data System (ADS)

    Fehn, Christoph

    2004-05-01

    This paper presents details of a system that allows for an evolutionary introduction of depth perception into the existing 2D digital TV framework. The work is part of the European Information Society Technologies (IST) project "Advanced Three-Dimensional Television System Technologies" (ATTEST), an activity in which industries, research centers and universities have joined forces to design a backwards-compatible, flexible and modular broadcast 3D-TV system. At the very heart of the described new concept is the generation and distribution of a novel data representation format, which consists of monoscopic color video and associated per-pixel depth information. From these data, one or more "virtual" views of a real-world scene can be synthesized in real time at the receiver side (i.e. a 3D-TV set-top box) by means of so-called depth-image-based rendering (DIBR) techniques. This publication provides: (1) a detailed description of the fundamentals of this new approach to 3D-TV; (2) a comparison with the classical approach of "stereoscopic" video; (3) a short introduction to DIBR techniques in general; (4) the development of a specific DIBR algorithm that can be used for the efficient generation of high-quality "virtual" stereoscopic views; (5) a number of implementation details that are specific to the current state of the development; and (6) research on the backwards-compatible compression and transmission of 3D imagery using state-of-the-art MPEG (Moving Pictures Expert Group) tools.
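
    At its core, DIBR warps each pixel of the reference view horizontally by a disparity derived from its depth. The sketch below is a deliberately naive version for rectified, purely horizontal camera shifts; it leaves disocclusion holes unfilled and omits the depth pre-processing discussed in the paper. The function name and parameters are illustrative.

```python
import numpy as np

def render_virtual_view(color, depth_m, baseline_m, focal_px):
    """Warp a color image to a horizontally shifted virtual viewpoint.

    color      : (H, W, 3) reference image
    depth_m    : (H, W) per-pixel depth in metres (assumed > 0)
    baseline_m : horizontal translation of the virtual camera
    focal_px   : focal length in pixels
    Disocclusion holes are left as zeros; a real system in-paints them.
    """
    h, w, _ = color.shape
    out = np.zeros_like(color)
    disparity = np.round(baseline_m * focal_px / depth_m).astype(int)
    cols = np.arange(w)
    for y in range(h):
        order = np.argsort(depth_m[y])[::-1]          # warp far-to-near so
        x_new = cols[order] + disparity[y, order]     # near pixels win overlaps
        valid = (x_new >= 0) & (x_new < w)
        out[y, x_new[valid]] = color[y, cols[order][valid]]
    return out
```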

  3. Linear dispersion relation and depth sensitivity to swell parameters: application to synthetic aperture radar imaging and bathymetry.

    PubMed

    Boccia, Valentina; Renga, Alfredo; Rufino, Giancarlo; D'Errico, Marco; Moccia, Antonio; Aragno, Cesare; Zoffoli, Simona

    2015-01-01

    Long gravity waves, or swell, dominating the sea surface are known to be very useful for estimating seabed morphology in coastal areas. The paper reviews the main phenomena related to swell wave propagation that allow seabed morphology to be sensed. The linear dispersion relation is analysed and an error budget model is developed to assess the achievable depth accuracy when Synthetic Aperture Radar (SAR) data are used. The relevant issues and potentials of swell-based bathymetry by SAR are identified and discussed. This technique is of particular interest for characteristic regions of the Mediterranean Sea, such as gulfs and relatively closed areas, where traditional SAR-based bathymetric techniques, relying on strong tidal currents, are of limited practical utility.

  4. Linear Dispersion Relation and Depth Sensitivity to Swell Parameters: Application to Synthetic Aperture Radar Imaging and Bathymetry

    PubMed Central

    Boccia, Valentina; Renga, Alfredo; Rufino, Giancarlo; D'Errico, Marco; Moccia, Antonio; Aragno, Cesare; Zoffoli, Simona

    2015-01-01

    Long gravity waves, or swell, dominating the sea surface are known to be very useful for estimating seabed morphology in coastal areas. The paper reviews the main phenomena related to swell wave propagation that allow seabed morphology to be sensed. The linear dispersion relation is analysed and an error budget model is developed to assess the achievable depth accuracy when Synthetic Aperture Radar (SAR) data are used. The relevant issues and potentials of swell-based bathymetry by SAR are identified and discussed. This technique is of particular interest for characteristic regions of the Mediterranean Sea, such as gulfs and relatively closed areas, where traditional SAR-based bathymetric techniques, relying on strong tidal currents, are of limited practical utility. PMID:25789333
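
    Both records above rest on the standard linear (Airy) dispersion relation ω² = g·k·tanh(k·h), which links the swell's wavenumber and frequency to the local depth h. The function below inverts that relation for depth given a wavelength and period estimated, for example, from a SAR image spectrum; the interface and the deep-water cut-off handling are illustrative, not the paper's error-budget implementation.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_swell(wavelength_m, period_s):
    """Invert w^2 = g*k*tanh(k*h) for the water depth h (metres).

    Returns numpy.inf when w^2 >= g*k, i.e. the swell is effectively in
    deep water and carries no usable depth information.
    """
    k = 2.0 * np.pi / wavelength_m
    w = 2.0 * np.pi / period_s
    ratio = w**2 / (G * k)
    if ratio >= 1.0:
        return np.inf
    return np.arctanh(ratio) / k

# e.g. a 200 m swell with a 12 s period implies a depth of about 45 m.
```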

  5. Effect of Uveal Melanocytes on Choroidal Morphology in Rhesus Macaques and Humans on Enhanced-Depth Imaging Optical Coherence Tomography

    PubMed Central

    Yiu, Glenn; Vuong, Vivian S.; Oltjen, Sharon; Cunefare, David; Farsiu, Sina; Garzel, Laura; Roberts, Jeffrey; Thomasy, Sara M.

    2016-01-01

    Purpose To compare cross-sectional choroidal morphology in rhesus macaque and human eyes using enhanced-depth imaging optical coherence tomography (EDI-OCT) and histologic analysis. Methods Enhanced-depth imaging–OCT images from 25 rhesus macaque and 30 human eyes were evaluated for choriocapillaris and choroidal–scleral junction (CSJ) visibility in the central macula based on OCT reflectivity profiles, and compared with age-matched histologic sections. Semiautomated segmentation of the choriocapillaris and CSJ was used to measure choriocapillary and choroidal thickness, respectively. Multivariate regression was performed to determine the association of age, refractive error, and race with choriocapillaris and CSJ visibility. Results Rhesus macaques exhibit a distinct hyporeflective choriocapillaris layer on EDI-OCT, while the CSJ cannot be visualized. In contrast, humans show variable reflectivities of the choriocapillaris, with a distinct CSJ seen in many subjects. Histologic sections demonstrate large, darkly pigmented melanocytes that are densely distributed in the macaque choroid, while melanocytes in humans are smaller, less pigmented, and variably distributed. Optical coherence tomography reflectivity patterns of the choroid appear to correspond to the density, size, and pigmentation of choroidal melanocytes. Mean choriocapillary thickness was similar between the two species (19.3 ± 3.4 vs. 19.8 ± 3.4 μm, P = 0.615), but choroidal thickness may be lower in macaques than in humans (191.2 ± 43.0 vs. 266.8 ± 78.0 μm, P < 0.001). Racial differences in uveal pigmentation also appear to affect the visibility of the choriocapillaris and CSJ on EDI-OCT. Conclusions Pigmented uveal melanocytes affect choroidal morphology on EDI-OCT in rhesus macaque and human eyes. Racial differences in pigmentation may affect choriocapillaris and CSJ visibility, and may influence the accuracy of choroidal thickness measurements. PMID:27792810

  6. Achieving the image interpolation algorithm on the FPGA platform based on ImpulseC

    NASA Astrophysics Data System (ADS)

    Jia, Ge; Peng, Xianrong

    2013-10-01

    ImpulseC is based on the C language and can describe highly parallel, multi-process applications; it also generates an underlying hardware description for each dedicated process. To improve on the well-known bi-cubic interpolation algorithm, we design bi-cubic convolution template algorithms with better computing performance and higher efficiency. Simulation results show that the interpolation method not only improves interpolation accuracy and image quality, but also better retains the texture of the image. Using the ImpulseC hardware design tools, we can make use of the compiler features to further parallelize the algorithm so that it is better suited to hardware implementation. On a Xilinx Spartan-3 XC3S4000 chip, our method achieves real-time interpolation at a rate of 50 fps. The FPGA experimental results show that the stream of interpolated output images is robust and real-time, and that the allocation of hardware resources is reasonable. Compared with existing hand-written HDL code, the approach has the advantage of parallel speedup. Our method provides software engineers with a novel C-to-FPGA route to embedded hardware systems.
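
    For context, standard bi-cubic interpolation evaluates a separable cubic convolution kernel over the four nearest samples in each direction; Keys' kernel with a = -0.5 is the common software reference that hardware convolution templates approximate. The snippet below is that textbook form, not the authors' improved FPGA template; the names and the 1-D interface are illustrative.

```python
def cubic_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel; a = -0.5 is the usual choice."""
    x = abs(x)
    if x <= 1.0:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2.0:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interpolate_1d(samples, t):
    """Cubic interpolation of a 1-D signal at fractional position t.

    Bi-cubic image interpolation applies this separably along rows and
    then columns, i.e. sixteen weighted samples per output pixel.
    """
    i, frac = int(t), t - int(t)
    value = 0.0
    for m in range(-1, 3):                             # neighbours i-1 .. i+2
        idx = min(max(i + m, 0), len(samples) - 1)     # clamp at the borders
        value += samples[idx] * cubic_kernel(m - frac, a=-0.5)
    return value
```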

  7. Seismic imaging of the Waltham Canyon fault, California: comparison of ray‐theoretical and Fresnel volume prestack depth migration

    USGS Publications Warehouse

    Bauer, Klaus; Ryberg, Trond; Fuis, Gary S.; Lüth, Stefan

    2013-01-01

    Near-vertical faults can be imaged using reflected refractions identified in controlled-source seismic data. Often these phases are observed on only a few neighboring shot or receiver gathers, resulting in a low-fold data set. Imaging can be carried out with Kirchhoff prestack depth migration, in which migration noise is suppressed by constructive stacking of large amounts of multifold data. Fresnel volume migration can be used for low-fold data without severe migration noise, as the smearing along isochrones is limited to the first Fresnel zone around the reflection point. We developed a modified Fresnel volume migration technique to enhance imaging of steep faults and to suppress noise and undesired coherent phases. The modifications include target-oriented filters to separate reflected refractions from steep-dipping faults and reflections with hyperbolic moveout. Undesired phases like multiple reflections, mode conversions, direct P and S waves, and surface waves are suppressed by these filters. As an alternative approach, we developed a new prestack line-drawing migration method, which can be considered a proxy for an infinite-frequency approximation of the Fresnel volume migration. The line-drawing migration does not consider waveform information but requires significantly shorter computational time. Target-oriented filters were extended by dip filters in the line-drawing migration method. The migration methods were tested with synthetic data and applied to real data from the Waltham Canyon fault, California. The two techniques are best applied in combination, to design filters and to generate complementary images of steep faults.

  8. Visualizing the Subsurface of Soft Matter: Simultaneous Topographical Imaging, Depth Modulation, and Compositional Mapping with Triple Frequency Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Solares, Santiago; Ebeling, Daniel; Eslami, Babak

    2014-03-01

    Characterization of subsurface morphology and mechanical properties with nanoscale resolution and depth control is of significant interest in soft matter fields like biology and polymer science, where buried structural and compositional features can be important. However, controllably "feeling" the subsurface is a challenging task for which the available imaging tools are relatively limited. This presentation describes a trimodal atomic force microscopy (AFM) imaging scheme, whereby three eigenmodes of the microcantilever probe are used as separate control "knobs" to simultaneously measure the topography, modulate sample indentation by the tip during tip-sample impact, and map compositional contrast, respectively. This method is illustrated through computational simulation and experiments conducted on ultrathin polymer films with embedded glass nanoparticles. By actively increasing the tip-sample indentation using a higher eigenmode of the cantilever, one is able to gradually and controllably reveal glass nanoparticles that are buried tens of nanometers deep under the surface, while still being able to refocus on the surface. The authors gratefully acknowledge support from the U.S. Department of Energy (conceptual method development and experimental work, award DESC-0008115) and the U.S. National Science Foundation (computational work, award CMMI-0841840).

  9. Depth Map Restoration From Undersampled Data.

    PubMed

    Mandal, Srimanta; Bhavsar, Arnav; Sao, Anil Kumar

    2017-01-01

    Depth maps sensed by low-cost active sensors are often limited in resolution, whereas depth information obtained from structure-from-motion or sparse depth scanning techniques may result in a sparse point cloud. Achieving a high-resolution (HR) depth map from a low-resolution (LR) depth map and densely reconstructing a sparse, non-uniformly sampled depth map are fundamentally similar problems with different types of upsampling requirements: the first involves upsampling on a uniform grid, whereas the second requires upsampling on a non-uniform grid. In this paper, we propose a new approach that addresses both issues in a unified framework based on sparse representation. Unlike most depth map restoration approaches, ours does not require an HR intensity image. Based on example depth maps, sub-dictionaries of exemplars are constructed and used to restore the HR/dense depth map. In the case of uniform upsampling of an LR depth map, an edge-preserving constraint is used to preserve the discontinuities present in the depth map, and a pyramidal reconstruction strategy is applied to deal with higher upsampling factors. For upsampling of a non-uniformly sampled sparse depth map, we compute the missing information in local patches from similar exemplars. Furthermore, we also suggest an alternative method for reconstructing a dense depth map from very sparse, non-uniformly sampled depth data by sequentially cascading the uniform and non-uniform upsampling techniques. We provide a variety of qualitative and quantitative results to demonstrate the efficacy of our approach for depth map restoration.

  10. Coupling sky images with three-dimensional radiative transfer models: a new method to estimate cloud optical depth

    NASA Astrophysics Data System (ADS)

    Mejia, F. A.; Kurtz, B.; Murray, K.; Hinkelman, L. M.; Sengupta, M.; Xie, Y.; Kleissl, J.

    2015-10-01

    A method for retrieving cloud optical depth (τc) using a ground-based sky imager (USI) is presented. The Radiance Red-Blue Ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a 3-D Radiative Transfer Model (3DRTM). From these images the basic parameters affecting the radiance and RBR of a pixel are identified as the solar zenith angle (θ0), τc, solar pixel angle/scattering angle (ϑs), and pixel zenith angle/view angle (ϑz). The effects of these parameters are described, and the functions for radiance, Iλ(τc, θ0, ϑs, ϑz), and the red-blue ratio, RBR(τc, θ0, ϑs, ϑz), are retrieved from the 3DRTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc: RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured Iλmeas(ϑs, ϑz), in addition to RBRmeas(ϑs, ϑz), to obtain a unique solution for τc. The RRBR method is applied to images taken by a USI at the Oklahoma Atmospheric Radiation Measurement program (ARM) site over the course of 220 days and validated against measurements from a microwave radiometer (MWR), output from the Min method for overcast skies, and τc retrieved via Beer's law from direct normal irradiance (DNI) measurements. A τc RMSE of 5.6 between the Min method and the USI is observed. The MWR and USI have an RMSE of 2.3, which is well within the uncertainty of the MWR. An RMSE of 0.95 between the USI and the DNI-retrieved τc is observed. The procedure developed here provides a foundation to test and develop other cloud detection algorithms.
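
    The essence of the RRBR retrieval is that RBR alone maps ambiguously to τc, so the measured radiance is used to break the tie against the 3DRTM look-up tables. The sketch below shows that selection step for a single pixel, assuming the look-up table has already been interpolated to the pixel's solar, view, and scattering geometry; the error weighting by the table spread is an assumption, not the paper's exact cost function.

```python
import numpy as np

def retrieve_tau(rbr_meas, rad_meas, tau_grid, rbr_lut, rad_lut):
    """Pick the cloud optical depth whose modelled RBR and radiance best
    match a pixel's measurements.

    tau_grid, rbr_lut, rad_lut : 1-D arrays from the radiative transfer
    model, already interpolated to the pixel's geometry. The radiance
    term resolves the two-branch ambiguity of RBR versus tau.
    """
    err = ((rbr_lut - rbr_meas) / rbr_lut.std())**2 \
        + ((rad_lut - rad_meas) / rad_lut.std())**2
    return tau_grid[np.argmin(err)]
```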

  11. Deep neural ensemble for retinal vessel segmentation in fundus images towards achieving label-free angiography.

    PubMed

    Lahiri, A; Roy, Abhijit Guha; Sheet, Debdoot; Biswas, Prabir Kumar

    2016-08-01

    Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which manifest variations in the dimensions of their physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained denoising stacked autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble, formed by different network architectures, ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in the learned dictionary of visual kernels for vessel segmentation. A softmax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.

  12. Dual-Element Transducer with Phase-Inversion for Wide Depth of Field in High-Frequency Ultrasound Imaging

    PubMed Central

    Jeong, Jong Seob

    2014-01-01

    In high-frequency ultrasound imaging (HFUI), the quality of focusing is deeply related to the length of the depth of field (DOF). In this paper, a phase-inversion technique implemented by a dual-element transducer is proposed to enlarge the DOF. The performance of the proposed method was numerically demonstrated using the ultrasound simulation program Field-II. The simulated dual-element transducer was composed of a disc-type and an annular-type element, and its aperture was concavely shaped to have a confocal point at 6 mm. The area of each element was identical in order to provide the same intensity at the focal point. The outer diameters of the inner and outer elements were 2.1 mm and 3 mm, respectively. The center frequency of each element was 40 MHz and the f-number (focal depth/aperture size) was two. When two input signals with 0° and 180° phases were applied to the inner and outer elements simultaneously, a multi-focal zone was generated in the axial direction. The total −6 dB DOF, i.e., the sum of the two −6 dB DOFs in the near- and far-field lobes, was 40% longer than that of a conventional single-element transducer. The signal-to-noise ratio (SNR) was increased by about two times, especially in the far field. Point and cyst phantom simulations were conducted and their results were identical to those of the beam pattern simulation. Thus, the proposed scheme may be a potential method to improve the DOF and SNR in HFUI. PMID:25098208

  13. Guided Lamb waves and L-SAFT processing technique for enhanced detection and imaging of corrosion defects in plates with small depth-to-wavelength ratio.

    PubMed

    Sicard, René; Chahbaz, Ahmad; Goyette, Jacques

    2004-10-01

    The Lamb synthetic aperture focusing technique (L-SAFT) imaging algorithm in the Fourier domain is used to produce Lamb wave images in plates while accounting for the waves' dispersive properties. This artificial focusing technique produces easy-to-interpret, modified B-scan type images of Lamb wave inspection results. The high sensitivity of Lamb waves combined with the L-SAFT algorithm allows one to detect and image corrosion defects with a small depth-to-wavelength ratio. This paper briefly presents the L-SAFT algorithm formulated for Lamb waves and, in more detail, some experimental results obtained on simulated and real corrosion pits, demonstrating the benefit of combining L-SAFT with pulse-echo Lamb wave inspection. The obtained images of the real corrosion defects showed detection of pits with a depth-to-wavelength ratio of approximately 2/11.

  14. Validation of snow depth reconstruction from lapse-rate webcam images against terrestrial laser scanner measurements in the central Pyrenees

    NASA Astrophysics Data System (ADS)

    Revuelto, Jesús; Jonas, Tobias; López-Moreno, Juan Ignacio

    2015-04-01

    Snow distribution in mountain areas plays a key role in many processes, such as runoff dynamics, ecological cycles and erosion rates. Nevertheless, the acquisition of high-resolution snow depth (SD) data in space and time is a complex task that requires remote sensing techniques such as Terrestrial Laser Scanning (TLS). Such techniques demand intensive field work to obtain high-quality observations of snowpack evolution during a specific period. Combining TLS data with other remote sensing techniques (satellite images, photogrammetry…) and in-situ measurements could improve the available information on a variable subject to rapid topographic change. The aim of this study is to reconstruct daily SD distribution from lapse-rate images from a webcam and from two to three TLS acquisitions during the snow melting periods of 2012, 2013 and 2014. This information is obtained at the Izas Experimental Catchment in the central Spanish Pyrenees, a catchment of 33 ha with elevations ranging from 2050 to 2350 m a.s.l. The lapse-rate images provide the Snow Covered Area (SCA) evolution at the study site, while TLS provides high-resolution information on SD distribution. Using ground control points, the lapse-rate images are georectified and their information is rasterized onto a 1-meter resolution Digital Elevation Model. Subsequently, for each snow season, the Melt-Out Date (MOD) of each pixel is obtained. The reconstruction adds the estimated SD loss for each time step (day) in a distributed manner, starting the reconstruction for each grid cell at the MOD (note the reverse time evolution). To do so, the reconstruction has been previously adjusted in time and space as follows. Firstly, the degree-day factor (SD loss/positive average temperatures) is calculated from the information measured at an automatic weather station (AWS) located in the catchment. Afterwards, comparing the SD loss at the AWS during a specific time period (i.e. between two TLS

  15. Determination of hydrogen diffusion coefficients in F82H by hydrogen depth profiling with a tritium imaging plate technique

    SciTech Connect

    Higaki, M.; Otsuka, T.; Hashizume, K.; Tokunaga, K.; Ezato, K.; Suzuki, S.; Enoeda, M.; Akiba, M.

    2015-03-15

    Hydrogen diffusion coefficients in a reduced activation ferritic/martensitic steel (F82H) and an oxide dispersion strengthened F82H (ODS-F82H) have been determined from depth profiles of plasma-loaded hydrogen with a tritium imaging plate technique (TIPT) in the temperature range from 298 K to 523 K. Data on hydrogen diffusion coefficients, D, in F82H are summarized as D [m^2 s^-1] = 1.1 × 10^-7 exp(-16 [kJ mol^-1]/RT). The present data indicate almost no trapping effect on hydrogen diffusion, due to an excess entry of energetic hydrogen by the plasma loading, which results in saturation of the trapping sites at the surface and even in the bulk. In the case of ODS-F82H, the hydrogen diffusion coefficients are summarized as D [m^2 s^-1] = 2.2 × 10^-7 exp(-30 [kJ mol^-1]/RT), indicating a remarkable trapping effect on hydrogen diffusion caused by tiny oxide particles (Y2O3) in the bulk of F82H. Such oxide particles introduced in the bulk may play an effective role not only in enhancing mechanical strength but also in suppressing hydrogen penetration by plasma loading.
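
    The two Arrhenius fits quoted above can be evaluated directly; the helper below just codes the form D = D0·exp(-Ea/RT) with the pre-exponential factors and activation energies from the record (converted to SI units). The function name and the example temperature are illustrative.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def diffusion_coefficient(temp_k, d0_m2_s, ea_j_mol):
    """Arrhenius form D = D0 * exp(-Ea / (R*T))."""
    return d0_m2_s * np.exp(-ea_j_mol / (R * temp_k))

# Fits quoted in the record (activation energies converted to J/mol):
d_f82h     = diffusion_coefficient(473.0, 1.1e-7, 16e3)   # F82H at 473 K
d_ods_f82h = diffusion_coefficient(473.0, 2.2e-7, 30e3)   # ODS-F82H at 473 K
```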

  16. Client-Side Image Maps: Achieving Accessibility and Section 508 Compliance

    ERIC Educational Resources Information Center

    Beasley, William; Jarvis, Moana

    2004-01-01

    Image maps are a means of making a picture "clickable", so that different portions of the image can be hyperlinked to different URLs. There are two basic types of image maps: server-side and client-side. Besides requiring access to a CGI on the server, server-side image maps are undesirable from the standpoint of accessibility--creating…

  17. Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect

    SciTech Connect

    Frary, R.; Louie, J.; Pullammanappallil, S.; Eisses, A.

    2016-08-01

    Roxanna Frary, John N. Louie, Sathish Pullammanappallil, Amy Eisses, 2011, Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect: presented at American Geophysical Union Fall Meeting, San Francisco, Dec. 5-9, abstract T13G-07.

  18. Femininity, Masculinity, and Body Image Issues among College-Age Women: An In-Depth and Written Interview Study of the Mind-Body Dichotomy

    ERIC Educational Resources Information Center

    Leavy, Patricia; Gnong, Andrea; Ross, Lauren Sardi

    2009-01-01

    In this article we investigate college-age women's body image issues in the context of dominant femininity and its polarization of the mind and body. We use original data collected through seven in-depth interviews and 32 qualitative written interviews with college-age women and men. We coded the data thematically applying feminist approaches to…

  19. Operational Retrieval of aerosol optical depth over Indian subcontinent and Indian Ocean using INSAT-3D/Imager product validation

    NASA Astrophysics Data System (ADS)

    Mishra, M. K.; Rastogi, G.; Chauhan, P.

    2014-11-01

    Aerosol optical depth (AOD) over the Indian subcontinent and Indian Ocean region is derived operationally for the first time from geostationary earth orbit (GEO) satellite INSAT-3D Imager data at the 0.65 μm wavelength. A single visible channel algorithm based on clear-sky composites gives larger AOD retrieval errors than multi-channel algorithms due to errors in estimating surface reflectance and atmospheric properties. However, since the MIR channel signal is insensitive to the presence of most aerosols, the retrieval algorithm in the present study employs both visible (centred at 0.65 μm) and mid-infrared (MIR, centred at 3.9 μm) band measurements, allowing aerosol transport to be monitored at higher temporal resolution. Comparisons between INSAT-3D derived AOD (τI) and MODIS derived AOD (τM), co-located in space (at 1° resolution) and time during January, February and March (JFM) 2014, encompass 1165, 1052 and 900 pixels, respectively. Good agreement is found between τI and τM during JFM 2014, with linear correlation coefficients (R) of 0.87, 0.81 and 0.76, respectively. The extensive validation during JFM 2014 encompasses 215 AOD values co-located in space and time from INSAT-3D (τI) and 10 sun photometers (τA), comprising 9 AERONET (Aerosol Robotic Network) sites and 1 handheld sun-photometer site. The INSAT-3D derived AOD, τI, is found to lie within the retrieval errors τI = ±0.07 ± 0.15τA, with a linear correlation coefficient (R) of 0.90 and a root mean square error (RMSE) of 0.06. The present work shows that INSAT-3D aerosol products can be used quantitatively in many applications, with caution for possible residual cloud, snow/ice, and water contamination.

  20. Evaluation of choroidal thickness via enhanced depth-imaging optical coherence tomography in patients with systemic hypertension

    PubMed Central

    Gök, Mustafa; Karabaş, V Levent; Emre, Ender; Akşar, Arzu Toruk; Aslan, Mehmet Ş; Ural, Dilek

    2015-01-01

    Purpose: The purpose was to evaluate choroidal thickness via spectral domain optical coherence tomography (SD-OCT) and to compare the data with those of 24-h blood pressure monitoring, elastic features of the aorta, and left ventricle systolic functions, in patients with systemic hypertension. Materials and Methods: This was a case-control, cross-sectional prospective study. A total of 116 patients with systemic hypertension, and 116 healthy controls over 45 years of age, were included. Subfoveal choroidal thickness (SFCT) was measured using a Heidelberg SD-OCT platform operating in the enhanced depth imaging mode. Patients were also subjected to 24-h ambulatory blood pressure monitoring (ABPM) and standard transthoracic echocardiography (STTE). Patients were divided into dippers and nondippers using ABPM data and those with or without left ventricular hypertrophy (LVH+ and LVH-) based on STTE data. The elastic parameters of the aorta, thus aortic strain (AoS), the beta index (BI), aortic distensibility (AoD), and the left ventricular mass index (LVMI), were calculated from STTE data. Results: No significant difference in SFCT was evident between patients and controls (P ≤ 0.611). However, a significant negative correlation was evident between age and SFCT in both groups (r = −0.66/−0.56, P ≤ 0.00). No significant SFCT difference was evident between the dipper and nondipper groups (P ≤ 0.67), or the LVH (+) and LVH (-) groups (P ≤ 0.84). No significant correlation was evident between SFCT and any of AoS, BI, AoD, or LVMI. Discussion: The choroid is affected by atrophic changes associated with aging. Even in the presence of comorbid risk factors including LVH and arterial stiffness, systemic hypertension did not affect SFCT. PMID:25971169

  1. Enhanced depth imaging is less suited than indocyanine green angiography for close monitoring of primary stromal choroiditis: a pilot report.

    PubMed

    Balci, Ozlem; Gasc, Amel; Jeannin, Bruno; Herbort, Carl P

    2016-08-02

    The purpose of this study is to investigate the performance, utility, and precision of enhanced depth imaging optical coherence tomography (EDI-OCT) versus indocyanine green angiography (ICGA) in tracking fluctuations in the activity of stromal choroiditis in response to therapeutic interventions during long-term follow-up. Patients with a diagnosis of Vogt-Koyanagi-Harada (VKH) disease or birdshot retinochoroiditis (BRC), with untreated initial disease and long-term follow-up including both ICGA and EDI-OCT, were recruited at the Centre for Ophthalmic Specialised Care, Lausanne, Switzerland. Angiographic signs were quantified according to established dual fluorescein angiography (FA) and ICGA scoring systems for uveitis. Changes in ICGA score and EDI choroidal thickness in response to therapeutic intervention were assessed. In the four eyes analysed (2 BRC and 2 VKH), mean EDI-OCT choroidal thickness decreased from 672 ± 101 µm at presentation to 358.5 ± 44.5 µm over a mean of 26.5 months, i.e. the time taken to stabilize the disease. Mean ICGA scores decreased from 28 ± 4.2 at presentation to 5 ± 7 at stabilization. Only ICGA was sufficiently sensitive and reactive to detect disease recurrences and the efficacy, or absence of effect, of successive treatment changes; these were detected in seven instances during follow-up and were not recorded by EDI-OCT. This pilot study showed that ICGA was the more sensitive methodology, promptly identifying evolving subclinical and occult choroidal disease and flagging occult recurrences and/or therapeutic responses that were otherwise missed by EDI-OCT. Although choroidal thickness was proportional to the treatment course, demonstrating a linear decrease, these changes were too sluggish to be relied upon for close follow-up and timely adjustment of therapy.

  2. Exploring the effects of landscape structure on aerosol optical depth (AOD) patterns using GIS and HJ-1B images.

    PubMed

    Ye, Luping; Fang, Linchuan; Tan, Wenfeng; Wang, Yunqiang; Huang, Yu

    2016-02-01

    A GIS approach and HJ-1B images were employed to determine the effect of landscape structure on aerosol optical depth (AOD) patterns. Landscape metrics, fractal analysis and contribution analysis were proposed to quantitatively illustrate the impact of land use on AOD patterns. The high correlation between the mean AOD and landscape metrics indicates that both the landscape composition and spatial structure affect the AOD pattern. Additionally, the fractal analysis demonstrated that the densities of built-up areas and bare land decreased from the high AOD centers to the outer boundary, but those of water and forest increased. These results reveal that the built-up area is the main positive contributor to air pollution, followed by bare land. Although bare land had a high AOD, it made a limited contribution to regional air pollution due to its small spatial extent. The contribution analysis further elucidated that built-up areas and bare land can increase air pollution more strongly in spring than in autumn, whereas forest and water have a completely opposite effect. Based on fractal and contribution analyses, the different effects of cropland are ascribed to the greater vegetation coverage from farming activity in spring than in autumn. The opposite effect of cropland on air pollution reveals that green coverage and human activity also influence AOD patterns. Given that serious concerns have been raised regarding the effects of built-up areas, bare land and agricultural air pollutant emissions, this study will add fundamental knowledge of the understanding of the key factors influencing urban air quality.

  3. Diurnal variation of aerosol optical depth and angstrom exponent from Geostationary Ocean Color Imager (GOCI) Yonsei AErosol Retrieval (YAER) algorithm

    NASA Astrophysics Data System (ADS)

    Choi, Myungje; Kim, Jhoon; Lee, Jaehwa

    2015-04-01

    Over East Asia, aerosol optical properties (AOPs) can change quickly and in diverse ways during a single day because mineral dust or heavy anthropogenic aerosol events occur sporadically and frequently. When a severe aerosol event originates in a source region, long-range transport can appear over East Asia within one day, so multi-temporal satellite observation during the day is essential to detect aerosol diurnal variation in East Asia. Although this has been possible with previous meteorological sensors in geostationary earth orbit, only aerosol optical depth (AOD) at one channel can be retrieved, and the accuracy of the retrieved AOD is worse than that of multi-channel sensors such as MODIS, SeaWiFS, or VIIRS because appropriate aerosol model selection is difficult using single-channel information. The Geostationary Ocean Color Imager (GOCI) is a sensor onboard the COMS geostationary satellite. It has 8 channels in the visible, similar to the SeaWiFS and MODIS ocean color channels, and observes East Asia, including eastern China, the Korean Peninsula, and Japan, hourly during the daytime (8 observations per day). Because of these geostationary and multi-channel characteristics, accurate AOPs such as AOD and Angstrom exponent (AE) can be retrieved by the GOCI Yonsei AErosol Retrieval (YAER) algorithm at high spatial (6 km x 6 km) and temporal (1 hour) resolution. In this study, GOCI YAER AOD and AE are compared with those from AERONET (ground-based observation) and from the MODIS Collection 6 Dark Target and Deep Blue algorithms (satellite-based observation) as high-frequency time series over single and multiple days at AERONET sites, showing the accuracy of the GOCI YAER algorithm relative to AERONET. In specific transport cases such as dust or haze, instantaneous increases of AOD and changes of aerosol size inferred from AE can also be detected by GOCI. These GOCI YAER products can be used effectively as input observations for air-quality monitoring and forecasting.

  4. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg² IN FIVE FILTERS

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.; Green, Richard; Bian, Fuyan; Strauss, Michael A.; Buck, Zoë; Annis, James; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-07-01

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2 arcsec diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1 arcsec in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  5. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-Optimized Co-adds Over 300 deg² in Five Filters

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan; McGreer, Ian D.; Strauss, Michael A.; Annis, James; Buck, Zoë; Green, Richard; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-06-25

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2 arcsec diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1 arcsec in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
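
    The co-addition step described in both records weights each registered single-epoch frame by its seeing, transparency, and background noise before stacking. The sketch below uses a common point-source S/N-motivated weight (transparency² / (FWHM²·σ²)) as an illustrative stand-in for the Stripe 82 pipeline's exact weighting, and ignores the per-pixel masking of defects and transients that a real pipeline applies.

```python
import numpy as np

def coadd(frames, seeing_fwhm, transparency, sky_sigma):
    """Weighted co-addition of aligned, photometrically scaled frames.

    frames       : array (n_epochs, H, W)
    seeing_fwhm  : per-epoch PSF FWHM (arcsec)
    transparency : per-epoch relative sky transparency (1 = photometric)
    sky_sigma    : per-epoch background noise
    Returns the co-added science image and its weight image.
    """
    w = transparency**2 / (seeing_fwhm**2 * sky_sigma**2)   # per-epoch weights
    w = w[:, None, None]
    weight_image = np.broadcast_to(w, frames.shape).sum(axis=0)
    science = (frames * w).sum(axis=0) / weight_image
    return science, weight_image
```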

  6. HgCdTe Detectors for Space and Science Imaging: General Issues and Latest Achievements

    NASA Astrophysics Data System (ADS)

    Gravrand, O.; Rothman, J.; Cervera, C.; Baier, N.; Lobre, C.; Zanatta, J. P.; Boulade, O.; Moreau, V.; Fieque, B.

    2016-09-01

    HgCdTe (MCT) is a very versatile material system for infrared (IR) detection, suitable for high-performance detection in a wide range of applications and spectral ranges. Indeed, the ability to tailor the cutoff frequency as closely as possible to the needs makes it a perfect candidate for high-performance detection. Moreover, the high-quality material available today, grown either by molecular beam epitaxy or by liquid phase epitaxy, allows for very low dark currents at low temperatures, suitable for low-flux detection applications such as science imaging. MCT has also demonstrated robustness to the aggressive environment of space and is therefore in large demand for space applications. A satellite may stare at the Earth, in which case detection usually involves many photons, a high-flux scenario. Alternatively, a satellite may stare at outer space for science purposes, in which case the detected photon number is very low, leading to low-flux scenarios. This latter case places very strong constraints on the detector: low dark current, low noise, and (very) large focal plane arrays. The classical structure used to fulfill those requirements is the p/n MCT photodiode. This type of structure has been investigated in depth in our laboratory for different spectral bands, in collaboration with the CEA Astrophysics lab. However, another alternative with low excess noise may also be investigated: MCT n/p avalanche photodiodes (APDs). This paper reviews the latest achievements obtained on this matter at DEFIR (the common laboratory of LETI and Sofradir), from short-wave infrared (SWIR) band detection for classical astronomical needs, to the long-wave infrared (LWIR) band for exoplanet transit spectroscopy, up to very long wave infrared (VLWIR) bands. The different available diode architectures (n/p VHg or p/n, or even APDs) are reviewed, as are the different available ROIC architectures for low-flux detection.

  7. Depth cube display using depth map

    NASA Astrophysics Data System (ADS)

    Jung, Jung-Hun; Song, Byoung-Sub; Min, Sung-Wook

    2011-03-01

    We propose a Depth Cube Display (DCD) method that uses a depth map. The proposed system consists of two parts: a projection part, composed of a projector that generates the image and a twisted nematic liquid crystal display (TN-LCD) acting as a polarization-modulating device that assigns the proper depth; and a display part, composed of an air-spaced stack of selective scattering polarizers that scatter the incident light according to its polarization. The image from the projector, whose depth is set as it passes through the TN-LCD displaying the depth map, propagates into the stack of selective scattering polarizers, where the three-dimensional image is formed. The polarization axes of the polarizers are set to 0°, 45°, and 90° in sequence, so each incident ray is scattered by a different polarizer depending on its polarization. If a ray's polarization lies between those of two polarizers, it is scattered by both, and its image is formed in the gap between them. The proposed method has a simpler structure and is easier to implement than the previous DCD method.

  8. Image reconstruction for PET/CT scanners: past achievements and future challenges

    PubMed Central

    Tong, Shan; Alessio, Adam M; Kinahan, Paul E

    2011-01-01

    PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831

  9. Functional imaging using the retinal function imager: direct imaging of blood velocity, achieving fluorescein angiography-like images without any contrast agent, qualitative oximetry, and functional metabolic signals.

    PubMed

    Izhaky, David; Nelson, Darin A; Burgansky-Eliash, Zvia; Grinvald, Amiram

    2009-07-01

    The Retinal Function Imager (RFI; Optical Imaging, Rehovot, Israel) is a unique, noninvasive multiparameter functional imaging instrument that directly measures hemodynamic parameters such as retinal blood-flow velocity, oximetric state, and metabolic responses to photic activation. In addition, it allows capillary perfusion mapping without any contrast agent. These parameters of retinal function are degraded by retinal abnormalities. This review delineates the development of these parameters and demonstrates their clinical applicability for noninvasive detection of retinal function in several modalities. The results suggest multiple clinical applications for early diagnosis of retinal diseases and possible critical guidance of their treatment.

  10. Effect of cataract surgery on subfoveal choroidal and ganglion cell complex thicknesses measured by enhanced depth imaging optical coherence tomography

    PubMed Central

    Celik, Erkan; Cakır, Burcin; Turkoglu, Elif Betul; Doğan, Emine; Alagoz, Gursoy

    2016-01-01

    Purpose We aimed to evaluate the effect of cataract surgery on subfoveal choroidal thickness (CT) and ganglion cell complex (GCC) thickness, as measured by enhanced depth imaging optical coherence tomography (OCT). Methods This prospective study included 30 eyes of 30 patients who had undergone uneventful phacoemulsification surgery for senile cataract and had no previous ocular surgery or other ocular abnormality. Best-corrected visual acuity, slit-lamp biomicroscopy, intraocular pressure, axial length, and central corneal thickness were measured preoperatively. The operative times (OTs) and effective phaco times were also recorded in each case. OCT measurements were performed at the preoperative visit and 1 month after cataract surgery. The study of CT and GCC thickness changes was the primary objective, but central macular thickness (CMT) and peripapillary retinal nerve fiber layer (RNFL) thickness were also obtained by OCT. Results The mean subfoveal CT was 294.4±39.2 μm preoperatively and 301.4±39.9 μm postoperatively (P<0.001). The mean GCC thickness was 85.0±4.4 μm preoperatively and 89.2±5.3 μm postoperatively (P<0.001). The mean CMT was 247.9±17.6 μm preoperatively and 249.0±17.8 μm postoperatively (P=0.029). The mean RNFL thickness was 97.4±5.4 μm preoperatively and 101.7±5.6 μm postoperatively (P<0.001). Regression analysis showed that age, sex, axial length, central corneal thickness, operative time, and effective phaco time were not associated with CT changes (P=0.834, P=0.129, P=0.203, P=0.343, P=0.547, and P=0.147, respectively) or GCC thickness changes (P=0.645, P=0.542, P=0.152, P=0.664, P=0.448, and P=0.268, respectively) after cataract surgery. Conclusion Our results indicate that subfoveal CT, CMT, RNFL, and GCC thicknesses are all slightly affected after uneventful phacoemulsification surgery. After cataract surgery, examiners should consider obtaining new baseline measurements. PMID:27843286

  11. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel, real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, so the burden on physicians' disease-finding efforts is heavy. In fact, since the CE camera sensor has a limited forward-looking view and a low frame rate (typically 2 frames per second), and captures very close-range images of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail. This paper presents a novel concept for real-time CE video stabilization and display. Instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many of the problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, are presented. In addition, non-rigid panoramic image registration methods are discussed.

  12. Nuclear imaging of the breast: translating achievements in instrumentation into clinical use.

    PubMed

    Hruska, Carrie B; O'Connor, Michael K

    2013-05-01

    Approaches to imaging the breast with nuclear medicine and∕or molecular imaging methods have been under investigation since the late 1980s when a technique called scintimammography was first introduced. This review charts the progress of nuclear imaging of the breast over the last 20 years, covering the development of newer techniques such as breast specific gamma imaging, molecular breast imaging, and positron emission mammography. Key issues critical to the adoption of these technologies in the clinical environment are discussed, including the current status of clinical studies, the efforts at reducing the radiation dose from procedures associated with these technologies, and the relevant radiopharmaceuticals that are available or under development. The necessary steps required to move these technologies from bench to bedside are also discussed.

  13. Nuclear imaging of the breast: Translating achievements in instrumentation into clinical use

    PubMed Central

    Hruska, Carrie B.; O'Connor, Michael K.

    2013-01-01

    Approaches to imaging the breast with nuclear medicine and/or molecular imaging methods have been under investigation since the late 1980s when a technique called scintimammography was first introduced. This review charts the progress of nuclear imaging of the breast over the last 20 years, covering the development of newer techniques such as breast specific gamma imaging, molecular breast imaging, and positron emission mammography. Key issues critical to the adoption of these technologies in the clinical environment are discussed, including the current status of clinical studies, the efforts at reducing the radiation dose from procedures associated with these technologies, and the relevant radiopharmaceuticals that are available or under development. The necessary steps required to move these technologies from bench to bedside are also discussed. PMID:23635248

  14. Thermal Coherence Tomography: Depth-Resolved Imaging in Parabolic Diffusion-Wave Fields Using the Thermal-Wave Radar

    NASA Astrophysics Data System (ADS)

    Tabatabaei, N.; Mandelis, A.

    2012-11-01

    Energy transport in diffusion-wave fields is gradient driven and therefore diffuse, yielding depth-integrated responses with poor axial resolution. Using matched filter principles, a methodology is proposed enabling these parabolic diffusion-wave energy fields to exhibit energy localization akin to propagating hyperbolic wave fields. This not only improves the axial resolution, but also allows for deconvolution of individual responses of superposed axially discrete sources, opening a new field of depth-resolved subsurface thermal coherence tomography using diffusion waves. The depth-resolved nature of the developed methodology is verified through experiments carried out on phantoms and biological samples. The results suggest that thermal coherence tomography can resolve deep structural changes in hard dental and bone tissues, allowing for remote detection of early dental caries and potentially early osteoporosis.

  15. Extended depth-of-field 3D endoscopy with synthetic aperture integral imaging using an electrically tunable focal-length liquid-crystal lens.

    PubMed

    Wang, Yu-Jen; Shen, Xin; Lin, Yi-Hsin; Javidi, Bahram

    2015-08-01

    Conventional synthetic-aperture integral imaging uses a lens array to sense the three-dimensional (3D) object or scene, which can then be reconstructed digitally or optically. However, integral imaging generally suffers from a fixed and limited range of depth of field (DOF). In this Letter, we experimentally demonstrate 3D integral-imaging endoscopy with a tunable DOF by using a single large-aperture, focal-length-tunable liquid crystal (LC) lens. The proposed system can provide high spatial resolution and an extended DOF in a synthetic-aperture integral-imaging 3D endoscope. In our experiments, the image plane in the integral-imaging pickup process can be tuned continuously from 18 to 38 mm using the large-aperture LC lens, and the total DOF is extended from 12 to 51 mm. To the best of our knowledge, this is the first report of synthetic-aperture integral-imaging 3D endoscopy with a large-aperture LC lens that provides high-spatial-resolution 3D imaging with an extended DOF.

  16. Investigating Pre-Service Candidates' Images of Mathematical Reasoning: An In-Depth Online Analysis of Common Core Mathematics Standards

    ERIC Educational Resources Information Center

    Davis, C. E.; Osler, James E.

    2013-01-01

    This paper details the outcomes of a qualitative in-depth investigation into teacher education mathematics preparation. This research is grounded in the notion that mathematics teacher education students (as "degree seeking candidates") need to develop strong foundations of mathematical practice as defined by the Common Core State…

  17. High-depth-resolution 3-dimensional radar-imaging system based on a few-cycle W-band photonic millimeter-wave pulse generator.

    PubMed

    Tseng, Tzu-Fang; Wun, Jhih-Min; Chen, Wei; Peng, Sui-Wei; Shi, Jin-Wei; Sun, Chi-Kuang

    2013-06-17

    We demonstrate that a near-single-cycle photonic millimeter-wave short-pulse generator at W-band is capable of providing high-spatial-resolution three-dimensional (3-D) radar imaging. A preliminary study indicates that 3-D radar images with a state-of-the-art ranging resolution of around 1.2 cm at the W-band can be achieved.
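    As a rough consistency check (not a calculation from the paper), the usual pulse-bandwidth-limited ranging relation indicates the bandwidth such a few-cycle W-band pulse must span to reach centimeter-scale resolution:

```latex
\Delta R = \frac{c}{2B}
\quad\Rightarrow\quad
B = \frac{c}{2\,\Delta R}
  = \frac{3\times 10^{8}\ \mathrm{m\,s^{-1}}}{2 \times 0.012\ \mathrm{m}}
  \approx 12.5\ \mathrm{GHz}
```

    That is, a bandwidth of roughly 12.5 GHz is consistent with ~1.2 cm range resolution, which a few-cycle pulse centered in the W-band (75-110 GHz) can plausibly supply.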

  18. Improving depth maps with limited user input

    NASA Astrophysics Data System (ADS)

    Vandewalle, Patrick; Klein Gunnewiek, René; Varekamp, Chris

    2010-02-01

    A rapidly growing number of productions from the entertainment industry are aimed at 3D movie theaters. These productions use a two-view format, primarily intended for eyewear-assisted viewing in a well-defined environment. To bring this 3D content into the home environment, where a large variety of 3D viewing conditions exists (e.g., different display sizes, display types, and viewing distances), we need a flexible 3D format that can adjust the depth effect. This can be provided by the image-plus-depth format, in which a video frame is enriched with depth information for all pixels in the frame. This format can be extended with additional layers, such as an occlusion layer or a transparency layer. The occlusion layer contains information on the data behind objects and is also referred to as occluded video. The transparency layer, on the other hand, contains information on the opacity of the foreground layer. This allows rendering of semi-transparencies such as haze, smoke, and windows, as well as transitions from foreground to background. These additional layers are only beneficial if the quality of the depth information is high. High-quality depth information can currently only be achieved with user assistance. In this paper, we discuss an interactive method for depth map enhancement that allows adjustments during propagation over time. Furthermore, we elaborate on the automatic generation of the transparency layer, using the depth maps generated with an interactive depth map generation tool.

  19. Achieving high-resolution in flat-panel imagers for digital radiography

    NASA Astrophysics Data System (ADS)

    Rahn, Jeffrey T.; Lemmi, Francesco; Lu, Jeng-Ping; Mei, Ping; Street, Robert A.; Ready, Steve E.; Ho, Jackson; Apte, Raj B.; Van Schuylenbergh, Koenraad; Lau, Rachel; Weisfield, Richard L.; Lujan, Rene; Boyce, James B.

    1999-10-01

    Amorphous silicon (a-Si:H) matrix-addressed image sensors are the leading new technology for digital medical x-ray imaging. Large-area systems are now commercially available with good resolution and large dynamic range. These systems image x-rays either by detecting light emitted from a phosphor screen with an a-Si:H photodiode, or by collecting ionization charge in a thick x-ray-absorbing photoconductor such as selenium, and both approaches have been widely discussed in the literature. While these systems meet the performance needs for general radiographic imaging, further improvements in sensitivity, noise, and resolution are needed to fully satisfy the requirements for fluoroscopy and mammography. The approach taken in this paper uses indirect detection, with a phosphor layer for x-ray conversion; a thin a-Si:H photodiode layer then detects the scintillation light. In contrast with the present generation of devices, which have a mesa-isolated sensor at each pixel, these imagers use a continuous sensor covering the entire front surface of the array. The p+ and i layers of a-Si:H are continuous, while the n+ contact has been patterned to isolate adjacent pixels. The continuous photodiode layer maximizes light absorption from the phosphor and provides high x-ray conversion efficiency.

  20. Current achievements of nanoparticle applications in developing optical sensing and imaging techniques

    NASA Astrophysics Data System (ADS)

    Choi, Jong-ryul; Shin, Dong-Myeong; Song, Hyerin; Lee, Donghoon; Kim, Kyujung

    2016-11-01

    Metallic nanostructures have recently been demonstrated to improve the performance of optical sensing and imaging techniques due to their remarkable ability to localize electromagnetic fields. In particular, the zero-dimensional nanostructure, commonly called a nanoparticle, is a promising component for optical measurement systems due to its attractive features, e.g., ease of fabrication, capability for surface modification, and relatively high biocompatibility. This review summarizes the work to date on metallic nanoparticles for optical sensing and imaging applications, starting with the theoretical background of plasmonic effects in nanoparticles and moving through applications in Raman spectroscopy and fluorescence biosensors. Various efforts toward enhancing sensitivity, selectivity, and biocompatibility are summarized, and future outlooks for this field are discussed. Convergent studies in optical sensing and imaging have become an emerging field for the development of medical applications, including clinical diagnosis and therapy.

  1. High-resolution 1050 nm spectral domain retinal optical coherence tomography at 120 kHz A-scan rate with 6.1 mm imaging depth

    PubMed Central

    An, Lin; Li, Peng; Lan, Gongpu; Malchow, Doug; Wang, Ruikang K.

    2013-01-01

    We report a newly developed high-speed 1050 nm spectral domain optical coherence tomography (SD-OCT) system for imaging the posterior segment of the human eye. The system is capable of an axial resolution of ~10 µm in air, an imaging depth of 6.1 mm in air, a system sensitivity fall-off of ~6 dB/3 mm, and an imaging speed of 120,000 A-scans per second. We experimentally demonstrate the system's capability to perform phase-resolved imaging of dynamic blood flow within the retina, indicating the high phase stability of the SD-OCT system. Finally, we show an example that uses this newly developed system to image the posterior segment of the human eye with a large field of view (10 × 9 mm²), providing detailed visualization of microstructural features from the anterior retina to the posterior choroid. The demonstrated system parameters and imaging performance are comparable to those that a typical 1 µm swept-source OCT would deliver for retinal imaging. PMID:23411636

  2. Improved ultrasonic TV images achieved by use of Lamb-wave orientation technique

    NASA Technical Reports Server (NTRS)

    Berger, H.

    1967-01-01

    Lamb-wave sample orientation technique minimizes the interference from standing waves in continuous wave ultrasonic television imaging techniques used with thin metallic samples. The sample under investigation is oriented such that the wave incident upon it is not normal, but slightly angled.

  3. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications that had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD that is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by enabling process monitoring. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. [3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures are shown.

  4. Depth-varying density and organization of chondrocytes in immature and mature bovine articular cartilage assessed by 3D imaging and analysis

    NASA Technical Reports Server (NTRS)

    Jadin, Kyle D.; Wong, Benjamin L.; Bae, Won C.; Li, Kelvin W.; Williamson, Amanda K.; Schumacher, Barbara L.; Price, Jeffrey H.; Sah, Robert L.

    2005-01-01

    Articular cartilage is a heterogeneous tissue, with cell density and organization varying with depth from the surface. The objectives of the present study were to establish a method for localizing individual cells in three-dimensional (3D) images of cartilage and quantifying depth-associated variation in cellularity and cell organization at different stages of growth. Accuracy of nucleus localization was high, with 99% sensitivity relative to manual localization. Cellularity (million cells per cm³) decreased from 290, 310, and 150 near the articular surface in fetal, calf, and adult samples, respectively, to 120, 110, and 50 at a depth of 1.0 mm. The distance/angle to the nearest neighboring cell was 7.9 μm/31°, 7.1 μm/31°, and 9.1 μm/31° for cells at the articular surface of fetal, calf, and adult samples, respectively, and increased/decreased to 11.6 μm/31°, 12.0 μm/30°, and 19.2 μm/25° at a depth of 0.7 mm. The methodologies described here may be useful for analyzing the 3D cellular organization of cartilage during growth, maturation, aging, degeneration, and regeneration.

  5. Low-coherence in-depth microscopy for biological tissue imaging: design of a real-time control system

    NASA Astrophysics Data System (ADS)

    Blanchot, Loic; Lebec, Martial; Beaurepaire, Emmanuel; Gleyzes, Philippe; Boccara, Albert C.; Saint-Jalmes, Herve

    1998-01-01

    We describe the design of a versatile electronic system performing lock-in detection in parallel on every pixel of a 2D CCD camera. The system is based on a multiplexed lock-in detection method that requires accurate synchronization of the camera, the excitation signal, and the processing computer. This device has been incorporated in an imaging setup based on the optical coherence tomography principle, enabling the acquisition of a full 2D head-on image without scanning. The imaging experiment is implemented on a modified commercial microscope. Lateral resolution is on the order of 2 micrometers, and the coherence length of the light source defines an axial resolution of approximately 8 micrometers. Images of onion cells a few hundred microns deep into the sample are obtained with 100 dB sensitivity.

  6. Low-coherence in-depth microscopy for biological tissue imaging: design of a real-time control system

    NASA Astrophysics Data System (ADS)

    Blanchot, Loic; Lebec, Martial; Beaurepaire, Emmanuel; Gleyzes, Philippe; Boccara, A. Claude; Saint-Jalmes, Herve

    1997-12-01

    We describe the design of a versatile electronic system performing lock-in detection in parallel on every pixel of a 2D CCD camera. The system is based on a multiplexed lock-in detection method that requires accurate synchronization of the camera, the excitation signal, and the processing computer. This device has been incorporated in an imaging setup based on the optical coherence tomography principle, enabling the acquisition of a full 2D head-on image without scanning. The imaging experiment is implemented on a modified commercial microscope. Lateral resolution is on the order of 2 micrometers, and the coherence length of the light source defines an axial resolution of approximately 8 micrometers. Images of onion cells a few hundred microns deep into the sample are obtained with 100 dB sensitivity.

  7. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    NASA Astrophysics Data System (ADS)

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.

  8. Noncontact photoacoustic imaging achieved by using a low-coherence interferometer as the acoustic detector.

    PubMed

    Wang, Yi; Li, Chunhui; Wang, Ruikang K

    2011-10-15

    We report on a noncontact photoacoustic imaging (PAI) technique in which a low-coherence interferometer [(LCI), optical coherence tomography (OCT) hardware] is utilized as the acoustic detector. A synchronization approach is used to lock the LCI system at its highly sensitive region for photoacoustic detection. The technique is experimentally verified by the imaging of a scattering phantom embedded with hairs and the blood vessels within a mouse ear in vitro. The system's axial and lateral resolutions are evaluated at 60 and 30 μm, respectively. The experimental results indicate that PAI in a noncontact detection mode is possible with high resolution and high bandwidth. The proposed approach lends itself to a natural integration of PAI with OCT, rather than a combination of two separate and independent systems.

  9. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging

    PubMed Central

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-01-01

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models. PMID:27297211

  10. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging.

    PubMed

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-06-14

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models.

  11. High-speed depth-sectioned wide-field imaging using low-coherence photorefractive holographic microscopy

    NASA Astrophysics Data System (ADS)

    Dunsby, C.; Gu, Y.; Ansari, Z.; French, P. M. W.; Peng, L.; Yu, P.; Melloch, M. R.; Nolte, D. D.

    2003-04-01

    Low-coherence photorefractive holography has the potential to acquire wide-field coherence-gated images at frame rates approaching 1000 frames/s, including through scattering media. We present a quantitative analysis of the system optimization and limits of performance for coherence-gated imaging through scattering media using photorefractive holography and compare this performance to direct CCD detection. We show that, for high optical quality recording photorefractive multiple quantum well devices, photorefractive holography has the potential to provide a higher dynamic range than is possible with direct CCD-based detection.

  12. Depth-resolved mid-infrared photothermal imaging of living cells and organisms with submicrometer spatial resolution.

    PubMed

    Zhang, Delong; Li, Chen; Zhang, Chi; Slipchenko, Mikhail N; Eakins, Gregory; Cheng, Ji-Xin

    2016-09-01

    Chemical contrast has long been sought for label-free visualization of biomolecules and materials in complex living systems. Although infrared spectroscopic imaging has come a long way in this direction, it is thus far only applicable to dried tissues because of the strong infrared absorption by water. It also suffers from low spatial resolution due to long wavelengths and lacks optical sectioning capabilities. We overcome these limitations through sensing vibrational absorption-induced photothermal effect by a visible laser beam. Our mid-infrared photothermal (MIP) approach reached 10 μM detection sensitivity and submicrometer lateral spatial resolution. This performance has exceeded the diffraction limit of infrared microscopy and allowed label-free three-dimensional chemical imaging of live cells and organisms. Distributions of endogenous lipid and exogenous drug inside single cells were visualized. We further demonstrated in vivo MIP imaging of lipids and proteins in Caenorhabditis elegans. The reported MIP imaging technology promises broad applications from monitoring metabolic activities to high-resolution mapping of drug molecules in living systems, which are beyond the reach of current infrared microscopy.

  13. Depth-resolved mid-infrared photothermal imaging of living cells and organisms with submicrometer spatial resolution

    PubMed Central

    Zhang, Delong; Li, Chen; Zhang, Chi; Slipchenko, Mikhail N.; Eakins, Gregory; Cheng, Ji-Xin

    2016-01-01

    Chemical contrast has long been sought for label-free visualization of biomolecules and materials in complex living systems. Although infrared spectroscopic imaging has come a long way in this direction, it is thus far only applicable to dried tissues because of the strong infrared absorption by water. It also suffers from low spatial resolution due to long wavelengths and lacks optical sectioning capabilities. We overcome these limitations through sensing vibrational absorption–induced photothermal effect by a visible laser beam. Our mid-infrared photothermal (MIP) approach reached 10 μM detection sensitivity and submicrometer lateral spatial resolution. This performance has exceeded the diffraction limit of infrared microscopy and allowed label-free three-dimensional chemical imaging of live cells and organisms. Distributions of endogenous lipid and exogenous drug inside single cells were visualized. We further demonstrated in vivo MIP imaging of lipids and proteins in Caenorhabditis elegans. The reported MIP imaging technology promises broad applications from monitoring metabolic activities to high-resolution mapping of drug molecules in living systems, which are beyond the reach of current infrared microscopy. PMID:27704043

  14. Survey of light sources for image display systems to achieve brightness with efficient energy

    NASA Astrophysics Data System (ADS)

    Cheng, Dah Yu; Chen, Li-Min

    1995-04-01

    This paper reviews the currently available light sources and introduces a new, patented compound orthogonal parabolic reflector to be integrated with the light source, which focuses a relatively large light source into a very small point. The reflector creates a nearly ideal intense point source for all next-generation image display systems. The proposed system is not limited by the radiation source, whether it is a short-arc lamp or a long tungsten-filament lamp. Our technology takes the finite size of radiation sources into account to address the common problem of all reflector-lamp systems, i.e., intensity and uniformity (the dark hole). Successful examples show how to match the efficient, intense light source to the requirements of LCD and DMD display systems. A method for reducing UV and IR radiation is also demonstrated.

  15. Combining hard and soft magnetism into a single core-shell nanoparticle to achieve both hyperthermia and image contrast

    PubMed Central

    Yang, Qiuhong; Gong, Maogang; Cai, Shuang; Zhang, Ti; Douglas, Justin T; Chikan, Viktor; Davies, Neal M; Lee, Phil; Choi, In-Young; Ren, Shenqiang; Forrest, M Laird

    2015-01-01

    Background Biocompatible core/shell-structured magnetic nanoparticles (MNPs) were developed to mediate simultaneous cancer therapy and imaging. Methods & results A 22-nm MNP was first synthesized by magnetically coupling hard (FePt) and soft (Fe3O4) materials to produce high relative energy transfer. Colloidal stability of the FePt@Fe3O4 MNPs was achieved through surface modification with silane-polyethylene glycol (PEG). Intravenous administration of PEG-MNPs into tumor-bearing mice resulted in sustained particle accumulation in the tumor region, and 2 weeks after a local hyperthermia treatment the tumor burden of treated mice was one-third that of mice in the control groups. In vivo magnetic resonance imaging exhibited enhanced T2 contrast in the tumor region. Conclusion This work has demonstrated the feasibility of cancer theranostics with PEG-MNPs. PMID:26606855

  16. Framework of a Contour Based Depth Map Coding Method

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; He, Xun; Jin, Xin; Goto, Satoshi

    Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. The Depth Image Based Rendering (DIBR) system has been developed to improve Multiview Video Coding (MVC). In this system, the depth image is used to synthesize virtual views on the decoder side. A depth image is piecewise smooth, with sharp contours and smooth interiors. Contours in a depth image are more important than interiors in the view-synthesis process. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour-based coding strategy is proposed. First, the depth image is divided into layers by depth-value intervals. Then regions, which are defined as the basic coding unit in this work, are segmented from each layer. Each region is further divided into its contour and its interior. Two different procedures are employed to code contours and interiors, respectively. A vector-based strategy is applied to code the contour lines. Straight lines in contours cost few bits since they are regarded as vectors. Pixels that lie off the straight lines are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula, whose coefficients are retrieved by regression. This process is called interior painting. Unlike conventional block-based coding methods, the residual between the original frame and the reconstructed frame (obtained by contour rebuilding and interior painting) is not sent to the decoder. In this proposal, contours are coded losslessly, whereas interiors are coded lossily. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves better performance than JMVC (the reference software of MVC) in high-quality scenarios.

  17. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written in LabVIEW for high-speed imaging and depth-profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rates, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can easily be written based on several included templates. ChiMS is additionally well suited to imaging with non-laser-based mass spectrometers and to various other experiments in laser physics, physical chemistry, and surface science.

  18. An Image Processing Technique for Achieving Lossy Compression of Data at Ratios in Excess of 100:1

    DTIC Science & Technology

    1992-11-01


  19. High altitude diving depths.

    PubMed

    Paulev, Poul-Erik; Zubieta-Calleja, Gustavo

    2007-01-01

    In order to make any sea-level dive table usable during high-altitude diving, a new conversion factor is created. We introduce the standardized equivalent sea depth (SESD), which allows conversion of the actual lake diving depth (ALDD) to an equivalent sea dive depth. SESD is defined as the sea depth, in meters or feet, of a standardized sea dive that is equivalent to a mountain lake dive at any altitude. Mountain lakes contain fresh water with a density that can be standardized to 1,000 kg m⁻³, and sea water can likewise be standardized to a density of 1,033 kg m⁻³, at the standard gravity of 9.80665 m s⁻². The water density ratio (1,000/1,033) refers to the standardized fresh lake water and sea water densities. Following calculation of the SESD factor, we recommend the use of our simplified diving table, or any acceptable sea-level dive table, with two fundamental guidelines: 1. The classical decompression stages (30, 20, and 10 feet, or 9, 6, and 3 m) are corrected to the altitude lake level by dividing the stage depth by the SESD factor. 2. Likewise, the lake ascent rate during diving is equal to the sea ascent rate divided by the SESD factor.
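    A minimal sketch of the two guidelines above, assuming the SESD factor has already been computed for the lake altitude (the factor itself combines the fresh/sea water density ratio with the altitude correction defined in the article and is not reproduced here); the 10 m/min sea-level ascent rate is an illustrative value, not taken from the abstract.

```python
def altitude_adjusted_schedule(sesd_factor,
                               sea_level_stops_m=(9.0, 6.0, 3.0),
                               sea_level_ascent_rate_m_per_min=10.0):
    """Convert a sea-level decompression schedule for use at a mountain lake.

    sesd_factor : standardized-equivalent-sea-depth factor for the lake
                  altitude, as defined by Paulev and Zubieta-Calleja.

    Guideline 1: each classical stop depth (9, 6, 3 m) is divided by the
    SESD factor. Guideline 2: the ascent rate is likewise divided by it.
    """
    lake_stops = [depth / sesd_factor for depth in sea_level_stops_m]
    lake_ascent_rate = sea_level_ascent_rate_m_per_min / sesd_factor
    return lake_stops, lake_ascent_rate

# Example: a hypothetical factor of 1.25 shifts the 9/6/3 m stops to 7.2/4.8/2.4 m.
stops, rate = altitude_adjusted_schedule(1.25)
```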

  20. Fast super-resolution imaging with ultra-high labeling density achieved by joint tagging super-resolution optical fluctuation imaging.

    PubMed

    Zeng, Zhiping; Chen, Xuanze; Wang, Hening; Huang, Ning; Shan, Chunyan; Zhang, Hao; Teng, Junlin; Xi, Peng

    2015-02-10

    Previous stochastic localization-based super-resolution techniques are largely limited by the labeling density and by fidelity to the morphology of the specimen. We report an optical super-resolution imaging scheme implementing joint tagging with multiple fluorescent blinking dyes in combination with super-resolution optical fluctuation imaging (JT-SOFI), achieving ultra-high labeling density super-resolution imaging. To demonstrate the feasibility of JT-SOFI, quantum dots with different emission spectra were used to jointly label tubulin in COS7 cells, creating ultra-high-density labeling. After analyzing and combining the fluorescence intermittency images emanating from the spectrally resolved quantum dots, the microtubule networks can be investigated with high fidelity and remarkably enhanced contrast at sub-diffraction resolution. The spectral separation also significantly decreases the number of frames required for SOFI, enabling fast super-resolution microscopy through simultaneous data acquisition. Because the joint-tagging scheme decreases the labeling density in each spectral channel, thereby bringing it closer to the single-molecule regime, we can faithfully reconstruct the continuous microtubule structure with high resolution from only 100 frames per channel. The improved continuity of the microtubule structure is quantitatively validated with image skeletonization, demonstrating the advantage of JT-SOFI over other localization-based super-resolution methods.
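    For orientation, the second-order SOFI signal at each pixel is the zero-lag temporal auto-cumulant (the variance) of its intensity fluctuations; joint tagging applies the same analysis per spectral channel before the channels are combined. The sketch below shows that generic computation, with a simplistic summation of channels standing in for the authors' full JT-SOFI pipeline.

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI image from a fluorescence-intermittency movie.

    stack : (T, H, W) array of frames from one spectral channel.
    Returns the per-pixel temporal variance of the fluctuations, which
    sharpens independently blinking emitters relative to the mean image.
    """
    fluctuations = stack - stack.mean(axis=0)      # remove the per-pixel mean
    return (fluctuations ** 2).mean(axis=0)        # zero-lag second-order cumulant

def jt_sofi(channels):
    """Combine per-channel SOFI images (here by summation, an assumed and
    simplistic combination rule, not the paper's exact procedure)."""
    return sum(sofi2(stack) for stack in channels)
```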

  1. Reconstruction of Indoor Models Using Point Clouds Generated from Single-Lens Reflex Cameras and Depth Images

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Wu, T.-S.; Lee, I.-C.; Chang, H.; Su, A. Y. S.

    2015-05-01

    This paper presents a data acquisition system consisting of multiple RGB-D sensors and digital single-lens reflex (DSLR) cameras. A systematic data processing procedure for integrating these two kinds of devices to generate three-dimensional point clouds of indoor environments is also developed and described. In the developed system, DSLR cameras are used to bridge the Kinects and provide a more accurate ray intersection condition, taking advantage of the higher resolution and image quality of the DSLR cameras. Structure from Motion (SFM) reconstruction is used to link and merge multiple Kinect point clouds and dense point clouds (from DSLR color images) to generate initial integrated point clouds. Then, bundle adjustment is used to resolve the exterior orientation (EO) of all images. Those exterior orientations are used as initial values to combine the point clouds at each frame into the same coordinate system using a Helmert (seven-parameter) transformation. Experimental results demonstrate that the design of the data acquisition system and the data processing procedure can successfully generate dense and fully colored point clouds of indoor environments, even in featureless areas. The accuracy of the generated point clouds was evaluated by comparing the widths and heights of identified objects, as well as the coordinates of pre-set independent check points, against in situ measurements. Based on the generated point clouds, complete and accurate three-dimensional models of indoor environments can be constructed effectively.
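    The Helmert (seven-parameter) similarity transformation used to merge the per-frame point clouds maps coordinates through one scale factor, three rotation angles, and three translations. A minimal sketch, with rotation order and sign conventions chosen for illustration (conventions vary between implementations):

```python
import numpy as np

def helmert_transform(points, scale, rx, ry, rz, tx, ty, tz):
    """Apply a 7-parameter (Helmert) similarity transformation.

    points     : (N, 3) coordinates in the source frame
    scale      : scale factor
    rx, ry, rz : rotation angles about x, y, z (radians)
    tx, ty, tz : translation components

    X_target = t + scale * R @ X_source, with R built from the three rotations.
    """
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                      # combined rotation matrix
    t = np.array([tx, ty, tz])
    return t + scale * points @ R.T       # transform every point
```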

  2. Underwater camera with depth measurement

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined from variations in the geometry of a projected pattern. The other camera system is based on a Time-of-Flight (ToF) depth camera. The results for the structured-light camera system show that it requires a stronger light source, with a similar operating wavelength and bandwidth, to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete re-design of the light source component. The ToF camera system, instead, allows arbitrary placement of the light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
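    For context, a continuous-wave ToF camera recovers distance from the phase delay of the modulated light; underwater, the speed of light is reduced by the refractive index, which also shortens the unambiguous range at a given modulation frequency. The relations below are generic CW-ToF formulas, not a model of the specific cameras tested, and the 1.33 refractive index of water is an assumption for the underwater case.

```python
import numpy as np

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s

def tof_distance(phase_rad, mod_freq_hz, refractive_index=1.33):
    """Distance from measured phase delay for a CW time-of-flight camera:
    d = (c/n) * phase / (4 * pi * f_mod)."""
    c = C_VACUUM / refractive_index
    return c * phase_rad / (4.0 * np.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz, refractive_index=1.33):
    """Maximum unambiguous distance, reached when the phase wraps at 2*pi."""
    return (C_VACUUM / refractive_index) / (2.0 * mod_freq_hz)
```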

  3. Quantitative, depth-resolved determination of particle motion using multi-exposure speckle imaging and spatial frequency domain analysis

    NASA Astrophysics Data System (ADS)

    Rice, Tyler Bywaters

    Laser Speckle Imaging (LSI) is a simple, noninvasive technique to quickly image particle motion in scattering media such as biological tissue. LSI is generally used as a qualitative metric of relative flow speeds due to the unknown impact of other variables that affect speckle contrast. These variables include the intensity profile of the coherent laser beam, optical absorption and scattering coefficients, multi-layered dynamics including static, non-ergodic sections, and systematic effects such as laser coherence length. We overcame these obstacles using a methodology that combined accurate photon transport modeling, multi-exposure speckle imaging (MESI), spatial frequency domain imaging (SFDI), and careful instrument calibration to determine absolute flow metrics in layered geometries. The impact of the beam profile was explored by first demonstrating a significant effect in in vivo experiments. Next, we determined the spatial frequencies of the beam shape using Discrete Fourier Transforms (DFTs) and modeled the propagation of each frequency individually using the SFDI methodology. We found that we could accurately predict the effect of the beam profile and correct the aberration using optical components to form a flat, planar light projection. Next, diffusion and Monte Carlo forward models of light transport were compared and used to determine flow dynamics in a simple homogeneous diffusing solution. The results showed that Monte Carlo outperformed diffusion, due to breakdown of the diffusion approximation at clinically relevant camera exposure times. Phantoms were created with varying sizes of scattering polystyrene spheres and concentrations of viscous glycerine. Speckle contrast measurements were taken and fit using Monte Carlo models, and Brownian diffusion coefficients were returned within ~10% of expected values. Monte Carlo models were then extended to generate total and layer-specific fractional momentum transfer distributions. This information was
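    As a reference point for the MESI analysis described above, the basic quantity measured at each exposure time is the local speckle contrast K = σ/⟨I⟩. The sketch below shows that generic computation only, not the dissertation's multi-exposure flow model or Monte Carlo fitting.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(image, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window."""
    img = image.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)   # guard against negative round-off
    return np.sqrt(var) / np.maximum(mean, 1e-12)

# In MESI, K maps are computed at several exposure times and jointly fitted
# with a forward model of the flow dynamics; each exposure yields its own K map.
```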

  4. Quantitative estimation of Secchi disk depth using the HJ-1B CCD image and in situ observations in Sishili Bay, China

    NASA Astrophysics Data System (ADS)

    Yu, Dingfeng; Zhou, Bin; Fan, Yanguo; Li, Tantan; Liang, Shouzhen; Sun, Xiaoling

    2014-11-01

    Secchi disk depth (SDD) is an important optical property of water related to water quality and primary production. The traditional sampling method is not only time-consuming and labor-intensive but also limited in temporal and spatial coverage, while remote sensing technology can address these limitations. In this study, models estimating SDD were developed based on regression analysis between the HJ-1 satellite CCD image and synchronous in situ water quality measurements. The results show that the B3/B1 band-ratio model of the CCD can be used to estimate Secchi depth in this region, with a mean relative error (MRE) of 8.6% and a root mean square error (RMSE) of 0.1 m. This model was applied to one HJ-1 satellite CCD image to generate a water transparency map for June 23, 2009, which will be of value for environmental monitoring. In addition, SDD was greater in offshore waters than in inshore waters. River runoff, hydrodynamic conditions, and marine aquaculture are the main factors influencing SDD in this area.
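    The band-ratio retrieval amounts to regressing in situ Secchi depth against the CCD B3/B1 ratio and applying the fitted relation to the scene. The sketch below assumes a simple linear regression form with illustrative coefficients; the paper reports only the band ratio used and the resulting errors, not the fitted equation.

```python
import numpy as np

def fit_sdd_model(b3_over_b1_samples, sdd_in_situ_m):
    """Fit a linear band-ratio model SDD = a * (B3/B1) + b to match-up data."""
    a, b = np.polyfit(b3_over_b1_samples, sdd_in_situ_m, deg=1)
    return a, b

def map_sdd(b1_image, b3_image, a, b):
    """Apply the fitted model to an HJ-1B CCD scene to map Secchi disk depth."""
    ratio = b3_image / np.maximum(b1_image, 1e-6)   # avoid division by zero
    return a * ratio + b
```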

  5. Confocal Raman imaging and chemometrics applied to solve forensic document examination involving crossed lines and obliteration cases by a depth profiling study.

    PubMed

    Borba, Flávia de Souza Lins; Jawhari, Tariq; Saldanha Honorato, Ricardo; de Juan, Anna

    2017-03-27

    This article describes a non-destructive analytical method developed to solve forensic document examination problems involving crossed lines and obliteration. Different strategies combining confocal Raman imaging and multivariate curve resolution-alternating least squares (MCR-ALS) are presented. Multilayer images were acquired at subsequent depth layers into the samples. It is the first time that MCR-ALS is applied to multilayer images for forensic purposes. In this context, this method provides a single set of pure spectral ink signatures and related distribution maps for all layers examined from the sole information in the raw measurement. Four cases were investigated, namely, two concerning crossed lines with different degrees of ink similarity and two related to obliteration, where previous or no knowledge about the identity of the obliterated ink was available. In the crossing line scenario, MCR-ALS analysis revealed the ink nature and the chronological order in which strokes were drawn. For obliteration cases, results making active use of information about the identity of the obliterated ink in the chemometric analysis were of similar quality as those where the identity of the obliterated ink was unknown. In all obliteration scenarios, the identity of inks and the obliterated text were satisfactorily recovered. The analytical methodology proposed is of general use for analytical forensic document examination problems, and considers different degrees of complexity and prior available information. Besides, the strategies of data analysis proposed can be applicable to any other kind of problem in which multilayer Raman images from multicomponent systems have to be interpreted.
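    MCR-ALS factorizes the unfolded multilayer image data D (pixels × spectral channels) into non-negative concentration maps C and pure ink spectra S, D ≈ C·Sᵀ, by alternating constrained least squares. The sketch below is a bare-bones version with a crude non-negativity projection; the published analysis uses additional constraints and initial estimates not reproduced here.

```python
import numpy as np

def mcr_als(D, S0, n_iter=50):
    """Minimal MCR-ALS: D (pixels x channels) ~= C @ S.T with C, S >= 0.

    D  : unfolded multilayer Raman image (every pixel of every depth layer
         stacked as one row of the data matrix)
    S0 : (channels x k) initial guess of the k pure ink spectra
    """
    S = S0.copy()
    for _ in range(n_iter):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)    # update concentration maps
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)  # update pure spectra
    return C, S
```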

  6. Real-time human pose estimation and gesture recognition from depth images using superpixels and SVM classifier.

    PubMed

    Kim, Hanguen; Lee, Sangwon; Lee, Dongsung; Choi, Soonmin; Ju, Jinsun; Myung, Hyun

    2015-05-26

    In this paper, we present human pose estimation and gesture recognition algorithms that use only depth information. The proposed methods are designed to operate with only a CPU (central processing unit), so that the algorithm can run on a low-cost platform, such as an embedded board. The human pose estimation method is based on an SVM (support vector machine) and superpixels, without prior knowledge of a human body model. In the gesture recognition method, gestures are recognized from the pose information of a human body. To recognize gestures regardless of motion speed, the proposed method utilizes a keyframe extraction method. Gesture recognition is performed by comparing input keyframes with the keyframes of registered gestures. The gesture yielding the smallest comparison error is chosen as the recognized gesture. To prevent recognition of gestures when a person performs a gesture that is not registered, we derive the maximum allowable comparison errors by comparing each registered gesture with the other gestures. We evaluated our method using a dataset that we generated. The experimental results show that our method performs fairly well and is applicable in real environments.
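    The recognition rule described above reduces to nearest-template matching over keyframes with per-gesture rejection thresholds. A minimal sketch, with the pose representation left abstract and a Euclidean error metric assumed (the paper does not specify the metric):

```python
import numpy as np

def recognize(input_keyframes, registered, max_errors):
    """Match input keyframes against registered gestures.

    input_keyframes : (K, d) pose vectors for the extracted keyframes
    registered      : dict name -> (K, d) keyframes of a registered gesture
    max_errors      : dict name -> maximum allowable comparison error,
                      derived beforehand by comparing each registered
                      gesture against all the others

    Returns the gesture with the smallest error, or None if even the best
    match exceeds its allowable error (i.e., an unregistered gesture).
    """
    errors = {name: float(np.linalg.norm(input_keyframes - keyframes))
              for name, keyframes in registered.items()}
    best = min(errors, key=errors.get)
    return best if errors[best] <= max_errors[best] else None
```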

  7. Perceptual scaling of visual and inertial cues: effects of field of view, image size, depth cues, and degree of freedom.

    PubMed

    Correia Grácio, B J; Bos, J E; van Paassen, M M; Mulder, M

    2014-02-01

    In the field of motion-based simulation, it was found that a visual amplitude equal to the inertial amplitude does not always provide the best perceived match between visual and inertial motion. This result is thought to be caused by the "quality" of the motion cues delivered by the simulator motion and visual systems. This paper studies how different visual characteristics, like field of view (FoV) and size and depth cues, influence the scaling between visual and inertial motion in a simulation environment. Subjects were exposed to simulator visuals with different fields of view and different visual scenes and were asked to vary the visual amplitude until it matched the perceived inertial amplitude. This was done for motion profiles in surge, sway, and yaw. Results showed that the subjective visual amplitude was significantly affected by the FoV, visual scene, and degree-of-freedom. When the FoV and visual scene were closer to what one expects in the real world, the scaling between the visual and inertial cues was closer to one. For yaw motion, the subjective visual amplitudes were approximately the same as the real inertial amplitudes, whereas for sway and especially surge, the subjective visual amplitudes were higher than the inertial amplitudes. This study demonstrated that visual characteristics affect the scaling between visual and inertial motion which leads to the hypothesis that this scaling may be a good metric to quantify the effect of different visual properties in motion-based simulation.

  8. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing an image with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacements in the left and right camera images. These images, when presented separately to the two eyes, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects from 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.

  9. Static stereo vision depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, D. B.; Von Sydow, M.

    1988-01-01

    A major problem in high-precision teleoperation is the high-resolution presentation of depth information. Stereo television has so far proved to be only a partial solution, due to an inherent trade-off among depth resolution, depth distortion and the alignment of the stereo image pair. Converged cameras can guarantee image alignment but suffer significant depth distortion when configured for high depth resolution. Moving the stereo camera rig to scan the work space further distorts depth. The 'dynamic' (camera-motion induced) depth distortion problem was solved by Diner and Von Sydow (1987), who have quantified the 'static' (camera-configuration induced) depth distortion. In this paper, a stereo image presentation technique which yields aligned images, high depth resolution and low depth distortion is demonstrated, thus solving the trade-off problem.

  10. Conductivity-depth imaging of fixed-wing time-domain electromagnetic data with pitch based on two-component measurement

    NASA Astrophysics Data System (ADS)

    Dou, Mei; Zhang, Qiong; Meng, Yang; Li, Jing; Lu, Yiming; Zhu, Kaiguang

    2017-01-01

    Conductivity-depth imaging (CDI) of fixed-wing time-domain electromagnetic data is generally applied to identify conductive targets. CDI results are affected by the bird attitude, especially the pitch of the receiver coil, which depends on the attitude and velocity of the aircraft and on the wind speed. A CDI algorithm that accounts for pitch is developed based on two-component measurement. A lookup table is established from the two-component B-field response, with the pitch included as a table parameter. The primary advantages of this method are immunity to pitch errors and better resolution of conductive layers than results obtained without considering pitch. Not only the conductivity but also the pitch can be obtained from this algorithm. Tests on synthetic data demonstrate that the CDI results with pitch based on two-component measurement are better than results that neglect pitch, and that the recovered pitch is close to the true model in many circumstances.
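
    A minimal sketch of the table-lookup idea described above: a grid of two-component B-field responses is precomputed over conductivity and pitch, and the measured pair is matched against the table to recover both parameters. The toy forward model, grid ranges, and function names are assumptions; a real implementation would use a layered-earth forward model and interpolation rather than nearest-neighbour search.

    ```python
    import numpy as np

    def build_table(conductivities, pitches, forward_model):
        """Precompute two-component B-field responses (Bx, Bz) over a grid of
        half-space conductivity and receiver-coil pitch values."""
        table = {}
        for sigma in conductivities:
            for pitch in pitches:
                table[(sigma, pitch)] = forward_model(sigma, pitch)
        return table

    def invert(measured_bx_bz, table):
        """Return the (conductivity, pitch) pair whose tabulated response is
        closest (least squares) to the measured two-component data."""
        best, best_misfit = None, np.inf
        for key, response in table.items():
            misfit = np.sum((np.asarray(measured_bx_bz) - np.asarray(response)) ** 2)
            if misfit < best_misfit:
                best, best_misfit = key, misfit
        return best

    # toy forward model standing in for the real layered-earth response
    def toy_model(sigma, pitch):
        return (sigma * np.cos(np.radians(pitch)), sigma * np.sin(np.radians(pitch)) + 0.1)

    table = build_table(np.linspace(0.01, 1.0, 50), np.linspace(-15, 15, 31), toy_model)
    result = invert(toy_model(0.42, 5.0), table)
    print(tuple(round(float(v), 3) for v in result))  # nearest grid point to (0.42, 5.0)
    ```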

  11. Choroidal area assessment in various fundus sectors of patients at different stages of primary open-angle glaucoma by using enhanced depth imaging optical coherence tomography

    PubMed Central

    Li, Mu; Yan, Xiao-Qin; Song, Yin-Wei; Guo, Jing-Min; Zhang, Hong

    2017-01-01

    Abstract To compare the choroidal area in different eye fundus sectors of subjects with normal eyes, early-stage primary open-angle glaucoma (POAG) eyes, and 10° tubular visual field POAG eyes using enhanced depth imaging optical coherence tomography. Twenty-five normal, 25 early-stage POAG, and 25 ten-degree tubular visual field POAG eyes were recruited. Enhanced depth imaging optical coherence tomography was used to measure the choroidal area in different fundus sectors (fovea; 10° superior, inferior, temporal, and 24° superior, inferior, temporal, nasal to the fovea) and the peripapillary sector. There were neither significant differences in the choroidal area at any of the 8 measured fundus sectors, nor significant differences in the percentage change between the choroidal area of the fovea and the other 7 measured fundus sectors among the 3 groups (all P > 0.05). For the total peripapillary choroidal area, no significant difference was found among the 3 groups (P > 0.05); however, the temporal peripapillary choroidal area of 10° tubular visual field POAG eyes was significantly larger than that of normal eyes (446,213 ± 116,267 vs 374,164 ± 121,658 μm²; P = 0.048). Our study showed that there was no significant difference in the choroidal area of the 8 measured fundus sectors among normal, early-stage POAG, and 10° tubular visual field POAG eyes, suggesting that there might be no blood redistribution from the peripheral choroid to the subfoveal choroid. However, the larger temporal peripapillary choroidal area might play a role in central visual acuity protection in patients with POAG. PMID:28272255

  12. Teaching image-processing concepts in junior high school: boys' and girls' achievements and attitudes towards technology

    NASA Astrophysics Data System (ADS)

    Barak, Moshe; Asad, Khaled

    2012-04-01

    Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these subjects to the children's world and to the digital culture characterizing society today. Sample: The participants were 60 junior high-school students (9th grade). Design and method: Data collection included observations in the classes, administering an attitude questionnaire before and after the course, giving an achievement exam and analyzing the students' final projects. Results and conclusions: The findings indicated that boys' and girls' achievements were similar throughout the course, and all managed to handle the mathematical knowledge without any particular difficulties. Learners' motivation to engage in the subject was high in the project-based learning part of the course in which they dealt, for instance, with editing their own pictures and experimenting with a facial recognition method. However, the students were less interested in learning the theory at the beginning of the course. The course increased the girls', more than the boys', interest in learning scientific-technological subjects in school, and the gender gap in this regard was bridged.

  13. An Examination of the Relationship between Gifted Students' Self-Image, Gifted Program Model, Years in the Program, and Academic Achievement

    ERIC Educational Resources Information Center

    Creasy, Lydia A.

    2012-01-01

    This study examined the correlations between gifted students' self-image, academic achievement, and number of years enrolled in the gifted programming. In addition, the study examined the relationships between gifted students' educational placement, race, and gender with self-image. Study participants were gifted students in third through eighth…

  14. Topographical and depth-dependent glycosaminoglycan concentration in canine medial tibial cartilage 3 weeks after anterior cruciate ligament transection surgery—a microscopic imaging study

    PubMed Central

    Mittelstaedt, Daniel; Kahn, David

    2016-01-01

    Background Medical imaging has become an invaluable tool for diagnosing damage to cartilage. Depletion of glycosaminoglycans (GAG) has been shown to be one of the early signs of cartilage degradation. In order to investigate the topographical changes in GAG concentration caused by anterior cruciate ligament transection (ACLT) surgery in a canine model, microscopic magnetic resonance imaging (µMRI) and microscopic computed tomography (µCT) were used to measure the GAG concentration, with correlation from a biochemical assay, inductively coupled plasma optical emission spectroscopy (ICP-OES), to understand where the topographical and depth-dependent changes in GAG concentration occur. Methods This study used eight knee joints from four canines, which were examined 3 weeks after ACLT surgery. From right (n=3) and left (n=1) medial tibias of the ACLT and the contralateral side, two ex vivo specimens from each of four locations (interior, central, exterior and posterior) were imaged before and after equilibration in contrast agents. The cartilage blocks imaged using µMRI were approximately 3 mm × 5 mm and were imaged before and after eight hours of submersion in a gadolinium (Gd) contrast agent, with an in-plane pixel resolution of 17.6 µm² and an image slice thickness of 1 mm. The cartilage blocks imaged using µCT were approximately 2 mm × 1 mm and were imaged before and after 24 hours of submersion in ioxaglate, with an isotropic voxel resolution of 13.4 µm³. ICP-OES was used to quantify the bulk GAG at each topographical location. Results The pre-contrast µMRI and µCT results did not demonstrate significant differences in GAG between the ACLT and contralateral cartilage at any topographical location. The post-contrast µMRI and µCT results demonstrated topographically similar significant differences in GAG concentrations between the ACLT and contralateral tibia. Using µMRI, the GAG concentrations (mg/mL) were measured for the ACLT and contralateral

  15. Cost-effective instrumentation for quantitative depth measurement of optic nerve head using stereo fundus image pair and image cross correlation techniques

    NASA Astrophysics Data System (ADS)

    de Carvalho, Luis Alberto V.; Carvalho, Valeria

    2014-02-01

    One of the main problems with glaucoma throughout the world is that there are typically no symptoms in the early stages. Many people who have the disease do not know they have it, and by the time they find out, the disease is usually at an advanced stage. Most retinal cameras available in the market today use sophisticated optics and have several other features/capabilities (wide-angle optics, red-free and angiography filters, etc.) that make them expensive for general practice or for screening purposes. Therefore, it is important to develop instrumentation that is fast, effective and economical, in order to reach the mass public in general eye-care centers. In this work, we have constructed the hardware and software of a cost-effective, non-mydriatic prototype device that allows fast capturing and plotting of high-resolution quantitative 3D images and videos of the optic nerve head and the neighboring region (30° field of view). The main application of this device is glaucoma screening, although it may also be useful for the diagnosis of other pathologies related to the optic nerve.

  16. Depth perception estimation of various stereoscopic displays.

    PubMed

    Baek, Sangwook; Lee, Chulhee

    2016-10-17

    In this paper, we investigate the relationship between depth perception and several disparity parameters in stereoscopic images. A number of subjective experiments were conducted using various 3D displays, which indicate that depth perception of stereoscopic images is proportional to depth difference and is inversely related to the camera distance. Based on this observation, we developed some formulas to quantify the degree of depth perception of stereoscopic images. The proposed method uses depth differences and the camera distance between the objects and the 3D camera. This method also produces improved depth perception estimation by using non-linear functions whose inputs include a depth difference and a camera distance. The results show that the proposed method provides noticeable improvements in terms of correlation and produces more accurate depth perception estimations of stereoscopic images.
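
    A toy sketch of the kind of estimator the abstract describes, where perceived depth grows with the depth difference and falls with the camera distance; the power-law form and the exponents below are illustrative assumptions, not the authors' fitted non-linear functions.

    ```python
    import numpy as np

    def depth_perception_score(depth_difference, camera_distance, a=1.0, p=0.8, q=1.2):
        """Toy estimator following the abstract's observation: perception grows
        with the depth difference between objects and falls with the distance
        from the 3D camera. The power-law form and the exponents are
        illustrative assumptions, not the authors' model."""
        return a * depth_difference ** p / camera_distance ** q

    print(depth_perception_score(depth_difference=0.5, camera_distance=2.0))
    print(depth_perception_score(depth_difference=0.5, camera_distance=4.0))  # weaker percept
    ```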

  17. Metal detector depth estimation algorithms

    NASA Astrophysics Data System (ADS)

    Marble, Jay; McMichael, Ian

    2009-05-01

    This paper looks at depth estimation techniques using electromagnetic induction (EMI) metal detectors. Four algorithms are considered. The first utilizes a vertical gradient sensor configuration. The second is a dual-frequency approach. The third makes use of dipole and quadrupole receiver configurations. The fourth looks at coils of different sizes. Each algorithm is described along with its associated sensor. Two figures of merit ultimately define algorithm/sensor performance. The first is the depth of penetration obtainable, that is, the maximum detection depth; it describes the ability of the method to detect deep targets. The second is the achievable statistical depth resolution, which describes the precision with which depth can be estimated. In this paper, depth of penetration and statistical depth resolution are qualitatively determined for each sensor/algorithm. A scientific method is used to make these assessments. A field test was conducted using 2 lanes with emplaced UXO. The first lane contains 155 shells at increasing depths from 0" to 48". The second is more realistic, containing objects of varying size. The first lane is used for algorithm training purposes, while the second is used for testing. The metal detectors used in this study are the Geonics EM61, Geophex GEM5, Minelab STMR II, and Vallon VMV16.

  18. Optimization of the depth resolution for deuterium depth profiling up to large depths

    NASA Astrophysics Data System (ADS)

    Wielunska, B.; Mayer, M.; Schwarz-Selinger, T.

    2016-11-01

    The depth resolution of deuterium depth profiling by the nuclear reaction D(3He,p)α is studied theoretically and experimentally. General kinematic considerations are presented which show that the depth resolution for deuterium depth profiling using the nuclear reaction D(3He,p)α is best at reaction angles of 0° and 180° at all incident energies below 9 MeV and for all depths and materials. In order to confirm this theoretical prediction the depth resolution was determined experimentally with a conventional detector at 135° and an annular detector at 175.9°. Deuterium containing thin films buried under different metal cover layers of aluminum, molybdenum and tungsten with thicknesses in the range of 0.5-11 μm served as samples. For all materials and depths an improvement of the depth resolution with the detector at 175.9° is achieved. For tungsten as cover layer a better depth resolution up to a factor of 18 was determined. Good agreement between the experimental results and the simulations for the depth resolution is demonstrated.

  19. Depth resolution enhancement in double-detection optical scanning holography.

    PubMed

    Ou, Haiyan; Poon, Ting-Chung; Wong, Kenneth K Y; Lam, Edmund Y

    2013-05-01

    We propose an optical scanning holography system with enhanced axial resolution using two detections at different depths. By scanning the object twice, we can obtain two different sets of Fresnel zone plates to sample the same object, which in turn provides more information for the sectional image reconstruction process. We develop the computation algorithm that makes use of such information, solving a constrained optimization problem using the conjugate gradient method. Simulation results show that this method can achieve a depth resolution up to 1 μm.

  20. Evaluation of Choroidal Thickness and Volume during the Third Trimester of Pregnancy using Enhanced Depth Imaging Optical Coherence Tomography: A Pilot Study

    PubMed Central

    Meira, Dália M; Oliveira, Marisa A; Ribeiro, Lígia F; Fonseca, Sofia L

    2015-01-01

    Background During pregnancy the maternal choroid is exposed to the multiple haemodynamic and hormonal alterations inherent to this physiological condition. These changes may influence choroidal anatomy. In this study a quantitative assessment of the overall choroidal structure is performed by constructing a 3-dimensional topographic map of this vascular bed. Purpose To compare the thickness and volume of the maternal choroid in the third trimester of pregnancy with those of an age-matched control group of women. Materials and Methods Twenty-four eyes of 12 pregnant women in the last trimester and 24 eyes of 12 age-matched healthy controls were included. Optical coherence tomography in enhanced depth imaging mode was used to construct maps of the choroid of the macular area. Choroidal thickness and volume were automatically calculated for the 9 subfields defined by the Early Treatment Diabetic Retinopathy Study (ETDRS). A comparative analysis between the two groups was performed using the two-way ANOVA test. Results The average thickness of the choroid for the entire ETDRS area was 295.15 ± 42.40 μm in the pregnant group and 271.56 ± 37.65 μm in the control group (p=0.051). The average choroidal volume was 8.05 ± 1.12 mm³ and 7.46 ± 1.03 mm³, respectively (p=0.067). Although the choroid of the pregnant group had larger thickness and volume in all subfields compared to the control group, this difference was statistically significant in only three regions - the central subfield, minimum foveal thickness and inferior inner macula (p<0.05). Conclusion Our study suggests that in the third trimester of pregnancy the choroid may be subject to physiological changes in structure. Whether these changes are a result of hormonal and/or haemodynamic adaptations of pregnancy remains to be studied. PMID:26435977

  1. Achieving Quality in Cardiovascular Imaging II: proceedings from the Second American College of Cardiology -- Duke University Medical Center Think Tank on Quality in Cardiovascular Imaging.

    PubMed

    Douglas, Pamela S; Chen, Jersey; Gillam, Linda; Hendel, Robert; Hundley, W Gregory; Masoudi, Frederick; Patel, Manesh R; Peterson, Eric

    2009-02-01

    Despite rapid technologic advances and sustained growth, less attention has been focused on quality in imaging than in other areas of cardiovascular medicine. To address this deficit, representatives from cardiovascular imaging societies, private payers, government agencies, the medical imaging industry, and experts in quality measurement met in the second Quality in Cardiovascular Imaging Think Tank. The participants endorsed the previous consensus definition of quality in imaging and proposed quality measures. Additional areas of needed effort included data standardization and structured reporting, appropriateness criteria, imaging registries, laboratory accreditation, partnership development, and imaging research. The second American College of Cardiology-Duke University Think Tank continued the process of the development, dissemination, and adoption of quality improvement initiatives for all cardiovascular imaging modalities.

  2. Ultrashallow seismic imaging of the causative fault of the 1980, M6.9, southern Italy earthquake by pre-stack depth migration of dense wide-aperture data

    NASA Astrophysics Data System (ADS)

    Bruno, Pier Paolo; Castiello, Antonio; Improta, Luigi

    2010-10-01

    A two-step imaging procedure, including pre-stack depth migration (PSDM) and non-linear multiscale refraction tomography, was applied to dense wide-aperture data with the aim of imaging the causative fault of the 1980, M6.9, Irpinia normal faulting earthquake in a very complex geologic environment. PSDM is often ineffective for ultrashallow imaging (100 m of depth and less) of laterally heterogeneous media because of the difficulty in estimating a correct velocity model for migration. Dense wide-aperture profiling allowed us to build accurate velocity models across the fault zone by multiscale tomography and to record wide-angle reflections from steep reflectors. PSDM provided better imaging with respect to conventional post-stack depth migration, and improved definition of fault geometry and apparent cumulative displacement. Results indicate that this imaging strategy can be very effective for near-surface fault detection and characterization. Fault location and geometry are in agreement with paleoseismic data from two nearby trenches. The estimated vertical fault throw is only 29-38 m. This value, combined with the vertical slip rate determined by trench data, suggests a young age (97-127 kyr) of fault inception.

  3. High-dimensional camera shake removal with given depth map.

    PubMed

    Yue, Tao; Suo, Jinli; Dai, Qionghai

    2014-06-01

    Camera motion blur is drastically nonuniform for large depth-range scenes, and the nonuniformity caused by camera translation is depth dependent, which is not the case for camera rotations. To restore the blurry images of large-depth-range scenes deteriorated by arbitrary camera motion, we build an image blur model considering the 6 degrees of freedom (DoF) of camera motion with a given scene depth map. To make this 6D depth-aware model tractable, we propose a novel parametrization strategy to reduce the number of variables and an effective method to estimate high-dimensional camera motion as well. The number of variables is reduced by a temporal sampling motion function, which describes the 6-DoF camera motion by sampling the camera trajectory uniformly in the time domain. To effectively estimate the high-dimensional camera motion parameters, we construct the probabilistic motion density function (PMDF) to describe the probability distribution of camera poses during exposure, and apply it as a unified constraint to guide the convergence of the iterative deblurring algorithm. Specifically, PMDF is computed through a back projection from 2D local blur kernels to 6D camera motion parameter space and robust voting. We conduct a series of experiments on both synthetic and real captured data, and validate that our method achieves better performance than existing uniform methods and nonuniform methods on large-depth-range scenes.

  4. Targeting Cancer Protein Profiles with Split-Enzyme Reporter Fragments to Achieve Chemical Resolution for Molecular Imaging

    DTIC Science & Technology

    2014-11-01

    …conducted in vivo with animals bearing tumors that express either a subset or all of the biomarkers required for enzyme trans-complementation. …application for imaging the multi-step progression of cancer growth that requires the coordinated overexpression of multiple biomarkers. Subject terms: molecular imaging, enzyme complementation, cancer biomarkers, epidermal growth factor receptor.

  5. Event-related functional magnetic resonance imaging (efMRI) of depth-by-disparity perception: additional evidence for right-hemispheric lateralization.

    PubMed

    Baecke, Sebastian; Lützkendorf, Ralf; Tempelmann, Claus; Müller, Charles; Adolf, Daniela; Scholz, Michael; Bernarding, Johannes

    2009-07-01

    In natural environments depth-related information has to be extracted very fast from binocular disparity, even if cues are presented only briefly. However, few studies have used efMRI to study depth perception. We therefore analyzed the extension and localization of activation evoked by depth-by-disparity stimuli that were displayed for 1 s. As some clinical as well as neuroimaging studies had found a right-hemispheric lateralization of depth perception, the sample size was increased to 26 subjects to gain higher statistical significance. All individuals reported a stable depth perception. In the random effects analysis the maximum activation of the disparity versus no disparity condition was highly significant and located in the extra-striate cortex, presumably in V3A (P < 0.05, family wise error). The activation was more pronounced in the right hemisphere. However, in the single-subject analysis depth-related right-hemispheric lateralization was observed only in 65% of the subjects. Lateralization of depth-by-disparity may therefore be obscured in smaller groups.

  6. Jupiter Clouds in Depth

    NASA Technical Reports Server (NTRS)

    2000-01-01

    [figures removed for brevity, see original site: image panels at 619 nm, 727 nm, and 890 nm]

    Images from NASA's Cassini spacecraft using three different filters reveal cloud structures and movements at different depths in the atmosphere around Jupiter's south pole.

    Cassini's cameras come equipped with filters that sample three wavelengths where methane gas absorbs light. These are in the red at 619 nanometer (nm) wavelength and in the near-infrared at 727 nm and 890 nm. Absorption in the 619 nm filter is weak. It is stronger in the 727 nm band and very strong in the 890 nm band, where 90 percent of the light is absorbed by methane gas. Light in the weakest band can penetrate the deepest into Jupiter's atmosphere. It is sensitive to the amount of cloud and haze down to the pressure of the water cloud, which lies at a depth where the pressure is about 6 times the atmospheric pressure at sea level on Earth. Light in the strongest methane band is absorbed at high altitude and is sensitive only to the ammonia cloud level and higher (pressures less than about one-half of Earth's atmospheric pressure), and the middle methane band is sensitive to the ammonia and ammonium hydrosulfide cloud layers as deep as two times Earth's atmospheric pressure.

    The images shown here demonstrate the power of these filters in studies of cloud stratigraphy. The images cover latitudes from about 15 degrees north at the top down to the southern polar region at the bottom. The left and middle images are ratios, the image in the methane filter divided by the image at a nearby wavelength outside the methane band. Using ratios emphasizes where contrast is due to methane absorption and not to other factors, such as the absorptive properties of the cloud particles, which influence contrast at all wavelengths.

    The most prominent feature seen in all three filters is the polar stratospheric haze that makes Jupiter

  7. High-resolution surface charge image achieved by a multiforce sensor based on a quartz tuning fork in electrostatic force microscope

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-yong; Bao, Jian-bin; Zhang, Hong-hai; Guo, Wen-ming

    2002-08-01

    A multiforce sensor was fabricated by attaching a tiny tungsten tip to a tuning fork. By applying an ac modulation bias to the minitip of the needle sensor, we have achieved a dynamic noncontact-mode electrostatic force microscope with high spatial resolution. It uses the van der Waals force and electrostatic force signals between the microtip and the sample to obtain, respectively, images of the topography and the quantitative surface charge density of an open-gate field-effect transistor simultaneously.

  8. A Unified Approach for Registration and Depth in Depth from Defocus.

    PubMed

    Ben-Ari, Rami

    2014-06-01

    Depth from Defocus (DFD) suggests a simple optical set-up to recover the shape of a scene through imaging with shallow depth of field. Although numerous methods have been proposed for DFD, less attention has been paid to the particular problem of alignment between the captured images. The inherent shift-variant defocus often prevents standard registration techniques from achieving the accuracy needed for successful shape reconstruction. In this paper, we address the DFD and registration problem in a unified framework, exploiting their mutual relation to reach a better solution for both cues. We draw a formal connection between registration and defocus blur, find its limitations and reveal the weakness of the standard isolated approaches of registration and depth estimation. The solution is approached by energy minimization. The efficiency of the associated numerical scheme is justified by showing its equivalence to the celebrated Newton-Raphson method and proof of convergence of the emerged linear system. The computationally intensive approach of DFD, newly combined with simultaneous registration, is handled by GPU computing. Experimental results demonstrate the high sensitivity of the recovered shapes to slight errors in registration and validate the superior performance of the suggested approach over two, separately applying registration and DFD alternatives.

  9. Jupiter Clouds in Depth

    NASA Technical Reports Server (NTRS)

    2000-01-01

    [figures removed for brevity, see original site: image panels at 619 nm, 727 nm, and 890 nm]

    Images from NASA's Cassini spacecraft using three different filters reveal cloud structures and movements at different depths in the atmosphere around Jupiter's south pole.

    Cassini's cameras come equipped with filters that sample three wavelengths where methane gas absorbs light. These are in the red at 619 nanometer (nm) wavelength and in the near-infrared at 727 nm and 890 nm. Absorption in the 619 nm filter is weak. It is stronger in the 727 nm band and very strong in the 890 nm band, where 90 percent of the light is absorbed by methane gas. Light in the weakest band can penetrate the deepest into Jupiter's atmosphere. It is sensitive to the amount of cloud and haze down to the pressure of the water cloud, which lies at a depth where the pressure is about 6 times the atmospheric pressure at sea level on Earth. Light in the strongest methane band is absorbed at high altitude and is sensitive only to the ammonia cloud level and higher (pressures less than about one-half of Earth's atmospheric pressure), and the middle methane band is sensitive to the ammonia and ammonium hydrosulfide cloud layers as deep as two times Earth's atmospheric pressure.

    The images shown here demonstrate the power of these filters in studies of cloud stratigraphy. The images cover latitudes from about 15 degrees north at the top down to the southern polar region at the bottom. The left and middle images are ratios, the image in the methane filter divided by the image at a nearby wavelength outside the methane band. Using ratios emphasizes where contrast is due to methane absorption and not to other factors, such as the absorptive properties of the cloud particles, which influence contrast at all wavelengths.

    The most prominent feature seen in all three filters is the polar stratospheric haze that makes Jupiter

  10. Academic Achievement and the Self-Image of Adolescents with Diabetes Mellitus Type-1 And Rheumatoid Arthritis.

    ERIC Educational Resources Information Center

    Erkolahti, Ritva; Ilonen, Tuula

    2005-01-01

    A total of 69 adolescents, 21 with diabetes mellitus type-1 (DM), 24 with rheumatoid arthritis (RA), and 24 controls matched for sex, age, social background, and living environment, were compared by means of their school grades and the Offer Self-Image Questionnaire. The ages of the children at the time of the diagnosis of the disease and its…

  11. Teaching Image-Processing Concepts in Junior High School: Boys' and Girls' Achievements and Attitudes towards Technology

    ERIC Educational Resources Information Center

    Barak, Moshe; Asad, Khaled

    2012-01-01

    Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these…

  12. SU-E-T-387: Achieving Optimal Patient Setup Imaging and Treatment Workflow Configurations in Multi-Room Proton Centers

    SciTech Connect

    Zhang, H; Prado, K; Langen, K; Yi, B; Mehta, M; Regine, W; D'Souza, W

    2014-06-01

    Purpose: To simulate patient flow in a proton treatment center under uncertainty and to explore the feasibility of treatment preparation rooms for improving patient throughput and cyclotron utilization. Methods: Three center layout scenarios were modeled: (S1: In-Tx room imaging) patient setup and imaging (planar/volumetric) performed in the treatment room; (S2: Patient setup in preparation room) each treatment room was assigned one or more preparation rooms equipped only with lasers, for patient setup and gross patient alignment; and (S3: Patient setup and imaging in preparation room) preparation rooms were equipped with lasers and volumetric imaging for patient setup and gross and fine patient alignment, with a 'snap' image acquired in the treatment room. For each scenario, the number of treatment rooms and the number of preparation rooms serving each treatment room were varied. We examined our results (averaged over 100 sixteen-hour, two-shift working days) by evaluating patient throughput and cyclotron utilization. Results: As the number of treatment rooms increased from 1 to 5, daily patient throughput increased from 32 to 161 (S1), from 29 to 184 (S2), and from 27 to 184 (S3), and cyclotron utilization increased from 13% to 85% (S1), from 12% to 98% (S2), and from 11% to 98% (S3). However, both measures plateaued after 4 rooms. With the preparation rooms, throughput and cyclotron utilization increased by 14% and 15%, respectively. Three preparation rooms were optimal to serve 1-3 treatment rooms and two preparation rooms were optimal to serve 4 or 5 treatment rooms. Conclusion: Patient preparation rooms for patient setup may increase throughput and decrease the need for additional treatment rooms (making them cost effective). The optimal number of preparation rooms serving each gantry room varies with the number of treatment rooms and the patient setup scenario. A 5th treatment room may not be justified by throughput or utilization.

  13. Depth Estimation Using a Sliding Camera.

    PubMed

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. The conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is very expensive and inconvenient in many applications. Another popular choice is utilizing structure-from-motion methods for arbitrarily placed camera(s). However, due to too many degrees of freedom, its computational cost is heavy and its accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that can work satisfactorily with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which can take advantage of the continuously changing camera pose and greatly reduce computation time. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data have illustrated the effectiveness of the proposed algorithm.
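
    The following sketch illustrates the adaptive frame-selection idea: for a pixel with a rough depth estimate, the matching frame along the sliding track is chosen so that the expected disparity stays near a comfortable value. The pin-hole disparity relation is standard; the parameter names and the target disparity are assumptions for illustration, not the authors' algorithm.

    ```python
    def select_frame(depth_estimate, focal_px, slide_step_m,
                     target_disparity_px=8.0, max_offset=60):
        """Pick how far along the sliding track the second frame should be so that
        the expected disparity for a pixel at `depth_estimate` is close to a
        comfortable matching value. Pin-hole geometry: disparity = focal_px *
        baseline / depth, so baseline = target_disparity * depth / focal length."""
        baseline = target_disparity_px * depth_estimate / focal_px
        offset = max(1, min(max_offset, round(baseline / slide_step_m)))
        return offset  # index offset of the frame to match against

    # nearby pixels get a short baseline, distant pixels a long one
    print(select_frame(depth_estimate=0.8, focal_px=900, slide_step_m=0.01))  # -> 1
    print(select_frame(depth_estimate=6.0, focal_px=900, slide_step_m=0.01))  # -> 5
    ```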

  14. Chemical analysis of solid materials by a LIMS instrument designed for space research: 2D elemental imaging, sub-nm depth profiling and molecular surface analysis

    NASA Astrophysics Data System (ADS)

    Moreno-García, Pavel; Grimaudo, Valentine; Riedo, Andreas; Neuland, Maike B.; Tulej, Marek; Broekmann, Peter; Wurz, Peter

    2016-04-01

    Direct quantitative chemical analysis of solid materials with high lateral and vertical resolution is of prime importance for the development of a wide variety of research fields, including, e.g., astrobiology, archeology, mineralogy and electronics, among many others. Nowadays, studies carried out by complementary state-of-the-art analytical techniques such as Auger Electron Spectroscopy (AES), X-ray Photoelectron Spectroscopy (XPS), Secondary Ion Mass Spectrometry (SIMS), Glow Discharge Time-of-Flight Mass Spectrometry (GD-TOF-MS) or Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) provide extensive insight into the chemical composition and allow for a deep understanding of the processes that might have shaped the outermost layers of an analyte through its interaction with the surrounding environment. Nonetheless, these investigations typically employ equipment that is not suitable for implementation on spacecraft, where requirements concerning weight, size and power consumption are very strict. In recent years, Laser Ablation/Ionization Mass Spectrometry (LIMS) has re-emerged as a powerful analytical technique suitable not only for laboratory but also for space applications.[1-3] Its improved performance and measurement capabilities result from the use of cutting-edge ultra-short femtosecond laser sources, improved vacuum technology and fast electronics. Because of its ultimate compactness, simplicity and robustness, it has already proven to be a very suitable analytical tool for elemental and isotope investigations in space research.[4] In this contribution we demonstrate extended capabilities of our LMS instrument by means of three case studies: i) 2D chemical imaging performed on an Allende meteorite sample,[5] ii) depth profiling with unprecedented sub-nm vertical resolution on Cu electrodeposited interconnects[6,7] and iii) preliminary molecular desorption of polymers without the assistance of a matrix or functionalized substrates.[8] On the whole

  15. A high-stability scanning tunneling microscope achieved by an isolated tiny scanner with low voltage imaging capability

    SciTech Connect

    Wang, Qi; Wang, Junting; Lu, Qingyou; Hou, Yubin

    2013-11-15

    We present a novel homebuilt scanning tunneling microscope (STM) with high quality atomic resolution. It is equipped with a small but powerful GeckoDrive piezoelectric motor which drives a miniature and detachable scanning part to implement coarse approach. The scanning part is a tiny piezoelectric tube scanner (industry type: PZT-8, whose d31 coefficient is one of the lowest) housed in a slightly bigger polished sapphire tube, which is riding on and spring clamped against the knife edges of a tungsten slot. The STM so constructed shows low backlash and drift, and high repeatability and immunity to external vibrations. These are confirmed by its low imaging voltages, low distortions in the spiral scanned images, and high atomic resolution quality even when the STM is placed on the ground of the fifth floor without any external or internal vibration isolation devices.

  16. A high-stability scanning tunneling microscope achieved by an isolated tiny scanner with low voltage imaging capability

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Hou, Yubin; Wang, Junting; Lu, Qingyou

    2013-11-01

    We present a novel homebuilt scanning tunneling microscope (STM) with high quality atomic resolution. It is equipped with a small but powerful GeckoDrive piezoelectric motor which drives a miniature and detachable scanning part to implement coarse approach. The scanning part is a tiny piezoelectric tube scanner (industry type: PZT-8, whose d31 coefficient is one of the lowest) housed in a slightly bigger polished sapphire tube, which is riding on and spring clamped against the knife edges of a tungsten slot. The STM so constructed shows low backlash and drift, and high repeatability and immunity to external vibrations. These are confirmed by its low imaging voltages, low distortions in the spiral scanned images, and high atomic resolution quality even when the STM is placed on the ground of the fifth floor without any external or internal vibration isolation devices.

  17. Action Classification by Joint Boosting Using Spatiotemporal and Depth Information

    NASA Astrophysics Data System (ADS)

    Ikemura, Sho; Fujiyoshi, Hironobu

    This paper presents a method for action classification using Joint Boosting with depth information obtained by a TOF camera. Our goal is to classify the actions of a customer who takes goods from the upper, middle, or lower shelf in supermarkets and convenience stores. Our method detects the human region using Pixel State Analysis (PSA) on the depth image stream obtained by the TOF camera, and extracts PSA features that capture human motion together with depth features (the peak depth value) that capture human height. We employ Joint Boosting, a multi-class boosting classification method, to perform the action classification. Since the proposed method employs spatiotemporal and depth features, it can detect the action of taking goods and classify the height of the shelf simultaneously. Experimental results show that our method using the PSA feature and the peak depth value achieved a classification rate of 93.2%, which is 3.1% higher than that of the CHLAC feature and 2.8% higher than that of the ST-patch feature.
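
    A hedged sketch of the feature design described above, with synthetic data standing in for the PSA motion features and the peak-depth (height) cue, and scikit-learn's GradientBoostingClassifier (assumed available) standing in for Joint Boosting, which in the paper is a shared-feature multi-class booster.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier  # stand-in for Joint Boosting

    rng = np.random.default_rng(0)

    def make_sample(shelf):
        """Synthetic feature vector: 32 PSA-like motion features plus one
        peak-depth feature that correlates with shelf height (0=lower, 1=middle, 2=upper)."""
        psa = rng.normal(loc=shelf, scale=1.0, size=32)
        depth_peak = np.array([1.2 + 0.4 * shelf + 0.05 * rng.normal()])
        return np.concatenate([psa, depth_peak])

    labels = rng.integers(0, 3, 300)                  # which shelf was reached for
    X = np.stack([make_sample(s) for s in labels])    # concatenated feature vectors

    clf = GradientBoostingClassifier().fit(X[:240], labels[:240])
    print("held-out accuracy:", clf.score(X[240:], labels[240:]))
    ```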

  18. Remote sensing of stream depths with hydraulically assisted bathymetry (HAB) models

    NASA Astrophysics Data System (ADS)

    Fonstad, Mark A.; Marcus, W. Andrew

    2005-12-01

    This article introduces a technique for using a combination of remote sensing imagery and open-channel flow principles to estimate depths for each pixel in an imaged river. This technique, which we term hydraulically assisted bathymetry (HAB), uses a combination of local stream gage information on discharge, image brightness data, and Manning-based estimates of stream resistance to calculate water depth. The HAB technique does not require ground-truth depth information at the time of flight. HAB can be accomplished with multispectral or hyperspectral data, and therefore can be applied over entire watersheds using standard high spatial resolution satellite or aerial images. HAB also has the potential to be applied retroactively to historic imagery, allowing researchers to map temporal changes in depth. We present two versions of the technique, HAB-1 and HAB-2. HAB-1 is based primarily on the geometry, discharge and velocity relationships of river channels. Manning's equation (assuming average depth approximates the hydraulic radius), the discharge equation, and the assumption that the frequency distribution of depths within a cross-section approximates that of a triangle are combined with discharge data from a local station, width measurements from imagery, and slope measurements from maps to estimate minimum, average and maximum depths at multiple cross-sections. These depths are assigned to pixels of maximum, average, and minimum brightness within the cross-sections to develop a brightness-depth relation to estimate depths throughout the remainder of the river. HAB-2 is similar to HAB-1 in operation, but the assumption that the distribution of depths approximates that of a triangle is replaced by an optical Beer-Lambert law of light absorbance. In this case, the flow equations and the optical equations are used to iteratively scale the river pixel values until their depths produce a discharge that matches that of a nearby gage. R2 values for measured depths
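
    A worked sketch of the HAB-1 core step under stated assumptions: mean depth is obtained from Manning's equation with the hydraulic radius approximated by the mean depth, the maximum depth is taken as roughly twice the mean (one reading of the triangular depth-distribution assumption), and pixel brightness is mapped to depth by piecewise-linear interpolation between the brightest, mean, and darkest river pixels. All numbers and the roughness coefficient n are illustrative, not values from the article.

    ```python
    import numpy as np

    def mean_depth_manning(Q, width, slope, n=0.035):
        """Mean depth from Manning's equation with hydraulic radius ~ mean depth
        (as in HAB-1): Q = (1/n) * (w*d) * d**(2/3) * sqrt(S)
        =>  d = (n*Q / (w*sqrt(S)))**(3/5). SI units; n is an assumed roughness."""
        return (n * Q / (width * np.sqrt(slope))) ** 0.6

    def brightness_to_depth(brightness, d_min, d_avg, d_max):
        """HAB-1-style scaling: the darkest river pixel gets the maximum depth,
        the mean-brightness pixel the average depth, and the brightest pixel the
        minimum depth; other pixels are interpolated piecewise linearly."""
        b = np.asarray(brightness, dtype=float)
        xp = [b.min(), b.mean(), b.max()]   # dark ... bright
        fp = [d_max, d_avg, d_min]          # deep ... shallow
        return np.interp(b, xp, fp)

    Q, w, S = 12.0, 18.0, 0.002             # discharge (m^3/s), width (m), slope
    d_avg = mean_depth_manning(Q, w, S)
    pixel_depths = brightness_to_depth([40, 90, 140, 190],
                                       d_min=0.05, d_avg=d_avg, d_max=2 * d_avg)
    print(round(float(d_avg), 2), np.round(pixel_depths, 2))
    ```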

  19. Fast and accurate auto-focusing algorithm based on the combination of depth from focus and improved depth from defocus.

    PubMed

    Zhang, Xuedian; Liu, Zhaoqing; Jiang, Minshan; Chang, Min

    2014-12-15

    An auto-focus method for digital imaging systems is proposed that combines depth from focus (DFF) and an improved depth from defocus (DFD). The traditional DFD method is improved to be faster, which enables a fast initial focus. The defocus distance is first calculated by the improved DFD method, and the result is then used as the search step in the searching stage of the DFF method. A dynamic focusing scheme is designed for the control software, which is able to eliminate environmental disturbances and other noise so that a fast and accurate focus can be achieved. An experiment is designed to verify the proposed focusing method, and the results show that the method's efficiency is at least 3-5 times higher than that of the traditional DFF method.
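
    The following sketch shows one way such a combination could work, under stated assumptions: a DFD-style estimate of the defocus distance seeds the step size of a DFF hill climb on a Laplacian-variance focus measure, and the step is halved as the search converges. The simulated lens and all function names are assumptions for illustration, not the authors' control software.

    ```python
    import numpy as np

    def focus_measure(img):
        """Sharpness score: variance of a 4-neighbour Laplacian (higher = sharper)."""
        img = np.asarray(img, dtype=float)
        lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
               + img[1:-1, :-2] + img[1:-1, 2:])
        return lap.var()

    def autofocus(capture, z0, step_from_dfd, fine_step, max_iters=100):
        """DFF hill climb seeded by the DFD defocus-distance estimate: the DFD
        result sets the initial search step, which is halved whenever neither
        direction improves the focus measure, until `fine_step` is reached."""
        z, best, step = z0, focus_measure(capture(z0)), step_from_dfd
        for _ in range(max_iters):
            moved = False
            for direction in (+1, -1):
                z_try = z + direction * step
                score = focus_measure(capture(z_try))
                if score > best:
                    z, best, moved = z_try, score, True
                    break
            if not moved:
                if step <= fine_step:
                    return z
                step /= 2.0          # shrink the search step and refine
        return z

    # Simulated lens: image contrast (hence sharpness) peaks at the in-focus position.
    rng = np.random.default_rng(0)
    texture = rng.normal(size=(64, 64))
    z_focus = 3.7
    capture = lambda z: np.exp(-((z - z_focus) / 2.0) ** 2) * texture

    # Pretend the improved-DFD stage estimated a defocus distance of ~1.5 units.
    print(round(autofocus(capture, z0=0.0, step_from_dfd=1.5, fine_step=0.01), 2))
    ```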

  20. Depth perception of illusory surfaces.

    PubMed

    Kogo, Naoki; Drożdżewska, Anna; Zaenen, Peter; Alp, Nihan; Wagemans, Johan

    2014-03-01

    The perception of an illusory surface, a subjectively perceived surface that is not given in the image, is one of the most intriguing phenomena in vision. It strongly influences the perception of some fundamental properties, namely, depth, lightness and contours. Recently, we suggested (1) that the context-sensitive mechanism of depth computation plays a key role in creating the illusion, (2) that the illusory lightness perception can be explained by an influence of depth perception on the lightness computation, and (3) that the perception of variations of the Kanizsa figure can be well-reproduced by implementing these principles in a model (Kogo, Strecha, et al., 2010). However, depth perception, lightness perception, contour perception, and their interactions can be influenced by various factors. It is essential to measure the differences between the variation figures in these aspects separately to further understand the mechanisms. As a first step, we report here the results of a new experimental paradigm to compare the depth perception of the Kanizsa figure and its variations. One of the illusory figures was presented side-by-side with a non-illusory variation whose stereo disparities were varied. Participants had to decide in which of these two figures the central region appeared closer. The results indicate that the depth perception of the illusory surface was indeed different in the variation figures. Furthermore, there was a non-linear interaction between the occlusion cues and stereo disparity cues. Implications of the results for the neuro-computational mechanisms are discussed.

  1. Additive and subtractive transparent depth displays

    NASA Astrophysics Data System (ADS)

    Kooi, Frank L.; Toet, Alexander

    2003-09-01

    Image fusion is the generally preferred method to combine two or more images for visual display on a single screen. We demonstrate that perceptual image separation may be preferable to perceptual image fusion for the combined display of enhanced and synthetic imagery. In this context, image separation refers to the simultaneous presentation of images on different depth planes of a single display. Image separation allows the user to recognize the source of the information that is displayed. This can be important because synthetic images are more liable to flaws. We have examined methods to optimize perceptual image separation. A true depth difference between enhanced and synthetic imagery works quite well. A standard stereoscopic display based on convergence is less suitable since the two images tend to interfere: the image behind is masked (occluded) by the image in front, which results in poor viewing comfort. This effect places 3D systems based on 3D glasses, as well as most autostereoscopic displays, at a serious disadvantage. A 3D display based on additive or subtractive transparency is acceptable: both the perceptual separation and the viewing comfort are good, but the color of objects depends on the color in the other depth layer(s). A combined additive and subtractive transparent display eliminates this disadvantage and is most suitable for the combined display of enhanced and synthetic imagery. We suggest that the development of such a display system is of greater practical value than increasing the number of depth planes in autostereoscopic displays.

  2. Correlation Plenoptic Imaging

    NASA Astrophysics Data System (ADS)

    D'Angelo, Milena; Pepe, Francesco V.; Garuccio, Augusto; Scarcelli, Giuliano

    2016-06-01

    Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.

  3. Imaging Live Cells at the Nanometer-Scale with Single-Molecule Microscopy: Obstacles and Achievements in Experiment Optimization for Microbiology

    PubMed Central

    Haas, Beth L.; Matson, Jyl S.; DiRita, Victor J.; Biteen, Julie S.

    2015-01-01

    Single-molecule fluorescence microscopy enables biological investigations inside living cells to achieve millisecond- and nanometer-scale resolution. Although single-molecule-based methods are becoming increasingly accessible to non-experts, optimizing new single-molecule experiments can be challenging, in particular when super-resolution imaging and tracking are applied to live cells. In this review, we summarize common obstacles to live-cell single-molecule microscopy and describe the methods we have developed and applied to overcome these challenges in live bacteria. We examine the choice of fluorophore and labeling scheme, approaches to achieving single-molecule levels of fluorescence, considerations for maintaining cell viability, and strategies for detecting single-molecule signals in the presence of noise and sample drift. We also discuss methods for analyzing single-molecule trajectories and the challenges presented by the finite size of a bacterial cell and the curvature of the bacterial membrane. PMID:25123183

  4. Focal overlap gating in velocity map imaging to achieve high signal-to-noise ratio in photo-ion pump-probe experiments

    NASA Astrophysics Data System (ADS)

    Shivaram, Niranjan; Champenois, Elio G.; Cryan, James P.; Wright, Travis; Wingard, Taylor; Belkacem, Ali

    2016-12-01

    We demonstrate a technique in velocity map imaging (VMI) that allows spatial gating of the laser focal overlap region in time-resolved pump-probe experiments. This significantly enhances the signal-to-noise ratio by eliminating background signal arising outside the region of spatial overlap of the pump and probe beams. This enhancement is achieved by tilting the laser beams with respect to the surface of the VMI electrodes, which creates a gradient in flight time for particles born at different points along the beam. By suitably pulsing our microchannel plate detector, we can select particles born only where the laser beams overlap. This spatial gating in velocity map imaging can benefit nearly all photo-ion pump-probe VMI experiments, especially when extreme-ultraviolet light or X-rays, which produce large background signals on their own, are involved.

  5. Water surface depth instrument

    NASA Technical Reports Server (NTRS)

    Davis, Q. C., IV

    1970-01-01

    Measurement gage provides instant visual indication of water depth based on capillary action and light diffraction in a group of solid, highly polished polymethyl methacrylate rods. Rod lengths are adjustable to measure various water depths in any desired increments.

  6. The Depths from Skin to the Major Organs at Chest Acupoints of Pediatric Patients

    PubMed Central

    Ma, Yi-Chun; Peng, Ching-Tien; Huang, Yu-Chuen; Lin, Hung-Yi; Lin, Jaung-Geng

    2015-01-01

    Background. Acupuncture is applied to treat numerous diseases in pediatric patients, but few reports have been published on the depths to which it is safe to insert needles at acupoints in pediatric patients. We evaluated the depths to which acupuncture needles can be inserted safely at chest acupoints in pediatric patients and the variation in safe depth according to sex, age, body weight, and body mass index (BMI). Methods. We retrospectively studied computed tomography (CT) images of pediatric patients aged 4 to 18 years who had undergone chest CT at China Medical University Hospital from December 2004 to May 2013. The safe depth of chest acupoints was measured directly from the CT images. The relationships between the safe depth of these acupoints and sex, age, body weight, and BMI were analyzed. Results. The results demonstrated significant differences in depth between boys and girls at KI25 (kidney meridian), ST16 (stomach meridian), ST18, SP17 (spleen meridian), SP19, SP20, PC1 (pericardium meridian), LU2 (lung meridian), and GB22 (gallbladder meridian). Safe depth also differed significantly among the age groups (P < 0.001), weight groups (P < 0.05), and BMI groups (P < 0.05). Conclusion. Physicians should take these large variations in safe needle depth into account during acupuncture in order to achieve the optimal therapeutic effect and prevent complications. PMID:26457105

  7. 7.0-T Magnetic Resonance Imaging Characterization of Acute Blood-Brain-Barrier Disruption Achieved with Intracranial Irreversible Electroporation

    PubMed Central

    Garcia, Paulo A.; Rossmeisl, John H.; Robertson, John L.; Olson, John D.; Johnson, Annette J.; Ellis, Thomas L.; Davalos, Rafael V.

    2012-01-01

    The blood-brain-barrier (BBB) presents a significant obstacle to the delivery of systemically administered chemotherapeutics for the treatment of brain cancer. Irreversible electroporation (IRE) is an emerging technology that uses pulsed electric fields for the non-thermal ablation of tumors. We hypothesized that there is a minimal electric field at which BBB disruption occurs surrounding an IRE-induced zone of ablation and that this transient response can be measured using gadolinium (Gd) uptake as a surrogate marker for BBB disruption. The study was performed in a Good Laboratory Practices (GLP) compliant facility and had Institutional Animal Care and Use Committee (IACUC) approval. IRE ablations were performed in vivo in normal rat brain (n = 21) with 1-mm electrodes (0.45 mm diameter) separated by an edge-to-edge distance of 4 mm. We used an ECM830 pulse generator to deliver ninety 50-μs pulse treatments (0, 200, 400, 600, 800, and 1000 V/cm) at 1 Hz. The effects of applied electric fields and timing of Gd administration (−5, +5, +15, and +30 min) were assessed by systematically characterizing IRE-induced regions of cell death and BBB disruption with 7.0-T magnetic resonance imaging (MRI) and histopathologic evaluations. Statistical analysis on the effect of applied electric field and Gd timing was conducted via Fit of Least Squares with α = 0.05 and linear regression analysis. The focal nature of IRE treatment was confirmed with 3D MRI reconstructions with linear correlations between volume of ablation and electric field. Our results also demonstrated that IRE is an ablation technique that kills brain tissue in a focal manner depicted by MRI (n = 16) and transiently disrupts the BBB adjacent to the ablated area in a voltage-dependent manner as seen with Evan's Blue (n = 5) and Gd administration. PMID:23226293

  8. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-07

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second.
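
    A minimal compressive-sensing sketch in the spirit of the single-pixel measurement model described above: random ±1 patterns produce far fewer measurements than pixels, and a simple ISTA (soft-thresholded gradient) solver recovers a sparse toy scene. The pattern design, solver, and parameters are illustrative stand-ins for the paper's reconstruction, which additionally recovers depth from photon arrival times.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 16 * 16, 160                      # 16x16 scene, ~60% as many patterns as pixels

    scene = np.zeros((16, 16)); scene[5:9, 6:11] = 1.0   # sparse toy scene
    x_true = scene.ravel()

    A = rng.choice([-1.0, 1.0], size=(m, n))             # +/-1 single-pixel patterns
    y = A @ x_true + 0.01 * rng.normal(size=m)           # noisy bucket-detector measurements

    # ISTA: proximal-gradient solver for 0.5*||y - Ax||^2 + lam*||x||_1, a simple
    # stand-in for the sparsity-promoting solvers used in compressive lidar.
    L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of the gradient
    lam = 0.02 * np.max(np.abs(A.T @ y))                 # heuristic regularization weight
    x_hat = np.zeros(n)
    for _ in range(500):
        x_hat = x_hat + A.T @ (y - A @ x_hat) / L        # gradient step
        x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - lam / L, 0.0)  # soft threshold

    print("relative error:", round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 3))
    ```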

  9. Learning in Depth: Students as Experts

    ERIC Educational Resources Information Center

    Egan, Kieran; Madej, Krystina

    2009-01-01

    Nearly everyone who has tried to describe an image of the educated person, from Plato to the present, includes at least two requirements: first, educated people must be widely knowledgeable and, second, they must know something in depth. The authors would like to advocate a somewhat novel approach to "learning in depth" (LiD) that seems…

  10. Temporal and Spatial Denoising of Depth Maps

    PubMed Central

    Lin, Bor-Shing; Su, Mei-Ju; Cheng, Po-Hsun; Tseng, Po-Jui; Chen, Sao-Jie

    2015-01-01

    This work presents a procedure for refining depth maps acquired using RGB-D (depth) cameras. With numerous new structured-light RGB-D cameras, acquiring high-resolution depth maps has become easy. However, there are problems such as undesired occlusion, inaccurate depth values, and temporal variation of pixel values when using these cameras. In this paper, a method based on exemplar-based inpainting is proposed to remove artefacts in depth maps obtained using RGB-D cameras. Exemplar-based inpainting has been used to repair an object-removed image. The concept underlying this inpainting method is similar to that underlying the procedure for padding the occlusions in the depth data obtained using RGB-D cameras. Therefore, our method enhances and modifies the inpainting method to refine the quality of RGB-D depth data. To evaluate the proposed method, it was tested on the Tsukuba Stereo Dataset, which contains a 3D video with ground-truth depth maps, occlusion maps, and RGB images; the peak signal-to-noise ratio and the computational time were used as the evaluation metrics. Moreover, a set of self-recorded RGB-D depth maps and their refined versions are presented to show the effectiveness of the proposed method. PMID:26230696

  11. Temporal and Spatial Denoising of Depth Maps.

    PubMed

    Lin, Bor-Shing; Su, Mei-Ju; Cheng, Po-Hsun; Tseng, Po-Jui; Chen, Sao-Jie

    2015-07-29

    This work presents a procedure for refining depth maps acquired using RGB-D (depth) cameras. With numerous new structured-light RGB-D cameras, acquiring high-resolution depth maps has become easy. However, there are problems such as undesired occlusion, inaccurate depth values, and temporal variation of pixel values when using these cameras. In this paper, a method based on exemplar-based inpainting is proposed to remove artefacts in depth maps obtained using RGB-D cameras. Exemplar-based inpainting has been used to repair an object-removed image. The concept underlying this inpainting method is similar to that underlying the procedure for padding the occlusions in the depth data obtained using RGB-D cameras. Therefore, our method enhances and modifies the inpainting method to refine the quality of RGB-D depth data. To evaluate the proposed method, it was tested on the Tsukuba Stereo Dataset, which contains a 3D video with ground-truth depth maps, occlusion maps, and RGB images; the peak signal-to-noise ratio and the computational time were used as the evaluation metrics. Moreover, a set of self-recorded RGB-D depth maps and their refined versions are presented to show the effectiveness of the proposed method.
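
    The overall workflow (detect missing depth pixels, then fill them from their surroundings) can be illustrated with a much simpler stand-in than the exemplar-based method used in the paper. The sketch below, assuming a 16-bit depth image where zero marks occluded pixels and a hypothetical input file name, uses OpenCV's diffusion-based inpainting purely for illustration.

    ```python
    # Illustrative hole filling for an RGB-D depth map (not the paper's exemplar-based method).
    import cv2
    import numpy as np

    def fill_depth_holes(depth_u16, radius=5):
        """depth_u16: uint16 depth image where 0 marks missing/occluded pixels."""
        mask = (depth_u16 == 0).astype(np.uint8)          # 1 where depth is missing
        # cv2.inpaint works on 8-bit images, so scale depth into [0, 255] first.
        d_max = max(int(depth_u16.max()), 1)
        depth_8u = (depth_u16.astype(np.float32) / d_max * 255.0).astype(np.uint8)
        filled_8u = cv2.inpaint(depth_8u, mask, radius, cv2.INPAINT_TELEA)
        return (filled_8u.astype(np.float32) / 255.0 * d_max).astype(np.uint16)

    depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)  # hypothetical input file
    refined = fill_depth_holes(depth)
    cv2.imwrite("depth_filled.png", refined)
    ```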

  12. Applicability of compressive sensing on three-dimensional terahertz imagery for in-depth object defect detection and recognition using a dedicated semisupervised image processing methodology

    NASA Astrophysics Data System (ADS)

    Brook, Anna; Cristofani, Edison; Becquaert, Mathias; Lauwens, Ben; Jonuscheit, Joachim; Vandewal, Marijke

    2013-04-01

    The quality control of composite multilayered materials and structures using nondestructive tests is of high interest for numerous applications in the aerospace and aeronautics industry. One of the established nondestructive methods uses microwaves to reveal defects inside a three-dimensional (3-D) object. Recently, there has been a tendency to extrapolate this method to higher frequencies (going to the subterahertz spectrum), which could lead to higher resolution in the obtained 3-D images. Working at higher frequencies poses two challenges: handling the increased data rate, and efficiently and effectively processing and evaluating the obtained 3-D imagery for defect detection and recognition. To deal with these two challenges, we combine compressive sensing (for data rate reduction) with a dedicated image processing methodology for a fast, accurate, and robust quality evaluation of the object under test. We describe the methodology in detail and evaluate the results using subterahertz data acquired from two calibration samples with a frequency modulated continuous wave system. The applicability of compressive sensing within this context is discussed as well as the quality of the image processing methodology dealing with the reconstructed images.

  13. Terahertz interferometric synthetic aperture tomography for confocal imaging systems.

    PubMed

    Heimbeck, M S; Marks, D L; Brady, D; Everitt, H O

    2012-04-15

    Terahertz (THz) interferometric synthetic aperture tomography (TISAT) for confocal imaging within extended objects is demonstrated by combining attributes of synthetic aperture radar and optical coherence tomography. Algorithms recently devised for interferometric synthetic aperture microscopy are adapted to account for the diffraction- and defocusing-induced spatially varying THz beam width characteristic of narrow depth of focus, high-resolution confocal imaging. A frequency-swept two-dimensional TISAT confocal imaging instrument rapidly achieves in-focus, diffraction-limited resolution over a depth 12 times larger than the instrument's depth of focus in a manner that may be easily extended to three dimensions and greater depths.

  14. Noninvasive Optical Imaging and In Vivo Cell Tracking of Indocyanine Green Labeled Human Stem Cells Transplanted at Superficial or In-Depth Tissue of SCID Mice

    PubMed Central

    Sabapathy, Vikram; Mentam, Jyothsna; Jacob, Paul Mazhuvanchary; Kumar, Sanjay

    2015-01-01

    Stem cell based therapies hold great promise for the treatment of human diseases; however, results from several recent clinical studies have not shown the level of efficacy required for their use as a first-line therapy, because in these studies the fate of the transplanted cells is often unknown. Thus monitoring the real-time fate of in vivo transplanted cells is essential to validate the full potential of stem cell based therapy. Recent studies have shown how real-time in vivo molecular imaging has helped in identifying hurdles towards clinical translation and designing potential strategies that may contribute to successful transplantation of stem cells and improved outcomes. At present, there are no cost effective and efficient labeling techniques for tracking the cells under in vivo conditions. Indocyanine green (ICG) is a safer, economical, and superior labelling technique for in vivo optical imaging. ICG is an FDA-approved agent and decades of usage have clearly established the effectiveness of ICG for human clinical applications. In this study, we have optimized the ICG labelling conditions for noninvasive optical imaging and demonstrated that ICG labelled cells can be successfully used for in vivo cell tracking applications in SCID mice injury models. PMID:26240573

  15. Achieving high-resolution soft-tissue imaging with cone-beam CT: a two-pronged approach for modulation of x-ray fluence and detector gain

    NASA Astrophysics Data System (ADS)

    Graham, S. A.; Siewerdsen, J. H.; Moseley, D. J.; Keller, H.; Shkumat, N. A.; Jaffray, D. A.

    2005-04-01

    Cone-beam computed tomography (CBCT) presents a highly promising and challenging advanced application of flat-panel detectors (FPDs). The great advantage of this adaptable technology is in the potential for sub-mm 3D spatial resolution in combination with soft-tissue detectability. While the former is achieved naturally by CBCT systems incorporating modern FPD designs (e.g., 200 - 400 um pixel pitch), the latter presents a significant challenge due to limitations in FPD dynamic range, large field of view, and elevated levels of x-ray scatter in typical CBCT configurations. We are investigating a two-pronged strategy for maximizing soft-tissue detectability in CBCT: 1) front-end solutions, including novel beam modulation designs (viz., spatially varying compensators) that alleviate detector dynamic range requirements, reduce x-ray scatter, and better distribute imaging dose in a manner suited to soft-tissue visualization throughout the field of view; and 2) back-end solutions, including implementation of an advanced FPD design (Varian PaxScan 4030CB) that features dual-gain and dynamic gain switching that effectively extends detector dynamic range to 18 bits. These strategies are explored quantitatively on CBCT imaging platforms developed in our laboratory, including a dedicated CBCT bench and a mobile isocentric C-arm (Siemens PowerMobil). Pre-clinical evaluation of improved soft-tissue visibility was carried out in phantom and patient imaging with the C-arm device. Incorporation of these strategies begins to reveal the full potential of CBCT for soft-tissue visualization, an essential step in realizing broad utility of this adaptable technology for diagnostic and image-guided procedures.

  16. Rifting-to-drifting transition of the South China Sea: early Cenozoic syn-rifting deposition imaged with prestack depth migration

    NASA Astrophysics Data System (ADS)

    Song, T.; Li, C.; Li, J.

    2012-12-01

    One of the major unsolved questions of the opening of the South China Sea (SCS) is its opening sequences and episodes. It has been suggested, for example, that the opening of the East and Northwest Sub-basins predated, or was at least synchronous with, that of the Southwest Sub-basin, a model contrasting with some others in which an earlier opening in the Southwest Sub-basin is preferred. Difficulties in understanding the perplexing relationships between different sub-basins are often compounded by contradictory evidence leading to different interpretations. Here we carry out pre-stack depth migration of a recently acquired multichannel reflection seismic profile from the Southwest Sub-basin of the SCS in order to reveal complicated subsurface structures and strong lateral velocity variations associated with a thick syn-rifting sequence on the southern margin of the Southwest Sub-basin. Combined with gravimetric and magnetic inversion and modeling, this depth section helps us understand the complicated transitional processes from continental rifting to seafloor spreading. This syn-rifting sequence is found to be extremely thick, over 2 seconds in two-way travel time, and is located directly within the continent-ocean transition zone. It is bounded landwards by a seaward dipping fault, and tapers out seaward. The top of this sequence is an erosional truncation, representing mainly the Oligocene-Miocene unconformity landward but a slightly older unconformity on the seaward side. Stronger erosions of this sequence are found toward the ocean basin. The sequence itself is severely faulted by a group of seaward dipping faults developed mainly within the sequence. The overall deformation style suggests a successive episode of rifting, faulting, compression, tilting, and erosion, prior to seafloor spreading. Integrating information from gravity anomalies and seismic velocities, we interpret that this sequence represents a syn-rifting sequence developed during a long period

  17. Assessment of imaging with extended depth-of-field by means of the light sword lens in terms of visual acuity scale

    PubMed Central

    Kakarenko, Karol; Ducin, Izabela; Grabowiecki, Krzysztof; Jaroszewicz, Zbigniew; Kolodziejczyk, Andrzej; Mira-Agudelo, Alejandro; Petelczyc, Krzysztof; Składowska, Aleksandra; Sypek, Maciej

    2015-01-01

    We present outcomes of an imaging experiment using the refractive light sword lens (LSL) as a contact lens in an optical system that serves as a simplified model of the presbyopic eye. The results show that the LSL produces significant improvements in visual acuity of the simplified presbyopic eye model over a wide range of defocus. Therefore, this element can be an interesting alternative for the multifocal contact and intraocular lenses currently used in ophthalmology. The second part of the article discusses possible modifications of the LSL profile in order to render it more suitable for fabrication and ophthalmological applications. PMID:26137376

  18. Combination of an optical parametric oscillator and quantum-dots 655 to improve imaging depth of vasculature by intravital multicolor two-photon microscopy.

    PubMed

    Ricard, Clément; Lamasse, Lisa; Jaouen, Alexandre; Rougon, Geneviève; Debarbieux, Franck

    2016-06-01

    Simultaneous imaging of different cell types and structures in the mouse central nervous system (CNS) by intravital two-photon microscopy requires the characterization of fluorophores and advances in approaches to visualize them. We describe the use of a two-photon infrared illumination generated by an optical parametric oscillator (OPO) on quantum-dots 655 (QD655) nanocrystals to improve resolution of the vasculature deeper in the mouse brain both in healthy and pathological conditions. Moreover, QD655 signal can be unmixed from the DsRed2, CFP, EGFP and EYFP fluorescent proteins, which enhances the panel of multi-parametric correlative investigations both in the cortex and the spinal cord.

  19. Nonsingle viewpoint omni-stereo depth estimation via space layer labeling

    NASA Astrophysics Data System (ADS)

    Chen, Wang; Zhang, Maojun; Xiong, Zhihui

    2011-04-01

    An omni-directional stereo system has a wider field-of-view than conventional cameras, and is widely used in many applications such as robot navigation, depth estimation, and 3D reconstruction. Existing approaches usually use single viewpoint (SVP) systems as the imaging sensor. However, the literature shows that an efficient SVP of an omni-directional system can only be achieved with precisely aligned mirrors of parabolic or hyperbolic profile. This enforces rigorous restrictions on the configuration of camera and mirrors. In fact, some other profiles, though they do not have the SVP property, are desirable for certain reasons such as lower cost and more practical implementation. Therefore, in this paper, we propose both a typical nonsingle viewpoint (non-SVP) omni-directional stereo sensor and its corresponding depth estimation method based on graph-cuts optimization. The sensor comprises a perspective camera and two separate reflective mirrors that could be any radially-symmetric ones. To formulate the depth estimation more consistently with the proposed sensor, we divide the depth space of scenes with a sequence of virtual coaxial cylindrical layers, and model depth estimation as a labeling problem. In the labeling procedure, by considering the characteristics of an omni-directional image, we further devise a novel tangential-neighborhood system, a radial-neighborhood system, and a depth-gradual-changing smoothness constraint, which perform better than traditional ones. Depth estimation and 3D reconstruction for both synthetic and real scenes demonstrate the effectiveness of the proposed method.

  20. Sampling Depths, Depth Shifts, and Depth Resolutions for Bi(n)(+) Ion Analysis in Argon Gas Cluster Depth Profiles.

    PubMed

    Havelund, R; Seah, M P; Gilmore, I S

    2016-03-10

    Gas cluster sputter depth profiling is increasingly used for the spatially resolved chemical analysis and imaging of organic materials. Here, a study is reported of the sampling depth in secondary ion mass spectrometry depth profiling. It is shown that the effects of the sampling depth lead to apparent shifts in depth profiles of Irganox 3114 delta layers in Irganox 1010 sputtered, in the dual beam mode, using 5 keV Ar₂₀₀₀⁺ ions and analyzed with Bi(q+), Bi₃(q+) and Bi₅(q+) ions (q = 1 or 2) with energies between 13 and 50 keV. The profiles show sharp delta layers, broadened from their intrinsic 1 nm thickness to full widths at half-maxima (fwhm's) of 8-12 nm. For different secondary ions, the centroids of the measured delta layers are shifted deeper or shallower by up to 3 nm from the position measured for the large, 564.36 Da (C₃₃H₄₆N₃O₅⁻) characteristic ion for Irganox 3114 used to define a reference position. The shifts are linear with the Bi(n)(q+) beam energy and are greatest for Bi₃(q+), slightly less for Bi₅(q+) with its wider or less deep craters, and significantly less for Bi(q+) where the sputtering yield is very low and the primary ion penetrates more deeply. The shifts increase the fwhm's of the delta layers in a manner consistent with a linearly falling generation and escape depth distribution function (GEDDF) for the emitted secondary ions, relevant for a paraboloid shaped crater. The total depth of this GEDDF is 3.7 times the delta layer shifts. The greatest effect is for the peaks with the greatest shifts, i.e. Bi₃(q+) at the highest energy, and for the smaller fragments. It is recommended that low energies be used for the analysis beam and that carefully selected, large, secondary ion fragments be used for measuring depth distributions, or that the analysis be made in the single beam mode using the sputtering Ar cluster ions also for analysis.

  1. Imaging medical imaging

    NASA Astrophysics Data System (ADS)

    Journeau, P.

    2015-03-01

    This paper presents progress on imaging the research field of Imaging Informatics, mapped as the clustering of its communities together with their main results by applying a process to produce a dynamical image of the interactions between their results and their common object(s) of research. The basic side draws from fundamental research on the concept of dimensions and projective space spanning several streams of research about three-dimensional perceptivity and re-cognition and on their relation and reduction to spatial dimensionality. The application results in an N-dimensional mapping in Bio-Medical Imaging, with dimensions such as inflammatory activity, MRI acquisition sequencing, spatial resolution (voxel size), spatiotemporal dimension inferred, toxicity, depth penetration, sensitivity, temporal resolution, wavelength, imaging duration, etc. Each field is represented through the projection of papers' and projects' `discriminating' quantitative results onto the specific N-dimensional hypercube of relevant measurement axes, such as listed above and before reduction. Past published differentiating results are represented as red stars, achieved unpublished results as purple spots and projects at diverse progress advancement levels as blue pie slices. The goal of the mapping is to show the dynamics of the trajectories of the field in its own experimental frame and their direction, speed and other characteristics. We conclude with an invitation to participate and show a sample mapping of the dynamics of the community and a tentative predictive model from community contribution.

  2. Stereoscopic depth constancy

    PubMed Central

    Guan, Phillip

    2016-01-01

    Depth constancy is the ability to perceive a fixed depth interval in the world as constant despite changes in viewing distance and the spatial scale of depth variation. It is well known that the spatial frequency of depth variation has a large effect on threshold. In the first experiment, we determined that the visual system compensates for this differential sensitivity when the change in disparity is suprathreshold, thereby attaining constancy similar to contrast constancy in the luminance domain. In a second experiment, we examined the ability to perceive constant depth when the spatial frequency and viewing distance both changed. To attain constancy in this situation, the visual system has to estimate distance. We investigated this ability when vergence, accommodation and vertical disparity are all presented accurately and therefore provided veridical information about viewing distance. We found that constancy is nearly complete across changes in viewing distance. Depth constancy is most complete when the scale of the depth relief is constant in the world rather than when it is constant in angular units at the retina. These results bear on the efficacy of algorithms for creating stereo content. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269596

  3. Depth sensitivity analysis of functional near-infrared spectroscopy measurement using three-dimensional Monte Carlo modelling-based magnetic resonance imaging.

    PubMed

    Mansouri, Chemseddine; L'huillier, Jean-Pierre; Kashou, Nasser H; Humeau, Anne

    2010-05-01

    Theoretical analysis of spatial distribution of near-infrared light propagation in head tissues is very important in brain function measurement, since it is impossible to measure the effective optical path length of the detected signal or the effect of optical fibre arrangement on the regions of measurement or its sensitivity. In this study a realistic head model generated from structure data from magnetic resonance imaging (MRI) was introduced into a three-dimensional Monte Carlo code and the sensitivity of functional near-infrared measurement was analysed. The effects of the distance between source and detector, and of the optical properties of the probed tissues, on the sensitivity of the optical measurement to deep layers of the adult head were investigated. The spatial sensitivity profiles of photons in the head, the so-called banana shape, and the partial mean optical path lengths in the skin-scalp and brain tissues were calculated, so that the contribution of different parts of the head to near-infrared spectroscopy signals could be examined. It was shown that the signal detected in brain function measurements was greatly affected by the heterogeneity of the head tissue and its scattering properties, particularly for the shorter interfibre distances.

  4. The depth measurement of internal defect based on laser speckle shearing interference

    NASA Astrophysics Data System (ADS)

    Peng, Yanhua; Liu, Guixiong; Quan, Yanming; Zeng, Qilin

    2017-07-01

    Speckle shearing interference has been widely used as a non-destructive testing (NDT) tool for its advantages such as full-field, non-contacting measurement. It reveals internal defects of an object by identifying defect-induced deformation anomalies. The location and size of internal defects are critical factors in determining the stability of the performance and the service life of the workpiece. This paper puts forward a method for measuring the depth of internal defects based on laser speckle shearing interference. The defect depth is measured by establishing a mechanical model that relates defect depth, out-of-plane displacement, and load conditions, and combining it with the relevant image information obtained from the speckle pattern. The measurement error is less than 10%. The experiments demonstrate good consistency between the actual depth and the result obtained by the proposed method.

  5. Cathode depth sensing in CZT detectors

    NASA Astrophysics Data System (ADS)

    Hong, JaeSub; Bellm, Eric C.; Grindlay, Jonathan E.; Narita, Tomohiko

    2004-02-01

    Measuring the depth of interaction in thick Cadmium-Zinc-Telluride (CZT) detectors allows improved imaging and spectroscopy for hard X-ray imaging above 100 keV. The Energetic X-ray Imaging Survey Telescope (EXIST) will employ relatively thick (5 - 10 mm) CZT detectors, which are required to perform the broad energy-band sky survey. Interaction depth information is needed to correct events to the detector "focal plane" for correct imaging and can be used to improve the energy resolution of the detector at high energies by allowing event-based corrections for incomplete charge collection. Background rejection is also improved by allowing low energy events from the rear and sides of the detector to be rejected. We present experimental results of interaction depth sensing in a 5 mm thick pixellated Au-contact IMARAD CZT detector. The depth sensing was done by making simultaneous measurements of cathode and anode signals, where the interaction depth at a given energy is proportional to the ratio of cathode/anode signals. We demonstrate how a simple empirical formula describing the event distributions in the cathode/anode signal space can dramatically improve the energy resolution. We also estimate the energy and depth resolution of the detector as a function of the energy and the interaction depth. We also show a depth-sensing prototype system currently under development for EXIST in which cathode signals from 8, 16 or 32 crystals can be read-out by a small multi-channel ASIC board that is vertically edge-mounted on the cathode electrode along every second CZT crystal boundary. This allows CZT crystals to be tiled contiguously with minimum impact on throughput of incoming photons. The robust packaging is crucial in EXIST, which will employ very large area imaging CZT detector arrays.
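
    The core of the ratio-based depth sensing described above can be sketched in a few lines: the cathode/anode signal ratio is, to first order, proportional to the interaction depth measured from the anode, and an empirical depth-dependent correction then improves the energy estimate. The correction function below is a placeholder, not the paper's actual empirical formula.

    ```python
    # Minimal sketch of cathode/anode ratio depth sensing in a planar CZT detector.
    import numpy as np

    DETECTOR_THICKNESS_MM = 5.0

    def interaction_depth(cathode, anode):
        """Estimate depth from the anode plane (mm) from simultaneous pulse heights."""
        ratio = np.clip(cathode / anode, 0.0, 1.0)      # ~0 near anode, ~1 near cathode
        return ratio * DETECTOR_THICKNESS_MM

    def depth_corrected_energy(anode, depth_mm, loss_per_mm=0.01):
        """Toy charge-collection correction: boost events that interacted far from
        the cathode, where trapping reduces the induced anode signal (illustrative)."""
        return anode / (1.0 - loss_per_mm * (DETECTOR_THICKNESS_MM - depth_mm))

    cathode = np.array([0.20, 0.45, 0.80])              # arbitrary example pulse heights
    anode = np.array([1.00, 1.00, 1.00])
    d = interaction_depth(cathode, anode)
    print(d, depth_corrected_energy(anode, d))
    ```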

  6. Depth-encoded all-fiber swept source polarization sensitive OCT

    PubMed Central

    Wang, Zhao; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Lee, ByungKun; Choi, WooJhon; Potsaid, Benjamin; Liu, Jonathan; Jayaraman, Vijaysekhar; Cable, Alex; Kraus, Martin F.; Liang, Kaicheng; Hornegger, Joachim; Fujimoto, James G.

    2014-01-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of conventional OCT and can assess depth-resolved tissue birefringence in addition to intensity. Most existing PS-OCT systems are relatively complex and their clinical translation remains difficult. We present a simple and robust all-fiber PS-OCT system based on swept source technology and polarization depth-encoding. Polarization multiplexing was achieved using a polarization maintaining fiber. Polarization sensitive signals were detected using fiber based polarization beam splitters and polarization controllers were used to remove the polarization ambiguity. A simplified post-processing algorithm was proposed for speckle noise reduction relaxing the demand for phase stability. We demonstrated systems design for both ophthalmic and catheter-based PS-OCT. For ophthalmic imaging, we used an optical clock frequency doubling method to extend the imaging range of a commercially available short cavity light source to improve polarization depth-encoding. For catheter based imaging, we demonstrated 200 kHz PS-OCT imaging using a MEMS-tunable vertical cavity surface emitting laser (VCSEL) and a high speed micromotor imaging catheter. The system was demonstrated in human retina, finger and lip imaging, as well as ex vivo swine esophagus and cardiovascular imaging. The all-fiber PS-OCT is easier to implement and maintain compared to previous PS-OCT systems and can be more easily translated to clinical applications due to its robust design. PMID:25401008

  7. Deep depth undex simulator

    SciTech Connect

    Higginbotham, R. R.; Malakhoff, A.

    1985-01-29

    A deep depth underwater simulator is illustrated for determining the dual effects of nuclear type underwater explosion shockwaves and hydrostatic pressures on a test vessel while simulating, hydrostatically, that the test vessel is located at deep depths. The test vessel is positioned within a specially designed pressure vessel followed by pressurizing a fluid contained between the test and pressure vessels. The pressure vessel, with the test vessel suspended therein, is then placed in a body of water at a relatively shallow depth, and an explosive charge is detonated at a predetermined distance from the pressure vessel. The resulting shockwave is transmitted through the pressure vessel wall so that the shockwave impinging on the test vessel is representative of nuclear type explosive shockwaves transmitted to an underwater structure at great depths.

  8. Motivation with Depth.

    ERIC Educational Resources Information Center

    DiSpezio, Michael A.

    2000-01-01

    Presents an illusional arena by offering experience in optical illusions in which students must apply critical analysis to their innate information gathering systems. Introduces different types of depth illusions for students to experience. (ASK)

  9. Depth Optimization Study

    DOE Data Explorer

    Kawase, Mitsuhiro

    2009-11-22

    The zipped file contains a directory of data and routines used in the NNMREC turbine depth optimization study (Kawase et al., 2011), and calculation results thereof. For further info, please contact Mitsuhiro Kawase at kawase@uw.edu. Reference: Mitsuhiro Kawase, Patricia Beba, and Brian Fabien (2011), Finding an Optimal Placement Depth for a Tidal In-Stream Conversion Device in an Energetic, Baroclinic Tidal Channel, NNMREC Technical Report.

  10. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with a much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography.
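
    The "mapping algorithm" idea above amounts to remapping each shot's scene depth range onto a bounded on-screen disparity budget. The sketch below shows a minimal linear version of that mapping; the comfort budget, parameter names, and the fixed-versus-dynamic distinction (dynamic recomputes z_near/z_far per shot) are illustrative assumptions, not the paper's exact formulation.

    ```python
    # Minimal sketch of linear scene-depth-to-display-disparity mapping.
    import numpy as np

    def map_depth_to_disparity(z, z_near, z_far, d_min=-10.0, d_max=25.0):
        """Map scene depth z (same units as z_near/z_far) to screen disparity in
        pixels, clamped to a fixed comfort range [d_min, d_max] (illustrative values)."""
        t = (np.clip(z, z_near, z_far) - z_near) / (z_far - z_near)
        return d_min + t * (d_max - d_min)

    # A fixed mapping uses one global depth range; a dynamic mapping would update
    # z_near/z_far from the depth histogram of each shot before calling this.
    depths = np.array([2.0, 5.0, 12.0, 40.0])
    print(map_depth_to_disparity(depths, z_near=2.0, z_far=40.0))
    ```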

  11. Capturing Motion and Depth Before Cinematography.

    PubMed

    Wade, Nicholas J

    2016-01-01

    Visual representations of biological states have traditionally faced two problems: they lacked motion and depth. Attempts were made to supply these wants over many centuries, but the major advances were made in the early-nineteenth century. Motion was synthesized by sequences of slightly different images presented in rapid succession and depth was added by presenting slightly different images to each eye. Apparent motion and depth were combined some years later, but they tended to be applied separately. The major figures in this early period were Wheatstone, Plateau, Horner, Duboscq, Claudet, and Purkinje. Others later in the century, like Marey and Muybridge, were stimulated to extend the uses to which apparent motion and photography could be applied to examining body movements. These developments occurred before the birth of cinematography, and significant insights were derived from attempts to combine motion and depth.

  12. Contour detection combined with depth information

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Cai, Chao

    2015-12-01

    Many challenging computer vision problems have been proven to benefit from the incorporation of depth information, to name a few: semantic labelling, pose estimation, and even contour detection. In a single monocular image, different objects lie at different depths. The depth information of one object is coherent and the depth information of different objects may vary discontinuously. Meanwhile, there exists a broad non-classical receptive field (NCRF) outside the classical receptive field (CRF). The response of the central neuron is affected not only by the stimulus inside the CRF, but also modulated by the stimulus surrounding it. The contextual modulation is mediated by horizontal connections across the visual cortex. Based on these findings, a biologically-inspired contour detection model that incorporates depth information is proposed in this paper.

  13. Depth from Optical Turbulence

    DTIC Science & Technology

    2012-01-01

    …and often leads to poor image quality. Several works in remote sensing and astronomical imaging have focused on image correction through…

  14. Joint inpainting of depth and reflectance with visibility estimation

    NASA Astrophysics Data System (ADS)

    Bevilacqua, Marco; Aujol, Jean-François; Biasutti, Pierre; Brédif, Mathieu; Bugeau, Aurélie

    2017-03-01

    This paper presents a novel strategy to generate, from 3-D lidar measures, dense depth and reflectance images coherent with given color images. It also estimates for each pixel of the input images a visibility attribute. 3-D lidar measures carry multiple pieces of information, e.g., relative distances to the sensor (from which we can compute depths) and reflectances. When projecting a lidar point cloud onto a reference image plane, we generally obtain sparse images, due to undersampling. Moreover, lidar and image sensor positions typically differ during acquisition; therefore points belonging to objects that are hidden from the image view point might appear in the lidar images. The proposed algorithm estimates the complete depth and reflectance images, while concurrently excluding those hidden points. It consists of solving a joint (depth and reflectance) variational image inpainting problem, with an extra variable, estimated concurrently, that handles the selection of visible points. As regularizers, two coupled total variation terms are included to match, two by two, the depth, reflectance, and color image gradients. We compare our algorithm with other image-guided depth upsampling methods, and show that, when dealing with real data, it produces better inpainted images, by solving the visibility issue.

  15. Single-shot depth camera lens design optimization based on a blur metric

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Lin; Chang, Chuan-Chung; Angot, Ludovic; Chang, Chir-Weei; Tien, Chung-Hao

    2010-08-01

    Computational imaging technology can capture extra information at the sensor and can be used for various photographic applications, including imaging with extended depth of field or depth extraction for 3D applications. The depth estimation from a single captured photograph can be achieved through a phase coded lens and image processing. In this paper, we propose a new method to design a phase coded lens, using a blur metric (BM) as the design criterion. Matlab and Zemax are used for the co-optimization of optical coding and digital image processing. The purpose of the design is to find a curve for which the BM changes continuously and significantly over a distance range. We verified our approach by simulation, and obtained an axially symmetric phase mask as the coded lens. By using a pseudo-random pattern which contains uniform black and white patches as the input image, and the on-axis point spread function (PSF) calculated from Zemax, we can evaluate the BM of the simulated image, which is the convolution of the pseudo-random pattern with the PSF. In order to ensure that the BM curve evaluated from the on-axis PSF represents the whole field of view, the PSF is also optimized to get high off-axis similarity.
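
    The evaluation loop described in the abstract (convolve a pseudo-random black/white pattern with the on-axis PSF at each defocus position, then score the result with a blur metric) can be sketched as follows. Here a Gaussian PSF stands in for the Zemax-computed PSFs and the metric is a simple normalized gradient energy; the paper's BM definition may differ.

    ```python
    # Sketch: BM of a random B/W test pattern blurred by PSFs of increasing width.
    import numpy as np
    from scipy.signal import fftconvolve

    def gaussian_psf(sigma, size=15):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return psf / psf.sum()

    def blur_metric(img):
        gy, gx = np.gradient(img.astype(float))
        return np.mean(gx**2 + gy**2) / (np.var(img) + 1e-12)  # lower = blurrier

    rng = np.random.default_rng(1)
    pattern = rng.choice([0.0, 1.0], size=(128, 128))           # random B/W patches
    for sigma in [0.5, 1.0, 2.0, 4.0]:                          # stand-ins for defocus
        blurred = fftconvolve(pattern, gaussian_psf(sigma), mode="same")
        print(f"sigma={sigma}: BM={blur_metric(blurred):.4f}")
    ```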

  16. Depth estimation algorithm based on data-driven approach and depth cues for stereo conversion in three-dimensional displays

    NASA Astrophysics Data System (ADS)

    Xu, Huihui; Jiang, Mingyan; Li, Fei

    2016-12-01

    With the advances in three-dimensional (3-D) display technology, stereo conversion has attracted much attention as it can alleviate the problem of stereoscopic content shortage. In two-dimensional (2-D) to 3-D conversion, the most difficult and challenging problem is depth estimation from a single image. In order to recover a perceptually plausible depth map from a single image, a depth estimation algorithm based on a data-driven method and depth cues is presented. Based on the human visual system mechanism, which is sensitive to the foreground object, this study classifies the image into one of two classes, i.e., nonobject image and object image, and then leverages different strategies on the basis of image type. The proposed strategies efficiently extract the depth information from different images. Moreover, depth image-based rendering technology is utilized to generate stereoscopic views by combining 2-D images with their depth maps. The proposed method is also suitable for 2-D to 3-D video conversion. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and producing visually pleasing and realistic 3-D views.
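
    The depth image-based rendering (DIBR) step mentioned above, which turns a 2-D image plus its depth map into a second view, can be illustrated with a toy forward warp: each pixel is shifted horizontally by a disparity proportional to its depth, and disoccluded pixels are left empty for a later inpainting stage. The conventions used here (near = large normalized depth value, max_disp) are assumptions for illustration, not the paper's settings.

    ```python
    # Toy DIBR forward warp: synthesize a right view from an image and a depth map.
    import numpy as np

    def render_right_view(image, depth, max_disp=16):
        """image: (H, W) grayscale; depth: (H, W) in [0, 1], 1 = nearest."""
        h, w = image.shape
        right = np.full((h, w), -1.0)                # -1 marks disoccluded (hole) pixels
        disparity = np.round(depth * max_disp).astype(int)
        for y in range(h):
            for x in range(w):
                xr = x - disparity[y, x]             # near pixels shift more
                if 0 <= xr < w:
                    right[y, xr] = image[y, x]
        return right

    img = np.tile(np.linspace(0, 1, 64), (64, 1))
    dep = np.zeros((64, 64)); dep[16:48, 16:48] = 1.0    # a near square object
    print(np.count_nonzero(render_right_view(img, dep) < 0), "disoccluded pixels")
    ```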

  17. Polarization Lidar for Shallow Water Depth Measurement

    NASA Astrophysics Data System (ADS)

    Mitchell, S.; Thayer, J. P.

    2011-12-01

    A bathymetric, polarization lidar system transmitting at 532 nanometers is developed for shallow water depth measurement. The technique exploits polarization attributes of the probed water body to isolate surface and floor returns, enabling constant fraction detection schemes to determine depth. The minimum resolvable water depth is no longer dictated by the system's laser or detector pulse width and can achieve better than an order of magnitude improvement over current water depth determination techniques. In laboratory tests, a Nd:YAG microchip laser coupled with polarization optics, a single photomultiplier tube, a constant fraction discriminator and a time to digital converter are used to target various water depths. Measurement of 1 centimeter water depths with an uncertainty of ±3 millimeters is demonstrated using the technique. Additionally, a dual detection channel version of the lidar system is in development, permitting simultaneous measurement of co- and cross-polarized signals scattered from the target water body. This novel approach enables new ways of designing laser bathymetry systems for shallow depth determination from remote platforms without compromising deep water depth measurement, supporting comprehensive hydrodynamic studies.
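
    Once the surface and floor returns have been separated by polarization, the depth follows from their time separation and the speed of light in water. The refractive index and timing values below are illustrative assumptions.

    ```python
    # Depth from the time difference between surface and floor lidar returns.
    C = 299_792_458.0          # speed of light in vacuum, m/s
    N_WATER = 1.33             # approximate refractive index of water at 532 nm

    def water_depth_from_returns(t_surface_s, t_floor_s):
        """Round-trip time difference between floor and surface returns -> depth (m)."""
        dt = t_floor_s - t_surface_s
        return C * dt / (2.0 * N_WATER)

    # A 1 cm water column delays the floor return by roughly 89 ps relative to the surface.
    print(water_depth_from_returns(0.0, 89e-12))   # ~0.01 m
    ```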

  18. Improved Boundary Layer Depth Retrievals from MPLNET

    NASA Technical Reports Server (NTRS)

    Lewis, Jasper R.; Welton, Ellsworth J.; Molod, Andrea M.; Joseph, Everette

    2013-01-01

    Continuous lidar observations of the planetary boundary layer (PBL) depth have been made at the Micropulse Lidar Network (MPLNET) site in Greenbelt, MD since April 2001. However, because of issues with the operational PBL depth algorithm, the data is not reliable for determining seasonal and diurnal trends. Therefore, an improved PBL depth algorithm has been developed which uses a combination of the wavelet technique and image processing. The new algorithm is less susceptible to contamination by clouds and residual layers, and in general, produces lower PBL depths. A 2010 comparison shows the operational algorithm overestimates the daily mean PBL depth when compared to the improved algorithm (1.85 and 1.07 km, respectively). The improved MPLNET PBL depths are validated using radiosonde comparisons, which suggest the algorithm performs well in determining the depth of a fully developed PBL. A comparison with the Goddard Earth Observing System-version 5 (GEOS-5) model suggests that the model may underestimate the maximum daytime PBL depth by 410 m during the spring and summer. The best agreement between MPLNET and GEOS-5 occurred during the fall and they differed the most in the winter.
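
    The "wavelet technique" referred to above is commonly implemented as a Haar wavelet covariance transform (WCT) of the lidar backscatter profile, with the PBL top taken at the height where the WCT peaks (strongest decrease in backscatter). The sketch below shows only that wavelet step, under a synthetic profile; the cloud and residual-layer screening that the improved algorithm adds via image processing is not shown.

    ```python
    # Haar wavelet covariance transform for boundary-layer top detection (illustrative).
    import numpy as np

    def haar_wct(profile, z, dilation):
        """Wavelet covariance transform of backscatter `profile` on height grid `z`."""
        wct = np.zeros_like(z, dtype=float)
        dz = z[1] - z[0]
        for i, b in enumerate(z):
            upper = (z >= b) & (z <= b + dilation / 2)   # Haar: -1 above the center...
            lower = (z >= b - dilation / 2) & (z < b)    # ...+1 below the center
            wct[i] = (profile[lower].sum() - profile[upper].sum()) * dz / dilation
        return wct

    z = np.arange(0.0, 4000.0, 15.0)                      # height grid, m
    rng = np.random.default_rng(2)
    profile = np.where(z < 1200.0, 1.0, 0.2) + 0.02 * rng.standard_normal(z.size)
    wct = haar_wct(profile, z, dilation=300.0)
    print("estimated PBL depth:", z[np.argmax(wct)], "m")  # ~1200 m for this synthetic profile
    ```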

  19. Depth Transfer: Depth Extraction from Video Using Non-Parametric Sampling.

    PubMed

    Karsch, Kevin; Liu, Ce; Kang, Sing Bing

    2014-11-01

    We describe a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. We demonstrate our technique in cases where past methods fail (non-translating cameras and dynamic scenes). Our technique is applicable to single images as well as videos. For videos, we use local motion cues to improve the inferred depth maps, while optical flow is used to ensure temporal depth consistency. For training and evaluation, we use a Kinect-based system to collect a large data set containing stereoscopic videos with known depths. We show that our depth estimation technique outperforms the state-of-the-art on benchmark databases. Our technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and we demonstrate this through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade.

  20. Ambiguity in pictorial depth.

    PubMed

    Battu, Balaraju; Kappers, Astrid M L; Koenderink, Jan J

    2007-01-01

    Pictorial space is the 3-D impression that one obtains when looking 'into' a 2-D picture. One is aware of 3-D 'opaque' objects. 'Pictorial reliefs' are the surfaces of such pictorial objects in 'pictorial space'. Photographs (or any pictures) in no way fully specify physical scenes. Rather, any photograph is compatible with an infinite number of possible scenes that may be called 'metameric scenes'. If pictorial relief is one of these metameric scenes, the response may be considered 'veridical'. The conventional usage is more restrictive and is indeed inconsistent. Thus the observer has much freedom in arriving at such a 'veridical' response. To address this ambiguity, we determined the pictorial reliefs for eight observers, six pictures, and two psychophysical methods. We used 'methods of cross-sections' to operationalise pictorial reliefs. We find that linear regression of the depths of relief at corresponding locations in the picture for different observers often leads to very low (even insignificant) R2s. Thus the responses are idiosyncratic to a large degree. Perhaps surprisingly, we also observed that multiple regression of depth and picture coordinates at corresponding locations often leads to very high R2s. Often R2s increased from insignificant up to almost 1. Apparently, to a large extent 'depth' is irrelevant as a psychophysical variable, in the sense that it does not uniquely account for the relation of the response to the pictorial structure. This clearly runs counter to the bulk of the literature on pictorial 'depth perception'. The invariant core of interindividual perception proves to be of an 'affine' rather than a Euclidean nature; that is to say, 'pictorial space' is not simply the picture plane augmented with a depth dimension.

  1. Real-time viewpoint image synthesis using strips of multi-camera images

    NASA Astrophysics Data System (ADS)

    Date, Munekazu; Takada, Hideaki; Kojima, Akira

    2015-03-01

    A real-time viewpoint image generation method is presented. Video communications with a high sense of reality are needed to make natural connections between users at different places. One of the key technologies to achieve a high sense of reality is image generation corresponding to an individual user's viewpoint. However, generating viewpoint images requires advanced image processing, which is usually too heavy to use for real-time and low-latency purposes. In this paper we propose a real-time viewpoint image generation method using simple blending of multiple camera images taken at equal horizontal intervals, with convergence obtained from approximate information about the object's depth. An image generated from the nearest camera images is visually perceived as an intermediate viewpoint image due to the visual effect of depth-fused 3D (DFD). If the viewpoint is not on the line of the camera array, a viewpoint image could be generated by region splitting. We made a prototype viewpoint image generation system and achieved real-time full-frame operation for stereo HD videos. The users can see their individual viewpoint image for left-and-right and back-and-forth movement toward the screen. Our algorithm is very simple and promising as a means for achieving video communication with high reality.
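
    The blending step itself is trivially cheap, which is what makes the approach real-time: an intermediate viewpoint between two neighbouring cameras is approximated by a weighted average of their images (perceived as an in-between view through the DFD effect). The sketch below assumes alignment/convergence from the approximate object depth has already been applied; camera spacing and the viewpoint value are illustrative.

    ```python
    # Weighted blend of the two nearest camera images for an intermediate viewpoint.
    import numpy as np

    def blend_views(img_left, img_right, alpha):
        """alpha in [0, 1]: 0 returns the left camera image, 1 the right one."""
        return (1.0 - alpha) * img_left + alpha * img_right

    cam_positions = np.array([0.0, 0.1, 0.2, 0.3])        # cameras every 10 cm (example)
    viewpoint = 0.17                                      # user position along the array
    i = np.searchsorted(cam_positions, viewpoint) - 1     # index of the left camera of the pair
    alpha = (viewpoint - cam_positions[i]) / (cam_positions[i + 1] - cam_positions[i])
    # synthesized = blend_views(images[i], images[i + 1], alpha)
    print(i, round(alpha, 2))                             # pair (1, 2), alpha = 0.7
    ```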

  2. Motion parallax thresholds for unambiguous depth perception.

    PubMed

    Holmin, Jessica; Nawrot, Mark

    2015-10-01

    The perception of unambiguous depth from motion parallax arises from the neural integration of retinal image motion and extra-retinal eye movement signals. It is only recently that these parameters have been articulated in the form of the motion/pursuit ratio. In the current study, we explored the lower limits of the parameter space in which observers could accurately perform near/far relative depth-sign discriminations for a translating random-dot stimulus. Stationary observers pursued a translating random dot stimulus containing relative image motion. Their task was to indicate the location of the peak in an approximate square-wave stimulus. We measured thresholds for depth from motion parallax, quantified as motion/pursuit ratios, as well as lower motion thresholds and pursuit accuracy. Depth thresholds were relatively stable at pursuit velocities 5-20 deg/s, and increased at lower and higher velocities. The pattern of results indicates that minimum motion/pursuit ratios are limited by motion and pursuit signals, both independently and in combination with each other. At low and high pursuit velocities, depth thresholds were limited by inaccurate pursuit signals. At moderate pursuit velocities, depth thresholds were limited by motion signals.
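
    As a first-order illustration of the motion/pursuit ratio discussed above, relative depth (distance of a point from fixation, divided by viewing distance) is approximated by the ratio of retinal image velocity to pursuit eye velocity. The full motion/pursuit law in the literature includes additional terms; the values below are purely illustrative.

    ```python
    # First-order motion/pursuit ratio sketch (illustrative values).
    def relative_depth(image_velocity_deg_s, pursuit_velocity_deg_s):
        """Return d/f, the depth relative to the fixation distance (signed)."""
        return image_velocity_deg_s / pursuit_velocity_deg_s

    f = 0.57                                   # viewing distance in metres (example)
    ratio = relative_depth(0.4, 10.0)          # 0.4 deg/s image motion, 10 deg/s pursuit
    print(f"motion/pursuit ratio = {ratio:.3f}, depth offset ~ {ratio * f * 100:.1f} cm")
    ```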

  3. Depth Effects in Micro-PIV

    NASA Astrophysics Data System (ADS)

    Wereley, Steve; Meinhart, Carl; Gray, Mike

    1999-11-01

    When measuring flows in microscale geometries using PIV, it is frequently necessary to illuminate the entire test section with a volume of light, as opposed to a two-dimensional sheet of light. With volume-illuminated PIV, the thickness of the measurement plane must be defined by the focusing characteristics of the recording optics, instead of the thickness of the light sheet. The term 'depth of correlation' is introduced as an estimate of the thickness of the measurement plane since depth of field alone does not adequately account for all the phenomena that affect the thickness of the measurement plane. A theoretical expression for depth of correlation is derived, and is shown to agree well with experimental observations. The effect of the unfocused particle images (i.e. images from particles located outside the depth of correlation) on the background noise and spatial resolution of the measurements is discussed. Experimental results varying flow depth and particle concentration show that there is a trade off between image signal-to-noise ratio and particle concentration. These experiments and analyses demonstrate the potential for PIV to provide the same highly-accurate quantitative measurements at microscopic length scales that have made it a valuable tool at macroscopic length scales.

  4. Investigating the San Andreas Fault System in the Northern Salton Trough by a Combination of Seismic Tomography and Pre-stack Depth Migration: Results from the Salton Seismic Imaging Project (SSIP)

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Ryberg, T.; Fuis, G. S.; Goldman, M.; Catchings, R.; Rymer, M. J.; Hole, J. A.; Stock, J. M.

    2013-12-01

    The Salton Trough in southern California is a tectonically active pull-apart basin which was formed in migrating step-overs between strike-slip faults, of which the San Andreas fault (SAF) and the Imperial fault are current examples. It is located within the large-scale transition between the onshore SAF strike-slip system to the north and the marine rift system of the Gulf of California to the south. Crustal stretching and sinking formed the distinct topographic features and sedimentary successions of the Salton Trough. The active SAF and related fault systems can produce potentially large damaging earthquakes. The Salton Seismic Imaging Project (SSIP), funded by NSF and USGS, was undertaken to generate seismic data and images to improve the knowledge of fault geometry and seismic velocities within the sedimentary basins and underlying crystalline crust around the SAF in this key region. The results from these studies are required as input for modeling of earthquake scenarios and prediction of strong ground motion in the surrounding populated areas and cities. We present seismic data analysis and results from tomography and pre-stack depth migration for a number of seismic profiles (Lines 1, 4-7) covering mainly the northern Salton Trough. The controlled-source seismic data were acquired in 2011. The seismic lines have lengths ranging from 37 to 72 km. On each profile, 9-17 explosion sources with charges of 110-460 kg were recorded by 100-m spaced vertical component receivers. On Line 7, additional OBS data were acquired within the Salton Sea. Travel times of first arrivals were picked and inverted for initial 1D velocity models. Alternatively, the starting models were derived from the crustal-scale velocity models developed by the Southern California Earthquake Center. The final 2D velocity models were obtained using the algorithm of Hole (1992; JGR). We have also tested the tomography packages FAST and SIMUL2000, resulting in similar velocity structures. An

  5. Depth estimation from multiple coded apertures for 3D interaction

    NASA Astrophysics Data System (ADS)

    Suh, Sungjoo; Choi, Changkyu; Park, Dusik

    2013-09-01

    In this paper, we propose a novel depth estimation method from multiple coded apertures for 3D interaction. A flat panel display is transformed into lens-less multi-view cameras which consist of multiple coded apertures. The sensor panel behind the display captures the scene in front of the display through the imaging pattern of the modified uniformly redundant arrays (MURA) on the display panel. To estimate the depth of an object in the scene, we first generate a stack of synthetically refocused images at various distances by using the shifting and averaging approach for the captured coded images. And then, an initial depth map is obtained by applying a focus operator to a stack of the refocused images for each pixel. Finally, the depth is refined by fitting a parametric focus model to the response curves near the initial depth estimates. To demonstrate the effectiveness of the proposed algorithm, we construct an imaging system to capture the scene in front of the display. The system consists of a display screen and an x-ray detector without a scintillator layer so as to act as a visible sensor panel. Experimental results confirm that the proposed method accurately determines the depth of an object including a human hand in front of the display by capturing multiple MURA coded images, generating refocused images at different depth levels, and refining the initial depth estimates.
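
    The depth-from-refocus pipeline described above (shift-and-average refocusing over candidate depths, a per-pixel focus operator, then argmax) can be sketched as follows. MURA decoding and the final parametric model fit are omitted, and the shift model, `views`, and `offsets` are assumptions for illustration rather than the authors' exact formulation.

    ```python
    # Sketch: per-pixel depth from a synthetic refocus stack and a focus operator.
    import numpy as np
    from scipy.ndimage import shift as nd_shift, laplace
    from scipy.signal import fftconvolve

    def refocus(views, offsets, d):
        """Shift each view by d times its baseline offset (in pixels) and average."""
        shifted = [nd_shift(v, (d * oy, d * ox), order=1) for v, (ox, oy) in zip(views, offsets)]
        return np.mean(shifted, axis=0)

    def depth_map(views, offsets, candidate_depths, win=7):
        """Per-pixel argmax of a local focus measure over the synthetic refocus stack."""
        h, w = views[0].shape
        best_score = np.full((h, w), -np.inf)
        best_depth = np.zeros((h, w))
        box = np.ones((win, win)) / win**2
        for d in candidate_depths:
            focused = refocus(views, offsets, d)
            sharpness = fftconvolve(laplace(focused) ** 2, box, mode="same")  # local focus
            better = sharpness > best_score
            best_score[better] = sharpness[better]
            best_depth[better] = d
        return best_depth
    ```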

  6. Burn Depth Monitor

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Supra Medical Systems is successfully marketing a device that detects the depth of burn wounds in human skin. To develop the product, the company used technology developed by NASA Langley physicists looking for better ultrasonic detection of small air bubbles and cracks in metal. The device is being marketed to burn wound analysis and treatment centers. Through a Space Act agreement, NASA and the company are also working to further develop ultrasonic instruments for new medical applications.

  7. Burn Depth Monitor

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Supra Medical Systems is successfully marketing a device that detects the depth of burn wounds in human skin. To develop the product, the company used technology developed by NASA Langley physicists looking for better ultrasonic detection of small air bubbles and cracks in metal. The device is being marketed to burn wound analysis and treatment centers. Through a Space Act agreement, NASA and the company are also working to further develop ultrasonic instruments for new medical applications.

  8. Burn Depth Monitor

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Supra Medical Systems is successfully marketing a device that detects the depth of burn wounds in human skin. To develop the product, the company used technology developed by NASA Langley physicists looking for better ultrasonic detection of small air bubbles and cracks in metal. The device is being marketed to burn wound analysis and treatment centers. Through a Space Act agreement, NASA and the company are also working to further develop ultrasonic instruments for new medical applications.

  9. Variable depth core sampler

    DOEpatents

    Bourgeois, Peter M.; Reger, Robert J.

    1996-01-01

    A variable depth core sampler apparatus comprising a first circular hole saw member, having longitudinal sections that collapse to form a point and capture a sample, and a second circular hole saw member residing inside said first hole saw member to support the longitudinal sections of said first hole saw member and prevent them from collapsing to form a point. The second hole saw member may be raised and lowered inside said first hole saw member.

  10. Design of an optical system with large depth of field using in the micro-assembly

    NASA Astrophysics Data System (ADS)

    Li, Rong; Chang, Jun; Zhang, Zhi-jing; Ye, Xin; Zheng, Hai-jing

    2013-08-01

    Micro systems are currently the mainstream application and demand in the field of micro fabrication for civilian and national defense purposes. Compared with macro assembly, the requirements on location accuracy of a micro-assembly system are much higher. The dimensions of the components in micro-assembly are usually between a few microns and several hundred microns, and the required assembly precision is generally at the sub-micron level. Micro system assembly is currently the bottleneck of micro fabrication. The optical stereo microscope used in micro-assembly technology can achieve high-resolution imaging, but the depth of field of the optical imaging system is too small, which is not conducive to three-dimensional observation during micro-assembly. This paper first summarizes the development of micro system assembly at home and abroad. Based on a study of the core features of the technology, a scheme is proposed which uses wave front coding technology to increase the depth of field of the optical imaging system. In wave front coding, by creatively combining traditional optical design with digital image processing, the depth of field can be greatly increased; moreover, all defocus-related aberrations, such as spherical aberration, chromatic aberration, astigmatism, Petzval (field) curvature, distortion, and other defocus induced by assembly error and temperature change, can be corrected or minimized. In this paper, based on this theoretical study, an optical microscopy imaging system is designed. The system is designed and optimized with the optical design software CODE V and ZEMAX. Finally, the imaging results of the traditional optical stereo microscope and the optical stereo microscope with wave front coding technology are compared. The results show that the method is practically operable and that the optimized phase plate is effective in improving the imaging quality and increasing the

  11. Variable depth core sampler

    SciTech Connect

    Bourgeois, P.M.; Reger, R.J.

    1994-12-31

    This invention relates to a sampling means, more particularly to a device to sample hard surfaces at varying depths. Often it is desirable to take samples of a hard surface wherein the samples are of the same diameter but of varying depths. Current practice requires that a full top-to-bottom sample of the material be taken, using a hole saw, and boring a hole from one end of the material to the other. The sample thus taken is removed from the hole saw and the middle of said sample is then subjected to further investigation. This paper describes a variable depth core sampler comprising a circular hole saw member, having longitudinal sections that collapse to form a point and capture a sample, and a second hole saw member residing inside the first hole saw member to support the longitudinal sections of the first member and prevent them from collapsing to form a point. The second hole saw member may be raised and lowered inside the first hole saw member.

  12. Focus cues affect perceived depth

    PubMed Central

    Watt, Simon J.; Akeley, Kurt; Ernst, Marc O.; Banks, Martin S.

    2007-01-01

    Depth information from focus cues—accommodation and the gradient of retinal blur—is typically incorrect in three-dimensional (3-D) displays because the light comes from a planar display surface. If the visual system incorporates information from focus cues into its calculation of 3-D scene parameters, this could cause distortions in perceived depth even when the 2-D retinal images are geometrically correct. In Experiment 1 we measured the direct contribution of focus cues to perceived slant by varying independently the physical slant of the display surface and the slant of a simulated surface specified by binocular disparity (binocular viewing) or perspective/texture (monocular viewing). In the binocular condition, slant estimates were unaffected by display slant. In the monocular condition, display slant had a systematic effect on slant estimates. Estimates were consistent with a weighted average of slant from focus cues and slant from disparity/texture, where the cue weights are determined by the reliability of each cue. In Experiment 2, we examined whether focus cues also have an indirect effect on perceived slant via the distance estimate used in disparity scaling. We varied independently the simulated distance and the focal distance to a disparity-defined 3-D stimulus. Perceived slant was systematically affected by changes in focal distance. Accordingly, depth constancy (with respect to simulated distance) was significantly reduced when focal distance was held constant compared to when it varied appropriately with the simulated distance to the stimulus. The results of both experiments show that focus cues can contribute to estimates of 3-D scene parameters. Inappropriate focus cues in typical 3-D displays may therefore contribute to distortions in perceived space. PMID:16441189

  13. Segmentation of biological target volumes on multi-tracer PET images based on information fusion for achieving dose painting in radiotherapy.

    PubMed

    Lelandais, Benoît; Gardin, Isabelle; Mouchard, Laurent; Vera, Pierre; Ruan, Su

    2012-01-01

    Medical imaging plays an important role in radiotherapy. Dose painting consists of applying a nonuniform dose prescription to a tumoral region, and is based on an efficient segmentation of biological target volumes (BTV). The BTV is derived from PET images, which highlight tumoral regions of enhanced glucose metabolism (FDG), cell proliferation (FLT) and hypoxia (FMiso). In this paper, a framework based on Belief Function Theory is proposed for BTV segmentation and for creating 3D parametric images for dose painting. We propose to take advantage of neighboring voxels for BTV segmentation, and also of multi-tracer PET images using information fusion to create parametric images. The performance of BTV segmentation was evaluated on an anthropomorphic phantom and compared with two other methods. Quantitative results show the good performance of our method. It has been applied to data from five patients suffering from lung cancer. Parametric images show promising results by highlighting areas where a high frequency or dose escalation could be planned.
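
    For readers unfamiliar with Belief Function Theory, the sketch below illustrates the basic combination step (Dempster's rule) that such a framework relies on. It is a generic illustration, not the authors' neighborhood-based fusion scheme, and the mass values assigned to the hypotheses "tumour" and "background" are invented.

```python
# Minimal sketch: Dempster's rule of combination for two mass functions
# defined over subsets of the frame {T (tumour), B (background)}.
# The numeric masses are illustrative only.
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions given as dicts {frozenset: mass}."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

T, B = frozenset("T"), frozenset("B")
TB = T | B  # ignorance (mass assigned to the whole frame)
m_tracer1 = {T: 0.6, B: 0.1, TB: 0.3}   # evidence from one tracer/voxel neighbourhood
m_tracer2 = {T: 0.5, B: 0.2, TB: 0.3}   # evidence from another
print(dempster(m_tracer1, m_tracer2))
```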

  14. Prestack depth migration for complex 2D structure using phase-screen propagators

    SciTech Connect

    Roberts, P.; Huang, Lian-Jie; Burch, C.; Fehler, M.; Hildebrand, S.

    1997-11-01

    We present results for the phase-screen propagator method applied to prestack depth migration of the Marmousi synthetic data set. The data were migrated as individual common-shot records and the resulting partial images were superposed to obtain the final complete image. Tests were performed to determine the minimum number of frequency components required to achieve the best quality image, and this in turn provided estimates of the minimum computing time. Running on a single-processor SUN SPARC Ultra I, high quality images were obtained in as little as 8.7 CPU hours and adequate images were obtained in as little as 4.4 CPU hours. Different methods were tested for choosing the reference velocity used for the background phase-shift operation and for defining the slowness perturbation screens. Although the depths of some of the steeply dipping, high-contrast features were shifted slightly, the overall image quality was fairly insensitive to the choice of the reference velocity. Our tests show the phase-screen method to be a reliable and fast algorithm for imaging complex geologic structures, at least for complex 2D synthetic data where the velocity model is known.
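
    A minimal sketch of a single phase-screen (split-step Fourier) extrapolation step is given below, assuming a monochromatic wavefield sampled along one lateral coordinate and a laterally varying velocity. This is the textbook form of the operator, not the code used in the study, and the variable names are illustrative.

```python
# Minimal sketch (assumed form, not the authors' implementation): one
# phase-screen (split-step Fourier) extrapolation step for a monochromatic
# wavefield u(x) at angular frequency w, from depth z to z + dz.
import numpy as np

def phase_screen_step(u, dx, dz, w, c, c0):
    """u: complex wavefield sampled on x; c: velocity along x (array or scalar); c0: reference velocity."""
    nx = u.size
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    kz2 = (w / c0) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    prop = np.where(kz2 > 0, np.exp(1j * kz * dz), 0.0)   # zero out evanescent components
    u_ref = np.fft.ifft(np.fft.fft(u) * prop)             # phase shift in the reference medium
    screen = np.exp(1j * w * (1.0 / c - 1.0 / c0) * dz)   # thin-screen correction for the slowness perturbation
    return u_ref * screen
```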

  15. Extended depth of field system for long distance iris acquisition

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Lin; Hsieh, Sheng-Hsun; Hung, Kuo-En; Yang, Shi-Wen; Li, Yung-Hui; Tien, Chung-Hao

    2012-10-01

    Using biometric signatures for identity recognition has been practiced for centuries. Recently, iris recognition systems have attracted much attention due to their high accuracy and high stability. The texture of the iris provides a signature that is unique to each subject. Currently most commercial iris recognition systems acquire images at a distance of less than 50 cm, a serious constraint that needs to be overcome if the technology is to be used for airport access or entrances that require a high turn-over rate. In order to capture iris patterns from a distance, in this study we developed a telephoto imaging system with image processing techniques. By using a cubic phase mask positioned in front of the camera, the point spread function was kept constant over a wide range of defocus. With an adequate decoding filter, the blurred image was restored, allowing a working distance between the subject and the camera of over 3 m with a 500 mm focal length and an F/6.3 aperture. The simulation and experimental results validated the proposed scheme: the depth of focus of the iris camera was extended to three times that of the traditional optics, while keeping sufficient recognition accuracy.
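
    The decoding filter mentioned above is typically a deconvolution with the (now defocus-invariant) system PSF. The following sketch shows one common choice, a Wiener filter; it is an assumption for illustration rather than the filter actually used in the paper, and the noise-to-signal ratio is a placeholder.

```python
# Minimal sketch: restoring a wavefront-coded image with a Wiener filter,
# assuming the (defocus-invariant) PSF is known and has the same array shape
# as the image (centred). The noise-to-signal ratio value is illustrative.
import numpy as np

def wiener_restore(blurred, psf, nsr=1e-2):
    """Deconvolve 'blurred' by 'psf' with a Wiener filter in the frequency domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf))      # transfer function of the coded optics
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```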

  16. Passive depth estimation using chromatic aberration and a depth from defocus approach.

    PubMed

    Trouvé, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Sabater, Jacques; Avignon, Thierry; Idier, Jérôme

    2013-10-10

    In this paper, we propose a new method for passive depth estimation based on the combination of a camera with longitudinal chromatic aberration and an original depth from defocus (DFD) algorithm. Indeed a chromatic lens, combined with an RGB sensor, produces three images with spectrally variable in-focus planes, which eases the task of depth extraction with DFD. We first propose an original DFD algorithm dedicated to color images having spectrally varying defocus blurs. Then we describe the design of a prototype chromatic camera so as to evaluate experimentally the effectiveness of the proposed approach for depth estimation. We provide comparisons with results of an active ranging sensor and real indoor/outdoor scene reconstructions.
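
    A rough illustration of why longitudinal chromatic aberration helps depth from defocus: the three color channels are focused at different distances, so their relative sharpness in a patch encodes depth. The sketch below ranks channel sharpness with a simple gradient-energy measure; it is not the authors' DFD estimator, and mapping the ranking (or sharpness ratios) to metric depth would require the calibration described in the paper.

```python
# Minimal sketch of the chromatic depth-from-defocus idea (illustrative only):
# the channel that is sharpest indicates which in-focus plane the patch is
# closest to; a calibrated system would turn the ratios into metric depth.
import numpy as np

def channel_sharpness(patch):
    """Mean gradient energy of a 2-D patch, used as a simple sharpness score."""
    gy, gx = np.gradient(patch.astype(float))
    return np.mean(gx**2 + gy**2)

def rank_focus(rgb_patch):
    """Return channels ordered from sharpest to most defocused for one patch (H x W x 3 array)."""
    scores = {c: channel_sharpness(rgb_patch[..., i]) for i, c in enumerate("RGB")}
    return sorted(scores, key=scores.get, reverse=True)
```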

  17. SU-F-303-02: Achieving 4D MRI in Regular Breathing Cycle with Extended Acquisition Time of Dynamic MR Images

    SciTech Connect

    Hui, C; Beddar, S; Wen, Z; Stemkens, B; Tijssen, R; Berg, C van den

    2015-06-15

    Purpose: The purpose of this study is to develop a technique to obtain four-dimensional (4D) magnetic resonance (MR) images that are more representative of a patient's typical breathing cycle by utilizing an extended acquisition time while minimizing image artifacts. Methods: The 4D MR data were acquired with balanced steady state free precession in a two-dimensional sagittal plane of view. Each slice was acquired repeatedly for about 15 s, thereby obtaining multiple images at each of the 10 phases in the respiratory cycle. This improves the probability that at least one of the images was acquired at the desired phase during a regular breathing cycle. To create optimal 4D MR images, an iterative approach was used to identify the set of images that yielded the highest slice-to-slice similarity. To assess the effectiveness of the approach, the data set was truncated into periods of 7 s (50 time points), 11 s (75 time points) and the full 15 s (100 time points). The 4D MR images were then sorted with data of the three different acquisition periods for comparison. Results: In general, the 4D MR images sorted using data from longer acquisition periods showed fewer mismatch artifacts. In addition, the normalized cross correlation (NCC) between slices of a 4D volume increases with increased acquisition period. The average NCC was 0.791 from the 7 s period, 0.794 from the 11 s period and 0.796 from the 15 s period. Conclusion: Our preliminary study showed that extending the acquisition time with the proposed sorting technique can improve image quality and reduce artifact presence in the 4D MR images. Data acquisition over two breathing cycles is a good trade-off between artifact reduction and scan time. This research was partially funded by the Center for Radiation Oncology Research from UT MD Anderson Cancer Center.
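
    A minimal sketch of the selection criterion described above: for each slice and respiratory phase, the candidate image that maximizes normalized cross correlation (NCC) with an adjacent, already selected slice is retained. Function and variable names are illustrative; the study itself uses an iterative search over the whole volume.

```python
# Minimal sketch: NCC-based selection among repeated acquisitions of one slice/phase.
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally sized 2-D images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return np.mean(a * b)

def pick_best(candidates, reference_slice):
    """candidates: list of 2-D images for one slice/phase; reference_slice: adjacent selected slice."""
    scores = [ncc(img, reference_slice) for img in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```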

  18. Photoacoustic molecular imaging

    NASA Astrophysics Data System (ADS)

    Kiser, William L., Jr.; Reinecke, Daniel; DeGrado, Timothy; Bhattacharyya, Sibaprasad; Kruger, Robert A.

    2007-02-01

    It is well documented that photoacoustic imaging has the capability to differentiate tissue based on the spectral characteristics of tissue in the optical regime. The imaging depth in tissue exceeds standard optical imaging techniques, and systems can be designed to achieve excellent spatial resolution. A natural extension of imaging the intrinsic optical contrast of tissue is to demonstrate the ability of photoacoustic imaging to detect contrast agents based on optically absorbing dyes that exhibit well-defined absorption peaks in the infrared. The ultimate goal of this project is to implement molecular imaging, in which Herceptin™, a monoclonal antibody used as a therapeutic agent in breast cancer patients who overexpress the HER2 gene, is labeled with an IR absorbing dye, and the resulting in vivo bio-distribution is mapped using multi-spectral infrared stimulation and subsequent photoacoustic detection. To lay the groundwork for this goal and establish system sensitivity, images were collected in tissue-mimicking phantoms to determine the maximum detection depth and minimum detectable concentration of Indocyanine Green (ICG), a common IR absorbing dye, for a single-angle photoacoustic acquisition. A breast-mimicking phantom was constructed and spectra were also collected for hemoglobin and methanol. An imaging schema was developed that made it possible to separate the ICG from the other tissue-mimicking components in a multiple-component phantom. We present the results of these experiments and define the path forward for the detection of dye-labeled Herceptin™ in cell cultures and mouse models.

  19. The relation between Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol optical depth and PM2.5 over the United States: a geographical comparison by U.S. Environmental Protection Agency regions.

    PubMed

    Zhang, Hai; Hoff, Raymond M; Engel-Cox, Jill A

    2009-11-01

    Aerosol optical depth (AOD) acquired from satellite measurements demonstrates good correlation with particulate matter with diameters less than 2.5 microm (PM2.5) in some regions of the United States and has been used for monitoring and nowcasting air quality over the United States. This work investigates the relation between Moderate Resolution Imaging Spectroradiometer (MODIS) AOD and PM2.5 over the 10 U.S. Environmental Protection Agency (EPA)-defined geographic regions in the United States on the basis of a 2-yr (2005-2006) match-up dataset of MODIS AOD and hourly PM2.5 measurements. The AOD retrievals demonstrate a geographical and seasonal variation in their relation with PM2.5. Good correlations are mostly observed over the eastern United States in summer and fall. The southeastern United States has the highest correlation coefficients at more than 0.6. The southwestern United States has the lowest correlation coefficient of approximately 0.2. The seasonal regression relations derived for each region are used to estimate the PM2.5 from AOD retrievals, and it is shown that the estimation using this method is more accurate than that using a fixed ratio between PM2.5 and AOD. Two versions of AOD from Terra (v4.0.1 and v5.2.6) are also compared in terms of the inversion methods and screening algorithms. The v5.2.6 AOD retrievals demonstrate better correlation with PM2.5 than v4.0.1 retrievals, but they have much less coverage because of the differences in the cloud-screening algorithm.
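
    The regional, seasonal regressions referred to above are ordinary linear fits of PM2.5 against AOD. The sketch below shows the fit and the subsequent estimation step with placeholder arrays; it is an illustration of the approach, not the study's processing code.

```python
# Minimal sketch: per-region, per-season linear regression PM2.5 ~ a + b*AOD,
# then using the fitted coefficients to estimate PM2.5 from new AOD retrievals.
import numpy as np

def fit_aod_pm25(aod, pm25):
    """Least-squares fit of PM2.5 = a + b * AOD for one region and season."""
    b, a = np.polyfit(aod, pm25, 1)   # polyfit returns [slope, intercept]
    return a, b

def estimate_pm25(aod, a, b):
    """Estimate PM2.5 from new AOD values with the fitted seasonal relation."""
    return a + b * np.asarray(aod)
```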

  20. Phase II dose escalation study of image-guided adaptive radiotherapy for prostate cancer: Use of dose-volume constraints to achieve rectal isotoxicity

    SciTech Connect

    Vargas, Carlos; Yan Di; Kestin, Larry L.; Krauss, Daniel; Lockman, David M.; Brabbins, Donald S.; Martinez, Alvaro A. . E-mail: amartinez@beaumont.edu

    2005-09-01

    Purpose: In our Phase II prostate cancer Adaptive Radiation Therapy (ART) study, the highest possible dose was selected on the basis of normal tissue tolerance constraints. We analyzed rectal toxicity rates in different dose levels and treatment groups to determine whether equivalent toxicity rates were achieved as hypothesized when the protocol was started. Methods and Materials: From 1999 to 2002, 331 patients with clinical stage T1 to T3, node-negative prostate cancer were prospectively treated with three-dimensional conformal adaptive RT. A patient-specific confidence-limited planning target volume was constructed on the basis of 5 CT scans and 4 sets of electronic portal images after the first 4 days of treatment. For each case, the rectum (rectal solid) was contoured in its entirety. The rectal wall was defined by use of a 3-mm wall thickness (median volume: 29.8 cc). The prescribed dose level was chosen using the following rectal wall dose constraints: (1) Less than 30% of the rectal wall volume can receive more than 75.6 Gy. (2) Less than 5% of the rectal wall can receive more than 82 Gy. Low-risk patients (PSA < 10, Stage ≤ T2a, Gleason score < 7) were treated to the prostate alone (Group 1). All other patients, intermediate and high risk, were treated to the prostate and seminal vesicles (Group 2). The risk of chronic toxicity (NCI Common Toxicity Criteria 2.0) was assessed for the different dose levels prescribed. HIC approval was acquired for all patients. Median follow-up was 1.6 years. Results: Grade 2 chronic rectal toxicity was experienced by 34 patients (10%) (9% experienced rectal bleeding, 6% experienced proctitis, 3% experienced diarrhea, and 1% experienced rectal pain) at a median interval of 1.1 year. Nine patients (3%) experienced grade 3 or higher chronic rectal toxicity (1 Grade 4) at a median interval of 1.2 years. The 2-year rates of Grade 2 or higher and Grade 3 or higher chronic rectal toxicity were 17% and 3%, respectively. No
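
    The two rectal-wall constraints quoted above translate directly into a dose-volume check of the following form. This is a simplified sketch over per-voxel doses, not the clinical planning software.

```python
# Minimal sketch of the dose-volume constraint check: <30% of the rectal wall
# above 75.6 Gy and <5% above 82 Gy. 'doses' holds the per-voxel dose (Gy) of
# the contoured rectal wall.
import numpy as np

def rectal_wall_ok(doses):
    doses = np.asarray(doses, dtype=float)
    frac_over_756 = np.mean(doses > 75.6)   # fraction of wall volume above 75.6 Gy
    frac_over_82 = np.mean(doses > 82.0)    # fraction of wall volume above 82 Gy
    return frac_over_756 < 0.30 and frac_over_82 < 0.05
```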

  1. Perceived Suprathreshold Depth under Conditions that Elevate the Stereothreshold

    PubMed Central

    Bedell, Harold E.; Gantz, Liat; Jackson, Danielle N.

    2012-01-01

    Purpose Previous studies considered the possibility that individuals with impaired stereoacuity can be identified by estimating the perceived depth of a target with a suprathreshold retinal image disparity. These studies showed that perceived suprathreshold depth is reduced when the image presented to one eye is blurred, but they did not address, as we did, whether a similar reduction of perceived depth occurs when the stereothreshold is elevated using other manipulations. Methods Stereothresholds were measured in 6 adult observers for a pair of bright, 1 deg vertical lines during normal viewing and under 5 conditions that elevated the stereothreshold: (1) monocular dioptric blur, (2) monocular glare, (3) binocular luminance reduction, (4) monocular luminance reduction, and (5) imposed disjunctive image motion. The observers subsequently matched the perceived depth of degraded targets presented with crossed or uncrossed disparities corresponding to 2, 4, and 6 times the elevated stereothreshold for each stimulus condition. Results The image manipulations used elevated the stereothreshold by a factor of 3.7 to 5.5 times. For targets with suprathreshold disparities, monocular blur, monocular luminance reduction, and disjunctive image motion resulted in a significant decrease in perceived depth. However, the magnitude of perceived suprathreshold depth was unaffected when monocular glare was introduced or the binocular luminance of the stereotargets was reduced. Conclusions Not all conditions that increase the stereothreshold reduce the perceived depth of targets with suprathreshold disparities. Observers who have poor stereopsis therefore may or may not exhibit an associated reduction of perceived suprathreshold depth. PMID:23160439

  2. Infants' Coordination of Auditory and Visual Depth Information.

    ERIC Educational Resources Information Center

    Morrongiello, Barbara A.; Fenwick, Kimberley D.

    1991-01-01

    Infants of five, seven, and nine months were shown two video images on monitors placed side by side. Images were accompanied by a soundtrack that matched one of the images. Results indicated that age-related changes in infants' coordination of auditory and visual depth information took place between the ages of five and nine months. (SH)

  3. Noninvasive Methods for Determining Lesion Depth from Vesicant Exposure

    DTIC Science & Technology

    2007-01-01

    Evaluation of two noninvasive bioengineering methodologies, laser Doppler perfusion imaging (LDPI) and indocyanine green fluorescence imaging (ICGFI), for determining lesion depth from vesicant exposure. Subjects must remain perfectly still under the instrument for several minutes during data collection, and further definitive studies are planned. The analysis compared the mean (±SD) ICGFI brightness ratio with the LDPI blood perfusion ratio.

  4. Depth perception not found in human observers for static or dynamic anti-correlated random dot stereograms.

    PubMed

    Hibbard, Paul B; Scott-Brown, Kenneth C; Haigh, Emma C; Adrain, Melanie

    2014-01-01

    One of the greatest challenges in visual neuroscience is that of linking neural activity with perceptual experience. In the case of binocular depth perception, important insights have been achieved through comparing neural responses and the perception of depth, for carefully selected stimuli. One of the most important types of stimulus that has been used here is the anti-correlated random dot stereogram (ACRDS). In these stimuli, the contrast polarity of one half of a stereoscopic image is reversed. While neurons in cortical area V1 respond reliably to the binocular disparities in ACRDS, they do not create a sensation of depth. This discrepancy has been used to argue that depth perception must rely on neural activity elsewhere in the brain. Currently, the psychophysical results on which this argument rests are not clear-cut. While it is generally assumed that ACRDS do not support the perception of depth, some studies have reported that some people, some of the time, perceive depth in some types of these stimuli. Given the importance of these results for understanding the neural correlates of stereopsis, we studied depth perception in ACRDS using a large number of observers, in order to provide an unambiguous conclusion about the extent to which these stimuli support the perception of depth. We presented observers with random dot stereograms in which correlated dots were presented in a surrounding annulus and correlated or anti-correlated dots were presented in a central circular region. While observers could reliably report the depth of the central region for correlated stimuli, we found no evidence for depth perception in static or dynamic anti-correlated stimuli. Confidence ratings for stereoscopic perception were uniformly low for anti-correlated stimuli, but showed normal variation with disparity for correlated stimuli. These results establish that the inability of observers to perceive depth in ACRDS is a robust phenomenon.

  5. Estimation of insertion depth angle based on cochlea diameter and linear insertion depth: a prediction tool for the CI422.

    PubMed

    Franke-Trieger, Annett; Mürbe, Dirk

    2015-11-01

    Besides cochlear size, the linear insertion depth (LID) influences the insertion depth angle of cochlear implant electrode arrays. For the specific implant CI422 the recommended LID is not fixed but can vary continuously between 20 and 25 mm. In the current study, the influence of cochlea size and LID on the final insertion depth angle was investigated to develop a prediction tool for the insertion depth angle by means of cochlea diameter and LID. Preoperative estimation of insertion depth angles might help surgeons avoid exceeding an intended insertion depth, especially with respect to low-frequency residual hearing preservation. Postoperative high-resolution 3D radiographs provided by Flat Panel Computed Volume Tomography (FPCT) were used to investigate the insertion depth angle in 37 CI422 recipients. Furthermore, the FPCT images were used to measure linear insertion depth and diameter of the basal turn of the cochlea. A considerable variation of measured insertion depth angles ranging from 306° to 579° was identified. The measured linear insertion depth ranged from 18.6 to 26.2 mm and correlated positively with the insertion depth angle. The cochlea diameter ranged from 8.11 to 10.42 mm and correlated negatively with the insertion depth angle. The results suggest that preoperatively measured cochlea diameter combined with the option of different array positions by means of LID may act as predictors for the final insertion depth angle.

  6. Neutron depth profiling by large angle coincidence spectroscopy

    SciTech Connect

    Vacik, J.; Cervena, J.; Hnatowicz, V.; Havranek, V.; Fink, D.

    1995-12-31

    Extremely low concentrations of several technologically important elements (mainly lithium and boron) have been studied by a modified neutron depth profiling technique. Large angle coincidence spectroscopy using neutrons to probe solids with a thickness not exceeding several micrometers has proved to be a powerful analytical method with an excellent detection sensitivity. Depth profiles in the ppb atomic range are accessible for any solid material. A depth resolution of about 20 nanometers can be achieved.

  7. There are solutions to LWD depth measurement problems

    SciTech Connect

    Tait, C.A.; Hamlin, K.H.

    1996-03-18

    The use of well-calibrated depth control sensors, good bookkeeping practices with the pipe tally, and better operating practices can help eliminate depth measurement errors on logs produced from logging-while-drilling tools. Other factors that help eliminate depth errors include advances in tool technology and mathematical corrections for tensional stretch, ballooning effect, and thermal expansion of the drill pipe. Accurate depth measurements are required to achieve wireline-quality logs with logging-while-drilling (LWD) tools. Without good depth control, pay zone thickness measurements can be in error, correlation between LWD logs and wireline logs can be poor, and subsequent LWD runs may produce data at differing depths. Critical depth control problems include nonlinearities caused by the draw works, heave effects, and drill pipe stretch and compression. An interdisciplinary team investigated the causes of and various solutions to these problems and developed a solution comprising improvements in hardware, rig site operating procedures, and calibration techniques.
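
    Two of the corrections named above, tensional stretch and thermal expansion of the drill pipe, have simple first-order forms. The sketch below is illustrative only, with generic material constants; it is not the service companies' correction model.

```python
# Minimal sketch: first-order corrections for elastic stretch and thermal
# expansion of drill pipe, the two effects named above that shift measured depth.
def elastic_stretch(force_n, length_m, area_m2, youngs_modulus_pa=210e9):
    """Hooke's-law stretch of a pipe section under axial tension (steel modulus assumed)."""
    return force_n * length_m / (youngs_modulus_pa * area_m2)

def thermal_expansion(length_m, delta_t_c, alpha_per_c=12e-6):
    """Length change of steel pipe for a temperature change delta_t_c (generic coefficient)."""
    return alpha_per_c * length_m * delta_t_c
```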

  8. THEMIS Observations of Atmospheric Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Smith, Michael D.; Bandfield, Joshua L.; Christensen, Philip R.; Richardson, Mark I.

    2003-01-01

    The Mars Odyssey spacecraft entered into Martian orbit in October 2001 and after successful aerobraking began mapping in February 2002 (approximately Ls=330 deg.). Images taken by the Thermal Emission Imaging System (THEMIS) on-board the Odyssey spacecraft allow the quantitative retrieval of atmospheric dust and water-ice aerosol optical depth. Atmospheric quantities retrieved from THEMIS build upon existing datasets returned by Mariner 9, Viking, and Mars Global Surveyor (MGS). Data from THEMIS complements the concurrent MGS Thermal Emission Spectrometer (TES) data by offering a later local time (approx. 2:00 for TES vs. approx. 4:00 - 5:30 for THEMIS) and much higher spatial resolution.

  9. Stereoscopic Depth Perception during Binocular Rivalry

    PubMed Central

    Andrews, Timothy J.; Holmes, David

    2011-01-01

    When we view nearby objects, we generate appreciably different retinal images in each eye. Despite this, the visual system can combine these different images to generate a unified view that is distinct from the perception generated from either eye alone (stereopsis). However, there are occasions when the images in the two eyes are too disparate to fuse. Instead, they alternate in perceptual dominance, with the image from one eye being completely excluded from awareness (binocular rivalry). It has been thought that binocular rivalry is the default outcome when binocular fusion is not possible. However, other studies have reported that stereopsis and binocular rivalry can coexist. The aim of this study was to address whether a monocular stimulus that is reported to be suppressed from awareness can continue to contribute to the perception of stereoscopic depth. Our results showed that stereoscopic depth perception was still evident when incompatible monocular images differing in spatial frequency, orientation, spatial phase, or direction of motion engage in binocular rivalry. These results demonstrate a range of conditions in which binocular rivalry and stereopsis can coexist. PMID:21960966

  10. Higher resolution stimulus facilitates depth perception: MT+ plays a significant role in monocular depth perception.

    PubMed

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Hiruma, Nobuyuki

    2014-10-20

    Today, we are faced with high-quality virtual worlds of a completely new nature. For example, digital displays now offer high enough resolution that we cannot distinguish them from the real world. However, little is known about how such high-quality representations contribute to the sense of realness, especially to depth perception. What is the neural mechanism for processing such fine but virtual representations? Here, we psychophysically and physiologically examined the relationship between stimulus resolution and depth perception, using luminance contrast (shading) as a monocular depth cue. We found that a higher resolution stimulus facilitates depth perception even when the difference in stimulus resolution is undetectable. This finding runs against the traditional cognitive hierarchy of visual information processing, in which visual input is processed continuously in a bottom-up cascade of cortical regions that analyze increasingly complex information such as depth. In addition, functional magnetic resonance imaging (fMRI) results reveal that the human middle temporal area (MT+) plays a significant role in monocular depth perception. These results might provide not only new insight into the neural mechanism of depth perception but also into how our visual system develops alongside state-of-the-art technologies.

  11. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    PubMed

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
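
    A minimal sketch of the recognition stage as described (one Hidden Markov Model per activity, classification by maximum likelihood) is given below. It assumes skeleton-joint feature vectors per frame and uses the third-party hmmlearn package; the state count and other settings are placeholders, not the authors' configuration.

```python
# Minimal sketch: one Gaussian HMM per activity, classify a new sequence by
# the model with the highest log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(sequences_by_activity, n_states=4):
    """sequences_by_activity: dict activity -> list of (T_i, n_features) feature arrays."""
    models = {}
    for activity, seqs in sequences_by_activity.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[activity] = m
    return models

def classify(models, sequence):
    """Return the activity label whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda a: models[a].score(sequence))
```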

  12. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    PubMed Central

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-01-01

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital. PMID:24991942

  13. Improving depth estimation from a plenoptic camera by patterned illumination

    NASA Astrophysics Data System (ADS)

    Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.

    2015-05-01

    Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device [1], and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, manipulate the aperture synthetically and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration, and in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth-maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.

  14. Depth propagation and surface construction in 3-D vision.

    PubMed

    Georgeson, Mark A; Yates, Tim A; Schofield, Andrew J

    2009-01-01

    In stereo vision, regions with ambiguous or unspecified disparity can acquire perceived depth from unambiguous regions. This has been called stereo capture, depth interpolation or surface completion. We studied some striking induced depth effects suggesting that depth interpolation and surface completion are distinct stages of visual processing. An inducing texture (2-D Gaussian noise) had sinusoidal modulation of disparity, creating a smooth horizontal corrugation. The central region of this surface was replaced by various test patterns whose perceived corrugation was measured. When the test image was horizontal 1-D noise, shown to one eye or to both eyes without disparity, it appeared corrugated in much the same way as the disparity-modulated (DM) flanking regions. But when the test image was 2-D noise, or vertical 1-D noise, little or no depth was induced. This suggests that horizontal orientation was a key factor. For a horizontal sine-wave luminance grating, strong depth was induced, but for a square-wave grating, depth was induced only when its edges were aligned with the peaks and troughs of the DM flanking surface. These and related results suggest that disparity (or local depth) propagates along horizontal 1-D features, and then a 3-D surface is constructed from the depth samples acquired. The shape of the constructed surface can be different from the inducer, and so surface construction appears to operate on the results of a more local depth propagation process.

  15. Efficient holoscopy image reconstruction.

    PubMed

    Hillmann, Dierck; Franke, Gesa; Lührs, Christian; Koch, Peter; Hüttmann, Gereon

    2012-09-10

    Holoscopy is a tomographic imaging technique that combines digital holography and Fourier-domain optical coherence tomography (OCT) to gain tomograms with diffraction limited resolution and uniform sensitivity over several Rayleigh lengths. The lateral image information is calculated from the spatial interference pattern formed by light scattered from the sample and a reference beam. The depth information is obtained from the spectral dependence of the recorded digital holograms. Numerous digital holograms are acquired at different wavelengths and then reconstructed for a common plane in the sample. Afterwards standard Fourier-domain OCT signal processing achieves depth discrimination. Here we describe and demonstrate an optimized data reconstruction algorithm for holoscopy which is related to the inverse scattering reconstruction of wavelength-scanned full-field optical coherence tomography data. Instead of calculating a regularized pseudoinverse of the forward operator, the recorded optical fields are propagated back into the sample volume. In one processing step the high frequency components of the scattering potential are reconstructed on a non-equidistant grid in three-dimensional spatial frequency space. A Fourier transform yields an OCT equivalent image of the object structure. In contrast to the original holoscopy reconstruction with backpropagation and Fourier transform with respect to the wavenumber, the required processing time does neither depend on the confocal parameter nor on the depth of the volume. For an imaging NA of 0.14, the processing time was decreased by a factor of 15, at higher NA the gain in reconstruction speed may reach two orders of magnitude.

  16. Visual fatigue evaluation based on depth in 3D videos

    NASA Astrophysics Data System (ADS)

    Wang, Feng-jiao; Sang, Xin-zhu; Liu, Yangdong; Shi, Guo-zhong; Xu, Da-xiong

    2013-08-01

    In recent years, 3D technology has become an emerging industry. However, visual fatigue continues to impede its development. In this paper we propose several factors affecting human depth perception as new quality metrics. These factors cover three aspects of 3D video: spatial characteristics, temporal characteristics and scene movement characteristics, all of which play important roles in the viewer's visual perception. If many objects move at a certain velocity and the scene changes quickly, viewers feel uncomfortable. We propose a new algorithm to calculate the weight values of these factors and analyze their effect on visual fatigue. The MSE (mean square error) of different blocks is computed within a frame and between frames of the 3D stereoscopic video. The depth frame is divided into a number of blocks that overlap by half a block in the horizontal and vertical directions, which avoids ignoring edge information of objects in the image. The distribution of these block values is then characterized by kurtosis, focusing on regions at which the human eye is likely to gaze. Weight values are obtained from the normalized kurtosis. Applied to an individual depth frame, the method yields spatial variation; applied between the current and previous frames, it yields temporal variation and scene movement variation. The three factors are linearly combined to give an objective assessment value for a 3D video directly, with the coefficients estimated by linear regression. Finally, the experimental results show that the proposed method exhibits high correlation with subjective quality assessment results.
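
    The block statistics described above can be sketched as follows: MSE over half-overlapping blocks of a depth frame (within a frame or between consecutive frames), followed by a kurtosis-based weight. This is an illustrative reading of the method, not the authors' exact code, and the normalization of the kurtosis is a placeholder.

```python
# Minimal sketch: MSE between corresponding half-overlapping blocks of two
# depth frames, and a kurtosis-based weight over the resulting block values.
import numpy as np
from scipy.stats import kurtosis

def block_mses(frame_a, frame_b, block=16):
    """MSE per block, with blocks overlapping by half a block in x and y."""
    step = block // 2
    h, w = frame_a.shape
    vals = []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            a = frame_a[y:y+block, x:x+block].astype(float)
            b = frame_b[y:y+block, x:x+block].astype(float)
            vals.append(np.mean((a - b) ** 2))
    return np.array(vals)

def kurtosis_weight(block_values):
    """Kurtosis of the block statistic, squashed to (0, 1) as an illustrative normalization."""
    k = kurtosis(block_values, fisher=False)
    return k / (1.0 + k)
```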

  17. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis

    PubMed Central

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-01-01

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687
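
    The spectral analysis step described above amounts to finding the dominant frequency of a region-of-interest time course within a physiologically plausible band. The sketch below is a generic illustration (the band limits are assumptions), not the authors' processing chain.

```python
# Minimal sketch: estimate breathing rate from a region-of-interest time course
# (e.g., mean chest depth per frame) via the dominant spectral peak.
import numpy as np

def breathing_rate_bpm(signal, fps, fmin=0.1, fmax=0.7):
    """Return breaths per minute; fmin/fmax bound the plausible breathing band in Hz."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(sig))
    band = (freqs >= fmin) & (freqs <= fmax)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak
```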

  18. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis.

    PubMed

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-06-28

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.

  19. Extended depth of field in an intrinsically wavefront-encoded biometric iris camera

    NASA Astrophysics Data System (ADS)

    Bergkoetter, Matthew D.; Bentley, Julie L.

    2014-12-01

    This work describes a design process which greatly increases the depth of field of a simple three-element lens system intended for biometric iris recognition. The system is optimized to produce a point spread function which is insensitive to defocus, so that recorded images may be deconvolved without knowledge of the exact object distance. This is essentially a variation on the technique of wavefront encoding, however the desired encoding effect is achieved by aberrations intrinsic to the lens system itself, without the need for a pupil phase mask.

  20. Objective methods for achieving an early prediction of the effectiveness of regional block anesthesia using thermography and hyper-spectral imaging

    NASA Astrophysics Data System (ADS)

    Klaessens, John H. G. M.; Landman, Mattijs; de Roode, Rowland; Noordmans, Herke J.; Verdaasdonk, Rudolf M.

    2011-03-01

    An objective method to measure the effectiveness of regional anesthesia can reduce time and unintended pain inflicted on the patient. A prospective observational study was performed on 22 patients receiving local anesthesia before undergoing hand surgery. Two non-invasive techniques, thermal and oxygenation imaging, were applied to observe the region affected by the peripheral block, and the results were compared to the standard cold sensation test. The supraclavicular block was placed under ultrasound guidance around the brachial plexus by injecting 20 cc of Ropivacaine. The block causes relaxation of the muscles around the blood vessels, resulting in dilatation and hence an increase in blood perfusion, skin temperature and skin oxygenation in the lower arm and hand. Temperatures were acquired with an IR thermal camera (FLIR ThermoCam SC640); the data were recorded and analyzed with ThermaCam Researcher and Matlab software. Narrow-band spectral images were acquired at selected wavelengths with a CCD camera combined with either a Liquid Crystal Tunable Filter (420-730 nm) or a tunable hyper-wavelength LED light source (450-880 nm). Concentration changes of oxygenated and deoxygenated hemoglobin in the dermis of the skin were calculated using the modified Lambert-Beer equation. Both imaging methods showed distinct oxygenation and temperature differences at the surface of the skin of the hand with a good correlation to the anesthetized areas. A temperature response was visible within 5 minutes, compared to the standard of 30 minutes. Both non-contact methods prove to be more objective and provide an earlier prediction of the effectiveness of the anesthetic block.
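
    The modified Lambert-Beer step mentioned above can be written as a small least-squares problem over wavelengths. The sketch below is illustrative; the extinction coefficients and pathlength (or differential pathlength factor) must be taken from the literature and are left as placeholders.

```python
# Minimal sketch: invert attenuation changes at several wavelengths for
# concentration changes of oxy- and deoxy-haemoglobin (modified Lambert-Beer).
import numpy as np

def hb_changes(delta_attenuation, eps_hbo2, eps_hb, pathlength):
    """delta_attenuation, eps_*: arrays over wavelengths; returns (dHbO2, dHb)."""
    E = np.column_stack([eps_hbo2, eps_hb]) * pathlength   # effective extinction matrix
    sol, *_ = np.linalg.lstsq(E, np.asarray(delta_attenuation), rcond=None)
    return sol  # concentration changes of oxy- and deoxy-haemoglobin
```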

  1. Snapshot depth sensitive Raman spectroscopy in layered tissues.

    PubMed

    Liu, Wei; Ong, Yi Hong; Yu, Xiao Jun; Ju, Jian; Perlaki, Clint Michael; Liu, Lin Bo; Liu, Quan

    2016-12-12

    Depth sensitive Raman spectroscopy has been shown effective in the detection of depth dependent Raman spectra in layered tissues. However, the current techniques for depth sensitive Raman measurements based on fiber-optic probes suffer from poor depth resolution and significant variation in probe-sample contact, while lens based techniques either require a change in the objective-sample distance or suffer from slow spectral acquisition. We report a snapshot depth-sensitive Raman technique based on an axicon lens and a ring-to-line fiber assembly to simultaneously acquire Raman signals emitted from five different depths in a non-contact manner without moving any component. A numerical tool was developed to simulate ray tracing and optimize the snapshot depth sensitive setup to achieve a tradeoff between signal collection efficiency and depth resolution for Raman measurements in the skin. Moreover, the snapshot system was demonstrated to be able to acquire depth sensitive Raman spectra not only from transparent and turbid skin phantoms but also from ex vivo pork tissues and in vivo human thumbnails when the excitation laser power was limited to the maximum permissible exposure for human skin. The results suggest the great potential of snapshot depth sensitive Raman spectroscopy for characterizing the skin and other layered tissues in the clinical setting, as well as for similar applications such as quality monitoring of tablets and capsules in the pharmaceutical industry that require rapid measurement of depth dependent Raman spectra.

  2. High-performance lossless and progressive image compression based on an improved integer lifting-scheme Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that the lossless image compression performance is much better than that of Huffman, Zip, lossless JPEG and RAR, and slightly better than (or equal to) the well-known SPIHT. The lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7% and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, and its time efficiency can be improved by 162%; the decoder is about 12.3 times faster than SPIHT's, and its time efficiency can be raised by about 148%. Rather than requiring the largest number of wavelet transform levels, this algorithm has high coding efficiency when the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it can improve coding efficiency and realize progressive transmission coding and decoding.
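
    To make the two ingredients concrete, the sketch below shows one level of the integer CDF(2,2) (5/3) lifting transform and a basic Rice code for the resulting residuals. It illustrates the standard forms only and is not the modified algorithm proposed in the paper.

```python
# Minimal sketch: integer CDF(2,2) lifting step and Rice coding of residuals.
def cdf22_forward(x):
    """One level of the integer 5/3 (CDF 2,2) lifting transform on a 1-D list (even length assumed)."""
    n = len(x)
    d = [x[2*i+1] - (x[2*i] + x[min(2*i+2, n-1)]) // 2 for i in range(n // 2)]  # predict (detail)
    s = [x[2*i] + (d[max(i-1, 0)] + d[i] + 2) // 4 for i in range(n // 2)]      # update (approximation)
    return s, d

def zigzag(v):
    """Map a signed residual to a non-negative integer before Rice coding."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(value, k):
    """Rice code of a non-negative integer: unary quotient + k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

s, d = cdf22_forward([10, 12, 11, 13, 40, 41, 39, 42])
print(s, d, [rice_encode(zigzag(v), 2) for v in d])
```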

  3. Contribution of motion parallax to segmentation and depth perception.

    PubMed

    Yoonessi, Ahmad; Baker, Curtis L

    2011-08-24

    Relative image motion resulting from active movement of the observer could potentially serve as a powerful perceptual cue, both for segmentation of object boundaries and for depth perception. To examine the perceptual role of motion parallax from shearing motion, we measured human performance in three psychophysical tasks: segmentation, depth ordering, and depth magnitude estimation. Stimuli consisted of random dot textures that were synchronized to head movement with sine- or square-wave modulation patterns. Segmentation was assessed with a 2AFC orientation judgment of a motion-defined boundary. In the depth-ordering task, observers reported which modulation half-cycle appeared in front of the other. Perceived depth magnitude was matched to that of a 3D rendered image with multiple static cues. The results indicate that head movement might not be important for segmentation, even though it is crucial for obtaining depth from motion parallax--thus, concomitant depth perception does not appear to facilitate segmentation. Our findings suggest that segmentation works best for abrupt, sharply defined motion boundaries, whereas smooth gradients are more powerful for obtaining depth from motion parallax. Thus, motion parallax may contribute in a different manner to segmentation and to depth perception and suggests that their underlying mechanisms might be distinct.

  4. Method and apparatus to measure the depth of skin burns

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.

    2002-01-01

    A new device for measuring the depth of surface tissue burns based on the rate at which the skin temperature responds to a sudden differential temperature stimulus. This technique can be performed without physical contact with the burned tissue. In one implementation, time-dependent surface temperature data is taken from subsequent frames of a video signal from an infrared-sensitive video camera. When a thermal transient is created, e.g., by turning off a heat lamp directed at the skin surface, the following time-dependent surface temperature data can be used to determine the skin burn depth. Imaging and non-imaging versions of this device can be implemented, thereby enabling laboratory-quality skin burn depth imagers for hospitals as well as hand-held skin burn depth sensors the size of a small pocket flashlight for field use and triage.
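
    One way to picture the measurement principle (an assumption for illustration, not the patented processing) is to fit the per-pixel cooling transient recorded after the heat lamp is switched off with an exponential and use the fitted time constant as the quantity that calibration would relate to burn depth.

```python
# Minimal sketch: fit an exponential cooling model to a surface-temperature
# transient and return its time constant.
import numpy as np
from scipy.optimize import curve_fit

def cooling_model(t, t_inf, dT, tau):
    """Exponential relaxation toward the ambient/steady temperature t_inf."""
    return t_inf + dT * np.exp(-t / tau)

def fit_time_constant(times_s, temps_c):
    """times_s, temps_c: 1-D arrays of sample times and temperatures for one pixel."""
    temps_c = np.asarray(temps_c, dtype=float)
    p0 = (temps_c[-1], temps_c[0] - temps_c[-1], 1.0)   # rough initial guess
    popt, _ = curve_fit(cooling_model, times_s, temps_c, p0=p0, maxfev=5000)
    return popt[2]  # tau, the cooling time constant
```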

  5. Scene Depth Perception Based on Omnidirectional Structured Light.

    PubMed

    Jia, Tong; Wang, BingNan; Zhou, ZhongXuan; Meng, Haixiu

    2016-07-11

    A depth perception method combining omnidirectional images and encoded structured light was proposed. Firstly, a new structured light pattern was presented using monochromatic light; the pattern primitive consists of a "Four-Direction Sand Clock-like" (FDSC) image, which provides more robust and accurate positioning than conventional pattern primitives. Secondly, a projector calibration method based on multiple reference planes was proposed to significantly simplify projector calibration in the constructed omnidirectional imaging system. Thirdly, a depth point cloud matching algorithm based on the principle of prior-constrained iterative closest point under mobile conditions was proposed to avoid the effect of occlusion. Experimental results demonstrated that the proposed method can acquire omnidirectional depth information about large-scale scenes. The error analysis of 16 groups of depth data reported a maximum measuring error of 0.53 mm and an average measuring error of 0.25 mm.
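
    The depth recovery in any camera-projector structured-light system ultimately rests on triangulation. The one-line relation below is the conventional pinhole form, given here for orientation; the paper's omnidirectional geometry and calibration are more involved.

```python
# Minimal sketch: pinhole-style triangulation for a calibrated camera-projector
# pair. Depth follows from the disparity between the observed and projected
# positions of a pattern primitive.
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Z = b * f / d for baseline b (m), focal length f (px) and disparity d (px)."""
    return baseline_m * focal_px / disparity_px
```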

  6. Accelerated Focused Ultrasound Imaging

    PubMed Central

    White, P. Jason; Thomenius, Kai; Clement, Gregory T.

    2010-01-01

    One of the most basic trade-offs in ultrasound imaging involves frame rate, depth, and number of lines. Achieving good spatial resolution and coverage requires a large number of lines, leading to decreases in frame rate. An even more serious imaging challenge occurs with imaging modes involving spatial compounding and 3-D/4-D imaging, which are severely limited by the slow speed of sound in tissue. The present work can overcome these traditional limitations, making ultrasound imaging many-fold faster. By emitting several beams at once, and by separating the resulting overlapped signals through spatial and temporal processing, spatial resolution and/or coverage can be increased many-fold while leaving frame rates unaffected. The proposed approach can also be extended to imaging strategies that do not involve transmit beamforming, such as synthetic aperture imaging. Simulated and experimental results are presented where imaging speed is improved by up to 32-fold, with little impact on image quality. Object complexity has little impact on the method's performance, and data from biological systems can readily be handled. The present work may open the door to novel multiplexed and/or multidimensional protocols considered impractical today. PMID:20040398

  7. Aerosol optical properties derived from the DRAGON-NE Asia campaign, and implications for a single-channel algorithm to retrieve aerosol optical depth in spring from Meteorological Imager (MI) on-board the Communication, Ocean, and Meteorological Satellite (COMS)

    NASA Astrophysics Data System (ADS)

    Kim, M.; Kim, J.; Jeong, U.; Kim, W.; Hong, H.; Holben, B.; Eck, T. F.; Lim, J. H.; Song, C. K.; Lee, S.; Chung, C.-Y.

    2016-02-01

    An aerosol model optimized for northeast Asia is updated with the inversion data from the Distributed Regional Aerosol Gridded Observation Networks (DRAGON)-northeast (NE) Asia campaign which was conducted during spring from March to May 2012. This updated aerosol model was then applied to a single visible channel algorithm to retrieve aerosol optical depth (AOD) from a Meteorological Imager (MI) on-board the geostationary meteorological satellite, Communication, Ocean, and Meteorological Satellite (COMS). This model plays an important role in retrieving accurate AOD from a single visible channel measurement. For the single-channel retrieval, sensitivity tests showed that perturbations by 4 % (0.926 ± 0.04) in the assumed single scattering albedo (SSA) can result in the retrieval error in AOD by over 20 %. Since the measured reflectance at the top of the atmosphere depends on both AOD and SSA, the overestimation of assumed SSA in the aerosol model leads to an underestimation of AOD. Based on the AErosol RObotic NETwork (AERONET) inversion data sets obtained over East Asia before 2011, seasonally analyzed aerosol optical properties (AOPs) were categorized by SSAs at 675 nm of 0.92 ± 0.035 for spring (March, April, and May). After the DRAGON-NE Asia campaign in 2012, the SSA during spring showed a slight increase to 0.93 ± 0.035. In terms of the volume size distribution, the mode radius of coarse particles was increased from 2.08 ± 0.40 to 2.14 ± 0.40. While the original aerosol model consists of volume size distribution and refractive indices obtained before 2011, the new model is constructed by using a total data set after the DRAGON-NE Asia campaign. The large volume of data in high spatial resolution from this intensive campaign can be used to improve the representative aerosol model for East Asia. Accordingly, the new AOD data sets retrieved from a single-channel algorithm, which uses a precalculated look-up table (LUT) with the new aerosol model, show an
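
    The single-channel retrieval referred to above works by inverting a precalculated look-up table of simulated top-of-atmosphere reflectance versus AOD for the assumed aerosol model and observation geometry. The sketch below shows the basic inversion by interpolation with placeholder arrays; it is a simplified illustration, not the operational algorithm.

```python
# Minimal sketch: retrieve AOD by interpolating a look-up table that maps AOD
# to simulated top-of-atmosphere reflectance for fixed geometry and aerosol model.
import numpy as np

def retrieve_aod(measured_toa_reflectance, lut_aod, lut_reflectance):
    """lut_aod and lut_reflectance: monotonically increasing 1-D arrays from the LUT."""
    return np.interp(measured_toa_reflectance, lut_reflectance, lut_aod)
```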

  8. The neural mechanism of binocular depth discrimination

    PubMed Central

    Barlow, H. B.; Blakemore, C.; Pettigrew, J. D.

    1967-01-01

    1. Binocularly driven units were investigated in the cat's primary visual cortex. 2. It was found that a stimulus located correctly in the visual fields of both eyes was more effective in driving the units than a monocular stimulus, and much more effective than a binocular stimulus which was correctly positioned in only one eye: the response to the correctly located image in one eye is vetoed if the image is incorrectly located in the other eye. 3. The vertical and horizontal disparities of the paired retinal images that yielded the maximum response were measured in 87 units from seven cats: the range of horizontal disparities was 6·6°, of vertical disparities 2·2°. 4. With fixed convergence, different units will be optimally excited by objects lying at different distances. This may be the basic mechanism underlying depth discrimination in the cat. PMID:6065881

  9. Effect of Head Position on Facial Soft Tissue Depth Measurements Obtained Using Computed Tomography.

    PubMed

    Caple, Jodi M; Stephan, Carl N; Gregory, Laura S; MacGregor, Donna M

    2016-01-01

    Facial soft tissue depth (FSTD) studies employing clinical computed tomography (CT) data frequently rely on depth measurements from raw 2D orthoslices. However, the position of each patient's head was not standardized in this method, potentially decreasing measurement reliability and accuracy. This study measured FSTDs along the original orthoslice plane and compared these measurements to those standardized by the Frankfurt horizontal (FH). Subadult cranial CT scans (n = 115) were used to measure FSTDs at 18 landmarks. Significant differences were observed between the methods at eight of these landmarks (p < 0.05), demonstrating that high-quality data are not generated simply by employing modern imaging modalities such as CT. Proper technique is crucial to useful results, and maintaining control over head position during FSTD data collection is important. This is easily and most readily achieved in CT techniques by rotating the head to the FH plane after constructing a 3D rendering of the data.

  10. Disparity Gradients and Depth Scaling

    DTIC Science & Technology

    1989-09-01

    …stimuli than for points. This depth scaling effect is discussed in a computational framework of stereo based on a Bayesian approach which allows to…

  11. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located a short distance (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors on the order of 2 cm were measured. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing, and the results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration giving high stereo depth resolution while minimizing stereo depth distortion. It is found that, for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolution but cause greater depth distortion. Thus, with larger intercamera distances, operators will make greater depth errors (because of the greater distortion), but will be more certain that they are not errors (because of the higher resolution).

  12. Perception of relative depth interval: systematic biases in perceived depth.

    PubMed

    Harris, Julie M; Chopin, Adrien; Zeiner, Katharina; Hibbard, Paul B

    2012-01-01

    Given an estimate of the binocular disparity between a pair of points and an estimate of the viewing distance, or knowledge of eye position, it should be possible to obtain an estimate of their depth separation. Here we show that, when points are arranged in different vertical geometric configurations across two intervals, many observers find this task difficult. Those who can do the task tend to perceive the depth interval in one configuration as very different from depth in the other configuration. We explore two plausible explanations for this effect. The first is the tilt of the empirical vertical horopter: Points perceived along an apparently vertical line correspond to a physical line of points tilted backwards in space. Second, the eyes can rotate in response to a particular stimulus. Without compensation for this rotation, biases in depth perception would result. We measured cyclovergence indirectly, using a standard psychophysical task, while observers viewed our depth configuration. Biases predicted from error due either to cyclovergence or to the tilted vertical horopter were not consistent with the depth configuration results. Our data suggest that, even for the simplest scenes, we do not have ready access to metric depth from binocular disparity.

  13. Lamina 3D display: projection-type depth-fused display using polarization-encoded depth information.

    PubMed

    Park, Soon-gi; Yoon, Sangcheol; Yeom, Jiwoon; Baek, Hogil; Min, Sung-Wook; Lee, Byoungho

    2014-10-20

    In order to realize three-dimensional (3D) displays, various multiplexing methods have been proposed to add the depth dimension to two-dimensional scenes. However, most of these methods have faced challenges such as the degradation of viewing qualities, the requirement of complicated equipment, and large amounts of data. In this paper, we further developed our previous concept, polarization distributed depth map, to propose the Lamina 3D display as a method for encoding and reconstructing depth information using the polarization status. By adopting projection optics to the depth encoding system, reconstructed 3D images can be scaled like images of 2D projection displays. 3D reconstruction characteristics of the polarization-encoded images are analyzed with simulation and experiment. The experimental system is also demonstrated to show feasibility of the proposed method.

  14. Single grating x-ray imaging for dynamic biological systems

    NASA Astrophysics Data System (ADS)

    Morgan, Kaye S.; Paganin, David M.; Parsons, David W.; Donnelley, Martin; Yagi, Naoto; Uesugi, Kentaro; Suzuki, Yoshio; Takeuchi, Akihisa; Siu, Karen K. W.

    2012-07-01

    Biomedical studies are already benefiting from the excellent contrast offered by phase contrast x-ray imaging, but live imaging work presents several challenges. Living samples make it particularly difficult to achieve high resolution, sensitive phase contrast images, as exposures must be short and cannot be repeated. We therefore present a single-exposure, high-flux method of differential phase contrast imaging [1, 2, 3] in the context of imaging live airways for Cystic Fibrosis (CF) treatment assessment [4]. The CF study seeks to non-invasively observe the liquid lining the airways, which should increase in depth in response to effective treatments. Both high spatial resolution and sensitivity are required in order to track micron size changes in a liquid that is not easily differentiated from the tissue on which it lies. Our imaging method achieves these goals by using a single attenuation grating or grid as a reference pattern, and analyzing how the sample deforms the pattern to quantitatively retrieve the phase depth of the sample. The deformations are mapped at each pixel in the image using local cross-correlations comparing each 'sample and pattern' image with a reference 'pattern only' image taken before the sample is introduced. This produces a differential phase image, which may be integrated to give the sample phase depth.
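    The analysis step described in this record, mapping the local deformation of the reference grid by cross-correlating each analysis window of the "sample and pattern" image against the "pattern only" image, can be sketched for one window. The brute-force integer-pixel search below is an illustrative simplification (real implementations use sub-pixel interpolation); all variable names are assumptions.

```python
import numpy as np

def local_shift(sample_win, ref_win, max_shift=3):
    """Estimate the integer-pixel displacement of the grid pattern within one
    analysis window by a brute-force cross-correlation search."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(sample_win, dy, axis=0), dx, axis=1)
            score = np.sum(shifted * ref_win)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best  # (dy, dx) in pixels, proportional to the local phase gradient

# Sweeping this window across the image yields a map of pattern displacements,
# i.e. a differential phase image, which is then integrated to give phase depth.
```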

  15. Motion-Adaptive Depth Superresolution.

    PubMed

    Kamilov, Ulugbek S; Boufounos, Petros T

    2017-04-01

    Multi-modal sensing is becoming increasingly important in a number of applications, providing new capabilities along with new processing challenges. In this paper, we explore the benefit of combining a low-resolution depth sensor with a high-resolution optical video sensor in order to provide a high-resolution depth map of the scene. We propose a new formulation that is able to incorporate temporal information and exploit the motion of objects in the video to significantly improve the results over existing methods. In particular, our approach exploits the space-time redundancy in the depth and intensity using motion-adaptive low-rank regularization. We provide experiments to validate our approach and confirm that the quality of the estimated high-resolution depth is improved substantially. Our approach can be a first component in systems using vision techniques that rely on high-resolution depth information.

  16. Neural computations underlying depth perception

    PubMed Central

    Anzai, Akiyuki; DeAngelis, Gregory C.

    2010-01-01

    Summary Neural mechanisms underlying depth perception are reviewed with respect to three computational goals: determining surface depth order, gauging depth intervals, and representing 3D surface geometry and object shape. Accumulating evidence suggests that these three computational steps correspond to different stages of cortical processing. Early visual areas appear to be involved in depth ordering, while depth intervals, expressed in terms of relative disparities, are likely represented at intermediate stages. Finally, 3D surfaces appear to be processed in higher cortical areas, including an area in which individual neurons encode 3D surface geometry, and a population of these neurons may therefore represent 3D object shape. How these processes are integrated to form a coherent 3D percept of the world remains to be understood. PMID:20451369

  17. Joint digital-optical design of imaging systems for grayscale objects

    NASA Astrophysics Data System (ADS)

    Robinson, M. Dirk; Stork, David G.

    2008-09-01

    In many imaging applications, the objects of interest have a broad range of strongly correlated spectral components. For example, the spectral components of grayscale objects such as media printed with black ink or toner are nearly perfectly correlated spatially. We describe how to exploit such correlation during the design of electro-optical imaging systems to achieve greater imaging performance and lower optical component cost. These advantages are achieved by jointly optimizing the optical, detector, and digital image processing subsystems using a unified statistical imaging performance measure. The resulting optical systems have a lower F# and greater depth-of-field than systems that do not exploit spectral correlations.

  18. Three-dimensional differential interference contrast microscopy using synthetic aperture imaging

    PubMed Central

    Kim, Moonseok; Choi, Youngwoon; Fang-Yen, Christopher; Sung, Yongjin; Kim, Kwanhyung; Dasari, Ramachandra R.; Feld, Michael S.

    2012-01-01

    Abstract. We implement differential interference contrast (DIC) microscopy using high-speed synthetic aperture imaging that expands the passband of coherent imaging by a factor of 2.2. To the aperture-synthesized coherent image we apply numerical post-processing and obtain a high-contrast DIC image for arbitrary shearing direction and bias retardation. In addition, we obtain images at different depths without scanning the objective lens by numerically propagating the acquired coherent images. Our method achieves high-resolution and high-contrast 3-D DIC imaging of live biological cells. The proposed method will be useful for monitoring the 3-D dynamics of intracellular particles. PMID:22463035

  19. Physical Optics Based Computational Imaging Systems

    NASA Astrophysics Data System (ADS)

    Olivas, Stephen Joseph

    There is an ongoing demand on behalf of the consumer, medical and military industries to make lighter weight, higher resolution, wider field-of-view and extended depth-of-focus cameras. This leads to design trade-offs between performance and cost, be it size, weight, power, or expense. This has brought attention to finding new ways to extend the design space while adhering to cost constraints. Extending the functionality of an imager in order to achieve extraordinary performance is a common theme of computational imaging, a field of study which uses additional hardware along with tailored algorithms to formulate and solve inverse problems in imaging. This dissertation details four specific systems within this emerging field: a Fiber Bundle Relayed Imaging System, an Extended Depth-of-Focus Imaging System, a Platform Motion Blur Image Restoration System, and a Compressive Imaging System. The Fiber Bundle Relayed Imaging System is part of a larger project; the work presented in this thesis was to use image processing techniques to mitigate problems inherent to fiber bundle image relay and then form high-resolution, wide field-of-view panoramas captured from multiple sensors within a custom state-of-the-art imager. The goals of the Extended Depth-of-Focus System were to characterize the angular and depth dependence of the PSF of a focal-swept imager in order to increase the depth of scene that is imaged in acceptable focus. The goal of the Platform Motion Blur Image Restoration System was to build a system that can capture a high signal-to-noise ratio (SNR), long-exposure image, which is inherently blurred, while at the same time capturing motion data using additional optical sensors in order to deblur the degraded images. Lastly, the objective of the Compressive Imager was to design and build a system functionally similar to the Single Pixel Camera and use it to test new sampling methods for image generation and to characterize it against a traditional camera. These computational

  20. Mechanistic evaluation of virus clearance by depth filtration.

    PubMed

    Venkiteshwaran, Adith; Fogle, Jace; Patnaik, Purbasa; Kowle, Ron; Chen, Dayue

    2015-01-01

    Virus clearance by depth filtration has not been well understood mechanistically, due to a lack of quantitative data on filter charge characteristics and an absence of systematic studies. It is generally believed that both electrostatic interactions and size-based mechanical entrapment contribute to virus clearance by depth filtration. In order to establish whether the effectiveness of virus clearance correlates with the charge characteristics of a given depth filter, a counter-ion displacement technique was employed to determine the ionic capacity of several depth filters. Two depth filters (Millipore B1HC and X0HC) with significant differences in ionic capacity were selected and evaluated for their ability to eliminate viruses. The high ionic capacity X0HC filter showed complete porcine parvovirus (PPV) clearance (eliminating the spiked viruses to below the limit of detection) under low conductivity conditions (≤2.5 mS/cm), achieving a log10 reduction factor (LRF) of > 4.8. On the other hand, the low ionic capacity B1HC filter achieved only ∼2.1-3.0 LRF of PPV clearance under the same conditions. These results indicate that parvovirus clearance by these two depth filters is achieved mainly via electrostatic interactions between the filters and PPV. When the much larger xenotropic murine leukemia virus (XMuLV) was used as the model virus, complete retrovirus clearance was obtained under all conditions evaluated for both depth filters, suggesting the involvement of mechanisms other than electrostatic interactions alone in XMuLV clearance.

  1. Graded Achievement, Tested Achievement, and Validity

    ERIC Educational Resources Information Center

    Brookhart, Susan M.

    2015-01-01

    Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…

  2. Depth Profilometry via Multiplexed Optical High-Coherence Interferometry

    PubMed Central

    Kazemzadeh, Farnoud; Wong, Alexander; Behr, Bradford B.; Hajian, Arsen R.

    2015-01-01

    Depth profilometry involves the measurement of the depth profile of objects, and has significant potential for various industrial applications that benefit from non-destructive sub-surface profiling, such as defect detection, corrosion assessment, and dental assessment, to name a few. In this study, we investigate the feasibility of depth profilometry using a Multiplexed Optical High-coherence Interferometry (MOHI) instrument. The MOHI instrument utilizes the spatial coherence of a laser and the interferometric properties of light to probe the reflectivity of a sample as a function of depth. The axial and lateral resolutions, as well as the imaging depth, are decoupled in the MOHI instrument. The MOHI instrument is capable of multiplexing interferometric measurements into 480 one-dimensional interferograms at a location on the sample and is built with axial and lateral resolutions of 40 μm at a maximum imaging depth of 700 μm. Preliminary results, in which a piece of sand-blasted aluminum, an NBK7 glass piece, and an optical phantom were successfully probed using the MOHI instrument to produce depth profiles, demonstrate the feasibility of such an instrument for performing depth profilometry. PMID:25803289

  3. Quantitative phase analysis through scattering media by depth-filtered digital holography

    NASA Astrophysics Data System (ADS)

    Goebel, Sebastian; Jaedicke, Volker; Koukourakis, Nektarios; Wiethoff, Helge; Adinda-Ougba, Adamou; Gerhardt, Nils C.; Welp, Hubert; Hofmann, Martin R.

    2013-02-01

    Digital holography (DH) is capable of providing three-dimensional topological surface profiles with axial resolutions in the nanometer range. Achieving such high resolutions requires an analysis of the phase information of the reflected light by means of numerical reconstruction methods. Unfortunately, the phase analysis of structures located in scattering media is usually disturbed by interference with light reflected from different depths. In contrast, low-coherence interferometry and optical coherence tomography (OCT) use broadband light sources to investigate the sample with a coherence gate, providing tomographic measurements in scattering samples with a poorer depth resolution of a few micrometers. We propose a new approach that allows the phase information to be recovered even through scattering media. The approach combines both techniques by creating synthesized interference patterns from scanned spectra. After applying an inverse Fourier transform to each spectrum, we obtain three-dimensional depth-resolved images. Subsequently, contributions of photons scattered from unwanted regions are suppressed by depth-filtering. The back-transformed data can be considered as multiple synthesized holograms, and the corresponding phase information can be extracted directly from the depth-filtered spectra. We used this approach to record and reconstruct holograms of a reflective surface through a scattering layer. Our results demonstrate a proof of principle, as the quantitative phase profile could be recovered and effectively separated from scattering influences. Moreover, additional processing steps could pave the way to further applications, e.g. spectroscopic analysis.
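    The processing chain in this record, an inverse Fourier transform of each scanned spectrum, suppression of depths outside the coherence gate, and direct phase read-out, can be sketched in a few lines. The sketch assumes spectra already resampled to be uniform in wavenumber; array shapes and the gate index are illustrative assumptions.

```python
import numpy as np

def depth_gated_phase(spectra, gate_index):
    """spectra: (n_positions, n_k) interference spectra sampled evenly in wavenumber.
    An inverse Fourier transform along the spectral axis resolves depth; the complex
    value at the gated depth index carries the phase of the surface reflection, with
    contributions from other depths suppressed."""
    ascans = np.fft.ifft(spectra, axis=1)          # depth-resolved complex signal
    phase = np.angle(ascans[:, gate_index])        # phase at the selected depth only
    return np.unwrap(phase)                        # relative optical path across positions
```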

  4. Cortical Depth Dependence of the Diffusion Anisotropy in the Human Cortical Gray Matter In Vivo

    PubMed Central

    Truong, Trong-Kha; Guidon, Arnaud; Song, Allen W.

    2014-01-01

    Diffusion tensor imaging (DTI) is typically used to study white matter fiber pathways, but may also be valuable to assess the microstructure of cortical gray matter. Although cortical diffusion anisotropy has previously been observed in vivo, its cortical depth dependence has mostly been examined in high-resolution ex vivo studies. This study thus aims to investigate the cortical depth dependence of the diffusion anisotropy in the human cortex in vivo on a clinical 3 T scanner. Specifically, a novel multishot constant-density spiral DTI technique with inherent correction of motion-induced phase errors was used to achieve a high spatial resolution (0.625×0.625×3 mm) and high spatial fidelity with no scan time penalty. The results show: (i) a diffusion anisotropy in the cortical gray matter, with a primarily radial diffusion orientation, as observed in previous ex vivo and in vivo studies, and (ii) a cortical depth dependence of the fractional anisotropy, with consistently higher values in the middle cortical lamina than in the deep and superficial cortical laminae, as observed in previous ex vivo studies. These results, which are consistent across subjects, demonstrate the feasibility of this technique for investigating the cortical depth dependence of the diffusion anisotropy in the human cortex in vivo. PMID:24608869

  5. Cortical depth dependence of the diffusion anisotropy in the human cortical gray matter in vivo.

    PubMed

    Truong, Trong-Kha; Guidon, Arnaud; Song, Allen W

    2014-01-01

    Diffusion tensor imaging (DTI) is typically used to study white matter fiber pathways, but may also be valuable to assess the microstructure of cortical gray matter. Although cortical diffusion anisotropy has previously been observed in vivo, its cortical depth dependence has mostly been examined in high-resolution ex vivo studies. This study thus aims to investigate the cortical depth dependence of the diffusion anisotropy in the human cortex in vivo on a clinical 3 T scanner. Specifically, a novel multishot constant-density spiral DTI technique with inherent correction of motion-induced phase errors was used to achieve a high spatial resolution (0.625 × 0.625 × 3 mm) and high spatial fidelity with no scan time penalty. The results show: (i) a diffusion anisotropy in the cortical gray matter, with a primarily radial diffusion orientation, as observed in previous ex vivo and in vivo studies, and (ii) a cortical depth dependence of the fractional anisotropy, with consistently higher values in the middle cortical lamina than in the deep and superficial cortical laminae, as observed in previous ex vivo studies. These results, which are consistent across subjects, demonstrate the feasibility of this technique for investigating the cortical depth dependence of the diffusion anisotropy in the human cortex in vivo.

  6. Depth profiles of D and T in Metal-hydride films up to large depth

    NASA Astrophysics Data System (ADS)

    Zhang, HongLiang; Ding, Wei; Su, Ranran; Zhang, Yang; Shi, Liqun

    2016-03-01

    In this paper, a method combining the D(3He, p)4He nuclear reaction and proton backscattering (PBS) was adopted to measure the depth profiles of both D and T in a TiDxTy/Mo film more than 5 μm thick. 3He and proton beam energies were varied from 1.0 to 3.0 MeV and from 1.5 to 3.8 MeV, respectively, in order to achieve better depth resolution. By carefully varying the incident energies, an optimum resolution of less than 0.5 μm for the D and T distributions throughout the whole analyzed range could be achieved.

  7. Sub-cellular resolution imaging with Gabor domain optical coherence microscopy

    NASA Astrophysics Data System (ADS)

    Meemon, P.; Lee, K. S.; Murali, S.; Kaya, I.; Thompson, K. P.; Rolland, J. P.

    2010-02-01

    Optical Coherence Microscopy (OCM) utilizes a high-NA microscope objective in the sample arm to achieve an axially and laterally high resolution OCT image. An increase in NA, however, leads to a dramatically decreased depth of focus (DOF) and hence shortens the imaging depth range, so that high lateral resolution is maintained only within a small depth region around the focal plane. One solution to increase the depth of imaging while keeping a high lateral resolution is dynamic focusing. Utilizing the voltage-controlled refocus capability of a liquid lens, we have recently presented a solution for invariant high resolution imaging using the liquid lens embedded within a fixed-optics, hand-held custom microscope designed specifically for optical imaging systems using a broadband light source at 800 nm center wavelength. Subsequently, we have developed Gabor-Domain Optical Coherence Microscopy (GD-OCM), which utilizes the high-speed imaging of spectral domain OCT, the high lateral resolution of OCM, and the real-time refocusing capability of our custom-designed variable-focus objective. In this paper we demonstrate in detail how the in-focus portions of multiple cross-sectional images, acquired at discrete refocusing steps along depth enabled by the varifocal probe, can be extracted and fused to form an image with invariant lateral resolution. We demonstrate sub-cellular resolution imaging of an African frog tadpole (Xenopus laevis) taken from a 500 μm x 500 μm cross-section.
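    The extract-and-fuse step described here can be summarized with a simple sketch in which each refocused acquisition contributes only the depth band around its own focal plane. Real GD-OCM processing blends the bands more carefully; the hard cut, array layout, and names below are illustrative assumptions.

```python
import numpy as np

def gabor_fuse(volumes, focus_indices, half_band):
    """volumes: list of (n_z, n_x) cross-sections, each refocused at a different
    depth index; keep only the in-focus band from each and stitch them along z."""
    fused = np.zeros_like(volumes[0])
    for vol, zf in zip(volumes, focus_indices):
        z0 = max(zf - half_band, 0)
        z1 = min(zf + half_band, vol.shape[0])
        fused[z0:z1, :] = vol[z0:z1, :]   # in-focus slab from this acquisition
    return fused
```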

  8. Depth-resolved measurements with elliptically polarized reflectance spectroscopy

    PubMed Central

    Bailey, Maria J.; Sokolov, Konstantin

    2016-01-01

    The ability of elliptical polarized reflectance spectroscopy (EPRS) to detect spectroscopic alterations in tissue mimicking phantoms and in biological tissue in situ is demonstrated. It is shown that there is a linear relationship between light penetration depth and ellipticity. This dependence is used to demonstrate the feasibility of a depth-resolved spectroscopic imaging using EPRS. The advantages and drawbacks of EPRS in evaluation of biological tissue are analyzed and discussed. PMID:27446712

  9. Use of LIDAR for Measuring Snowpack Depth

    NASA Astrophysics Data System (ADS)

    Miller, S. L.; Elder, K.; Cline, D.; Davis, R. E.; Ochs, E.

    2003-12-01

    Airborne LIDAR measurements were made near the date of peak snow accumulation in Colorado as part of the NASA Cold Land Processes Experiment (CLPX). LIDAR (LIght Detection And Ranging) overflights were repeated in the late summer following the experiment to obtain a baseline on the terrain in the areas where wintertime LIDAR data were collected. These areas were also measured for many snowpack parameters, including snow depth, by field crews near the winter overflight date. The surfaces generated by differencing the two LIDAR images produced a high-resolution spatial map of snow depth. The results were compared to point measurements of snow depth collected by the field teams. Results were also compared to modeled continuous distributions of snow cover to obtain differences in volume of snow predicted over the study sites. Absolute accuracy of the LIDAR data was evaluated using portions of the LIDAR imagery that was snow free during both overflights. The CLPX field campaign made on-site measurements at nine 1-km square study sites. Site characteristics varied greatly from subalpine to alpine, from thick forest to grassland, and from complex to flat terrain. The observed snowpacks varied between the deepest found in Colorado to shallow, discontinuous snow cover.
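    The core of the snow-depth product in this record is a surface difference between the winter and snow-free LIDAR digital elevation models. A minimal sketch of that differencing step is given below, assuming both surfaces have already been gridded onto the same raster; the file names are hypothetical.

```python
import numpy as np

# Hypothetical co-registered rasters of surface elevation in metres.
winter_surface = np.load("winter_dem.npy")
bare_surface = np.load("summer_dem.npy")

snow_depth = winter_surface - bare_surface
snow_depth[snow_depth < 0] = 0.0   # small negative values reflect ranging noise

# Pixels that are snow free in both flights (e.g. plowed roads) give an estimate
# of the absolute vertical accuracy of the differenced product.
```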

  10. Diurnal variations in optical depth at Mars

    NASA Technical Reports Server (NTRS)

    Colburn, D. S.; Pollack, J. B.; Haberle, R. M.

    1989-01-01

    Viking lander camera images of the Sun were used to compute atmospheric optical depth at two sites over a period of 1 1/3 martian years. The complete set of 1044 optical depth determinations is presented in graphical and tabular form. Error estimates are presented in detail. Optical depths in the morning (AM) are generally larger than in the afternoon (PM). The AM-PM differences are ascribed to condensation of water vapor into atmospheric ice aerosols at night and their evaporation in midday. A smoothed time series of these differences shows several seasonal peaks. These are simulated using a one-dimensional radiative convective model which predicts martian atmospheric temperature profiles. A calculation combining these profiles with water vapor measurements from the Mars Atmospheric Water Detector is used to predict when the diurnal variations of water condensation should occur. The model reproduces a majority of the observed peaks and shows the factors influencing the process. Diurnal variation of condensation is shown to peak when the latitude and season combine to warm the atmosphere to the optimum temperature, cool enough to condense vapor at night and warm enough to cause evaporation at midday.

  11. Diurnal variations in optical depth at Mars

    NASA Astrophysics Data System (ADS)

    Colburn, D. S.; Pollack, J. B.; Haberle, R. M.

    1989-05-01

    Viking lander camera images of the Sun were used to compute atmospheric optical depth at two sites over a period of 1 1/3 martian years. The complete set of 1044 optical depth determinations is presented in graphical and tabular form. Error estimates are presented in detail. Optical depths in the morning (AM) are generally larger than in the afternoon (PM). The AM-PM differences are ascribed to condensation of water vapor into atmospheric ice aerosols at night and their evaporation in midday. A smoothed time series of these differences shows several seasonal peaks. These are simulated using a one-dimensional radiative convective model which predicts martian atmospheric temperature profiles. A calculation combining these profiles with water vapor measurements from the Mars Atmospheric Water Detector is used to predict when the diurnal variations of water condensation should occur. The model reproduces a majority of the observed peaks and shows the factors influencing the process. Diurnal variation of condensation is shown to peak when the latitude and season combine to warm the atmosphere to the optimum temperature, cool enough to condense vapor at night and warm enough to cause evaporation at midday.

  12. Multi-depth photoacoustic microscopy with a focus tunable lens

    NASA Astrophysics Data System (ADS)

    Lee, Kiri; Chung, Euiheon; Eom, Tae Joong

    2015-03-01

    Optical-resolution photoacoustic microscopy (OR-PAM) has been studied as a way to improve imaging resolution and to provide functional imaging without labeling of biological samples. However, the use of a high numerical aperture (NA) objective lens confines the field of view or the axial imaging range of OR-PAM. In order to obtain images at different layers, one needs to change either the sample position or the focusing position by mechanical scanning. This mechanical movement of the sample or the objective lens limits the scanning speed and the positioning precision. In this study, we propose a multi-depth PAM with a focus tunable lens. We electrically adjusted the focal length in the depth direction of the sample and extended the axial imaging range twofold, up to 660 μm, with the objective lens (20X, NA 0.4). The proposed approach can increase scanning speed and avoid step-motor-induced distortions during PA signal acquisition, without mechanical scanning in the depth direction. To investigate the performance of the multi-depth PAM system, we scanned a black human hair and the ear of a living nude mouse (BALB/c Nude). The obtained PAM images present volumetric renderings of the black hair and of the vasculature of the nude mouse.

  13. Estimating spatial distribution of daily snow depth with kriging methods: combination of MODIS snow cover area data and ground-based observations

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Wang, H. W.; Hou, J. L.

    2015-09-01

    Accurately measuring the spatial distribution of the snow depth is difficult because stations are sparse, particularly in western China. In this study, we develop a novel scheme that produces a reasonable spatial distribution of the daily snow depth using kriging interpolation methods. These methods combine the effects of elevation with information from Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover area (SCA) products. The scheme uses snow-free pixels in MODIS SCA images with clouds removed to identify virtual stations, or areas with zero snow depth, to compensate for the scarcity and uneven distribution of stations. Four types of kriging methods are tested: ordinary kriging (OK), universal kriging (UK), ordinary co-kriging (OCK), and universal co-kriging (UCK). These methods are applied to daily snow depth observations at 50 meteorological stations in northern Xinjiang Province, China. The results show that the spatial distribution of snow depth can be accurately reconstructed using these kriging methods. The added virtual stations improve the distribution of the snow depth and reduce the smoothing effects of the kriging process. The best performance is achieved by the OK method in cases with shallow snow cover and by the UCK method when snow cover is widespread.
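    Of the four interpolators compared in this record, ordinary kriging is the simplest, and a minimal sketch is given below with a fixed spherical variogram whose sill and range are illustrative rather than fitted. The virtual stations described in the record would simply be appended to the observation arrays with a snow depth of zero before interpolation; all names and parameter values are assumptions.

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_new, sill=1.0, rng=50.0):
    """Minimal ordinary kriging with a spherical variogram.
    xy_obs: (n, 2) station coordinates, z_obs: (n,) snow depths,
    xy_new: (m, 2) prediction locations."""
    def gamma(h):
        h = np.minimum(h / rng, 1.0)
        return sill * (1.5 * h - 0.5 * h ** 3)

    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))        # kriging system with Lagrange multiplier
    A[:n, :n] = gamma(d_obs)
    A[-1, -1] = 0.0

    z_new = np.empty(len(xy_new))
    for i, p in enumerate(xy_new):
        b = np.append(gamma(np.linalg.norm(xy_obs - p, axis=1)), 1.0)
        w = np.linalg.solve(A, b)[:n]  # kriging weights (sum to 1)
        z_new[i] = w @ z_obs
    return z_new

# Virtual stations: snow-free MODIS SCA pixels contribute (x, y) points with z = 0,
# densifying the observation set where real gauges are absent.
```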

  14. Solving the depth of the repeated texture areas based on the clustering algorithm

    NASA Astrophysics Data System (ADS)

    Xiong, Zhang; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Reconstructing the 3D scene in monocular stereo vision requires the depth of the scene points in the image. However, mismatches inevitably occur during image matching, especially when the images contain many repeated-texture areas, where large numbers of erroneous matches arise. At present, multiple-baseline stereo imaging algorithms are commonly used to eliminate matching errors in repeated-texture areas. Such algorithms can resolve the ambiguity caused by repeated texture, but they impose restrictions on the baseline and are slow. In this paper, we put forward an algorithm for calculating the depth of matching points in repeated-texture areas based on a clustering algorithm. First, we preprocess the images with a Gaussian filter. Second, we segment the repeated-texture regions of the images into image blocks using a superpixel-based spectral-clustering segmentation algorithm and label the blocks. Then we match the two images and solve for the depth of the image. Finally, each image block is assigned the median of all depth values computed for the points within the block, yielding the depth of the repeated-texture areas. Results from a large number of image experiments show that the algorithm calculates the depth of repeated-texture areas effectively.
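    The final aggregation step described above, assigning each segmented block the median of the depths computed inside it, can be sketched directly; the array names and the label source are illustrative assumptions.

```python
import numpy as np

def block_median_depth(depth_map, labels):
    """depth_map: per-pixel depths from ordinary stereo matching (may contain
    mismatches in repeated-texture areas); labels: block index per pixel from the
    superpixel/spectral-clustering segmentation. Each block gets its median depth."""
    out = depth_map.copy()
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = np.median(depth_map[mask])   # median suppresses outlier matches
    return out
```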

  15. Integrated interpretation of overlapping AEM datasets achieved through standardisation

    NASA Astrophysics Data System (ADS)

    Sørensen, Camilla C.; Munday, Tim; Heinson, Graham

    2015-12-01

    Numerous airborne electromagnetic surveys have been acquired in Australia using a variety of systems. It is not uncommon to find two or more surveys covering the same ground, but acquired using different systems and at different times. Being able to combine overlapping datasets and get a spatially coherent resistivity-depth image of the ground can assist geological interpretation, particularly when more subtle geophysical responses are important. Combining resistivity-depth models obtained from the inversion of airborne electromagnetic (AEM) data can be challenging, given differences in system configuration, geometry, flying height and preservation or monitoring of system acquisition parameters such as waveform. In this study, we define and apply an approach to overlapping AEM surveys, acquired by fixed wing and helicopter time domain electromagnetic (EM) systems flown in the vicinity of the Goulds Dam uranium deposit in the Frome Embayment, South Australia, with the aim of mapping the basement geometry and the extent of the Billeroo palaeovalley. Ground EM soundings were used to standardise the AEM data, although results indicated that only data from the REPTEM system needed to be corrected to bring the two surveys into agreement and to achieve coherent spatial resistivity-depth intervals.

  16. Depth sensitive oblique polarized reflectance spectroscopy of oral epithelial tissue

    NASA Astrophysics Data System (ADS)

    Jimenez, Maria K.; Lam, Sylvia; Poh, Catherine; Sokolov, Konstantin

    2014-05-01

    Identifying depth-dependent alterations associated with epithelial cancerous lesions can be challenging in the oral cavity where variable epithelial thicknesses and troublesome keratin growths are prominent. Spectroscopic methods with enhanced depth resolution would immensely aid in isolating optical properties associated with malignant transformation. Combining multiple beveled fibers, oblique collection geometry, and polarization gating, oblique polarized reflectance spectroscopy (OPRS) achieves depth sensitive detection. We report promising results from a clinical trial of patients with oral lesions suspected of dysplasia or carcinoma demonstrating the potential of OPRS for the analysis of morphological and architectural changes in the context of multilayer, epithelial oral tissue.

  17. Three-dimensional depth profiling of molecular structures.

    PubMed

    Wucher, A; Cheng, J; Zheng, L; Winograd, N

    2009-04-01

    Molecular time of flight secondary ion mass spectrometry (ToF-SIMS) imaging and cluster ion beam erosion are combined to perform a three-dimensional chemical analysis of molecular films. The resulting dataset allows a number of artifacts inherent in sputter depth profiling to be assessed. These artifacts arise from lateral inhomogeneities of either the erosion rate or the sample itself. Using a test structure based on a trehalose film deposited on Si, we demonstrate that the "local" depth resolution may approach values which are close to the physical limit introduced by the information depth of the (static) ToF-SIMS method itself.

  18. Plenoptic depth map in the case of occlusions

    NASA Astrophysics Data System (ADS)

    Yu, Zhan; Yu, Jingyi; Lumsdaine, Andrew; Georgiev, Todor

    2013-03-01

    Recent realizations of hand-held plenoptic cameras have given rise to previously unexplored effects in photography. Designing a mobile phone plenoptic camera is becoming feasible with the significant increase in the computing power of mobile devices and the introduction of System on a Chip. However, capturing a high number of views is still impractical due to special requirements such as an ultra-thin camera and low cost. In this paper, we analyze a mobile plenoptic camera solution with a small number of views. Such a camera can produce a refocusable high-resolution final image if a depth map is generated for every pixel in the sparse set of views. With the captured multi-view images, the obstacle to recovering a high-resolution depth map is occlusion. To resolve occlusions robustly, we first analyze the behavior of pixels in such situations. We show that even under severe occlusion, one can still distinguish different depth layers based on statistics. We estimate the depth of each pixel by discretizing the space in the scene and conducting plane sweeping. Specifically, for each given depth, we gather all corresponding pixels from other views and model the in-focus pixels as a Gaussian distribution. We show how it is possible to distinguish occluded pixels from in-focus pixels in order to find the depths. Final depth maps are computed in real scenes captured by a mobile plenoptic camera.
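    The plane-sweep scoring described in this record can be sketched for a single target pixel. The record models in-focus pixels as a Gaussian distribution; the sketch below substitutes a median/MAD spread as a robust stand-in that tolerates occluded outlier views, so treat it as an illustrative simplification with made-up variable names.

```python
import numpy as np

def plane_sweep_depth(samples, depths):
    """samples: (n_depths, n_views) intensities gathered for one target pixel,
    where samples[d] holds the values the other views contribute if the point
    lay at depths[d]. The correct depth is the one where the contributing views
    agree most tightly; occluded views appear as outliers."""
    scores = np.empty(len(depths))
    for d in range(len(depths)):
        v = samples[d]
        med = np.median(v)
        scores[d] = np.median(np.abs(v - med))   # robust spread (MAD)
    return depths[np.argmin(scores)]
```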

  19. Optimal integration of shading and binocular disparity for depth perception.

    PubMed

    Lovell, Paul G; Bloj, Marina; Harris, Julie M

    2012-01-01

    We explore the relative utility of shape from shading and binocular disparity for depth perception. Ray-traced images either featured a smooth surface illuminated from above (shading-only) or were defined by small dots (disparity-only). Observers judged which of a pair of smoothly curved convex objects had the most depth. The shading cue was around half as reliable as the rich disparity information for depth discrimination. Shading- and disparity-defined cues were combined by placing dots in the stimulus image, superimposed upon the shaded surface, resulting in veridical shading and binocular disparity. Independently varying the depth delivered by each channel allowed the creation of conflicting disparity-defined and shading-defined depth. We manipulated the reliability of the disparity information by adding disparity noise. As noise levels in the disparity channel were increased, perceived depths and variances shifted toward those of the now more reliable shading cue. Several different models of cue combination were applied to the data. Perceived depths and variances were well predicted by a classic maximum likelihood estimator (MLE) model of cue integration for all but one observer. We discuss the extent to which MLE is the most parsimonious model to account for observer performance.
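    The classic MLE cue-combination rule mentioned in this record weights each cue by its inverse variance, so adding disparity noise shifts the combined estimate toward the shading cue, exactly the pattern reported. The sketch below states that rule; the numerical values are illustrative only.

```python
def mle_combine(depth_shading, var_shading, depth_disparity, var_disparity):
    """Reliability-weighted (MLE) cue combination: weights are inverse variances,
    and the combined variance is never worse than that of the better single cue."""
    w_s = (1.0 / var_shading) / (1.0 / var_shading + 1.0 / var_disparity)
    w_d = 1.0 - w_s
    depth = w_s * depth_shading + w_d * depth_disparity
    var = 1.0 / (1.0 / var_shading + 1.0 / var_disparity)
    return depth, var

# Illustrative numbers: disparity twice as reliable as shading (variance 2 vs 4),
# so the combined percept sits closer to the disparity-defined depth.
print(mle_combine(10.0, 4.0, 12.0, 2.0))   # -> (11.33..., 1.33...)
```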

  20. Composite hull for full-ocean depth

    SciTech Connect

    Garvey, R.E.; Hawkes, G.S.

    1990-01-01

    A lightweight and economical modular design concept for a manned submersible is proposed to give two passengers repeated access to the deepest parts of the ocean in a safe, comfortable, and efficient manner. This versatile craft will allow work and exploration to be accomplished at moderate to maximum depths without any compromise in terms of capabilities or operating cost. Its design follows the experience acquired from the numerous existing "minimum volume" pressure-hull submersibles, and represents a radical departure from conventional designs. This paper addresses issues of gaining effective, safe working access to full ocean depth. Cylindrical composite hulls have the potential to achieve positive buoyancy sufficient to carry personnel and equipment swiftly back to the surface after completing exploration of the deepest ocean. Buoyancy for a submersible is similar to lift for an airplane, except that without lift the airplane remains on the surface, whereas without buoyancy the submersible never returns to the surface. There are two means of achieving buoyancy. The traditional method, used for steel, titanium, or aluminum alloy deep-ocean vehicles, is to add a very large buoy to compensate for the negative buoyancy of the hull. The alternate method is for the hull to displace more than its weight in water. This requires at least twice the compressive strength per unit mass of hull that steel, titanium, or aluminum alloys can provide. Properly constructed organic-matrix composites are light and strong enough to form a dry, 1-atm cabin with buoyancy to carry research staff and equipment to any depth in the ocean. Three different composite hull configurations are presented. Each is capable of serving as a cabin for a two-person crew. None would displace more than 4 tons of seawater. 30 refs., 3 figs., 1 tab.

  1. Depth of anesthesia estimation and control.

    PubMed

    Huang, J W; Lu, Y Y; Nayak, A; Roy, R J

    1999-01-01

    A fully automated system was developed for depth of anesthesia estimation and control with the intravenous anesthetic Propofol. The system determines the anesthesia depth by assessing the characteristics of the mid-latency auditory evoked potentials (MLAEP). The discrete-time wavelet transform was used for compacting the MLAEP, localizing the waveform in time and frequency. Feature reduction utilizing stepwise discriminant analysis selected those wavelet coefficients which best distinguish the waveforms of responders from nonresponders. A total of four features chosen by this analysis, coupled with the Propofol effect-site concentration, were used to train a four-layer artificial neural network for classifying between responders and nonresponders. Propofol is delivered by a mechanical syringe infusion pump controlled by Stanpump, which also estimates the Propofol effect-site and plasma concentrations using a three-compartment pharmacokinetic model with the Tackley parameter set. In animal experiments on dogs, the system achieved an 89.2% accuracy rate for classifying anesthesia depth. This result was further improved when running in real time with a confidence-level estimator which evaluates the reliability of each neural network output. The anesthesia level is adjusted by scheduled incrementation and by a fuzzy-logic based controller which assesses the mean arterial pressure and/or the heart rate for decrementation as necessary. Various safety mechanisms are implemented to safeguard the patient from erratic controller actions caused by external disturbances. This system, completed with a friendly interface, has shown satisfactory performance in estimating and controlling the depth of anesthesia.

  2. Multidepth imaging by chromatic dispersion confocal microscopy

    NASA Astrophysics Data System (ADS)

    Olsovsky, Cory A.; Shelton, Ryan L.; Saldua, Meagan A.; Carrasco-Zevallos, Oscar; Applegate, Brian E.; Maitland, Kristen C.

    2012-03-01

    Confocal microscopy has shown potential as an imaging technique to detect precancer. Imaging cellular features throughout the depth of epithelial tissue may provide useful information for diagnosis. However, the current in vivo axial scanning techniques for confocal microscopy are cumbersome, time-consuming, and restrictive when attempting to reconstruct volumetric images acquired in breathing patients. Chromatic dispersion confocal microscopy (CDCM) exploits severe longitudinal chromatic aberration in the system to axially disperse light from a broadband source and, ultimately, spectrally encode high resolution images along the depth of the object. Hyperchromat lenses are designed to have severe and linear longitudinal chromatic aberration, but have not yet been used in confocal microscopy. We use a hyperchromat lens in a stage-scanning confocal microscope to demonstrate the capability to simultaneously capture information at multiple depths without mechanical scanning. A photonic crystal fiber pumped with an 830 nm wavelength Ti:Sapphire laser was used as a supercontinuum source, and a spectrometer was used as the detector. The chromatic aberration and magnification in the system give a focal shift of 140 μm after the objective lens and an axial resolution of 5.2-7.6 μm over the wavelength range from 585 nm to 830 nm. A 400 × 400 × 140 μm³ volume of pig cheek epithelium was imaged in a single X-Y scan. Nuclei can be seen at several depths within the epithelium. The capability of this technique to achieve simultaneous high resolution confocal imaging at multiple depths may reduce imaging time and motion artifacts and enable volumetric reconstruction of in vivo confocal images of the epithelium.

  3. Invariant high resolution optical skin imaging

    NASA Astrophysics Data System (ADS)

    Murali, Supraja; Rolland, Jannick

    2007-02-01

    Optical Coherence Microscopy (OCM) is a bio-medical low-coherence interferometric imaging technique that has become a topic of active research because of its ability to provide accurate, non-invasive cross-sectional images of biological tissue with much greater resolution than ultrasound, the current common technique. OCM is a derivative of Optical Coherence Tomography (OCT) that enables greater resolution through the implementation of an optical confocal design involving high numerical aperture (NA) focusing in the sample. The primary setback of OCM, however, is the depth dependence of the lateral resolution, which arises from the smaller depth of focus of the high-NA beam. We propose to overcome this limitation using a dynamic-focusing lens design that can achieve quasi-invariant lateral resolution up to a depth of 1.5 mm in skin tissue.

  4. High resolution multiplexed functional imaging in live embryos (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xu, Dongli; Peng, Leilei

    2016-03-01

    Optical projection tomography (OPT) provides isotropic 3D imaging of tissue. Two approaches exist today: wide-field OPT illuminates the entire sample and acquires projection images with a camera, whereas scanning-laser optical tomography (SLOT) generates the projection with a moving laser beam and a point detector. SLOT has superior light-collecting efficiency to wide-field optical tomography, making it ideal for tissue fluorescence imaging. Regardless of the approach, traditional OPT has to compromise between resolution and depth of view. In traditional SLOT, the focused Gaussian beam diverges quickly away from the focal plane, making it impossible to achieve high resolution imaging through a large-volume specimen. We report using a Bessel beam instead of a Gaussian beam to perform SLOT. By illuminating samples with a narrow Bessel beam throughout an extended depth, high-resolution projection images can be measured over a large volume. Under Bessel illumination, the projection image contains signal from the annular rings of the Bessel beam. A traditional inverse Radon transform of these projections results in ringing artifacts in the reconstructed images. Thus a modified 3D filtered back projection algorithm is developed to perform tomographic reconstruction of the Bessel-illuminated projection images. The resulting 3D images are free of artifacts and achieve cellular resolution over an extended sample volume. The system is applied to in-vivo imaging of transgenic Zebrafish embryos. Results prove Bessel SLOT a promising imaging method in developmental biology research.
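    For reference, the conventional per-slice filtered back projection that the record's modified algorithm extends can be run with scikit-image. The sinogram file name and angle sampling below are illustrative assumptions, and the Bessel-specific 3D filtering described in the record is not reproduced here.

```python
import numpy as np
from skimage.transform import iradon

# Conventional filtered back projection of one slice; the sinogram is assumed to
# have one column per projection angle in `theta` (degrees over 0-180).
theta = np.linspace(0.0, 180.0, 400, endpoint=False)
sinogram = np.load("projections_slice.npy")     # hypothetical projection data
reconstruction = iradon(sinogram, theta=theta)  # ramp-filtered back projection
```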

  5. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing

    PubMed Central

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-01-01

    UAVs and other low-altitude remote sensing platforms are proving to be very useful tools for remote sensing of river systems. Currently, consumer-grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map the submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities of using the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results s