Science.gov

Sample records for achievable imaging depth

  1. Depth-aware image seam carving.

    PubMed

    Shen, Jianbing; Wang, Dapeng; Li, Xuelong

    2013-10-01

    An image seam carving algorithm should preserve important and salient objects as much as possible when changing the image size, while not removing the secondary objects in the scene. However, it remains difficult to identify the important and salient objects in a way that avoids distorting them after the input image is resized. In this paper, we develop a novel depth-aware single-image seam carving approach that takes advantage of modern depth cameras such as the Kinect sensor, which captures an RGB color image and its corresponding depth map simultaneously. By considering both the depth information and the just noticeable difference (JND) model, we develop an efficient JND-based significance computation approach using multiscale graph-cut-based energy optimization. Our method achieves better seam carving performance by removing fewer seams from near objects and more seams from distant objects. To the best of our knowledge, our algorithm is the first to use a true depth map captured by the Kinect depth camera for single-image seam carving. The experimental results demonstrate that the proposed approach produces better seam carving results than previous content-aware seam carving methods. PMID:23893762
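
    The record describes the idea only at a high level; as an illustration, here is a minimal depth-aware seam carving sketch in NumPy. The depth-weighted energy term, the `alpha` balance parameter, and the near-object weighting are assumptions for illustration, not the authors' JND/graph-cut formulation.

```python
import numpy as np

def seam_energy(gray, depth, alpha=0.5):
    """Combine image gradients with inverted depth so that near objects get
    high energy (seams avoid them); alpha is an assumed balance weight."""
    gy, gx = np.gradient(gray.astype(float))
    grad = np.abs(gx) + np.abs(gy)
    near = depth.max() - depth.astype(float)          # Kinect: small depth = near
    return (1 - alpha) * grad / (grad.max() + 1e-9) + alpha * near / (near.max() + 1e-9)

def find_vertical_seam(energy):
    """Classic dynamic-programming search for the minimum-cost 8-connected seam."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            k = lo + int(np.argmin(cost[i - 1, lo:hi]))
            back[i, j] = k
            cost[i, j] += cost[i - 1, k]
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        seam[i] = back[i + 1, seam[i + 1]]
    return seam

def remove_seam(img, seam):
    """Drop one pixel per row along the seam (works for 2-D or 3-D arrays)."""
    h, w = img.shape[:2]
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return img[keep].reshape((h, w - 1) + img.shape[2:])
```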

  2. Extended depth of field imaging for high speed object analysis

    NASA Technical Reports Server (NTRS)

    Ortyn, William (Inventor); Basiji, David (Inventor); Frost, Keith (Inventor); Liang, Luchuan (Inventor); Bauer, Richard (Inventor); Hall, Brian (Inventor); Perry, David (Inventor)

    2011-01-01

    A high speed, high-resolution flow imaging system is modified to achieve extended depth of field imaging. An optical distortion element is introduced into the flow imaging system. Light from an object, such as a cell, is distorted by the distortion element, such that a point spread function (PSF) of the imaging system is invariant across an extended depth of field. The distorted light is spectrally dispersed, and the dispersed light is used to simultaneously generate a plurality of images. The images are detected, and image processing is used to enhance the detected images by compensating for the distortion, to achieve extended depth of field images of the object. The post image processing preferably involves de-convolution, and requires knowledge of the PSF of the imaging system, as modified by the optical distortion element.

  3. Fast planar segmentation of depth images

    NASA Astrophysics Data System (ADS)

    Javan Hemmat, Hani; Pourtaherian, Arash; Bondarev, Egor; de With, Peter H. N.

    2015-03-01

    One of the major challenges for applications dealing with the 3D concept is the real-time execution of the algorithms. Besides this, for the indoor environments, perceiving the geometry of surrounding structures plays a prominent role in terms of application performance. Since indoor structures mainly consist of planar surfaces, fast and accurate detection of such features has a crucial impact on quality and functionality of the 3D applications, e.g. decreasing model size (decimation), enhancing localization, mapping, and semantic reconstruction. The available planar-segmentation algorithms are mostly developed using surface normals and/or curvatures. Therefore, they are computationally expensive and challenging for real-time performance. In this paper, we introduce a fast planar-segmentation method for depth images avoiding surface normal calculations. Firstly, the proposed method searches for 3D edges in a depth image and finds the lines between identified edges. Secondly, it merges all the points on each pair of intersecting lines into a plane. Finally, various enhancements (e.g. filtering) are applied to improve the segmentation quality. The proposed algorithm is capable of handling VGA-resolution depth images at a 6 FPS frame-rate with a single-thread implementation. Furthermore, due to the multi-threaded design of the algorithm, we achieve a factor of 10 speedup by deploying a GPU implementation.
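
    The merging step described above (two intersecting 3-D lines span a plane, into which co-planar depth pixels are merged) can be sketched as follows; the distance tolerance and the assumption that line directions and an intersection point are already available are illustrative, not taken from the paper.

```python
import numpy as np

def plane_from_intersecting_lines(p0, d1, d2):
    """p0: 3-D intersection point; d1, d2: direction vectors of the two lines.
    Returns (unit normal n, offset c) of the plane n.x + c = 0 they span."""
    n = np.cross(d1, d2)
    n = n / np.linalg.norm(n)
    return n, -float(n @ p0)

def merge_coplanar_points(points, n, c, tol=0.01):
    """Keep the 3-D points (N, 3) lying within `tol` metres of the plane."""
    return points[np.abs(points @ n + c) < tol]

# toy check: lines along x and y through the origin span the z = 0 plane
n, c = plane_from_intersecting_lines(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                                     np.array([0.0, 1.0, 0.0]))
pts = np.random.default_rng(0).uniform(0, 1, (1000, 3)) * np.array([1.0, 1.0, 0.05])
print(n, c, merge_coplanar_points(pts, n, c).shape)
```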

  4. PSF engineering in multifocus microscopy for increased depth volumetric imaging.

    PubMed

    Hajj, Bassam; El Beheiry, Mohamed; Dahan, Maxime

    2016-03-01

    Imaging and localizing single molecules with high accuracy in a 3D volume is a challenging task. Here we combine multifocal microscopy, a recently developed volumetric imaging technique, with point spread function engineering to achieve an increased depth for single molecule imaging. Applications to 3D single-molecule localization-based super-resolution imaging are shown over an axial depth of 4 µm, as well as to the tracking of diffusing beads in a fluid environment over 8 µm. PMID:27231584

  6. The IRAC Lensing Survey: Achieving JWST depth with Spitzer

    NASA Astrophysics Data System (ADS)

    Egami, Eiichi; Ellis, Richard; Fazio, Giovanni; Huang, Jiasheng; Jiang, Linghua; Kneib, Jean-Paul; Pello, Roser; Richard, Johan; Rieke, George; Schaerer, Daniel; Smith, Graham; Stark, Daniel; Werner, Mike

    2008-12-01

    Massive clusters of galaxies are now recognized as very effective 'cosmic telescopes'. Because of the gravitational lensing effect, they can significantly amplify background sources - by factors of a few tens - thereby bringing into view faint sources that would otherwise be unobservable. Note that in the background-limited case, which is applicable to IRAC observations, a factor of 20-30 gravitational amplification translates into increasing the integration time by a factor of 400-900. Because of this tremendous gain in sensitivity, IRAC imaging of lensing clusters will allow us to achieve JWST depth (~10 nJy) with Spitzer. Despite this great possibility, however, the full potential of the lensing cluster technique has not yet been realized due to the small number of clusters that have well-constrained, accurate mass models. Here, we propose to conduct an IRAC imaging survey of 47 massive lensing clusters (5 hours/band, 2 bands) for which we have constructed accurate mass models through many years of intensive imaging/spectroscopic campaigns with the HST, Keck, and VLT telescopes. This is the first time that such a large, statistical sample of clusters will be systematically employed to probe the high-redshift Universe, and this proposed IRAC survey is a key component of our comprehensive program, which includes HST/WFC3 and Herschel observations starting next year. Scientifically, we will use the obtained IRAC data to (1) characterize z>6 galaxies (expecting ~50 z~7-8 galaxy detections), (2) support future Herschel and ALMA surveys, and (3) search for z>6 supernovae. The resultant data set will be a great legacy of Spitzer, allowing us to start tackling JWST science well before its launch.
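
    The quoted 400-900x gain follows directly from background-limited photometry, where signal-to-noise grows as the square root of integration time; a quick check:

```python
# Background-limited case: S/N on a point source scales as sqrt(t), so matching
# the depth gained from a lensing amplification mu would require t -> mu**2 * t.
for mu in (20, 30):
    print(f"amplification {mu}x  ->  equivalent integration-time factor {mu**2}x")
# prints 400x and 900x, the range quoted in the record above
```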

  7. A New Approach for Image Depth from a Single Image

    NASA Astrophysics Data System (ADS)

    Leng, Jiaojiao; Zhao, Tongzhou; Li, Hui; Li, Xiang

    This paper presents a new method called depth from defocus (DFD) to obtain the image depth from a single still image. Traditional approaches depend either on local features, which are insufficient for estimation, or on multiple images, which require a large amount of computation. The reverse heat equation is applied to obtain the defocused image. We then use a confidence interval to segment the defocused image and obtain a hierarchical image with a guided image filter. The method needs only a single image, so it avoids the massive computation and improves computational efficiency. The results show that the DFD method is valid and efficient.

  8. Depth dependence of vascular fluorescence imaging

    PubMed Central

    Davis, Mitchell A.; Shams Kazmi, S. M.; Ponticorvo, Adrien; Dunn, Andrew K.

    2011-01-01

    In vivo surface imaging of fluorescently labeled vasculature has become a widely used tool for functional brain imaging studies. Techniques such as phosphorescence quenching for oxygen tension measurements and indocyanine green fluorescence for vessel perfusion monitoring rely on surface measurements of vascular fluorescence. However, the depth dependence of the measured fluorescence signals has not been modeled in great detail. In this paper, we investigate the depth dependence of the measured signals using a three-dimensional Monte Carlo model combined with high resolution vascular anatomy. We found that a bulk-vascularization assumption to modeling the depth dependence of the signal does not provide an accurate picture of penetration depth of the collected fluorescence signal in most cases. Instead the physical distribution of microvasculature, the degree of absorption difference between extravascular and intravascular space, and the overall difference in absorption at the excitation and emission wavelengths must be taken into account to determine the depth penetration of the fluorescence signal. Additionally, we found that using targeted illumination can provide for superior surface vessel sensitivity over wide-field illumination, with small area detection offering an even greater amount of sensitivity to surface vasculature. Depth sensitivity can be enhanced by either increasing the detector area or increasing the illumination area. Finally, we see that excitation wavelength and vessel size can affect intra-vessel sampling distribution, as well as the amount of signal that originates from inside the vessel under targeted illumination conditions. PMID:22162824

  9. Lunar Regolith Depths from LROC Images

    NASA Astrophysics Data System (ADS)

    Bart, Gwendolyn D.; Nickerson, R.; Lawder, M.

    2010-10-01

    Since the 1960s, most lunar photography and science covered the equatorial near side where the Apollo spacecraft landed. As a result, our understanding of lunar regolith depth was also limited to that region. Oberbeck and Quaide (JGR 1968) found regolith depths for the lunar near side: 3 m (Oceanus Procellarum), 16 m (Hipparchus), and 1-10 m at the Surveyor landing sites. The Lunar Reconnaissance Orbiter Camera recently released high resolution images that sample regions all around the lunar globe. We examined a selection of these images across the lunar globe and determined a regolith depth for each area. To do this, we measured the ratio of the diameter of the flat floor to the diameter of the crater, and used it to calculate the regolith thickness using the method of Quaide and Oberbeck (JGR 1968). Analysis of the global distribution of lunar regolith depths will provide new insights into the evolution of the lunar surface and the frequency, distribution, and effect of impacts.

  10. Coding depth perception from image defocus.

    PubMed

    Supèr, Hans; Romeo, August

    2014-12-01

    As a result of the spider experiments in Nagata et al. (2012), it was hypothesized that the depth perception mechanisms of these animals should be based on how much images are defocused. In the present paper, assuming that relative chromatic aberrations or blur radii values are known, we develop a formulation relating the values of these cues to the actual depth distance. Taking into account the form of the resulting signals, we propose the use of latency coding from a spiking neuron obeying Izhikevich's 'simple model'. If spider jumps can be viewed as approximately parabolic, some estimates allow for a sensory-motor relation between the time to the first spike and the magnitude of the initial velocity of the jump.
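
    As a complement to the record above, here is a minimal latency-coding sketch using Izhikevich's 'simple model' with standard regular-spiking parameters; the mapping from defocus cues to the input current I is not specified in the record and is left as an assumption.

```python
def first_spike_latency(I, a=0.02, b=0.2, dt=0.25, t_max=200.0):
    """Izhikevich simple model: v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u).
    Returns the time (ms) of the first spike for a constant input I, or None.
    Because only the first spike matters here, the post-spike reset is omitted."""
    v, u, t = -65.0, b * -65.0, 0.0
    while t < t_max:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            return t
        t += dt
    return None

# a stronger (defocus-derived) drive yields a shorter first-spike latency,
# which is the proposed sensory-motor code for depth / jump velocity
for I in (5.0, 10.0, 20.0, 40.0):
    print(I, first_spike_latency(I))
```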

  11. Shallow depth subsurface imaging with microwave holography

    NASA Astrophysics Data System (ADS)

    Zhuravlev, Andrei; Ivashov, Sergey; Razevig, Vladimir; Vasiliev, Igor; Bechtel, Timothy

    2014-05-01

    In this paper, microwave holography is considered as a tool to obtain high resolution images of shallowly buried objects. Signal acquisition is performed at multiple frequencies on a grid using a two-dimensional mechanical scanner moving a single transceiver over an area of interest in close proximity to the surface. The described FFT-based reconstruction technique is used to obtain a stack of plan view images each using only one selected frequency from the operating waveband of the radar. The extent of a synthetically-formed aperture and the signal wavelength define the plan view resolution, which at sounding frequencies near 7 GHz amounts to 2 cm. The system has a short depth of focus which allows easy selection of proper focusing plane. The small distance from the buried objects to the antenna does not prevent recording of clean images due to multiple reflections (as happens with impulse radars). The description of the system hardware and signal processing technique is illustrated using experiments conducted in dry sand. The microwave images of inert anti-personnel mines are demonstrated as examples. The images allow target discrimination based on the same visually-discernible small features that a human observer would employ. The demonstrated technology shows promise for modification to meet the specific practical needs required for humanitarian demining or in multi-sensor survey systems.
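
    The record does not spell out the FFT-based reconstruction; the sketch below shows one common single-frequency approach (angular-spectrum back-propagation for a monostatic scan), where the permittivity of dry sand, the two-way wavenumber factor, and the grid spacing are assumptions for illustration and sign conventions may differ from the authors' implementation.

```python
import numpy as np

def backpropagate(hologram, freq_hz, dx_m, z_m, eps_r=3.0):
    """Refocus a complex single-frequency hologram (2-D array of transceiver
    samples on a dx_m grid) onto a plane at depth z_m below the antenna.
    eps_r: assumed relative permittivity of the medium (dry sand is roughly 3)."""
    c = 3e8
    k = 2.0 * (2 * np.pi * freq_hz * np.sqrt(eps_r) / c)   # two-way (monostatic) wavenumber
    ny, nx = hologram.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx_m)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx_m)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))    # evanescent part decays
    spectrum = np.fft.fft2(hologram)
    return np.fft.ifft2(spectrum * np.exp(1j * kz * z_m))   # focused plan-view image
```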

  12. Color and depth priors in natural images.

    PubMed

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2013-06-01

    Natural scene statistics have played an increasingly important role in both our understanding of the function and evolution of the human vision system, and in the development of modern image processing applications. Because range (egocentric distance) is arguably the most important thing a visual system must compute (from an evolutionary perspective), the joint statistics between image information (color and luminance) and range information are of particular interest. It seems obvious that where there is a depth discontinuity, there must be a higher probability of a brightness or color discontinuity too. This is true, but the more interesting case is in the other direction--because image information is much more easily computed than range information, the key conditional probabilities are those of finding a range discontinuity given an image discontinuity. Here, the intuition is much weaker; the plethora of shadows and textures in the natural environment implies that many image discontinuities must exist without corresponding changes in range. In this paper, we extend previous work in two ways--we use as our starting point a very high quality data set of coregistered color and range values collected specifically for this purpose, and we evaluate the statistics of perceptually relevant chromatic information in addition to luminance, range, and binocular disparity information. The most fundamental finding is that the probabilities of finding range changes do in fact depend in a useful and systematic way on color and luminance changes; larger range changes are associated with larger image changes. Second, we are able to parametrically model the prior marginal and conditional distributions of luminance, color, range, and (computed) binocular disparity. Finally, we provide a proof of principle that this information is useful by showing that our distribution models improve the performance of a Bayesian stereo algorithm on an independent set of input images.
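
    To make the "key conditional probability" concrete, here is a hedged sketch of how one might estimate P(range discontinuity | image discontinuity) from a coregistered luminance/range pair; the bin count and the discontinuity threshold are arbitrary choices, not the paper's.

```python
import numpy as np

def p_range_disc_given_image_disc(lum, rng, tau_range=0.1, nbins=20):
    """lum, rng: coregistered 2-D luminance and range maps (same shape).
    Returns, per luminance-difference quantile bin, the empirical probability
    that the horizontal range difference exceeds tau_range."""
    dl = np.abs(np.diff(lum.astype(float), axis=1)).ravel()
    dr = np.abs(np.diff(rng.astype(float), axis=1)).ravel()
    edges = np.quantile(dl, np.linspace(0.0, 1.0, nbins + 1))
    bins = np.clip(np.digitize(dl, edges) - 1, 0, nbins - 1)
    return np.array([(dr[bins == b] > tau_range).mean() for b in range(nbins)])
```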

  13. Burn depth assessments by photoacoustic imaging and laser Doppler imaging.

    PubMed

    Ida, Taiichiro; Iwazaki, Hideaki; Kawaguchi, Yasushi; Kawauchi, Satoko; Ohkura, Tsuyako; Iwaya, Keiichi; Tsuda, Hitoshi; Saitoh, Daizoh; Sato, Shunichi; Iwai, Toshiaki

    2016-03-01

    Diagnosis of burn depth is crucial to determine the treatment plan for severe burn patients. However, an objective method for burn depth assessment has yet to be established, although a commercial laser Doppler imaging (LDI) system sees limited use. We previously proposed burn depth assessment based on photoacoustic imaging (PAI), in which thermoelastic waves originating from blood under the burned tissue are detected, and we showed the validity of the method in experiments using rat models with three different burn depths: superficial dermal burn, deep dermal burn and deep burn. On the basis of those results, we recently developed a real-time PAI system for clinical burn diagnosis. Before starting a clinical trial, however, there is a need to reveal more detailed diagnostic characteristics, such as linearity and error, of the PAI system, as well as to compare its characteristics with those of an LDI system. In this study, we prepared rat models with burns induced at six different temperatures from 70 to 98 °C, which showed a linear dependence of injury depth on the temperature. Using these models, we examined correlations of signals obtained by PAI and LDI with histologically determined injury depths and burn induction temperatures at 48 hours postburn. We found that the burn depths indicated by PAI were highly correlated with histologically determined injury depths (depths of viable vessels) as well as with burn induction temperatures. Perfusion values measured by LDI were less correlated with these parameters, especially for burns induced at higher temperatures, which is attributable to the limited depth at which light involving a Doppler shift can be detected in tissue. In addition, the measurement errors in PAI were smaller than those in LDI. On the basis of these results, we will be able to start clinical studies using the present PAI system.

  14. Extending the depth-of-field for microscopic imaging by means of multifocus color image fusion

    NASA Astrophysics Data System (ADS)

    Hurtado-Pérez, R.; Toxqui-Quitl, C.; Padilla-Vivanco, A.; Ortega-Mendoza, G.

    2015-09-01

    In microscopy, the depth of field (DOF) is limited by the physical characteristics of imaging systems. Imaging a scene with all of the field of view in focus can be impossible to achieve. In this paper, metal samples are inspected on multiple focal planes by moving the microscope stage along the z-axis, and for each z plane an image is digitized. Through digital image processing, an image with all regions in focus is generated from a set of multifocus images. The proposed fusion algorithm yields a single sharp image. The fusion scheme is simple, fast, and virtually free of artifacts or false color. Experimental fusion results are shown.
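
    As an illustration of the basic idea (not the authors' specific algorithm), here is a per-pixel focus-measure fusion sketch; the window size and the Laplacian-energy focus measure are common but assumed choices.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_focal_stack(stack):
    """stack: (N, H, W) grayscale images of the same scene at N focal planes.
    For each pixel, keep the value from the plane with the strongest local
    Laplacian energy (a standard sharpness measure)."""
    stack = np.asarray(stack, dtype=float)
    focus = np.stack([uniform_filter(laplace(img) ** 2, size=9) for img in stack])
    best = np.argmax(focus, axis=0)                       # index map of sharpest plane
    fused = np.take_along_axis(stack, best[None], axis=0)[0]
    return fused, best

# for a color stack, compute `best` on a grayscale version and index each channel with it
```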

  15. Volumetric retinal fluorescence microscopic imaging with extended depth of field

    NASA Astrophysics Data System (ADS)

    Li, Zengzhuo; Fischer, Andrew; Li, Wei; Li, Guoqiang

    2016-03-01

    A wavefront-engineered microscope with greatly extended depth of field (EDoF) is designed and demonstrated for volumetric imaging with near-diffraction-limited optical performance. A bright-field, infinity-corrected transmissive/reflective light microscope is built with Kohler illumination. A home-made phase mask is placed between the objective lens and the tube lens for ease of use. A general polynomial function is adopted in the design of the phase plate for robustness, and a custom merit function is used in Zemax for optimization. The resulting EDoF system achieves an engineered point spread function (PSF) that is much less sensitive to object depth variation than conventional systems, and therefore 3D volumetric information can be acquired in a single frame with expanded tolerance of defocus. In a Zemax simulation for a setup using a 32X objective (NA = 0.6), the EDoF is 20 μm whereas a conventional system has a DoF of 1.5 μm, a 13-fold increase. In experiment, a 20X objective lens with NA = 0.4 was used and the corresponding phase plate was designed and fabricated. Retinal fluorescence images from the EDoF microscope using a passive adaptive optical phase element show a DoF of around 100 μm, and the system is able to recover volumetric fluorescence images that are almost identical to in-focus images after post-processing. The image obtained from the EDoF microscope is also better in resolution and contrast, and the retinal structure is better defined. Hence, due to its high tolerance of defocus and fine restored image quality, EDoF optical systems have promising potential in consumer portable medical imaging devices where the user's ability to achieve focus is not optimal, and in other medical imaging equipment where achieving best focus is not necessary.
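
    The conventional DoF values quoted above are consistent with the usual diffraction-limited estimate DOF ≈ λ/NA² (object side, in air); a quick check with an assumed 550 nm wavelength:

```python
# Diffraction-limited (object-side, in air) depth of field: DOF ~ lambda / NA**2
lam_um = 0.55                       # assumed mid-visible wavelength, in micrometres
for na in (0.6, 0.4):
    print(f"NA = {na}: DOF ~ {lam_um / na**2:.1f} um")
# NA = 0.6 gives ~1.5 um, matching the conventional DoF quoted for the 32X objective;
# the phase mask extends this to ~20 um (and ~100 um for the NA 0.4 / 20X case).
```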

  16. Monocular depth perception using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Hombali, Apoorv; Gorde, Vaibhav; Deshpande, Abhishek

    2011-10-01

    This paper primarily exploits some of the more obscure, but inherent, properties of the camera and image to propose a simpler and more efficient way of perceiving depth. The proposed method uses a single stationary camera at an unknown perspective and an unknown height to determine the depth of an object on unknown terrain. In doing so, a direct correlation between a pixel in an image and the corresponding location in real space has to be formulated. First, a calibration step is undertaken whereby the equation of the plane visible in the field of view is calculated, along with the relative distance between camera and plane, by using a set of derived spatial geometric relations coupled with a few intrinsic properties of the system. The depth of an unknown object is then perceived by first extracting the object under observation using a series of image processing steps and then exploiting the aforementioned mapping between pixel and real-space coordinates. The performance of the algorithm is greatly enhanced by the introduction of reinforcement learning, making the system independent of hardware and environment. Furthermore, the depth calculation function is refined with a supervised learning algorithm, giving consistent improvement in results. Thus, the system uses past experience to successively optimize each run. Using the above procedure, a series of experiments and trials is carried out to prove the concept and its efficacy.
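
    A hedged sketch of the pixel-to-real-space mapping that such a calibration enables: back-project the pixel ray and intersect it with the calibrated ground plane. The intrinsics `K`, rotation `R`, camera centre `C`, and plane parameters are all assumed to come from a calibration step like the one described above; none of these names are from the paper.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, C, n, d):
    """Intersect the viewing ray of pixel (u, v) with the plane n.X + d = 0.
    K: 3x3 intrinsics, R: camera-to-world rotation, C: camera centre (world),
    n: plane unit normal (world), d: plane offset. Returns the 3-D ground point."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])    # direction in camera frame
    ray_w = R @ ray_cam                                   # direction in world frame
    t = -(n @ C + d) / (n @ ray_w)                        # ray parameter at the plane
    return C + t * ray_w

# object depth, e.g. distance from the camera to the object's ground-contact pixel:
# depth = np.linalg.norm(pixel_to_ground(u0, v0, K, R, C, n, d) - C)
```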

  17. Efficient patch-based approach for compressive depth imaging.

    PubMed

    Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David; Carin, Lawrence

    2016-09-20

    We present efficient camera hardware and algorithms to capture images with extended depth of field. The camera moves its focal plane via a liquid lens and modulates the scene at different focal planes by shifting a fixed binary mask, with synchronization achieved by using the same triangular wave to control the focal plane and the piezoelectric translator that shifts the mask. Efficient algorithms are developed to reconstruct the all-in-focus image and the depth map from a single coded exposure, and various sparsity priors are investigated to enhance the reconstruction, including group sparsity, tree structure, and dictionary learning. The algorithms naturally admit a parallel computational structure due to the independent patch-level operations. Experimental results on both simulation and real datasets demonstrate the efficacy of the new hardware and the inversion algorithms. PMID:27661583

  18. Underwater depth imaging using time-correlated single photon counting

    NASA Astrophysics Data System (ADS)

    Maccarone, Aurora; McCarthy, Aongus; Ren, Ximing; Warburton, Ryan E.; Wallace, Andy M.; Moffat, James; Petillot, Yvan; Buller, Gerald S.

    2015-05-01

    We investigate the potential of a depth imaging system for underwater environments. This system is based on the time-of-flight approach and the time-correlated single-photon counting (TCSPC) technique. We report laboratory-based measurements and explore the potential of achieving sub-centimeter xyz resolution at stand-off distances of tens of meters. Initial laboratory-based experiments demonstrate depth imaging performed over distances of up to 1.8 meters and under a variety of scattering conditions. The system comprised a monostatic transceiver unit, a fiber-coupled supercontinuum laser with a wavelength-tunable acousto-optic filter, and a fiber-coupled individual silicon single-photon avalanche diode (SPAD). The scanning in xy was performed using a pair of galvanometer mirrors directing both illumination and scattered returns via a coaxial optical configuration. Target objects were placed in a 110 liter capacity tank and depth images were acquired through approximately 1.7 meters of water containing different concentrations of scattering agent. Depth images were acquired in clear and highly scattering water using per-pixel acquisition times in the range 0.5-100 ms at average optical powers in the range 0.8 nW to 120 μW. Based on the laboratory measurements, estimations of potential performance, including the maximum possible range, were performed with a model based on the LIDAR equation. These predictions will be presented for different levels of scattering agent concentration, optical powers, and wavelengths, and comparisons made with naturally occurring environments. The experimental and theoretical results indicate that the TCSPC technique has potential for high-resolution underwater depth profile measurements.
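
    For reference, the timing-to-range conversion in the monostatic, through-water geometry described above amounts to z = c·t/(2n); a minimal sketch, where the refractive index of water and the example peak time are assumptions:

```python
import numpy as np

C_VACUUM = 2.998e8     # speed of light in vacuum, m/s
N_WATER = 1.33         # assumed refractive index of water

def range_from_tcspc(bin_times_s, counts):
    """Convert the peak of a per-pixel TCSPC timing histogram to a one-way
    range, accounting for the two-way path and the slower speed in water."""
    t_peak = bin_times_s[np.argmax(counts)]
    return C_VACUUM * t_peak / (2.0 * N_WATER)

# a return peaking near 15 ns corresponds to roughly 1.7 m of water
print(range_from_tcspc(np.array([15.1e-9]), np.array([1.0])))   # ~1.70 m
```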

  19. Depth Analogy: Data-Driven Approach for Single Image Depth Estimation Using Gradient Samples.

    PubMed

    Choi, Sunghwan; Min, Dongbo; Ham, Bumsub; Kim, Youngjung; Oh, Changjae; Sohn, Kwanghoon

    2015-12-01

    Inferring scene depth from a single monocular image is a highly ill-posed problem in computer vision. This paper presents a new gradient-domain approach, called depth analogy, that makes use of analogy as a means for synthesizing a target depth field, when a collection of RGB-D image pairs is given as training data. Specifically, the proposed method employs a non-parametric learning process that creates an analogous depth field by sampling reliable depth gradients using visual correspondence established on training image pairs. Unlike existing data-driven approaches that directly select depth values from training data, our framework transfers depth gradients as reconstruction cues, which are then integrated by the Poisson reconstruction. The performance of most conventional approaches relies heavily on the training RGB-D data used in the process, and such a dependency severely degrades the quality of reconstructed depth maps when the desired depth distribution of an input image is quite different from that of the training data, e.g., outdoor versus indoor scenes. Our key observation is that using depth gradients in the reconstruction is less sensitive to scene characteristics, providing better cues for depth recovery. Thus, our gradient-domain approach can support a great variety of training range datasets that involve substantial appearance and geometric variations. The experimental results demonstrate that our (depth) gradient-domain approach outperforms existing data-driven approaches working directly in the depth domain, even when only uncorrelated training datasets are available. PMID:26529766
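
    The Poisson integration step referred to above can be sketched as follows; this is a generic, slow Jacobi solver with periodic boundary handling via np.roll, not the paper's implementation, and it assumes the transferred gradient fields gx, gy have the same shape as the target depth map.

```python
import numpy as np

def integrate_gradients(gx, gy, iters=2000):
    """Recover a depth field d (up to an additive constant) whose gradients
    approximate (gx, gy), by solving the Poisson equation lap(d) = div(gx, gy)
    with plain Jacobi iterations."""
    h, w = gx.shape
    div = np.zeros((h, w))
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]     # backward difference of gx
    div[1:, :] += gy[1:, :] - gy[:-1, :]     # backward difference of gy
    d = np.zeros((h, w))
    for _ in range(iters):
        neighbours = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                      np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d = (neighbours - div) / 4.0         # Jacobi update for the 5-point Laplacian
    return d - d.mean()
```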

  20. Depth-resolved ballistic imaging in a low-depth-of-field optical Kerr gated imaging system

    NASA Astrophysics Data System (ADS)

    Zheng, Yipeng; Tan, Wenjiang; Si, Jinhai; Ren, YuHu; Xu, Shichao; Tong, Junyi; Hou, Xun

    2016-09-01

    We demonstrate depth-resolved imaging in a ballistic imaging system, in which a heterodyned femtosecond optical Kerr gate is introduced to extract useful imaging photons for detecting an object hidden in turbid media and a compound lens is proposed to ensure both the depth-resolved imaging capability and the long working distance. Two objects of about 15-μm widths hidden in a polystyrene-sphere suspension have been successfully imaged with approximately 600-μm depth resolution. Modulation-transfer-function curves with the object in and away from the object plane have also been measured to confirm the depth-resolved imaging capability of the low-depth-of-field (low-DOF) ballistic imaging system. This imaging approach shows potential for application in research of the internal structure of highly scattering fuel spray.

  1. Objective, comparative assessment of the penetration depth of temporal-focusing microscopy for imaging various organs

    NASA Astrophysics Data System (ADS)

    Rowlands, Christopher J.; Bruns, Oliver T.; Bawendi, Moungi G.; So, Peter T. C.

    2015-06-01

    Temporal focusing is a technique for performing axially resolved widefield multiphoton microscopy with a large field of view. Despite significant advantages over conventional point-scanning multiphoton microscopy in terms of imaging speed, the need to collect the whole image simultaneously means that it is expected to achieve a lower penetration depth in common biological samples compared to point-scanning. We assess the penetration depth using a rigorous objective criterion based on the modulation transfer function, comparing it to point-scanning multiphoton microscopy. Measurements are performed in a variety of mouse organs in order to provide practical guidance as to the achievable penetration depth for both imaging techniques. It is found that two-photon scanning microscopy has approximately twice the penetration depth of temporal-focusing microscopy, and that penetration depth is organ-specific; the heart has the lowest penetration depth, followed by the liver, lungs, and kidneys, then the spleen, and finally white adipose tissue.

  2. Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina

    NASA Astrophysics Data System (ADS)

    An, Lin; Shen, Tueng T.; Wang, Ruikang K.

    2011-10-01

    This paper presents comprehensive, depth-resolved retinal microvasculature images of the human retina achieved by a newly developed ultrahigh-sensitive optical microangiography (UHS-OMAG) system. Because of its high flow sensitivity, UHS-OMAG is much more sensitive than the traditional OMAG system to tissue motion caused by involuntary movement of the human eye and head. To mitigate these motion artifacts in the final imaging results, we propose a new phase-compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerance, which is critical for the UHS-OMAG system to achieve retinal microvasculature images of high quality. Furthermore, the new UHS-OMAG system employs a high-speed line-scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first has low lateral resolution (16 μm) and a wide field of view (4 × 3 mm2 for a single scan and 7 × 8 mm2 for multiple scans), while the second has high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm2 for a single scan). The imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.

  3. Predictive coding of depth images across multiple views

    NASA Astrophysics Data System (ADS)

    Morvan, Yannick; Farin, Dirk; de With, Peter H. N.

    2007-02-01

    A 3D video stream is typically obtained from a set of synchronized cameras, which simultaneously capture the same scene (multiview video). This technology enables applications such as free-viewpoint video, which allows the viewer to select his preferred viewpoint, or 3D TV, where the depth of the scene can be perceived using a special display. Because the user-selected view does not always correspond to a camera position, it may be necessary to synthesize a virtual camera view. To synthesize such a virtual view, we have adopted a depth image-based rendering technique that employs one depth map for each camera. Consequently, remote rendering of the 3D video requires a compression technique for texture and depth data. This paper presents a predictive-coding algorithm for the compression of depth images across multiple views. The presented algorithm provides (a) an improved coding efficiency for depth images over block-based motion-compensation encoders (H.264), and (b) random access to different views for fast rendering. The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points based on the reference depth image. The attractiveness of the depth-prediction algorithm is that the prediction of depth data avoids an independent transmission of depth for each view, while simplifying the view interpolation by synthesizing depth images for arbitrary viewpoints. We present experimental results for several multiview depth sequences, which show a quality improvement of up to 1.8 dB compared to H.264 compression.

  4. High resolution depth-resolved imaging from multi-focal images for medical ultrasound.

    PubMed

    Diamantis, Konstantinos; Dalgarno, Paul A; Greenaway, Alan H; Anderson, Tom; Jensen, Jørgen Arendt; Sboros, Vassilis

    2015-01-01

    An ultrasound imaging technique providing sub-diffraction-limit axial resolution for point sources is proposed. It is based on simultaneously acquired multi-focal images of the same object and on the image metric of sharpness. The sharpness is extracted from image data and takes higher values for in-focus images. The technique is derived from biological microscopy and is validated here with simulated ultrasound data. A linear array probe is used to scan a point-scatterer phantom that moves in depth with a controlled step. From the beamformed responses of each scatterer position, the image sharpness is assessed. Values from all positions plotted together form a curve that peaks at the receive focus, which is set during beamforming. Selection of three different receive foci for each acquired dataset results in the generation of three overlapping sharpness curves. A set of three calibration curves combined with a maximum-likelihood algorithm is then able to estimate, with high precision, the depth location of any emitter from each single image. Estimated values are compared with the ground truth, demonstrating that an accuracy of 28.6 μm (0.13λ) is achieved over a 4 mm depth range. PMID:26737920
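
    A hedged sketch of the calibration-curve / maximum-likelihood step described above; the specific sharpness metric, the Gaussian noise model, and its standard deviation are assumptions, not the paper's choices.

```python
import numpy as np

def sharpness(img):
    """One common normalised sharpness metric (sum of squared intensities)."""
    img = img.astype(float)
    return float(np.sum(img ** 2) / (np.sum(img) ** 2 + 1e-12))

def ml_depth(measured, calib_depths, calib_curves, sigma=0.05):
    """measured: 3 sharpness values (one per receive focus) for a single image.
    calib_curves: shape (3, N) sharpness-vs-depth calibration curves sampled at
    calib_depths (N,). Assuming Gaussian errors, the ML depth is the sample that
    minimises the squared residual across the three curves."""
    resid = calib_curves - np.asarray(measured)[:, None]
    loglik = -np.sum(resid ** 2, axis=0) / (2.0 * sigma ** 2)
    return calib_depths[int(np.argmax(loglik))]
```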

  5. Ultra-slim 2D- and depth-imaging camera modules for mobile imaging

    NASA Astrophysics Data System (ADS)

    Brückner, Andreas; Oberdörster, Alexander; Dunkel, Jens; Reimann, Andreas; Wippermann, Frank

    2016-03-01

    In this contribution, a microoptical imaging system is demonstrated that is inspired by the insect compound eye. The array camera module achieves HD resolution with a z-height of 2.0 mm, which is about 50% of that of traditional cameras with comparable parameters. The FOV is segmented by multiple optical channels imaging in parallel. The partial images are stitched together by image processing software to form a final image of the whole FOV. The system is able to acquire depth maps along with the 2D video, and it includes light-field imaging features such as software refocusing. The microlens arrays are realized by microoptical technologies on wafer level, which are suitable for potential fabrication in high volume.

  6. Increasing the imaging depth through computational scattering correction (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Koberstein-Schwarz, Benno; Omlor, Lars; Schmitt-Manderbach, Tobias; Mappes, Timo; Ntziachristos, Vasilis

    2016-03-01

    Imaging depth is one of the most prominent limitations in light microscopy. The depth at which we are still able to resolve biological structures is limited by the scattering of light within the sample. We have developed an algorithm to compensate for the influence of scattering. The potential of the algorithm is demonstrated on a 3D image stack of a zebrafish embryo captured with a selective plane illumination microscope (SPIM). With our algorithm, we were able to shift the point in depth where scattering starts to blur the imaging and affect the image quality by around 30 µm. For the reconstruction, the algorithm uses only information from within the image stack. Therefore, the algorithm can be applied to the image data from every SPIM system without further hardware adaptation. Also, there is no need for multiple scans from different views to perform the reconstruction. The underlying model estimates the recorded image as a convolution between the distribution of fluorophores and a point spread function that describes the blur due to scattering. Our algorithm performs a space-variant blind deconvolution on the image. To account for the increasing amount of scattering in deeper tissue, we introduce a new regularizer that models the increasing width of the point spread function in order to improve the image quality in the depth of the sample. Since the assumptions the algorithm is based on are not limited to SPIM images, the algorithm should also be able to work on other imaging techniques that provide a 3D image volume.

  7. Infrared imaging of burn wounds to determine burn depth

    NASA Astrophysics Data System (ADS)

    Hargroder, Andrew G.; Davidson, James E., Sr.; Luther, Donald G.; Head, Jonathan F.

    1999-07-01

    Determination of burn wound depth is at present left to the surgeon's visual examination. Many burn wounds are obviously, by visual inspection, superficial second-degree burns or true third-degree burns. However, those burn wounds that fall between the obvious depths are difficult to assess visually, and therefore wound depth determination often requires waiting 5 to 7 days postburn. Initially, 10 burn patients underwent IR imaging at various times during the evaluation of their burn wounds. These patients were followed to either healing or skin grafting. The IR images were then reviewed to determine their accuracy in determining the depth of the wound. IR imaging of burn wounds with focal-plane staring-array midrange IR systems appears promising for determination of burn depth one to two days postburn. This will allow clinical decisions regarding operative or nonoperative intervention to be made earlier, thus decreasing hospital stays and time to healing.

  8. An image cancellation approach to depth-from-focus

    SciTech Connect

    Lu, Shin-yee; Graser, M.

    1995-03-01

    Depth calculation of an object allows computer reconstruction of the surface of the object in three dimensions. Such information provides human operators with 3D measurements for visualization, diagnostics and manipulation. It can also provide the necessary coordinates for semi- or fully automated operations. This paper describes a microscopic imaging system with computer vision algorithms that can obtain the depth information by making use of the shallow depth of field of microscope lenses.
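
    A minimal depth-from-focus sketch in the spirit of the record above: acquire a z-stack with the shallow-DOF objective and, per pixel, report the stage height at which a local focus measure peaks. The gradient-energy focus measure and window size are assumed choices, not the paper's.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def depth_from_focus(stack, z_positions, window=9):
    """stack: (N, H, W) grayscale images taken at stage heights z_positions (N,).
    Returns an (H, W) depth map: the z at which local gradient energy is maximal."""
    fm = []
    for img in stack:
        img = img.astype(float)
        g2 = sobel(img, axis=0) ** 2 + sobel(img, axis=1) ** 2   # gradient energy
        fm.append(uniform_filter(g2, size=window))               # local average
    best = np.argmax(np.stack(fm), axis=0)
    return np.asarray(z_positions)[best]
```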

  9. Calibrating river bathymetry via image to depth quantile transformation

    NASA Astrophysics Data System (ADS)

    Legleiter, C. J.

    2015-12-01

    Remote sensing has emerged as a powerful means of measuring river depths, but standard algorithms such as Optimal Band Ratio Analysis (OBRA) require field measurements to calibrate image-derived estimates. Such reliance upon field-based calibration undermines the advantages of remote sensing. This study introduces an alternative approach based on the probability distribution of depths d within a reach. Provided a quantity X related to d can be derived from a remotely sensed data set, image-to-depth quantile transformation (IDQT) infers depths throughout the image by linking the cumulative distribution function (CDF) of X to that of d. The algorithm involves determining, for each pixel in the image, the CDF value for that particular value of X/X̄ and then inferring the depth at that location from the inverse CDF of the scaled depths d/d̄, where the overbar denotes a reach mean. For X/X̄, an empirical CDF can be derived directly from pixel values or a probability distribution can be fitted. Similarly, the CDF of d/d̄ can be obtained from field data or from a theoretical model of the frequency distribution of d within a reach; gamma distributions have been used for this purpose. In essence, the probability distributions calibrate X to d while the image provides the spatial distribution of depths. IDQT offers a number of advantages: 1) direct field measurements of d during image acquisition are not absolutely necessary; 2) because the X vs. d relation need not be linear, negative depth estimates along channel margins and shallow bias in pools are avoided; and 3) because individual pixels are not linked to specific depth measurements, accurate geo-referencing of field and image data sets is not critical. Application of OBRA and IDQT to a gravel-bed river indicated that the new, probabilistic algorithm was as accurate as the standard, regression-based approach and led to more hydraulically reasonable bathymetric maps.
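
    A hedged sketch of the core quantile mapping (not Legleiter's implementation): each pixel's empirical quantile within the image-derived quantity X is mapped to the depth with the same quantile in a field-derived or assumed depth distribution.

```python
import numpy as np

def idqt(x_image, depth_sample):
    """x_image: 2-D array of the depth-related quantity X (one value per pixel).
    depth_sample: 1-D sample defining the reach's depth distribution (field data,
    or draws from a fitted/theoretical distribution such as a gamma).
    Returns a depth map with the same shape as x_image."""
    x = x_image.ravel()
    ranks = np.argsort(np.argsort(x)) / max(x.size - 1, 1)   # empirical CDF of X
    depths = np.quantile(depth_sample, ranks)                 # inverse CDF of depth
    return depths.reshape(x_image.shape)

# e.g. depth_sample could be np.random.default_rng(0).gamma(shape=2.0, scale=0.4, size=5000)
```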

  10. Volumetric depth peeling for medical image display

    NASA Astrophysics Data System (ADS)

    Borland, David; Clarke, John P.; Fielding, Julia R.; Taylor II, Russell M.

    2006-01-01

    Volumetric depth peeling (VDP) is an extension to volume rendering that enables display of otherwise occluded features in volume data sets. VDP decouples occlusion calculation from the volume rendering transfer function, enabling independent optimization of settings for rendering and occlusion. The algorithm is flexible enough to handle multiple regions occluding the object of interest, as well as object self-occlusion, and requires no pre-segmentation of the data set. VDP was developed as an improvement for virtual arthroscopy for the diagnosis of shoulder-joint trauma, and has been generalized for use in other simple and complex joints, and to enable non-invasive urology studies. In virtual arthroscopy, the surfaces in the joints often occlude each other, allowing limited viewpoints from which to evaluate these surfaces. In urology studies, the physician would like to position the virtual camera outside the kidney collecting system and see inside it. By rendering invisible all voxels between the observer's point of view and objects of interest, VDP enables viewing from unconstrained positions. In essence, VDP can be viewed as a technique for automatically defining an optimal data- and task-dependent clipping surface. Radiologists using VDP display have been able to perform evaluations of pathologies more easily and more rapidly than with clinical arthroscopy, standard volume rendering, or standard MRI/CT slice viewing.

  11. Depth determination from defocused images using neural networks

    NASA Astrophysics Data System (ADS)

    Sreenivasan, Koduri K.; Srinath, Mandayam D.

    1993-05-01

    Determination of the depth of objects in a scene is based on interpretation of the visual cues that tell us how near or far away the objects are. Such cues can be binocular or monocular. Most existing algorithms are based on binocular cues and use a pair of stereo images of the scene to compute a depth map from the disparity between corresponding points in the two images, the geometry of the imaging system, and camera parameters. To solve the correspondence problem, certain simplifying assumptions are usually made. Here we propose a method based on the fact that the brain computes the approximate distance of an object from the viewer from the amount of defocus of its image on the retina. Given two images of a scene taken with different focal settings, we model one of the images as the convolution of a blur function with the other image and use the DFT of the two images to obtain an estimate of the blur at each pixel. A multilayer perceptron using backpropagation learning is used to infer the complex relationship between blur and depth, which also involves the imaging system parameters. Blur functions obtained from a set of images with objects at known depths are used to train the neural network. This approach avoids both the correspondence and camera calibration problems.

  12. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    USGS Publications Warehouse

    Legleiter, Carl

    2016-01-01

    Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R2 = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.

  14. Total variation based image deconvolution for extended depth-of-field microscopy images

    NASA Astrophysics Data System (ADS)

    Hausser, F.; Beckers, I.; Gierlak, M.; Kahraman, O.

    2015-03-01

    One approach for a detailed understanding of dynamic cellular processes during drug delivery is the use of functionalized biocompatible nanoparticles and fluorescent markers. An appropriate imaging system has to detect these moving particles, as well as whole cell volumes, in real time with high lateral resolution in the range of a few hundred nanometers. In a previous study, extended depth-of-field microscopy (EDF microscopy) was applied to fluorescent beads and Tradescantia stamen hair cells, and the concept of real-time imaging was proved in different microscopic modes. In principle, a phase retardation system such as a programmable spatial light modulator or a static waveplate is incorporated in the light path and modulates the wavefront of light. Hence the focal ellipsoid is smeared out and images at first seem blurred. Image restoration by deconvolution using the known point spread function (PSF) of the optical system is necessary to achieve sharp microscopic images with an extended depth of field. This work is focused on the investigation and optimization of deconvolution algorithms to solve this restoration problem satisfactorily. This inverse problem is challenging due to the presence of Poisson-distributed noise and Gaussian noise, and because the PSF used for deconvolution exactly fits in just one plane within the object. We use nonlinear total-variation-based image restoration techniques, in which different types of noise can be treated properly. Various algorithms are evaluated for artificially generated 3D images as well as for fluorescence measurements of BPAE cells.

  15. Covering the gap in depth resolution between OCT and SLO in imaging the retina

    NASA Astrophysics Data System (ADS)

    Podoleanu, Adrian Gh.; Rogers, John A.; Suruceanu, Grigore I.; Jackson, David A.

    2001-05-01

    Two instruments are now available for high-depth-resolution imaging of the retina. A scanning laser ophthalmoscope is a confocal instrument that can achieve no better than 0.3 mm depth resolution. A longitudinal OCT instrument uses a superluminescent diode, which determines a depth resolution better than 20 microns. There is a gap in depth resolution between the two technologies. Therefore, different OCT configurations and low-coherence sources are investigated to produce a choice of depth resolutions and to cover the gap between the old confocal technology and the new OCT imaging method. We show that an instrument with adjustable depth resolution is especially useful for the en-face OCT technology. Such an instrument can bring additional benefits to the investigation process, where different requirements must be met. For instance, a poor depth resolution is required in the process of positioning the patient's eye prior to investigation. A good depth resolution is, however, necessary when imaging small details inside the eye. The utility of en-face OCT imaging with adjustable coherence length for diagnosis is illustrated by images taken from the eye of a volunteer. Images with a similar aspect to those produced by a scanning laser ophthalmoscope can now be obtained in real time using the OCT principle.
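
    The quoted resolutions follow from the source coherence length; as a rough, hedged illustration (Gaussian spectrum, center wavelength, bandwidths, and tissue index all assumed), the axial resolution can be swept across the OCT-SLO gap by adjusting the effective bandwidth:

```python
import numpy as np

def axial_resolution_um(lambda0_nm, bandwidth_nm, n_tissue=1.38):
    """Coherence-length-limited axial resolution for a Gaussian-spectrum source:
    l_c = (2 ln 2 / pi) * lambda0**2 / dlambda, divided by an assumed tissue index."""
    lc_nm = (2.0 * np.log(2.0) / np.pi) * lambda0_nm ** 2 / bandwidth_nm
    return lc_nm / n_tissue / 1000.0

# e.g. an 820 nm source: ~20 nm bandwidth gives ~11 um (OCT-like), while narrowing
# the effective bandwidth toward ~1 nm pushes the resolution toward the ~0.3 mm
# regime of a confocal SLO
for bw_nm in (20.0, 5.0, 1.0):
    print(bw_nm, round(axial_resolution_um(820.0, bw_nm), 1))
```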

  16. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods. PMID:23996589

  17. Increasing the depth of field in Multiview 3D images

    NASA Astrophysics Data System (ADS)

    Lee, Beom-Ryeol; Son, Jung-Young; Yano, Sumio; Jung, Ilkwon

    2016-06-01

    A super-multiview-condition simulator that can project up to four different view images to each eye is introduced. Experiments with this simulator, using images having both disparity and perspective, indicate that the depth of field (DOF) is extended beyond the default DOF value as the number of simultaneously but separately projected view images per eye increases. The DOF range can be extended to nearly 2 diopters with four simultaneous view images. However, the DOF increments for the image with both disparity and perspective are not as prominent as those for the image with disparity only.
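
    To put the "2 diopters" figure in metric terms, a small worked example; the 0.5 m viewing distance, the symmetric split about the fixation plane, and the ~0.5 D "default" DOF are assumptions for illustration only.

```python
# Convert a dioptric depth-of-field budget into a metric depth range about a
# fixation distance d0: near = 1/(1/d0 + DOF/2), far = 1/(1/d0 - DOF/2).
d0_m = 0.5                                 # assumed display / fixation distance
for dof_diopters in (0.5, 2.0):            # assumed default DOF vs the extended ~2 D
    half = dof_diopters / 2.0
    near = 1.0 / (1.0 / d0_m + half)
    far = 1.0 / (1.0 / d0_m - half)
    print(f"{dof_diopters} D about {d0_m} m  ->  {near:.2f} m to {far:.2f} m")
# 0.5 D -> 0.44 m to 0.57 m; 2.0 D -> 0.33 m to 1.00 m
```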

  18. Recovering depth from focus using iterative image estimation techniques

    SciTech Connect

    Vitria, J.; Llacer, J.

    1993-09-01

    In this report we examine the possibility of using linear and nonlinear image estimation techniques to build a depth map of a three dimensional scene from a sequence of partially focused images. In particular, the techniques proposed to solve the problem of construction of a depth map are: (1) linear methods based on regularization procedures and (2) nonlinear methods based on statistical modeling. In the first case, we have implemented a matrix-oriented method to recover the point spread function (PSF) of a sequence of partially defocused images. In the second case, the chosen method has been a procedure based on image estimation by means of the EM algorithm, a well known technique in image reconstruction in medical applications. This method has been generalized to deal with optically defocused image sequences.

  19. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. With the reliance of such systems on visual image characteristics, a need to overcome image degradations, such as random image-capture noise, motion, and quantization effects, is clearly necessary. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields allows a means of incorporating the temporal

  20. Visually preserving stereoscopic image retargeting using depth carving

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Ma, Huadong; Liu, Liang

    2016-03-01

    This paper presents a method for retargeting a pair of stereoscopic images. Previous works have leveraged seam carving and image warping methods for two-dimensional image editing to address this issue. However, they did not take full advantage of the properties of stereoscopic images. Our approach offers substantial performance improvements over the state-of-the-art; the key insights driving the approach are that the input image pair can be decomposed into different depth layers according to the disparity and image segmentation, and that the depth cues allow us to address the problem in a three-dimensional (3-D) space domain to best preserve objects. We propose depth carving, which extends single-image seam carving to resize the stereo image pair with disparity consistency. Our method minimizes shape distortion and preserves object boundaries by creating new occlusions. As a result, the retargeted image pair preserves the stereoscopic quality and protects the original 3-D scene structure. Experimental results demonstrate that our method outperforms the previous methods.
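
    As a rough illustration of how depth or disparity information can be folded into a seam-carving energy map, the sketch below adds a depth prior to the usual gradient-magnitude energy so that seams prefer distant, low-gradient regions; the weighting constant alpha and the convention that larger depth values mean nearer objects are assumptions, and this is not the paper's depth-carving formulation.

        import numpy as np

        def depth_weighted_energy(gray, depth, alpha=0.5):
            """Seam energy from gradient magnitude plus a depth (nearness) prior.
            gray and depth are 2-D arrays normalized to [0, 1]."""
            gy, gx = np.gradient(gray.astype(float))
            gradient_energy = np.abs(gx) + np.abs(gy)
            return gradient_energy + alpha * depth   # nearer pixels cost more to remove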

  1. Depth perception from image defocus in a jumping spider.

    PubMed

    Nagata, Takashi; Koyanagi, Mitsumasa; Tsukamoto, Hisao; Saeki, Shinjiro; Isono, Kunio; Shichida, Yoshinori; Tokunaga, Fumio; Kinoshita, Michiyo; Arikawa, Kentaro; Terakita, Akihisa

    2012-01-27

    The principal eyes of jumping spiders have a unique retina with four tiered photoreceptor layers, on each of which light of different wavelengths is focused by a lens with appreciable chromatic aberration. We found that all photoreceptors in both the deepest and second-deepest layers contain a green-sensitive visual pigment, although green light is only focused on the deepest layer. This mismatch indicates that the second-deepest layer always receives defocused images, which contain depth information of the scene in optical theory. Behavioral experiments revealed that depth perception in the spider was affected by the wavelength of the illuminating light, which affects the amount of defocus in the images resulting from chromatic aberration. Therefore, we propose a depth perception mechanism based on how much the retinal image is defocused.

  2. Depth perception from image defocus in a jumping spider.

    PubMed

    Nagata, Takashi; Koyanagi, Mitsumasa; Tsukamoto, Hisao; Saeki, Shinjiro; Isono, Kunio; Shichida, Yoshinori; Tokunaga, Fumio; Kinoshita, Michiyo; Arikawa, Kentaro; Terakita, Akihisa

    2012-01-27

    The principal eyes of jumping spiders have a unique retina with four tiered photoreceptor layers, on each of which light of different wavelengths is focused by a lens with appreciable chromatic aberration. We found that all photoreceptors in both the deepest and second-deepest layers contain a green-sensitive visual pigment, although green light is only focused on the deepest layer. This mismatch indicates that the second-deepest layer always receives defocused images, which contain depth information of the scene in optical theory. Behavioral experiments revealed that depth perception in the spider was affected by the wavelength of the illuminating light, which affects the amount of defocus in the images resulting from chromatic aberration. Therefore, we propose a depth perception mechanism based on how much the retinal image is defocused. PMID:22282813

  3. Demineralization Depth Using QLF and a Novel Image Processing Software.

    PubMed

    Wu, Jun; Donly, Zachary R; Donly, Kevin J; Hackmyer, Steven

    2010-01-01

    Quantitative Light-Induced fluorescence (QLF) has been widely used to detect tooth demineralization indicated by fluorescence loss with respect to surrounding sound enamel. The correlation between fluorescence loss and demineralization depth is not fully understood. The purpose of this project was to study this correlation to estimate demineralization depth. Extracted teeth were collected. Artificial caries-like lesions were created and imaged with QLF. Novel image processing software was developed to measure the largest percent of fluorescence loss in the region of interest. All teeth were then sectioned and imaged by polarized light microscopy. The largest depth of demineralization was measured by NIH ImageJ software. The statistical linear regression method was applied to analyze these data. The linear regression model was Y = 0.32X + 0.17, where X was the percent loss of fluorescence and Y was the depth of demineralization. The correlation coefficient was 0.9696. The two-tailed t-test for coefficient was 7.93, indicating the P-value = .0014. The F test for the entire model was 62.86, which shows the P-value = .0013. The results indicated statistically significant linear correlation between the percent loss of fluorescence and depth of the enamel demineralization.
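
    Applied directly, the reported regression gives a one-line depth estimator; the helper below is a trivial illustration (the depth units are those used in the study, which the abstract does not restate).

        def demineralization_depth(percent_fluorescence_loss):
            """Depth estimate from the reported fit Y = 0.32 * X + 0.17,
            where X is the percent loss of fluorescence."""
            return 0.32 * percent_fluorescence_loss + 0.17

        print(demineralization_depth(20))   # a 20% fluorescence loss maps to about 6.57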

  4. Demineralization Depth Using QLF and a Novel Image Processing Software

    PubMed Central

    Wu, Jun; Donly, Zachary R.; Donly, Kevin J.; Hackmyer, Steven

    2010-01-01

    Quantitative Light-Induced fluorescence (QLF) has been widely used to detect tooth demineralization indicated by fluorescence loss with respect to surrounding sound enamel. The correlation between fluorescence loss and demineralization depth is not fully understood. The purpose of this project was to study this correlation to estimate demineralization depth. Extracted teeth were collected. Artificial caries-like lesions were created and imaged with QLF. Novel image processing software was developed to measure the largest percent of fluorescence loss in the region of interest. All teeth were then sectioned and imaged by polarized light microscopy. The largest depth of demineralization was measured by NIH ImageJ software. The statistical linear regression method was applied to analyze these data. The linear regression model was Y = 0.32X + 0.17, where X was the percent loss of fluorescence and Y was the depth of demineralization. The correlation coefficient was 0.9696. The two-tailed t-test for coefficient was 7.93, indicating the P-value = .0014. The F test for the entire model was 62.86, which shows the P-value = .0013. The results indicated statistically significant linear correlation between the percent loss of fluorescence and depth of the enamel demineralization. PMID:20445755

  5. Quantitative comparison of the OCT imaging depth at 1300 nm and 1600 nm

    PubMed Central

    Kodach, V. M.; Kalkman, J.; Faber, D. J.; van Leeuwen, T. G.

    2010-01-01

    One of the present challenges in optical coherence tomography (OCT) is the visualization of deeper structural morphology in biological tissues. Owing to reduced scattering, a larger imaging depth can be achieved by using longer wavelengths. In this work, we analyze the OCT imaging depth at wavelengths around 1300 nm and 1600 nm by comparing the scattering coefficient and OCT imaging depth for a range of Intralipid concentrations at constant water content. We observe an enhanced OCT imaging depth for 1600 nm compared to 1300 nm for Intralipid concentrations larger than 4 vol.%. For higher Intralipid concentrations, the imaging depth enhancement reaches 30%. The ratio of scattering coefficients at the two wavelengths is constant over a large range of scattering coefficients and corresponds to a scattering power of 2.8 ± 0.1. Based on our results, we expect an increase of the OCT imaging depth at 1600 nm compared to 1300 nm for biological tissues with high scattering power and low water content. PMID:21258456
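
    Assuming the usual power-law definition of scattering power, i.e. a scattering coefficient proportional to wavelength raised to the power -w with w = 2.8, the constant ratio of scattering coefficients quoted above follows as

        \[
        \frac{\mu_s(1300\,\mathrm{nm})}{\mu_s(1600\,\mathrm{nm})}
          = \left(\frac{1600}{1300}\right)^{2.8} \approx 1.8 .
        \]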

  6. Towards Depth-Resolved Optical Imaging of Cardiac Electrical Activity.

    PubMed

    Walton, Richard D; Bernus, Olivier

    2015-01-01

    The spatiotemporal dynamics of arrhythmias are likely to be complex three-dimensional phenomena. Yet, the lack of high-resolution three-dimensional imaging techniques, both in the clinic and the experimental lab, limits our ability to better understand the mechanisms of such arrhythmias. Optical mapping using voltage-sensitive dyes is a widely used tool in experimental electrophysiology. It has been known for decades that even in its most basic application, epi-fluorescence, the optical signal contains information from within a certain intramural volume. Understanding of this fundamental property of optical signals has paved the way towards novel three-dimensional optical imaging techniques. Here, we review our current understanding of the three-dimensional nature of optical signals; how penetration depths of cardiac optical imaging can be improved by using novel imaging modalities and finally, we highlight new techniques inspired from optical tomography and aiming at full depth-resolved optical mapping of cardiac electrical activity. PMID:26238062

  7. Achievements and challenges of EUV mask imaging

    NASA Astrophysics Data System (ADS)

    Davydova, Natalia; van Setten, Eelco; de Kruif, Robert; Connolly, Brid; Fukugami, Norihito; Kodera, Yutaka; Morimoto, Hiroaki; Sakata, Yo; Kotani, Jun; Kondo, Shinpei; Imoto, Tomohiro; Rolff, Haiko; Ullrich, Albrecht; Lammers, Ad; Schiffelers, Guido; van Dijk, Joep

    2014-07-01

    The impact of various mask parameters on CDU, combined in a total mask budget, is presented for 22 nm lines on reticles used for NXE:3300 qualification. Apart from the standard mask CD measurements, actinic spectrometry of the multilayer is used to qualify reflectance uniformity over the image field; advanced 3D metrology is applied for absorber profile characterization, including absorber height and sidewall angle. The predicted mask impact on CDU is verified using actual exposure data collected on multiple NXE:3300 scanners. Mask 3D effects are addressed, manifesting themselves in best-focus shifts for different structures exposed with off-axis illumination. Experimental NXE:3300 results for 16 nm dense lines and 20 nm (semi-)isolated spaces are shown: the best-focus range reaches 24 nm. A mitigation strategy based on absorber height optimization is proposed, supported by experimental results from a special mask with varying absorber heights. Further development of a black image border for EUV masks is considered. The image border is a pattern-free area surrounding the image field that prevents exposure of the image-field neighborhood on the wafer. A normal EUV absorber is not suitable for this purpose as it has 1-3% EUV reflectance. A current solution is etching the ML down to the substrate, reducing the EUV reflectance to <0.05%. A next step in the development of the black border is the reduction of DUV out-of-band reflectance (<1.5%) in order to cope with the DUV light present in EUV scanners. Promising results achieved in this direction are shown.

  8. Cell depth imaging by point laser scanning fluorescence microscopy with an optical disk pickup head

    NASA Astrophysics Data System (ADS)

    Tsai, Rung-Ywan; Chen, Jung-Po; Lee, Yuan-Chin; Chiang, Hung-Chih; Cheng, Chih-Ming; Huang, Chun-Chieh; Huang, Tai-Ting; Cheng, Chung-Ta; Tiao, Golden

    2015-09-01

    A compact, cost-effective, and position-addressable digital laser scanning microscopy (DLSM) instrument is made using a commercially available Blu-ray disc read-only memory (BD-ROM) pickup head. Fluorescent cell images captured by DLSM have resolutions of 0.38 µm. Because of the position-addressable function, multispectral fluorescence cell images are captured using the same sample slide with different excitation laser sources. Specially designed objective lenses with the same working distance as the image-capturing beam are used for the different excitation laser sources. By accurately controlling the tilting angles of the sample slide or by moving the collimator lens of the image-capturing beam, the fluorescence cell images along different depth positions of the sample are obtained. Thus, z-section images with micrometer-depth resolutions are achievable.

  9. Comparison of curricular breadth, depth, and recurrence and physics achievement of TIMSS Population 3 countries

    NASA Astrophysics Data System (ADS)

    Murdock, John

    This study is a secondary analysis of data from the 1995 administration of the Third International Mathematics and Science Study (TIMSS). The purpose is to compare breadth, depth, and recurrence of the typical physics curriculum in the United States with the typical curricula in different countries and to determine if there are associations between these three curricular constructs and physics achievement. The first data analysis consisted of descriptive statistics (means, standard deviations, and standardized scores) for each of the three curricular variables. This analysis was used to compare the curricular profile in physics of the United States with the profiles of the other countries in the sample. The second data analysis consisted of six sets of correlations relating the three curricular variables with achievement. Five of the correlations were for the five physics content areas and the sixth was for all of physics. This analysis was used to determine if any associations exist between the three curricular constructs and achievement. The results show that the U.S. curriculum has low breadth, low depth, and high recurrence. The U.S. curricular profile was also found to be unique when compared with the profiles of the other countries in the sample. The only statistically significant correlation is between achievement and depth in a positive direction. The correlations between breadth and achievement and between recurrence and achievement were both not statistically significant. Based on the results of this study, depth of curriculum is the only curricular variable that is closely related to physics achievement for the TIMSS sample. Recurrence of curriculum is not related to physics achievement in TIMSS Population 3 countries. The results show no relationship between breadth and achievement, but the physics topics in the TIMSS content framework do not give a complete picture of breadth of physics curriculum in the participating countries. The unique curricular

  10. Convective gas flow development and the maximum depths achieved by helophyte vegetation in lakes

    PubMed Central

    Sorrell, Brian K.; Hawes, Ian

    2010-01-01

    Background and Aims Convective gas flow in helophytes (emergent aquatic plants) is thought to be an important adaptation for the ability to colonize deep water. In this study, the maximum depths achieved by seven helophytes were compared in 17 lakes differing in nutrient enrichment, light attenuation, shoreline exposure and sediment characteristics to establish the importance of convective flow for their ability to form the deepest helophyte vegetation in different environments. Methods Convective gas flow development was compared amongst the seven species, and species were allocated to ‘flow absent’, ‘low flow’ and ‘high flow’ categories. Regression tree analysis and quantile regression analysis were used to determine the roles of flow category, lake water quality, light attenuation and shoreline exposure on maximum helophyte depths. Key Results Two ‘flow absent’ species were restricted to very shallow water in all lakes and their depths were not affected by any environmental parameters. Three ‘low flow’ and two ‘high flow’ species had wide depth ranges, but ‘high flow’ species formed the deepest vegetation far more frequently than ‘low flow’ species. The ‘low flow’ species formed the deepest vegetation most commonly in oligotrophic lakes where oxygen demands in sediments were low, especially on exposed shorelines. The ‘high flow’ species were almost always those forming the deepest vegetation in eutrophic lakes, with Eleocharis sphacelata predominant when light attenuation was low, and Typha orientalis when light attenuation was high. Depths achieved by all five species with convective flow were limited by shoreline exposure, but T. orientalis was the least exposure-sensitive species. Conclusions Development of convective flow appears to be essential for dominance of helophyte species in >0·5 m depth, especially under eutrophic conditions. Exposure, sediment characteristics and light attenuation frequently constrain them

  11. Extended focused imaging and depth map reconstruction in optical scanning holography.

    PubMed

    Ren, Zhenbo; Chen, Ni; Lam, Edmund Y

    2016-02-10

    In conventional microscopy, specimens lying within the depth of field are clearly recorded whereas other parts are blurry. Although digital holographic microscopy allows post-processing on holograms to reconstruct multifocus images, it suffers from defocus noise in numerical reconstruction, just as a traditional microscope does. In this paper, we demonstrate a method that can achieve extended focused imaging (EFI) and reconstruct a depth map (DM) of three-dimensional (3D) objects. We first use a depth-from-focus algorithm to create a DM for each pixel based on entropy minimization. Then we show how to achieve EFI of the whole 3D scene computationally. Simulation and experimental results involving objects with multiple axial sections are presented to validate the proposed approach. PMID:26906373
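
    A schematic version of the depth-from-focus step described above is sketched below: for each pixel, pick the reconstruction plane whose local neighborhood has minimum entropy, then gather the in-focus pixels into the EFI. The local entropy measure and window size are assumptions for illustration, not the exact criterion used in the paper.

        import numpy as np

        def efi_from_stack(stack, window=7):
            """stack: (num_planes, H, W) non-negative reconstructions at different depths.
            Returns (depth_map, extended_focus_image) by per-pixel entropy minimization."""
            num_planes, height, width = stack.shape
            pad = window // 2
            entropy = np.zeros_like(stack, dtype=float)
            for k in range(num_planes):
                padded = np.pad(stack[k], pad, mode="reflect")
                for i in range(height):
                    for j in range(width):
                        patch = padded[i:i + window, j:j + window].ravel()
                        p = patch / (patch.sum() + 1e-12)
                        entropy[k, i, j] = -(p * np.log(p + 1e-12)).sum()
            depth_map = entropy.argmin(axis=0)             # best-focused plane per pixel
            rows, cols = np.indices((height, width))
            efi = stack[depth_map, rows, cols]             # gather in-focus pixels
            return depth_map, efi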

  12. Thermal parametric imaging in the evaluation of skin burn depth.

    PubMed

    Rumiński, Jacek; Kaczmarek, Mariusz; Renkielska, Alicja; Nowakowski, Antoni

    2007-02-01

    The aim of this paper is to determine the extent to which infrared (IR) thermal imaging may be used for skin burn depth evaluation. The analysis can be made on the basis of the development of a thermal model of the burned skin. Different methods such as the traditional clinical visual approach and the IR imaging modalities of static IR thermal imaging, active IR thermal imaging and active-dynamic IR thermal imaging (ADT) are analyzed from the point of view of skin burn depth diagnostics. In ADT, a new approach is proposed on the basis of parametric image synthesis. Calculation software is implemented for single-node and distributed systems. The properties of all the methods are verified in experiments using phantoms and subsequently in vivo with animals with a reference histopathological examination. The results indicate that it is possible to distinguish objectively and quantitatively burns which will heal spontaneously within three weeks of infliction and which should be treated conservatively from those which need surgery because they will not heal within this period. PMID:17278587

  13. Ultra-long scan depth optical coherence tomography for imaging the anterior segment of human eye

    NASA Astrophysics Data System (ADS)

    Zhu, Dexi; Shen, Meixiao; Leng, Lin

    2012-12-01

    Spectral domain optical coherence tomography (SD-OCT) was developed in order to image the anterior segment of the human eye. The optical path at the reference arm was switched to compensate for the sensitivity drop in OCT images. A scan depth of 12.28 mm and an axial resolution of 12.8 μm in air were achieved. The anterior segment from the cornea to the posterior surface of the crystalline lens was clearly imaged and measured using this system. A custom-designed Badal optometer was coupled into the sample arm to induce accommodation, and the movement of the crystalline lens was traced after image registration. Our research demonstrates that SD-OCT with ultra-long scan depth can be used to image the human eye for accommodation research.

  14. Depth

    PubMed Central

    Koenderink, Jan J; van Doorn, Andrea J; Wagemans, Johan

    2011-01-01

    Depth is the feeling of remoteness, or separateness, that accompanies awareness in human modalities like vision and audition. In specific cases depths can be graded on an ordinal scale, or even measured quantitatively on an interval scale. In the case of pictorial vision this is complicated by the fact that human observers often appear to apply mental transformations that involve depths in distinct visual directions. This implies that a comparison of empirically determined depths between observers involves pictorial space as an integral entity, whereas comparing pictorial depths as such is meaningless. We describe the formal structure of pictorial space purely in the phenomenological domain, without taking recourse to the theories of optics which properly apply to physical space—a distinct ontological domain. We introduce a number of general ways to design and implement methods of geodesy in pictorial space, and discuss some basic problems associated with such measurements. We deal mainly with conceptual issues. PMID:23145244

  15. Depth.

    PubMed

    Koenderink, Jan J; van Doorn, Andrea J; Wagemans, Johan

    2011-01-01

    Depth is the feeling of remoteness, or separateness, that accompanies awareness in human modalities like vision and audition. In specific cases depths can be graded on an ordinal scale, or even measured quantitatively on an interval scale. In the case of pictorial vision this is complicated by the fact that human observers often appear to apply mental transformations that involve depths in distinct visual directions. This implies that a comparison of empirically determined depths between observers involves pictorial space as an integral entity, whereas comparing pictorial depths as such is meaningless. We describe the formal structure of pictorial space purely in the phenomenological domain, without taking recourse to the theories of optics which properly apply to physical space-a distinct ontological domain. We introduce a number of general ways to design and implement methods of geodesy in pictorial space, and discuss some basic problems associated with such measurements. We deal mainly with conceptual issues.

  16. Obtaining anisotropic velocity data for proper depth seismic imaging

    SciTech Connect

    Egerev, Sergey; Yushin, Victor; Ovchinnikov, Oleg; Dubinsky, Vladimir; Patterson, Doug

    2012-05-24

    The paper deals with the problem of obtaining anisotropic velocity data from continuous acoustic impedance-based measurements while scanning in the axial direction along the walls of the borehole. Diagrams of the full conductivity of the piezoceramic transducer were used to derive anisotropy parameters of the rock sample. The measurements are aimed at supporting accurate depth imaging of seismic data. Understanding these common anisotropy effects is important when interpreting data in which they are present.

  17. Dual-imaging system for burn depth diagnosis.

    PubMed

    Ganapathy, Priya; Tamminedi, Tejaswi; Qin, Yi; Nanney, Lillian; Cardwell, Nancy; Pollins, Alonda; Sexton, Kevin; Yadegar, Jacob

    2014-02-01

    Currently, determination of burn depth and healing outcomes has been limited to subjective assessment or a single modality, e.g., laser Doppler imaging. Such measures have proven less than ideal. Recent developments in other non-contact technologies such as optical coherence tomography (OCT) and pulse speckle imaging (PSI) offer the promise that an intelligent fusion of information across these modalities can improve visualization of burn regions, thereby increasing the sensitivity of the diagnosis. In this work, we combined OCT and PSI images to classify the degree of burn (superficial, partial-thickness and full-thickness burns). Algorithms were developed to integrate and visualize skin structure (with and without burns) from the two modalities. We have completed the proposed initiatives by employing a porcine burn model and compiled results that attest to the utility of our proposed dual-modal fusion approach. Computer-derived data indicating the varying burn depths were validated through immunohistochemical analysis performed on burned skin tissue. The combined performance of the OCT and PSI modalities provided an overall ROC-AUC=0.87 (significant at p<0.001) in classifying different burn types measured 1 h after the burn wounds were created. Porcine model studies to assess the feasibility of this dual-imaging system for wound tracking are underway.

  18. A Bayesian framework for human body pose tracking from depth image sequences.

    PubMed

    Zhu, Youding; Fujimura, Kikuo

    2010-01-01

    This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body poses could be estimated through model fitting using dense correspondences between depth data and an articulated human model (local optimization method). Although it usually achieves a high accuracy due to dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this method (key-point based method) is robust and recovers from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by methods based on key-points and local optimization. Experimental results are shown and a performance comparison is presented to demonstrate the effectiveness of the proposed approach.
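
    The simplest instance of such Bayesian integration is a precision-weighted fusion of two Gaussian estimates of the same quantity, sketched below for a single joint coordinate; this toy example only illustrates the weighting idea, not the paper's full framework.

        def fuse_gaussian_estimates(mean_a, var_a, mean_b, var_b):
            """Combine two independent Gaussian estimates; each is weighted by its
            precision (1/variance)."""
            precision = 1.0 / var_a + 1.0 / var_b
            mean = (mean_a / var_a + mean_b / var_b) / precision
            return mean, 1.0 / precision

        # key-point detector: z = 1.20 m (var 0.04); model fitting: z = 1.30 m (var 0.01)
        print(fuse_gaussian_estimates(1.20, 0.04, 1.30, 0.01))   # -> (1.28, 0.008)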

  19. Developing a methodology for imaging stress transients at seismogenic depth

    NASA Astrophysics Data System (ADS)

    Valette-Silver, N.; Silver, P. G.; Niu, F.; Daley, T.; Majer, E. L.

    2003-12-01

    It is well known that the crust contains cracks down to a depth of several kilometers. The dependence of crustal seismic velocities on crack properties, and in turn, the dependence of crack properties on stress, means that seismic velocity exhibits stress dependence. This dependence constitutes a powerful instrument for studying subsurface transient changes in stress. While these relationships have been known for several decades, time-dependent seismic imaging has not, as of yet, become a reliable means of measuring subsurface seismogenic stress changes. There are two primary reasons for this: 1) lack of sufficient delay-time precision necessary to detect small changes in stress, and 2) the difficulty in establishing a reliable calibration between stress and seismic velocity. The best sources of calibration are the solid-earth tides and barometric pressure, both of which produce weak stress perturbations of order 10^2-10^3 Pa. Detecting these sources of stress requires precision in the measurement of fractional velocity changes δv/v of order 10^-5-10^-6, based on laboratory experiments. Preliminary field experiments and the analysis of uncertainty from known sources of error suggest that the above precision is now in fact achievable with an active source. Since the most common way of measuring δv/v is by measuring the fractional change in travel time along the path, δT/T = -δv/v, one of the dominant issues in measuring temporal changes in velocity between source and receiver is how precisely we can measure travel time. Analysis based on the Cramer-Rao Lower Bound in signal processing provides a means of identifying optimal choices of parameters in designing the experimental setup, the geometry, and source characteristics so as to maximize precision. For example, the optimal frequency for measuring δT/T is found to be proportional to the Q of the medium. As an illustration, given a Q of 60 and source-receiver distances of 3 m, 30 m, 100 m and 2000 m the

  20. Efficient human pose estimation from single depth images.

    PubMed

    Shotton, Jamie; Girshick, Ross; Fitzgibbon, Andrew; Sharp, Toby; Cook, Mat; Finocchio, Mark; Moore, Richard; Kohli, Pushmeet; Criminisi, Antonio; Kipman, Alex; Blake, Andrew

    2013-12-01

    We describe two new approaches to human pose estimation. Both can quickly and accurately predict the 3D positions of body joints from a single depth image without using any temporal information. The key to both approaches is the use of a large, realistic, and highly varied synthetic set of training images. This allows us to learn models that are largely invariant to factors such as pose, body shape, field-of-view cropping, and clothing. Our first approach employs an intermediate body parts representation, designed so that an accurate per-pixel classification of the parts will localize the joints of the body. The second approach instead directly regresses the positions of body joints. By using simple depth pixel comparison features and parallelizable decision forests, both approaches can run super-real time on consumer hardware. Our evaluation investigates many aspects of our methods, and compares the approaches to each other and to the state of the art. Results on silhouettes suggest broader applicability to other imaging modalities. PMID:24136424
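
    The depth pixel comparison features referred to above are commonly described as differences of depth values read at two offsets that are scaled by the inverse depth at the reference pixel, which makes the response roughly depth invariant. A sketch of one such feature is given below; the offset units and the background handling are assumptions for illustration.

        import numpy as np

        def depth_comparison_feature(depth, x, u, v, background=1e6):
            """Depth-normalized pixel comparison feature.
            depth: 2-D depth image (metres); x: (row, col); u, v: 2-D offsets that
            are divided by the depth at x before probing the image."""
            d_x = depth[x]
            def probe(offset):
                r = int(round(x[0] + offset[0] / d_x))
                c = int(round(x[1] + offset[1] / d_x))
                if 0 <= r < depth.shape[0] and 0 <= c < depth.shape[1]:
                    return depth[r, c]
                return background            # off-image probes read as a large background depth
            return probe(u) - probe(v)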

  1. Efficient Human Pose Estimation from Single Depth Images.

    PubMed

    Shotton, Jamie; Girshick, Ross; Fitzgibbon, Andrew; Sharp, Toby; Cook, Mat; Finocchio, Mark; Moore, Richard; Kohli, Pushmeet; Criminisi, Antonio; Kipman, Alex; Blake, Andrew

    2012-10-26

    We describe two new approaches to human pose estimation. Both can quickly and accurately predict the 3D positions of body joints from a single depth image, without using any temporal information. The key to both approaches is the use of a large, realistic, and highly varied synthetic set of training images. This allows us to learn models that are largely invariant to factors such as pose, body shape, field-of-view cropping, and clothing. Our first approach employs an intermediate body parts representation, designed so that an accurate per-pixel classification of the parts will localize the joints of the body. The second approach instead directly regresses the positions of body joints. By using simple depth pixel comparison features, and parallelizable decision forests, both approaches can run super-realtime on consumer hardware. Our evaluation investigates many aspects of our methods, and compares the approaches to each other and to the state of the art. Results on silhouettes suggest broader applicability to other imaging modalities. PMID:23109523

  2. Hyperspectral Imaging for Burn Depth Assessment in an Animal Model

    PubMed Central

    Chin, Michael S.; Babchenko, Oksana; Lujan-Hernandez, Jorge; Nobel, Lisa; Ignotz, Ronald; Lalikos, Janice F.

    2015-01-01

    Abstract Background: Differentiating between superficial and deep-dermal (DD) burns remains challenging. Superficial-dermal burns heal with conservative treatment; DD burns often require excision and skin grafting. Decision of surgical treatment is often delayed until burn depth is definitively identified. This study’s aim is to assess the ability of hyperspectral imaging (HSI) to differentiate burn depth. Methods: Thermal injury of graded severity was generated on the dorsum of hairless mice with a heated brass rod. Perfusion and oxygenation parameters of injured skin were measured with HSI, a noninvasive method of diffuse reflectance spectroscopy, at 2 minutes, 1, 24, 48 and 72 hours after wounding. Burn depth was measured histologically in 12 mice from each burn group (n = 72) at 72 hours. Results: Three levels of burn depth were verified histologically: intermediate-dermal (ID), DD, and full-thickness. At 24 hours post injury, total hemoglobin (tHb) increased by 67% and 16% in ID and DD burns, respectively. In contrast, tHb decreased to 36% of its original levels in full-thickness burns. Differences in deoxygenated and tHb among all groups were significant (P < 0.001) at 24 hours post injury. Conclusions: HSI was able to differentiate among 3 discrete levels of burn injury. This is likely because of its correlation with skin perfusion: superficial burn injury causes an inflammatory response and increased perfusion to the burn site, whereas deeper burns destroy the dermal microvasculature and a decrease in perfusion follows. This study supports further investigation of HSI in early burn depth assessment. PMID:26894016

  3. Comparison of computational methods developed to address depth-variant imaging in fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Rahman, Muhammad Mizanur; Schaefer, Lutz H.; Schuster, Dietwald; Preza, Chrysanthe

    2013-02-01

    In three-dimensional microscopy, the image formation process is inherently depth variant (DV) due to the refractive index mismatch between the imaging layers. In this study, we present a quantitative comparison among different image restoration techniques developed based on a depth-variant (DV) imaging model for fluorescence microscopy. The imaging models employed by these methods approximate DV imaging by either stratifying the object space (analogous to the discrete Fourier transform (DFT) "overlap-add" method) or image space (analogous to the DFT "overlap-save" method). We compare DV implementations based on maximum likelihood (ML) estimation and a previously developed expectation maximization algorithm to a ML conjugate gradient algorithm, using both these stratification approaches in order to assess their impact on the restoration methods. Simulations show that better restoration results are achieved with iterative methods implemented using the overlap-add method than with their implementation using the overlap-save method. However, the overlap-save method makes it possible to implement a non-iterative DV inverse filter that can trade off accuracy of the achieved result for computational speed. Results from a non-iterative regularized inverse filtering approach are also presented.
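
    The "overlap-add" stratification referred to above borrows its name from block convolution, where each block of the input is filtered separately and the overlapping tails are summed into the output. A minimal one-dimensional overlap-add convolution, for illustration only, looks like this:

        import numpy as np

        def overlap_add_convolve(signal, kernel, block_size=256):
            """1-D linear convolution computed block by block (overlap-add)."""
            out = np.zeros(len(signal) + len(kernel) - 1)
            for start in range(0, len(signal), block_size):
                block = signal[start:start + block_size]
                out[start:start + len(block) + len(kernel) - 1] += np.convolve(block, kernel)
            return out

        # agrees with np.convolve(signal, kernel) up to floating-point error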

  4. Stereoscopic imaging: filling disoccluded areas in depth image-based rendering

    NASA Astrophysics Data System (ADS)

    Vázquez, Carlos; Tam, Wa James; Speranza, Filippo

    2006-10-01

    Depth image based rendering (DIBR) is a method for converting 2D material to stereoscopic 3D. With DIBR, information contained in a gray-level (luminance intensity) depth map is used to shift pixels in the 2D image to generate a new image as if it were captured from a new viewpoint. The larger the shift (binocular parallax), the larger is the perceived depth of the generated stereoscopic pair. However, a major problem with DIBR is that the shifted pixels now occupy new positions and leave areas that they originally occupied "empty." These disoccluded regions have to be filled properly, otherwise they can degrade image quality. In this study we investigated different methods for filling these disoccluded regions: (a) Filling regions with a constant color, (b) filling regions with horizontal linear interpolation of values on the hole border, (c) solving the Laplace equation on the hole boundary and propagate the values inside the region, (d) horizontal extrapolation with depth information taken into account, (e) variational inpainting with depth information taken into account, and (f) preprocessing of the depth map to prevent disoccluded regions from appearing. The methods differed in the time required for computing and filling, and the appearance of the filled-in regions. We assessed the subjective image quality outcome for several stereoscopic test images in which the left-eye view was the source and the right-eye view was a rendered view, in line with suggestions in the literature for the asymmetrical coding of stereoscopic images.
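
    A bare-bones sketch of DIBR followed by hole filling with horizontal linear interpolation (method (b) above) is given below for a single grayscale view; the parallax scaling is arbitrary and occlusion ordering is ignored, so this is only a schematic of the idea.

        import numpy as np

        def render_view(image, depth, max_shift=20):
            """Shift each pixel horizontally in proportion to its normalized depth,
            then fill disoccluded pixels by linear interpolation along each row."""
            height, width = depth.shape
            shifted = np.full(image.shape, np.nan, dtype=float)
            shifts = np.round(max_shift * depth / depth.max()).astype(int)
            for r in range(height):
                for c in range(width):
                    nc = c + shifts[r, c]
                    if 0 <= nc < width:
                        shifted[r, nc] = image[r, c]
            for r in range(height):                     # fill holes row by row
                row = shifted[r]
                holes = np.isnan(row)
                if holes.any() and (~holes).any():
                    row[holes] = np.interp(np.flatnonzero(holes),
                                           np.flatnonzero(~holes), row[~holes])
            return shifted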

  5. Imaging depth-of-thaw beneath arctic streams using ground-penetrating radar

    NASA Astrophysics Data System (ADS)

    Bradford, J. H.; McNamara, J. P.; Bowden, W.; Gooseff, M. N.

    2003-12-01

    We are investigating the responses of arctic tundra stream geomorphology, hyporheic zone hydrology, and biogeochemical cycling to climate change. In particular, we expect that hyporheic exchange dynamics in tundra streams are controlled by 1) channel features (pools, riffles, etc.), and 2) depth-of-thaw beneath the stream channel. A key objective of this effort is monitoring sub-stream thaw through the thaw season using ground-penetrating radar (GPR). In general, GPR is a well established tool for imaging active layer thickness. However, sub-stream imaging presents a unique set of challenges. This is primarily related to strong frequency dependence and high levels of attenuation as the radar signal propagates through water. To test the effectiveness of GPR imaging of sub-stream permafrost we conducted a field investigation near the end of the thaw season when we expected the depth of thaw to be near its maximum. We investigated three sites located within the Kuparuk River and Toolik Lake basins, north of the Brooks Range, Alaska. The sites were characterized by low energy water flow, organic material lining the streambeds, and water depths ranging from 20 cm to 2 m. Water saturated peat with some pooled water was present along the stream banks. We acquired data using a pulsed radar system with high-power transmitter and 200 MHz antennas. We placed the radar antennas in the bottom of a small rubber boat, then pulled the boat across the bank and through the stream while triggering the radar at a constant rate. We verified depth to permafrost by pressing a metal probe through the active layer to the point of refusal. Although there is significant shift toward the low end of the frequency spectrum due to frequency dependent signal attenuation, we achieved excellent results at all three sites with a clear continuous image of the permafrost boundary both peripheral to, and beneath the stream. Depth migration was applied to the profiles to provide an accurate image of

  6. Self-interference fluorescence microscopy: three dimensional fluorescence imaging without depth scanning

    PubMed Central

    de Groot, Mattijs; Evans, Conor L.; de Boer, Johannes F.

    2012-01-01

    We present a new method for high-resolution, three-dimensional fluorescence imaging. In contrast to beam-scanning confocal microscopy, where the laser focus must be scanned both laterally and axially to collect a volume, we obtain depth information without the necessity of depth scanning. In this method, the emitted fluorescence is collected in the backward direction and is sent through a phase plate that encodes the depth information into the phase of a spectrally resolved interference pattern. We demonstrate that decoding this phase information allows for depth localization accuracy better than 4 µm over a 500 µm depth-of-field. In a high numerical aperture configuration with a much smaller depth of field, a localization accuracy of tens of nanometers can be achieved. This approach is ideally suited for miniature endoscopes, where space limitations at the endoscope tip render depth scanning difficult. We illustrate the potential for 3D visualization of complex biological samples by constructing a three-dimensional volume of the microvasculature of ex vivo murine heart tissue from a single 2D scan. PMID:22772223

  7. A depth camera for natural human-computer interaction based on near-infrared imaging and structured light

    NASA Astrophysics Data System (ADS)

    Liu, Yue; Wang, Liqiang; Yuan, Bo; Liu, Hao

    2015-08-01

    The design of a novel depth camera is presented, targeting close-range (20-60 cm) natural human-computer interaction, especially for mobile terminals. To achieve high precision throughout the working range, a two-step method is employed to map the near-infrared intensity image to absolute depth in real time. First, we use structured light, produced by an 808 nm laser diode and a Dammann grating, to coarsely quantize the output space of depth values into discrete bins. Then we use a learning-based classification forest algorithm to predict the depth distribution over these bins for each pixel in the image. The quantitative experimental results show that this depth camera achieves 1% precision over the 20-60 cm range, which makes the camera suitable for resource-limited and low-cost applications.
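
    A crude sketch of the two-step idea (quantize the working range into coarse depth bins, then train a per-pixel classifier that predicts a distribution over those bins from intensity features) is shown below; the features are placeholders and the scikit-learn forest is a stand-in for the camera's actual classification forest.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Step 1: quantize the 20-60 cm working range into discrete depth bins
        bin_edges = np.linspace(0.20, 0.60, 9)                   # 8 coarse bins
        def depth_to_bin(depth_m):
            return np.clip(np.digitize(depth_m, bin_edges) - 1, 0, len(bin_edges) - 2)

        # Step 2: learn a mapping from per-pixel intensity features to depth bins
        features = np.random.rand(10000, 5)                      # placeholder NIR features
        depths = np.random.uniform(0.20, 0.60, 10000)            # placeholder ground truth
        forest = RandomForestClassifier(n_estimators=50).fit(features, depth_to_bin(depths))
        bin_probabilities = forest.predict_proba(features[:5])   # per-pixel distribution over bins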

  8. Electrical resistivity imaging for unknown bridge foundation depth determination

    NASA Astrophysics Data System (ADS)

    Arjwech, Rungroj

    Unknown bridge foundations pose a significant safety risk due to stream scour and erosion. Records from older structures may be non-existent, incomplete, or incorrect. Nondestructive and inexpensive geophysical methods have been identified as suitable for investigating unknown bridge foundations. The objective of the present study is to apply advanced 2D electrical resistivity imaging (ERI) to identify the depth of unknown bridge foundations. A survey procedure is carried out in mixed water and land environments with rough topography. A conventional resistivity survey procedure is used with the electrodes installed on the stream banks, although some electrodes must be adapted for underwater use. Tests were conducted in one laboratory experiment and at five field sites comprising three roadway bridges, a geotechnical test site, and a railway bridge. The first experiments were at the bridges with the smallest foundations, later working up in size to larger drilled shafts and spread footings. Both known and unknown foundations were investigated. The geotechnical test site is used as an experimental site for 2D and 3D ERI. Data acquisition is carried out along 2D profiles with a linear array in the dipole-dipole configuration. Data were collected using electrodes deployed directly across smaller foundations; electrodes are deployed in proximity to larger foundations to image them from the side. The 2D ERI can detect the presence of a bridge foundation but is unable to resolve its precise shape and depth. Increasing the spatial extent of the foundation permits a better image of its shape and depth. Using an electrode spacing of less than 1 m to detect a slender foundation less than 1 m in diameter is not feasible. The 2D ERI method, which has been widely used for land surface surveys, can presently be adapted effectively to water-covered environments. The method is the most appropriate geophysical method for the determination of unknown bridge foundations

  9. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers

    PubMed Central

    Buyel, Johannes F.; Gruchow, Hannah M.; Fischer, Rainer

    2015-01-01

    The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m−2 when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre–coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m−2 with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins. PMID:26734037

  10. Depth Filters Containing Diatomite Achieve More Efficient Particle Retention than Filters Solely Containing Cellulose Fibers.

    PubMed

    Buyel, Johannes F; Gruchow, Hannah M; Fischer, Rainer

    2015-01-01

    The clarification of biological feed stocks during the production of biopharmaceutical proteins is challenging when large quantities of particles must be removed, e.g., when processing crude plant extracts. Single-use depth filters are often preferred for clarification because they are simple to integrate and have a good safety profile. However, the combination of filter layers must be optimized in terms of nominal retention ratings to account for the unique particle size distribution in each feed stock. We have recently shown that predictive models can facilitate filter screening and the selection of appropriate filter layers. Here we expand our previous study by testing several filters with different retention ratings. The filters typically contain diatomite to facilitate the removal of fine particles. However, diatomite can interfere with the recovery of large biopharmaceutical molecules such as virus-like particles and aggregated proteins. Therefore, we also tested filtration devices composed solely of cellulose fibers and cohesive resin. The capacities of both filter types varied from 10 to 50 L m(-2) when challenged with tobacco leaf extracts, but the filtrate turbidity was ~500-fold lower (~3.5 NTU) when diatomite filters were used. We also tested pre-coat filtration with dispersed diatomite, which achieved capacities of up to 120 L m(-2) with turbidities of ~100 NTU using bulk plant extracts, and in contrast to the other depth filters did not require an upstream bag filter. Single pre-coat filtration devices can thus replace combinations of bag and depth filters to simplify the processing of plant extracts, potentially saving on time, labor and consumables. The protein concentrations of TSP, DsRed and antibody 2G12 were not affected by pre-coat filtration, indicating its general applicability during the manufacture of plant-derived biopharmaceutical proteins.

  11. Enhancing imaging depth by multi-angle imaging of embryonic structures

    NASA Astrophysics Data System (ADS)

    Sudheendran, Narendran; Wu, Chen; Dickinson, Mary E.; Larina, Irina V.; Larin, Kirill V.

    2014-03-01

    Because of the ease of generating transgenic/gene knock-out models and the accessibility of early stages of embryogenesis, mouse and rat models have become invaluable for studying the mechanisms that underlie human birth defects. To study precisely how structural birth defects arise, ultrasound, MRI, microCT, Optical Projection Tomography (OPT), Optical Coherence Tomography (OCT) and histological methods have all been used for imaging mouse/rat embryos. Of these methods, only OCT enables live, functional imaging with high spatial and temporal resolution. However, one of the major limitations of conventional OCT imaging is the limited light penetration depth, which restricts acquisition of structural information from the whole embryo. Here we introduce a new imaging scheme, based on OCT imaging from different sides of the embryo, that extends the depth penetration of OCT to permit high-resolution imaging of 3D and 4D volumes.

  12. Hybrid Imaging for Extended Depth of Field Microscopy

    NASA Astrophysics Data System (ADS)

    Zahreddine, Ramzi Nicholas

    An inverse relationship exists in optical systems between the depth of field (DOF) and the minimum resolvable feature size. This trade-off is especially detrimental in high numerical aperture microscopy systems where resolution is pushed to the diffraction limit resulting in a DOF on the order of 500 nm. Many biological structures and processes of interest span over micron scales resulting in significant blurring during imaging. This thesis explores a two-step computational imaging technique known as hybrid imaging to create extended DOF (EDF) microscopy systems with minimal sacrifice in resolution. In the first step a mask is inserted at the pupil plane of the microscope to create a focus invariant system over 10 times the traditional DOF, albeit with reduced contrast. In the second step the contrast is restored via deconvolution. Several EDF pupil masks from the literature are quantitatively compared in the context of biological microscopy. From this analysis a new mask is proposed, the incoherently partitioned pupil with binary phase modulation (IPP-BPM), that combines the most advantageous properties from the literature. Total variation regularized deconvolution models are derived for the various noise conditions and detectors commonly used in biological microscopy. State of the art algorithms for efficiently solving the deconvolution problem are analyzed for speed, accuracy, and ease of use. The IPP-BPM mask is compared with the literature and shown to have the highest signal-to-noise ratio and lowest mean square error post-processing. A prototype of the IPP-BPM mask is fabricated using a combination of 3D femtosecond glass etching and standard lithography techniques. The mask is compared against theory and demonstrated in biological imaging applications.

  13. Ice Cloud Optical Depth Retrievals from CRISM Multispectral Images

    NASA Astrophysics Data System (ADS)

    Klassen, David R.

    2014-11-01

    Presented here are the results of this PCA/TT work to find the singular set of spectral endmembers and their use in recovering ice cloud optical depth from the MRO-CRISM multispectral image cubes.

  14. Theory of reflectivity blurring in seismic depth imaging

    NASA Astrophysics Data System (ADS)

    Thomson, C. J.; Kitchenside, P. W.; Fletcher, R. P.

    2016-05-01

    A subsurface extended image gather obtained during controlled-source depth imaging yields a blurred kernel of an interface reflection operator. This reflectivity kernel or reflection function is comprised of the interface plane-wave reflection coefficients and so, in principle, the gather contains amplitude versus offset or angle information. We present a modelling theory for extended image gathers that accounts for variable illumination and blurring, under the assumption of a good migration-velocity model. The method involves forward modelling as well as migration or back propagation so as to define a receiver-side blurring function, which contains the effects of the detector array for a given shot. Composition with the modelled incident wave and summation over shots then yields an overall blurring function that relates the reflectivity to the extended image gather obtained from field data. The spatial evolution or instability of blurring functions is a key concept and there is generally not just spatial blurring in the apparent reflectivity, but also slowness or angle blurring. Gridded blurring functions can be estimated with, for example, a reverse-time migration modelling engine. A calibration step is required to account for ad hoc band limitedness in the modelling and the method also exploits blurring-function reciprocity. To demonstrate the concepts, we show numerical examples of various quantities using the well-known SIGSBEE test model and a simple salt-body overburden model, both for 2-D. The moderately strong slowness/angle blurring in the latter model suggests that the effect on amplitude versus offset or angle analysis should be considered in more realistic structures. Although the description and examples are for 2-D, the extension to 3-D is conceptually straightforward. The computational cost of overall blurring functions implies their targeted use for the foreseeable future, for example, in reservoir characterization. The description is for scalar

  15. Image reconstruction enables high resolution imaging at large penetration depths in fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dilipkumar, Shilpa; Montalescot, Sandra; Mondal, Partha Pratim

    2013-10-01

    Imaging thick specimens at a large penetration depth is a challenge in biophysics and materials science. Refractive index mismatch results in spherical aberration that is responsible for streaking artifacts, while the Poissonian nature of photon emission and scattering introduces noise into the acquired three-dimensional image. To overcome these unwanted artifacts, we introduced a two-fold approach: first, point-spread function modeling with correction for spherical aberration and, second, a maximum-likelihood reconstruction technique to eliminate noise. Experimental results on fluorescent nano-beads and fluorescently coated yeast cells (encaged in agarose gel) show substantial minimization of artifacts. The noise is substantially suppressed, whereas the side-lobes (generated by the streaking effect) drop by 48.6% compared to the raw data at a depth of 150 μm. The proposed imaging technique can be integrated into sophisticated fluorescence imaging systems to render high resolution beyond the 150 μm mark.

  16. Laser speckle contrast imaging with extended depth of field for in-vivo tissue imaging

    PubMed Central

    Sigal, Iliya; Gad, Raanan; Caravaca-Aguirre, Antonio M.; Atchia, Yaaseen; Conkey, Donald B.; Piestun, Rafael; Levi, Ofer

    2013-01-01

    This work presents, to our knowledge, the first demonstration of the Laser Speckle Contrast Imaging (LSCI) technique with extended depth of field (DOF). We employ wavefront coding on the detected beam to gain quantitative information on flow speeds through a DOF extended two-fold compared to the traditional system. We characterize the system in-vitro using controlled microfluidic experiments, and apply it in-vivo to imaging the somatosensory cortex of a rat, showing improved ability to image flow in a larger number of vessels simultaneously. PMID:24466481

  17. Depth-Resolved Multispectral Sub-Surface Imaging Using Multifunctional Upconversion Phosphors with Paramagnetic Properties

    PubMed Central

    Ovanesyan, Zaven; Mimun, L. Christopher; Kumar, Gangadharan Ajith; Yust, Brian G.; Dannangoda, Chamath; Martirosyan, Karen S.; Sardar, Dhiraj K.

    2015-01-01

    Molecular imaging is a very promising technique for surgical guidance, but it requires advances in the properties of imaging agents and in the methods used to retrieve data from measured multispectral images. In this article, an upconversion material is introduced for subsurface near-infrared imaging and for recovering the depth of the material embedded below biological tissue. The results confirm a significant correlation between the analytical depth estimate of the material under the tissue and the measured ratio of light emitted by the material at two different wavelengths. Experiments with biological tissue samples demonstrate depth-resolved imaging using the rare-earth-doped multifunctional phosphors. In vitro tests reveal no significant toxicity, and magnetic measurements of the phosphors show that the particles are suitable as magnetic resonance imaging agents. Confocal imaging of fibroblast cells with these phosphors reveals their potential for in vivo imaging. The depth-resolved imaging technique with such phosphors has broad implications for real-time intraoperative surgical guidance. PMID:26322519
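
    One simple way such a ratiometric depth estimate can arise is from Beer-Lambert attenuation of the two emission bands by the overlying tissue; this is an assumed model for illustration, not necessarily the calibration used in the study. With attenuation coefficients mu_1 and mu_2 for the two wavelengths,

        \[
        \frac{I_1(d)}{I_2(d)} = \frac{I_1(0)}{I_2(0)}\,e^{-(\mu_1-\mu_2)d}
        \quad\Longrightarrow\quad
        d = \frac{1}{\mu_2-\mu_1}\,
            \ln\!\left(\frac{I_1(d)/I_2(d)}{I_1(0)/I_2(0)}\right),
        \]

    so the measured two-wavelength ratio maps monotonically to the depth d of the emitter.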

  18. Undetectable Changes in Image Resolution of Luminance-Contrast Gradients Affect Depth Perception

    PubMed Central

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Morita, Toshiya

    2016-01-01

    A great number of studies have suggested a variety of ways to get depth information from two dimensional images such as binocular disparity, shape-from-shading, size gradient/foreshortening, aerial perspective, and so on. Are there any other new factors affecting depth perception? A recent psychophysical study has investigated the correlation between image resolution and depth sensation of Cylinder images (A rectangle contains gradual luminance-contrast changes.). It was reported that higher resolution images facilitate depth perception. However, it is still not clear whether or not the finding generalizes to other kinds of visual stimuli, because there are more appropriate visual stimuli for exploration of depth perception of luminance-contrast changes, such as Gabor patch. Here, we further examined the relationship between image resolution and depth perception by conducting a series of psychophysical experiments with not only Cylinders but also Gabor patches having smoother luminance-contrast gradients. As a result, higher resolution images produced stronger depth sensation with both images. This finding suggests that image resolution affects depth perception of simple luminance-contrast differences (Gabor patch) as well as shape-from-shading (Cylinder). In addition, this phenomenon was found even when the resolution difference was undetectable. This indicates the existence of consciously available and unavailable information in our visual system. These findings further support the view that image resolution is a cue for depth perception that was previously ignored. It partially explains the unparalleled viewing experience of novel high resolution displays. PMID:26941693

  19. Undetectable Changes in Image Resolution of Luminance-Contrast Gradients Affect Depth Perception.

    PubMed

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Morita, Toshiya

    2016-01-01

    A great number of studies have suggested a variety of ways to obtain depth information from two-dimensional images, such as binocular disparity, shape-from-shading, size gradient/foreshortening, aerial perspective, and so on. Are there any other new factors affecting depth perception? A recent psychophysical study investigated the correlation between image resolution and the depth sensation of Cylinder images (a rectangle containing gradual luminance-contrast changes). It reported that higher-resolution images facilitate depth perception. However, it is still not clear whether the finding generalizes to other kinds of visual stimuli, because there are more appropriate stimuli for exploring depth perception from luminance-contrast changes, such as the Gabor patch. Here, we further examined the relationship between image resolution and depth perception by conducting a series of psychophysical experiments with not only Cylinders but also Gabor patches having smoother luminance-contrast gradients. As a result, higher-resolution images produced stronger depth sensation with both types of images. This finding suggests that image resolution affects depth perception of simple luminance-contrast differences (Gabor patch) as well as shape-from-shading (Cylinder). In addition, this phenomenon was found even when the resolution difference was undetectable. This indicates the existence of consciously available and unavailable information in our visual system. These findings further support the view that image resolution is a previously overlooked cue for depth perception. It partially explains the unparalleled viewing experience of novel high-resolution displays.

  20. The "Teacher's Image" as Predictor of Student Achievement

    ERIC Educational Resources Information Center

    Jungwirth, E.; Tamir, P.

    1973-01-01

    Reports the results of a study conducted at the Israeli Science Teaching Center, Hebrew University of Jerusalem, which attempted to correlate the "Teacher's Image" with actual student achievement in science. (JR)

  1. Performance of reduced bit-depth acquisition for optical frequency domain imaging.

    PubMed

    Goldberg, Brian D; Vakoc, Benjamin J; Oh, Wang-Yuhl; Suter, Melissa J; Waxman, Sergio; Freilich, Mark I; Bouma, Brett E; Tearney, Guillermo J

    2009-09-14

    High-speed optical frequency domain imaging (OFDI) has enabled practical wide-field microscopic imaging in the biological laboratory and clinical medicine. The imaging speed of OFDI, and therefore the field of view, of current systems is limited by the rate at which data can be digitized and archived rather than by system sensitivity or laser performance. One solution to this bottleneck is to natively digitize OFDI signals at reduced bit depths, e.g., at 8-bit depth rather than the conventional 12-14 bit depth, thereby reducing the overall bandwidth. However, the implications of reduced bit-depth acquisition on image quality have not been studied. In this paper, we use simulations and empirical studies to evaluate the effects of reduced bit-depth acquisition on OFDI image quality. We show that image acquisition at 8-bit depth allows high system sensitivity with only a minimal drop in the signal-to-noise ratio compared to higher bit-depth systems. Images of a human coronary artery acquired in vivo at 8-bit depth are presented and compared with images acquired at higher bit depths.
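
    The trade-off described here can be illustrated numerically: quantize a noisy interference fringe at different bit depths and compare the resulting SNR. The sketch below is a hedged toy model (fringe frequency, amplitude, and noise floor are arbitrary choices, not the OFDI system's parameters); it merely shows why 8-bit quantization noise can stay below an existing noise floor.

```python
import numpy as np

def quantize(signal, bits, full_scale):
    """Uniform quantization of a signal spanning +/- full_scale/2."""
    levels = 2 ** bits
    step = full_scale / levels
    q = np.round(signal / step) * step
    return np.clip(q, -full_scale / 2, full_scale / 2 - step)

rng = np.random.default_rng(1)
n = 4096
t = np.arange(n)
ideal = 0.4 * np.cos(2 * np.pi * 0.05 * t)           # noise-free fringe
fringe = ideal + 0.01 * rng.standard_normal(n)        # fringe with a fixed noise floor

for bits in (14, 12, 10, 8):
    q = quantize(fringe, bits, full_scale=2.0)
    err = q - ideal
    snr_db = 10 * np.log10(np.mean(ideal ** 2) / np.mean(err ** 2))
    print(f"{bits:2d}-bit acquisition: SNR ~ {snr_db:.1f} dB")
```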

  2. Instantaneous three-dimensional sensing using spatial light modulator illumination with extended depth of field imaging

    PubMed Central

    Quirin, Sean; Peterka, Darcy S.; Yuste, Rafael

    2013-01-01

    Imaging three-dimensional structures represents a major challenge for conventional microscopies. Here we describe a Spatial Light Modulator (SLM) microscope that can simultaneously address and image multiple targets in three dimensions. A wavefront coding element and computational image processing enables extended depth-of-field imaging. High-resolution, multi-site three-dimensional targeting and sensing is demonstrated in both transparent and scattering media over a depth range of 300-1,000 microns. PMID:23842387

  3. Tripling the maximum imaging depth with third-harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Yildirim, Murat; Durr, Nicholas; Ben-Yakar, Adela

    2015-09-01

    The growing interest in performing high-resolution, deep-tissue imaging has galvanized the use of longer excitation wavelengths and three-photon-based techniques in nonlinear imaging modalities. This study presents a threefold improvement in the maximum imaging depth of ex vivo porcine vocal folds using third-harmonic generation (THG) microscopy at a 1552-nm excitation wavelength compared to two-photon microscopy (TPM) at a 776-nm excitation wavelength. The experimental, analytical, and Monte Carlo simulation results reveal that THG significantly improves the maximum imaging depth observed in TPM, from 140 to 420 μm in a highly scattering medium, reaching the expected theoretical imaging depth of seven extinction lengths. This value almost doubles the previously reported normalized imaging depths of 3.5 to 4.5 extinction lengths using three-photon-based imaging modalities. Since tissue absorption is substantial at the excitation wavelength of 1552 nm, this study assesses thermal damage to the tissue during imaging by obtaining the depth-resolved temperature distribution through a numerical simulation incorporating an experimentally obtained thermal relaxation time (τ). By shuttering the laser for a period of 2τ, the numerical algorithm estimates a maximum temperature increase of ~2°C at the maximum imaging depth of 420 μm. The paper demonstrates that THG imaging at an illumination wavelength of 1552 nm, with effective thermal management, is a powerful deep-imaging modality for highly scattering and absorbing tissues, such as scarred vocal folds.

  4. Depth resolved hyperspectral imaging spectrometer based on structured light illumination and Fourier transform interferometry.

    PubMed

    Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T; So, Peter T C

    2014-10-01

    A depth-resolved hyperspectral imaging spectrometer can provide depth-resolved imaging in both the spatial and the spectral domains. Images acquired through a standard imaging Fourier transform spectrometer do not have depth resolution. By post-processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured-light illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens, including a fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated by quantifying spectra from 3D-resolved features in biological specimens. The system demonstrated a depth resolution of 1.8 μm and a spectral resolution of 7 nm.
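
    For context, a simplified version of the HiLo fusion referred to above is sketched below: the local contrast of the structured-illumination image weights the low spatial frequencies (which carry the optical sectioning), while the high frequencies are taken from the uniform image. The window size, filter width, and eta blending factor are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def hilo(uniform_img, structured_img, sigma=4.0, window=9, eta=1.0):
    """Simplified HiLo fusion: sectioned low frequencies + uniform high frequencies."""
    u = uniform_img.astype(np.float64)
    s = structured_img.astype(np.float64)

    # Local contrast of the structured image: in-focus regions retain the
    # illumination pattern and therefore show high local contrast.
    mean = uniform_filter(s, size=window)
    mean_sq = uniform_filter(s * s, size=window)
    contrast = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None)) / np.maximum(mean, 1e-9)

    lo = gaussian_filter(contrast * u, sigma)   # sectioned low-pass component
    hi = u - gaussian_filter(u, sigma)          # high-pass component of the uniform image
    return eta * lo + hi
```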

  5. Comparison of Curricular Breadth, Depth, and Recurrence and Physics Achievement of TIMSS Population 3 Countries

    ERIC Educational Resources Information Center

    Murdock, John

    2008-01-01

    This study is a secondary analysis of data from the 1995 administration of the Third International Mathematics and Science Study (TIMSS). The purpose is to compare the breadth, depth, and recurrence of the typical physics curriculum in the United States with the typical curricula in different countries and to determine whether there are…

  6. Noncontact imaging of burn depth and extent in a porcine model using spatial frequency domain imaging

    PubMed Central

    Mazhar, Amaan; Saggese, Steve; Pollins, Alonda C.; Cardwell, Nancy L.; Nanney, Lillian; Cuccia, David J.

    2014-01-01

    The standard of care for clinical assessment of burn severity and extent lacks a quantitative measurement. In this work, spatial frequency domain imaging (SFDI) was used to measure 48 thermal burns of graded severity (superficial partial, deep partial, and full thickness) in a porcine model. Functional (total hemoglobin and tissue oxygen saturation) and structural parameters (tissue scattering) derived from the SFDI measurements were monitored over 72 h for each burn type and compared to gold standard histological measurements of burn depth. Tissue oxygen saturation (stO2) and total hemoglobin (ctHbT) differentiated superficial partial thickness burns from more severe burn types after 2 and 72 h, respectively (p<0.01), but were unable to differentiate deep partial from full thickness wounds in the first 72 h. Tissue scattering parameters separated superficial burns from all burn types immediately after injury (p<0.01), and separated all three burn types from each other after 24 h (p<0.01). Tissue scattering parameters also showed a strong negative correlation to histological burn depth as measured by vimentin immunostain (r2>0.89). These results show promise for the use of SFDI-derived tissue scattering as a correlation to burn depth and the potential to assess burn depth via a combination of SFDI functional and structural parameters. PMID:25147961
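
    As background on how SFDI-derived parameters are typically obtained, the sketch below shows the standard three-phase demodulation that separates the spatially modulated (AC) and planar (DC) reflectance at each spatial frequency; the optical-property fitting that yields the scattering and hemoglobin values reported above is not reproduced here.

```python
import numpy as np

def sfdi_demodulate(i1, i2, i3):
    """Three-phase SFDI demodulation (patterns shifted by 0, 120, 240 degrees).

    Returns the AC (spatially modulated) and DC (planar) reflectance amplitudes,
    from which diffuse optical properties such as the reduced scattering
    coefficient are subsequently fitted.
    """
    i1, i2, i3 = (np.asarray(x, dtype=np.float64) for x in (i1, i2, i3))
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc
```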

  7. Pareto-depth for multiple-query image retrieval.

    PubMed

    Hsiao, Ko-Jen; Calder, Jeff; Hero, Alfred O

    2015-02-01

    Most content-based image retrieval systems consider either one single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state of the art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts.
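
    The core ranking idea is easy to state in code: give every database item one dissimilarity score per query and return the non-dominated (first Pareto front) items first. The brute-force sketch below illustrates only that step; the paper's efficient manifold-ranking dissimilarities are not reproduced.

```python
import numpy as np

def first_pareto_front(dissimilarities):
    """Indices of non-dominated items for multi-query retrieval.

    dissimilarities: (n_items, n_queries) array, lower is better per query.
    An item is dominated if another item is <= in every column and < in at least one.
    """
    d = np.asarray(dissimilarities, dtype=np.float64)
    front = []
    for i in range(d.shape[0]):
        others = np.delete(d, i, axis=0)
        dominated = np.any(
            np.all(others <= d[i], axis=1) & np.any(others < d[i], axis=1)
        )
        if not dominated:
            front.append(i)
    return front

# Toy example with two queries: items trading off closeness to query A vs. query B.
scores = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1], [0.6, 0.8]])
print(first_pareto_front(scores))   # -> [0, 1, 2]; item 3 is dominated by item 1
```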

  8. Compact and large depth of field image scanner for auto document feeder with compound eye system

    NASA Astrophysics Data System (ADS)

    Kawano, Hiroyuki; Okamoto, Tatsuki; Matsuzawa, Taku; Nakajima, Hajime; Makita, Junko; Toyoda, Yoshitaka; Funakura, Tetsuo; Nakanishi, Takahito; Kunieda, Tatsuya; Minobe, Tadashi

    2013-03-01

    We designed a compact, large-depth-of-field image scanner targeted at auto document feeders (ADF) using a compound-eye design with multiple optical units in which the ray paths are folded by reflective optics. Although we previously proposed the basic concept, here we advance the design with a free-form surface mirror to reduce the F-number for lower illumination energy and to shrink the optical track width to 40 mm. We achieved a large depth of field (DOF) of 1.2 mm, defined as the range exceeding 30% modulation transfer function (MTF) at 300 dpi, which is about twice as large as that of a conventional gradient-index (GRIN) lens array contact image sensor (CIS). The aperture stop has a rectangular aperture, where one side is as large as 4.0 mm to collect more light, and the other side is as small as 1.88 mm to avoid interference between folded ray paths.

  9. Enhancement of imaging depth of two-photon microscopy using pinholes: analytical simulation and experiments.

    PubMed

    Song, Woosub; Lee, Jihoon; Kwon, Hyuk-Sang

    2012-08-27

    Achieving a greater imaging depth with two-photon fluorescence microscopy (TPFM) is mainly limited by out-of-focus fluorescence generated from both ballistic and scattered light excitation. We report on an improved signal-to-noise ratio (SNR) in a highly scattering medium as demonstrated by analytical simulation and experiments for TPFM. Our technique is based on out-of-focus rejection using a confocal pinhole. We improved the SNR by introducing the pinhole in the collection beam path. Using the radiative transfer theory and the ray-optics approach, we analyzed the effects of different sizes of pinholes on the generation of the fluorescent signal in the TPFM system. The analytical simulation was evaluated by comparing its results with the experimental results in a scattering medium. In a combined confocal pinhole and two-photon microscopy system, the imaging depth limit of approximately 5 scattering mean free paths (MFP) was found to have improved to 6.2 MFP. PMID:23037108

  10. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. The cameras obtaining the images for a stereo view are converged at a point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of the cameras' zoom lenses are all increased. Doubling these distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning the stereo view-collecting camera system about a circle that passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
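
    For orientation, the sketch below uses the standard parallel-axis stereo approximations (disparity d = fB/Z, depth error dZ ~ Z^2 * dd / (fB)) to show why scaling object distance, camera separation, and focal length together preserves image scale and depth resolution. This is a hedged simplification; the patent's converged-camera geometry and field shifting are not modeled.

```python
# Parallel-axis stereo approximations (not the patent's converged geometry):
#   disparity      d  = f * B / Z
#   depth error    dZ = Z**2 * d_px / (f * B)   for a disparity error d_px
f, B, Z, d_px = 0.025, 0.10, 2.0, 5e-6   # focal length [m], baseline [m], range [m], one 5 um pixel [m]

def depth_error(f, B, Z, d_px):
    return Z ** 2 * d_px / (f * B)

print("baseline case   :", depth_error(f, B, Z, d_px))
# Double the object distance, the camera separation and the focal length:
# image scale f/Z and depth resolution Z**2/(f*B) are unchanged.
print("doubled geometry:", depth_error(2 * f, 2 * B, 2 * Z, d_px))
```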

  11. Material depth reconstruction method of multi-energy X-ray images using neural network.

    PubMed

    Lee, Woo-Jin; Kim, Dae-Seung; Kang, Sung-Won; Yi, Won-Jin

    2012-01-01

    With the advent of technology, multi-energy X-ray imaging is a promising technique that can reduce the patient's dose and provide functional imaging. A two-dimensional photon-counting detector providing multi-energy imaging is under development. In this work, we present a material decomposition method using multi-energy images. To acquire multi-energy images, a Monte Carlo simulation was performed. The X-ray spectrum was modeled and the ripple effect was considered. Using the dissimilar energy-dependent X-ray attenuation characteristics of each material, multiple-energy X-ray images were decomposed into material depth images. A feedforward neural network was used to fit the multi-energy images to the material depth images. To train the network, step-wedge phantom images were used. Finally, the neural network decomposed the multi-energy X-ray images into material depth images. To demonstrate the concept of this method, we applied it to simulated images of a 3D head phantom. The results show that the neural network method performed material depth reconstruction effectively.
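
    A minimal sketch of the decomposition idea follows: synthetic Beer-Lambert attenuation at two energy bins stands in for the simulated multi-energy images, and a small feedforward network (sklearn's MLPRegressor, used here for brevity) is trained on step-wedge-style depth combinations. The attenuation coefficients and network size are illustrative assumptions, not the paper's Monte Carlo data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative linear attenuation coefficients [1/cm] for two materials
# at two X-ray energy bins -- not measured values.
mu = np.array([[0.20, 0.50],    # material 1 at (low, high) energy
               [0.45, 1.10]])   # material 2 at (low, high) energy

def measurements(depths):
    """Beer-Lambert log-attenuation -ln(I/I0) at each energy bin for given depths [cm]."""
    return depths @ mu          # shape (n, 2)

# "Step wedge" style training data: a grid of known depth combinations.
d1, d2 = np.meshgrid(np.linspace(0, 5, 30), np.linspace(0, 3, 30))
depths_train = np.column_stack([d1.ravel(), d2.ravel()])
x_train = measurements(depths_train) + 0.01 * rng.standard_normal((depths_train.shape[0], 2))

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(x_train, depths_train)

test = np.array([[2.0, 1.0]])              # true material depths [cm]
print(net.predict(measurements(test)))     # should recover roughly [2.0, 1.0]
```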

  12. Cardiac image modelling: Breadth and depth in heart disease.

    PubMed

    Suinesiaputra, Avan; McCulloch, Andrew D; Nash, Martyn P; Pontre, Beau; Young, Alistair A

    2016-10-01

    With the advent of large-scale imaging studies and big health data, and the corresponding growth in analytics, machine learning and computational image analysis methods, there are now exciting opportunities for deepening our understanding of the mechanisms and characteristics of heart disease. Two emerging fields are computational analysis of cardiac remodelling (shape and motion changes due to disease) and computational analysis of physiology and mechanics to estimate biophysical properties from non-invasive imaging. Many large cohort studies now underway around the world have been specifically designed based on non-invasive imaging technologies in order to gain new information about the development of heart disease from asymptomatic to clinical manifestations. These give an unprecedented breadth to the quantification of population variation and disease development. Also, for the individual patient, it is now possible to determine biophysical properties of myocardial tissue in health and disease by interpreting detailed imaging data using computational modelling. For these population and patient-specific computational modelling methods to develop further, we need open benchmarks for algorithm comparison and validation, open sharing of data and algorithms, and demonstration of clinical efficacy in patient management and care. The combination of population and patient-specific modelling will give new insights into the mechanisms of cardiac disease, in particular the development of heart failure, congenital heart disease, myocardial infarction, contractile dysfunction and diastolic dysfunction.

  13. The influence of structure depth on image blurring of micrometres-thick specimens in MeV transmission electron imaging.

    PubMed

    Wang, Fang; Sun, Ying; Cao, Meng; Nishi, Ryuji

    2016-04-01

    This study investigates the influence of structure depth on image blurring of micrometres-thick films by experiment and simulation with a conventional transmission electron microscope (TEM). First, ultra-high-voltage electron microscope (ultra-HVEM) images of nanometer gold particles embedded in thick epoxy-resin films were acquired in the experiment and compared with simulated images. Then, variations of image blurring of gold particles at different depths were evaluated by calculating the particle diameter. The results showed that with a decrease in depth, image blurring increased. This depth-related property was more apparent for thicker specimens. Fortunately, larger particle depth involves less image blurring, even for a 10-μm-thick epoxy-resin film. The quality dependence on depth of a 3D reconstruction of particle structures in thick specimens was revealed by electron tomography. The evolution of image blurring with structure depth is determined mainly by multiple elastic scattering effects. Thick specimens of heavier materials produced more blurring due to a larger lateral spread of electrons after scattering from the structure. Nevertheless, increasing electron energy to 2 MeV can reduce blurring and produce an acceptable image quality for thick specimens in the TEM.

  14. MEMS scanner enabled real-time depth sensitive hyperspectral imaging of biological tissue

    PubMed Central

    Wang, Youmin; Bish, Sheldon; Tunnell, James W; Zhang, Xiaojing

    2010-01-01

    We demonstrate a hyperspectral and depth sensitive diffuse optical imaging microsystem, where fast scanning is provided by a CMOS compatible 2-axis MEMS mirror. By using Lissajous scanning patterns, large field-of-view (FOV) images of 1.2 cm × 1.2 cm with a lateral resolution of 100 µm can be taken at 1.3 frames-per-second (fps). Hyperspectral and depth-sensitive images were acquired on tissue simulating phantom samples containing quantum dots (QDs) patterned at various depths in Polydimethylsiloxane (PDMS). Device performance delivers 6 nm spectral resolution and 0.43 wavelengths per second acquisition speed. A sample of porcine epithelium with subcutaneously placed QDs was also imaged. Images of the biological sample were processed by spectral unmixing in order to qualitatively separate chromophores in the final images and demonstrate spectral performance of the imaging system. PMID:21164757

  15. MEMS scanner enabled real-time depth sensitive hyperspectral imaging of biological tissue.

    PubMed

    Wang, Youmin; Bish, Sheldon; Tunnell, James W; Zhang, Xiaojing

    2010-11-01

    We demonstrate a hyperspectral and depth sensitive diffuse optical imaging microsystem, where fast scanning is provided by a CMOS compatible 2-axis MEMS mirror. By using Lissajous scanning patterns, large field-of-view (FOV) images of 1.2 cm × 1.2 cm with a lateral resolution of 100 µm can be taken at 1.3 frames-per-second (fps). Hyperspectral and depth-sensitive images were acquired on tissue simulating phantom samples containing quantum dots (QDs) patterned at various depths in Polydimethylsiloxane (PDMS). Device performance delivers 6 nm spectral resolution and 0.43 wavelengths per second acquisition speed. A sample of porcine epithelium with subcutaneously placed QDs was also imaged. Images of the biological sample were processed by spectral unmixing in order to qualitatively separate chromophores in the final images and demonstrate spectral performance of the imaging system. PMID:21164757

  16. Depth enhancement in spectral domain optical coherence tomography using bidirectional imaging modality with a single spectrometer

    NASA Astrophysics Data System (ADS)

    Ravichandran, Naresh Kumar; Wijesinghe, Ruchire Eranga; Shirazi, Muhammad Faizan; Park, Kibeom; Jeon, Mansik; Jung, Woonggyu; Kim, Jeehyun

    2016-07-01

    A method for depth enhancement is presented using a bidirectional imaging modality for spectral domain optical coherence tomography (SD-OCT). Two precisely aligned sample arms along with two reference arms were utilized in the optical configuration to scan the samples. Using exemplary images of an optical resolution target, Scotch tape, a silicon sheet with two needles, and a leaf, we demonstrate how the developed bidirectional SD-OCT imaging method increases the ability to characterize depth-enhanced images. The results of the developed system were validated by comparing the images with those from a standard OCT configuration (single-sample-arm setup). Given the advantages of higher resolution and the ability to visualize deep morphological structures, this method can be utilized to mitigate the depth-dependent signal fall-off in samples of limited thickness. Thus, the proposed bidirectional imaging modality is apt for cross-sectional imaging of entire samples, with the potential to improve diagnostic ability.

  17. Enhancing the depth of tissue microscope imaging using two-photon excitation of the second singlet state of fluorescent agents

    NASA Astrophysics Data System (ADS)

    Pu, Yang; Shi, Lingyan; Pratavieira, Sebastião.; Alfano, R. R.

    2014-03-01

    Increasing the depth to which one can image inside tissue is critical in biomedicine. Two-photon (2P) excitation of the second singlet (S2) state of a group of fluorescent agents with near-infrared emission, Chlorophyll a (Chl a) and Indocyanine green (ICG), is used to extend the optical imaging regime of 2PM into the "tissue optical window" for deep tissue penetration. The fast nonradiative relaxation from S2 to S1 places both the emission and absorption wavelengths in the therapeutic window. The salient feature is that both the 2P excitation and emission wavelengths of the imaging agents fall within the "tissue optical window". As a first step toward deeper optical imaging, Chl a and ICG are investigated and demonstrated as imaging agents for 2P S2-excitation microscope imaging.

  18. Burn-depth estimation using thermal excitation and imaging

    NASA Astrophysics Data System (ADS)

    Dickey, Fred M.; Holswade, Scott C.; Yee, Mark L.

    1999-07-01

    Accurate estimation of the depth of partial-thickness burns and the early prediction of a need for surgical intervention are difficult. A non-invasive technique utilizing the difference in thermal relaxation time between burned and normal skin may be useful in this regard. In practice, a thermal camera would record the skin's response to heating or cooling by a small amount--roughly 5 degrees Celsius--for a short duration. The thermal stimulus would be provided by a heat lamp, hot or cold air, or other means. Processing of the thermal transients would reveal areas that returned to equilibrium at different rates, which should correspond to different burn depths. In deeper burns, the outside layer of skin is further removed from the constant-temperature region maintained through blood flow. Deeper burn areas should thus return to equilibrium more slowly than other areas. Since the technique only records changes in the skin's temperature, it is not sensitive to room temperature, the burn's location, or the state of the patient. Preliminary results are presented for analysis of a simulated burn, formed by applying a patch of biosynthetic wound dressing on top of normal skin tissue.
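
    The per-pixel processing implied above can be sketched as fitting an exponential return to equilibrium and mapping the recovered time constant, which should be longer over deeper burns. The sketch below assumes a single-exponential transient with illustrative temperatures and noise, not measured patient data.

```python
import numpy as np

def relaxation_time(t, temperature, t_equilibrium):
    """Least-squares fit of T(t) = T_eq + dT * exp(-t / tau) for one pixel.

    Linearized as ln(T - T_eq) = ln(dT) - t / tau; returns tau in the units of t.
    """
    excess = temperature - t_equilibrium
    mask = excess > 1e-6                       # keep only the decaying, positive part
    slope, _ = np.polyfit(t[mask], np.log(excess[mask]), 1)
    return -1.0 / slope

# Synthetic transient: a pixel heated 5 degrees above equilibrium, tau = 3 s.
t = np.linspace(0, 10, 150)
temp = 32.0 + 5.0 * np.exp(-t / 3.0) + 0.02 * np.random.default_rng(2).standard_normal(t.size)
print(relaxation_time(t, temp, t_equilibrium=32.0))   # ~3.0
```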

  19. Burn Depth Estimation Using Thermal Excitation and Imaging

    SciTech Connect

    Dickey, F.M.; Holswade, S.C.; Yee, M.L.

    1998-12-17

    Accurate estimation of the depth of partial-thickness burns and the early prediction of a need for surgical intervention are difficult. A non-invasive technique utilizing the difference in thermal relaxation time between burned and normal skin may be useful in this regard. In practice, a thermal camera would record the skin's response to heating or cooling by a small amount--roughly 5 degrees Celsius--for a short duration. The thermal stimulus would be provided by a heat lamp, hot or cold air, or other means. Processing of the thermal transients would reveal areas that returned to equilibrium at different rates, which should correspond to different burn depths. In deeper burns, the outside layer of skin is further removed from the constant-temperature region maintained through blood flow. Deeper burn areas should thus return to equilibrium more slowly than other areas. Since the technique only records changes in the skin's temperature, it is not sensitive to room temperature, the burn's location, or the state of the patient. Preliminary results are presented for analysis of a simulated burn, formed by applying a patch of biosynthetic wound dressing on top of normal skin tissue.

  20. Aerial image retargeting (AIR): achieving litho-friendly designs

    NASA Astrophysics Data System (ADS)

    Yehia Hamouda, Ayman; Word, James; Anis, Mohab; Karim, Karim S.

    2011-04-01

    In this work, we present a new technique to detect non-litho-friendly design areas based on their aerial image signature. The aerial image is calculated for the litho target (pre-OPC). This is followed by fixing (retargeting) the design to achieve a litho-friendly OPC target. The technique is applied and tested on a 28 nm metal layer and shows a significant improvement in process window performance. An optimized Aerial Image Retargeting (AIR) recipe is very computationally efficient, and its runtime consumes no more than 1% of the OPC flow runtime.

  1. Exploring High-Achieving Students' Images of Mathematicians

    ERIC Educational Resources Information Center

    Aguilar, Mario Sánchez; Rosas, Alejandro; Zavaleta, Juan Gabriel Molina; Romo-Vázquez, Avenilde

    2016-01-01

    The aim of this study is to describe the images that a group of high-achieving Mexican students hold of mathematicians. For this investigation, we used a research method based on the Draw-A-Scientist Test (DAST) with a sample of 63 Mexican high school students. The group of students' pictorial and written descriptions of mathematicians assisted us…

  2. Study of a holographic TV system based on multi-view images and depth maps

    NASA Astrophysics Data System (ADS)

    Senoh, Takanori; Ichihashi, Yasuyuki; Oi, Ryutaro; Sasaki, Hisayuki; Yamamoto, Kenji

    2013-03-01

    Electronic holography technology is expected to be used for realizing an ideal 3DTV system in the future, providing perfect 3D images. Since the amount of fringe data is huge, however, it is difficult to broadcast or transmit it directly. To resolve this problem, we investigated a method of generating holograms from depth images. Since computer generated holography (CGH) generates huge fringe patterns from a small amount of data for the coordinates and colors of 3D objects, it solves half of this problem, mainly for computer generated objects (artificial objects). For the other half of the problem (how to obtain 3D models for a natural scene), we propose a method of generating holograms from multi-view images and associated depth maps. Multi-view images are taken by multiple cameras. The depth maps are estimated from the multi-view images by introducing an adaptive matching error selection algorithm in the stereo-matching process. The multi-view images and depth maps are compressed by a 2D image coding method that converts them into Global View and Depth (GVD) format. The fringe patterns are generated from the decoded data and displayed on 8K×4K liquid crystal on silicon (LCOS) display panels. The reconstructed holographic image quality is compared using uncompressed and compressed images.
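
    To make the CGH step concrete, the sketch below sums spherical waves from a handful of 3D object points onto a sampled hologram plane and records the interference with a plane reference wave. Pixel pitch, wavelength, and point positions are illustrative assumptions; the GVD coding and LCOS display pipeline described above are not modeled.

```python
import numpy as np

def point_cloud_hologram(points, amplitudes, pitch=8e-6, shape=(512, 512),
                         wavelength=532e-9):
    """Fresnel fringe pattern from 3D object points (x, y, z) in metres.

    Each point contributes a spherical wave to the hologram plane z = 0; the
    recorded fringe is the interference with an on-axis plane reference wave.
    """
    k = 2 * np.pi / wavelength
    ny, nx = shape
    ys, xs = np.meshgrid(
        (np.arange(ny) - ny / 2) * pitch,
        (np.arange(nx) - nx / 2) * pitch,
        indexing="ij",
    )
    field = np.zeros(shape, dtype=np.complex128)
    for (x, y, z), a in zip(points, amplitudes):
        r = np.sqrt((xs - x) ** 2 + (ys - y) ** 2 + z ** 2)
        field += a * np.exp(1j * k * r) / r
    reference = np.abs(field).max()            # plane reference wave amplitude
    return np.abs(field + reference) ** 2      # recorded interference fringe

# Two object points at different depths.
pts = [(0.0, 0.0, 0.10), (0.5e-3, 0.0, 0.12)]
fringe = point_cloud_hologram(pts, amplitudes=[1.0, 1.0])
print(fringe.shape)
```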

  3. Can the perception of depth in stereoscopic images be influenced by 3D sound?

    NASA Astrophysics Data System (ADS)

    Turner, Amy; Berry, Jonathan; Holliman, Nick

    2011-03-01

    The creation of binocular images for stereoscopic display has benefited from significant research and commercial development in recent years. However, perhaps surprisingly, the effect of adding 3D sound to stereoscopic images has rarely been studied. If auditory depth information can enhance or extend the visual depth experience it could become an important way to extend the limited depth budget on all 3D displays and reduce the potential for fatigue from excessive use of disparity. Objective: As there is limited research in this area our objective was to ask two preliminary questions. First what is the smallest difference in forward depth that can be reliably detected using 3D sound alone? Second does the addition of auditory depth information influence the visual perception of depth in a stereoscopic image? Method: To investigate auditory depth cues we use a simple sound system to test the experimental hypothesis that: participants will perform better than chance at judging the depth differences between two speakers a set distance apart. In our second experiment investigating both auditory and visual depth cues we setup a sound system and a stereoscopic display to test the experimental hypothesis that: participants judge a visual stimulus to be closer if they hear a closer sound when viewing the stimulus. Results: In the auditory depth cue trial every depth difference tested gave significant results demonstrating that the human ear can hear depth differences between physical sources as short as 0.25 m at 1 m. In our trial investigating whether audio information can influence the visual perception of depth we found that participants did report visually perceiving an object to be closer when the sound was played closer to them even though the image depth remained unchanged. Conclusion: The positive results in the two trials show that we can hear small differences in forward depth between sound sources and suggest that it could be practical to extend the apparent

  4. Self-Motion and Depth Estimation from Image Sequences

    NASA Technical Reports Server (NTRS)

    Perrone, John

    1999-01-01

    An image-based version of a computational model of human self-motion perception (developed in collaboration with Dr. Leland S. Stone at NASA Ames Research Center) has been generated and tested. The research included in the grant proposal sought to extend the utility of the self-motion model so that it could be used for explaining and predicting human performance in a greater variety of aerospace applications. The model can now be tested with video input sequences (including computer generated imagery) which enables simulation of human self-motion estimation in a variety of applied settings.

  5. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images

    PubMed Central

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J.

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer’s own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed. PMID:26447793

  6. The (In)Effectiveness of Simulated Blur for Depth Perception in Naturalistic Images.

    PubMed

    Maiello, Guido; Chessa, Manuela; Solari, Fabio; Bex, Peter J

    2015-01-01

    We examine depth perception in images of real scenes with naturalistic variation in pictorial depth cues, simulated dioptric blur and binocular disparity. Light field photographs of natural scenes were taken with a Lytro plenoptic camera that simultaneously captures images at up to 12 focal planes. When accommodation at any given plane was simulated, the corresponding defocus blur at other depth planes was extracted from the stack of focal plane images. Depth information from pictorial cues, relative blur and stereoscopic disparity was separately introduced into the images. In 2AFC tasks, observers were required to indicate which of two patches extracted from these images was farther. Depth discrimination sensitivity was highest when geometric and stereoscopic disparity cues were both present. Blur cues impaired sensitivity by reducing the contrast of geometric information at high spatial frequencies. While simulated generic blur may not assist depth perception, it remains possible that dioptric blur from the optics of an observer's own eyes may be used to recover depth information on an individual basis. The implications of our findings for virtual reality rendering technology are discussed.

  7. Penetration depth of linear polarization imaging for two-layer anisotropic samples

    NASA Astrophysics Data System (ADS)

    Liao, Ran; Zeng, Nan; Li, Dongzhi; Yun, Tianliang; He, Yonghong; Ma, Hui

    2011-08-01

    Polarization techniques can suppress multiply scattering light and have been demonstrated as an effective tool to improve image quality of superficial tissues where many cancers start to develop. Learning the penetration depth behavior of different polarization imaging techniques is important for their clinical applications in diagnosis of skin abnormalities. In the present paper, we construct a two-layer sample consisting of isotropic and anisotropic media and examine quantitatively using both experiments and Monte Carlo simulations the penetration depths of three different polarization imaging methods, i.e., linear differential polarization imaging (LDPI), degree of linear polarization imaging (DOLPI), and rotating linear polarization imaging (RLPI). The results show that the contrast curves of the three techniques are distinctively different, but their characteristic depths are all of the order of the transport mean free path length of the top layer. Penetration depths of LDPI and DOLPI depend on the incident polarization angle. The characteristic depth of DOLPI, and approximately of LDPI at small g, scales with the transport mean free path length. The characteristic depth of RLPI is almost twice as big as that of DOLPI and LDPI, and increases significantly as g increases.

  8. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    PubMed

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches. PMID:26660697

  9. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    PubMed

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.

  10. Depth estimation and occlusion boundary recovery from a single outdoor image

    NASA Astrophysics Data System (ADS)

    Zhang, Shihui; Yan, Shuo

    2012-08-01

    A novel depth estimation and occlusion boundary recovery approach for a single outdoor image is described. This work makes three contributions. The first is the introduction of a new depth estimation model that takes camera rotation and pitch into account, thus improving depth estimation accuracy. The second is a depth estimation algorithm in which, for the first time, we classify standing object regions with visible ground-contact points into three cases according to vanishing-point information; we also propose the depth reference line concept for estimating the depth of regions with depth change. Two advantages are thereby obtained: depth estimation accuracy is further improved and mismarked occlusions are avoided. The third contribution is a depth estimation method for standing object regions without visible ground-contact points, which takes the mean of the minimum and maximum depth estimates as the region depth and prevents occlusion boundaries from being missed. Extensive experiments show that our results are better than previously published ones.

  11. No-Reference Depth Assessment Based on Edge Misalignment Errors for T+D Images.

    PubMed

    Xiang, Sen; Yu, Li; Chen, Chang Wen

    2016-03-01

    The quality of depth is crucial in all depth-based applications. Unfortunately, the error-free ground truth is often unattainable for depth. Therefore, no-reference quality assessment is very much desired. This paper presents a novel depth quality assessment scheme that is completely different from conventional approaches. In particular, this scheme focuses on depth edge misalignment errors in texture-plus-depth (T + D) images and develops a robust method to detect them. Based on the detected misalignments, a no-reference metric is calculated to evaluate the quality of depth maps. In the proposed scheme, misalignments are detected by matching texture and depth edges through three constraints: 1) spatial similarity; 2) edge orientation similarity; and 3) segment length similarity. Furthermore, the matching is performed on edge segments instead of individual pixels, which enables robust edge matching. Experimental results demonstrate that the proposed scheme can detect misalignment errors accurately. The proposed no-reference depth quality metric is highly consistent with the full-reference metric, and is also well-correlated with the quality of synthesized virtual views. Moreover, the proposed scheme can also use the detected edge misalignments to facilitate depth enhancement in various practical texture-plus-depth-based applications. PMID:26841393
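
    A crude, pixel-level simplification of the misalignment idea is sketched below: extract texture and depth edges and count depth-edge pixels with no texture edge nearby. The paper matches edge segments under spatial, orientation, and length constraints; this sketch (OpenCV Canny thresholds and the search radius are arbitrary assumptions) only conveys the intuition.

```python
import numpy as np
import cv2
from scipy.ndimage import distance_transform_edt

def edge_misalignment_score(texture, depth, radius=2):
    """Fraction of depth-edge pixels with no texture edge within `radius` pixels.

    texture, depth : 8-bit single-channel images (the depth map scaled to uint8).
    A pixel-wise proxy for the segment-based metric: higher values mean depth
    edges drifting away from the texture edges they should follow.
    """
    tex_edges = cv2.Canny(texture, 50, 150) > 0
    dep_edges = cv2.Canny(depth, 50, 150) > 0
    if not dep_edges.any():
        return 0.0
    if not tex_edges.any():
        return 1.0
    # Distance from every pixel to the nearest texture-edge pixel.
    dist_to_tex = distance_transform_edt(~tex_edges)
    return float((dist_to_tex[dep_edges] > radius).mean())
```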

  12. The critical evaluation of laser Doppler imaging in determining burn depth.

    PubMed

    Gill, Parneet

    2013-01-01

    This review article discusses the use of laser Doppler imaging as a clinimetric tool to determine burn depth in patients presenting to hospital. Laser Doppler imaging is a very sensitive and specific tool for measuring burn depth; it is easy to use, reliable, and acceptable to the patient because it is quick and non-invasive. Improvements in validity, cost, and reproducibility would improve its use in clinical practice; however, it is difficult to satisfy all of the evaluation criteria all of the time. It remains a widely accepted tool for assessing burn depth, with an ever-increasing body of evidence to support its use, as discussed in this review. Close collaboration between clinicians, statisticians, epidemiologists and psychologists is necessary in order to develop the evidence base for the use of laser Doppler imaging as standard in burn depth assessment so that it can inform management decisions.

  13. Three-dimensional range-gated imaging at infrared wavelengths with super-resolution depth mapping

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Christnacher, Frank; Metzger, Nicolas; Bacher, Emmanuel; Zielenski, Ingo

    2009-05-01

    Range-gated viewing is a prominent technique for night vision, remote sensing and vision through obstacles (fog, smoke, camouflage netting). Furthermore, range-gated images reflect not only the scene reflectance but also contain depth information. The full depth information can be calculated from as few as two range-gated images via the super-resolution depth mapping technique. For the first time, this method is applied to range-gated viewing at infrared wavelengths. An EBCMOS camera and solid-state laser illumination in the 1.5 μm wavelength range were used to depth-map a scene with minimal laser activity of 9 ns per image.
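
    The ratio principle behind recovering depth from two gated images can be sketched as follows, assuming idealized, linearly overlapping gate profiles; real gate shapes would need calibration against known targets.

```python
import numpy as np

def depth_from_two_gates(i1, i2, z_start, z_overlap):
    """Depth map from two range-gated images with linearly overlapping gates.

    Within the overlap region of length z_overlap, an ideal ramp-shaped gate
    makes the normalized ratio i2 / (i1 + i2) vary linearly from 0 to 1 with
    depth, so depth can be recovered per pixel from a single image pair.
    """
    i1 = np.asarray(i1, dtype=np.float64)
    i2 = np.asarray(i2, dtype=np.float64)
    total = i1 + i2
    ratio = np.divide(i2, total, out=np.zeros_like(total), where=total > 0)
    return z_start + ratio * z_overlap
```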

  14. Penetration depth measurement of near-infrared hyperspectral imaging light for milk powder

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The increasingly common application of near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying penetration depth of NIR hyperspectral imaging ligh...

  15. Velocity profile of thin film flows measured using a confocal microscopy particle image velocimetry system with simultaneous multi depth position

    NASA Astrophysics Data System (ADS)

    Kikuchi, K.; Mochizuki, O.

    2015-02-01

    In this paper, we report a technique for simultaneously visualizing flows near walls at nanometre-scale depth positions. To achieve such a fine depth interval, we developed a tilted-observation technique in a particle image velocimetry (PIV) system based on confocal microscopy. The focal plane along the bottom of the flow channel was tilted by tilting the micro-channel, enabling depth scanning within the microscopic field of view. Our system is suitable for measuring 3D two-component flow fields. The depth interval was approximately 220 nm over a depth range of 10 μm, depending on the tilt angle of the micro-channel. Applying the proposed system, we visualized the near-wall flow in a drainage film flow under laminar conditions to a depth of approximately 30 μm via vertical scanning from the bottom to the free surface. The velocity gradient was proportional to the distance from the wall, consistent with theoretical predictions. From the measured near-wall velocity gradient, we calculated the wall shear stress. The measurement accuracy was approximately 1.3 times higher with our proposed method than with the conventional confocal micro-PIV method.
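
    The final step mentioned above, wall shear stress from the near-wall velocity gradient, reduces to a linear fit: tau_w = mu * du/dy at the wall. A minimal sketch with an illustrative water-like viscosity and synthetic profile data is given below.

```python
import numpy as np

def wall_shear_stress(y, u, mu=1.0e-3):
    """Wall shear stress tau_w = mu * du/dy at y = 0 from near-wall PIV data.

    y  : distances from the wall [m] (within the linear part of the profile)
    u  : measured streamwise velocities [m/s]
    mu : dynamic viscosity [Pa s] (water at ~20 C by default)
    """
    slope, _ = np.polyfit(y, u, 1)   # linear fit u ~ slope * y + intercept
    return mu * slope

# Synthetic near-wall profile: du/dy = 2000 1/s  ->  tau_w ~ 2 Pa for water.
y = np.linspace(1e-6, 30e-6, 15)
u = 2000.0 * y + 1e-5 * np.random.default_rng(3).standard_normal(y.size)
print(wall_shear_stress(y, u))       # ~2.0 [Pa]
```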

  16. Influence on Depth Perception Caused by Modifying Gradation of Depth Map Images with Gray Level for Computer-Generated Stereogram and Its Subjective Estimation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    There is a great deal of information on internet web sites concerning all aspects of stereograms: their history, science, social organization, and the various types of stereograms. A stereogram is a two-dimensional flat image viewed in such a way as to produce a three-dimensional effect, i.e., visual depth perception. A variety of software for effectively generating random dot stereograms (RDS) and single image stereograms (SIS) has been released on the internet. On the other hand, various hidden-object images, often called depth map images (DMI), with monochrome gradation must be prepared in advance. This research note focuses on the influence on depth perception caused by modifying the hidden-object images used for digital stereograms. The possibility of subjective estimation of depth is discussed using the simultaneous observation of a few stereograms.

  17. Computational superposition compound eye imaging for extended depth-of-field and field-of-view.

    PubMed

    Nakamura, Tomoya; Horisaki, Ryoichi; Tanida, Jun

    2012-12-01

    This paper describes a superposition compound eye imaging system for extending the depth-of-field (DOF) and the field-of-view (FOV) using a spherical array of erect imaging optics and deconvolution processing. This imaging system had a three-dimensionally space-invariant point spread function generated by the superposition optics. A sharp image with a deep DOF and a wide FOV could be reconstructed by deconvolution processing with a single filter from a single captured image. The properties of the proposed system were confirmed by ray-trace simulations.
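
    Deconvolution with a single filter is possible precisely because the PSF is space-invariant; a hedged sketch using a standard Wiener filter in the Fourier domain is shown below. The PSF handling and noise-to-signal ratio are illustrative assumptions, not the system's measured PSF or the authors' exact filter.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Single-filter Wiener deconvolution assuming a space-invariant PSF.

    nsr is the assumed noise-to-signal power ratio (regularization).
    """
    psf_pad = np.zeros_like(blurred, dtype=np.float64)
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf / psf.sum()
    # Centre the PSF so the restored image is not spatially shifted.
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))

# Usage: restored = wiener_deconvolve(captured_image, measured_psf)
```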

  18. Depth-resolved incoherent and coherent wide-field high-content imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    So, Peter T.

    2016-03-01

    Recent advances in depth-resolved wide-field imaging techniques have enabled many high-throughput applications in biology and medicine. Depth-resolved imaging of incoherent signals can be readily accomplished with structured light illumination or nonlinear temporal focusing. The integration of these high-throughput systems with novel spectroscopic resolving elements further enables high-content information extraction. We will introduce a novel near-common-path interferometer and demonstrate its uses in toxicology and cancer biology applications. The extension of incoherent depth-resolved wide-field imaging to coherent modalities is non-trivial. Here, we will cover recent advances in wide-field 3D-resolved mapping of refractive index, absorbance, and vibronic components in biological specimens.

  19. Multispectral upconversion luminescence intensity ratios for ascertaining the tissue imaging depth

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Wang, Yu; Kong, Xianggui; Liu, Xiaomin; Zhang, Youlin; Tu, Langping; Ding, Yadan; Aalders, Maurice C. G.; Buma, Wybren Jan; Zhang, Hong

    2014-07-01

    Upconversion nanoparticles (UCNPs) have in recent years emerged as excellent contrast agents for in vivo luminescence imaging of deep tissues. But information abstracted from these images is in most cases restricted to 2-dimensions, without the depth information. In this work, a simple method has been developed to accurately ascertain the tissue imaging depth based on the relative luminescence intensity ratio of multispectral NaYF4:Yb3+,Er3+ UCNPs. A theoretical model was set up, where the parameters in the quantitative relation between the relative intensities of the upconversion luminescence spectra and the depth of the UCNPs were determined using tissue-mimicking liquid phantoms. The 540 nm and 650 nm luminescence intensity ratios (G/R ratio) of NaYF4:Yb3+,Er3+ UCNPs were monitored following excitation path (Ex mode) and emission path (Em mode) schemes, respectively. The model was validated by embedding NaYF4:Yb3+,Er3+ UCNPs in layered pork muscles, which demonstrated a very high measurement accuracy for thicknesses up to a centimeter. This approach shall significantly promote the power of nanotechnology in medical optical imaging by expanding the imaging information from 2-dimensional to real 3-dimensional.
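
    The ratio-to-depth inversion implied by the G/R calibration can be sketched with a single-exponential attenuation model, in which the logarithm of the ratio falls linearly with depth. The attenuation coefficients and zero-depth ratio below are illustrative; in practice they are calibrated on tissue-mimicking phantoms as described above.

```python
import numpy as np

def depth_from_gr_ratio(gr_ratio, r0, mu_eff_540, mu_eff_650):
    """Depth estimate from the 540/650 nm upconversion intensity ratio.

    Simple single-exponential model:
        R(d) = r0 * exp(-(mu_eff_540 - mu_eff_650) * d)
    so  d = ln(r0 / R) / (mu_eff_540 - mu_eff_650).
    r0 is the ratio at zero depth; mu_eff_* are effective attenuation
    coefficients of the overlying tissue [1/cm] (illustrative values below).
    """
    return np.log(r0 / gr_ratio) / (mu_eff_540 - mu_eff_650)

# Green light is attenuated more strongly than red in tissue, so the ratio
# drops with depth; invert a measured ratio of 0.5 with r0 = 2.0:
print(depth_from_gr_ratio(0.5, r0=2.0, mu_eff_540=3.0, mu_eff_650=1.5))  # ~0.92 cm
```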

  20. Macroscopic optical imaging technique for wide-field estimation of fluorescence depth in optically turbid media for application in brain tumor surgical guidance

    PubMed Central

    Kolste, Kolbein K.; Kanick, Stephen C.; Valdés, Pablo A.; Jermyn, Michael; Wilson, Brian C.; Roberts, David W.; Paulsen, Keith D.; Leblond, Frederic

    2015-01-01

    A diffuse imaging method is presented that enables wide-field estimation of the depth of fluorescent molecular markers in turbid media by quantifying the deformation of the detected fluorescence spectra due to the wavelength-dependent light attenuation by overlying tissue. This is achieved by measuring the ratio of the fluorescence at two wavelengths in combination with normalization techniques based on diffuse reflectance measurements to evaluate tissue attenuation variations for different depths. It is demonstrated that fluorescence topography can be achieved up to a 5 mm depth using a near-infrared dye with millimeter depth accuracy in turbid media having optical properties representative of normal brain tissue. Wide-field depth estimates are made using optical technology integrated onto a commercial surgical microscope, making this approach feasible for real-world applications. PMID:25652704

  1. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high precision and well-structured measurements in (industrial) photogrammetry to fully-automated non-structured applications in computer vision. Accuracy and precision is a critical issue for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, besides others: physical representation of object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologue features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems and object scale. The paper discusses the above mentioned parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verifications are presented and demonstrated by practical examples

  2. Penetration Depth Measurement of Near-Infrared Hyperspectral Imaging Light for Milk Powder.

    PubMed

    Huang, Min; Kim, Moon S; Chao, Kuanglin; Qin, Jianwei; Mo, Changyeun; Esquerre, Carlos; Delwiche, Stephen; Zhu, Qibing

    2016-01-01

    The increasingly common application of the near-infrared (NIR) hyperspectral imaging technique to the analysis of food powders has led to the need for optical characterization of samples. This study was aimed at exploring the feasibility of quantifying penetration depth of NIR hyperspectral imaging light for milk powder. Hyperspectral NIR reflectance images were collected for eight different milk powder products that included five brands of non-fat milk powder and three brands of whole milk powder. For each milk powder, five different powder depths ranging from 1 mm-5 mm were prepared on the top of a base layer of melamine, to test spectral-based detection of the melamine through the milk. A relationship was established between the NIR reflectance spectra (937.5-1653.7 nm) and the penetration depth was investigated by means of the partial least squares-discriminant analysis (PLS-DA) technique to classify pixels as being milk-only or a mixture of milk and melamine. With increasing milk depth, classification model accuracy was gradually decreased. The results from the 1-mm, 2-mm and 3-mm models showed that the average classification accuracy of the validation set for milk-melamine samples was reduced from 99.86% down to 94.93% as the milk depth increased from 1 mm-3 mm. As the milk depth increased to 4 mm and 5 mm, model performance deteriorated further to accuracies as low as 81.83% and 58.26%, respectively. The results suggest that a 2-mm sample depth is recommended for the screening/evaluation of milk powders using an online NIR hyperspectral imaging system similar to that used in this study. PMID:27023555
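
    The classification step can be sketched with scikit-learn's PLSRegression used as a discriminant (dummy-coded class labels, thresholded predictions). The spectra below are synthetic, with a melamine absorption feature that weakens as the simulated milk layer thickens; band positions, feature strength, and component count are all illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_bands = 100                                   # stand-in for the 937.5-1653.7 nm bands

def spectra(n, melamine, milk_depth_mm):
    """Synthetic reflectance spectra; melamine adds a feature attenuated by the milk layer."""
    base = 0.6 + 0.1 * np.sin(np.linspace(0, 3, n_bands))
    x = base + 0.02 * rng.standard_normal((n, n_bands))
    if melamine:
        feature = 0.08 * np.exp(-milk_depth_mm / 2.0)
        x[:, 40:50] -= feature                  # hypothetical melamine absorption bands
    return x

x_train = np.vstack([spectra(200, False, 2), spectra(200, True, 2)])
y_train = np.array([0] * 200 + [1] * 200)       # 0 = milk only, 1 = milk + melamine

pls = PLSRegression(n_components=5)
pls.fit(x_train, y_train)

x_test = np.vstack([spectra(50, False, 2), spectra(50, True, 2)])
y_pred = (pls.predict(x_test).ravel() > 0.5).astype(int)
print("accuracy:", (y_pred == np.array([0] * 50 + [1] * 50)).mean())
```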

  5. Removing the depth-degeneracy in optical frequency domain imaging with frequency shifting

    PubMed Central

    Yun, S. H.; Tearney, G. J.; de Boer, J. F.; Bouma, B. E.

    2009-01-01

    A novel technique using an acousto-optic frequency shifter in optical frequency domain imaging (OFDI) is presented. The frequency shift eliminates the ambiguity between positive and negative differential delays, effectively doubling the interferometric ranging depth while avoiding image cross-talk. A signal processing algorithm is demonstrated to accommodate nonlinearity in the tuning slope of the wavelength-swept OFDI laser source. PMID:19484034
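
    The depth degeneracy arises because a real-valued spectral interferogram Fourier transforms into symmetric positive- and negative-delay peaks; adding a carrier frequency shift moves the signal away from DC so the two sides can be told apart. The numpy toy model below illustrates this mechanism in one dimension; the sample count, delays, and shift value are illustrative assumptions, not parameters of the OFDI system described.

```python
# Toy illustration: complex-conjugate (depth) ambiguity in Fourier-domain ranging
# and its removal by a carrier frequency shift.
import numpy as np

n = 1024
k = np.arange(n)                       # sample index across the wavelength sweep
delays = [+40, -70]                    # reflectors at positive and negative differential delay

# Without a frequency shift both delays produce plain cosines, so a reflector at +40
# is indistinguishable from one at -40 (mirror-image artifact).
sig_plain = sum(np.cos(2 * np.pi * d * k / n) for d in delays)

# With a carrier shift f0 (acousto-optic frequency shifter analogue) the fringe becomes
# cos(2*pi*(f0 + d)*k/n): positive and negative delays land on different sides of f0.
f0 = 200
sig_shift = sum(np.cos(2 * np.pi * (f0 + d) * k / n) for d in delays)

for name, sig in [("no shift", sig_plain), ("with shift", sig_shift)]:
    spectrum = np.abs(np.fft.rfft(sig))
    peaks = sorted(np.argsort(spectrum)[-2:].tolist())   # two strongest bins
    print(f"{name:10s} -> peak bins: {peaks}")
```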

  6. Evaluation of optical imaging and spectroscopy approaches for cardiac tissue depth assessment

    SciTech Connect

    Lin, B; Matthews, D; Chernomordik, V; Gandjbakhche, A; Lane, S; Demos, S G

    2008-02-13

    NIR light scattering from ex vivo porcine cardiac tissue was investigated to understand how imaging or point measurement approaches may assist development of methods for tissue depth assessment. Our results indicate an increase of average image intensity as thickness increases up to approximately 2 mm. In a dual-fiber spectroscopy configuration, sensitivity up to approximately 3 mm was observed, increasing to approximately 6 mm when the spectral ratio between selected wavelengths was used. Preliminary Monte Carlo results provided a reasonable fit to the experimental data.

  7. Depth elemental imaging of forensic samples by confocal micro-XRF method.

    PubMed

    Nakano, Kazuhiko; Nishi, Chihiro; Otsuki, Kazunori; Nishiwaki, Yoshinori; Tsuji, Kouichi

    2011-05-01

    Micro-XRF is a significant tool for the analysis of small regions. A micro-X-ray beam can be created in the laboratory by various focusing X-ray optics. Previously, nondestructive 3D-XRF analysis had not been easy because of the high penetration of the fluorescent X-rays emitted from within the sample. A recently developed confocal micro-XRF technique combined with polycapillary X-ray lenses enables depth-selective analysis. In this paper, we applied a new tabletop confocal micro-XRF system to analyze several forensic samples, that is, multilayered automotive paint fragments and leather samples, for use in criminalistics. Elemental depth profiles and mapping images of forensic samples were successfully obtained by the confocal micro-XRF technique. Multilayered structures can be distinguished in forensic samples by their elemental depth profiles. However, it was found that some leather sheets exhibited heterogeneous distributions. To confirm the validity, the results of a conventional micro-XRF analysis of the cross section were compared with those of the confocal micro-XRF. The results obtained by the confocal micro-XRF system were in approximate agreement with those obtained by the conventional micro-XRF. Elemental depth imaging was performed on the paint fragments and leather sheets to confirm the homogeneity of the respective layers of the sample. The depth images of the paint fragment showed a homogeneous distribution in each layer except for Fe and Zn. In contrast, several components in the leather sheets were predominantly localized.

  8. Real-Depth imaging: a new (no glasses) 3D imaging technology with video/data projection applications

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1997-05-01

    Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths from any display, such as CRT and LCD, for television, computer, projection, and other formats. In addition, unlike stereoscopic 3D imaging, no glasses, headgear, or other viewing aids are used. And, unlike current autostereoscopic imaging technologies, there is virtually no restriction on where viewers can sit to view the images, with no 'bad' or 'dead' zones, flipping, or pseudoscopy. In addition to providing traditional depth cues such as perspective and background image occlusion, the new technology also provides both horizontal and vertical binocular parallax and accommodation which coincides with convergence. Since accommodation coincides with convergence, viewing these images does not produce headaches, fatigue, or eye-strain, regardless of how long they are viewed. The imagery must either be formatted for the Floating Images platform when written, or existing software can be reformatted without much difficulty. The optical hardware system can be made to accommodate virtually any projection system to produce Floating Images for the boardroom, video arcade, stage shows, or the classroom.

  9. Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera.

    PubMed

    Jeong, Youngmo; Kim, Jonghyun; Yeom, Jiwoon; Lee, Chang-Kun; Lee, Byoungho

    2015-12-10

    In this paper, we develop a real-time depth-controllable integral imaging system. With a high-frame-rate camera and a focus-controllable lens, light fields from various depth ranges can be captured. According to the image plane of the light field camera, objects in virtual and real space are recorded simultaneously. The captured light field information is converted to the elemental image in real time without pseudoscopic problems. In addition, we derive the characteristics and limitations of the light field camera as a 3D broadcasting capture device using precise geometric optics. With further analysis, the implemented system provides more accurate light fields than existing devices, without depth distortion. We adapt an f-number matching method at the capture and display stages to record a more exact light field and to solve depth distortion, respectively. The algorithm allows users to adjust the pixel mapping structure of the reconstructed 3D image in real time. The proposed method presents the possibility of a handheld real-time 3D broadcasting system in a cheaper and more applicable way than previous methods.

  10. Augmented depth perception visualization in 2D/3D image fusion.

    PubMed

    Wang, Jian; Kreiser, Matthias; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal

    2014-12-01

    2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-the-art visualization software are usually related to the strong compromise between 2D and 3D visibility or the lack of depth perception. In this paper, we investigate several concepts enabling improvement of the current image fusion visualization found in the operating room. First, a contour-enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique, both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire that included 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray. Further, when integrating an RGB or RB color-depth encoding in the image fusion, both perception and intuitiveness are improved.

  11. XPS for non-destructive depth profiling and 3D imaging of surface nanostructures.

    PubMed

    Hajati, Shaaker; Tougaard, Sven

    2010-04-01

    Depth profiling of nanostructures is of high importance both technologically and fundamentally. Therefore, many different methods have been developed for determination of the depth distribution of atoms, for example ion beam (e.g. O2+, Ar+) sputtering, low-damage C60 cluster ion sputtering for depth profiling of organic materials, water droplet cluster ion beam depth profiling, ion-probing techniques (Rutherford backscattering spectroscopy (RBS), secondary-ion mass spectroscopy (SIMS) and glow-discharge optical emission spectroscopy (GDOES)), X-ray microanalysis using the electron probe variation technique combined with Monte Carlo calculations, angle-resolved XPS (ARXPS), and X-ray photoelectron spectroscopy (XPS) peak-shape analysis. Each of the depth profiling techniques has its own advantages and disadvantages. However, in many cases, non-destructive techniques are preferred; these include ARXPS and XPS peak-shape analysis. The former together with parallel factor analysis is suitable for giving an overall understanding of chemistry and morphology with depth. It works very well for flat surfaces but it fails for rough or nanostructured surfaces because of the shadowing effect. In the latter method shadowing effects can be avoided because only a single spectrum is used in the analysis and this may be taken at near normal emission angle. It is a rather robust means of determining atom depth distributions on the nanoscale both for large-area XPS analysis and for imaging. We critically discuss some of the techniques mentioned above and show that both ARXPS imaging and, particularly, XPS peak-shape analysis for 3D imaging of nanostructures are very promising techniques and open a gateway for visualizing nanostructures. PMID:20091159

  12. Analysis of airborne imaging spectrometer data for the Ruby Mountains, Montana, by use of absorption-band-depth images

    NASA Technical Reports Server (NTRS)

    Brickey, David W.; Crowley, James K.; Rowan, Lawrence C.

    1987-01-01

    Airborne Imaging Spectrometer-1 (AIS-1) data were obtained for an area of amphibolite-grade metamorphic rocks that has moderate rangeland vegetation cover. Although rock exposures are sparse and patchy at this site, soils are visible through the vegetation and typically comprise 20 to 30 percent of the surface area. Channel-averaged band-depth images were produced for diagnostic soil and rock absorption bands. Sets of three such images were combined to produce color composite band-depth images. This relatively simple approach did not require extensive calibration efforts and was effective for discerning a number of spectrally distinctive rocks and soils, including soils having high talc concentrations. The results show that the high spectral and spatial resolution of AIS-1 and future sensors holds considerable promise for mapping mineral variations in soil, even in moderately vegetated areas.
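
    Absorption-band depth is conventionally computed per pixel by removing a linear continuum fitted between the band shoulders and measuring how far the band-center reflectance falls below it. The numpy sketch below shows that standard calculation; the wavelengths and reflectance values are made up for illustration and are not taken from the AIS-1 data.

```python
# Band depth relative to a linear continuum between two shoulder wavelengths:
# depth = 1 - R_center / R_continuum(center)
import numpy as np

def band_depth(wavelengths, reflectance, left_shoulder, center, right_shoulder):
    """Continuum-removed absorption-band depth for a single spectrum."""
    r_left, r_center, r_right = np.interp(
        [left_shoulder, center, right_shoulder], wavelengths, reflectance)
    # Linear continuum evaluated at the band center
    t = (center - left_shoulder) / (right_shoulder - left_shoulder)
    r_cont = (1 - t) * r_left + t * r_right
    return 1.0 - r_center / r_cont

# Illustrative spectrum with an absorption feature near 2.31 um (values assumed)
wl = np.linspace(2.0, 2.45, 46)                       # wavelength in micrometers
refl = 0.4 - 0.08 * np.exp(-((wl - 2.31) / 0.02) ** 2)
print(f"band depth at 2.31 um: {band_depth(wl, refl, 2.25, 2.31, 2.38):.3f}")
```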

  13. Two-photon instant structured illumination microscopy improves the depth penetration of super-resolution imaging in thick scattering samples.

    PubMed

    Winter, Peter W; York, Andrew G; Nogare, Damian Dalle; Ingaramo, Maria; Christensen, Ryan; Chitnis, Ajay; Patterson, George H; Shroff, Hari

    2014-09-20

    Fluorescence imaging methods that achieve spatial resolution beyond the diffraction limit (super-resolution) are of great interest in biology. We describe a super-resolution method that combines two-photon excitation with structured illumination microscopy (SIM), enabling three-dimensional interrogation of live organisms with ~150 nm lateral and ~400 nm axial resolution, at frame rates of ~1 Hz. By performing optical rather than digital processing operations to improve resolution, our microscope permits super-resolution imaging with no additional cost in acquisition time or phototoxicity relative to the point-scanning two-photon microscope upon which it is based. Our method provides better depth penetration and inherent optical sectioning than all previously reported super-resolution SIM implementations, enabling super-resolution imaging at depths exceeding 100 μm from the coverslip surface. The capability of our system for interrogating thick live specimens at high resolution is demonstrated by imaging whole nematode embryos and larvae, and tissues and organs inside zebrafish embryos.

  15. Improving Resolution and Depth of Astronomical Observations via Modern Mathematical Methods for Image Analysis

    NASA Astrophysics Data System (ADS)

    Castellano, M.; Ottaviani, D.; Fontana, A.; Merlin, E.; Pilo, S.; Falcone, M.

    2015-09-01

    In the past years, modern mathematical methods for image analysis have led to a revolution in many fields, from computer vision to scientific imaging. However, some recently developed image processing techniques successfully exploited in other sectors have rarely, if ever, been experimented with on astronomical observations. We present here tests of two classes of variational image enhancement techniques, "structure-texture decomposition" and "super-resolution", showing that they are effective in improving the quality of observations. Structure-texture decomposition makes it possible to recover faint sources previously hidden by the background noise, effectively increasing the depth of the available observations. Super-resolution yields a higher-resolution and better-sampled image out of a set of low-resolution frames, thus mitigating problems in data analysis arising from the difference in resolution/sampling between different instruments, as in the case of the EUCLID VIS and NIR imagers.
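
    Variational structure-texture decomposition typically splits an image into a piecewise-smooth "structure" part, for instance via total-variation regularization, plus a residual "texture/noise" part. The sketch below uses scikit-image's TV denoiser as a stand-in for the structure component; it is only a schematic of the idea, not the specific variational model used by the authors, and the weight value and test image are assumptions.

```python
# Schematic structure-texture split: structure = TV-regularized image,
# texture = residual (original minus structure).
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

image = img_as_float(data.camera())                    # stand-in image, not astronomical data
structure = denoise_tv_chambolle(image, weight=0.1)    # piecewise-smooth component
texture = image - structure                            # oscillatory component: fine detail + noise

print("structure variance:", float(structure.var()))
print("texture variance:  ", float(texture.var()))
```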

  16. Ultrasonic camera automatic image depth and stitching modifications for monitoring aerospace composites

    NASA Astrophysics Data System (ADS)

    Regez, Brad; Kirikera, Goutham; Yuen, Martin Tan Hwai; Krishnaswamy, Sridhar; Lasser, Bob

    2009-03-01

    Two modifications to an ultrasonic camera system have been performed in an effort to reduce setup time and post-inspection image processing. Current production ultrasonic cameras have image gates that are adjusted manually. The process of adjusting them prior to each inspection consumes large amounts of time and requires a skilled operator. The authors have overcome this by integrating the A-scan and image together such that the image gating is automatically adjusted using the A-scan data. The system monitors the A-scan signal at the center of the camera's field of view (FOV) and adjusts the image gating accordingly. This integration allows for defect detection at any depth of the inspected area. Ultrasonic camera operation requires the inspector to scan the surface manually while observing the camera's FOV on the monitor. If the monitor image indicates a defect, the operator stores that image manually and marks an index on the surface to record where the image was acquired. The second modification automates this effort by employing a digital encoder and an image capture card. The encoder is used to track the movement of the camera on the structure's surface, record positions, and trigger the image capture device. The images are stored in real time in buffer memory rather than on the hard drive. Storing images in the buffer enables more rapid acquisition than storing the images individually to the hard drive. Once the images are stored, an algorithm tracks the movement of the camera through the encoder and displays the corresponding image to the inspector. Upon completion of the scan, an algorithm digitally stitches all the images to create a single full-field image. The modifications were tested on an aerospace composite laminate with known defects and the results are discussed.

  17. Development of Extended-Depth Swept Source Optical Coherence Tomography for Applications in Ophthalmic Imaging of the Anterior and Posterior Eye

    NASA Astrophysics Data System (ADS)

    Dhalla, Al-Hafeez Zahir

    Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides micron-scale resolution of tissue micro-structure over depth ranges of several millimeters. This imaging technique has had a profound effect on the field of ophthalmology, wherein it has become the standard of care for the diagnosis of many retinal pathologies. Applications of OCT in the anterior eye, as well as for imaging of coronary arteries and the gastro-intestinal tract, have also shown promise, but have not yet achieved widespread clinical use. The usable imaging depth of OCT systems is most often limited by one of three factors: optical attenuation, inherent imaging range, or depth-of-focus. The first of these, optical attenuation, stems from the limitation that OCT only detects singly-scattered light. Thus, beyond a certain penetration depth into turbid media, essentially all of the incident light will have been multiply scattered and can no longer be used for OCT imaging. For many applications (especially retinal imaging), optical attenuation is the most restrictive of the three imaging depth limitations. However, for some applications, especially anterior segment, cardiovascular (catheter-based) and GI (endoscopic) imaging, the usable imaging depth is often not limited by optical attenuation, but rather by the inherent imaging range of the OCT system. This inherent imaging range, which is specific only to Fourier-domain OCT, arises due to two factors: sensitivity fall-off and the complex conjugate ambiguity. Finally, due to the trade-off between lateral resolution and axial depth-of-focus inherent in diffractive optical systems, additional depth limitations sometimes arise in either high-lateral-resolution or extended-depth OCT imaging systems. The depth-of-focus limitation is most apparent in applications such as adaptive optics (AO-) OCT imaging of the retina, and extended-depth imaging of the ocular anterior segment. In this dissertation, techniques for

  18. High contrast, depth-resolved thermoreflectance imaging using a Nipkow disk confocal microscope.

    PubMed

    Summers, J A; Yang, T; Tuominen, M T; Hudgings, J A

    2010-01-01

    We have developed a depth-resolved confocal thermal imaging technique that is capable of measuring the temperature distribution of an encapsulated or semi-obstructed device. The technique employs lock-in charge coupled device-based thermoreflectance imaging via a Nipkow disk confocal microscope, which is used to eliminate extraneous reflections from above or below the imaging plane. We use the confocal microscope to predict the decrease in contrast and dynamic range due to an obstruction for widefield thermoreflectance, and we demonstrate the ability of confocal thermoreflectance to maintain a high contrast and thermal sensitivity in the presence of large reflecting obstructions in the optical path.

  19. A reconfigurable 256 × 256 image sensor controller that is compatible for depth measurement

    NASA Astrophysics Data System (ADS)

    Zhe, Chen; Shan, Di; Cong, Shi; Liyuan, Liu; Nanjian, Wu

    2014-10-01

    This paper presents an image sensor controller that is compatible with depth measurement, based on continuous-wave modulation time-of-flight technology. The image sensor controller is utilized to generate reconfigurable control signals for a 256 × 256 high-speed CMOS image sensor with a conventional image sensing mode and a depth measurement mode. The image sensor controller generates control signals for the pixel array to realize the rolling exposure and correlated double sampling functions. A refined circuit design technique at the logic level is presented to reduce chip area and power consumption. The chip, with a size of 700 × 3380 μm2, is fabricated in a standard 0.18 μm CMOS image sensor process. The power consumption estimated by the synthesis tool is 65 mW under a 1.8 V supply voltage and a 100 MHz clock frequency. Our test results show that the image sensor controller functions properly.
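
    In continuous-wave modulation time-of-flight imaging, depth is commonly recovered from four phase-shifted correlation samples per pixel. The sketch below shows that standard four-phase calculation; the modulation frequency and sample values are illustrative and are not taken from the sensor described above.

```python
# Standard 4-phase CW time-of-flight depth estimate:
# phase = atan2(A3 - A1, A0 - A2),  depth = c * phase / (4 * pi * f_mod)
import math

C = 299_792_458.0          # speed of light, m/s

def cw_tof_depth(a0, a1, a2, a3, f_mod_hz):
    """Depth in meters from four correlation samples at 0, 90, 180, 270 degrees."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod_hz)

# Illustrative values: 20 MHz modulation gives an unambiguous range of c/(2f) = 7.5 m
print(f"depth = {cw_tof_depth(120.0, 80.0, 60.0, 100.0, 20e6):.3f} m")
```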

  20. Quantitative comparison of wavelength dependence on penetration depth and imaging contrast for ultrahigh-resolution optical coherence tomography using supercontinuum sources at five wavelength regions

    NASA Astrophysics Data System (ADS)

    Ishida, S.; Nishizawa, N.

    2012-01-01

    Optical coherence tomography (OCT) is a non-invasive optical imaging technology for micron-scale cross-sectional imaging of biological tissue and materials. We have been investigating ultrahigh-resolution optical coherence tomography (UHR-OCT) using fiber-based supercontinuum sources. Although ultrahigh longitudinal resolution has been achieved in several center wavelength regions, low penetration depth remains a serious limitation for many applications. To realize ultrahigh resolution and deep penetration simultaneously, it is necessary to choose the proper wavelength to maximize light penetration and enhance the image contrast at deeper depths. Recently, we demonstrated the wavelength dependence of penetration depth and imaging contrast for ultrahigh-resolution OCT in the 0.8 μm, 1.3 μm, and 1.7 μm wavelength ranges. In this paper, we additionally used SC sources at 1.06 μm and 1.55 μm and investigated the wavelength dependence of UHR-OCT in five wavelength regions. The image contrast and penetration depth are discussed in terms of the scattering coefficient and water absorption of the samples. Nearly the same optical characteristics in longitudinal and lateral resolution, sensitivity, and incident optical power were obtained in all wavelength regions. We confirmed the enhancement of image contrast and reduced ambiguity of deeper epithelioid structures in the longer wavelength regions.

  1. In-vivo full depth of eye imaging spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Dai, Cuixia; Zhou, Chuanqing; Jiao, Shuliang; Xi, Peng; Ren, Qiushi

    2011-09-01

    It is necessary to apply spectral-domain optical coherence tomography (SD-OCT) to image the whole eye segment for practical clinical application, but the imaging depth of SD-OCT is limited by the spectral resolution of the spectrometer. To date, no results on this topic have been reported. In our study, a new dual-channel, dual-focus OCT system is adopted to image the whole eye segment. The cornea and the crystalline lens are simultaneously imaged using full-range complex spectral-domain OCT in one channel, while the retina is imaged by the other. The new system was successfully tested by imaging a volunteer's eye in vivo. The preliminary results presented in this paper demonstrate the feasibility of this approach.

  2. Full-range imaging of eye accommodation by high-speed long-depth range optical frequency domain imaging

    PubMed Central

    Furukawa, Hiroyuki; Hiro-Oka, Hideaki; Satoh, Nobuyuki; Yoshimura, Reiko; Choi, Donghak; Nakanishi, Motoi; Igarashi, Akihito; Ishikawa, Hitoshi; Ohbayashi, Kohji; Shimizu, Kimiya

    2010-01-01

    We describe a high-speed, long-depth-range optical frequency domain imaging (OFDI) system employing a long-coherence-length tunable source and demonstrate dynamic full-range imaging of the anterior segment of the eye, from the cornea surface to the posterior capsule of the crystalline lens, over a depth range of 12 mm without removing the complex conjugate image ambiguity. The tunable source spanned from 1260 to 1360 nm with an average output power of 15.8 mW. The fast A-scan rate of 20,000 per second enabled dynamic OFDI of the time course of the whole anterior segment following abrupt relaxation from the accommodated to the relaxed state, which was measured for a healthy eye and for an eye with an intraocular lens. PMID:21258564

  3. High-resolution in-depth imaging of optically cleared thick samples using an adaptive SPIM

    PubMed Central

    Masson, Aurore; Escande, Paul; Frongia, Céline; Clouvel, Grégory; Ducommun, Bernard; Lorenzo, Corinne

    2015-01-01

    Today, Light Sheet Fluorescence Microscopy (LSFM) makes it possible to image fluorescent samples through depths of several hundreds of microns. However, LSFM also suffers from scattering, absorption and optical aberrations. Spatial variations in the refractive index inside the samples cause major changes to the light path resulting in loss of signal and contrast in the deepest regions, thus impairing in-depth imaging capability. These effects are particularly marked when inhomogeneous, complex biological samples are under study. Recently, chemical treatments have been developed to render a sample transparent by homogenizing its refractive index (RI), consequently enabling a reduction of scattering phenomena and a simplification of optical aberration patterns. One drawback of these methods is that the resulting RI of cleared samples does not match the working RI medium generally used for LSFM lenses. This RI mismatch leads to the presence of low-order aberrations and therefore to a significant degradation of image quality. In this paper, we introduce an original optical-chemical combined method based on an adaptive SPIM and a water-based clearing protocol enabling compensation for aberrations arising from RI mismatches induced by optical clearing methods and acquisition of high-resolution in-depth images of optically cleared complex thick samples such as Multi-Cellular Tumour Spheroids. PMID:26576666

  5. Large area and depth-profiling dislocation imaging and strain analysis in Si/SiGe/Si heterostructures.

    PubMed

    Chen, Xin; Zuo, Daniel; Kim, Seongwon; Mabon, James; Sardela, Mauro; Wen, Jianguo; Zuo, Jian-Min

    2014-10-01

    We demonstrate the combined use of large-area depth-profiling dislocation imaging and quantitative composition and strain measurement for a strained Si/SiGe/Si sample, based on the nondestructive techniques of electron beam-induced current (EBIC) and X-ray diffraction reciprocal space mapping (XRD RSM). Depth profiling and improved spatial resolution are achieved for dislocation imaging in EBIC by using different electron beam energies at a low temperature of ~7 K. Images recorded clearly show dislocations distributed in three regions of the sample: deep dislocation networks concentrated in the "strained" SiGe region, shallow misfit dislocations at the top Si/SiGe interface, and threading dislocations connecting the two regions. Dislocation densities at the top of the sample can be measured directly from the EBIC results. XRD RSM reveals separated peaks, allowing a quantitative measurement of composition and strain corresponding to layers of different composition ratios. High-resolution scanning transmission electron microscopy cross-section analysis clearly shows the individual composition layers and the dislocation lines in the layers, which supports the EBIC and XRD RSM results.

  6. Multiplane imaging and depth-of-focus extending in digital holography by a single-shot digital hologram

    NASA Astrophysics Data System (ADS)

    Pan, Weiqing

    2013-01-01

    Limited depth of field is the main drawback of conventional microscopy that prevents observation of thick semi-transparent objects with all their features in focus; only a portion of the imaged volume along the optical axis is in good focus at once. This paper presents a novel reconstruction algorithm to image multiple planes at different depths simultaneously and to realize extended-focus imaging. A shift parameter that accounts for the transverse displacement of the coordinate system of the image plane at different depths is introduced into the diffraction integral kernel. Combining the diffraction integral kernel with different shift values and reconstruction depths yields multiplane imaging in a single reconstruction. Moreover, an extended depth-of-focus method is also presented by modifying the proposed multiplane imaging algorithm. A description of the method and experimental results are reported.
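
    Numerical refocusing in digital holography is usually carried out by propagating the recorded complex field to a chosen depth with a diffraction integral; a common choice is the angular spectrum method, sketched below for a single plane. Running it for several depths produces a stack of focal planes from one hologram. The wavelength, pixel pitch, and field used here are placeholder values, and this generic sketch is not the specific shifted-kernel algorithm proposed in the paper.

```python
# Angular spectrum propagation of a complex field to depth z (one reconstruction plane).
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a square complex field by distance z (all lengths in meters)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_pitch)               # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = k * np.sqrt(np.maximum(arg, 0.0))              # evanescent components suppressed
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Placeholder hologram field: reconstruct at a few depths to build a focal stack
rng = np.random.default_rng(1)
hologram_field = rng.normal(size=(256, 256)) + 1j * rng.normal(size=(256, 256))
for z in (5e-3, 10e-3, 20e-3):                           # depths in meters (illustrative)
    plane = angular_spectrum_propagate(hologram_field, 632.8e-9, 3.45e-6, z)
    print(f"z = {z*1e3:4.1f} mm, mean intensity = {np.mean(np.abs(plane)**2):.3f}")
```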

  7. Improving depth resolution in digital breast tomosynthesis by iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Roth, Erin G.; Kraemer, David N.; Sidky, Emil Y.; Reiser, Ingrid S.; Pan, Xiaochuan

    2015-03-01

    Digital breast tomosynthesis (DBT) is currently enjoying tremendous growth in its application to screening for breast cancer. This is because it addresses a major weakness of mammographic projection imaging; namely, a cancer can be hidden by overlapping fibroglandular tissue structures or the same normal structures can mimic a malignant mass. DBT addresses these issues by acquiring few projections over a limited angle scanning arc that provides some depth resolution. As DBT is a relatively new device, there is potential to improve its performance significantly with improved image reconstruction algorithms. Previously, we reported a variation of adaptive steepest descent - projection onto convex sets (ASD-POCS) for DBT, which employed a finite differencing filter to enhance edges for improving visibility of tissue structures and to allow for volume-of-interest reconstruction. In the present work we present a singular value decomposition (SVD) analysis to demonstrate the gain in depth resolution for DBT afforded by use of the finite differencing filter.

  8. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian; Kasprzyk, Jerzy

    2016-02-01

    The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.

  9. Underwater depth imaging using time-correlated single-photon counting.

    PubMed

    Maccarone, Aurora; McCarthy, Aongus; Ren, Ximing; Warburton, Ryan E; Wallace, Andy M; Moffat, James; Petillot, Yvan; Buller, Gerald S

    2015-12-28

    A depth imaging system, based on the time-of-flight approach and the time-correlated single-photon counting (TCSPC) technique, was investigated for use in highly scattering underwater environments. The system comprised a pulsed supercontinuum laser source, a monostatic scanning transceiver, with a silicon single-photon avalanche diode (SPAD) used for detection of the returned optical signal. Depth images were acquired in the laboratory at stand-off distances of up to 8 attenuation lengths, using per-pixel acquisition times in the range 0.5 to 100 ms, at average optical powers in the range 0.8 nW to 950 μW. In parallel, a LiDAR model was developed and validated using experimental data. The model can be used to estimate the performance of the system under a variety of scattering conditions and system parameters. PMID:26832050
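
    In a TCSPC depth imager the per-pixel range comes from the peak (or centroid) of the photon arrival-time histogram, converted to distance with the speed of light in the medium. The sketch below shows that conversion for an underwater case; the bin width, histogram, and refractive index of 1.33 are illustrative assumptions, not parameters of the system above.

```python
# Per-pixel depth from a TCSPC timing histogram: find the return peak and
# convert the round-trip time to a one-way distance in water.
import numpy as np

C_VACUUM = 299_792_458.0    # m/s
N_WATER = 1.33              # assumed refractive index of water

def depth_from_histogram(counts, bin_width_s):
    """Depth in meters from a photon-count histogram over time-of-flight bins."""
    peak_bin = int(np.argmax(counts))            # simple peak pick (a centroid is also common)
    t_round_trip = peak_bin * bin_width_s
    return C_VACUUM / N_WATER * t_round_trip / 2.0

# Illustrative histogram: Poisson background plus a return peak at bin 700
counts = np.random.default_rng(2).poisson(1.0, size=4096)
counts[700] += 200
print(f"estimated depth: {depth_from_histogram(counts, 2e-12):.3f} m")   # 2 ps bins (assumed)
```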

  10. All-near-infrared multiphoton microscopy interrogates intact tissues at deeper imaging depths than conventional single- and two-photon near-infrared excitation microscopes.

    PubMed

    Sarder, Pinaki; Yazdanfar, Siavash; Akers, Walter J; Tang, Rui; Sudlow, Gail P; Egbulefu, Christopher; Achilefu, Samuel

    2013-10-01

    The era of molecular medicine has ushered in the development of microscopic methods that can report molecular processes in thick tissues with high spatial resolution. A commonality in deep-tissue microscopy is the use of near-infrared (NIR) lasers with single- or multiphoton excitations. However, the relationship between different NIR excitation microscopic techniques and the imaging depths in tissue has not been established. We compared such depth limits for three NIR excitation techniques: NIR single-photon confocal microscopy (NIR SPCM), NIR multiphoton excitation with visible detection (NIR/VIS MPM), and all-NIR multiphoton excitation with NIR detection (NIR/NIR MPM). Homologous cyanine dyes provided the fluorescence. Intact kidneys were harvested after administration of kidney-clearing cyanine dyes in mice. NIR SPCM and NIR/VIS MPM achieved similar maximum imaging depth of ∼100 μm. The NIR/NIR MPM enabled greater than fivefold imaging depth (>500 μm) using the harvested kidneys. Although the NIR/NIR MPM used 1550-nm excitation where water absorption is relatively high, cell viability and histology studies demonstrate that the laser did not induce photothermal damage at the low laser powers used for the kidney imaging. This study provides guidance on the imaging depth capabilities of NIR excitation-based microscopic techniques and reveals the potential to multiplex information using these platforms.

  11. Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy.

    PubMed

    Quirin, Sean; Vladimirov, Nikita; Yang, Chao-Tsung; Peterka, Darcy S; Yuste, Rafael; Ahrens, Misha B

    2016-03-01

    Increasing the volumetric imaging speed of light-sheet microscopy will improve its ability to detect fast changes in neural activity. Here, a system is introduced for brain-wide imaging of neural activity in the larval zebrafish by coupling structured illumination with cubic phase extended depth-of-field (EDoF) pupil encoding. This microscope enables faster light-sheet imaging and facilitates arbitrary plane scanning, removing constraints on acquisition speed, alignment tolerances, and physical motion near the sample. The usefulness of this method is demonstrated by performing multi-plane calcium imaging in the fish brain with a 416 × 832 × 160 μm field of view at 33 Hz. The optomotor response behavior of the zebrafish is monitored at high speeds, and time-locked correlations of neuronal activity are resolved across its brain. PMID:26974063

  13. Depth-dependent swimbladder compression in herring Clupea harengus observed using magnetic resonance imaging.

    PubMed

    Fässler, S M M; Fernandes, P G; Semple, S I K; Brierley, A S

    2009-01-01

    Changes in swimbladder morphology with pressure were examined in an Atlantic herring Clupea harengus by magnetic resonance imaging of a dead fish in a purpose-built pressure chamber. Swimbladder volume changed with pressure according to Boyle's Law, but compression in the lateral aspect was greater than in the dorsal aspect. This uneven compression has a smaller effect on acoustic backscattering than symmetrical compression and would elicit less pronounced effects of depth on acoustic biomass estimates of C. harengus. PMID:20735542
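
    Boyle's Law compression of a gas-filled swimbladder can be written as a simple function of depth, since ambient pressure grows by roughly one atmosphere per 10 m of seawater. The snippet below shows that relationship; the surface volume is an arbitrary illustrative value, not a measurement from the study.

```python
# Boyle's Law swimbladder compression with depth:
# V(z) = V0 * P0 / P(z), with P(z) ~ P0 * (1 + z / 10) for seawater (z in meters).

def swimbladder_volume(v_surface_ml: float, depth_m: float) -> float:
    """Volume at depth assuming isothermal Boyle's Law compression."""
    return v_surface_ml / (1.0 + depth_m / 10.0)

for z in (0, 10, 50, 100):
    print(f"{z:>3} m: {swimbladder_volume(30.0, z):5.1f} ml")   # 30 ml surface volume (assumed)
```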

  14. Three-dimensional passive millimeter-wave imaging and depth estimation

    NASA Astrophysics Data System (ADS)

    Yeom, Seokwon; Lee, Dong-Su; Lee, Hyoung; Son, Jung-Young; Guschin, Vladimir P.

    2010-04-01

    We address three-dimensional passive millimeter-wave (MMW) imaging and depth estimation for remote objects. MMW imaging is very useful in harsh environments such as fog, smoke, snow, sandstorms, and drizzle. Its ability to penetrate clothing provides a great advantage to security and defense systems. In this paper, a feature-based passive MMW stereo-matching process is proposed to estimate the distance of a concealed object under clothing. It is shown that the proposed method can estimate the distance of the concealed object.
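
    Stereo matching recovers depth through triangulation: for a rectified image pair, range is inversely proportional to the disparity of matched features, Z = f * B / d. The snippet below applies this standard relation; the focal length, baseline, and disparity values are illustrative and not taken from the MMW system above.

```python
# Depth from stereo disparity for a rectified camera pair: Z = f * B / d.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Range in meters from a disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 800 px focal length, 0.5 m baseline
for d in (40.0, 20.0, 10.0):
    print(f"disparity {d:4.1f} px -> range {depth_from_disparity(800.0, 0.5, d):5.1f} m")
```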

  15. Noninvasive determination of burn depth in children by digital infrared thermal imaging

    NASA Astrophysics Data System (ADS)

    Medina-Preciado, Jose David; Kolosovas-Machuca, Eleazar Samuel; Velez-Gomez, Ezequiel; Miranda-Altamirano, Ariel; González, Francisco Javier

    2013-06-01

    Digital infrared thermal imaging is used to assess noninvasively the severity of burn wounds in 13 pediatric patients. A delta-T (ΔT) parameter obtained by subtracting the temperature of a healthy contralateral region from the temperature of the burn wound is compared with the burn depth measured histopathologically. Thermal imaging results show that superficial dermal burns (IIa) show increased temperature compared with their contralateral healthy region, while deep dermal burns (IIb) show a lower temperature than their contralateral healthy region. This difference in temperature is statistically significant (p<0.0001) and provides a way of distinguishing deep dermal from superficial dermal burns. These results show that digital infrared thermal imaging could be used as a noninvasive procedure to assess burn wounds. An additional advantage of using thermal imaging, which can image a large skin surface area, is that it can be used to identify regions with different burn depths and estimate the size of the grafts needed for deep dermal burns.

  16. 3D Sorghum Reconstructions from Depth Images Identify QTL Regulating Shoot Architecture

    PubMed Central

    2016-01-01

    Dissecting the genetic basis of complex traits is aided by frequent and nondestructive measurements. Advances in range imaging technologies enable the rapid acquisition of three-dimensional (3D) data from an imaged scene. A depth camera was used to acquire images of sorghum (Sorghum bicolor), an important grain, forage, and bioenergy crop, at multiple developmental time points from a greenhouse-grown recombinant inbred line population. A semiautomated software pipeline was developed and used to generate segmented, 3D plant reconstructions from the images. Automated measurements made from 3D plant reconstructions identified quantitative trait loci for standard measures of shoot architecture, such as shoot height, leaf angle, and leaf length, and for novel composite traits, such as shoot compactness. The phenotypic variability associated with some of the quantitative trait loci displayed differences in temporal prevalence; for example, alleles closely linked with the sorghum Dwarf3 gene, an auxin transporter and pleiotropic regulator of both leaf inclination angle and shoot height, influence leaf angle prior to an effect on shoot height. Furthermore, variability in composite phenotypes that measure overall shoot architecture, such as shoot compactness, is regulated by loci underlying component phenotypes like leaf angle. As such, depth imaging is an economical and rapid method to acquire shoot architecture phenotypes in agriculturally important plants like sorghum to study the genetic basis of complex traits. PMID:27528244

  18. Depth-weighted Inverse and Imaging methods to study the Earth's Crust in Southern Italy

    NASA Astrophysics Data System (ADS)

    Fedi, M.

    2012-04-01

    Inversion means solving a set of geophysical equations for a spatial distribution of parameters (or functions) that could have produced an observed set of measurements. Imaging is instead a transformation of magnetometric data into a scaled 3D model resembling the true geometry of subsurface geologic features. While inversion theory allows many additional constraints, such as depth weighting, positivity, physical property bounds, smoothness, and focusing, imaging methods for magnetic data derived under different theories are all found to reduce to either simple upward continuation or a depth-weighted upward continuation, with weights expressed in the general form of a power law of the altitude, with half of the structural index as the exponent. Note, however, that specifying the appropriate level of depth weighting is not just a problem in these imaging techniques but should also be considered in standard inversion methods. We will also investigate the relationship between imaging methods and multiscale methods. A multiscale analysis is well suited to the study of potential fields because the way potential fields convey source information is strictly related to the scale of analysis. The stability of multiscale methods results from combining, in a single operator, the wavenumber low-pass behaviour of the upward continuation of the field with the high-pass enhancement properties of nth-order derivative transformations. Thus, the complex reciprocal interference of several field components may be handled efficiently at several scales of the analysis, and the depth to the sources may be estimated together with the homogeneity degrees of the field. We will describe the main aspects of both kinds of interpretation through the study of multi-source models and apply either inversion or imaging techniques to the magnetic data of complex crustal areas of Southern Italy, such as the Campanian volcanic district and the Southern Apennines. The studied area includes a Pleistocene
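
    The depth-weighted upward continuation described above can be sketched in one dimension: continue the field to a set of altitudes and scale each continued field by the altitude raised to half the structural index. The snippet below is only a minimal illustration of that operation on a synthetic profile; the anomaly, altitudes, and structural index are assumptions and this is not the authors' full imaging workflow.

```python
# Sketch of a depth-weighted upward continuation of a 1D magnetic profile:
# continue the field to altitude h (multiply its spectrum by exp(-|k| h)) and
# scale the result by h**(N/2), N being the structural index.
import numpy as np

def upward_continue(profile, dx, h):
    """Upward-continue a 1D potential-field profile to altitude h."""
    k = np.abs(2 * np.pi * np.fft.fftfreq(profile.size, d=dx))
    return np.real(np.fft.ifft(np.fft.fft(profile) * np.exp(-k * h)))

def depth_weighted_image(profile, dx, altitudes, structural_index):
    """Stack of upward continuations, each weighted by h**(N/2)."""
    return np.array([h ** (structural_index / 2.0) * upward_continue(profile, dx, h)
                     for h in altitudes])

# Synthetic anomaly (illustrative, not the Southern Italy data)
x = np.arange(-500.0, 500.0, 10.0)
profile = 1.0 / ((x - 50.0) ** 2 + 200.0 ** 2)         # smooth bell-shaped anomaly
image = depth_weighted_image(profile, dx=10.0,
                             altitudes=np.arange(10.0, 400.0, 10.0),
                             structural_index=3.0)
print("scaled-field image shape (altitudes x positions):", image.shape)
```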

  19. Pixel-based parametric source depth map for Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Altabella, L.; Boschi, F.; Spinelli, A. E.

    2016-01-01

    Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (i.e., 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multiview methods. Difficulties in the convergence of 3D algorithms can discourage use of this technique to obtain information on source depth and intensity. For these reasons, we developed a faster, corrected 2D approach based on multispectral acquisitions to obtain the source depth and its intensity using pixel-based fitting of the source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method for obtaining a parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for MC simulations and 5-6% for experimental data. Using this method we are able to obtain reliable information about the depth of the Cerenkov luminescence source with a simple and flexible procedure.
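
    Multispectral depth estimation of a buried optical source generally exploits the wavelength dependence of tissue attenuation: the detected surface intensity in each band falls off roughly as exp(-mu_eff(lambda) * d), so the ratio of two spectral bands encodes the depth d. The sketch below inverts that simple two-wavelength model per pixel; the attenuation coefficients and intensities are illustrative assumptions, and this is not the authors' specific pixel-based fitting procedure.

```python
# Per-pixel depth from a two-band intensity ratio, assuming I(lambda) ~ I0 * exp(-mu_eff * d):
#   I1 / I2 = exp(-(mu1 - mu2) * d)  ->  d = ln(I2 / I1) / (mu1 - mu2)
import numpy as np

def depth_from_ratio(i_band1, i_band2, mu1_mm, mu2_mm):
    """Depth map in mm from two spectral-band images (mu in 1/mm, mu1 > mu2)."""
    return np.log(i_band2 / i_band1) / (mu1_mm - mu2_mm)

# Illustrative effective attenuation coefficients (1/mm) for a shorter and a longer band
mu_short, mu_long = 0.9, 0.5
true_depth = np.full((4, 4), 3.0)                      # toy example: source 3 mm deep
i_short = np.exp(-mu_short * true_depth)
i_long = np.exp(-mu_long * true_depth)
print(depth_from_ratio(i_short, i_long, mu_short, mu_long))   # recovers ~3 mm everywhere
```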

  20. Photoacoustic FT-IR depth imaging of polymeric surfaces: overcoming IR diffraction limits.

    PubMed

    Zhang, Ping; Urban, Marek W

    2004-11-23

    It is well established that the photoacoustic effect, based on the conversion of absorbed electromagnetic radiation into thermal waves, allows surface depth profiling. However, limited knowledge exists concerning its spatial resolution. The spiral-stepwise (SSW) approach combined with phase rotational analysis is utilized to determine surface depth profiles of homogeneous and nonhomogeneous multilayered polymeric surfaces in a step-scan photoacoustic FT-IR experiment. In this approach, the thermal wave propagating to the surface is represented as the integral of all heat wave vectors propagating across the sampling depth x_n, and the spiral function K'β(λ) exp(-β(λ)x_n) exp(-x_n/μ_th) exp(i(ωt - x_n/μ_th)) represents the amplitude and phase of the heat wave vector propagating to the surface. The SSW approach can be applied to heterogeneous surfaces by representing the thermal waves propagating to the surface as the sum of the thermal waves propagating through homogeneous layers, each an integral of all heat vectors from a given sampling depth. The proposed model is tested on multilayered polymeric surfaces and shows that the SSW approach allows semiquantitative surface imaging with a spatial resolution ranging from the micrometer level down to 500 nm, the spatial resolution being a function of the penetration depth.
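
    In photoacoustic depth profiling the probed depth is governed by the thermal diffusion length, mu_th = sqrt(alpha / (pi * f)), which shrinks as the modulation frequency f increases. The snippet below evaluates this standard relation; the thermal diffusivity used is a typical polymer value assumed for illustration, not a value from the paper.

```python
# Thermal diffusion length mu_th = sqrt(alpha / (pi * f)) versus modulation frequency.
import math

ALPHA_POLYMER = 1.0e-7      # thermal diffusivity in m^2/s (typical polymer, assumed)

def thermal_diffusion_length_um(freq_hz: float, alpha: float = ALPHA_POLYMER) -> float:
    """Thermal diffusion length in micrometers for modulation frequency freq_hz."""
    return math.sqrt(alpha / (math.pi * freq_hz)) * 1e6

for f in (10.0, 100.0, 1000.0):
    print(f"{f:6.0f} Hz -> mu_th = {thermal_diffusion_length_um(f):5.1f} um")
```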

  1. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    PubMed

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics, such as area and volume, can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques, but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high-precision commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which once corrected can yield a usable thigh volume measurement. PMID:26928458
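
    The repeatability figures quoted here use the technical error of measurement, TEM = sqrt(sum(d_i^2) / (2n)) for paired repeat measurements, usually reported as a percentage of the mean. The snippet below computes that standard statistic; the repeat-measurement values are made up for illustration and are not the study's data.

```python
# Technical error of measurement (TEM) for paired repeat measurements:
# TEM = sqrt(sum(d_i^2) / (2 * n)),  %TEM = 100 * TEM / grand mean
import numpy as np

def tem_percent(trial1, trial2):
    """Absolute and relative TEM for two repeated measurement sessions."""
    trial1, trial2 = np.asarray(trial1, float), np.asarray(trial2, float)
    d = trial1 - trial2
    tem = np.sqrt(np.sum(d ** 2) / (2 * d.size))
    return tem, 100.0 * tem / np.mean(np.concatenate([trial1, trial2]))

# Illustrative thigh volumes in litres from two repeat scans (assumed values)
v1 = [5.10, 6.22, 4.85, 5.73, 6.01]
v2 = [5.16, 6.18, 4.90, 5.70, 6.08]
tem, rel = tem_percent(v1, v2)
print(f"TEM = {tem:.3f} L ({rel:.2f}% of the mean)")
```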

  2. Upconversion fluorescent nanoparticles as a potential tool for in-depth imaging

    NASA Astrophysics Data System (ADS)

    Nagarajan, Sounderya; Zhang, Yong

    2011-09-01

    Upconversion nanoparticles (UCNs) are nanoparticles that are excited in the near infrared (NIR) region with emission in the visible or NIR regions. This makes these particles attractive for use in biological imaging as the NIR light can penetrate the tissue better with minimal absorption/scattering. This paper discusses the study of the depth to which cells can be imaged using these nanoparticles. UCNs with NaYF4 nanocrystals doped with Yb3+, Er3+ (visible emission) or Yb3+, Tm3+ (NIR emission) were synthesized and modified with silica, enabling their dispersion in water and conjugation of biomolecules to their surface. The size of the sample was characterized using transmission electron microscopy and the fluorescence was measured using a fluorescence spectrometer at an excitation of 980 nm. Tissue phantoms were prepared by reported methods to mimic skin/muscle tissue and it was observed that the cells could be imaged up to a depth of 3 mm using the NIR-emitting UCNs. Further, the depth of detection was evaluated for UCNs targeted to gap junctions formed between cardiac cells.

  4. Long-range and depth-selective imaging of macroscopic targets using low-coherence and wide-field interferometry (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Choi, Wonshik

    2016-03-01

    With the advancement of 3D display technology, 3D imaging of macroscopic objects has drawn much attention as it provides the contents to display. The most widely used imaging methods include a depth camera, which measures time of flight for depth discrimination, and various structured illumination techniques. However, these existing methods have poor depth resolution, which makes imaging complicated structures a difficult task. In order to resolve this issue, we propose an imaging system based upon low-coherence interferometry and off-axis digital holographic imaging. By using a light source with a coherence length of 200 microns, we achieved a depth resolution of 100 microns. In order to map macroscopic objects with this high axial resolution, we installed a pair of prisms in the reference beam path for long-range scanning of the optical path length. Specifically, one prism was fixed in position, and the other prism was mounted on a translation stage and translated parallel to the first prism. Due to the multiple internal reflections between the two prisms, the overall path length was elongated by a factor of 50. In this way, we could cover a depth range of more than 1 meter. In addition, we employed multiple speckle illuminations and incoherent averaging of the acquired holographic images to reduce specular reflections from the target surface. Using this newly developed system, we imaged targets with multiple layers and demonstrated imaging of targets hidden behind scattering layers. The method was also applied to imaging targets located around a corner.
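
    Two simple relations underlie the quoted numbers: the axial (depth) resolution of a low-coherence interferometer is roughly half the source coherence length, and the prism pair multiplies the mechanical stage travel by the number of internal reflections. A small arithmetic sketch; the 25 mm stage travel is an assumed value, not taken from the abstract:

```python
# Axial resolution of a low-coherence interferometer ~ half the coherence length;
# the prism pair multiplies the stage travel by the internal-reflection count.
coherence_length_um = 200.0
depth_resolution_um = coherence_length_um / 2.0        # ~100 um, as quoted

path_multiplication = 50                               # factor from the prism pair
stage_travel_mm = 25.0                                 # hypothetical stage travel
scan_range_m = path_multiplication * stage_travel_mm * 1e-3
print(f"depth resolution ~ {depth_resolution_um:.0f} um, "
      f"optical path scan range ~ {scan_range_m:.2f} m")
```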

  5. X-ray imaging using avalanche multiplication in amorphous selenium: Investigation of depth dependent avalanche noise

    SciTech Connect

    Hunt, D. C.; Tanioka, Kenkichi; Rowlands, J. A.

    2007-03-15

    The past decade has seen the swift development of the flat-panel detector (FPD), also known as the active matrix flat-panel imager, for digital radiography. This new technology is applicable to other modalities, such as fluoroscopy, which require the acquisition of multiple images, but could benefit from some improvements. In such applications, where more than one image is acquired, less radiation is available to form each image and amplifier noise becomes a serious problem. Avalanche multiplication in amorphous selenium (a-Se) can provide the necessary amplification prior to readout so as to reduce the effect of the electronic noise of the FPD. However, in direct conversion detectors avalanche multiplication can lead to a new source of gain fluctuation noise called depth dependent avalanche noise. A theoretical model was developed to understand depth dependent avalanche noise. Experiments were performed on a direct imaging system implementing avalanche multiplication in a layer of a-Se to validate the theory. For parameters appropriate for a diagnostic imaging FPD for fluoroscopy, the detective quantum efficiency (DQE) was found to drop by as much as 50% with increasing electric field, as predicted by the theoretical model. This drop in DQE can be eliminated by separating the collection and avalanche regions, for example, by having a region of low electric field where x rays are absorbed and converted into charge, which then drifts into a region of high electric field where the x-ray generated charge undergoes avalanche multiplication. This means that quantum-noise-limited direct conversion FPDs for low-exposure imaging techniques are a possibility.
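
    The DQE penalty from depth-dependent gain can be illustrated with the standard gain-fluctuation factor DQE_gain = <g>^2 / <g^2>. The sketch below is a toy model only, assuming an exponential interaction-depth distribution and a gain that grows exponentially with the remaining drift distance; the paper's cascaded-systems treatment is more detailed.

```python
import numpy as np

# Toy model of depth-dependent avalanche noise (illustrative assumptions only).
L = 1.0                          # a-Se layer thickness (arbitrary units)
mu = 2.0                         # x-ray linear attenuation coefficient, assumed
beta = np.log(50.0) / L          # impact-ionization coefficient -> max gain ~50, assumed

rng = np.random.default_rng(0)
z = rng.exponential(1.0 / mu, size=200_000)   # interaction depths
z = z[z < L]                                  # keep interactions inside the layer

# A carrier created at depth z avalanches over the remaining thickness (L - z).
gain = np.exp(beta * (L - z))
dqe_gain_factor = gain.mean() ** 2 / (gain ** 2).mean()
print(f"mean gain = {gain.mean():.1f}, DQE gain-fluctuation factor = {dqe_gain_factor:.2f}")
```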

  6. A range/depth modulation transfer function (RMTF) framework for characterizing 3D imaging LADAR performance

    NASA Astrophysics Data System (ADS)

    Staple, Bevan; Earhart, R. P.; Slaymaker, Philip A.; Drouillard, Thomas F., II; Mahony, Thomas

    2005-05-01

    3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in 3 dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined, is improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system's ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework utilizes only those terminologies proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new, Range-Based, Slanted Knife-Edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enable improved system standardization.

  7. Airborne imaging spectrometer data of the Ruby Mountains, Montana: Mineral discrimination using relative absorption band-depth images

    USGS Publications Warehouse

    Crowley, J.K.; Brickey, D.W.; Rowan, L.C.

    1989-01-01

    Airborne imaging spectrometer data collected in the near-infrared (1.2-2.4 µm) wavelength range were used to study the spectral expression of metamorphic minerals and rocks in the Ruby Mountains of southwestern Montana. The data were analyzed by using a new data enhancement procedure: the construction of relative absorption band-depth (RBD) images. RBD images, like band-ratio images, are designed to detect diagnostic mineral absorption features, while minimizing reflectance variations related to topographic slope and albedo differences. To produce an RBD image, several data channels near an absorption band shoulder are summed and then divided by the sum of several channels located near the band minimum. RBD images are both highly specific and sensitive to the presence of particular mineral absorption features. Further, the technique does not distort or subdue spectral features as sometimes occurs when using other data normalization methods. By using RBD images, a number of rock and soil units were distinguished in the Ruby Mountains, including weathered quartz-feldspar pegmatites, marbles of several compositions, and soils developed over poorly exposed mica schists. The RBD technique is especially well suited for detecting weak near-infrared spectral features produced by soils, which may permit improved mapping of subtle lithologic and structural details in semiarid terrains. The observation of soils rich in talc, an important industrial commodity in the study area, also indicates that RBD images may be useful for mineral exploration. © 1989.
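
    The RBD construction described above (shoulder channels summed, divided by the sum of channels at the band minimum) is straightforward to express in code. A minimal sketch; the band indices are hypothetical choices for an absorption feature of interest:

```python
import numpy as np

def relative_band_depth(cube, shoulder_bands, minimum_bands):
    """Relative absorption band-depth (RBD) image: per pixel, the sum of channels
    near the absorption-band shoulders divided by the sum of channels near the
    band minimum. `cube` is a (rows, cols, bands) reflectance array."""
    shoulder = cube[:, :, shoulder_bands].sum(axis=2)
    minimum = cube[:, :, minimum_bands].sum(axis=2)
    return shoulder / np.clip(minimum, 1e-6, None)   # guard against divide-by-zero

# Synthetic usage with hypothetical band indices bracketing a feature of interest.
rng = np.random.default_rng(0)
cube = 0.3 + 0.05 * rng.random((100, 100, 64))
rbd = relative_band_depth(cube, shoulder_bands=[40, 41, 50, 51], minimum_bands=[45, 46])
print(rbd.shape, float(rbd.mean()))
```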

  8. Mobile phone imaging module with extended depth of focus based on axial irradiance equalization phase coding

    NASA Astrophysics Data System (ADS)

    Sung, Hsin-Yueh; Chen, Po-Chang; Chang, Chuan-Chung; Chang, Chir-Weei; Yang, Sidney S.; Chang, Horng

    2011-01-01

    This paper presents a mobile phone imaging module with extended depth of focus (EDoF) obtained by using axial irradiance equalization (AIE) phase coding. From radiation energy transfer along the optical axis with constant irradiance, the focal depth enhancement solution is acquired. We introduce axial irradiance equalization phase coding to design a two-element 2-megapixel mobile phone lens that trades off focus-like aberrations such as field curvature, astigmatism and longitudinal chromatic defocus. The design results produce modulation transfer functions (MTF) and phase transfer functions (PTF) with substantially similar characteristics at different field and defocus positions within the Nyquist pass band. Measurement results are also shown and compared with the design results. Next, for the EDoF mobile phone camera imaging system, we present a digital decoding design method and calculate a minimum mean square error (MMSE) filter, which is then applied to correct the substantially similar blurred images. Last, the blurred and de-blurred images are demonstrated.
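
    A common form of the MMSE restoration step is a Wiener-type frequency-domain filter, conj(H) / (|H|^2 + NSR), applied to the coded blur. The sketch below assumes that form and a known PSF; it is an illustration of the idea, not the authors' exact decoding design.

```python
import numpy as np

def mmse_deblur(blurred, psf, nsr=0.01):
    """Minimal frequency-domain MMSE (Wiener-type) restoration sketch.
    `blurred` and `psf` are 2-D arrays of the same shape (PSF centred in the array);
    `nsr` is an assumed noise-to-signal power ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # MMSE restoration filter
    return np.real(np.fft.ifft2(W * G))

# Synthetic usage: blur an image with a Gaussian-like PSF, then restore it.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = mmse_deblur(blurred, psf, nsr=1e-3)
```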

  9. Practical imaging of complex geological structures using seismic prestack depth migration

    NASA Astrophysics Data System (ADS)

    Zhu, Jinming

    This thesis develops innovative procedures to address problems in imaging multi-channel reflection seismic data in regions of complex geology. Conventional common midpoint (CMP) based processing fails to produce adequate Earth images for complex geological structures with both vertical and lateral heterogeneities. Two powerful prestack depth migration techniques are developed through the integral and finite-difference solutions of the wave equation. I first develop a new, robust, and accurate traveltime calculation method which is essentially a wavefront tracing procedure. This is implemented as a combination of a finite-difference solution of the eikonal equation, an excitation of Huygens' secondary sources, and an application of Fermat's principle. This method is very general and can be directly applied to compute first arrival traveltimes of incident plane waves. These traveltimes are extensively used by the Kirchhoff integral method to determine the integral surface, and also by the reverse-time migration to determine imaging conditions. The prestack Kirchhoff integral migration of shot profiles which is developed using the WKBJ approximation to the Green's function is simply a summation of amplitudes of differential traces along an integral surface with amplitudes being modulated by certain geometrical functions. I demonstrate that this summation scheme along a general integral surface is the mathematically more rigorous extension of the summation scheme along diffraction surfaces and of the superposition scheme of aplanatic surfaces. In contrast to the Kirchhoff method, reverse-time migration is based on a direct solution of the wave equation by approximating the differential terms of the wave equation with finite differences. It is theoretically more accurate than the Kirchhoff method since it attempts to solve the wave equation without a high frequency approximation. In addition to such attractions as implicit static corrections and coherent noise
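
    The Kirchhoff summation idea, trace amplitudes summed along traveltime surfaces into the image, can be sketched compactly. The example below is a heavily simplified zero-offset, constant-velocity version with straight-ray traveltimes standing in for the eikonal/wavefront-traced times developed in the thesis, and it omits the geometrical amplitude weights:

```python
import numpy as np

def kirchhoff_zero_offset(traces, dt, xs, zs, xr, v):
    """Diffraction-stack (Kirchhoff-style) migration sketch for zero-offset data in
    a constant-velocity medium. traces: (nrec, nt) array, xr: receiver x positions,
    xs/zs: image grid coordinates, v: velocity (consistent units assumed)."""
    nt = traces.shape[1]
    image = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            t = 2.0 * np.hypot(xr - x, z) / v          # two-way straight-ray time
            it = np.rint(t / dt).astype(int)
            ok = it < nt
            image[iz, ix] = traces[np.where(ok)[0], it[ok]].sum()
    return image

# Tiny synthetic usage: one point diffractor recorded on a short receiver line.
dt, v = 0.004, 2000.0
xr = np.linspace(0.0, 1000.0, 51)
t0 = 2.0 * np.hypot(xr - 500.0, 300.0) / v
traces = np.zeros((len(xr), 500))
traces[np.arange(len(xr)), np.rint(t0 / dt).astype(int)] = 1.0
img = kirchhoff_zero_offset(traces, dt, np.linspace(300, 700, 41),
                            np.linspace(100, 500, 41), xr, v)
```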

  10. Ultra-high resolution and long scan depth optical coherence tomography with full-phase detection for imaging the ocular surface

    PubMed Central

    Tao, Aizhu; Peterson, Kristen A; Jiang, Hong; Shao, Yilei; Zhong, Jianguang; Carey, Frank C; Rosen, Elias P; Wang, Jianhua

    2013-01-01

    We used a unique combination of four state-of-the-art technologies to achieve a high performance spectral domain optical coherence tomography system suitable for imaging the entire ocular surface. An ultra-high resolution, extended depth range, full-phase interferometry, and high-speed complementary metal-oxide semiconductor transistor camera detection provided unprecedented performance for the precise quantification of a wide range of the ocular surface. We demonstrated the feasibility of this approach by obtaining high-speed and high-resolution images of a model eye beyond the corneal–scleral junction. Surfaces determined from the images with a segmentation algorithm demonstrated excellent accuracy and precision. PMID:23976840

  11. Depth-resolved rhodopsin molecular contrast imaging for functional assessment of photoreceptors

    PubMed Central

    Liu, Tan; Wen, Rong; Lam, Byron L.; Puliafito, Carmen A.; Jiao, Shuliang

    2015-01-01

    Rhodopsin, the light-sensing molecule in the outer segments of rod photoreceptors, is responsible for converting light into neuronal signals in a process known as phototransduction. Rhodopsin is thus a functional biomarker for rod photoreceptors. Here we report a novel technology based on visible-light optical coherence tomography (VIS-OCT) for in vivo molecular imaging of rhodopsin. The depth resolution of OCT allows the visualization of the location where the change of optical absorption occurs and provides a potentially accurate assessment of rhodopsin content by segmentation of the image at the location. Rhodopsin OCT can be used to quantitatively image rhodopsin distribution and thus assess the distribution of functional rod photoreceptors in the retina. Rhodopsin OCT can bring significant impact into ophthalmic clinics by providing a tool for the diagnosis and severity assessment of a variety of retinal conditions. PMID:26358529

  12. Imaging with depth extension: where are the limits in fixed- focus cameras?

    NASA Astrophysics Data System (ADS)

    Bakin, Dmitry; Keelan, Brian

    2008-08-01

    The integration of novel optics designs, miniature CMOS sensors, and powerful digital processing into a single imaging module package is driving progress in handset camera systems in terms of performance, size (thinness) and cost. The miniature cameras incorporating high resolution sensors and fixed-focus Extended Depth of Field (EDOF) optics allow close-range reading of printed material (barcode patterns, business cards), while providing high quality imaging in more traditional applications. These cameras incorporate modified optics and digital processing to recover the soft-focus images and restore sharpness over a wide range of object distances. The effects of a variety of parameters of the imaging module on the EDOF range were analyzed for a family of high resolution CMOS modules. The parameters include various optical properties of the imaging lens, and the characteristics of the sensor. The extension factors for the EDOF imaging module were defined in terms of an improved absolute resolution in object space while maintaining focus at infinity. This definition was applied for the purpose of identifying the minimally resolvable object details in mobile cameras with a bar-code reading feature.

  13. Achieving high-value cardiac imaging: challenges and opportunities.

    PubMed

    Wiener, David H

    2014-01-01

    Cardiac imaging is under intense scrutiny as a contributor to health care costs, with multiple initiatives under way to reduce and eliminate inappropriate testing. Appropriate use criteria are valuable guides to selecting imaging studies but until recently have focused on the test rather than the patient. Patient-centered means are needed to define the true value of imaging for patients in specific clinical situations. This article provides a definition of high-value cardiac imaging. A paradigm to judge the efficacy of echocardiography in the absence of randomized controlled trials is presented. Candidate clinical scenarios are proposed in which echocardiography constitutes high-value imaging, as well as stratagems to increase the likelihood that high-value cardiac imaging takes place in those circumstances.

  14. Achieving molecular selectivity in imaging using multiphoton Raman spectroscopy techniques

    SciTech Connect

    Holtom, Gary R.; Thrall, Brian D.; Chin, Beek Yoke; Wiley, H Steven; Colson, Steven D.

    2000-12-01

    In the case of most imaging methods, contrast is generated either by physical properties of the sample (differential interference contrast, phase contrast), or by fluorescent labels that are localized to a particular protein or organelle. Standard Raman and infrared methods for obtaining images are based upon the intrinsic vibrational properties of molecules, and thus obviate the need for attached fluorophores. Unfortunately, they have significant limitations for live-cell imaging. However, an active Raman method, called Coherent Anti-Stokes Raman Scattering (CARS), is well suited for microscopy, and provides a new means for imaging specific molecules. Vibrational imaging techniques, such as CARS, avoid problems associated with photobleaching and photo-induced toxicity often associated with the use of fluorescent labels with live cells. Because the laser configuration needed to implement CARS technology is similar to that used in other multiphoton microscopy methods, such as two-photon fluorescence and harmonic generation, it is possible to combine imaging modalities, thus generating simultaneous CARS and fluorescence images. A particularly powerful aspect of CARS microscopy is its ability to selectively image deuterated compounds, thus allowing the visualization of molecules, such as lipids, that are chemically indistinguishable from the native species.

  15. Depth profiling and imaging capabilities of an ultrashort pulse laser ablation time of flight mass spectrometer

    PubMed Central

    Cui, Yang; Moore, Jerry F.; Milasinovic, Slobodan; Liu, Yaoming; Gordon, Robert J.; Hanley, Luke

    2012-01-01

    An ultrafast laser ablation time-of-flight mass spectrometer (AToF-MS) and associated data acquisition software that permits imaging at micron-scale resolution and sub-micron-scale depth profiling are described. The ion funnel-based source of this instrument can be operated at pressures ranging from 10−8 to ∼0.3 mbar. Mass spectra may be collected and stored at a rate of 1 kHz by the data acquisition system, allowing the instrument to be coupled with standard commercial Ti:sapphire lasers. The capabilities of the AToF-MS instrument are demonstrated on metal foils and semiconductor wafers using a Ti:sapphire laser emitting 800 nm, ∼75 fs pulses at 1 kHz. Results show that elemental quantification and depth profiling are feasible with this instrument. PMID:23020378

  16. Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging.

    PubMed

    Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho

    2004-12-01

    A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results. PMID:15605488

  17. Factors affecting ultimate imaging depth of two-photon fluorescence microscopy in scattering medium

    NASA Astrophysics Data System (ADS)

    Sergeeva, Ekaterina A.; Katichev, Aleksey R.

    2009-10-01

    Different aspects of the effect of multiple small-angle scattering on two-photon fluorescence microscopy (2PFM) imaging ability are discussed in this paper. We focus on theoretical evaluation of the maximum accessible imaging depth. There are three main factors which potentially restrict imaging depth: i) decay of the tightly focused excitation beam caused by scattering and accompanied by loss of diffraction-limited resolution; ii) out-of-focus fluorescence originating from excessive illumination of the sample surface, which is required to compensate for the lack of peak intensity inside the scattering medium; iii) decrease of the signal-to-noise ratio of the fluorescence signal due to the Beer-Bouguer-Lambert decay of excitation intensity. Based on the small-angle diffusive approximation of radiation transfer theory, we compared the influence of these factors and found that the first two define the fundamental limitation of 2PFM capabilities in a scattering medium, while the last one represents a principal instrumental limitation which prevails in state-of-the-art commercial laser scanning microscopy systems.
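
    The first and third factors can be illustrated with a simple exponential model of the ballistic excitation, in which the in-focus two-photon signal falls as exp(-2z/ls). This is only a crude stand-in for the small-angle radiative-transfer analysis used in the paper; the scattering mean free path ls is an assumed value.

```python
import numpy as np

# Illustrative exponential model of two-photon signal vs focal depth (assumptions only).
ls_um = 200.0                                 # scattering mean free path, assumed
depths_um = np.array([200, 400, 600, 800, 1000.0])
signal = np.exp(-2.0 * depths_um / ls_um)             # relative in-focus signal
surface_power_factor = np.exp(depths_um / ls_um)      # power increase to compensate
for z, s, p in zip(depths_um, signal, surface_power_factor):
    print(f"z = {z:4.0f} um: relative signal {s:.1e}, required surface power x{p:.0f}")
```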

  18. Factors affecting ultimate imaging depth of two-photon fluorescence microscopy in scattering medium

    NASA Astrophysics Data System (ADS)

    Sergeeva, Ekaterina A.; Katichev, Aleksey R.

    2010-02-01

    Different aspects of the effect of multiple small-angle scattering on two-photon fluorescence microscopy (2PFM) imaging ability are discussed in this paper. We focus on theoretical evaluation of the maximum accessible imaging depth. There are three main factors which potentially restrict imaging depth: i) decay of the tightly focused excitation beam caused by scattering and accompanied by loss of diffraction-limited resolution; ii) out-of-focus fluorescence originating from excessive illumination of the sample surface, which is required to compensate for the lack of peak intensity inside the scattering medium; iii) decrease of the signal-to-noise ratio of the fluorescence signal due to the Beer-Bouguer-Lambert decay of excitation intensity. Based on the small-angle diffusive approximation of radiation transfer theory, we compared the influence of these factors and found that the first two define the fundamental limitation of 2PFM capabilities in a scattering medium, while the last one represents a principal instrumental limitation which prevails in state-of-the-art commercial laser scanning microscopy systems.

  19. Depth estimation of face images using the nonlinear least-squares model.

    PubMed

    Sun, Zhan-Li; Lam, Kin-Man; Gao, Qing-Wei

    2013-01-01

    In this paper, we propose an efficient algorithm to reconstruct the 3D structure of a human face from one or more of its 2D images with different poses. In our algorithm, the nonlinear least-squares model is first employed to estimate the depth values of facial feature points and the pose of the 2D face image concerned by means of the similarity transform. Furthermore, different optimization schemes are presented with regard to the accuracy levels and the training time required. Our algorithm also embeds the symmetrical property of the human face into the optimization procedure, in order to alleviate the sensitivities arising from changes in pose. In addition, the regularization term, based on linear correlation, is added in the objective function to improve the estimation accuracy of the 3D structure. Further, a model-integration method is proposed to improve the depth-estimation accuracy when multiple nonfrontal-view face images are available. Experimental results on the 2D and 3D databases demonstrate the feasibility and efficiency of the proposed methods. PMID:22711771

  20. Noninvasive measurement of burn wound depth applying infrared thermal imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jaspers, Mariëlle E.; Maltha, Ilse M.; Klaessens, John H.; Vet, Henrica C.; Verdaasdonk, Rudolf M.; Zuijlen, Paul P.

    2016-02-01

    In burn wounds, early discrimination between the different depths plays an important role in the treatment strategy. The remaining vasculature in the wound determines its healing potential. Non-invasive measurement tools that can identify the vascularization are therefore considered to be of high diagnostic importance. Thermography is a non-invasive technique that can accurately measure the temperature distribution over a large skin or tissue area; the temperature is a measure of the perfusion of that area. The aim of this study was to investigate the clinimetric properties (i.e. reliability and validity) of thermography for measuring burn wound depth. In a cross-sectional study with 50 burn wounds of 35 patients, the inter-observer reliability and the validity between thermography and Laser Doppler Imaging were studied. With ROC curve analyses, the ΔT cut-off points for different burn wound depths were determined. The inter-observer reliability, expressed by an intra-class correlation coefficient of 0.99, was found to be excellent. In terms of validity, a ΔT cut-off point of 0.96°C (sensitivity 71%; specificity 79%) differentiates between a superficial partial-thickness and a deep partial-thickness burn. A ΔT cut-off point of -0.80°C (sensitivity 70%; specificity 74%) could differentiate between a deep partial-thickness and a full-thickness burn wound. This study demonstrates that thermography is a reliable method in the assessment of burn wound depths. In addition, thermography was reasonably able to discriminate among different burn wound depths, indicating its potential use as a diagnostic tool in clinical burn practice.
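
    A ΔT cut-off of the kind reported above is typically read off an ROC curve, for example at the maximum of Youden's J statistic. A minimal sketch on hypothetical data (not the study's measurements), assuming that deeper burns show lower ΔT:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: delta_t per wound, and a reference label from Laser Doppler
# Imaging (1 = deep partial-thickness, 0 = superficial partial-thickness).
rng = np.random.default_rng(42)
delta_t = np.concatenate([rng.normal(1.5, 0.8, 25),   # superficial partial-thickness
                          rng.normal(0.3, 0.8, 25)])  # deep partial-thickness
y = np.concatenate([np.zeros(25), np.ones(25)])

# Deeper burns are cooler, so a lower delta T indicates the positive class.
fpr, tpr, thresholds = roc_curve(y, -delta_t)
best = np.argmax(tpr - fpr)        # Youden's J = sensitivity + specificity - 1
print(f"delta T cut-off ~ {-thresholds[best]:.2f} degC "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```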

  1. Three-dimensional imaging characteristics and depth resolution in digital holographic three-dimensional imaging spectrometry

    NASA Astrophysics Data System (ADS)

    Obara, Masaki; Yoshimori, Kyu

    2015-07-01

    A four-dimensional impulse response function for digital holographic three-dimensional imaging spectrometry has been fully derived in closed form. Because the mathematical expression of the four-dimensional impulse response function factorizes, its three-dimensional spatial part corresponds directly to the three-dimensional point spread function of in-line digital holography with a rectangular aperture. Based on these mathematical results, this paper focuses on the investigation of spectral resolution and three-dimensional spatial resolution in digital holographic three-dimensional imaging spectrometry and digital holography. We found that the theoretical predictions agree well with the experimental results. This work suggests a new criterion and estimation method for the three-dimensional spatial resolution of in-line digital holography.

  2. Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters

    NASA Technical Reports Server (NTRS)

    Bos, Brent; Memarsadeghi, Nargess; Kizhner, Semion; Antonille, Scott

    2013-01-01

    A large depth-of-field particle image velocimeter (PIV) is designed to characterize dynamic dust environments on planetary surfaces. This instrument detects lofted dust particles, and senses the number of particles per unit volume, measuring their sizes, velocities (both speed and direction), and shape factors when the particles are large. To measure these particle characteristics in-flight, the instrument gathers two-dimensional image data at a high frame rate, typically >4,000 Hz, generating large amounts of data for every second of operation, approximately 6 GB/s. To characterize a planetary dust environment that is dynamic, the instrument would have to operate for at least several minutes during an observation period, easily producing more than a terabyte of data per observation. Given current technology, this amount of data would be very difficult to store onboard a spacecraft, and downlink to Earth. Since 2007, innovators have been developing an autonomous image analysis algorithm architecture for the PIV instrument to greatly reduce the amount of data that it has to store and downlink. The algorithm analyzes PIV images and automatically reduces the image information down to only the particle measurement data that is of interest, reducing the amount of data that is handled by more than a factor of 10³. The state of development for this innovation is now fairly mature, with a functional algorithm architecture, along with several key pieces of algorithm logic, that has been proven through field test data acquired with a proof-of-concept PIV instrument.

  3. Probing depth and dynamic response of speckles in near infrared region for spectroscopic blood flow imaging

    NASA Astrophysics Data System (ADS)

    Yokoi, Naomichi; Aizu, Yoshihisa

    2016-04-01

    Imaging based on bio-speckles is a useful means of visualizing blood flow in living bodies and has been utilized for analyzing their condition or health state. The sensitivity to blood flow is influenced by tissue optical properties, which depend on the wavelength of the illuminating laser light. In the present study, we experimentally investigate characteristics of the blood flow images obtained with two wavelengths of 780 nm and 830 nm in the near-infrared region. Experiments are conducted on sample models using a pork layer, a horse blood layer and a mirror, and on a human wrist and finger, to investigate the optical penetration depth and the dynamic response of speckles to blood flow velocity for the two wavelengths.

  4. Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding

    NASA Astrophysics Data System (ADS)

    Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito

    2015-02-01

    Optical transfer functions (OTFs) on various directional spatial frequency axes for a cubic phase mask (CPM) with circular and square apertures are investigated. Although the OTF has no zero points, it has values very close to zero for a circular aperture at low frequencies on the diagonal axis, which results in degradation of restored images. The reason for the close-to-zero values in the OTF is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid this close-to-zero condition, a square aperture with the CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and obtained excellent de-blurred images over a large depth of field.
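
    The diagonal-axis behaviour described above can be checked numerically by forming a cubic-phase pupil, propagating it to a PSF, and Fourier-transforming the PSF to obtain the OTF. The sketch below compares circular and square apertures; the cubic coefficient and sampling are assumed values, not those of the paper.

```python
import numpy as np

def diagonal_mtf(aperture_mask, alpha, n=256):
    """MTF sampled along the diagonal spatial-frequency axis for a cubic phase
    mask in the given aperture: pupil -> PSF -> OTF (all numerically)."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    pupil = aperture_mask(X, Y) * np.exp(1j * alpha * (X ** 3 + Y ** 3))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil, s=(4 * n, 4 * n)))) ** 2
    otf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))
    otf /= otf.max()
    c = otf.shape[0] // 2
    return np.array([otf[c + k, c + k] for k in range(0, 2 * n, 4)])

alpha = 30.0                                            # cubic coefficient, assumed
circ = lambda X, Y: (X ** 2 + Y ** 2 <= 1.0).astype(float)
sqr = lambda X, Y: ((np.abs(X) <= 1.0) & (np.abs(Y) <= 1.0)).astype(float)
mtf_circle, mtf_square = diagonal_mtf(circ, alpha), diagonal_mtf(sqr, alpha)
print("min low-frequency diagonal MTF  circle: %.2e  square: %.2e"
      % (mtf_circle[1:40].min(), mtf_square[1:40].min()))
```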

  5. Imaging the Juan de Fuca subduction plate using 3D Kirchhoff Prestack Depth Migration

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Bodin, T.; Allen, R. M.; Tauzin, B.

    2014-12-01

    We propose a new receiver function migration method to image the subducting plate in the western US that utilizes USArray and regional network data. While the well-developed CCP (common conversion point) poststack migration is commonly used for such imaging, our method applies a 3D prestack depth migration approach. The traditional CCP and post-stack depth mapping approaches implement the ray tracing and moveout correction for the incoming teleseismic plane wave based on a 1D earth reference model and the assumption of horizontal discontinuities. Although this works well in mapping the reflection position of relatively flat discontinuities (such as the Moho or the LAB), CCP is known to give poor results in the presence of lateral volumetric velocity variations and dipping layers. Instead of making the flat layer assumption and 1D moveout correction, seismic rays are traced in a 3D tomographic model with the Fast Marching Method. With travel time information stored, our Kirchhoff migration is done where the amplitude of the receiver function at a given time is distributed over all possible conversion points (i.e. along a semi-ellipse) on the output migrated depth section. The migrated reflectors will appear where the semicircles constructively interfere, whereas destructive interference will cancel out noise. Synthetic tests show that in the case of a horizontal discontinuity, the prestack Kirchhoff migration gives similar results to CCP, but without spurious multiples as this energy is stacked destructively and cancels out. For 45° and 60° dipping discontinuities, it also performs better in terms of imaging at the right boundary and dip angle. This is especially useful in the western US case, beneath which the Juan de Fuca plate subducted to ~450 km with a dipping angle that may exceed 50°. While the traditional CCP method will underestimate the dipping angle, our proposed imaging method will provide an accurate 3D subducting plate image without

  6. Depth and all-in-focus images obtained by multi-line-scan light-field approach

    NASA Astrophysics Data System (ADS)

    Štolc, Svorad; Huber-Mörk, Reinhold; Holländer, Branislav; Soukup, Daniel

    2014-03-01

    We present a light-field multi-line-scan image acquisition and processing system intended for the 2.5/3-D inspection of fine surface structures, such as small parts, security print, etc., in an industrial environment. The system consists of an area-scan camera that allows a small number of sensor lines to be extracted at high frame rates, and a mechanism for transporting the inspected object at a constant speed. During the acquisition, the object is moved orthogonally to the camera's optical axis as well as to the orientation of the sensor lines. In each time step, a predefined subset of lines is read out from the sensor and stored. Afterward, by collecting all corresponding lines acquired over time, a 3-D light field is generated, which consists of multiple views of the object observed from different viewing angles while transported w.r.t. the acquisition device. This structure allows for the construction of so-called epipolar plane images (EPIs) and subsequent EPI-based analysis in order to achieve two main goals: (i) the reliable estimation of a dense depth model and (ii) the construction of an all-in-focus intensity image. Besides the specifics of our hardware setup, we also provide a detailed description of algorithmic solutions for the mentioned tasks. Two alternative methods for EPI-based analysis are compared based on artificial and real-world data.

  7. Optical depth of the Martian atmosphere and surface albedo from high-resolution orbiter images

    NASA Astrophysics Data System (ADS)

    Petrova, E. V.; Hoekzema, N. M.; Markiewicz, W. J.; Thomas, N.; Stenzel, O. J.

    2012-01-01

    In this paper we describe and evaluate the so-called shadow method. This method can be used to estimate the optical depth of the Martian atmosphere from the differences in brightness between shadowed and sunlit regions observed from an orbiter. We present elaborate and simplified versions of the method and analyze the capabilities and the sources of errors. It proves essential to choose shadowed and sunlit comparison regions with similar surface properties. Accurate knowledge of the observing geometry, including the slopes of the observed region, is important as well, since the procedure should be corrected for the non-horizontal surface. Moreover, the elaborate version of the shadow method can be sensitive to (i) the optical model of aerosols and (ii) the assumed bi-directional reflectance function of the surface. To obtain reliable estimates, the analyzed images must have a high spatial resolution, which the HiRISE camera onboard the MRO provides. We tested the shadow method on two HiRISE images of Victoria crater (TRA_0873_1780 and PSP_001414_1780) that were taken while this crater was the exploration site of the Opportunity rover. While the rover measured optical depth τ approximately in the ranges from 0.43 to 0.53 and from 0.53 to 0.59 by imaging the sun, our shadow procedure yielded τ about 0.50 and 0.575, respectively (from the HiRISE's red images). Thus, the agreement is quite good. The obtained estimates of the surface albedo are about 0.20 and 0.17, respectively.

  8. Enhanced depth imaging optical coherence tomography and fundus autofluorescence findings in bilateral choroidal osteoma: a case report.

    PubMed

    Erol, Muhammet Kazim; Coban, Deniz Turgut; Ceran, Basak Bostanci; Bulut, Mehmet

    2013-01-01

    The authors present enhanced depth imaging optical coherence tomography (EDI OCT) and fundus autofluorescence (FAF) characteristics of a patient with bilateral choroidal osteoma and try to make a correlation between two imaging techniques. Two eyes of a patient with choroidal osteoma underwent complete ophthalmic examination. Enhanced depth imaging optical coherence tomography revealed a cage-like pattern, which corresponded to the calcified region of the tumor. Fundus autofluorescence imaging of the same area showed slight hyperautofluorescence. Three different reflectivity patterns in the decalcified area were defined. In the areas of subretinal fluid, outer segment elongations similar to central serous chorioretinopathy were observed. Hyperautofluorescent spots were evident in fundus autofluorescence in the same area. Calcified and decalcified portions of choroidal osteoma as well as the atrophy of choriocapillaris demonstrated different patterns with enhanced depth imaging and fundus autofluorescence imaging. Both techniques were found to be beneficial in the diagnosis and follow-up of choroidal osteoma.

  9. Quantification of lesion size, depth, and uptake using a dual-head molecular breast imaging system.

    PubMed

    Hruska, Carrie B; O'Connor, Michael K

    2008-04-01

    A method to perform quantitative lesion analysis in molecular breast imaging (MBI) was developed using the opposing views from a novel dual-head dedicated gamma camera. Monte Carlo simulations and phantom models were used to simulate MBI images with known lesion parameters. A relationship between the full widths at 25%, 35%, and 50% of the maximum of intensity profiles through lesions and the true lesion diameter as a function of compressed breast thickness was developed in order to measure lesion diameter. Using knowledge of compressed breast thickness and the attenuation of gamma rays in soft tissue, a method was developed to measure the depth of the lesion to the collimator face. Using the measured lesion diameter and measurements of counts in the lesion and background breast region, relative radiotracer uptake or tumor to background ratio (T/B ratio) was calculated. Validation of the methods showed that the size, depth, and T/B ratio can be accurately measured for a range of small breast lesions with T/B ratios between 10:1 and 40:1 in breasts with compressed thicknesses between 4 and 10 cm. Future applications of this work include providing information about lesion location in patients for performing a biopsy of the site and the development of a threshold for the T/B ratio that can distinguish benign from malignant disease. PMID:18491531
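
    The depth-from-attenuation idea with two opposing heads reduces, in its simplest form, to a conjugate-view count-ratio relation. The sketch below assumes that form and a nominal soft-tissue attenuation coefficient; the paper's calibration additionally accounts for lesion size and background counts.

```python
import numpy as np

def lesion_depth_from_opposing_views(counts_head1, counts_head2, thickness_cm,
                                     mu_per_cm=0.15):
    """Conjugate-view depth sketch: counts on each head fall off as exp(-mu * path),
    so C1 / C2 = exp(mu * (T - 2 * d1)) and d1 = (T - ln(C1 / C2) / mu) / 2, where
    d1 is the lesion depth from head 1 and T the compressed breast thickness.
    mu_per_cm ~ 0.15 /cm is a typical soft-tissue value for Tc-99m (assumed)."""
    ratio = np.log(counts_head1 / counts_head2)
    return 0.5 * (thickness_cm - ratio / mu_per_cm)

# Example: 6-cm compressed breast, lesion counts 1200 (head 1) vs 900 (head 2).
d1 = lesion_depth_from_opposing_views(1200.0, 900.0, thickness_cm=6.0)
print(f"estimated depth from head 1: {d1:.2f} cm")
```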

  10. Imaging photoplethysmography for clinical assessment of cutaneous microcirculation at two different depths

    NASA Astrophysics Data System (ADS)

    Marcinkevics, Zbignevs; Rubins, Uldis; Zaharans, Janis; Miscuks, Aleksejs; Urtane, Evelina; Ozolina-Moll, Liga

    2016-03-01

    A bispectral imaging photoplethysmography (iPPG) system for clinical assessment of cutaneous microcirculation at two different depths is proposed. The iPPG system has been developed and evaluated under in vivo conditions during various tests: (1) topical application of a vasodilatory liniment on the skin, (2) local skin heating, (3) arterial occlusion, and (4) regional anesthesia. The device has been validated against measurements from a laser Doppler imager (LDI) as a reference. The hardware comprises four bispectral light sources (530 and 810 nm) for uniform illumination of the skin, a video camera, and the control unit for triggering of the system. The PPG signals were calculated and the changes of the perfusion index (PI) were obtained during the tests. The results showed convincing correlations between PI obtained by iPPG and LDI in the (1) topical liniment (r=0.98) and (2) heating (r=0.98) tests. The topical liniment and local heating tests revealed good selectivity of the system for superficial microcirculation monitoring. It is confirmed that the iPPG system could be used for assessment of cutaneous perfusion at two different depths, corresponding to morphologically and functionally different vascular networks, and thus utilized in clinics as a cost-effective alternative to the LDI.
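
    A per-pixel perfusion index of the kind tracked during these tests is commonly formed as an AC/DC ratio of the PPG signal. A rough sketch, with an assumed cardiac frequency band; the actual iPPG processing chain may differ:

```python
import numpy as np

def perfusion_index_map(frames, fps, band=(0.5, 3.0)):
    """Rough per-pixel perfusion index (PI) sketch: PI = AC / DC, with AC the
    amplitude of the dominant spectral component in an assumed cardiac band and
    DC the temporal mean. `frames` is a (time, rows, cols) intensity stack from
    one wavelength channel."""
    frames = np.asarray(frames, float)
    dc = frames.mean(axis=0)
    spectrum = np.abs(np.fft.rfft(frames - dc, axis=0))
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    ac = 2.0 * spectrum[in_band].max(axis=0) / frames.shape[0]   # sine amplitude
    return ac / np.clip(dc, 1e-6, None)

# Synthetic usage: 10 s of 20-fps frames with a 1.2-Hz pulsatile component.
t = np.arange(200) / 20.0
frames = 100.0 + 2.0 * np.sin(2 * np.pi * 1.2 * t)[:, None, None] * np.ones((1, 32, 32))
pi_map = perfusion_index_map(frames, fps=20.0)
print(pi_map.shape, round(float(pi_map.mean()), 3))   # ~0.02 for this synthetic input
```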

  11. Quantification of lesion size, depth, and uptake using a dual-head molecular breast imaging system

    PubMed Central

    Hruska, Carrie B.; O’Connor, Michael K.

    2008-01-01

    A method to perform quantitative lesion analysis in molecular breast imaging (MBI) was developed using the opposing views from a novel dual-head dedicated gamma camera. Monte Carlo simulations and phantom models were used to simulate MBI images with known lesion parameters. A relationship between the full widths at 25%, 35%, and 50% of the maximum of intensity profiles through lesions and the true lesion diameter as a function of compressed breast thickness was developed in order to measure lesion diameter. Using knowledge of compressed breast thickness and the attenuation of gamma rays in soft tissue, a method was developed to measure the depth of the lesion to the collimator face. Using the measured lesion diameter and measurements of counts in the lesion and background breast region, relative radiotracer uptake or tumor to background ratio (T∕B ratio) was calculated. Validation of the methods showed that the size, depth, and T∕B ratio can be accurately measured for a range of small breast lesions with T∕B ratios between 10:1 and 40:1 in breasts with compressed thicknesses between 4 and 10 cm. Future applications of this work include providing information about lesion location in patients for performing a biopsy of the site and the development of a threshold for the T∕B ratio that can distinguish benign from malignant disease. PMID:18491531

  12. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation by combining block-based and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected components analysis, and then determining the boundaries' disparities using a sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method for depth-based selective blurring of stereo images, using Gaussian blur to de-focus regions outside the user's interest. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
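
    The SAD cost at the heart of the boundary-disparity step can be sketched as plain block matching. The dense version is shown here for clarity; the paper evaluates the cost only at segment boundaries and reconstructs the rest of the map from them.

```python
import numpy as np

def sad_disparity(left, right, max_disp=32, block=7):
    """Minimal SAD block-matching sketch for a rectified grayscale stereo pair."""
    half = block // 2
    rows, cols = left.shape
    disp = np.zeros(left.shape, dtype=np.int32)
    for r in range(half, rows - half):
        for c in range(half + max_disp, cols - half):
            patch = left[r - half:r + half + 1, c - half:c + half + 1]
            costs = [np.abs(patch - right[r - half:r + half + 1,
                                          c - d - half:c - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[r, c] = int(np.argmin(costs))      # disparity with lowest SAD cost
    return disp

# Tiny synthetic usage: the right image is the left image shifted by 4 pixels.
rng = np.random.default_rng(3)
left = rng.random((40, 80))
right = np.roll(left, -4, axis=1)
print(np.median(sad_disparity(left, right, max_disp=8, block=5)[10:30, 20:60]))  # ~4
```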

  13. Seismic Imaging of the San Jacinto Fault Zone Area From Seismogenic Depth to the Surface

    NASA Astrophysics Data System (ADS)

    Ben-Zion, Y.

    2015-12-01

    I review multi-scale multi-signal seismological results on structural properties within and around the San Jacinto Fault Zone (SJFZ). The results are based on data of the regional southern California and ANZA networks, additional near-fault seismometers and linear arrays with instrument spacing 25-50 m that cross the SJFZ at several locations, and a spatially-dense rectangular array with 1108 vertical-component sensors separated by 10-30 m centered on the fault. The studies utilize earthquake data to derive Vp and Vs velocity models with horizontal resolution of 1-2 km over the depth section 2-15 km, ambient noise with frequencies up to 1 Hz to image with similar horizontal resolution the depth section 0.5-7 km, and high-frequency seismic noise from the linear and rectangular arrays for high-resolution imaging of the top 0.5 km. Pronounced damage regions with low seismic velocities and anomalous Vp/Vs ratios are observed around the SJFZ, as well as the San Andreas and Elsinore faults. The damage zones follow generally a flower-shape with depth. The section of the SJFZ from Cajon pass to the San Jacinto basin has a faster SW side, while the section farther to the SE has an opposite velocity contrast with faster NE side. The damage zones and velocity contrasts produce at various locations fault zone trapped and head waves that are utilized to obtain high-resolution information on inner fault zone components (bimaterial interfaces, trapping structures). Analyses of high-frequency noise recorded by the fault zone arrays reveal complex shallow material with very low seismic velocities and strong lateral and vertical variations.

  14. The optimal polarizations for achieving maximum contrast in radar images

    NASA Technical Reports Server (NTRS)

    Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.

    1988-01-01

    There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data are processed with the optimal polarimetric matched filter.
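
    The maximization described above can be written as a generalized eigenvalue problem: maximize w^H C_a w / w^H C_b w over filter weights w, where C_a and C_b are the polarimetric covariance matrices of the two classes. A minimal sketch with synthetic matrices (the 3x3 values are purely illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def optimal_polarimetric_filter(cov_a, cov_b):
    """Contrast-maximizing matched filter sketch: solve C_a w = lambda C_b w and
    return the eigenvector of the largest eigenvalue plus the achievable contrast."""
    vals, vecs = eigh(cov_a, cov_b)     # generalized problem, ascending eigenvalues
    return vecs[:, -1], vals[-1]

# Hypothetical covariance matrices for 'target' and 'clutter' scattering classes.
c_target = np.diag([4.0, 1.0, 2.0])
c_clutter = np.diag([1.0, 1.0, 0.5])
w, contrast = optimal_polarimetric_filter(c_target, c_clutter)
print("max contrast ratio:", round(float(contrast), 2), "filter weights:", np.round(w, 3))
```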

  15. Extending depth of field for hybrid imaging systems via the use of both dark and dot point spread functions.

    PubMed

    Nhu, L V; Fan, Zhigang; Chen, Shouqian; Dang, Fanyang

    2016-09-10

    In this paper, we propose a method based on the use of both dark and dot point spread functions (PSFs) to extend the depth of field in hybrid imaging systems. Two different phase modulations from two phase masks are used to generate the dark and dot PSFs. The quartic phase mask (QPM) is used to generate the dot PSF. A combined phase mask between the QPM and the angle for generating the dark PSF is investigated. The simulated images show that the proposed method can produce superior imaging performance for hybrid imaging systems in extending the depth of field. PMID:27661372

  16. Extending depth of field for hybrid imaging systems via the use of both dark and dot point spread functions.

    PubMed

    Nhu, L V; Fan, Zhigang; Chen, Shouqian; Dang, Fanyang

    2016-09-10

    In this paper, we propose a method based on the use of both dark and dot point spread functions (PSFs) to extend the depth of field in hybrid imaging systems. Two different phase modulations from two phase masks are used to generate the dark and dot PSFs. The quartic phase mask (QPM) is used to generate the dot PSF. A combined phase mask between the QPM and the angle for generating the dark PSF is investigated. The simulated images show that the proposed method can produce superior imaging performance for hybrid imaging systems in extending the depth of field.

  17. Quantum dot imaging in the second near-infrared optical window: studies on reflectance fluorescence imaging depths by effective fluence rate and multiple image acquisition

    NASA Astrophysics Data System (ADS)

    Jung, Yebin; Jeong, Sanghwa; Nayoun, Won; Ahn, Boeun; Kwag, Jungheon; Geol Kim, Sang; Kim, Sungjee

    2015-04-01

    Quantum dot (QD) imaging capability was investigated by the imaging depth at a near-infrared second optical window (SOW; 1000 to 1400 nm) using time-modulated pulsed laser excitations to control the effective fluence rate. Various media, such as liquid phantoms, tissues, and in vivo small animals, were used and the imaging depths were compared with our predicted values. The QD imaging depth under excitation of continuous 20 mW/cm2 laser was determined to be 10.3 mm for 2 wt% hemoglobin phantom medium and 5.85 mm for 1 wt% intralipid phantom, which were extended by more than two times on increasing the effective fluence rate to 2000 mW/cm2. Bovine liver and porcine skin tissues also showed similar enhancement in the contrast-to-noise ratio (CNR) values. A QD sample was inserted into the abdomen of a mouse. With a higher effective fluence rate, the CNR increased more than twofold and the QD sample became clearly visualized, which was completely undetectable under continuous excitation. Multiple acquisitions of QD images and averaging process pixel by pixel were performed to overcome the thermal noise issue of the detector in SOW, which yielded significant enhancement in the imaging capability, showing up to a 1.5 times increase in the CNR.

  18. Broadband optical mammography instrument for depth-resolved imaging and local dynamic measurements

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Nishanth; Kainerstorfer, Jana M.; Sassaroli, Angelo; Anderson, Pamela G.; Fantini, Sergio

    2016-02-01

    We present a continuous-wave instrument for non-invasive diffuse optical imaging of the breast in a parallel-plate transmission geometry. The instrument measures continuous spectra in the wavelength range 650-1000 nm, with an intensity noise level <1.5% and a spatial sampling rate of 5 points/cm in the x- and y-directions. We collect the optical transmission at four locations, one collinear and three offset with respect to the illumination optical fiber, to recover the depth of optical inhomogeneities in the tissue. We imaged a tissue-like, breast shaped, silicone phantom (6 cm thick) with two embedded absorbing structures: a black circle (1.7 cm in diameter) and a black stripe (3 mm wide), designed to mimic a tumor and a blood vessel, respectively. The use of a spatially multiplexed detection scheme allows for the generation of on-axis and off-axis projection images simultaneously, as opposed to requiring multiple scans, thus decreasing scan-time and motion artifacts. This technique localizes detected inhomogeneities in 3D and accurately assigns their depth to within 1 mm in the ideal conditions of otherwise homogeneous tissue-like phantoms. We also measured induced hemodynamic changes in the breast of a healthy human subject at a selected location (no scanning). We applied a cyclic, arterial blood pressure perturbation by alternating inflation (to a pressure of 200 mmHg) and deflation of a pneumatic cuff around the subject's thigh at a frequency of 0.05 Hz, and measured oscillations with amplitudes up to 1 μM and 0.2 μM in the tissue concentrations of oxyhemoglobin and deoxyhemoglobin, respectively. These hemodynamic oscillations provide information about the vascular structure and functional integrity in tissue, and may be used to assess healthy or abnormal perfusion in a clinical setting.

  19. Optimized non-integer order phase mask to extend the depth of field of an imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Jiang; Miao, Erlong; Sui, Yongxin; Yang, Huaijiang

    2016-09-01

    Wavefront coding is an effective optical technique used to extend the depth of field of an incoherent imaging system. By introducing an optimized phase mask at the pupil plane, the modulated optical transfer function becomes defocus-invariant. In this paper, we propose a new form of phase mask using a non-integer order and the signum function to extend the depth of field. The performance of the phase mask is evaluated by comparing its defocused modulation transfer function invariance and Fisher information with those of other phase masks. Defocused imaging simulations are also carried out. The results demonstrate the advantages of the non-integer order phase mask and its effectiveness in extending the depth of field.

  20. Anatomy of the western Java plate interface from depth-migrated seismic images

    USGS Publications Warehouse

    Kopp, H.; Hindle, D.; Klaeschen, D.; Oncken, O.; Reichert, C.; Scholl, D.

    2009-01-01

    New pre-stack depth-migrated seismic images resolve the structural details of the western Java forearc and plate interface. The structural segmentation of the forearc into discrete mechanical domains correlates with distinct deformation styles. Approximately 2/3 of the trench sediment fill is detached and incorporated into frontal prism imbricates, while the floor sequence is underthrust beneath the décollement. Western Java, however, differs markedly from margins such as Nankai or Barbados, where a uniform, continuous décollement reflector has been imaged. In our study area, the plate interface reveals a spatially irregular, nonlinear pattern characterized by the morphological relief of subducted seamounts and thicker than average patches of underthrust sediment. The underthrust sediment is associated with a low velocity zone as determined from wide-angle data. Active underplating is not resolved, but likely contributes to the uplift of the large bivergent wedge that constitutes the forearc high. Our profile is located 100 km west of the 2006 Java tsunami earthquake. The heterogeneous décollement zone regulates the friction behavior of the shallow subduction environment where the earthquake occurred. The alternating pattern of enhanced frictional contact zones associated with oceanic basement relief and weak material patches of underthrust sediment influences seismic coupling and possibly contributed to the heterogeneous slip distribution. Our seismic images resolve a steeply dipping splay fault, which originates at the décollement and terminates at the sea floor and which potentially contributes to tsunami generation during co-seismic activity. © 2009 Elsevier B.V.

  1. 50% duty cycle may be inappropriate to achieve a sufficient chest compression depth when cardiopulmonary resuscitation is performed by female or light rescuers

    PubMed Central

    Lee, Chang Jae; Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Choi, Sung Wook; Kim, Ok Jun

    2015-01-01

    Objective Current guidelines for cardiopulmonary resuscitation recommend performing chest compressions (CC) with a duty cycle (DC) of 50%, in part because of the ease with which individuals may learn to achieve it with practice. However, no consideration has been given to a possible interaction between DC and the depth of CC, which has been the subject of recent study. Our aim was to determine whether a 50% DC is inappropriate for achieving sufficient chest compression depth for female and light rescuers. Methods Previously collected CC data, performed by senior medical students guided by metronome sounds with various down-stroke patterns and rates, were included in the analysis. Multiple linear regression analysis was performed to determine the association of average compression depth (ACD) with average compression rate (ACR), DC, and the physical characteristics of the performers. Expected ACD was calculated for various settings. Results DC, ACR, body weight, male sex, and self-assessed physical strength were significantly associated with ACD in the multivariate analysis. Based on our calculations, with a 50% DC, only men with an ACR of 140/min or faster, or with a body weight over 74 kg and an ACR of 120/min, can achieve sufficient ACD. Conclusion A shorter DC is independently correlated with deeper CC during simulated cardiopulmonary resuscitation. The optimal DC recommended in current guidelines may be inappropriate for achieving sufficient compression depth, especially for female or lighter-weight rescuers.
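
    The multivariate analysis can be sketched as an ordinary multiple linear regression of ACD on DC, ACR, body weight, and sex. The data and coefficients below are purely illustrative, not the study's estimates:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: ACD (mm) as a function of duty cycle (%), compression
# rate (/min), body weight (kg) and sex (1 = male). Coefficients are made up.
rng = np.random.default_rng(0)
n = 120
duty_cycle = rng.uniform(30, 60, n)
rate = rng.uniform(100, 160, n)
weight = rng.uniform(50, 90, n)
male = rng.integers(0, 2, n)
acd = (30 - 0.25 * duty_cycle + 0.08 * rate + 0.20 * weight + 4.0 * male
       + rng.normal(0, 3, n))

X = np.column_stack([duty_cycle, rate, weight, male])
model = LinearRegression().fit(X, acd)
print("coefficients (DC, rate, weight, male):", np.round(model.coef_, 3))

# Expected ACD for a 60-kg female rescuer at 50% DC and 120/min under this toy model.
print("expected ACD:", round(float(model.predict([[50.0, 120.0, 60.0, 0]])[0]), 1), "mm")
```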

  2. Subduction of European continental crust to 70 km depth imaged in the Western Alps

    NASA Astrophysics Data System (ADS)

    Paul, Anne; Zhao, Liang; Guillot, Stéphane; Solarino, Stefano

    2015-04-01

    The first conclusive evidence in support of the burial (and exhumation) of continental crust to depths larger than 90 km was provided by the discovery of coesite-bearing metamorphic rocks in the Dora Maira massif of the Western Alps (Chopin, 1984). Since then, even though similar outcrops of exhumed HP/UHP rocks have been recognized in a number of collisional belts, direct seismic evidence for subduction of continental crust into the mantle of the upper plate remains rare. In the Western Alps, the greatest depth ever recorded for the European Moho is 55 km by wide-angle seismic reflection (ECORS-CROP DSS Group, 1989). In an effort to image the European Moho at greater depth, and unravel the very complex lithospheric structure of the W-Alps, we have installed the CIFALPS temporary seismic array across the Southwestern Alps for 14 months (2012-2013). The almost linear array runs from the Rhône valley (France) to the Po plain (Italy) across the Dora Maira massif where exhumed HP/UHP metamorphic rocks of continental origin were first discovered. We used the receiver function processing technique that enhances P-to-S converted waves at velocity boundaries beneath the array. The receiver function records were migrated to depth using 4 different 1-D velocity models to account for the strongest structural changes along the profile. They were then stacked using the classical common-conversion point technique. Beneath the Southeast basin and the external zones, the obtained seismic section displays a clear converted phase on the European Moho, dipping gently to the ENE from ~35 km at the western end of the profile, to ~40 km beneath the Frontal Penninic thrust (FPT). The Moho dip then noticeably increases beneath the internal zones, while the amplitude of the converted phase weakens. The weak European Moho signal may be traced to 70-75 km depth beneath the eastern Dora Maira massif and the westernmost Po plain. At shallower levels (20-40 km), we observe a set of strong

  3. Development of a large-angle pinhole gamma camera with depth-of-interaction capability for small animal imaging

    NASA Astrophysics Data System (ADS)

    Baek, C.-H.; An, S. J.; Kim, H.-I.; Choi, Y.; Chung, Y. H.

    2012-01-01

    A large-angle gamma camera was developed for imaging small animal models used in medical and biological research. The simulation study shows that a large field of view (FOV) system provides higher sensitivity than a typical pinhole gamma camera by reducing the distance between the pinhole and the object. However, this gamma camera suffers from degradation of the spatial resolution at the peripheral region due to parallax error from obliquely incident photons. We propose a new method to measure the depth of interaction (DOI) using three layers of monolithic scintillators to reduce the parallax error. The detector module consists of three layers of monolithic CsI(Tl) crystals with dimensions of 50.0 × 50.0 × 2.0 mm³, a Hamamatsu H8500 PSPMT and a large-angle pinhole collimator with an acceptance angle of 120°. The 3-dimensional event positions were determined by the maximum-likelihood position-estimation (MLPE) algorithm and a pre-generated look-up table (LUT). The spatial resolution (FWHM) of a Co-57 point-like source was measured at different source positions with the conventional method (Anger logic) and with DOI information. We proved that high sensitivity can be achieved without degradation of spatial resolution using a large-angle pinhole gamma camera: this system can be used as a small animal imaging tool.
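
    As a rough, hypothetical sketch of maximum-likelihood position estimation against a pre-generated look-up table (not the authors' implementation), one can pick the (x, y, z) grid entry whose expected detector response maximizes the Poisson log-likelihood of the measured anode signals:

      import numpy as np

      def mlpe_lookup(measured, lut, positions):
          """measured: (n_ch,) anode signals for one event.
          lut: (n_pos, n_ch) pre-generated mean responses for each candidate 3-D position.
          positions: (n_pos, 3) the (x, y, z) coordinates of each LUT entry.
          Returns the candidate position that maximizes the Poisson log-likelihood."""
          mu = np.clip(lut, 1e-9, None)                       # avoid log(0)
          loglik = measured @ np.log(mu).T - mu.sum(axis=1)   # sum over channels of m*log(mu) - mu
          return positions[np.argmax(loglik)]

      # Toy example with 4 channels and 3 candidate positions (all numbers hypothetical).
      lut = np.array([[10.0, 5.0, 2.0, 1.0],
                      [ 4.0, 8.0, 8.0, 4.0],
                      [ 1.0, 2.0, 5.0, 10.0]])
      positions = np.array([[-10.0, 0.0, 1.0], [0.0, 0.0, 3.0], [10.0, 0.0, 5.0]])
      event = np.array([3.0, 7.0, 9.0, 5.0])
      print("estimated (x, y, z):", mlpe_lookup(event, lut, positions))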

  4. Erratum: The MACHO Project: Microlensing Optical Depth toward the Galactic Bulge from Difference Image Analysis

    NASA Astrophysics Data System (ADS)

    Alcock, C.; Allsman, R. A.; Alves, D. R.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Geha, M.; Griest, K.; Lehner, M. J.; Marshall, S. L.; Minniti, D.; Nelson, C. A.; Peterson, B. A.; Popowski, P.; Pratt, M. R.; Quinn, P. J.; Stubbs, C. W.; Sutherland, W.; Tomaney, A. B.; Vandehei, T.; Welch, D. L.

    2001-08-01

    In the paper ``The MACHO Project: Microlensing Optical Depth toward the Galactic Bulge from Difference Image Analysis'' by C. Alcock, R. A. Allsman, D. R. Alves, T. S. Axelrod, A. C. Becker, D. P. Bennett, K. H. Cook, A. J. Drake, K. C. Freeman, M. Geha, K. Griest, M. J. Lehner, S. L. Marshall, D. Minniti, C. A. Nelson, B. A. Peterson, P. Popowski, M. R. Pratt, P. J. Quinn, C. W. Stubbs, W. Sutherland, A. B. Tomaney, T. Vandehei, and D. L. Welch (ApJ, 541, 734 [2000]) an incorrect version of Table 3 was published. A second copy of Table 2 was given as Table 3. The correct version of Table 3 is available in the preprint version of the paper (astro-ph/0002510) and is printed below. This correction does not affect any of the results in the paper.

  5. Nanoscale β-nuclear magnetic resonance depth imaging of topological insulators.

    PubMed

    Koumoulis, Dimitrios; Morris, Gerald D; He, Liang; Kou, Xufeng; King, Danny; Wang, Dong; Hossain, Masrur D; Wang, Kang L; Fiete, Gregory A; Kanatzidis, Mercouri G; Bouchard, Louis-S

    2015-07-14

    Considerable evidence suggests that variations in the properties of topological insulators (TIs) at the nanoscale and at interfaces can strongly affect the physics of topological materials. Therefore, a detailed understanding of surface states and interface coupling is crucial to the search for and applications of new topological phases of matter. Currently, no methods can provide depth profiling near surfaces or at interfaces of topologically inequivalent materials. Such a method could advance the study of interactions. Herein, we present a noninvasive depth-profiling technique based on β-detected NMR (β-NMR) spectroscopy of radioactive (8)Li(+) ions that can provide "one-dimensional imaging" in films of fixed thickness and generates nanoscale views of the electronic wavefunctions and magnetic order at topological surfaces and interfaces. By mapping the (8)Li nuclear resonance near the surface and 10-nm deep into the bulk of pure and Cr-doped bismuth antimony telluride films, we provide signatures related to the TI properties and their topological nontrivial characteristics that affect the electron-nuclear hyperfine field, the metallic shift, and magnetic order. These nanoscale variations in β-NMR parameters reflect the unconventional properties of the topological materials under study, and understanding the role of heterogeneities is expected to lead to the discovery of novel phenomena involving quantum materials.

  6. Lidar Waveform-Based Analysis of Depth Images Constructed Using Sparse Single-Photon Data.

    PubMed

    Altmann, Yoann; Ren, Ximing; McCarthy, Aongus; Buller, Gerald S; McLaughlin, Steve

    2016-05-01

    This paper presents a new Bayesian model and algorithm used for depth and reflectivity profiling using full waveforms from the time-correlated single-photon counting measurement in the limit of very low photon counts. The proposed model represents each Lidar waveform as a combination of a known impulse response, weighted by the target reflectivity, and an unknown constant background, corrupted by Poisson noise. Prior knowledge about the problem is embedded through prior distributions that account for the different parameter constraints and their spatial correlation among the image pixels. In particular, a gamma Markov random field (MRF) is used to model the joint distribution of the target reflectivity, and a second MRF is used to model the distribution of the target depth, which are both expected to exhibit significant spatial correlations. An adaptive Markov chain Monte Carlo algorithm is then proposed to perform Bayesian inference. This algorithm is equipped with a stochastic optimization adaptation mechanism that automatically adjusts the parameters of the MRFs by maximum marginal likelihood estimation. Finally, the benefits of the proposed methodology are demonstrated through a series of experiments using real data.
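
    The full method is a Bayesian MCMC scheme with MRF priors coupling neighbouring pixels; purely as a much-simplified, single-pixel illustration of the observation model (a known impulse response scaled by the target reflectivity plus a constant background, with Poisson counts), the hypothetical sketch below estimates the depth bin by a Poisson log-likelihood scan and the reflectivity by a crude moment match. All names and numbers are placeholders.

      import numpy as np

      def single_pixel_depth(hist, irf, background):
          """hist: photon-count histogram (T,); irf: known impulse response (K,);
          background: constant background rate per bin, assumed known here for simplicity."""
          T, K = len(hist), len(irf)
          # Crude reflectivity estimate: excess counts above background divided by the IRF area.
          r = max(hist.sum() - T * background, 0.0) / irf.sum()
          best_bin, best_ll = 0, -np.inf
          for t0 in range(T - K + 1):                      # scan candidate depth (time) bins
              mu = np.full(T, float(background))
              mu[t0:t0 + K] += r * irf
              ll = np.sum(hist * np.log(mu) - mu)          # Poisson log-likelihood up to a constant
              if ll > best_ll:
                  best_bin, best_ll = t0, ll
          return best_bin, r

      # Toy histogram: a short pulse returned at bin 40 on a weak background (hypothetical values).
      rng = np.random.default_rng(0)
      irf = np.array([0.1, 0.3, 0.4, 0.2])
      mu_true = np.full(100, 0.05)
      mu_true[40:44] += 3.0 * irf
      hist = rng.poisson(mu_true).astype(float)
      print("depth bin, reflectivity:", single_pixel_depth(hist, irf, background=0.05))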

  7. Lidar Waveform-Based Analysis of Depth Images Constructed Using Sparse Single-Photon Data.

    PubMed

    Altmann, Yoann; Ren, Ximing; McCarthy, Aongus; Buller, Gerald S; McLaughlin, Steve

    2016-05-01

    This paper presents a new Bayesian model and algorithm used for depth and reflectivity profiling using full waveforms from the time-correlated single-photon counting measurement in the limit of very low photon counts. The proposed model represents each Lidar waveform as a combination of a known impulse response, weighted by the target reflectivity, and an unknown constant background, corrupted by Poisson noise. Prior knowledge about the problem is embedded through prior distributions that account for the different parameter constraints and their spatial correlation among the image pixels. In particular, a gamma Markov random field (MRF) is used to model the joint distribution of the target reflectivity, and a second MRF is used to model the distribution of the target depth, which are both expected to exhibit significant spatial correlations. An adaptive Markov chain Monte Carlo algorithm is then proposed to perform Bayesian inference. This algorithm is equipped with a stochastic optimization adaptation mechanism that automatically adjusts the parameters of the MRFs by maximum marginal likelihood estimation. Finally, the benefits of the proposed methodology are demonstrated through a series of experiments using real data. PMID:26886984

  8. Nanoscale β-nuclear magnetic resonance depth imaging of topological insulators.

    PubMed

    Koumoulis, Dimitrios; Morris, Gerald D; He, Liang; Kou, Xufeng; King, Danny; Wang, Dong; Hossain, Masrur D; Wang, Kang L; Fiete, Gregory A; Kanatzidis, Mercouri G; Bouchard, Louis-S

    2015-07-14

    Considerable evidence suggests that variations in the properties of topological insulators (TIs) at the nanoscale and at interfaces can strongly affect the physics of topological materials. Therefore, a detailed understanding of surface states and interface coupling is crucial to the search for and applications of new topological phases of matter. Currently, no methods can provide depth profiling near surfaces or at interfaces of topologically inequivalent materials. Such a method could advance the study of interactions. Herein, we present a noninvasive depth-profiling technique based on β-detected NMR (β-NMR) spectroscopy of radioactive (8)Li(+) ions that can provide "one-dimensional imaging" in films of fixed thickness and generates nanoscale views of the electronic wavefunctions and magnetic order at topological surfaces and interfaces. By mapping the (8)Li nuclear resonance near the surface and 10-nm deep into the bulk of pure and Cr-doped bismuth antimony telluride films, we provide signatures related to the TI properties and their topological nontrivial characteristics that affect the electron-nuclear hyperfine field, the metallic shift, and magnetic order. These nanoscale variations in β-NMR parameters reflect the unconventional properties of the topological materials under study, and understanding the role of heterogeneities is expected to lead to the discovery of novel phenomena involving quantum materials. PMID:26124141

  9. A comparison of analytical depth of search metrics with mission simulations for exoplanet imagers

    NASA Astrophysics Data System (ADS)

    Savransky, Dmitry; Garrett, Daniel; Macintosh, Bruce A.

    2016-07-01

    While new, advanced, ground-based instrumentation continues to produce new exoplanet discoveries and provide further insights into exoplanet formation and evolution, our desire to discover and characterize planets of Earth size about stars of all types and ages necessitates dedicated, imaging space instruments. Given the high costs and complexities of space observatories, it is vital to build confidence in a proposed instrument's capabilities during its design phase, and much effort has been devoted to predicting the performance of various flavors of space-based exoplanet imagers. The fundamental problem with trying to answer the question of how many exoplanets a given instrument will discover is that the number of discoverable planets is unknown, and so all results are entirely dependent on the assumptions made about the population of planets being studied. Here, we explore an alternate approach, which involves explicitly separating instrumental and mission biasing from the assumptions made about planet distributions. This allows us to calculate a mission's 'depth of search', a metric independent of the planetary population and defined as the fraction of the contrast-projected separation space reached by a given instrument for a fixed planetary radius and semi-major axis. When multiplied by an assumed occurrence rate for planets at this radius and semi-major axis (derived from an assumed planetary population), this yields the expected number of detections by the instrument for that population. Integrating over the full ranges of semi-major axis and planetary radius provides estimates of planet yield for a full mission. We use this metric to evaluate the coronagraphs under development for the WFIRST mission under different operating assumptions. We also compare the results of convolving the depth of search with an assumed planetary population to those derived by running full mission simulations based on that same population.
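
    As a hedged sketch of how such a depth-of-search grid might be combined with an assumed occurrence-rate grid to estimate yield, the toy arrays below are placeholders (not WFIRST numbers); the calculation simply multiplies the two grids bin by bin and sums.

      import numpy as np

      # Hypothetical grids over semi-major axis (rows) and planetary radius (columns) bins.
      # depth_of_search[i, j]: fraction of the contrast / projected-separation space reached in bin (i, j).
      # occurrence[i, j]: assumed number of planets per star in the same bin, from a chosen population.
      depth_of_search = np.array([[0.05, 0.20, 0.45],
                                  [0.10, 0.35, 0.60],
                                  [0.02, 0.15, 0.40]])
      occurrence = np.array([[0.10, 0.05, 0.01],
                             [0.20, 0.08, 0.02],
                             [0.30, 0.10, 0.03]])
      n_stars = 100   # hypothetical number of target stars

      # Expected detections per star, then summed over an (assumed identical) target list.
      expected_per_star = np.sum(depth_of_search * occurrence)
      print("expected detections:", n_stars * expected_per_star)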

  10. The Impact of New York's School Libraries on Student Achievement and Motivation: Phase II--In-Depth Study

    ERIC Educational Resources Information Center

    Small, Ruth V.; Snyder, Jaime

    2009-01-01

    This article reports the results of the second phase of a three-phase study on the impact of the New York State's school libraries' services and resources on student achievement and motivation. A representative sample of more than 1,600 classroom teachers, students, and school library media specialists (SMLSs) from 47 schools throughout New York…

  11. Depth discrimination in diffuse optical transmission imaging by planar scanning off-axis fibers: initial applications to optical mammography.

    PubMed

    Kainerstorfer, Jana M; Yu, Yang; Weliwitigoda, Geethika; Anderson, Pamela G; Sassaroli, Angelo; Fantini, Sergio

    2013-01-01

    We present a method for depth discrimination in parallel-plate, transmission mode, diffuse optical imaging. The method is based on scanning a set of detector pairs, where the two detectors in each pair are separated by a distance δDi along a given direction within the x-y scanning plane. A given optical inhomogeneity appears shifted by αiδDi (with 0 ≤ αi ≤ 1) in the images collected with the two detection fibers of the i-th pair. Such a spatial shift can be translated into a measurement of the depth z of the inhomogeneity, and the depth measurements based on each detector pair are combined into a specially designed weighted average. This depth assessment is demonstrated on tissue-like phantoms for simple inhomogeneities such as straight rods in single-rod or multiple-rod configurations, and for more complex curved structures which mimic blood vessels in the female breast. In these phantom tests, the method has recovered the depth of single inhomogeneities in the central position of the phantom to within 4 mm of their actual value, and within 7 mm for more superficial inhomogeneities, where the thickness of the phantom was 65 mm. The application of this method to more complex images, such as optical mammograms, requires a robust approach to identify corresponding structures in the images collected with the two detectors of a given pair. To this aim, we propose an approach based on the inner product of the skeleton images collected with the two detectors of each pair, and we present an application of this approach to optical in vivo images of the female breast. This depth discrimination method can enhance the spatial information content of 2D projection images of the breast by assessing the depth of detected structures, and by allowing for 3D localization of breast tumors.
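
    Purely as an illustrative sketch (the paper's exact shift-to-depth mapping and its specially designed weighting are not reproduced here), the snippet below assumes a simple straight-line transmission geometry in which the shift between the two images of pair i equals αiδDi and the depth scales linearly as αi times the slab thickness, and then combines the pairs with weights proportional to the detector separation. All numbers are hypothetical.

      import numpy as np

      def depth_from_shifts(shifts_mm, separations_mm, thickness_mm):
          """shifts_mm[i]: measured image shift for detector pair i (= alpha_i * deltaD_i).
          separations_mm[i]: detector separation deltaD_i of pair i.
          thickness_mm: slab (phantom or breast) thickness between the plates.
          Assumes depth ~ alpha_i * thickness (a straight-line geometric approximation)."""
          shifts = np.asarray(shifts_mm, float)
          seps = np.asarray(separations_mm, float)
          alphas = np.clip(shifts / seps, 0.0, 1.0)
          depths = alphas * thickness_mm
          weights = seps / seps.sum()   # assumption: larger separations give more reliable depth
          return float(np.sum(weights * depths))

      # Toy example: three detector pairs and a 65 mm thick phantom.
      print(depth_from_shifts(shifts_mm=[4.0, 8.2, 12.1],
                              separations_mm=[8.0, 16.0, 24.0],
                              thickness_mm=65.0))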

  12. Low-Achieving Readers, High Expectations: Image Theatre Encourages Critical Literacy

    ERIC Educational Resources Information Center

    Rozansky, Carol Lloyd; Aagesen, Colleen

    2010-01-01

    Students in an eighth-grade, urban, low-achieving reading class were introduced to critical literacy through engagement in Image Theatre. Developed by liberatory dramatist Augusto Boal, Image Theatre gives participants the opportunity to examine texts in the triple role of interpreter, artist, and sculptor (i.e., image creator). The researchers…

  13. In vivo volumetric depth-resolved vasculature imaging of human limbus and sclera with 1 μm swept source phase-variance optical coherence angiography

    NASA Astrophysics Data System (ADS)

    Poddar, Raju; Zawadzki, Robert J.; Cortés, Dennis E.; Mannis, Mark J.; Werner, John S.

    2015-06-01

    We present in vivo volumetric depth-resolved vasculature images of the anterior segment of the human eye acquired with phase-variance based motion contrast using a high-speed (100 kHz, 10⁵ A-scans/s) swept source optical coherence tomography system (SSOCT). High phase stability SSOCT imaging was achieved by using a computationally efficient phase stabilization approach. The human corneo-scleral junction and sclera were imaged with swept source phase-variance optical coherence angiography and compared with slit lamp images from the same eyes of normal subjects. Different features of the rich vascular system in the conjunctiva and episclera were visualized and described. This system can be used as a potential tool for ophthalmological research to determine changes in the outflow system, which may be helpful for identification of abnormalities that lead to glaucoma.

  14. Peripapillary choroidal thickness in Chinese children using enhanced depth imaging optical coherence tomography

    PubMed Central

    Wu, Xi-Shi; Shen, Li-Jun; Chen, Ru-Ru; Lyu, Zhe

    2016-01-01

    AIM To evaluate the peripapillary choroidal thickness (PPCT) in Chinese children, and to analyze the influencing factors. METHODS PPCT was measured with enhanced depth imaging optical coherence tomography (EDI-OCT) in 70 children (53 myopes and 17 non-myopes) aged 7 to 18y, with spherical equivalent refractive errors between 0.50 and −5.87 diopters (D). Peripapillary choroidal imaging was performed using circular scans of a diameter of 3.4 mm around the optic disc. PPCT was measured by EDI-OCT in six sectors: nasal (N), superonasal (SN), superotemporal (ST), temporal (T), inferotemporal (IT) and inferonasal (IN), as well as global RNFL thickness (G). RESULTS The mean global PPCT was 165.49±33.76 µm. The temporal, inferonasal and inferotemporal PPCT were significantly thinner than in the nasal, superonasal and superotemporal segments. PPCT was significantly thinner in the myopic group at the temporal, superotemporal and inferotemporal segments. The axial length was significantly associated with the average global (β=−0.419, P=0.014), superonasal (β=−2.009, P=0.049) and inferonasal (β=−2.000, P=0.049) PPCT. The other factors (gender, age, SE) were not significantly associated with PPCT. CONCLUSION PPCT was thinner in the myopic group at the temporal, superotemporal and inferotemporal segments. The axial length was found to be negatively correlated with PPCT. Further studies on the relationship between PPCT and myopia are needed. PMID:27803863

  15. Depth-selective imaging of macroscopic objects hidden behind a scattering layer using low-coherence and wide-field interferometry

    NASA Astrophysics Data System (ADS)

    Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Ko, Hakseok; Choi, Wonshik

    2016-08-01

    Imaging systems targeting macroscopic objects tend to have poor depth selectivity. In this Letter, we present a 3D imaging system featuring a depth resolution of 200 μm, a depth scanning range of more than 1 m, and a field of view larger than 70×70 mm². For depth selectivity, we set up an off-axis digital holographic imaging system using a light source with a coherence length of 400 μm. A prism pair was installed in the reference beam path for long-range depth scanning. We imaged macroscopic targets with multiple different layers and also demonstrated imaging of targets hidden behind a scattering layer.

  16. Calibrating remotely sensed river bathymetry in the absence of field measurements: Flow REsistance Equation-Based Imaging of River Depths (FREEBIRD)

    NASA Astrophysics Data System (ADS)

    Legleiter, Carl J.

    2015-04-01

    Remote sensing could enable high-resolution mapping of long river segments, but realizing this potential will require new methods for inferring channel bathymetry from passive optical image data without using field measurements for calibration. As an alternative to regression-based approaches, this study introduces a novel framework for Flow REsistance Equation-Based Imaging of River Depths (FREEBIRD). This technique allows for depth retrieval in the absence of field data by linking a linear relation between an image-derived quantity X and depth d to basic equations of open channel flow: continuity and flow resistance. One FREEBIRD algorithm takes as input an estimate of the channel aspect (width/depth) ratio A and a series of cross-sections extracted from the image and returns the coefficients of the X versus d relation. A second algorithm calibrates this relation so as to match a known discharge Q. As an initial test of FREEBIRD, these procedures were applied to panchromatic satellite imagery and publicly available aerial photography of a clear-flowing gravel-bed river. Accuracy assessment based on independent field surveys indicated that depth retrieval performance was comparable to that achieved by direct, field-based calibration methods. Sensitivity analyses suggested that FREEBIRD output was not heavily influenced by misspecification of A or Q, or by selection of other input parameters. By eliminating the need for simultaneous field data collection, these methods create new possibilities for large-scale river monitoring and analysis of channel change, subject to the important caveat that the underlying relationship between X and d must be reasonably strong.
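
    As a hedged sketch of the discharge-matching idea behind the second FREEBIRD algorithm described above (not the published implementation), the snippet below assumes a proportional relation d = b·X at a cross-section and a Manning flow-resistance law, and solves for the coefficient b so that the implied discharge matches a known Q. The roughness, slope and X values are placeholders.

      import numpy as np

      def discharge(b, X, dx, n_manning, slope):
          """Discharge implied by depths d = b * X across one cross-section, using
          Manning's flow resistance u = (1/n) d^(2/3) sqrt(S) at each station."""
          d = np.clip(b * X, 0.0, None)
          u = (1.0 / n_manning) * d ** (2.0 / 3.0) * np.sqrt(slope)
          return float(np.sum(d * u * dx))          # continuity: sum of depth * velocity * width

      def calibrate_b(X, dx, n_manning, slope, Q_known, b_hi=10.0, iters=60):
          """Bisection on b so that the computed discharge matches Q_known
          (discharge increases monotonically with b)."""
          lo, hi = 0.0, b_hi
          for _ in range(iters):
              mid = 0.5 * (lo + hi)
              if discharge(mid, X, dx, n_manning, slope) < Q_known:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      # Hypothetical image-derived quantity X at 20 stations spaced 1 m across the channel.
      X = np.abs(np.sin(np.linspace(0.0, np.pi, 20)))
      b = calibrate_b(X, dx=1.0, n_manning=0.035, slope=0.002, Q_known=15.0)
      print("calibrated b:", round(b, 3), "depths (m):", np.round(b * X, 2))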

  17. Performance comparison between 8- and 14-bit-depth imaging in polarization-sensitive swept-source optical coherence tomography.

    PubMed

    Lu, Zenghai; Kasaragod, Deepa K; Matcher, Stephen J

    2011-03-04

    Recently the effects of reduced bit-depth acquisition on swept-source optical coherence tomography (SS-OCT) image quality have been evaluated by using simulations and empirical studies, showing that image acquisition at 8-bit depth allows high system sensitivity with only a minimal drop in the signal-to-noise ratio compared to higher bit-depth systems. However, in these studies the 8-bit data is actually 12- or 14-bit ADC data numerically truncated to 8 bits. In practice, a native 8-bit ADC could actually possess a true bit resolution lower than this due to the electronic jitter in the converter etc. We compare true 8- and 14-bit-depth imaging of SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of equine tendon indicate no significant differences between images acquired by the two DAQ boards suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. One possible disadvantage is a reduced imaging dynamic range which can manifest itself as an increase in image artifacts due to strong Fresnel reflection.

  18. Large field-of-view and depth-specific cortical microvascular imaging underlies regional differences in ischemic brain

    NASA Astrophysics Data System (ADS)

    Qin, Jia; Shi, Lei; Dziennis, Suzan; Wang, Ruikang K.

    2014-02-01

    The ability to non-invasively monitor and quantify blood flow, blood vessel morphology, oxygenation and tissue morphology is important for improved diagnosis, treatment and management of various neurovascular disorders, e.g., stroke. Currently, no imaging technique is available that can satisfactorily extract these parameters from in vivo microcirculatory tissue beds, with a large field of view and sufficient resolution at a defined depth, without any harm to the tissue. For more effective therapeutics, we need to determine the area of the brain that is damaged but not yet dead after focal ischemia. Here we develop an integrated multi-functional imaging system, in which SDW-LSCI (synchronized dual wavelength laser speckle imaging) is used as a guiding tool for OMAG (optical microangiography) to investigate the fine detail of tissue hemodynamics, such as vessel flow, profile, and flow direction. We determine the utility of the integrated system for serial monitoring of the aforementioned parameters in experimental stroke, middle cerebral artery occlusion (MCAO) in mice. For 90 min MCAO, onsite and 24 hours following reperfusion, we use SDW-LSCI to determine distinct flow and oxygenation variations for differentiation of the infarction, peri-infarct, reduced flow and contralateral regions. The blood volumes are quantifiable and distinct in the aforementioned regions. We also demonstrate that the behaviors of flow and flow direction in the arteries connected to the MCA play an important role in the time course of MCAO. These achievements may improve our understanding of vascular involvement under pathologic and physiological conditions, and ultimately facilitate clinical diagnosis, monitoring and therapeutic interventions of neurovascular diseases, such as ischemic stroke.

  19. Coupling sky images with radiative transfer models: a new method to estimate cloud optical depth

    NASA Astrophysics Data System (ADS)

    Mejia, Felipe A.; Kurtz, Ben; Murray, Keenan; Hinkelman, Laura M.; Sengupta, Manajit; Xie, Yu; Kleissl, Jan

    2016-08-01

    A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based sky imager (USI) is presented. The radiance red-blue ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a radiative transfer model (RTM). From these images the basic parameters affecting the radiance and red-blue ratio (RBR) of a pixel are identified as the solar zenith angle (θ0), τc, solar pixel angle/scattering angle (ϑs), and pixel zenith angle/view angle (ϑz). The effects of these parameters are described, and the functions for radiance, Iλ(τc, θ0, ϑs, ϑz), and red-blue ratio, RBR(τc, θ0, ϑs, ϑz), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc, where RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured Iλmeas(ϑs, ϑz), in addition to RBRmeas(ϑs, ϑz), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement (ARM) program site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min et al. (2003) method for overcast skies. τc values ranged from 0 to 80, with values over 80 being capped and registered as 80. A τc RMSE of 2.5 between the Min et al. (2003) method and the USI is observed. The MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. The procedure developed here provides a foundation to test and develop other cloud detection algorithms.
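
    As a schematic, hypothetical sketch of the disambiguation step (not the published RRBR code): given RTM-derived look-up tables of radiance and RBR versus τc for the relevant (θ0, ϑs, ϑz), the measured radiance resolves the two-branch ambiguity of RBR through a joint misfit minimization over the τc grid. The LUT shapes, noise weights and measurement values below are all invented for illustration.

      import numpy as np

      def retrieve_tau(rbr_meas, rad_meas, tau_grid, rbr_lut, rad_lut,
                       sigma_rbr=0.05, sigma_rad=0.05):
          """Pick the tau_c grid value minimizing a combined, normalized misfit of the
          modeled RBR and radiance against the measured values (toy weighting)."""
          misfit = ((rbr_lut - rbr_meas) / sigma_rbr) ** 2 \
                 + ((rad_lut - rad_meas) / (sigma_rad * rad_lut.max())) ** 2
          return tau_grid[np.argmin(misfit)]

      # Toy LUTs: RBR rises then falls with tau_c, radiance decreases monotonically.
      tau_grid = np.linspace(0.1, 80.0, 400)
      rbr_lut = 0.6 + 0.35 * tau_grid / (1.0 + tau_grid) - 0.002 * tau_grid
      rad_lut = 100.0 * np.exp(-0.05 * tau_grid) + 5.0

      # A measurement consistent with tau_c ~ 20; the radiance breaks the RBR ambiguity.
      i = np.searchsorted(tau_grid, 20.0)
      print("retrieved tau_c:", round(retrieve_tau(rbr_lut[i], rad_lut[i],
                                                   tau_grid, rbr_lut, rad_lut), 1))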

  20. Pre-stack depth migration for improved imaging under seafloor canyons: 2D case study of Browse Basin, Australia*

    NASA Astrophysics Data System (ADS)

    Debenham, Helen; Westlake, Shane

    2014-06-01

    In the Browse Basin, as in many areas of the world, complex seafloor topography can cause problems with seismic imaging. This is related to complex ray paths, and sharp lateral changes in velocity. This paper compares ways in which 2D Kirchhoff imaging can be improved below seafloor canyons, using both time and depth domain processing. In the time domain, to improve on standard pre-stack time migration (PSTM) we apply removable seafloor static time shifts in order to reduce the push down effect under seafloor canyons before migration. This allows for better event continuity in the seismic imaging. However this approach does not fully solve the problem, still giving sub-optimal imaging, leaving amplitude shadows and structural distortion. Only depth domain processing with a migration algorithm that honours the paths of the seismic energy as well as a detailed velocity model can provide improved imaging under these seafloor canyons, and give confidence in the structural components of the exploration targets in this area. We therefore performed depth velocity model building followed by pre-stack depth migration (PSDM), the result of which provided a step change improvement in the imaging, and provided new insights into the area.

  1. Three-dimensional image cytometer based on widefield structured light microscopy and high-speed remote depth scanning.

    PubMed

    Choi, Heejin; Wadduwage, Dushan N; Tu, Ting Yuan; Matsudaira, Paul; So, Peter T C

    2015-01-01

    A high-throughput 3D image cytometer has been developed that improves imaging speed by an order of magnitude over current technologies. This imaging speed improvement was realized by combining several key components. First, a depth-resolved image can be rapidly generated using a structured light reconstruction algorithm that requires only two wide field images, one with uniform illumination and the other with structured illumination. Second, depth scanning is implemented using high-speed remote depth scanning. Finally, the large field of view, high NA objective lens and the high pixelation, high frame rate sCMOS camera enable high resolution, high sensitivity imaging of a large cell population. This system can image 800 cells/s in 3D at submicron resolution, corresponding to imaging 1 million cells in 20 min. The statistical accuracy of this instrument is verified by quantitatively measuring rare cell populations with ratios ranging from 1:1 to 1:10⁵. © 2014 International Society for Advancement of Cytometry. PMID:25352187

  2. Three-dimensional image cytometer based on widefield structured light microscopy and high-speed remote depth scanning.

    PubMed

    Choi, Heejin; Wadduwage, Dushan N; Tu, Ting Yuan; Matsudaira, Paul; So, Peter T C

    2015-01-01

    A high-throughput 3D image cytometer has been developed that improves imaging speed by an order of magnitude over current technologies. This imaging speed improvement was realized by combining several key components. First, a depth-resolved image can be rapidly generated using a structured light reconstruction algorithm that requires only two wide field images, one with uniform illumination and the other with structured illumination. Second, depth scanning is implemented using high-speed remote depth scanning. Finally, the large field of view, high NA objective lens and the high pixelation, high frame rate sCMOS camera enable high resolution, high sensitivity imaging of a large cell population. This system can image 800 cells/s in 3D at submicron resolution, corresponding to imaging 1 million cells in 20 min. The statistical accuracy of this instrument is verified by quantitatively measuring rare cell populations with ratios ranging from 1:1 to 1:10⁵. © 2014 International Society for Advancement of Cytometry.

  3. Detailed imaging of flowing structures at depth using microseismicity: a tool for site investigation?

    NASA Astrophysics Data System (ADS)

    Pytharouli, S.; Lunn, R. J.; Shipton, Z. K.

    2011-12-01

    Field evidence shows that faults and fractures can act as focused pathways or barriers for fluid migration. This is an important property for modern engineering problems, e.g., CO2 sequestration, geological radioactive waste disposal, geothermal energy exploitation, land reclamation and remediation. For such applications the detailed characterization of the location, orientation and hydraulic properties of existing fractures is necessary. These investigations are costly, requiring the hire of expensive equipment (excavators or drill rigs) that incurs standing charges when not in use. In addition, they only provide information for discrete sample 'windows'. Non-intrusive methods have the ability to gather information across an entire area. Methods including electrical resistivity/conductivity and ground penetrating radar (GPR) have been used as tools for site investigations. Their imaging ability is often restricted due to unfavourable on-site conditions, e.g., GPR is not useful in cases where a layer of clay or reinforced concrete is present. Our research has shown that high quality seismic data can be successfully used in the detailed imaging of sub-surface structures at depth; using induced microseismicity data recorded beneath the Açu reservoir in Brazil we identified orientations and values of average permeability of open shear fractures at depths up to 2.5 km. Could microseismicity also provide information on the fracture width in terms of stress drops? First results from numerical simulations showed that higher stress drop values correspond to narrower fractures. These results were consistent with geological field observations. This study highlights the great potential of using microseismicity data as a supplementary tool for site investigation. Individual large-scale shear fractures in large rock volumes cannot currently be identified by any other geophysical dataset. The resolution of the method is restricted by the detection threshold of the local

  4. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine.

    PubMed

    Xia, Tian; Patel, Shriji N; Szirth, Ben C; Kolomeyer, Anton M; Khouri, Albert S

    2016-01-01

    Background. Software guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. 32 patients had mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic 0.29 ± 0.12. The difference between stereoscopic and DAS assisted was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD. PMID:27190507

  5. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine

    PubMed Central

    Xia, Tian; Patel, Shriji N.; Szirth, Ben C.

    2016-01-01

    Background. Software guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. 32 patients had mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic 0.29 ± 0.12. The difference between stereoscopic and DAS assisted was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD. PMID:27190507

  6. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2010-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object, such as an anatomical feature. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the anatomical feature; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.

  7. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2008-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns, at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image, for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite, by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern, is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping to be utilized in a host of applications, for example: displaying a 3-D view of the object; virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.

  8. Multistep joint bilateral depth upsampling

    NASA Astrophysics Data System (ADS)

    Riemens, A. K.; Gangwal, O. P.; Barenbrug, B.; Berretty, R.-P. M.

    2009-01-01

    Depth maps are used in many applications, e.g. 3D television, stereo matching, segmentation, etc. Often, depth maps are available at a lower resolution compared to the corresponding image data. For these applications, depth maps must be upsampled to the image resolution. Recently, joint bilateral filters have been proposed to upsample depth maps in a single step. In this solution, a high-resolution output depth is computed as a weighted average of surrounding low-resolution depth values, where the weights depend on a spatial distance function and an intensity range function evaluated on the related image data. Compared to that, we present two novel ideas. Firstly, we apply anti-alias prefiltering on the high-resolution image to derive an image at the same low resolution as the input depth map. The upsample filter uses samples from both the high-resolution and the low-resolution images in the range term of the bilateral filter. Secondly, we propose to perform the upsampling in multiple stages, refining the resolution by a factor of 2×2 at each stage. We show experimental results on the consequences of the aliasing issue, and we apply our method to two use cases: a high quality ground-truth depth map and a real-time generated depth map of lower quality. For the first use case a relatively small filter footprint is applied; the second use case benefits from a substantially larger footprint. These experiments show that the dual image-resolution range function alleviates the aliasing artifacts and therefore improves the temporal stability of the output depth map. For both use cases, we achieved comparable or better image quality than upsampling with the joint bilateral filter in a single step. For the former use case, we achieve a reduction of a factor of 5 in computational cost, whereas for the latter use case, the cost saving is a factor of 50.
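
    A minimal, single-stage sketch of joint bilateral depth upsampling with the dual-resolution range term described above (the parameter values and the crude prefilter are assumptions, not the authors' implementation): for each high-resolution pixel, nearby low-resolution depth samples are averaged with a spatial Gaussian weight and a range weight that compares the high-resolution intensity at the output pixel with the prefiltered low-resolution intensity at each contributing sample.

      import numpy as np

      def jbu_dual_range(depth_lr, img_hr, img_lr, scale, sigma_s=1.0, sigma_r=0.1, radius=2):
          """Upsample depth_lr (h, w) to (h*scale, w*scale), guided by a grayscale high-resolution
          image img_hr in [0, 1] and its anti-alias prefiltered low-resolution copy img_lr (h, w)."""
          h, w = depth_lr.shape
          H, W = h * scale, w * scale
          out = np.zeros((H, W))
          for y in range(H):
              for x in range(W):
                  yl, xl = y / scale, x / scale                 # output position in low-res coordinates
                  y0, x0 = int(round(yl)), int(round(xl))
                  num = den = 0.0
                  for dy in range(-radius, radius + 1):
                      for dx in range(-radius, radius + 1):
                          yy, xx = y0 + dy, x0 + dx
                          if 0 <= yy < h and 0 <= xx < w:
                              ws = np.exp(-((yy - yl) ** 2 + (xx - xl) ** 2) / (2 * sigma_s ** 2))
                              wr = np.exp(-((img_hr[y, x] - img_lr[yy, xx]) ** 2) / (2 * sigma_r ** 2))
                              num += ws * wr * depth_lr[yy, xx]
                              den += ws * wr
                  out[y, x] = num / den if den > 0 else depth_lr[min(y0, h - 1), min(x0, w - 1)]
          return out

      # Toy example: a 4x4 depth map upsampled 4x with a 16x16 guide image (values hypothetical).
      rng = np.random.default_rng(1)
      img_hr = rng.random((16, 16))
      img_lr = img_hr.reshape(4, 4, 4, 4).mean(axis=(1, 3))     # crude anti-alias prefilter + downsample
      depth_lr = rng.random((4, 4))
      print(jbu_dual_range(depth_lr, img_hr, img_lr, scale=4).shape)

    The multistep variant described in the abstract would apply a routine like this repeatedly with scale=2, recomputing the prefiltered guide image at each intermediate resolution.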

  9. Depth-resolved imaging of colon tumor using optical coherence tomography and fluorescence laminar optical tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tang, Qinggong; Frank, Aaron; Wang, Jianting; Chen, Chao-wei; Jin, Lily; Lin, Jon; Chan, Joanne M.; Chen, Yu

    2016-03-01

    Early detection of neoplastic changes remains a critical challenge in clinical cancer diagnosis and treatment. Many cancers arise from epithelial layers such as those of the gastrointestinal (GI) tract. Current standard endoscopic technology is unable to detect those subsurface lesions. Since cancer development is associated with both morphological and molecular alterations, imaging technologies that can quantitatively image tissue's morphological and molecular biomarkers and assess the depth extent of a lesion in real time, without the need for tissue excision, would be a major advance in GI cancer diagnostics and therapy. In this research, we investigated the feasibility of multi-modal optical imaging including high-resolution optical coherence tomography (OCT) and depth-resolved high-sensitivity fluorescence laminar optical tomography (FLOT) for structural and molecular imaging. An APC (adenomatous polyposis coli) mouse model was imaged using OCT and FLOT, and the correlated histopathological diagnosis was obtained. Quantitative structural (the scattering coefficient) and molecular imaging parameters (fluorescence intensity) from OCT and FLOT images were developed for multi-parametric analysis. This multi-modal imaging method has demonstrated the feasibility of more accurate diagnosis, with 87.4% (87.3%) sensitivity (specificity), which gives the most favorable diagnostic performance (the largest area under the receiver operating characteristic (ROC) curve). This project results in a new non-invasive multi-modal imaging platform for improved GI cancer detection, which is expected to have a major impact on detection, diagnosis, and characterization of GI cancers, as well as a wide range of epithelial cancers.

  10. Wavelet image processing applied to optical and digital holography: past achievements and future challenges

    NASA Astrophysics Data System (ADS)

    Jones, Katharine J.

    2005-08-01

    The link between wavelets and optics goes back to the work of Dennis Gabor, who both invented holography and developed Gabor decompositions. Holography involves 3-D images. Gabor decompositions involve 1-D signals. Gabor decompositions are the predecessors of wavelets. Wavelet image processing of holography, both optical holography and digital holography, will be examined with respect to past achievements and future challenges.

  11. The Relationship between University Students' Academic Achievement and Perceived Organizational Image

    ERIC Educational Resources Information Center

    Polat, Soner

    2011-01-01

    The purpose of present study was to determine the relationship between university students' academic achievement and perceived organizational image. The sample of the study was the senior students at the faculties and vocational schools in Umuttepe Campus at Kocaeli University. Because the development of organizational image is a long process, the…

  12. Using computer aided system to determine the maximum depth of visualization of B-Mode diagnostic ultrasound image

    NASA Astrophysics Data System (ADS)

    Maslebu, G.; Adi, K.; Suryono

    2016-03-01

    In the service unit of radiology, the ultrasound modality is widely used because it has advantages over other modalities: it is relatively inexpensive, non-invasive, does not use ionizing radiation, and is portable. Until now, the method for determining the visualization depth in quality control programs has been visual observation of the ultrasound image on the monitor. The purpose of this study is to develop a computer-aided system to determine the maximum depth of visualization. Data acquisition was done by using a B-Mode diagnostic ultrasound machine and the Multi-purpose Multi-tissue Ultrasound Phantom model 040GSE. The phantom was scanned at fixed frequencies of 1.8 MHz, 2.2 MHz, 3.6 MHz and 5.0 MHz with gain settings of 30 dB, 45 dB, and 60 dB. Global thresholding and a Euclidean distance method were used to determine the maximum visualization depth. This study shows that the computer-aided approach provides deeper visualization than visual interpretation. The differences between expert verification and the results of the image processing are <6%. Thus, the computer-aided system can be used for the purpose of quality control in determining the maximum visualization depth of B-Mode diagnostic ultrasound images.
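
    As an illustrative, hypothetical sketch of the general idea (apply a global intensity threshold, then take the largest Euclidean distance from the transducer position to any remaining speckle pixel); the threshold, geometry and pixel spacing below are placeholders, not the authors' calibrated procedure.

      import numpy as np

      def max_visualization_depth(bmode, pixel_mm, threshold, apex_rc=(0, None)):
          """bmode: 2-D grayscale B-mode image (rows increase with depth).
          pixel_mm: physical size of one (square) pixel in mm.
          threshold: global intensity threshold separating speckle from the noise floor.
          apex_rc: (row, col) of the transducer apex; col=None means the top-centre column."""
          rows, cols = bmode.shape
          apex_r = apex_rc[0]
          apex_c = apex_rc[1] if apex_rc[1] is not None else cols // 2
          rr, cc = np.nonzero(bmode >= threshold)
          if rr.size == 0:
              return 0.0
          dist_px = np.sqrt((rr - apex_r) ** 2 + (cc - apex_c) ** 2)   # Euclidean distance in pixels
          return float(dist_px.max() * pixel_mm)

      # Toy image: speckle signal fading with depth plus electronic noise (hypothetical values).
      rng = np.random.default_rng(2)
      signal = np.linspace(200.0, 5.0, 256)[:, None]                   # fades from top to bottom
      img = signal + rng.normal(0.0, 4.0, size=(256, 128))
      print("maximum visualization depth (mm):",
            round(max_visualization_depth(img, pixel_mm=0.3, threshold=30.0), 1))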

  13. Application of Depth of Investigation index method to process resistivity imaging models from glacier forefield

    NASA Astrophysics Data System (ADS)

    Glazer, Michał; Dobinski, Wojciech; Grabiec, Mariusz

    2015-04-01

    At the end of August 2014, ERT measurements were carried out at the Storglaciären glacier forefield (Tarfala Valley, Northern Sweden) to study permafrost occurrence. This glacier has been retreating since 1910. It is one of the most well-studied mountain glaciers in the world owing to the initiation there of the first continuous glacier mass balance research program. Near its frontal margin, three perpendicular and two parallel resistivity profile lines were located. They varied in the number of roll-along extensions and the electrode spacing used. At a minimum, Schlumberger and dipole-dipole protocols were utilized at every measurement site. The surface of the glacier forefield is characterized by large moraine deposits consisting of rock blocks with air voids on the one hand and voids filled with clay material on the other. This caused large variations in electrode contact resistance along the profile lines. Furthermore, the possibility of using only weak currents and the presence of high-resistivity-contrast structures in the geological medium made the inversion process and the interpretation of the resulting resistivity models demanding. To stabilize the inversion process, efforts were made to remove the noisiest data and data affected by systematic errors. To assess the reliability of the resistivity models at depth, and the presence of artifacts left by the inversion process, the Depth of Investigation (DOI) index was applied. It describes the accuracy of the model with respect to variations in the inversion parameters. To prepare DOI maps, two inversions of the same data set using different reference models are necessary, and the results are then compared with each other. In regions where the model depends strongly on the data, the DOI takes values near zero, while in regions where the resistivity values depend more on the inversion parameters, the DOI rises. Additionally, several synthetic models were made, which led to a better understanding of the resistivity images of some geological structures observed on the
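
    For reference, a minimal sketch of the commonly used style of DOI computation (two inversions of the same data set with different uniform reference models; values near 0 indicate cells constrained by the data, values toward 1 indicate cells dominated by the reference model). This follows the widely used Oldenburg-and-Li-type definition and is not necessarily the exact formula applied in this study; the arrays are invented.

      import numpy as np

      def doi_index(model_a, model_b, ref_a, ref_b):
          """model_a, model_b: inverted log10-resistivity sections obtained with uniform
          reference (starting) models ref_a and ref_b (in log10 ohm*m).
          Returns a map that is ~0 where the data control the model and grows where
          the result depends mostly on the chosen reference model."""
          return (model_a - model_b) / (ref_a - ref_b)

      # Toy 3x4 sections: the two inversions agree near the surface and drift apart at depth.
      ref_a, ref_b = 1.0, 3.0    # log10 of 10 and 1000 ohm*m reference half-spaces (hypothetical)
      model_a = np.array([[2.0, 2.1, 2.0, 1.9],
                          [2.2, 2.3, 2.2, 2.1],
                          [1.6, 1.7, 1.8, 1.7]])
      model_b = np.array([[2.0, 2.1, 2.1, 1.9],
                          [2.3, 2.4, 2.3, 2.2],
                          [2.6, 2.8, 2.9, 2.8]])
      print(np.round(doi_index(model_a, model_b, ref_a, ref_b), 2))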

  14. Stimulated emission reduced fluorescence microscopy: a concept for extending the fundamental depth limit of two-photon fluorescence imaging.

    PubMed

    Wei, Lu; Chen, Zhixing; Min, Wei

    2012-06-01

    Two-photon fluorescence microscopy has become an indispensable tool for imaging scattering biological samples by detecting scattered fluorescence photons generated from a spatially confined excitation volume. However, this optical sectioning capability breaks down eventually when imaging much deeper, as the out-of-focus fluorescence gradually overwhelms the in-focal signal in the scattering samples. The resulting loss of image contrast defines a fundamental imaging-depth limit, which cannot be overcome by increasing excitation efficiency. Herein we propose to extend this depth limit by performing stimulated emission reduced fluorescence (SERF) microscopy in which the two-photon excited fluorescence at the focus is preferentially switched on and off by a modulated and focused laser beam that is capable of inducing stimulated emission of the fluorophores from the excited states. The resulting image, constructed from the reduced fluorescence signal, is found to exhibit a significantly improved signal-to-background contrast owing to its overall higher-order nonlinear dependence on the incident laser intensity. We demonstrate this new concept by both analytical theory and numerical simulations. For brain tissues, SERF is expected to extend the imaging depth limit of two-photon fluorescence microscopy by a factor of more than 1.8.

  15. Stimulated emission reduced fluorescence microscopy: a concept for extending the fundamental depth limit of two-photon fluorescence imaging

    PubMed Central

    Wei, Lu; Chen, Zhixing; Min, Wei

    2012-01-01

    Two-photon fluorescence microscopy has become an indispensable tool for imaging scattering biological samples by detecting scattered fluorescence photons generated from a spatially confined excitation volume. However, this optical sectioning capability breaks down eventually when imaging much deeper, as the out-of-focus fluorescence gradually overwhelms the in-focal signal in the scattering samples. The resulting loss of image contrast defines a fundamental imaging-depth limit, which cannot be overcome by increasing excitation efficiency. Herein we propose to extend this depth limit by performing stimulated emission reduced fluorescence (SERF) microscopy in which the two-photon excited fluorescence at the focus is preferentially switched on and off by a modulated and focused laser beam that is capable of inducing stimulated emission of the fluorophores from the excited states. The resulting image, constructed from the reduced fluorescence signal, is found to exhibit a significantly improved signal-to-background contrast owing to its overall higher-order nonlinear dependence on the incident laser intensity. We demonstrate this new concept by both analytical theory and numerical simulations. For brain tissues, SERF is expected to extend the imaging depth limit of two-photon fluorescence microscopy by a factor of more than 1.8. PMID:22741091

  16. Performance comparison between 8 and 14 bit-depth imaging in polarization-sensitive swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.

    2011-03-01

    We compare true 8 and 14 bit-depth imaging of SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3μm wavelength by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of an equine tendon sample indicate no significant differences between images acquired by the two DAQ boards, suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when the image data sampled with the 14-bit DAQ from human finger skin are artificially bit-reduced during post-processing. However, in agreement with the results reported previously, we also observe that in our system the real-world 8-bit image shows more artifacts than the image obtained by numerically truncating the raw 14-bit image data to 8 bits, especially in low-intensity image areas. This is due to the higher noise floor and reduced dynamic range of the 8-bit DAQ. One possible disadvantage is a reduced imaging dynamic range, which can manifest itself as an increase in image artefacts due to strong Fresnel reflection.
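
    Purely for illustration, the numerical truncation used for that comparison amounts to keeping the 8 most significant of the 14 bits; a hypothetical snippet:

      import numpy as np

      # Toy 14-bit samples (values 0..16383), stored in 16-bit words.
      raw14 = np.random.default_rng(3).integers(0, 2 ** 14, size=2048, dtype=np.uint16)

      # Numerical truncation to 8 bits: drop the 6 least significant bits.
      trunc8 = (raw14 >> 6).astype(np.uint8)
      print(raw14[:4], "->", trunc8[:4])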

  17. High-resolution, dual-depth spectral-domain optical coherence tomography with interlaced detection for whole-eye imaging.

    PubMed

    Kim, Hyung-Jin; Kim, Pil Un; Hyeon, Min Gyu; Choi, Youngwoon; Kim, Jeehyun; Kim, Beop-Min

    2016-09-10

    Dual-depth spectral-domain optical coherence tomography (SD-OCT) enables high-resolution in vivo whole-eye imaging. Two orthogonally polarized beams from a source are focused simultaneously on two axial positions of the anterior segment and the retina. For the detector arm, a 1×2 ultrafast optical switch sequentially delivers two spectral interference signals to a single spectrometer, which extends the in-air axial depth range up to 9.44 mm. An off-pivot complex conjugate removal technique doubles the depth range for all anterior segment imaging. The graphics-processing-unit-based parallel signal processing algorithm supports fast two- and three-dimensional image displays. The obtained high-resolution anterior and retinal images are measured biometrically. The dual-depth SD-OCT system has an axial resolution of ∼6.4  μm in air, and the sensitivity is 91.79 dB at 150 μm from the zero-delay line. PMID:27661354

  18. Depths, Diameters, and Profiles of Small Lunar Craters From LROC NAC Stereo Images

    NASA Astrophysics Data System (ADS)

    Stopar, J. D.; Robinson, M.; Barnouin, O. S.; Tran, T.

    2010-12-01

    Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images (pixel scale ~0.5 m) provide new 3-D views of small craters (40 m < D < 200 m). We extracted topographic profiles from 85 of these craters in mare and highland terrains between 18.1-19.1°N and 5.2-5.4°E to investigate relationships among crater shape, age, and target. Obvious secondary craters (e.g., clustered) and moderately- to heavily-degraded craters were excluded. The freshest craters included in the study have crisp rims, bright ejecta, and no superposed craters. The depth, diameter, and profiles of each crater were determined from a NAC-derived DTM (M119808916/M119815703) tied to LOLA topography with better than 1 m vertical resolution (see [1]). Depth/diameter ratios for the selected craters are generally between 0.12 and 0.2. Crater profiles were classified into one of 3 categories: V-shaped, U-shaped, or intermediate (craters on steep slopes were excluded). Craters were then morphologically classified according to [2], where crater shape is determined by changes in material strength between subsurface layers, resulting in bowl-shaped, flat-bottomed, concentric, or central-mound crater forms. In this study, craters with U-shaped profiles tend to be small (<60 m) and flat-bottomed, while V-shaped craters have steep slopes (~20°), little to no floor, and a range of diameters. Both fresh and relatively degraded craters display the full range of profile shapes (from U to V and all stages in between). We found it difficult to differentiate U-shaped craters from V-shaped craters without the DTM, and we saw no clear correlation between morphologic and profile classification. Further study is still needed to increase our crater statistics and expand on the relatively small population of craters included here. For the craters in this study, we found that block abundances correlate with relative crater degradation state as defined by [3], where abundant blocks signal fresher craters; however
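
    As a hypothetical sketch of extracting depth and diameter from a single DTM profile (rim-to-rim distance and mean-rim-to-floor relief); the rim-picking rule and the synthetic profile are placeholder assumptions, not the LROC processing chain.

      import numpy as np

      def crater_depth_diameter(dist_m, elev_m):
          """dist_m, elev_m: distance and elevation samples along a profile across the crater.
          Rims are taken as the highest points on either side of the lowest (floor) sample."""
          floor = int(np.argmin(elev_m))
          left_rim = int(np.argmax(elev_m[:floor + 1]))
          right_rim = floor + int(np.argmax(elev_m[floor:]))
          diameter = dist_m[right_rim] - dist_m[left_rim]
          depth = 0.5 * (elev_m[left_rim] + elev_m[right_rim]) - elev_m[floor]
          return depth, diameter, depth / diameter

      # Toy bowl-shaped profile roughly 80 m across and ~13 m deep (hypothetical DTM samples).
      x = np.linspace(-60.0, 60.0, 121)
      bowl = np.where(np.abs(x) < 40.0, -12.0 * (1.0 - (x / 40.0) ** 2), 0.0)
      rims = np.where(np.abs(np.abs(x) - 40.0) < 5.0, 1.5, 0.0)
      d, D, ratio = crater_depth_diameter(x, bowl + rims)
      print("depth (m):", round(d, 1), "diameter (m):", round(D, 1), "d/D:", round(ratio, 2))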

  19. Repeatability and Reproducibility of Manual Choroidal Volume Measurements Using Enhanced Depth Imaging Optical Coherence Tomography

    PubMed Central

    Chhablani, Jay; Barteselli, Giulio; Wang, Haiyan; El-Emam, Sharif; Kozak, Igor; Doede, Aubrey L.; Bartsch, Dirk-Uwe; Cheng, Lingyun; Freeman, William R.

    2012-01-01

    Purpose To evaluate the repeatability and reproducibility of manual choroidal volume (CV) measurements by spectral-domain optical coherence tomography (SD-OCT) using enhanced depth imaging (EDI). Methods Sixty eyes of 32 patients with or without any ocular chorioretinal diseases were enrolled prospectively. Thirty-one choroidal scans were performed on each eye, centered at the fovea, using a raster protocol. Two masked observers demarcated choroidal boundaries by using built-in automated retinal segmentation software in two separate sessions. Observers were masked to each other's and their own previous readings. A standardized grid centered on the fovea was positioned automatically by the OCT software, and values for average CVs and total CVs in three concentric rings were noted. The agreement between intraobserver measurements and between interobserver measurements was assessed using the concordance correlation coefficient (CCC). Bland-Altman plots were used to assess the clinically relevant magnitude of differences between inter- and intraobserver measurements. Results The interobserver CCC for the overall average CV was very high, 0.9956 (95% confidence interval [CI], 0.991–0.9968). CCCs for all three Early Treatment Diabetic Retinopathy Study concentric rings between the two graders were 0.98 to 0.99 (95% CI, 0.97–0.98). Similarly, the intraobserver repeatability of the two graders also ranged from 0.98 to 0.99. The interobserver coefficient of reproducibility was approximately 0.42 mm³ (95% CI, 0.34–0.5 mm³) for the average CV. Conclusions CV measurement by manual segmentation using built-in automated retinal segmentation software on EDI-SD-OCT is highly reproducible and repeatable and has a very small range of variability. PMID:22427584
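
    For reference, the concordance correlation coefficient used for the agreement analysis can be computed as 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2); a small sketch on hypothetical paired volume readings:

      import numpy as np

      def concordance_cc(x, y):
          """Lin's concordance correlation coefficient between two sets of paired readings."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          cov = np.mean((x - x.mean()) * (y - y.mean()))
          return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

      # Hypothetical average choroidal volumes (mm^3) from two graders for eight eyes.
      grader1 = [8.1, 7.4, 9.0, 6.8, 10.2, 7.9, 8.6, 9.4]
      grader2 = [8.0, 7.5, 9.1, 6.9, 10.0, 8.0, 8.5, 9.6]
      print("CCC:", round(concordance_cc(grader1, grader2), 4))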

  20. Images of interlayer vortices and c-axis penetration depth of high-Tc YBa2Cu3O7-y superconductor

    NASA Astrophysics Data System (ADS)

    Iguchi, Ienari; Takeda, Tomohiro; Uchiyama, Tetsuji; Sugimoto, Akira; Hatano, Takeshi

    2006-06-01

    Magnetic imaging of interlayer vortices was performed on a high-Tc YBa2Cu3O7-y (110) thin film using a highly sensitive scanning SQUID microscope. Clear images of aligned giant interlayer vortices were observed. For the majority of vortices, the c-axis penetration depth estimated using the London model is about 20 μm at 3 K. The temperature dependence of λc was obtained from vortex images recorded at different temperatures, and its behavior is in good agreement with that of microwave cavity measurements.

  1. Review of spectral imaging technology in biomedical engineering: achievements and challenges.

    PubMed

    Li, Qingli; He, Xiaofu; Wang, Yiting; Liu, Hongying; Xu, Dongrong; Guo, Fangmin

    2013-10-01

    Spectral imaging is a technology that integrates conventional imaging and spectroscopy to get both spatial and spectral information from an object. Although this technology was originally developed for remote sensing, it has been extended to the biomedical engineering field as a powerful analytical tool for biological and biomedical research. This review introduces the basics of spectral imaging, imaging methods, current equipment, and recent advances in biomedical applications. The performance and analytical capabilities of spectral imaging systems for biological and biomedical imaging are discussed. In particular, the current achievements and limitations of this technology in biomedical engineering are presented. The benefits and development trends of biomedical spectral imaging are highlighted to provide the reader with an insight into the current technological advances and its potential for biomedical research.

  2. Depth-resolved imaging and detection of micro-retroreflectors within biological tissue using Optical Coherence Tomography

    PubMed Central

    Ivers, Steven N.; Baranov, Stephan A.; Sherlock, Tim; Kourentzi, Katerina; Ruchhoeft, Paul; Willson, Richard; Larin, Kirill V.

    2010-01-01

    A new approach to in vivo biosensor design is introduced, based on the use of an implantable micron-sized retroreflector-based platform and non-invasive imaging of its surface reflectivity by Optical Coherence Tomography (OCT). The possibility of using OCT for the depth-resolved imaging and detection of micro-retroreflectors in highly turbid media, including tissue, is demonstrated. The maximum imaging depth for the detection of the micro-retroreflector-based platform within the surrounding media was found to be 0.91 mm for porcine tissue and 1.65 mm for whole milk. With further development, it may be possible to utilize OCT and micro-retroreflectors as a tool for continuous monitoring of analytes in the subcutaneous tissue. PMID:21258473

  3. Implementation of wireless 3D stereo image capture system and synthesizing the depth of region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Kwon, Hyeokjae; Badarch, Luubaatar

    2014-05-01

    In this paper, we introduce a mobile embedded system for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. The raw captured image data are sent to the host computer over a WiFi wireless link, and GPU hardware with CUDA programming is then used to implement real-time three-dimensional stereo imaging by synthesizing the depth of the ROI (region of interest). We also investigate the deblurring mechanism of the CMOS camera module based on the Kirchhoff diffraction formula and propose a deblurring model. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to validate the ROI emphasis effect.

  4. Extended depth from focus reconstruction using NIH ImageJ plugins: quality and resolution of elevation maps.

    PubMed

    Hein, Luis Rogerio De Oliveira; De Oliveira, José Alberto; De Campos, Kamila Amato; Caltabiano, Pietro Carelli Reis De Oliveira

    2012-11-01

    In this work, NIH ImageJ plugins for extended depth-from-focus reconstruction (EDFR) based on spatial-domain operations were compared and tested for usage optimization. Some preprocessing solutions for light microscopy image stacks were also evaluated, suggesting a general routine for the ImageJ user to obtain reliable elevation maps from grayscale image stacks. Two reflected-light microscope image stacks were used to test the EDFR plugins: one bright-field image stack of the fracture of a carbon-epoxy composite and its corresponding dark-field stack at the same (x,y,z) spatial coordinates. Image quality analysis consisted of comparing signal-to-noise ratio and resolution parameters with the consistency of the elevation maps, based on roughness and fractal measurements. Dark-field illumination contributed to enhancing the homogeneity of the images in the stack and of the resulting height maps, reducing the influence of digital image processing choices on the dispersion of topographic measurements. The subtract-background filter, as a preprocessing tool, contributed to producing sharper focused images. In general, increasing the kernel size of spatial-domain EDFR solutions produces smoother height maps. Finally, the main objective of this work is to establish suitable guidelines for generating elevation maps by light microscopy.

  5. Predictive models of turbidity and water depth in the Doñana marshes using Landsat TM and ETM+ images.

    PubMed

    Bustamante, Javier; Pacios, Fernando; Díaz-Delgado, Ricardo; Aragonés, David

    2009-05-01

    We have used Landsat-5 TM and Landsat-7 ETM+ images together with simultaneous ground-truth data at sample points in the Doñana marshes to predict water turbidity and depth from band reflectance using Generalized Additive Models. We have point samples for 12 different dates simultaneous with 7 Landsat-5 and 5 Landsat-7 overpasses. The best model for water turbidity in the marsh explained 38% of variance in ground-truth data and included as predictors band 3 (630-690 nm), band 5 (1550-1750 nm) and the ratio between bands 1 (450-520 nm) and 4 (760-900 nm). Water turbidity is easier to predict for water bodies like the Guadalquivir River and artificial ponds that are deep and not affected by bottom soil reflectance and aquatic vegetation. For the latter, a simple model using band 3 reflectance explains 78.6% of the variance. Water depth is easier to predict than turbidity. The best model for water depth in the marsh explains 78% of the variance and includes as predictors band 1, band 5, the ratio between band 2 (520-600 nm) and band 4, and bottom soil reflectance in band 4 in September, when the marsh is dry. The water turbidity and water depth models have been developed in order to reconstruct historical changes in Doñana wetlands during the last 30 years using the Landsat satellite images time series.
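    A minimal sketch of the modelling step, assuming Python and the pyGAM package as a stand-in for the GAM software actually used by the authors: each predictor (band 1, band 5, the band 2/band 4 ratio, and dry-season band 4 soil reflectance) enters as a smooth term, and the model is fitted to synthetic reflectance-depth pairs rather than the Doñana ground-truth data.

```python
import numpy as np
from pygam import LinearGAM, s   # assumes the pyGAM package is installed

# Synthetic stand-in for the ground-truth table: band reflectances and water depth (m).
rng = np.random.default_rng(1)
n = 200
band1 = rng.uniform(0.02, 0.15, n)   # 450-520 nm
band2 = rng.uniform(0.03, 0.20, n)   # 520-600 nm
band4 = rng.uniform(0.05, 0.40, n)   # 760-900 nm
band5 = rng.uniform(0.01, 0.25, n)   # 1550-1750 nm
soil4 = rng.uniform(0.10, 0.35, n)   # dry-season band-4 bottom soil reflectance
depth = (0.8 - 1.5 * band5 + 0.5 * (band2 / band4) - 0.6 * soil4
         + rng.normal(0.0, 0.05, n))

X = np.column_stack([band1, band5, band2 / band4, soil4])
gam = LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(X, depth)  # one smooth term per predictor

pred = gam.predict(X)
r2 = 1.0 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
print("variance explained on the synthetic training set:", round(r2, 3))
print("predicted depth for the first sample:", round(float(pred[0]), 2), "m")
```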

  6. Enhanced depth imaging optical coherence tomography of choroidal osteoma with secondary neovascular membranes: report of two cases.

    PubMed

    Mello, Patrícia Correa de; Berensztejn, Patricia; Brasil, Oswaldo Ferreira Moura

    2016-01-01

    We report enhanced depth imaging optical coherence tomography (EDI-OCT) features based on clinical and imaging data from two newly diagnosed cases of choroidal osteoma presenting with recent visual loss secondary to choroidal neovascular membranes. The features described in the two cases, compression of the choriocapillaris and disorganization of the medium and large vessel layers, are consistent with those of previous reports. We noticed a sponge-like pattern previously reported, but it was subtle. Both lesions had multiple intralesional layers and a typical intrinsic transparency with visibility of the sclerochoroidal junction. PMID:27463635

  7. Enhanced depth imaging optical coherence tomography of choroidal osteoma with secondary neovascular membranes: report of two cases.

    PubMed

    Mello, Patrícia Correa de; Berensztejn, Patricia; Brasil, Oswaldo Ferreira Moura

    2016-01-01

    We report enhanced depth imaging optical coherence tomography (EDI-OCT) features based on clinical and imaging data from two newly diagnosed cases of choroidal osteoma presenting with recent visual loss secondary to choroidal neovascular membranes. The features described in the two cases, compression of the choriocapillaris and disorganization of the medium and large vessel layers, are consistent with those of previous reports. We noticed a sponge-like pattern previously reported, but it was subtle. Both lesions had multiple intralesional layers and a typical intrinsic transparency with visibility of the sclerochoroidal junction.

  8. Penetration depth in tissue-mimicking phantoms from hyperspectral imaging in SWIR in transmission and reflection geometry

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Salo, Daniel; Kim, David M.; Berezin, Mikhail Y.

    2016-03-01

    We explored depth penetration in tissue-mimicking, intralipid-based phantoms in the SWIR range (800-1650 nm) using a hyperspectral imaging system composed of a 2D CCD camera coupled to a microscope. Hyperspectral images in transmission and reflection geometries were collected with a spectral resolution of 5.27 nm and a total acquisition time of 3 minutes or less, which minimized artifacts from sample drying. Michelson spatial contrast was used as a metric to evaluate light penetration. Results from both transmission and reflection geometries consistently revealed the highest spatial contrast in the wavelength range of 1300 to 1350 nm.

  9. Enhanced 3D prestack depth imaging of broadband data from the South China Sea: a case study

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Xu, Jincheng; Li, Jinbo

    2016-08-01

    We present a case study of prestack depth imaging for data from the South China Sea using an enhanced workflow built on cutting-edge technologies. In the survey area, the presence of complex geologies such as carbonate pinnacles and gas pockets creates challenges for processing and imaging: the complex geometry of the carbonates produces 3D effects on wave propagation; deriving velocity inside the carbonates and gas pockets is difficult and laborious; and localised strong attenuation from gas pockets may lead to absorption and dispersion problems. In the course of developing the enhanced workflow to tackle these issues, the following processing steps had the most significant impact on improving image quality: (1) 3D ghost wavefield attenuation, in particular to remove the ghost energy associated with complex structures; (2) 3D surface-related multiple elimination (SRME) to remove multiples, in particular multiples related to complex carbonate structures; (3) full waveform inversion (FWI) and tomography-based velocity model building, to derive a geologically plausible velocity model for imaging; (4) Q-tomography to estimate the Q model that describes the intrinsic attenuation of the subsurface media; (5) de-absorption prestack depth migration (Q-PSDM) to compensate for the earth's absorption and dispersion effects during imaging, especially for the area below the gas pockets. The case study with the data from the South China Sea shows that the enhanced workflow of cutting-edge technologies is effective when complex geologies are present.

  10. Depth-correction algorithm that improves optical quantification of large breast lesions imaged by diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Tavakoli, Behnoosh; Zhu, Quing

    2011-05-01

    Optical quantification of large lesions imaged with diffuse optical tomography in reflection geometry is depth dependent because of the exponential decay of photon density waves. We introduce a depth-correction method that incorporates the target depth information provided by coregistered ultrasound. It is based on balancing the weight matrix, using the maximum singular values of the target layers in depth, without changing the forward model. The performance of the method is evaluated using phantom targets and 10 clinical cases of larger malignant and benign lesions. The results for the homogeneous targets demonstrate that the location error of the reconstructed maximum absorption coefficient is reduced to the range of the reconstruction mesh size for phantom targets. Furthermore, the uniformity of the absorption distribution inside the lesions improves approximately twofold, and the median of the absorption increases from 60% to 85% of its maximum compared with no depth correction. In addition, nonhomogeneous phantoms are characterized more accurately. Clinical examples show a similar trend to the phantom results and demonstrate the utility of the correction method for improving lesion quantification.
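    One plausible reading of the weight-matrix balancing step is sketched below: the columns of the sensitivity (weight) matrix are grouped by depth layer and each layer's block is rescaled so that all layers share the same maximum singular value, which counteracts the exponential loss of sensitivity with depth without touching the forward model. The matrix sizes, layer labels, and decay factor are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def balance_weight_matrix(W, layer_index):
    """Scale each depth layer's block of the sensitivity matrix W so that
    all layers share the same maximum singular value.

    W           : (n_measurements, n_voxels) weight/sensitivity matrix
    layer_index : (n_voxels,) integer depth-layer label for each voxel
    Returns the balanced matrix and the per-voxel scale factors (needed to
    map the reconstructed absorption back to physical units)."""
    W = np.asarray(W, float)
    layer_index = np.asarray(layer_index)
    max_sv = {}
    for layer in np.unique(layer_index):
        block = W[:, layer_index == layer]
        max_sv[layer] = np.linalg.svd(block, compute_uv=False)[0]
    reference = max(max_sv.values())
    scale = np.array([reference / max_sv[l] for l in layer_index])
    return W * scale[None, :], scale

# Toy example: deeper layers have exponentially weaker sensitivity.
rng = np.random.default_rng(2)
layers = np.repeat([0, 1, 2], 50)                   # 3 depth layers, 50 voxels each
W = rng.random((40, 150)) * np.exp(-2.0 * layers)   # sensitivity decays with depth
Wb, scale = balance_weight_matrix(W, layers)
print("max singular value per layer before:",
      [round(np.linalg.svd(W[:, layers == l], compute_uv=False)[0], 2) for l in (0, 1, 2)])
print("after balancing:",
      [round(np.linalg.svd(Wb[:, layers == l], compute_uv=False)[0], 2) for l in (0, 1, 2)])
```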

  11. Multi-Mode Electromagnetic Ultrasonic Lamb Wave Tomography Imaging for Variable-Depth Defects in Metal Plates.

    PubMed

    Huang, Songling; Zhang, Yu; Wang, Shen; Zhao, Wei

    2016-05-02

    This paper proposes a new cross-hole tomography imaging (CTI) method for variable-depth defects in metal plates based on multi-mode electromagnetic ultrasonic Lamb waves (LWs). The dispersion characteristics determine that different modes of LWs are sensitive to different thicknesses of metal plates. In this work, the sensitivities to thickness variation of A0- and S0-mode LWs are theoretically studied. The principles and procedures for the cooperation of A0- and S0-mode LW CTI are proposed. Moreover, the experimental LW imaging system on an aluminum plate with a variable-depth defect is set up, based on A0- and S0-mode EMAT (electromagnetic acoustic transducer) arrays. For comparison, the traditional single-mode LW CTI method is used in the same experimental platform. The imaging results show that the computed thickness distribution by the proposed multi-mode method more accurately reflects the actual thickness variation of the defect, while neither the S0 nor the A0 single-mode method was able to distinguish thickness variation in the defect region. Moreover, the quantification of the defect's thickness variation is more accurate with the multi-mode method. Therefore, theoretical and practical results prove that the variable-depth defect in metal plates can be successfully quantified and visualized by the proposed multi-mode electromagnetic ultrasonic LW CTI method.

  12. Multi-Mode Electromagnetic Ultrasonic Lamb Wave Tomography Imaging for Variable-Depth Defects in Metal Plates.

    PubMed

    Huang, Songling; Zhang, Yu; Wang, Shen; Zhao, Wei

    2016-01-01

    This paper proposes a new cross-hole tomography imaging (CTI) method for variable-depth defects in metal plates based on multi-mode electromagnetic ultrasonic Lamb waves (LWs). The dispersion characteristics determine that different modes of LWs are sensitive to different thicknesses of metal plates. In this work, the sensitivities to thickness variation of A0- and S0-mode LWs are theoretically studied. The principles and procedures for the cooperation of A0- and S0-mode LW CTI are proposed. Moreover, the experimental LW imaging system on an aluminum plate with a variable-depth defect is set up, based on A0- and S0-mode EMAT (electromagnetic acoustic transducer) arrays. For comparison, the traditional single-mode LW CTI method is used in the same experimental platform. The imaging results show that the computed thickness distribution by the proposed multi-mode method more accurately reflects the actual thickness variation of the defect, while neither the S0 nor the A0 single-mode method was able to distinguish thickness variation in the defect region. Moreover, the quantification of the defect's thickness variation is more accurate with the multi-mode method. Therefore, theoretical and practical results prove that the variable-depth defect in metal plates can be successfully quantified and visualized by the proposed multi-mode electromagnetic ultrasonic LW CTI method. PMID:27144571

  13. Multi-Mode Electromagnetic Ultrasonic Lamb Wave Tomography Imaging for Variable-Depth Defects in Metal Plates

    PubMed Central

    Huang, Songling; Zhang, Yu; Wang, Shen; Zhao, Wei

    2016-01-01

    This paper proposes a new cross-hole tomography imaging (CTI) method for variable-depth defects in metal plates based on multi-mode electromagnetic ultrasonic Lamb waves (LWs). The dispersion characteristics determine that different modes of LWs are sensitive to different thicknesses of metal plates. In this work, the sensitivities to thickness variation of A0- and S0-mode LWs are theoretically studied. The principles and procedures for the cooperation of A0- and S0-mode LW CTI are proposed. Moreover, the experimental LW imaging system on an aluminum plate with a variable-depth defect is set up, based on A0- and S0-mode EMAT (electromagnetic acoustic transducer) arrays. For comparison, the traditional single-mode LW CTI method is used in the same experimental platform. The imaging results show that the computed thickness distribution by the proposed multi-mode method more accurately reflects the actual thickness variation of the defect, while neither the S0 nor the A0 single-mode method was able to distinguish thickness variation in the defect region. Moreover, the quantification of the defect’s thickness variation is more accurate with the multi-mode method. Therefore, theoretical and practical results prove that the variable-depth defect in metal plates can be successfully quantified and visualized by the proposed multi-mode electromagnetic ultrasonic LW CTI method. PMID:27144571

  14. New Fast Fall Detection Method Based on Spatio-Temporal Context Tracking of Head by Using Depth Images

    PubMed Central

    Yang, Lei; Ren, Yanyun; Hu, Huosheng; Tian, Bo

    2015-01-01

    In order to deal with the problem of projection that occurs in fall detection with two-dimensional (2D) grey or color images, this paper proposes a robust fall detection method based on spatio-temporal context tracking over three-dimensional (3D) depth images captured by the Kinect sensor. In the pre-processing procedure, the parameters of the Single-Gauss-Model (SGM) are estimated and the coefficients of the floor plane equation are extracted from the background images. Once a human subject appears in the scene, the silhouette is extracted by the SGM and the foreground coefficient of ellipses is used to determine the head position. The dense spatio-temporal context (STC) algorithm is then applied to track the head position, and the distance from the head to the floor plane is calculated in every following frame of the depth image. When this distance falls below an adaptive threshold, the centroid height of the human is used as a second judgment criterion to decide whether a fall incident has happened. Lastly, four groups of experiments with different falling directions were performed. Experimental results show that the proposed method can detect fall incidents occurring in different orientations with only low computational complexity. PMID:26378540
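    The two-stage decision described here can be summarised with a short sketch: the head height is the point-to-plane distance to the estimated floor, and a fall is flagged only if both the head height and the body centroid height drop below thresholds. The threshold values, plane coefficients, and head positions below are hypothetical placeholders (the paper uses an adaptive head threshold rather than a fixed one).

```python
import numpy as np

def head_to_floor_distance(head_xyz, plane_abcd):
    """Distance from a 3-D head position to the floor plane
    a*x + b*y + c*z + d = 0 (coefficients estimated from the depth image)."""
    a, b, c, d = plane_abcd
    return abs(a * head_xyz[0] + b * head_xyz[1] + c * head_xyz[2] + d) / \
           np.sqrt(a * a + b * b + c * c)

def is_fall(head_heights_m, centroid_height_m,
            head_threshold_m=0.4, centroid_threshold_m=0.3):
    """Simplified two-stage decision: the tracked head must end up below a
    threshold and the body centroid height must also be low."""
    return head_heights_m[-1] < head_threshold_m and centroid_height_m < centroid_threshold_m

floor = (0.0, 1.0, 0.0, 0.0)              # hypothetical floor plane: y = 0
head_positions = [(0.1, 1.7, 2.0), (0.2, 1.1, 2.1), (0.3, 0.3, 2.2)]
heights = [head_to_floor_distance(p, floor) for p in head_positions]
print("head heights (m):", [round(h, 2) for h in heights])
print("fall detected:", is_fall(heights, centroid_height_m=0.25))
```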

  15. New Fast Fall Detection Method Based on Spatio-Temporal Context Tracking of Head by Using Depth Images.

    PubMed

    Yang, Lei; Ren, Yanyun; Hu, Huosheng; Tian, Bo

    2015-09-11

    In order to deal with the problem of projection that occurs in fall detection with two-dimensional (2D) grey or color images, this paper proposes a robust fall detection method based on spatio-temporal context tracking over three-dimensional (3D) depth images captured by the Kinect sensor. In the pre-processing procedure, the parameters of the Single-Gauss-Model (SGM) are estimated and the coefficients of the floor plane equation are extracted from the background images. Once a human subject appears in the scene, the silhouette is extracted by the SGM and the foreground coefficient of ellipses is used to determine the head position. The dense spatio-temporal context (STC) algorithm is then applied to track the head position, and the distance from the head to the floor plane is calculated in every following frame of the depth image. When this distance falls below an adaptive threshold, the centroid height of the human is used as a second judgment criterion to decide whether a fall incident has happened. Lastly, four groups of experiments with different falling directions were performed. Experimental results show that the proposed method can detect fall incidents occurring in different orientations with only low computational complexity.

  16. Compton back scatter imaging for mild steel rebar detection and depth characterization embedded in concrete

    NASA Astrophysics Data System (ADS)

    Margret, M.; Menaka, M.; Venkatraman, B.; Chandrasekaran, S.

    2015-01-01

    A novel non-destructive Compton scattering technique is described to demonstrate the feasibility, reliability and applicability of detecting reinforcing steel bars in concrete. The indigenously developed prototype system presented in this paper is capable of detecting reinforcement of varied diameters embedded in concrete at depths of up to 60 mm, with the aid of a Caesium-137 (137Cs) radioactive source and a high-resolution HPGe detector. The technique can also detect inhomogeneities in the test specimen by interpreting material density variations reflected in the count rate. The experimental results are correlated with established techniques such as radiography and rebar locators. The results obtained from its application to locating rebars are quite promising, and the method has also been successfully used for reinforcement mapping. This method is particularly applicable when the reinforcement lies beneath the concrete cover or at larger depths, and where two-sided access is restricted.

  17. Tunable semiconductor laser at 1025-1095 nm range for OCT applications with an extended imaging depth

    NASA Astrophysics Data System (ADS)

    Shramenko, Mikhail V.; Chamorovskiy, Alexander; Lyu, Hong-Chou; Lobintsov, Andrei A.; Karnowski, Karol; Yakubovich, Sergei D.; Wojtkowski, Maciej

    2015-03-01

    A tunable semiconductor laser for the 1025-1095 nm spectral range was developed, based on an InGaAs semiconductor optical amplifier and a narrow band-pass acousto-optic tunable filter in a fiber ring cavity. Mode-hop-free sweeping with tuning speeds of up to 10^4 nm/s was demonstrated. The instantaneous linewidth is in the range of 0.06-0.15 nm, side-mode suppression is up to 50 dB, and the polarization extinction ratio exceeds 18 dB. The optical power in the output single-mode fiber reaches 20 mW. The laser was used in an OCT system for imaging a contact lens immersed in a 0.5% intralipid solution. The cross-sectional image provided an imaging depth of more than 5 mm.

  18. Examination of Optical Depth Effects on Fluorescence Imaging of Cardiac Propagation

    PubMed Central

    Bray, Mark-Anthony; Wikswo, John P.

    2003-01-01

    Optical mapping with voltage-sensitive dyes provides a high-resolution technique for observing cardiac electrodynamic behavior. Although most studies assume that the fluorescent signal is emitted from the surface layer of cells, the effects of signal attenuation with depth on signal interpretation are still unclear. This simulation study examines the effects of a depth-weighted signal on epicardial activation patterns and filament localization. We simulated filament behavior using a detailed cardiac model and compared the signal obtained from the top (epicardial) layer of the spatial domain with the calculated depth-weighted signal. General observations included a prolongation of the action potential upstroke duration, early upstroke initiation, and a reduction in signal amplitude in the weighted signal. A shallow filament was found to produce a dual-humped action potential morphology consistent with previously reported observations. Simulated scroll wave breakup exhibited effects such as the false appearance of graded potentials, apparent supramaximal conduction velocities, and a spatially blurred signal whose local amplitude depends on the immediate subepicardial activity; the combination of these effects produced a corresponding change in the accuracy of filament localization. Our results indicate that the depth-dependent optical signal has significant consequences for the interpretation of epicardial activation dynamics. PMID:14645100
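    A minimal Python sketch of the idea of a depth-weighted optical signal, using a simple exponential attenuation with depth as a stand-in for the detailed photon-transport weighting of the study: the transmural stack of membrane potentials is collapsed into a single image, illustrating why subsurface activity reduces amplitude and blurs the apparent epicardial signal. All array sizes and the attenuation length are assumptions.

```python
import numpy as np

def depth_weighted_signal(vm, dz_mm, attenuation_mm=0.5):
    """Collapse a transmural stack of membrane potentials vm[z, y, x] into a
    single 'optical' image by weighting each layer with an exponential decay
    in depth (a first-order stand-in for photon attenuation)."""
    nz = vm.shape[0]
    weights = np.exp(-np.arange(nz) * dz_mm / attenuation_mm)
    weights /= weights.sum()
    return np.tensordot(weights, vm, axes=(0, 0))

# Toy stack: a step-like activation front whose depth varies across the tissue.
nz, ny, nx = 20, 64, 64
vm = np.zeros((nz, ny, nx))
front_depth = np.linspace(2, 15, nx).astype(int)
for ix in range(nx):
    vm[front_depth[ix]:, :, ix] = 1.0          # activated tissue below the front

surface_only = vm[0]                           # what a purely superficial signal would show
weighted = depth_weighted_signal(vm, dz_mm=0.1)
print("surface layer range:", surface_only.min(), surface_only.max())
print("depth-weighted range:", round(float(weighted.min()), 3),
      round(float(weighted.max()), 3))
```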

  19. Achievable spatial resolution of time-resolved transillumination imaging systems which utilize multiply scattered light

    NASA Astrophysics Data System (ADS)

    Moon, J. A.; Battle, P. R.; Bashkansky, M.; Mahon, R.; Duncan, M. D.; Reintjes, J.

    1996-01-01

    We describe theoretically and measure experimentally the best achievable time-dependent point-spread-function of light in the presence of strong turbidity. We employ the rescaled isotropic-scattering solution to the time-dependent radiative transfer equation to examine three mathematically distinct limits of photonic transport: the ballistic, quasidiffuse, and diffuse limits. In all cases we follow the constraint that a minimum fractional number of launched photons must be received before the time-integrating detector is turned off. We show how the achievable ballistic resolution maps into the diffusion-limited achievable resolution, and verify this behavior experimentally by using a coherently amplified Raman polarization gate imaging system. We are able to quantitatively fit the measured best achievable resolution by empirically rescaling the scattering length in the model.

  20. Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: Analysis and comparison

    NASA Astrophysics Data System (ADS)

    Kazmi, Wajahat; Foix, Sergi; Alenyà, Guillem; Andersen, Hans Jørgen

    2014-02-01

    In this article we analyze the response of Time-of-Flight (ToF) cameras (active sensors) for close-range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. ToF cameras are sensitive to ambient light and have low resolution, but deliver accurate, high-frame-rate depth data under suitable conditions. We introduce metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of leaves under indoor (room) and outdoor (shadow and sunlight) conditions by varying the exposure times of the sensors. The performance of three different ToF cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). The PMD CamCube has the best cancellation of sunlight, followed by the CamBoard, while the SwissRanger SR4000 performs poorly under sunlight. Stereo vision is comparatively more robust to ambient illumination and provides high-resolution depth data, but is constrained by the texture of the object and by computational efficiency. The graph-cut-based stereo correspondence algorithm better retrieves the shape of the leaves but is computationally much more expensive than local correlation. Finally, we propose a method to increase the dynamic range of ToF cameras for a scene involving both shadow and sunlight exposures at the same time by taking advantage of camera flags (PMD) or the confidence matrix (SwissRanger).

  1. Endoscopic diagnosis of invasion depth for early colorectal carcinomas: a prospective comparative study of narrow-band imaging, acetic acid, and crystal violet.

    PubMed

    Zhang, Jing-Jing; Gu, Li-Yang; Chen, Xiao-Yu; Gao, Yun-Jie; Ge, Zhi-Zheng; Li, Xiao-Bo

    2015-02-01

    Several studies have validated the effectiveness of narrow-band imaging (NBI) in estimating the invasion depth of early colorectal cancers. However, the comparative diagnostic accuracy of NBI and chromoendoscopy remains unclear. Other than crystal violet, the use of acetic acid as a new staining method to diagnose deep submucosal invasive (SM-d) carcinomas has not been extensively evaluated. We aimed to assess the diagnostic accuracy and interobserver agreement of NBI, acetic acid enhancement, and crystal violet staining in predicting the invasion depth of early colorectal cancers. A total of 112 early colorectal cancers were prospectively observed by NBI, acetic acid, and crystal violet staining in sequence by one expert colonoscopist. All endoscopic images of each technique were stored and reassessed. Finally, 294 images of 98 lesions were selected for evaluation by three less experienced endoscopists. The accuracy of NBI, acetic acid, and crystal violet for real-time diagnosis was 85.7%, 86.6%, and 92.9%, respectively. For image evaluation by novices, NBI achieved the highest accuracy of 80.6%, compared with 72.4% for acetic acid and 75.8% for crystal violet. The kappa values of NBI, acetic acid, and crystal violet among the 3 trainees were 0.74 (95% CI 0.65-0.83), 0.68 (95% CI 0.59-0.77), and 0.70 (95% CI 0.61-0.79), respectively. For the diagnosis of SM-d carcinoma, NBI was slightly inferior to crystal violet staining when performed by the expert endoscopist. However, NBI yielded higher accuracy than crystal violet staining when used by the less experienced endoscopists. Acetic acid enhancement with pit pattern analysis was capable of predicting SM-d carcinoma, comparable to traditional crystal violet staining.

  2. Common-path depth-filtered digital holography for high resolution imaging of buried semiconductor structures

    NASA Astrophysics Data System (ADS)

    Finkeldey, Markus; Schellenberg, Falk; Gerhardt, Nils C.; Paar, Christof; Hofmann, Martin R.

    2016-03-01

    We investigate digital holographic microscopy (DHM) in reflection geometry for non-destructive 3D imaging of semiconductor devices. This technique provides high-resolution information on the inner structure of a sample while maintaining its integrity. To illustrate the performance of the DHM, we use our setup to localize the precise spots for laser fault injection in the security-related field of side-channel attacks. While digital holographic microscopy techniques readily offer high-resolution phase images of surface structures in reflection geometry, they are typically incapable of providing high-quality phase images of buried structures due to the interference of waves reflected from different interfaces inside the structure. Our setup includes an sCMOS camera for image capture, arranged in a common-path interferometer to provide very high phase stability. As a proof of principle, we show sample images of the inner structure of a modern microcontroller. Finally, we compare our holographic method to classic optical beam induced current (OBIC) imaging to demonstrate its benefits.

  3. Relative capacities of time-gated versus continuous-wave imaging to localize tissue embedded vessels with increasing depth

    NASA Astrophysics Data System (ADS)

    Patel, Nimit L.; Lin, Zi-Jing; Rathore, Yajuvendra; Livingston, Edward H.; Liu, Hanli; Alexandrakis, George

    2010-01-01

    Surgeons often cannot see major vessels embedded in adipose tissue and inadvertently injure them. One such example occurs during surgical removal of the gallbladder, where injury of the nearby common bile duct leads to life-threatening complications. Near-infrared imaging of the intraoperative field may help surgeons localize such critical tissue-embedded vessels. We have investigated how continuous-wave (CW) imaging performs relative to time-gated wide-field imaging, presently a rather costly technology, under broad Gaussian-beam illumination conditions. We have studied the simplified case of an isolated cylinder having bile-duct optical properties, embedded at different depths within a 2-cm slab of adipose tissue. Monte Carlo simulations were performed for both reflectance and transillumination geometries. The relative performance of CW versus time-gated imaging was compared in terms of the spatial resolution and contrast-to-background ratio of the resulting simulated images. It was found that time-gated imaging offers superior spatial resolution and vessel-detection sensitivity in most cases, though CW transillumination measurements may also offer satisfactory performance for this tissue geometry at lower cost. Experiments were performed in reflectance geometry to validate the simulation results, and potential challenges in the translation of this technology to the clinic are discussed.

  4. Multimodal adaptive optics for depth-enhanced high-resolution ophthalmic imaging

    NASA Astrophysics Data System (ADS)

    Hammer, Daniel X.; Mujat, Mircea; Iftimia, Nicusor V.; Lue, Niyom; Ferguson, R. Daniel

    2010-02-01

    We developed a multimodal adaptive optics (AO) retinal imager for the diagnosis of retinal diseases, including glaucoma, diabetic retinopathy (DR), age-related macular degeneration (AMD), and retinitis pigmentosa (RP). The development represents the first high-performance AO system that combines AO-corrected scanning laser ophthalmoscopy (SLO) and swept source Fourier domain optical coherence tomography (SSOCT) imaging modes in a single compact clinical prototype platform. The SSOCT channel operates at a wavelength of 1 μm for increased penetration and visualization of the choriocapillaris and choroid, sites of major disease activity for DR and wet AMD. The system is designed to operate on a broad clinical population with a dual deformable mirror (DM) configuration that allows simultaneous low- and high-order aberration correction. The system also includes a wide-field line scanning ophthalmoscope (LSO) for initial screening, target identification, and global orientation; an integrated retinal tracker (RT) to stabilize the SLO, OCT, and LSO imaging fields in the presence of rotational eye motion; and a high-resolution LCD-based fixation target for presentation of stimuli and other visual cues to the subject. The system was tested in a limited number of human subjects without retinal disease for performance optimization and validation. The system was able to resolve and quantify cone photoreceptors across the macula to within ~0.5 deg (~100-150 μm) of the fovea, image and delineate ten retinal layers, and penetrate to resolve targets deep into the choroid. In addition to instrument hardware development, analysis algorithms were developed for efficient information extraction from clinical imaging sessions, with functionality including automated image registration, photoreceptor counting, strip and montage stitching, and segmentation. The system provides clinicians and researchers with high-resolution, high performance adaptive optics imaging to help

  5. Depth and all-in-focus imaging by a multi-line-scan light-field camera

    NASA Astrophysics Data System (ADS)

    Štolc, Svorad; Soukup, Daniel; Holländer, Branislav; Huber-Mörk, Reinhold

    2014-09-01

    We present a multi-line-scan light-field image acquisition and processing system designed for 2.5/3-D inspection of fine surface structures in industrial environments. The acquired three-dimensional light field is composed of multiple observations of an object viewed from different angles. The acquisition system consists of an area-scan camera that allows for a small number of sensor lines to be extracted at high frame rates, and a mechanism for transporting an inspected object at a constant speed and direction. During acquisition, an object is moved orthogonally to the camera's optical axis as well as the orientation of the sensor lines and a predefined subset of lines is read out from the sensor at each time step. This allows for the construction of so-called epipolar plane images (EPIs) and subsequent EPI-based depth estimation. We compare several approaches based on testing a set of slope hypotheses in the EPI domain. Hypotheses are derived from block matching, namely the sum of absolute differences, modified sum of absolute differences, normalized cross correlation, census transform, and modified census transform. Results for depth estimation and all-in-focus image generation are presented for synthetic and real data.
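    To make the slope-hypothesis idea concrete, the sketch below scores a set of candidate EPI slopes for one pixel with a plain sum-of-absolute-differences (SAD) block match against the central view and keeps the best one; the other matching costs mentioned in the abstract (NCC, census transform, etc.) would simply replace the cost function. The synthetic EPI, patch size, and slope grid are assumptions.

```python
import numpy as np

def best_epi_slope(epi, x, patch=3, slopes=np.linspace(-2.0, 2.0, 41)):
    """Test slope hypotheses for one pixel column x of an epipolar-plane image
    epi[view, x]; return the slope with the lowest SAD against the central view.
    The slope of the matched EPI line is proportional to disparity, hence depth."""
    n_views, width = epi.shape
    centre = n_views // 2
    ref = epi[centre, x - patch:x + patch + 1]
    best, best_cost = None, np.inf
    for s in slopes:
        cost, count = 0.0, 0
        for v in range(n_views):
            xs = int(round(x + s * (v - centre)))   # column predicted by this slope
            lo, hi = xs - patch, xs + patch + 1
            if lo < 0 or hi > width:
                continue
            cost += np.abs(epi[v, lo:hi] - ref).sum()
            count += 1
        if count and cost / count < best_cost:
            best, best_cost = s, cost / count
    return best

# Synthetic EPI: a bright feature drifting with slope 0.8 pixel/view.
n_views, width, true_slope = 9, 64, 0.8
epi = np.zeros((n_views, width))
for v in range(n_views):
    epi[v, int(round(32 + true_slope * (v - n_views // 2)))] = 1.0
print("estimated slope:", round(best_epi_slope(epi, 32), 2))
```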

  6. Vegetation Height Estimation Near Power Transmission Poles via Satellite Stereo Images Using 3D Depth Estimation Algorithms

    NASA Astrophysics Data System (ADS)

    Qayyum, A.; Malik, A. S.; Saad, M. N. M.; Iqbal, M.; Abdullah, F.; Rahseed, W.; Abdullah, T. A. R. B. T.; Ramli, A. Q.

    2015-04-01

    Monitoring vegetation encroachment under overhead high-voltage power lines is a challenging problem for electricity distribution companies. The absence of proper monitoring could result in damage to the power lines and consequently cause blackouts, affecting the electric power supply to industry, businesses, and daily life. Therefore, to avoid blackouts, it is mandatory to monitor the vegetation/trees near power transmission lines. Unfortunately, the existing approaches are time consuming and expensive. In this paper, we propose a novel approach to monitor the vegetation/trees near or under power transmission poles using satellite stereo images acquired by the Pleiades satellites. The 3D depth of vegetation near the power transmission lines is measured using stereo algorithms. The area of interest scanned by the Pleiades satellite sensors is 100 square kilometers. Our dataset covers power transmission poles in the state of Sabah in East Malaysia, encompassing a total of 52 poles within the 100 km2 area. We compare the results obtained from the Pleiades satellite stereo images using dynamic programming and graph-cut algorithms, thereby comparing the imaging sensors and the depth-estimation algorithms. Our results show that the graph-cut algorithm performs better than dynamic programming (DP) in terms of accuracy and speed.

  7. Thermal Images of Seeds Obtained at Different Depths by Photoacoustic Microscopy (PAM)

    NASA Astrophysics Data System (ADS)

    Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.

    2015-06-01

    The objective of the present study was to obtain thermal images of a broccoli seed (Brassica oleracea) by photoacoustic microscopy at different modulation frequencies of the incident light beam (0.5, 1, 5, and 20 Hz). The thermal images obtained from the amplitude of the photoacoustic signal vary with the applied frequency: at the lowest modulation frequency, the thermal wave penetrates deeper into the sample. Likewise, the photoacoustic signal is modified according to the structural characteristics of the sample and the modulation frequency of the incident light. Different structural components could be distinguished by photothermal techniques, as shown in the present study.
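    The frequency dependence reported here follows the standard photothermal relation for the thermal diffusion length, mu = sqrt(alpha / (pi * f)); the short sketch below evaluates it at the four modulation frequencies used, with an assumed thermal diffusivity for seed tissue (the value is a placeholder, not one measured in this study).

```python
import numpy as np

def thermal_diffusion_length_um(alpha_m2_s, f_hz):
    """Thermal diffusion length mu = sqrt(alpha / (pi * f)), in micrometres."""
    return np.sqrt(alpha_m2_s / (np.pi * f_hz)) * 1e6

alpha_seed = 1.0e-7   # assumed thermal diffusivity of seed tissue, m^2/s
for f in (0.5, 1.0, 5.0, 20.0):
    # lower modulation frequency -> longer diffusion length -> deeper probing
    print(f"f = {f:5.1f} Hz  ->  mu ~ {thermal_diffusion_length_um(alpha_seed, f):6.1f} um")
```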

  8. Lidar Waveform-Based Analysis of Depth Images Constructed Using Sparse Single-Photon Data

    NASA Astrophysics Data System (ADS)

    Altmann, Yoann; Ren, Ximing; McCarthy, Aongus; Buller, Gerald S.; McLaughlin, Steve

    2016-05-01

    This paper presents a new Bayesian model and algorithm for depth and intensity profiling using full waveforms from time-correlated single photon counting (TCSPC) measurements in the limit of very low photon counts. The proposed model represents each Lidar waveform as a combination of a known impulse response, weighted by the target intensity, and an unknown constant background, corrupted by Poisson noise. Prior knowledge about the problem is embedded in a hierarchical model that describes the dependence structure between the model parameters and their constraints. In particular, a gamma Markov random field (MRF) is used to model the joint distribution of the target intensity, and a second MRF is used to model the distribution of the target depth, both of which are expected to exhibit significant spatial correlations. An adaptive Markov chain Monte Carlo algorithm is then proposed to compute the Bayesian estimates of interest and perform Bayesian inference. This algorithm is equipped with a stochastic optimization adaptation mechanism that automatically adjusts the parameters of the MRFs by maximum marginal likelihood estimation. Finally, the benefits of the proposed methodology are demonstrated through a series of experiments using real data.
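    The observation model described here is easy to write down directly: each histogram bin is Poisson-distributed around a shifted, scaled impulse response plus a constant background. The sketch below does only a per-pixel maximum-likelihood grid search over the shift (the depth bin) with a crude intensity estimate, as a simplified stand-in for the hierarchical MRF/MCMC machinery of the paper; the impulse response, photon rates, and bin counts are synthetic assumptions.

```python
import numpy as np

def ml_depth_bin(counts, irf, background):
    """Maximum-likelihood shift of a known impulse response 'irf' within a
    TCSPC histogram, under counts[t] ~ Poisson(a * irf[t - shift] + background)."""
    n_bins = counts.size
    amplitude = counts.sum() / max(irf.sum(), 1e-12)     # crude intensity estimate
    best_shift, best_ll = 0, -np.inf
    for shift in range(n_bins - irf.size + 1):
        model = np.full(n_bins, background, dtype=float)
        model[shift:shift + irf.size] += amplitude * irf
        ll = np.sum(counts * np.log(model) - model)      # Poisson log-likelihood (up to a constant)
        if ll > best_ll:
            best_shift, best_ll = shift, ll
    return best_shift

rng = np.random.default_rng(3)
irf = np.exp(-0.5 * ((np.arange(20) - 10) / 2.0) ** 2)   # Gaussian-like impulse response
true_shift, intensity, background = 37, 2.0, 0.02
rate = np.full(128, background)
rate[true_shift:true_shift + irf.size] += intensity * irf
counts = rng.poisson(rate)                               # sparse single-photon data
print("photons detected:", int(counts.sum()))
print("true depth bin:", true_shift, " estimated:", ml_depth_bin(counts, irf, background))
```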

  9. Burn Depth Estimation Based on Infrared Imaging of Thermally Excited Tissue

    SciTech Connect

    Dickey, F.M.; Hoswade, S.C.; Yee, M.L.

    1999-03-05

    Accurate estimation of the depth of partial-thickness burns and early prediction of the need for surgical intervention are difficult. A non-invasive technique utilizing the difference in thermal relaxation time between burned and normal skin may be useful in this regard. In practice, a thermal camera would record the skin's response to heating or cooling by a small amount, roughly 5 C, for a short duration. The thermal stimulus would be provided by a heat lamp, hot or cold air, or other means. Processing of the thermal transients would reveal areas that return to equilibrium at different rates, which should correspond to different burn depths. In deeper burns, the outside layer of skin is further removed from the constant-temperature region maintained by blood flow; deeper burn areas should thus return to equilibrium more slowly than other areas. Since the technique only records changes in the skin's temperature, it is not sensitive to room temperature, the burn's location, or the state of the patient. Preliminary results are presented for the analysis of a simulated burn, formed by applying a patch of biosynthetic wound dressing on top of normal skin tissue.
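    A minimal sketch of the kind of processing implied here: fit a single-exponential relaxation to every pixel of a simulated thermal-transient stack and map the recovered time constant, with slower relaxation standing in for deeper injury. The relaxation model, the time constants, and the noise level are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation_time_map(frames, t_s):
    """Fit T(t) = dT * exp(-t / tau) + T0 at every pixel of a thermal stack
    frames[t, y, x] and return the map of tau (seconds)."""
    def model(t, dT, tau, T0):
        return dT * np.exp(-t / tau) + T0

    n_t, ny, nx = frames.shape
    tau_map = np.zeros((ny, nx))
    for iy in range(ny):
        for ix in range(nx):
            p0 = (frames[0, iy, ix] - frames[-1, iy, ix], 5.0, frames[-1, iy, ix])
            p, _ = curve_fit(model, t_s, frames[:, iy, ix], p0=p0, maxfev=2000)
            tau_map[iy, ix] = p[1]
    return tau_map

# Synthetic data: the right half (deeper burn) relaxes more slowly after a ~5 C stimulus.
t = np.linspace(0.0, 30.0, 60)
ny, nx = 8, 8
true_tau = np.where(np.arange(nx) < nx // 2, 4.0, 9.0)      # seconds
frames = 32.0 + 5.0 * np.exp(-t[:, None, None] / true_tau[None, None, :]) \
         + np.random.default_rng(4).normal(0.0, 0.05, (t.size, ny, nx))
tau = relaxation_time_map(frames, t)
print("mean tau, shallow side:", round(float(tau[:, :nx // 2].mean()), 1), "s")
print("mean tau, deeper side: ", round(float(tau[:, nx // 2:].mean()), 1), "s")
```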

  10. Nanoscale β-nuclear magnetic resonance depth imaging of topological insulators

    PubMed Central

    Koumoulis, Dimitrios; Morris, Gerald D.; He, Liang; Kou, Xufeng; King, Danny; Wang, Dong; Hossain, Masrur D.; Wang, Kang L.; Fiete, Gregory A.; Kanatzidis, Mercouri G.; Bouchard, Louis-S.

    2015-01-01

    Considerable evidence suggests that variations in the properties of topological insulators (TIs) at the nanoscale and at interfaces can strongly affect the physics of topological materials. Therefore, a detailed understanding of surface states and interface coupling is crucial to the search for and applications of new topological phases of matter. Currently, no methods can provide depth profiling near surfaces or at interfaces of topologically inequivalent materials. Such a method could advance the study of interactions. Herein, we present a noninvasive depth-profiling technique based on β-detected NMR (β-NMR) spectroscopy of radioactive 8Li+ ions that can provide “one-dimensional imaging” in films of fixed thickness and generates nanoscale views of the electronic wavefunctions and magnetic order at topological surfaces and interfaces. By mapping the 8Li nuclear resonance near the surface and 10-nm deep into the bulk of pure and Cr-doped bismuth antimony telluride films, we provide signatures related to the TI properties and their topological nontrivial characteristics that affect the electron–nuclear hyperfine field, the metallic shift, and magnetic order. These nanoscale variations in β-NMR parameters reflect the unconventional properties of the topological materials under study, and understanding the role of heterogeneities is expected to lead to the discovery of novel phenomena involving quantum materials. PMID:26124141

  11. Matters of Light & Depth: Creating Memorable Images for Video, Film, & Stills through Lighting.

    ERIC Educational Resources Information Center

    Lowell, Ross

    Written for students, professionals with limited experience, and professionals who encounter lighting difficulties, this book encourages sensitivity to light in its myriad manifestations: it offers advice in creating memorable images for video, film, and stills through lighting. Chapters in the book are: (1) "Lights of Passage: Basic Theory and…

  12. The self-image in borderline personality disorder: an in-depth qualitative research study.

    PubMed

    Dammann, Gerhard; Hügli, Claudia; Selinger, Joseph; Gremaud-Heitz, Daniela; Sollberger, Daniel; Wiesbeck, Gerhard A; Küchenhoff, Joachim; Walter, Marc

    2011-08-01

    Patients with borderline personality disorder (BPD) suffer from affective instability, impulsivity, and identity disturbance which particularly manifest in an unstable or insecure self-image. One main problem for studies of core psychopathology in BPD is the complex subject of identity disturbance and self-image. The purpose of this study was to investigate the self-image of BPD patients with a qualitative research approach. Twelve patients with BPD were compared to 12 patients with remitted major depressive disorder (MDD) without personality disorder, using the Structured Interview of Personality Organization (STIPO). The transcribed interviews were analyzed using a combination of content analysis and grounded theory. BPD patients described themselves predominantly as helpful and sensitive; reported typical emotions were sadness, anger, and anxiety. MDD patients on the other hand reported numerous and various characteristics and emotions, including happiness, as well as sadness and anxiety. Other persons were characterized by the BPD group as egoistic and satisfied, while the MDD group described others as being balanced and secretive. BPD patients displayed an altruistic, superficial, and suffering self-image. Aggressive tendencies were only seen in other persons. Our findings support the concept of a self and relationship disturbance in BPD which is highly relevant for psychotherapy treatment. PMID:21838566

  13. Single-pixel three-dimensional imaging with time-based depth resolution

    NASA Astrophysics Data System (ADS)

    Sun, Ming-Jie; Edgar, Matthew P.; Gibson, Graham M.; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J.

    2016-07-01

    Time-of-flight three-dimensional imaging is an important tool for applications such as object recognition and remote sensing. Conventional time-of-flight three-dimensional imaging systems frequently use a raster scanned laser to measure the range of each pixel in the scene sequentially. Here we show a modified time-of-flight three-dimensional imaging system, which can use compressed sensing techniques to reduce acquisition times, whilst distributing the optical illumination over the full field of view. Our system is based on a single-pixel camera using short-pulsed structured illumination and a high-speed photodiode, and is capable of reconstructing 128 × 128-pixel resolution three-dimensional scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, by using a compressive sampling strategy, we demonstrate continuous real-time three-dimensional video with a frame-rate up to 12 Hz. The simplicity of the system hardware could enable low-cost three-dimensional imaging devices for precision ranging at wavelengths beyond the visible spectrum.
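    The sketch below shows the non-compressive core of a single-pixel camera: the scene is illuminated with a full set of orthogonal (Hadamard) patterns, the photodiode records one number per pattern, and the image is recovered by a least-squares inversion. A compressive variant would keep only a subset of patterns and swap the solver for a sparsity-promoting one; the scene, pattern set, and noise level are assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def single_pixel_reconstruct(measurements, patterns):
    """Recover an N-pixel image from single-pixel readings y = patterns @ x by
    least squares (for a full orthogonal pattern set this is the inverse transform)."""
    x, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
    return x

n_side = 16                              # 16 x 16 = 256-pixel scene
n_pix = n_side * n_side
H = hadamard(n_pix).astype(float)        # +/-1 structured illumination patterns
scene = np.zeros((n_side, n_side))
scene[4:12, 6:10] = 1.0                  # simple bright rectangle
x_true = scene.ravel()

y = H @ x_true + np.random.default_rng(5).normal(0.0, 0.1, n_pix)  # photodiode readings
x_hat = single_pixel_reconstruct(y, H).reshape(n_side, n_side)
print("reconstruction RMSE:", round(float(np.sqrt(np.mean((x_hat - scene) ** 2))), 4))
```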

  14. Photothermal optical coherence tomography for depth-resolved imaging of mesenchymal stem cells via single wall carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Subhash, Hrebesh M.; Connolly, Emma; Murphy, Mary; Barron, Valerie; Leahy, Martin

    2014-03-01

    The progress in stem cell research over the past decade holds promise and potential to address many unmet clinical therapeutic needs. Tracking stem cells with modern imaging modalities is critically needed for optimizing stem cell therapy, offering insight into various underlying biological processes such as cell migration, engraftment, homing, differentiation, and function. In this study we report the feasibility of photothermal optical coherence tomography (PT-OCT) for imaging human mesenchymal stem cells (hMSCs) labeled with single-walled carbon nanotubes (SWNTs) for in vitro cell tracking in three-dimensional scaffolds. PT-OCT is a functional extension of conventional OCT with the extended capability of localized detection of absorbing targets against a scattering background, providing depth-resolved molecular contrast imaging. A 91 kHz line rate, spectral-domain PT-OCT system at 1310 nm was developed to detect the photothermal signal generated by an 800 nm excitation laser. In general, MSCs do not have obvious optical absorption properties and cannot be directly visualized using PT-OCT imaging. However, the optical absorption properties of hMSCs can be modified by labeling with SWNTs. Using this approach, MSCs were labeled with SWNTs and the cell distribution was imaged in a 3D polymer scaffold using PT-OCT.

  15. Quantitative, depth-resolved determination of particle motion using multi-exposure, spatial frequency domain laser speckle imaging

    PubMed Central

    Rice, Tyler B.; Kwan, Elliott; Hayakawa, Carole K.; Durkin, Anthony J.; Choi, Bernard; Tromberg, Bruce J.

    2013-01-01

    Laser Speckle Imaging (LSI) is a simple, noninvasive technique for rapid imaging of particle motion in scattering media such as biological tissue. LSI is generally used to derive a qualitative index of relative blood flow because of the unknown impact of several variables that affect speckle contrast. These variables may include the optical absorption and scattering coefficients, multi-layer dynamics including static, non-ergodic regions, and systematic effects such as laser coherence length. In order to account for these effects and move toward quantitative, depth-resolved LSI, we have developed a method that combines Monte Carlo modeling, multi-exposure speckle imaging (MESI), spatial frequency domain imaging (SFDI), and careful instrument calibration. Monte Carlo models were used to generate total and layer-specific fractional momentum transfer distributions. This information was used to predict speckle contrast as a function of exposure time, spatial frequency, layer thickness, and layer dynamics. To verify with experimental data, controlled phantom experiments with characteristic tissue optical properties were performed using a structured-light speckle imaging system. Three main geometries were explored: 1) a diffusive dynamic layer beneath a static layer, 2) a static layer beneath a diffusive dynamic layer, and 3) directed flow (tube) submerged in a dynamic scattering layer. Data fits were performed using the Monte Carlo model, which accurately reconstructed the type of particle flow (diffusive or directed) in each layer, the layer thickness, and absolute flow speeds to within 15% or better. PMID:24409388
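    A small sketch of the raw quantity these models are fitted to: the local speckle contrast K = sigma/mean in sliding windows, evaluated for several simulated exposure times (longer exposures average more independent speckle realisations, so K drops roughly as 1/sqrt(N)). Window size, frame statistics, and exposure labels are illustrative assumptions; the full MESI/SFDI fitting is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(frame, window=7):
    """Local speckle contrast K = sigma / mean in sliding windows."""
    frame = frame.astype(float)
    mean = uniform_filter(frame, window)
    mean_sq = uniform_filter(frame ** 2, window)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)
    return np.sqrt(var) / np.clip(mean, 1e-12, None)

def multi_exposure_curve(frames_by_exposure):
    """Mean speckle contrast versus exposure time (the curve MESI models are fitted to)."""
    return {t_ms: float(speckle_contrast(f).mean())
            for t_ms, f in sorted(frames_by_exposure.items())}

# Synthetic check: longer exposures average more speckle realisations and lower K.
rng = np.random.default_rng(6)
raw = rng.exponential(1.0, (16, 256, 256))      # fully developed speckle frames
frames = {1.0: raw[:1].mean(axis=0),            # 1 ms: a single realisation
          4.0: raw[:4].mean(axis=0),            # 4 ms: average of 4
          16.0: raw.mean(axis=0)}               # 16 ms: average of 16
print(multi_exposure_curve(frames))
```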

  16. Relative capacities of time-gated versus CW imaging to localize tissue embedded vessels with increasing depth

    NASA Astrophysics Data System (ADS)

    Alexandrakis, George; Patel, Nimit L.; Lin, Zi-Jing; Livingston, Edward H.; Liu, Hanli

    2009-02-01

    The clinical motivation for our work was to help surgeons see vessels through non-translucent intraoperative tissues during laparoscopic removal of the gallbladder. Our main focus was to answer the question of how CW imaging performs relative to ICCD (Intensified Charge-Coupled Device) based time-gated imaging, which is considerably more costly, under broad Gaussian beam illumination conditions. We have studied the simplified case of an isolated bile duct embedded at different depths within a 2 cm slab of adipose tissue. Monte Carlo simulations were performed for both reflectance and trans-illumination geometries. The relative performance of CW versus time-gated imaging was compared in terms of spatial resolution and vessel detection sensitivity in the resulting simulated images. Experiments were performed in reflectance geometry to validate the simulation results. It was found that time-gated imaging offers superior spatial resolution and vessel detection sensitivity in all cases, though CW trans-illumination measurements may also offer satisfactory performance for this tissue geometry at a lower cost.

  17. Development of a remanence measurement-based SQUID system with in-depth resolution for nanoparticle imaging

    PubMed Central

    Ge, Song; Shi, Xiangyang; Baker, James R; Banaszak Holl, Mark M; Orr, Bradford G

    2009-01-01

    We present a remanence measurement method using a superconducting quantum interference device (SQUID) to detect trace amounts of magnetic nanoparticles (MNPs). Based on this method, a one-dimensional scanning system was established for imaging, utilizing superparamagnetic iron oxide nanoparticles (NPs) as contrast agents. The system was calibrated with 25 nm diameter Fe2O3 NPs; the detection sensitivity was found to be 10 ng of NPs at a distance of 1.7 cm, and the spatial resolution was ∼1 cm. A theoretical model of this system was developed and applied to the deconvolution of scanned images of phantoms with two NP injection spots. Using the developed SQUID system, we were able to determine not only the amount and horizontal positions of the injections, but also their depths in the phantoms. PMID:19398816

  18. Depth-resolved optical imaging of transmural electrical propagation in perfused heart

    PubMed Central

    Hillman, Elizabeth M. C.; Bernus, Olivier; Pease, Emily; Bouchard, Matthew B.; Pertsov, Arkady

    2008-01-01

    We present a study of the 3-dimensional (3D) propagation of electrical waves in the heart wall using Laminar Optical Tomography (LOT). Optical imaging contrast is provided by a voltage-sensitive dye whose fluorescence reports changes in membrane potential. We examined the transmural propagation dynamics of electrical waves in the right ventricle of Langendorff-perfused rat hearts, initiated by either endocardial or epicardial pacing. 3D images were acquired at an effective frame rate of 667 Hz. We compare our experimental results to a mathematical model of electrical transmural propagation. We demonstrate that LOT can clearly resolve the direction of propagation of electrical waves within the cardiac wall, and that the dynamics observed agree well with the model of electrical propagation in rat ventricular tissue. PMID:18592044

  19. Enhanced contrast and depth resolution in polarization imaging using elliptically polarized light

    NASA Astrophysics Data System (ADS)

    Sridhar, Susmita; Da Silva, Anabela

    2016-07-01

    Polarization gating is a popular and widely used technique in biomedical optics to sense superficial tissues (colinear detection), deeper volumes (crosslinear detection), and also selectively probe subsuperficial volumes (using elliptically polarized light). As opposed to the conventional linearly polarized illumination, we propose a new protocol of polarization gating that combines coelliptical and counter-elliptical measurements to selectively enhance the contrast of the images. This new method of eliminating multiple-scattered components from the images shows that it is possible to retrieve a greater signal and a better contrast for subsurface structures. In vivo experiments were performed on skin abnormalities of volunteers to confirm the results of the subtraction method and access subsurface information.

  20. Depth-resolved optical imaging of hemodynamic response in mouse brain with microcirculatory beds

    NASA Astrophysics Data System (ADS)

    Jia, Yali; Nettleton, Rosemary; Rosenberg, Mara; Boudreau, Eilis; Wang, Ruikang K.

    2011-03-01

    Optical hemodynamic imaging with high spatial and temporal resolution, employed in pre-clinical studies, is important for unveiling the functional activities of the brain and the mechanisms of internal or external stimuli under diverse pathological conditions and treatments. Most current optical systems either resolve hemodynamic changes only within superficial macrocirculatory beds, such as laser speckle contrast imaging, or provide only vascular structural information within microcirculatory beds, such as multi-photon microscopy. In this study, we introduce a hemodynamic imaging system based on Optical Micro-angiography (OMAG) that is capable of resolving and quantifying 3D dynamic blood perfusion down to the microcirculatory level. This system measures the optical phase shifts caused by moving blood cells in the microcirculation. Here, the utility of OMAG was demonstrated by monitoring the hemodynamic response to alcohol administration in the mouse prefrontal cortex. Our preliminary results suggest that spatiotemporal tracking of cerebral micro-hemodynamics using OMAG can be successfully applied to the mouse brain and can reliably distinguish between vehicle and alcohol stimulation experiments.

  1. Orientation and depth estimation for femoral components using image sensor, magnetometer and inertial sensors in THR surgeries.

    PubMed

    Jiyang Gao; Shaojie Su; Hong Chen; Zhihua Wang

    2015-08-01

    Malposition of the acetabular and femoral components has long been recognized as an important cause of dislocation after total hip replacement (THR) surgeries. In order to help surgeons improve the positioning accuracy of the components, a visual-aided system for THR surgeries that can estimate the orientation and depth of the femoral component is proposed. The sensors are fixed inside the femoral prosthesis trial and checkerboard patterns are printed on the internal surface of the acetabular prosthesis trial. An extended Kalman filter is designed to fuse the data from the inertial sensors and the magnetometer for orientation estimation. A novel image processing algorithm for depth estimation is developed. The algorithms have been evaluated in simulation with rotation quaternions and translation vectors, and the experimental results show that the root mean square error (RMSE) of the orientation estimation is less than 0.05 degrees and the RMSE of the depth estimation is 1 mm. Finally, the femoral head is displayed in 3D graphics in real time to help surgeons with component positioning. PMID:26736858

  3. Learning the missing values in depth maps

    NASA Astrophysics Data System (ADS)

    Yin, Xuanwu; Wang, Guijin; Zhang, Chun; Liao, Qingmin

    2013-12-01

    In this paper, we consider the task of hole filling in depth maps, with the help of an associated color image. We take a supervised learning approach to solve this problem. The model is learnt from the training set, which contains pixels that have depth values, and is then applied to predict the depth values in the holes. Our model uses a regional Markov Random Field (MRF) that incorporates multiscale absolute and relative features (computed from the color image), and models depth not only at individual points but also between adjacent points. The experiments show that the proposed approach is able to recover fairly accurate depth values and achieve a high-quality depth map.
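
    As a rough illustration of the color-guided hole-filling idea described above, the sketch below trains a per-pixel regressor on the known depth pixels and predicts the missing ones. It deliberately substitutes a simple random-forest regressor and hand-picked features for the paper's regional MRF with multiscale features; the function name and feature choice are illustrative assumptions only.

      # Minimal sketch (not the authors' MRF model): fill depth holes from color.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      def fill_depth_holes(depth, color, hole_value=0):
          """depth: (H, W) array with missing pixels equal to hole_value;
          color: (H, W, 3) associated color image."""
          h, w = depth.shape
          ys, xs = np.mgrid[0:h, 0:w]
          # Per-pixel features: RGB plus normalized image coordinates.
          feats = np.dstack([color.astype(np.float32),
                             ys / float(h), xs / float(w)]).reshape(-1, 5)
          d = depth.reshape(-1).astype(np.float32)
          known = d != hole_value
          model = RandomForestRegressor(n_estimators=50)
          model.fit(feats[known], d[known])
          filled = d.copy()
          filled[~known] = model.predict(feats[~known])
          return filled.reshape(h, w)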

  4. Real-time imaging systems' combination of methods to achieve automatic target recognition

    NASA Astrophysics Data System (ADS)

    Maraviglia, Carlos G.; Williams, Elmer F.; Pezzulich, Alan Z.

    1998-03-01

    Using a combination of strategies, real-time imaging weapons systems are achieving their goal of detecting their intended targets. Acquiring a target in a cluttered environment in a timely manner and with a high degree of confidence demands that compromises be made as to having a truly automatic system. A combination of techniques such as dedicated image processing hardware, real-time operating systems, mixes of algorithmic methods, and multi-sensor detectors hints at the potential of future weapons systems and their incorporation of truly autonomous target acquisition. Elements such as position information, sensor gain controls, waypoints for mid-course correction, and augmentation with different imaging spectra, as well as future capabilities such as neural-net expert systems and decision processors overseeing a fusion matrix architecture, may be considered tools for a weapon system's achievement of its ultimate goal. Currently, it is necessary to include a human in the track decision loop, a system feature that may be long lived. Automatic target recognition will still be the desired goal in future systems due to the variability of military missions and the desirability of an expendable asset. Furthermore, with the increasing incorporation of multi-sensor information into the track decision, the human element's real-time contribution must be carefully engineered.

  5. Imaging widespread seismicity at midlower crustal depths beneath Long Beach, CA, with a dense seismic array: Evidence for a depth-dependent earthquake size distribution

    NASA Astrophysics Data System (ADS)

    Inbal, Asaf; Clayton, Robert W.; Ampuero, Jean-Paul

    2015-08-01

    We use a dense seismic array composed of 5200 vertical geophones to monitor microseismicity in Long Beach, California. Poor signal-to-noise ratio due to anthropogenic activity is mitigated via downward-continuation of the recorded wavefield. The downward-continued data are continuously back projected to search for coherent arrivals from sources beneath the array, which reveals numerous, previously undetected events. The spatial distribution of seismicity is uncorrelated with the mapped fault traces, or with activity in the nearby oil-fields. Many events are located at depths larger than 20 km, well below the commonly accepted seismogenic depth for that area. The seismicity exhibits temporal clustering consistent with Omori's law, and its size distribution obeys the Gutenberg-Richter relation above 20 km but falls off exponentially at larger depths. The dense array allows detection of earthquakes two magnitude units smaller than the permanent seismic network in the area. Because the event size distribution above 20 km depth obeys a power law whose exponent is near one, this improvement yields a hundred-fold decrease in the time needed for effective characterization of seismicity in Long Beach.

  6. Depth profiling the optical absorption and thermal reflection coefficient via an analysis based on the method of images (abstract)

    NASA Astrophysics Data System (ADS)

    Power, J. F.

    2003-01-01

    The problem of depth profiling optical absorption in a thermally depth variable solid is a problem of direct interest for the analysis of complex structured materials. In this work, we introduce a new algorithm to solve this problem in a planar layered sample which is impulse irradiated. The sample is comprised of "N" model layers of thickness Δx, of constant diffusivity α, where the conductivity varies depth wise with each layer. This derivation extends to the general case of a depth variable thermal reflection coefficient with depth variable optical source density. In such a sample, at finite time, t, past excitation, thermal energy can only significantly penetrate $N_L$ model layers, $N_L \approx \sqrt{4\alpha t\,[-\ln(\varepsilon)]}\,/\,2\Delta x$, where ɛ is a small error ($\varepsilon \le 10^{-6}$) and a double transit through each layer is assumed. The depth profile of optical absorption in each layer, i, is approximated by $\delta(x - i\Delta x)$, weighted by the optical source density $S_i$. The temperature at $x = 0^-$ just inside a front medium contacting the sample is given by
    $$T(x=0,t) = \sum_{i=1}^{2N_L} S_i \, G_R(x, x_0 = i\Delta x, t)\Big|_{x=0}, \qquad (1)$$
    where $G_R(x, x_0, t)$ represents an effective Green's function for optical absorption at the depth $x_0 = i\Delta x$ in the sample. The method of images [1] gives $G_R(x, x_0 = i\Delta x, t)$ in the following form:
    $$\begin{bmatrix} G_R(x, 0\Delta x, t) \\ G_R(x, 2\Delta x, t) \\ \vdots \\ G_R(x, 2N_L\Delta x, t) \end{bmatrix} = \begin{bmatrix} A_{1,0} & A_{1,2} & A_{1,4} & A_{1,6} & \cdots & A_{1,2N_L} \\ 0 & A_{3,2} & A_{3,4} & A_{3,6} & \cdots & A_{3,2N_L} \\ \vdots & & \ddots & & & \vdots \\ 0 & \cdots & & & & A_{2N_L-1,2N_L} \end{bmatrix} \begin{bmatrix} G(x - 0\Delta x, t) \\ G(x - 2\Delta x, t) \\ \vdots \\ G(x - 2N_L\Delta x, t) \end{bmatrix}. \qquad (2)$$
    The $G(x - n\Delta x, t)$ are shifted image fields obtained from the infinite domain Green's function for one-dimensional heat conduction. They account for thermal wave reflection/transmission over the path length nΔx from the source (at interface i) to the surface (x = 0). The $A_{i,n}$ are lumped coefficients giving the efficiency of heat transmission from the ith source to the surface for each path order n. They are determined by a mapping procedure that identifies all propagation paths of each order, n, and computes the individual and lumped reflection coefficients. Equation (2) is

  7. Advanced ultrasound activated lockin-thermography for defect selective depth-resolved imaging

    NASA Astrophysics Data System (ADS)

    Gleiter, A.; Riegert, G.; Zweschper, Th.; Degenhardt, R.; Busse, G.

    2006-04-01

    Ultrasound activated Lockin-Thermography ("ultrasound attenuation mapping") is a defect selective NDT-technique. Its main advantage is a high probability of defect detection ("POD") since only defects produce a signal while all other features are suppressed. The mechanism involved is local sound absorption which turns a variably loaded defect into a heat source. Thermographic monitoring of elastic wave attenuation in defects was reported for the first time in 1979 by Henneke and colleagues for continuous and pulsed ultrasound injection. Later, amplitude modulated ultrasound was used to derive frequency coded phase angle images combining defect-selectivity with robustness of measurement. With mono-frequent ultrasound excitation a standing wave pattern might hide defects. With additional modulation of the ultrasound frequency such a misleading pattern can be minimized. Applications related to quality maintenance (aerospace, automotive industry) will be presented in order to illustrate the potential of frequency modulated ultrasound excitation and its applications.

  8. Nanoscopy—imaging life at the nanoscale: a Nobel Prize achievement with a bright future

    NASA Astrophysics Data System (ADS)

    Blom, Hans; Bates, Mark

    2015-10-01

    A grand scientific prize was awarded last year to three pioneering scientists, for their discovery and development of molecular ‘ON-OFF’ switching which, when combined with optical imaging, can be used to see the previously invisible with light microscopy. The Royal Swedish Academy of Science announced on October 8th their decision and explained that this achievement—rooted in physics and applied in biology and medicine—was awarded with the Nobel Prize in Chemistry for controlling fluorescent molecules to create images of specimens smaller than anything previously observed with light. The story of how this noble switch in optical microscopy was achieved and how it was engineered to visualize life at the nanoscale is highlighted in this invited comment.

  9. On evaluation of depth accuracy in consumer depth sensors

    NASA Astrophysics Data System (ADS)

    Abd Aziz, Azim Zaliha; Wei, Hong; Ferryman, James

    2015-12-01

    This paper presents an experimental study of different depth sensors. The aim is to answer the question of whether these sensors give accurate data for general depth image analysis. The study examines the depth accuracy of three popularly used depth sensors: the ASUS Xtion Pro Live, the Kinect for Xbox 360 and the Kinect for Windows v2. The main focus is the stability of pixels in the depth image captured at several different sensor-object distances, measured as the depth returned by the sensors within specified time intervals. The experimental results show that the fluctuation (mm) of randomly selected pixels within the target area increases with increasing distance to the sensor, especially for the Kinect for Xbox 360 and the ASUS Xtion Pro Live. Both of these sensors show pixel fluctuations between 20 mm and 30 mm at sensor-object distances beyond 1500 mm. However, the pixel stability of the Kinect for Windows v2 is not affected much by the distance between the sensor and the object. The maximum fluctuation for all the selected pixels of the Kinect for Windows v2 is approximately 5 mm at sensor-object distances between 800 mm and 3000 mm. Therefore, within the optimal distance range, the best stability is achieved.
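
    The stability measurement described above boils down to recording the depth returned at fixed pixel positions over a time interval and reporting the spread. A minimal sketch, assuming the frames are already available as a NumPy stack in millimetres (the function and variable names are illustrative):

      import numpy as np

      def pixel_fluctuation(frames, pixels):
          """frames: (T, H, W) depth frames in mm captured at a fixed
          sensor-object distance; pixels: list of (row, col) positions
          inside the target area. Returns the max-min range (mm) per pixel."""
          frames = np.asarray(frames, dtype=np.float32)
          return {p: float(frames[:, p[0], p[1]].max()
                           - frames[:, p[0], p[1]].min())
                  for p in pixels}

      # e.g. pixel_fluctuation(depth_stack, [(240, 320), (250, 310)])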

  10. Ultrasound assessed thickness of burn scars in association with laser Doppler imaging determined depth of burns in paediatric patients.

    PubMed

    Wang, Xue-Qing; Mill, Julie; Kravchuk, Olena; Kimble, Roy M

    2010-12-01

    This study describes the ultrasound assessment of burn scars in paediatric patients and the association of scar thickness with laser Doppler imaging (LDI)-determined burn depth. A total of 60 ultrasound scar assessments were conducted on 33 scars from 21 paediatric burn patients at 3, 6 and 9 months after burn. The mean peak scar thickness was 0.39±0.032 cm, with the thickest at 6 months (0.40±0.036 cm). There were 17 scald burn scars (0.34±0.045 cm), 4 contact burn scars (0.61±0.092 cm), and 10 flame burn scars (0.42±0.058 cm). Each group of scars followed a normal distribution. Twenty-three scars had the original burns successfully scanned by LDI, and the various depths of burns were represented by different colours according to blood perfusion units (PU): dark blue <125, light blue 125-250, and green 250-440 PU. The thickness of these scars was significantly different between the predominant colours of the burns, with the thinnest scars for green-coloured burns and the thickest for dark blue-coloured burns. Within light blue burns, grafted burns healed with significantly thinner scars than non-grafted burns. This study indicates that LDI can be used for predicting the risk of hypertrophic scarring and for guiding burn care. To our knowledge, this is the first study to correlate the thickness of burn scars measured by ultrasound scan with burn depth determined by LDI.

  11. Linear Dispersion Relation and Depth Sensitivity to Swell Parameters: Application to Synthetic Aperture Radar Imaging and Bathymetry

    PubMed Central

    Boccia, Valentina; Renga, Alfredo; Rufino, Giancarlo; D'Errico, Marco; Moccia, Antonio; Aragno, Cesare; Zoffoli, Simona

    2015-01-01

    Long gravity waves, or swell, dominating the sea surface are known to be very useful for estimating seabed morphology in coastal areas. The paper reviews the main phenomena related to swell wave propagation that allow seabed morphology to be sensed. The linear dispersion relation is analysed and an error budget model is developed to assess the achievable depth accuracy when Synthetic Aperture Radar (SAR) data are used. The relevant issues and potentials of swell-based bathymetry by SAR are identified and discussed. This technique is of particular interest for characteristic regions of the Mediterranean Sea, such as gulfs and relatively close areas, where traditional SAR-based bathymetric techniques, relying on strong tidal currents, are of limited practical utility. PMID:25789333

  14. Magnetic Resonance Imaging (MRI) Analysis of Fibroid Location in Women Achieving Pregnancy After Uterine Artery Embolization

    SciTech Connect

    Walker, Woodruff J.; Bratby, Mark John

    2007-09-15

    The purpose of this study was to evaluate the fibroid morphology in a cohort of women achieving pregnancy following treatment with uterine artery embolization (UAE) for symptomatic uterine fibroids. A retrospective review of magnetic resonance imaging (MRI) of the uterus was performed to assess pre-embolization fibroid morphology. Data were collected on fibroid size, type, and number and included analysis of follow-up imaging to assess response. There have been 67 pregnancies in 51 women, with 40 live births. Intramural fibroids were seen in 62.7% of the women (32/48). Of these the fibroids were multiple in 16. A further 12 women had submucosal fibroids, with equal numbers of types 1 and 2. Two of these women had coexistent intramural fibroids. In six women the fibroids could not be individually delineated and formed a complex mass. All subtypes of fibroid were represented in those subgroups of women achieving a live birth versus those who did not. These results demonstrate that the location of uterine fibroids did not adversely affect subsequent pregnancy in the patient population investigated. Although this is only a small qualitative study, it does suggest that all types of fibroids treated with UAE have the potential for future fertility.

  15. Effect of Uveal Melanocytes on Choroidal Morphology in Rhesus Macaques and Humans on Enhanced-Depth Imaging Optical Coherence Tomography

    PubMed Central

    Yiu, Glenn; Vuong, Vivian S.; Oltjen, Sharon; Cunefare, David; Farsiu, Sina; Garzel, Laura; Roberts, Jeffrey; Thomasy, Sara M.

    2016-01-01

    Purpose To compare cross-sectional choroidal morphology in rhesus macaque and human eyes using enhanced-depth imaging optical coherence tomography (EDI-OCT) and histologic analysis. Methods Enhanced-depth imaging–OCT images from 25 rhesus macaque and 30 human eyes were evaluated for choriocapillaris and choroidal–scleral junction (CSJ) visibility in the central macula based on OCT reflectivity profiles, and compared with age-matched histologic sections. Semiautomated segmentation of the choriocapillaris and CSJ was used to measure choriocapillary and choroidal thickness, respectively. Multivariate regression was performed to determine the association of age, refractive error, and race with choriocapillaris and CSJ visibility. Results Rhesus macaques exhibit a distinct hyporeflective choriocapillaris layer on EDI-OCT, while the CSJ cannot be visualized. In contrast, humans show variable reflectivities of the choriocapillaris, with a distinct CSJ seen in many subjects. Histologic sections demonstrate large, darkly pigmented melanocytes that are densely distributed in the macaque choroid, while melanocytes in humans are smaller, less pigmented, and variably distributed. Optical coherence tomography reflectivity patterns of the choroid appear to correspond to the density, size, and pigmentation of choroidal melanocytes. Mean choriocapillary thickness was similar between the two species (19.3 ± 3.4 vs. 19.8 ± 3.4 μm, P = 0.615), but choroidal thickness may be lower in macaques than in humans (191.2 ± 43.0 vs. 266.8 ± 78.0 μm, P < 0.001). Racial differences in uveal pigmentation also appear to affect the visibility of the choriocapillaris and CSJ on EDI-OCT. Conclusions Pigmented uveal melanocytes affect choroidal morphology on EDI-OCT in rhesus macaque and human eyes. Racial differences in pigmentation may affect choriocapillaris and CSJ visibility, and may influence the accuracy of choroidal thickness measurements. PMID:27792810

  16. Use of 2D images of depth and integrated reflectivity to represent the severity of demineralization in cross-polarization optical coherence tomography

    PubMed Central

    Chan, Kenneth H.; Chan, Andrew C.; Fried, William A.; Simon, Jacob C.; Darling, Cynthia L.; Fried, Daniel

    2015-01-01

    Several studies have demonstrated the potential of cross-polarization optical coherence tomography (CP-OCT) to quantify the severity of early caries lesions (tooth decay) on tooth surfaces. The purpose of this study is to show that 2D images of the lesion depth and the integrated reflectivity can be used to accurately represent the severity of early lesions. Simulated early lesions of varying severity were produced on tooth samples using simulated lesion models. Methods were developed to convert the 3D CP-OCT images of the samples to 2D images of the lesion depth and lesion integrated reflectivity. Calculated lesion depths from OCT were compared with lesion depths measured from histological sections examined using polarized light microscopy. The 2D images of the lesion depth and integrated reflectivity are well suited for visualization of early demineralization. [Figure: polarized light micrographs (PLM) of a histological thin section from a tooth exposed to demineralization for 48 hrs: (A) entire thin section; (B) magnified region of interest.] PMID:24307350
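
    The conversion from a 3D CP-OCT volume to 2D maps of lesion depth and integrated reflectivity can be sketched as a per-A-scan reduction, as below. This is a simplified stand-in for the methods developed in the study: the threshold, the axial pixel size, and the assumption that the tooth surface sits at index 0 of each A-scan are all illustrative.

      import numpy as np

      def lesion_maps(volume, threshold, axial_px_um=10.0):
          """volume: (X, Y, Z) linear-scale CP-OCT reflectivity, Z = depth.
          Returns (lesion_depth_um, integrated_reflectivity) as 2D maps."""
          above = volume > threshold                  # demineralized voxels
          z = np.arange(volume.shape[2])
          # Deepest above-threshold voxel along each A-scan (or 0 if none).
          deepest = np.clip(np.where(above, z, -1).max(axis=2), 0, None)
          lesion_depth_um = deepest * axial_px_um
          # Reflectivity summed over above-threshold voxels of each A-scan.
          integrated = np.where(above, volume, 0.0).sum(axis=2)
          return lesion_depth_um, integrated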

  17. Seismic imaging of the Waltham Canyon fault, California: comparison of ray‐theoretical and Fresnel volume prestack depth migration

    USGS Publications Warehouse

    Bauer, Klaus; Ryberg, Trond; Fuis, Gary S.; Lüth, Stefan

    2013-01-01

    Near-vertical faults can be imaged using reflected refractions identified in controlled-source seismic data. Often these phases are observed on a few neighboring shot or receiver gathers, resulting in a low-fold data set. Imaging can be carried out with Kirchhoff prestack depth migration, in which migration noise is suppressed by constructive stacking of large amounts of multifold data. Fresnel volume migration can be used for low-fold data without severe migration noise, as the smearing along isochrones is limited to the first Fresnel zone around the reflection point. We developed a modified Fresnel volume migration technique to enhance imaging of steep faults and to suppress noise and undesired coherent phases. The modifications include target-oriented filters to separate reflected refractions from steep-dipping faults and reflections with hyperbolic moveout. Undesired phases like multiple reflections, mode conversions, direct P and S waves, and surface waves are suppressed by these filters. As an alternative approach, we developed a new prestack line-drawing migration method, which can be considered as a proxy to an infinite frequency approximation of the Fresnel volume migration. The line-drawing migration does not consider waveform information but requires significantly shorter computational time. Target-oriented filters were extended by dip filters in the line-drawing migration method. The migration methods were tested with synthetic data and applied to real data from the Waltham Canyon fault, California. The two techniques are applied best in combination, to design filters and to generate complementary images of steep faults.

  18. An Evaluation of Effects of Different Mydriatics on Choroidal Thickness by Examining Anterior Chamber Parameters: The Scheimpflug Imaging and Enhanced Depth Imaging-OCT Study

    PubMed Central

    Yuvacı, İsa; Pangal, Emine; Yuvacı, Sümeyra; Bayram, Nurettin; Ataş, Mustafa; Başkan, Burhan; Demircan, Süleyman; Akal, Ali

    2015-01-01

    Aim. To assess the effects of mydriatics commonly used in clinical practice on choroidal thickness and anterior chamber change. Methods. This was a prospective, randomized, controlled, double-blinded study including a single eye of the participants. The subjects were assigned into 4 groups to receive tropicamide 1%, phenylephrine 2.5%, cyclopentolate 1%, and artificial tears. At the baseline, anterior chamber parameters were assessed using a Pentacam Scheimpflug camera system, and choroidal thickness (CT) was measured using a spectral-domain OCT with Enhanced Depth Imaging (EDI) modality. All measurements were repeated again after drug administration. Results. Increases in pupil diameter, volume, and depth of anterior chamber were found to be significant (p = 0.000, p = 0.000, and p = 0.000, resp.), while decreases in the choroidal thickness were found to be significant in subjects receiving mydriatics (p < 0.05). Conclusions. The study has shown that while cyclopentolate, tropicamide, and phenylephrine cause a decrease in choroidal thickness, they also lead to an increase in the volume and depth of anterior chamber. However, no correlation was detected between anterior chamber parameters and choroidal changes after drug administration. These findings suggest that the mydriatics may affect the choroidal thickness regardless of anterior chamber parameters. This study was registered with trial registration number 2014/357. PMID:26509080

  19. Layered compression for high-precision depth data.

    PubMed

    Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen

    2015-12-01

    With the development of depth data acquisition technologies, access to high-precision depth data with more than 8-b depth has become much easier, and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-b image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel-domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the data format of the LSBs layer remains 8 b after absorbing the quantization error from the MSBs layer. For the LSBs layer, a standard 8-b image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated from this scheme can achieve better performance in view synthesis and gesture recognition applications compared with conventional coding schemes because of the error control algorithm. PMID:26415171
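
    The layer split itself is straightforward bit partitioning; a minimal sketch for a 16-bit depth map is shown below (the subsequent error-controllable pixel-domain coding of the MSBs layer and the 8-b codec for the LSBs layer are not reproduced here).

      import numpy as np

      def split_layers(depth16):
          """depth16: uint16 high-precision depth map.
          Returns (msb, lsb) 8-bit layers suitable for an 8-b codec."""
          msb = (depth16 >> 8).astype(np.uint8)    # rough depth distribution
          lsb = (depth16 & 0xFF).astype(np.uint8)  # fine depth variation
          return msb, lsb

      def merge_layers(msb, lsb):
          """Reassemble the 16-bit depth map from the two layers."""
          return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)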

  20. Visualizing the Subsurface of Soft Matter: Simultaneous Topographical Imaging, Depth Modulation, and Compositional Mapping with Triple Frequency Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Solares, Santiago; Ebeling, Daniel; Eslami, Babak

    2014-03-01

    Characterization of subsurface morphology and mechanical properties with nanoscale resolution and depth control is of significant interest in soft matter fields like biology and polymer science, where buried structural and compositional features can be important. However, controllably ``feeling'' the subsurface is a challenging task for which the available imaging tools are relatively limited. This presentation describes a trimodal atomic force microscopy (AFM) imaging scheme, whereby three eigenmodes of the microcantilever probe are used as separate control ``knobs'' to simultaneously measure the topography, modulate sample indentation by the tip during tip-sample impact, and map compositional contrast, respectively. This method is illustrated through computational simulation and experiments conducted on ultrathin polymer films with embedded glass nanoparticles. By actively increasing the tip-sample indentation using a higher eigenmode of the cantilever, one is able to gradually and controllably reveal glass nanoparticles that are buried tens of nanometers deep under the surface, while still being able to refocus on the surface. The authors gratefully acknowledge support from the U.S. Department of Energy (conceptual method development and experimental work, award DESC-0008115) and the U.S. National Science Foundation (computational work, award CMMI-0841840).

  1. Coupling sky images with three-dimensional radiative transfer models: a new method to estimate cloud optical depth

    NASA Astrophysics Data System (ADS)

    Mejia, F. A.; Kurtz, B.; Murray, K.; Hinkelman, L. M.; Sengupta, M.; Xie, Y.; Kleissl, J.

    2015-10-01

    A method for retrieving cloud optical depth (τc) using a ground-based sky imager (USI) is presented. The Radiance Red-Blue Ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a 3-D Radiative Transfer Model (3DRTM). From these images the basic parameters affecting the radiance and RBR of a pixel are identified as the solar zenith angle (θ0), τc, solar pixel angle/scattering angle (ϑs), and pixel zenith angle/view angle (ϑz). The effects of these parameters are described and the functions for radiance, Iλ(τc, θ0, ϑs, ϑz), and the red-blue ratio, RBR(τc, θ0, ϑs, ϑz), are retrieved from the 3DRTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc, where RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured Iλmeas(ϑs, ϑz), in addition to RBRmeas(ϑs, ϑz), to obtain a unique solution for τc. The RRBR method is applied to images taken by a USI at the Oklahoma Atmospheric Radiation Measurement program (ARM) site over the course of 220 days and validated against measurements from a microwave radiometer (MWR), output from the Min method for overcast skies, and τc retrieved by Beer's law from direct normal irradiance (DNI) measurements. A τc RMSE of 5.6 between the Min method and the USI is observed. The MWR and USI have an RMSE of 2.3, which is well within the uncertainty of the MWR. An RMSE of 0.95 between the USI and the DNI-retrieved τc is observed. The procedure developed here provides a foundation to test and develop other cloud detection algorithms.
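
    The key step of the RRBR algorithm, as described above, is to break the non-uniqueness of RBR(τc) by also matching the measured radiance against the modeled radiance. A minimal sketch of that selection step, assuming the 3DRTM lookup curves for the pixel's geometry are already interpolated onto a candidate τc grid (all names and the weighting are illustrative):

      import numpy as np

      def retrieve_tau(rbr_meas, rad_meas, tau_grid, rbr_model, rad_model,
                       w_rbr=1.0, w_rad=1.0):
          """tau_grid: candidate optical depths; rbr_model, rad_model: modeled
          RBR(tau) and radiance(tau) at the pixel's solar/view geometry.
          Returns the tau minimizing a joint RBR + radiance misfit."""
          cost = (w_rbr * (rbr_model - rbr_meas) ** 2
                  + w_rad * ((rad_model - rad_meas) / rad_model.mean()) ** 2)
          return tau_grid[int(np.argmin(cost))]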

  2. Achieving quality in cardiovascular imaging: proceedings from the American College of Cardiology-Duke University Medical Center Think Tank on Quality in Cardiovascular Imaging.

    PubMed

    Douglas, Pamela; Iskandrian, Ami E; Krumholz, Harlan M; Gillam, Linda; Hendel, Robert; Jollis, James; Peterson, Eric; Chen, Jersey; Masoudi, Frederick; Mohler, Emile; McNamara, Robert L; Patel, Manesh R; Spertus, John

    2006-11-21

    Cardiovascular imaging has enjoyed both rapid technological advances and sustained growth, yet less attention has been focused on quality than in other areas of cardiovascular medicine. To address this deficit, representatives from cardiovascular imaging societies, private payers, government agencies, the medical imaging industry, and experts in quality measurement met, and this report provides an overview of the discussions. A consensus definition of quality in imaging and a convergence of opinion on quality measures across imaging modalities was achieved and are intended to be the start of a process culminating in the development, dissemination, and adoption of quality measures for all cardiovascular imaging modalities.

  3. Validation of snow depth reconstruction from lapse-rate webcam images against terrestrial laser scanner measurements in the central Pyrenees

    NASA Astrophysics Data System (ADS)

    Revuelto, Jesús; Jonas, Tobias; López-Moreno, Juan Ignacio

    2015-04-01

    Snow distribution in mountain areas plays a key role in many processes such as runoff dynamics, ecological cycles or erosion rates. Nevertheless, the acquisition of high-resolution snow depth (SD) data in space and time is a complex task that needs the application of remote sensing techniques such as Terrestrial Laser Scanning (TLS). Such techniques require intense field work to obtain high-quality snowpack evolution during a specific time period. Combining TLS data with other remote sensing techniques (satellite images, photogrammetry…) and in-situ measurements could improve the available information on a variable with rapid topographic changes. The aim of this study is to reconstruct the daily SD distribution from lapse-rate images from a webcam and data from two to three TLS acquisitions during the snow melting periods of 2012, 2013 and 2014. This information is obtained at the Izas Experimental Catchment in the Central Spanish Pyrenees; a catchment of 33 ha, with an elevation ranging from 2050 to 2350 m a.s.l. The lapse-rate images provide the Snow Covered Area (SCA) evolution at the study site, while TLS allows obtaining high-resolution information on the SD distribution. With ground control points, the lapse-rate images are georectified and their information is rasterized onto a 1-meter resolution Digital Elevation Model. Subsequently, for each snow season, the Melt-Out Date (MOD) of each pixel is obtained. The reconstruction increases the estimated SD loss for each time step (day) in a distributed manner, starting the reconstruction for each grid cell at the MOD (note the reverse time evolution). To do so, the reconstruction has been previously adjusted in time and space as follows. Firstly, the degree-day factor (SD loss/positive average temperatures) is calculated from the information measured at an automatic weather station (AWS) located in the catchment. Afterwards, comparing the SD loss at the AWS during a specific time period (i.e. between two TLS
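
    A minimal sketch of the backward degree-day reconstruction outlined above: each pixel starts at zero snow depth on its melt-out date and gains the estimated daily melt when stepping backward in time. The AWS-derived degree-day factor and daily mean temperatures are inputs; the names and the simplification that a single catchment-wide factor is applied uniformly are assumptions for illustration.

      import numpy as np

      def reconstruct_sd(mod_day, temps, ddf):
          """mod_day: (H, W) melt-out day index per pixel; temps: (T,) daily
          mean air temperature at the AWS (degC); ddf: degree-day factor
          (m per positive degC-day). Returns (T, H, W) reconstructed SD."""
          melt = ddf * np.clip(np.asarray(temps, dtype=np.float32), 0.0, None)
          t_len = len(melt)
          sd = np.zeros((t_len,) + mod_day.shape, dtype=np.float32)
          for day in range(t_len - 2, -1, -1):
              # Pixels not yet melted out accumulate the next day's melt.
              still_snow = day < mod_day
              sd[day] = np.where(still_snow, sd[day + 1] + melt[day + 1], 0.0)
          return sd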

  4. Quantitative comparison of contrast and imaging depth of ultrahigh-resolution optical coherence tomography images in 800–1700 nm wavelength region

    PubMed Central

    Ishida, Shutaro; Nishizawa, Norihiko

    2012-01-01

    We investigated the wavelength dependence of imaging depth and clearness of structure in ultrahigh-resolution optical coherence tomography over a wide wavelength range. We quantitatively compared the optical properties of samples using supercontinuum sources at five wavelengths, 800 nm, 1060 nm, 1300 nm, 1550 nm, and 1700 nm, with the same system architecture. For samples of industrially used homogeneous materials with low water absorption, the attenuation coefficients of the samples were fitted using Rayleigh scattering theory. We confirmed that the systems with the longer-wavelength sources had lower scattering coefficients and less dependence on the sample materials. For a biomedical sample, we observed wavelength dependence of the attenuation coefficient, which can be explained by absorption by water and hemoglobin. PMID:22312581

  5. The Role of Self-Image on Reading Rate and Comprehension Achievement.

    ERIC Educational Resources Information Center

    Brown, James I.; McDowell, Earl E.

    1979-01-01

    Reports that students in a college reading efficiency course who had high self-images read significantly faster than those with low self-images, that students with initially high self-images did not maintain those images, that males had higher self-images and read faster than did females, and that there were negative relationships between speed…

  6. Adaptive Neuro-Fuzzy Inference System (ANFIS)-Based Models for Predicting the Weld Bead Width and Depth of Penetration from the Infrared Thermal Image of the Weld Pool

    NASA Astrophysics Data System (ADS)

    Subashini, L.; Vasudevan, M.

    2012-02-01

    Type 316 LN stainless steel is the major structural material used in the construction of nuclear reactors. Activated flux tungsten inert gas (A-TIG) welding has been developed to increase the depth of penetration, because the depth of penetration achievable in single-pass TIG welding is limited. Real-time monitoring and control of weld processes is gaining importance because of the requirement for remote welding process technologies. Hence, it is essential to develop computational methodologies based on an adaptive neuro-fuzzy inference system (ANFIS) or artificial neural network (ANN) for predicting and controlling the depth of penetration and weld bead width during A-TIG welding of type 316 LN stainless steel. In the current work, A-TIG welding experiments have been carried out on 6-mm-thick plates of 316 LN stainless steel by varying the welding current. During welding, infrared (IR) thermal images of the weld pool have been acquired in real time, and features have been extracted from the IR thermal images of the weld pool. The welding current values, along with the extracted features such as the length and width of the hot spot, the thermal area determined from the Gaussian fit, and the thermal bead width computed from the first derivative curve, were used as inputs, whereas the measured depth of penetration and weld bead width were used as outputs of the respective models. Accurate ANFIS models have been developed for predicting the depth of penetration and the weld bead width during A-TIG welding of 6-mm-thick 316 LN stainless steel plates. A good correlation between the measured and predicted values of weld bead width and depth of penetration was observed in the developed models. The performance of the ANFIS models is compared with that of the ANN models.
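
    As a rough stand-in for the ANFIS/ANN models described above, the sketch below sets up a generic multilayer-perceptron regressor mapping the welding current and the IR weld pool features to the two outputs (depth of penetration and bead width). This is not the authors' ANFIS formulation; the network size and preprocessing are illustrative assumptions.

      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # X columns: welding current, hot-spot length, hot-spot width,
      #            thermal area (Gaussian fit), thermal bead width.
      # y columns: depth of penetration, weld bead width (measured).
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16, 16),
                                         max_iter=5000, random_state=0))
      # model.fit(X_train, y_train); y_pred = model.predict(X_new)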

  7. Determination of hydrogen diffusion coefficients in F82H by hydrogen depth profiling with a tritium imaging plate technique

    SciTech Connect

    Higaki, M.; Otsuka, T.; Hashizume, K.; Tokunaga, K.; Ezato, K.; Suzuki, S.; Enoeda, M.; Akiba, M.

    2015-03-15

    Hydrogen diffusion coefficients in a reduced activation ferritic/martensitic steel (F82H) and an oxide dispersion strengthened F82H (ODS-F82H) have been determined from depth profiles of plasma-loaded hydrogen with a tritium imaging plate technique (TIPT) in the temperature range from 298 K to 523 K. Data on hydrogen diffusion coefficients, D, in F82H are summarized as D [m²·s⁻¹] = 1.1×10⁻⁷ exp(-16 [kJ mol⁻¹]/RT). The present data indicate almost no trapping effect on hydrogen diffusion due to an excess entry of energetic hydrogen by the plasma loading, which results in saturation of the trapping sites at the surface and even in the bulk. In the case of ODS-F82H, the data on hydrogen diffusion coefficients are summarized as D [m²·s⁻¹] = 2.2×10⁻⁷ exp(-30 [kJ mol⁻¹]/RT), indicating a remarkable trapping effect on hydrogen diffusion caused by tiny oxide particles (Y₂O₃) in the bulk of F82H. Such oxide particles introduced in the bulk may play an effective role not only in enhancement of mechanical strength but also in suppression of hydrogen penetration by plasma loading.
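
    For reference, the two Arrhenius fits quoted above can be evaluated directly; a small sketch using the standard gas constant (the temperature value is chosen arbitrarily within the studied 298-523 K range):

      import numpy as np

      R = 8.314  # gas constant, J mol^-1 K^-1

      def diffusivity(d0, ea_kj_per_mol, temp_k):
          """Arrhenius diffusion coefficient D = D0 exp(-Ea / RT), in m^2/s."""
          return d0 * np.exp(-ea_kj_per_mol * 1e3 / (R * temp_k))

      d_f82h = diffusivity(1.1e-7, 16.0, 400.0)   # F82H at 400 K
      d_ods  = diffusivity(2.2e-7, 30.0, 400.0)   # ODS-F82H at 400 K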

  8. Non-invasive depth profile imaging of the stratum corneum using confocal Raman microscopy: first insights into the method.

    PubMed

    Ashtikar, Mukul; Matthäus, Christian; Schmitt, Michael; Krafft, Christoph; Fahr, Alfred; Popp, Jürgen

    2013-12-18

    The stratum corneum is a strong barrier that must be overcome to achieve successful transdermal delivery of a pharmaceutical agent. Many strategies have been developed to enhance permeation through this barrier. Traditionally, drug penetration through the stratum corneum is evaluated by employing tape-stripping protocols and measuring the content of the analyte. Although effective, this method cannot provide detailed information regarding the penetration pathways. To address this issue, various microscopic techniques have been employed. Raman microscopy offers the advantage of label-free imaging and provides spectral information regarding the chemical integrity of the drug as well as the tissue. In this paper we present a relatively simple method to obtain XZ-Raman profiles of human stratum corneum using confocal Raman microscopy on intact full-thickness skin biopsies. The spectral datasets were analysed using a spectral unmixing algorithm. The spectral information obtained highlights the different components of the tissue and the presence of the drug. We present Raman images of untreated skin and diffusion patterns for deuterated water and beta-carotene after a Franz-cell diffusion experiment.

  9. Femininity, Masculinity, and Body Image Issues among College-Age Women: An In-Depth and Written Interview Study of the Mind-Body Dichotomy

    ERIC Educational Resources Information Center

    Leavy, Patricia; Gnong, Andrea; Ross, Lauren Sardi

    2009-01-01

    In this article we investigate college-age women's body image issues in the context of dominant femininity and its polarization of the mind and body. We use original data collected through seven in-depth interviews and 32 qualitative written interviews with college-age women and men. We coded the data thematically applying feminist approaches to…

  10. Evaluation of choroidal thickness via enhanced depth-imaging optical coherence tomography in patients with systemic hypertension

    PubMed Central

    Gök, Mustafa; Karabaş, V Levent; Emre, Ender; Akşar, Arzu Toruk; Aslan, Mehmet Ş; Ural, Dilek

    2015-01-01

    Purpose: The purpose was to evaluate choroidal thickness via spectral domain optical coherence tomography (SD-OCT) and to compare the data with those of 24-h blood pressure monitoring, elastic features of the aorta, and left ventricle systolic functions, in patients with systemic hypertension. Materials and Methods: This was a case-control, cross-sectional prospective study. A total of 116 patients with systemic hypertension, and 116 healthy controls over 45 years of age, were included. Subfoveal choroidal thickness (SFCT) was measured using a Heidelberg SD-OCT platform operating in the enhanced depth imaging mode. Patients were also subjected to 24-h ambulatory blood pressure monitoring (ABPM) and standard transthoracic echocardiography (STTE). Patients were divided into dippers and nondippers using ABPM data and those with or without left ventricular hypertrophy (LVH+ and LVH-) based on STTE data. The elastic parameters of the aorta, thus aortic strain (AoS), the beta index (BI), aortic distensibility (AoD), and the left ventricular mass index (LVMI), were calculated from STTE data. Results: No significant difference in SFCT was evident between patients and controls (P ≤ 0.611). However, a significant negative correlation was evident between age and SFCT in both groups (r = −0.66/−0.56, P ≤ 0.00). No significant SFCT difference was evident between the dipper and nondipper groups (P ≤ 0.67), or the LVH (+) and LVH (-) groups (P ≤ 0.84). No significant correlation was evident between SFCT and any of AoS, BI, AoD, or LVMI. Discussion: The choroid is affected by atrophic changes associated with aging. Even in the presence of comorbid risk factors including LVH and arterial stiffness, systemic hypertension did not affect SFCT. PMID:25971169

  11. Operational Retrieval of aerosol optical depth over Indian subcontinent and Indian Ocean using INSAT-3D/Imager product validation

    NASA Astrophysics Data System (ADS)

    Mishra, M. K.; Rastogi, G.; Chauhan, P.

    2014-11-01

    Aerosol optical depth (AOD) over the Indian subcontinent and the Indian Ocean region is derived operationally for the first time from geostationary earth orbit (GEO) satellite INSAT-3D Imager data at the 0.65 μm wavelength. A single visible channel algorithm based on clear-sky composites gives a larger retrieval error in AOD than multiple-channel algorithms due to errors in estimating surface reflectance and atmospheric properties. However, since the MIR channel signal is insensitive to the presence of most aerosols, in the present study the AOD retrieval algorithm employs both visible (centred at 0.65 μm) and mid-infrared (MIR) band (centred at 3.9 μm) measurements, and allows us to monitor the transport of aerosols at higher temporal resolution. Comparisons made between INSAT-3D derived AOD (τI) and MODIS derived AOD (τM), co-located in space (at 1° resolution) and time during January, February and March (JFM) 2014, encompass 1165, 1052 and 900 pixels, respectively. Good agreement is found between τI and τM during JFM 2014, with linear correlation coefficients (R) of 0.87, 0.81 and 0.76, respectively. The extensive validation made during JFM 2014 encompasses 215 AOD values co-located in space and time derived by INSAT-3D (τI) and 10 sun photometers (τA), which include 9 AERONET (Aerosol Robotic Network) sites and 1 handheld sun-photometer site. The INSAT-3D derived AOD, i.e. τI, is found to be within the retrieval error of ±0.07 ± 0.15τA, with a linear correlation coefficient (R) of 0.90 and a root mean square error (RMSE) equal to 0.06. The present work shows that INSAT-3D aerosol products can be used quantitatively in many applications, with caution for possible residual cloud, snow/ice, and water contamination.
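
    The validation statistics quoted above (correlation, RMSE, and the fraction of retrievals falling within the expected error envelope ±0.07 ± 0.15τA) can be computed with a few lines; a minimal sketch, assuming matched arrays of co-located retrievals:

      import numpy as np

      def validate_aod(tau_insat, tau_sunphot):
          """Compare INSAT-3D AOD against co-located sun-photometer AOD."""
          t_i = np.asarray(tau_insat, dtype=float)
          t_a = np.asarray(tau_sunphot, dtype=float)
          r = np.corrcoef(t_i, t_a)[0, 1]
          rmse = float(np.sqrt(np.mean((t_i - t_a) ** 2)))
          within_ee = float(np.mean(np.abs(t_i - t_a) <= 0.07 + 0.15 * t_a))
          return {"R": r, "RMSE": rmse, "fraction_within_EE": within_ee}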

  12. Exploring the effects of landscape structure on aerosol optical depth (AOD) patterns using GIS and HJ-1B images.

    PubMed

    Ye, Luping; Fang, Linchuan; Tan, Wenfeng; Wang, Yunqiang; Huang, Yu

    2016-02-01

    A GIS approach and HJ-1B images were employed to determine the effect of landscape structure on aerosol optical depth (AOD) patterns. Landscape metrics, fractal analysis and contribution analysis were proposed to quantitatively illustrate the impact of land use on AOD patterns. The high correlation between the mean AOD and landscape metrics indicates that both the landscape composition and spatial structure affect the AOD pattern. Additionally, the fractal analysis demonstrated that the densities of built-up areas and bare land decreased from the high AOD centers to the outer boundary, but those of water and forest increased. These results reveal that the built-up area is the main positive contributor to air pollution, followed by bare land. Although bare land had a high AOD, it made a limited contribution to regional air pollution due to its small spatial extent. The contribution analysis further elucidated that built-up areas and bare land can increase air pollution more strongly in spring than in autumn, whereas forest and water have a completely opposite effect. Based on fractal and contribution analyses, the different effects of cropland are ascribed to the greater vegetation coverage from farming activity in spring than in autumn. The opposite effect of cropland on air pollution reveals that green coverage and human activity also influence AOD patterns. Given that serious concerns have been raised regarding the effects of built-up areas, bare land and agricultural air pollutant emissions, this study will add fundamental knowledge of the understanding of the key factors influencing urban air quality.

  13. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg² IN FIVE FILTERS

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.; Green, Richard; Bian, Fuyan; Strauss, Michael A.; Buck, Zoë; Annis, James; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-07-01

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  14. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-Optimized Co-adds Over 300 deg² in Five Filters

    SciTech Connect

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan; McGreer, Ian D.; Strauss, Michael A.; Annis, James; Buck, Zoë; Green, Richard; Hodge, Jacqueline A.; Myers, Adam D.; Rafiee, Alireza; Richards, Gordon

    2014-06-25

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.''2 diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
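
    Both Stripe 82 entries above describe weighting single-epoch frames by seeing, sky transparency, and background noise before co-addition. The sketch below shows a generic weighted co-add of registered, photometrically scaled frames; the particular weight formula (transparency squared divided by seeing squared times sky variance) is a common inverse-variance-style choice assumed for illustration, not necessarily the exact scheme used for the released co-adds.

      import numpy as np

      def coadd(frames, seeing_fwhm, transparency, sky_sigma):
          """frames: (N, H, W) registered single-epoch images; the other
          arguments are per-frame scalars. Returns (science, weight_image)."""
          w = (np.asarray(transparency) ** 2
               / (np.asarray(seeing_fwhm) ** 2 * np.asarray(sky_sigma) ** 2))
          w = w[:, None, None]
          weight_image = np.broadcast_to(w, frames.shape).sum(axis=0)
          science = (frames * w).sum(axis=0) / weight_image
          return science, weight_image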

  15. Efficient multiview depth video coding using depth synthesis prediction

    NASA Astrophysics Data System (ADS)

    Lee, Cheon; Choi, Byeongho; Ho, Yo-Sung

    2011-07-01

    The view synthesis prediction (VSP) method utilizes inter-view correlations between views by generating an additional reference frame in multiview video coding. This paper describes a multiview depth video coding scheme that incorporates depth view synthesis and additional prediction modes. In the proposed scheme, we exploit the reconstructed neighboring depth frame to generate an additional reference depth image for the current viewpoint to be coded, using the depth-image-based rendering technique. In order to generate high-quality reference depth images, we used pre-processing on depth, depth image warping, and two types of hole filling methods depending on the number of available reference views. After synthesizing the additional depth image, we encode the depth video using the proposed additional prediction modes named VSP modes; those additional modes refer to the synthesized depth image. In particular, the VSP_SKIP mode refers to the co-located block of the synthesized frame without coding motion vectors and residual data, which gives most of the coding gains. Experimental results demonstrate that the proposed depth view synthesis method provides high-quality depth images for the current view and the proposed VSP modes provide high coding gains, especially on the anchor frames.
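
    The depth image warping step mentioned above can be sketched for the simplest rectified, horizontally shifted camera setup, where disparity equals focal length times baseline divided by depth. The hole pixels left by the forward warp are what the hole-filling methods in the paper would subsequently fill; the focal length and baseline are assumed inputs, and this simple per-pixel loop is illustrative rather than the codec's implementation.

      import numpy as np

      def warp_depth(depth_ref, focal_px, baseline_m):
          """Forward-warp a reference-view depth map (metres, float array)
          to a horizontally shifted target view. Unfilled pixels stay 0."""
          h, w = depth_ref.shape
          warped = np.zeros_like(depth_ref)
          disparity = np.zeros_like(depth_ref)
          valid = depth_ref > 0
          disparity[valid] = focal_px * baseline_m / depth_ref[valid]
          for y in range(h):
              for x in range(w):
                  if depth_ref[y, x] <= 0:
                      continue
                  xt = int(round(x - disparity[y, x]))
                  if 0 <= xt < w and (warped[y, xt] == 0
                                      or depth_ref[y, x] < warped[y, xt]):
                      # Keep the nearest surface when pixels collide.
                      warped[y, xt] = depth_ref[y, x]
          return warped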

  16. Latest achievements on MCT IR detectors for space and science imaging

    NASA Astrophysics Data System (ADS)

    Gravrand, O.; Rothman, J.; Castelein, P.; Cervera, C.; Baier, N.; Lobre, C.; De Borniol, E.; Zanatta, J. P.; Boulade, O.; Moreau, V.; Fieque, B.; Chorier, P.

    2016-05-01

    HgCdTe (MCT) is a very versatile material for IR detection. Indeed, the ability to tailor the cutoff frequency as close as possible to the detection needs makes it a perfect candidate for high performance detection in a wide range of applications and spectral ranges. Moreover, the high quality material available today, either by liquid phase epitaxy (LPE) or molecular beam epitaxy (MBE), allows for very low dark currents at low temperatures and makes it suitable for very low flux detection applications such as science imaging. MCT has also demonstrated its robustness to aggressive space environments and therefore faces a large demand for space applications such as staring at outer space for science purposes, in which case the number of detected photons is very low. This induces very strong constraints on the detector: low dark current, low noise, low persistence, and (very) large focal plane arrays. The MCT diode structure adapted to fulfill those requirements is naturally the p/n photodiode. Following the developments of this technology made at DEFIR and transferred to Sofradir in the MWIR and LWIR ranges for tactical applications, our laboratory has consequently investigated its adaptation for ultra-low flux in different spectral bands, in collaboration with the CEA Astrophysics lab. Another alternative for ultra-low flux applications in the SWIR range has also been investigated: low-excess-noise MCT n/p avalanche photodiodes (APDs). Those APDs may in some cases open the gate to sub-electron-noise IR detection. This paper will review the latest achievements obtained on this matter at DEFIR (the CEA-LETI and Sofradir common laboratory), from short wave (SWIR) band detection for classical astronomical needs, to the long wave (LWIR) band for exoplanet transit spectroscopy, up to the very long wave (VLWIR) band.

  17. HgCdTe Detectors for Space and Science Imaging: General Issues and Latest Achievements

    NASA Astrophysics Data System (ADS)

    Gravrand, O.; Rothman, J.; Cervera, C.; Baier, N.; Lobre, C.; Zanatta, J. P.; Boulade, O.; Moreau, V.; Fieque, B.

    2016-09-01

    HgCdTe (MCT) is a very versatile material system for infrared (IR) detection, suitable for high performance detection in a wide range of applications and spectral ranges. Indeed, the ability to tailor the cutoff frequency as close as possible to the needs makes it a perfect candidate for high performance detection. Moreover, the high quality material available today, grown either by molecular beam epitaxy or liquid phase epitaxy, allows for very low dark currents at low temperatures, suitable for low flux detection applications such as science imaging. MCT has also demonstrated robustness to the aggressive environment of space and faces, therefore, a large demand for space applications. A satellite may stare at the earth, in which case detection usually involves a lot of photons, called a high flux scenario. Alternatively, a satellite may stare at outer space for science purposes, in which case the detected photon number is very low, leading to low flux scenarios. This latter case induces very strong constraints on the detector: low dark current, low noise, (very) large focal plane arrays. The classical structure used to fulfill those requirements is usually the p/n MCT photodiode. This type of structure has been deeply investigated in our laboratory for different spectral bands, in collaboration with the CEA Astrophysics lab. However, another alternative may also be investigated with low excess noise: MCT n/p avalanche photodiodes (APDs). This paper reviews the latest achievements obtained on this matter at DEFIR (the LETI and Sofradir common laboratory), from short wave infrared (SWIR) band detection for classical astronomical needs, to the long wave infrared (LWIR) band for exoplanet transit spectroscopy, up to very long wave infrared (VLWIR) bands. The different available diode architectures (n/p VHg or p/n, or even APDs) are reviewed, including different available ROIC architectures for low flux detection.

  18. A review of controlled-source electromagnetic science applications and opportunities for imaging in the depth range 20 m to 1 km (Invited)

    NASA Astrophysics Data System (ADS)

    Everett, M. E.

    2009-12-01

    There are many exciting geoscience opportunities available to those who can provide three-dimensional subsurface characterization within the 20 m to 1.0 km depth range. Applications include gas hydrates and permafrost; climate change proxy signatures in the stratigraphic record; shoreline shaping processes; glacier and ice-sheet mass transport; watershed-scale and coastal hydrology including seawater intrusion; fault-zone characterization; Earth's tectonic, volcanic, and extraterrestrial impact history; landslide hazard assessment; carbon sequestration; and characterization of geothermal systems. Many of the aforementioned science applications can be, and have been, addressed using various geophysical techniques. The shallower depth range is well suited to multi-electrode resistivity imaging, which has seen a tremendous resurgence of late thanks to newly developed instrumentation. Ground-penetrating radar signals provide high-resolution subsurface images but attenuate rapidly with depth and hence, except in special cases, do not probe beneath 20 m. Seismic reflection and refraction studies, using artificial sources, earthquakes and ambient noise, supplemented with newer surface wave and interferometric methods, are the traditional workhorse for the 20 m to 1.0 km depth range. Gravity and magnetic techniques continue to see great improvements and have long provided valuable subsurface information, whether used alone or in conjunction with another method. Other geophysical techniques such as spontaneous potential, induced polarization, and electroseismic are also gaining in importance. Controlled-source electromagnetics occupies an important niche for 20 m to 1.0 km depth investigations as a complement to seismics and as an active technique that permits both parametric (variable frequency, or time-domain equivalent) and geometric (variable source-receiver separation) soundings. Low-frequency (sub-kHz) electromagnetic induction signals
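    The quoted depth range follows directly from the electromagnetic skin depth. As a rough, hedged illustration (the standard textbook relation, with typical assumed ground resistivities rather than values from this abstract), a short Python sketch:

      # Skin-depth check of why sub-kHz controlled-source EM signals cover roughly
      # the 20 m to 1 km range: delta = sqrt(2 / (omega * mu0 * sigma)),
      # i.e. about 503 * sqrt(rho / f) metres.
      import math

      MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

      def skin_depth_m(resistivity_ohm_m, frequency_hz):
          sigma = 1.0 / resistivity_ohm_m
          omega = 2.0 * math.pi * frequency_hz
          return math.sqrt(2.0 / (omega * MU0 * sigma))

      for rho in (10.0, 100.0):              # conductive sediments vs. more resistive rock
          for f in (10.0, 100.0, 1000.0):    # survey frequencies in Hz
              print(f"rho={rho:5.0f} ohm-m  f={f:6.0f} Hz  skin depth ~ {skin_depth_m(rho, f):6.0f} m")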

  19. Image reconstruction for PET/CT scanners: past achievements and future challenges

    PubMed Central

    Tong, Shan; Alessio, Adam M; Kinahan, Paul E

    2011-01-01

    PET is a medical imaging modality with proven clinical value for disease diagnosis and treatment monitoring. The integration of PET and CT on modern scanners provides a synergy of the two imaging modalities. Through different mathematical algorithms, PET data can be reconstructed into the spatial distribution of the injected radiotracer. With dynamic imaging, kinetic parameters of specific biological processes can also be determined. Numerous efforts have been devoted to the development of PET image reconstruction methods over the last four decades, encompassing analytic and iterative reconstruction methods. This article provides an overview of the commonly used methods. Current challenges in PET image reconstruction include more accurate quantitation, TOF imaging, system modeling, motion correction and dynamic reconstruction. Advances in these aspects could enhance the use of PET/CT imaging in patient care and in clinical research studies of pathophysiology and therapeutic interventions. PMID:21339831

  20. In-depth imaging and quantification of degenerative changes associated with Achilles ruptured tendons by polarization-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bagnaninchi, P. O.; Yang, Y.; Bonesi, M.; Maffulli, G.; Phelan, C.; Meglinski, I.; El Haj, A.; Maffulli, N.

    2010-07-01

    The objective of this study was to develop a method based on polarization-sensitive optical coherence tomography (PSOCT) for the imaging and quantification of degenerative changes associated with Achilles tendon rupture. Ex vivo PSOCT examinations were performed in 24 patients. The study involved samples from 14 ruptured Achilles tendons, 4 tendinopathic Achilles tendons and 6 patellar tendons (collected during total knee replacement) as non-ruptured controls. The samples were imaged in both intensity and phase retardation modes within 24 h after surgery, and birefringence was quantified. The samples were fixed and processed for histology immediately after imaging. Slides were assessed twice in a blind manner to provide a semi-quantitative histological score of degeneration. In-depth microstructural imaging was demonstrated. Collagen disorganization and high cellularity were observable by PSOCT as the main markers associated with pathological features. Quantitative assessment of birefringence and penetration depth found significant differences between non-ruptured and ruptured tendons. Microstructural abnormalities were observed in two out of four tendinopathic samples. PSOCT has the potential to explore in situ and in depth the pathological changes associated with Achilles tendon rupture, and could help to delineate abnormalities in tendinopathic samples in vivo.

  1. Nuclear imaging of the breast: Translating achievements in instrumentation into clinical use

    PubMed Central

    Hruska, Carrie B.; O'Connor, Michael K.

    2013-01-01

    Approaches to imaging the breast with nuclear medicine and/or molecular imaging methods have been under investigation since the late 1980s when a technique called scintimammography was first introduced. This review charts the progress of nuclear imaging of the breast over the last 20 years, covering the development of newer techniques such as breast specific gamma imaging, molecular breast imaging, and positron emission mammography. Key issues critical to the adoption of these technologies in the clinical environment are discussed, including the current status of clinical studies, the efforts at reducing the radiation dose from procedures associated with these technologies, and the relevant radiopharmaceuticals that are available or under development. The necessary steps required to move these technologies from bench to bedside are also discussed. PMID:23635248

  2. Nuclear imaging of the breast: Translating achievements in instrumentation into clinical use

    SciTech Connect

    Hruska, Carrie B.; O'Connor, Michael K.

    2013-05-15

    Approaches to imaging the breast with nuclear medicine and/or molecular imaging methods have been under investigation since the late 1980s when a technique called scintimammography was first introduced. This review charts the progress of nuclear imaging of the breast over the last 20 years, covering the development of newer techniques such as breast specific gamma imaging, molecular breast imaging, and positron emission mammography. Key issues critical to the adoption of these technologies in the clinical environment are discussed, including the current status of clinical studies, the efforts at reducing the radiation dose from procedures associated with these technologies, and the relevant radiopharmaceuticals that are available or under development. The necessary steps required to move these technologies from bench to bedside are also discussed.

  3. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide-field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves performance by 50% in terms of accuracy and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634
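    To make the PCA idea above concrete, the following minimal Python sketch decomposes a stack of depth-variant PSFs into a few principal components via an SVD; the synthetic Gaussian PSFs and the choice of four components are illustrative assumptions, not the paper's data or implementation.

      # Represent depth-variant PSFs with a handful of principal components.
      import numpy as np

      def psf_principal_components(psf_stack, n_components):
          """psf_stack: (n_depths, ny, nx). Returns mean PSF, components, coefficients."""
          n_depths, ny, nx = psf_stack.shape
          flat = psf_stack.reshape(n_depths, ny * nx)
          mean = flat.mean(axis=0)
          u, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
          comps = vt[:n_components]                 # (n_components, ny*nx)
          coeffs = (flat - mean) @ comps.T          # per-depth expansion coefficients
          return mean, comps, coeffs

      def reconstruct(mean, comps, coeffs, ny, nx):
          return (mean + coeffs @ comps).reshape(-1, ny, nx)

      # Synthetic stand-in PSFs: Gaussians whose width grows with depth.
      ny = nx = 33
      y, x = np.mgrid[-16:17, -16:17]
      widths = np.linspace(1.0, 4.0, 40)
      stack = np.stack([np.exp(-(x**2 + y**2) / (2.0 * w**2)) for w in widths])
      mean, comps, coeffs = psf_principal_components(stack, n_components=4)
      approx = reconstruct(mean, comps, coeffs, ny, nx)
      print(np.abs(approx - stack).max())           # worst-case reconstruction error with 4 components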

  4. Developments in electronic imaging techniques; Proceedings of the Seminar-in-Depth, San Mateo, Calif., October 16, 17, 1972.

    NASA Technical Reports Server (NTRS)

    Zirkind, R. (Editor); Nudelman, S. S.; Schnitzler, A.

    1973-01-01

    Capabilities and limitations of infrared imaging systems are discussed, and a real-time simulator for image data systems is described. Ultrahigh resolution electronic imaging and storage with the return beam vidicon is treated, and a description is given of an electron-lens for opaque photocathodes. Ground surveillance with an active low light level TV, digital processing of Mariner 9 TV data, image enhancement by holography, and application of data compression techniques to spacecraft imaging systems are given attention. Individual items are announced in this issue.

  5. Thermal Coherence Tomography: Depth-Resolved Imaging in Parabolic Diffusion-Wave Fields Using the Thermal-Wave Radar

    NASA Astrophysics Data System (ADS)

    Tabatabaei, N.; Mandelis, A.

    2012-11-01

    Energy transport in diffusion-wave fields is gradient driven and therefore diffuse, yielding depth-integrated responses with poor axial resolution. Using matched filter principles, a methodology is proposed enabling these parabolic diffusion-wave energy fields to exhibit energy localization akin to propagating hyperbolic wave fields. This not only improves the axial resolution, but also allows for deconvolution of individual responses of superposed axially discrete sources, opening a new field of depth-resolved subsurface thermal coherence tomography using diffusion waves. The depth-resolved nature of the developed methodology is verified through experiments carried out on phantoms and biological samples. The results suggest that thermal coherence tomography can resolve deep structural changes in hard dental and bone tissues, allowing for remote detection of early dental caries and potentially early osteoporosis.
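    The matched-filter principle invoked above is easy to demonstrate in isolation. The short Python sketch below cross-correlates a noisy signal containing two delayed chirp echoes with the reference chirp; the chirp parameters, delays and noise level are arbitrary illustrative assumptions and have nothing to do with the thermal-wave experiments themselves.

      # Matched filtering: correlation with the reference chirp compresses each
      # echo into a localized peak, so superposed responses can be separated.
      import numpy as np

      fs = 1000.0                                   # samples per second
      t = np.arange(0, 2.0, 1.0 / fs)
      f0, f1 = 1.0, 20.0                            # linear chirp from 1 Hz to 20 Hz
      chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t ** 2))

      # Received signal: two delayed, attenuated copies of the chirp buried in noise.
      rng = np.random.default_rng(3)
      rx = rng.normal(0, 0.5, t.size + 1500)
      for delay, amp in ((400, 1.0), (900, 0.6)):
          rx[delay:delay + t.size] += amp * chirp

      out = np.correlate(rx, chirp, mode="valid")   # matched-filter output
      first = int(np.argmax(out))
      masked = out.copy()
      masked[max(0, first - 100):first + 100] = out.min()
      second = int(np.argmax(masked))
      print(sorted([first, second]))                # approximately [400, 900]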

  6. IRIS explorer software for radial-depth cueing reovirus particles and other macromolecular structures determined by cryoelectron microscopy and image reconstruction.

    PubMed

    Spencer, S M; Sgro, J Y; Dryden, K A; Baker, T S; Nibert, M L

    1997-10-01

    Structures of biological macromolecules determined by transmission cryoelectron microscopy (cryo-TEM) and three-dimensional image reconstruction are often displayed as surface-shaded representations with depth cueing along the viewed direction (Z cueing). Depth cueing to indicate distance from the center of virus particles (radial-depth cueing, or R cueing) has also been used. We have found that a style of R cueing in which color is applied in smooth or discontinuous gradients using the IRIS Explorer software is an informative technique for displaying the structures of virus particles solved by cryo-TEM and image reconstruction. To develop and test these methods, we used existing cryo-TEM reconstructions of mammalian reovirus particles. The newly applied visualization techniques allowed us to discern several new structural features, including sites in the inner capsid through which the viral mRNAs may be extruded after they are synthesized by the reovirus transcriptase complexes. To demonstrate the broad utility of the methods, we also applied them to cryo-TEM reconstructions of human rhinovirus, native and swollen forms of cowpea chlorotic mottle virus, truncated core of pyruvate dehydrogenase complex from Saccharomyces cerevisiae, and flagellar filament of Salmonella typhimurium. We conclude that R cueing with color gradients is a useful tool for displaying virus particles and other macromolecules analyzed by cryo-TEM and image reconstruction.

  7. Achieving consistent image quality with dose optimization in 64-row multidetector computed tomography prospective ECG gated coronary calcium scoring.

    PubMed

    Pan, Zilai; Pang, Lifang; Li, Jianying; Zhang, Huan; Yang, Wenjie; Ding, Bei; Chai, Weimin; Chen, Kemin; Yao, Weiwu

    2011-04-01

    To evaluate the clinical value of a body mass index (BMI) based tube current (mA) selection method for obtaining consistent image quality with dose optimization in MDCT prospective ECG-gated coronary calcium scoring. A formula for selecting mA to achieve the desired image quality based on patient BMI was established using a control group (A) of 200 MDCT cardiac patients with a standard scan protocol. One hundred patients in Group B were scanned with this BMI-dependent mA to achieve a desired noise level of 18 HU at 2.5 mm slice thickness. The CTDIvol and image noise on the ascending aorta for the two groups were recorded. Two experienced radiologists quantitatively evaluated the image quality using scores of 1-4, with 4 being the highest. The image quality scores showed no statistical difference (P = 0.71), at 3.89 ± 0.32 and 3.87 ± 0.34, respectively, for groups A and B of similar BMI. The image noise in Group A had a linear relationship with BMI. The image noise in Group B using BMI-dependent mA was independent of BMI, with an average value of 17.9 HU and a smaller deviation in noise values than in Group A (2.0 vs. 2.9 HU). There was, on average, a 35% dose reduction with the BMI-dependent mA selection method, with the lowest effective dose being only 0.35 mSv for a patient with a BMI of 18.3. A quantitative BMI-based mA selection method in MDCT prospective ECG-gated coronary calcium scoring has been proposed to obtain a desired and consistent image quality and provide dose optimization across the patient population.
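    The abstract does not give the actual formula, so the Python sketch below only illustrates the general idea: fit noise versus BMI at a reference mA, then scale the mA so the predicted noise hits the 18 HU target, using the usual assumption that quantum noise scales as 1/sqrt(mA). The linear model form, the reference mA and all numbers are hypothetical.

      import numpy as np

      def fit_noise_model(bmi, noise):
          """Least-squares fit of image noise (HU) vs. BMI at a fixed reference mA."""
          slope, intercept = np.polyfit(bmi, noise, 1)
          return slope, intercept

      def required_ma(bmi, slope, intercept, reference_ma, target_noise=18.0):
          """Scale the reference mA so the predicted noise reaches the target
          (noise scales roughly as 1/sqrt(mA))."""
          predicted = slope * bmi + intercept
          return reference_ma * (predicted / target_noise) ** 2

      # Made-up control-group data standing in for Group A.
      rng = np.random.default_rng(0)
      bmi_a = rng.uniform(18, 35, 200)
      noise_a = 0.55 * bmi_a + 2.0 + rng.normal(0, 1.0, 200)   # synthetic noise in HU
      slope, intercept = fit_noise_model(bmi_a, noise_a)
      print(f"mA for BMI 18.3: {required_ma(18.3, slope, intercept, reference_ma=300):.0f}")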

  8. Improved tumor contrast achieved by single time point dual-reporter fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Tichauer, Kenneth M.; Samkoe, Kimberley S.; Sexton, Kristian J.; Gunn, Jason R.; Hasan, Tayyaba; Pogue, Brian W.

    2012-06-01

    In this study, we demonstrate a method to quantify biomarker expression that uses an exogenous dual-reporter imaging approach to improve tumor signal detection. The uptake of two fluorophores, one nonspecific and one targeted to the epidermal growth factor receptor (EGFR), was imaged at 1 h in three types of xenograft tumors spanning a range of EGFR expression levels (n=6 in each group). Using this dual-reporter imaging methodology, the tumor contrast-to-noise ratio was amplified by >6 times at 1 h postinjection and >2 times at 24 h. Furthermore, by as early as 20 min postinjection, the dual-reporter imaging signal in the tumor correlated significantly with a validated marker of receptor density (P<0.05, r=0.93). Dual-reporter imaging can improve sensitivity and specificity over conventional fluorescence imaging in applications such as fluorescence-guided surgery and directly approximates the receptor status of the tumor, a measure that could be used to inform choices of biological therapies.

  9. An easily-achieved time-domain beamformer for ultrafast ultrasound imaging based on compressive sensing.

    PubMed

    Wang, Congzhi; Peng, Xi; Liang, Dong; Xiao, Yang; Qiu, Weibao; Qian, Ming; Zheng, Hairong

    2015-01-01

    In ultrafast ultrasound imaging, maintaining a high frame rate while improving image quality as much as possible has become a significant issue. Several novel beamforming methods based on compressive sensing (CS) theory have been proposed in the previous literature, but all have their own limitations, such as excessively large memory consumption and errors caused by the short-time discrete Fourier transform (STDFT). In this study, a novel CS-based time-domain beamformer for plane-wave ultrasound imaging is proposed, and its image quality has been verified to be better than that of the traditional delay-and-sum (DAS) method and even the popular coherent compounding method on several simulated phantoms. Compared to the existing CS methods, the memory consumption of our method is significantly reduced, since the encoding matrix can be expressed sparsely. In addition, the time-delay calculations of the echo signals are accomplished directly in the time domain with a dictionary concept, avoiding the errors induced by the short-time Fourier transform calculation in the frequency-domain methods. The proposed method can be easily implemented on low-cost hardware platforms and can produce ultrasound images with both a high frame rate and good image quality, giving it great potential for clinical application.
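    Both the DAS baseline and a time-domain CS dictionary ultimately rest on the same plane-wave geometry delays. As a rough Python sketch (array pitch, sampling rate and sound speed are assumed values, and this is plain delay-and-sum rather than the authors' CS solver):

      import numpy as np

      C = 1540.0        # assumed speed of sound in tissue, m/s
      FS = 40e6         # assumed sampling rate, Hz
      PITCH = 0.3e-3    # assumed element pitch, m
      N_ELEM = 128

      elem_x = (np.arange(N_ELEM) - (N_ELEM - 1) / 2) * PITCH

      def das_delays(x, z, steer_deg=0.0):
          """Round-trip delay (in samples) from a steered plane wave to pixel (x, z)
          and back to each array element."""
          steer = np.deg2rad(steer_deg)
          t_tx = (z * np.cos(steer) + x * np.sin(steer)) / C   # plane-wave transmit path
          t_rx = np.sqrt((elem_x - x) ** 2 + z ** 2) / C       # per-element receive path
          return (t_tx + t_rx) * FS

      def das_pixel(rf, x, z, steer_deg=0.0):
          """Delay-and-sum value of one pixel from raw RF data (n_elem x n_samples)."""
          idx = np.clip(np.round(das_delays(x, z, steer_deg)).astype(int), 0, rf.shape[1] - 1)
          return rf[np.arange(N_ELEM), idx].sum()

      rf = np.random.randn(N_ELEM, 4096)        # stand-in for one plane-wave acquisition
      print(das_pixel(rf, x=0.0, z=20e-3))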

  10. Investigating Pre-Service Candidates' Images of Mathematical Reasoning: An In-Depth Online Analysis of Common Core Mathematics Standards

    ERIC Educational Resources Information Center

    Davis, C. E.; Osler, James E.

    2013-01-01

    This paper details the outcomes of a qualitative in-depth investigation into teacher education mathematics preparation. This research is grounded in the notion that mathematics teacher education students (as "degree seeking candidates") need to develop strong foundations of mathematical practice as defined by the Common Core State…

  11. High-depth-resolution 3-dimensional radar-imaging system based on a few-cycle W-band photonic millimeter-wave pulse generator.

    PubMed

    Tseng, Tzu-Fang; Wun, Jhih-Min; Chen, Wei; Peng, Sui-Wei; Shi, Jin-Wei; Sun, Chi-Kuang

    2013-06-17

    We demonstrate that a near-single-cycle photonic millimeter-wave short-pulse generator at W-band is capable of providing high-spatial-resolution three-dimensional (3-D) radar imaging. A preliminary study indicates that 3-D radar images with a state-of-the-art ranging resolution of around 1.2 cm at the W-band can be achieved.

  12. High-resolution 1050 nm spectral domain retinal optical coherence tomography at 120 kHz A-scan rate with 6.1 mm imaging depth

    PubMed Central

    An, Lin; Li, Peng; Lan, Gongpu; Malchow, Doug; Wang, Ruikang K.

    2013-01-01

    We report a newly developed high-speed 1050 nm spectral domain optical coherence tomography (SD-OCT) system for imaging the posterior segment of the human eye. The system achieves an axial resolution of ~10 µm in air, an imaging depth of 6.1 mm in air, a sensitivity fall-off of ~6 dB over 3 mm, and an imaging speed of 120,000 A-scans per second. We experimentally demonstrate the system's capability to perform phase-resolved imaging of dynamic blood flow within the retina, indicating the high phase stability of the SD-OCT system. Finally, we show an example that uses this newly developed system to image the posterior segment of the human eye with a large field of view (10 × 9 mm2), providing detailed visualization of microstructural features from the anterior retina to the posterior choroid. The demonstrated system parameters and imaging performance are comparable to those that a typical 1 µm swept-source OCT would deliver for retinal imaging. PMID:23411636

  13. Clear-cornea cataract surgery: pupil size and shape changes, along with anterior chamber volume and depth changes. A Scheimpflug imaging study

    PubMed Central

    Kanellopoulos, Anastasios John; Asimellis, George

    2014-01-01

    Purpose To investigate, by high-precision digital analysis of data provided by Scheimpflug imaging, changes in pupil size and shape and anterior chamber (AC) parameters following cataract surgery. Patients and methods The study group (86 eyes, patient age 70.58±10.33 years) was subjected to cataract removal surgery with in-the-bag intraocular lens implantation (pseudophakic). A control group of 75 healthy eyes (patient age 51.14±16.27 years) was employed for comparison. Scheimpflug imaging (preoperatively and 3 months postoperatively) was used to investigate central corneal thickness, AC depth, and AC volume. In addition, by digitally analyzing the black-and-white dotted line pupil edge marking in the Scheimpflug “large maps,” the horizontal and vertical pupil diameters were individually measured and the pupil eccentricity was calculated. The correlations between AC depth and pupil shape parameters versus patient age, as well as the postoperative AC and pupil size and shape changes, were investigated. Results Compared to preoperative measurements, AC depth and AC volume of the pseudophakic eyes increased by 0.99±0.46 mm (39%; P<0.001) and 43.57±24.59 mm3 (36%; P<0.001), respectively. Pupil size analysis showed that the horizontal pupil diameter was reduced by −0.27±0.22 mm (−9.7%; P=0.001) and the vertical pupil diameter was reduced by −0.32±0.24 mm (−11%; P<0.001). Pupil eccentricity was reduced by 39.56% (P<0.001). Conclusion Cataract extraction surgery appears to affect pupil size and shape, possibly in correlation with the AC depth increase. This novel investigation based on digital analysis of Scheimpflug imaging data suggests that the cataract postoperative photopic pupil is reduced and more circular. These changes appear to be more significant with increasing patient age. PMID:25368512
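    The abstract does not spell out how pupil eccentricity was computed from the two diameters, so the sketch below simply applies the standard ellipse eccentricity as one plausible, hypothetical definition; the diameters are made-up numbers.

      import math

      def pupil_eccentricity(d_horizontal_mm, d_vertical_mm):
          """Standard ellipse eccentricity from the two pupil diameters."""
          a = max(d_horizontal_mm, d_vertical_mm) / 2.0   # semi-major axis
          b = min(d_horizontal_mm, d_vertical_mm) / 2.0   # semi-minor axis
          return math.sqrt(1.0 - (b / a) ** 2)

      pre = pupil_eccentricity(3.10, 2.95)    # illustrative preoperative diameters
      post = pupil_eccentricity(2.83, 2.80)   # illustrative postoperative diameters
      print(f"relative change: {(post - pre) / pre * 100:.1f}%")   # negative = more circular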

  14. Apparent Depth.

    ERIC Educational Resources Information Center

    Nassar, Antonio B.

    1994-01-01

    Discusses a well-known optical refraction problem where the depth of an object in a liquid is determined. Proposes that many texts incorrectly solve the problem. Provides theory, equations, and diagrams. (MVL)

  15. Improving depth imaging of legacy seismic data using curvelet-based gather conditioning: A case study from Central Poland

    NASA Astrophysics Data System (ADS)

    Górszczyk, A.; Cyz, M.; Malinowski, M.

    2015-06-01

    In the presented work, we test a ray-based pre-stack depth migration (PreSDM) and tomographic velocity model building (VMB) workflow applied to vintage seismic data, acquired in the 1970s and 1980s, in an area affected by intense salt tectonics in Central Poland. We demonstrate that the key to successful VMB is the consistency of the input residual moveout (RMO) picks, which we obtain by developing a proper gather conditioning workflow. It is based on the 2D discrete curvelet transform (DCT). The DCT-based conditioning algorithm is run in a two-step mode, on the common-offset sections and on the depth slices, improving the performance of the autopicker and thus providing a more reliable input to grid tomography. Additionally, in the case of legacy data, such conditioning acts as a trace regularization. Taking into account the limitations associated with low fold and low signal-to-noise ratio, the obtained results are satisfactory, providing depth sections and velocity models for verifying the structural interpretation of the study area. When grid-based tomography is applied to vintage data, we strongly recommend devoting time to proper data conditioning aimed at improving signal coherency before running the RMO autopicker.

  16. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had previously been cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits to being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by making it possible to monitor processes. Example applications of thermography [2] with a thermal camera include monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc. [3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating the different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
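    In the simplest terms, such a calibration maps raw sensor counts to scene temperature while compensating for the camera's own operating temperature. The Python sketch below is only a schematic illustration under a crude linear model; the model form and every coefficient are assumptions, not the characterization described in the paper.

      import numpy as np

      def fit_calibration(counts, blackbody_temp_c, camera_temp_c):
          """Least-squares fit of T_scene ~ a*counts + b*T_camera + c."""
          A = np.column_stack([counts, camera_temp_c, np.ones_like(counts)])
          coef, *_ = np.linalg.lstsq(A, blackbody_temp_c, rcond=None)
          return coef

      def counts_to_temperature(counts, camera_temp_c, coef):
          return coef[0] * counts + coef[1] * camera_temp_c + coef[2]

      # Synthetic calibration points: blackbody at known temperatures, camera drifting.
      rng = np.random.default_rng(5)
      bb_t = np.repeat([10.0, 35.0, 60.0, 85.0], 5)
      cam_t = rng.uniform(15.0, 45.0, bb_t.size)
      counts = 80.0 * bb_t - 25.0 * cam_t + 7000.0 + rng.normal(0, 20.0, bb_t.size)
      coef = fit_calibration(counts, bb_t, cam_t)
      print(counts_to_temperature(9500.0, 30.0, coef))   # estimated scene temperature, deg C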

  17. Improved ultrasonic TV images achieved by use of Lamb-wave orientation technique

    NASA Technical Reports Server (NTRS)

    Berger, H.

    1967-01-01

    Lamb-wave sample orientation technique minimizes the interference from standing waves in continuous wave ultrasonic television imaging techniques used with thin metallic samples. The sample under investigation is oriented such that the wave incident upon it is not normal, but slightly angled.

  18. Depth-varying density and organization of chondrocytes in immature and mature bovine articular cartilage assessed by 3d imaging and analysis

    NASA Technical Reports Server (NTRS)

    Jadin, Kyle D.; Wong, Benjamin L.; Bae, Won C.; Li, Kelvin W.; Williamson, Amanda K.; Schumacher, Barbara L.; Price, Jeffrey H.; Sah, Robert L.

    2005-01-01

    Articular cartilage is a heterogeneous tissue, with cell density and organization varying with depth from the surface. The objectives of the present study were to establish a method for localizing individual cells in three-dimensional (3D) images of cartilage and quantifying depth-associated variation in cellularity and cell organization at different stages of growth. Accuracy of nucleus localization was high, with 99% sensitivity relative to manual localization. Cellularity (million cells per cm³) decreased from 290, 310, and 150 near the articular surface in fetal, calf, and adult samples, respectively, to 120, 110, and 50 at a depth of 1.0 mm. The distance/angle to the nearest neighboring cell was 7.9 µm/31°, 7.1 µm/31°, and 9.1 µm/31° for cells at the articular surface of fetal, calf, and adult samples, respectively, and increased/decreased to 11.6 µm/31°, 12.0 µm/30°, and 19.2 µm/25° at a depth of 0.7 mm. The methodologies described here may be useful for analyzing the 3D cellular organization of cartilage during growth, maturation, aging, degeneration, and regeneration.
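    Nearest-neighbour distance and orientation statistics of this kind are straightforward to compute once nuclei have been localized. The following Python sketch uses a k-d tree on synthetic coordinates; the angle here is measured between the neighbour vector and the articular-surface plane, which is an assumed definition rather than one taken from the paper.

      import numpy as np
      from scipy.spatial import cKDTree

      def nn_distance_and_angle(points_um):
          """points_um: (n, 3) nucleus centroids, with z = depth from the surface."""
          tree = cKDTree(points_um)
          dist, idx = tree.query(points_um, k=2)     # k=1 is the point itself
          vec = points_um[idx[:, 1]] - points_um     # vector to the nearest neighbour
          horiz = np.linalg.norm(vec[:, :2], axis=1)
          angle_deg = np.degrees(np.arctan2(np.abs(vec[:, 2]), horiz))
          return dist[:, 1], angle_deg

      rng = np.random.default_rng(4)
      pts = rng.uniform(0, 200, (500, 3))            # 500 synthetic nuclei in a 200-um cube
      d, a = nn_distance_and_angle(pts)
      print(f"mean NN distance {d.mean():.1f} um, mean angle {a.mean():.0f} deg")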

  19. Low-coherence in-depth microscopy for biological tissue imaging: design of a real-time control system

    NASA Astrophysics Data System (ADS)

    Blanchot, Loic; Lebec, Martial; Beaurepaire, Emmanuel; Gleyzes, Philippe; Boccara, Albert C.; Saint-Jalmes, Herve

    1998-01-01

    We describe the design of a versatile electronic system performing lock-in detection in parallel on every pixel of a 2D CCD camera. The system is based on a multiplexed lock-in detection method that requires accurate synchronization of the camera, the excitation signal and the processing computer. This device has been incorporated in an imaging setup based on the optical coherence tomography principle, enabling the acquisition of a full 2D head-on image without scanning. The imaging experiment is implemented on a modified commercial microscope. Lateral resolution is on the order of 2 micrometers, and the coherence length of the light source defines an axial resolution of approximately 8 micrometers. Images of onion cells a few hundred microns deep into the sample are obtained with 100 dB sensitivity.

  20. Low-coherence in-depth microscopy for biological tissue imaging: design of a real-time control system

    NASA Astrophysics Data System (ADS)

    Blanchot, Loic; Lebec, Martial; Beaurepaire, Emmanuel; Gleyzes, Philippe; Boccara, A. Claude; Saint-Jalmes, Herve

    1997-12-01

    We describe the design of a versatile electronic system performing lock-in detection in parallel on every pixel of a 2D CCD camera. The system is based on a multiplexed lock-in detection method that requires accurate synchronization of the camera, the excitation signal and the processing computer. This device has been incorporated in an imaging setup based on the optical coherence tomography principle, enabling the acquisition of a full 2D head-on image without scanning. The imaging experiment is implemented on a modified commercial microscope. Lateral resolution is on the order of 2 micrometers, and the coherence length of the light source defines an axial resolution of approximately 8 micrometers. Images of onion cells a few hundred microns deep into the sample are obtained with 100 dB sensitivity.

  1. Extended imaging depth to 12 mm for 1050-nm spectral domain optical coherence tomography for imaging the whole anterior segment of the human eye at 120-kHz A-scan rate

    NASA Astrophysics Data System (ADS)

    Li, Peng; An, Lin; Lan, Gongpu; Johnstone, Murray; Malchow, Doug; Wang, Ruikang K.

    2013-01-01

    We demonstrate a 1050-nm spectral domain optical coherence tomography (OCT) system with a 12 mm imaging depth in air, a 120 kHz A-scan rate and a 10 μm axial resolution for anterior-segment imaging of the human eye, in which a new prototype InGaAs line-scan camera with 2048 active-pixel photodiodes is employed to record the OCT spectral interferograms in parallel. Combined with the full-range complex technique, the system delivers imaging performance comparable to that of a swept-source OCT with similar system specifications.
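    As a rough consistency check, the quoted depth follows from the standard SD-OCT sampling relation. In the Python sketch below the ~95 nm spectrometer wavelength span is an assumed value (it is not quoted in the abstract); only the 1050 nm center wavelength and the 2048-pixel camera come from the text.

      # z_max = lambda0^2 / (4 * delta_lambda), with delta_lambda = span / n_pixels;
      # the full-range complex technique uses both sides of zero delay, doubling z_max.
      def sdoct_max_depth_mm(center_nm, span_nm, n_pixels, full_range=False):
          delta_lambda = span_nm / n_pixels            # spectral sampling interval, nm
          z_max_mm = center_nm ** 2 / (4.0 * delta_lambda) * 1e-6
          return 2.0 * z_max_mm if full_range else z_max_mm

      print(sdoct_max_depth_mm(1050.0, 95.0, 2048, full_range=True))   # ~11.9 mm in air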

  2. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues.

    PubMed

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-01-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology. PMID:27358000

  3. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    NASA Astrophysics Data System (ADS)

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.

  4. Intravascular imaging in Kounis syndrome: role of IVUS and OCT in achieving an etiopathogenic diagnosis

    PubMed Central

    Domínguez, Fernando; Santos, Susana Mingo; Escudier-Villa, Juan Manuel; Jiménez-Sánchez, Diego; Artaza, Josebe Goirigolzarri; Alonso-Pulpón, Luis; Goicolea, Javier

    2015-01-01

    We report the case of a 60-year-old male patient presenting with an anaphylactic reaction to anchovies associated with an acute coronary syndrome. His history was remarkable for coronary artery disease treated with a drug-eluting stent in the right coronary artery six years earlier, and a stent fracture documented by coronary angiography four years prior to the event. Coronary angiography on admission revealed a very late stent thrombosis (VLST) in the right coronary artery. Intracoronary imaging techniques (IVUS and OCT) were used and were key to ruling out the main causes of VLST. We describe the characteristics of the intracoronary images, along with the advantages and disadvantages of these techniques. The findings described in this case could explain a new physiopathological mechanism of stent thrombosis occurring in stent fractures. PMID:25774348

  5. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues

    PubMed Central

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-01-01

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology. PMID:27358000

  6. Eigenspectra optoacoustic tomography achieves quantitative blood oxygenation imaging deep in tissues.

    PubMed

    Tzoumas, Stratis; Nunes, Antonio; Olefir, Ivan; Stangl, Stefan; Symvoulidis, Panagiotis; Glasl, Sarah; Bayer, Christine; Multhoff, Gabriele; Ntziachristos, Vasilis

    2016-06-30

    Light propagating in tissue attains a spectrum that varies with location due to wavelength-dependent fluence attenuation, an effect that causes spectral corruption. Spectral corruption has limited the quantification accuracy of optical and optoacoustic spectroscopic methods, and impeded the goal of imaging blood oxygen saturation (sO2) deep in tissues; a critical goal for the assessment of oxygenation in physiological processes and disease. Here we describe light fluence in the spectral domain and introduce eigenspectra multispectral optoacoustic tomography (eMSOT) to account for wavelength-dependent light attenuation, and estimate blood sO2 within deep tissue. We validate eMSOT in simulations, phantoms and animal measurements and spatially resolve sO2 in muscle and tumours, validating our measurements with histology data. eMSOT shows substantial sO2 accuracy enhancement over previous optoacoustic methods, potentially serving as a valuable tool for imaging tissue pathophysiology.

  7. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging.

    PubMed

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-01-01

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models. PMID:27297211

  8. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging.

    PubMed

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-06-14

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models.

  9. A luciferin analogue generating near-infrared bioluminescence achieves highly sensitive deep-tissue imaging

    PubMed Central

    Kuchimaru, Takahiro; Iwano, Satoshi; Kiyama, Masahiro; Mitsumata, Shun; Kadonosono, Tetsuya; Niwa, Haruki; Maki, Shojiro; Kizaka-Kondoh, Shinae

    2016-01-01

    In preclinical cancer research, bioluminescence imaging with firefly luciferase and D-luciferin has become a standard to monitor biological processes both in vitro and in vivo. However, the emission maximum (λmax) of bioluminescence produced by D-luciferin is 562 nm where light is not highly penetrable in biological tissues. This emphasizes a need for developing a red-shifted bioluminescence imaging system to improve detection sensitivity of targets in deep tissue. Here we characterize the bioluminescent properties of the newly synthesized luciferin analogue, AkaLumine-HCl. The bioluminescence produced by AkaLumine-HCl in reactions with native firefly luciferase is in the near-infrared wavelength ranges (λmax=677 nm), and yields significantly increased target-detection sensitivity from deep tissues with maximal signals attained at very low concentrations, as compared with D-luciferin and emerging synthetic luciferin CycLuc1. These characteristics offer a more sensitive and accurate method for non-invasive bioluminescence imaging with native firefly luciferase in various animal models. PMID:27297211

  10. Monocular catadioptric panoramic depth estimation via caustics-based virtual scene transition.

    PubMed

    He, Yu; Wang, Lingxue; Cai, Yi; Xue, Wei

    2016-09-01

    Existing catadioptric panoramic depth estimation systems usually require two panoramic imaging subsystems to achieve binocular disparity. The system structures are complicated and only sparse depth maps can be obtained. We present a novel monocular catadioptric panoramic depth estimation method that achieves dense depth maps of panoramic scenes using a single unmodified conventional catadioptric panoramic imaging system. Caustics model the reflection of the curved mirror and establish the distance relationship between the virtual and real panoramic scenes to overcome the nonlinear problem of the curved mirror. Virtual scene depth is then obtained by applying our structure classification regularization to depth from defocus. Finally, real panoramic scene depth is recovered using the distance relationship. Our method's effectiveness is demonstrated in experiments. PMID:27607512

  11. Anisotropic magnetism, resistivity, London penetration depth and magneto-optical imaging of superconducting K0.80Fe1.76Se2 single crystals

    NASA Astrophysics Data System (ADS)

    Hu, R.; Cho, K.; Kim, H.; Hodovanets, H.; Straszheim, W. E.; Tanatar, M. A.; Prozorov, R.; Bud'ko, S. L.; Canfield, P. C.

    2011-06-01

    Single crystals of K0.80Fe1.76Se2 were successfully grown from a ternary solution. We show that, although crystals form when cooling a near-stoichiometric melt, crystals are actually growing out of a ternary solution that remains liquid to at least 850 °C. We investigated their chemical composition, anisotropic magnetic susceptibility and resistivity, specific heat, thermoelectric power, London penetration depth and flux penetration via magneto-optical imaging. Whereas the samples appear to be homogeneously superconducting at low temperatures, there appears to be a broadened transition range close to Tc ~ 30 K that may be associated with small variations in stoichiometry.

  12. Depth-resolved mid-infrared photothermal imaging of living cells and organisms with submicrometer spatial resolution

    PubMed Central

    Zhang, Delong; Li, Chen; Zhang, Chi; Slipchenko, Mikhail N.; Eakins, Gregory; Cheng, Ji-Xin

    2016-01-01

    Chemical contrast has long been sought for label-free visualization of biomolecules and materials in complex living systems. Although infrared spectroscopic imaging has come a long way in this direction, it is thus far only applicable to dried tissues because of the strong infrared absorption by water. It also suffers from low spatial resolution due to long wavelengths and lacks optical sectioning capabilities. We overcome these limitations through sensing vibrational absorption–induced photothermal effect by a visible laser beam. Our mid-infrared photothermal (MIP) approach reached 10 μM detection sensitivity and submicrometer lateral spatial resolution. This performance has exceeded the diffraction limit of infrared microscopy and allowed label-free three-dimensional chemical imaging of live cells and organisms. Distributions of endogenous lipid and exogenous drug inside single cells were visualized. We further demonstrated in vivo MIP imaging of lipids and proteins in Caenorhabditis elegans. The reported MIP imaging technology promises broad applications from monitoring metabolic activities to high-resolution mapping of drug molecules in living systems, which are beyond the reach of current infrared microscopy. PMID:27704043

  13. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high-speed imaging and depth-profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at a high repetition rate, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to imaging with non-laser-based mass spectrometers and to various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  14. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high-speed imaging and depth-profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at a high repetition rate, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to imaging with non-laser-based mass spectrometers and to various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  15. Combining hard and soft magnetism into a single core-shell nanoparticle to achieve both hyperthermia and image contrast

    PubMed Central

    Yang, Qiuhong; Gong, Maogang; Cai, Shuang; Zhang, Ti; Douglas, Justin T; Chikan, Viktor; Davies, Neal M; Lee, Phil; Choi, In-Young; Ren, Shenqiang; Forrest, M Laird

    2015-01-01

    Background Biocompatible core/shell structured magnetic nanoparticles (MNPs) were developed to mediate simultaneous cancer therapy and imaging. Methods & results A 22-nm MNP was first synthesized via magnetic coupling of hard (FePt) and soft (Fe3O4) materials to produce high relative energy transfer. Colloidal stability of the FePt@Fe3O4 MNPs was achieved through surface modification with silane-polyethylene glycol (PEG). Intravenous administration of PEG-MNPs into tumor-bearing mice resulted in sustained particle accumulation in the tumor region, and the tumor burden of treated mice was one-third that of the mice in the control groups 2 weeks after a local hyperthermia treatment. In vivo magnetic resonance imaging exhibited enhanced T2 contrast in the tumor region. Conclusion This work has demonstrated the feasibility of cancer theranostics with PEG-MNPs. PMID:26606855

  16. Reconstruction of Indoor Models Using Point Clouds Generated from Single-Lens Reflex Cameras and Depth Images

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Wu, T.-S.; Lee, I.-C.; Chang, H.; Su, A. Y. S.

    2015-05-01

    This paper presents a data acquisition system consisting of multiple RGB-D sensors and digital single-lens reflex (DSLR) cameras. A systematic data processing procedure for integrating these two kinds of devices to generate three-dimensional point clouds of indoor environments is also developed and described. In the developed system, DSLR cameras are used to bridge the Kinects and provide a more accurate ray intersection condition, taking advantage of the higher resolution and image quality of the DSLR cameras. Structure from Motion (SFM) reconstruction is used to link and merge multiple Kinect point clouds and dense point clouds (from DSLR color images) to generate initial integrated point clouds. Then, bundle adjustment is used to resolve the exterior orientation (EO) of all images. These exterior orientations are used as initial values to combine the point clouds at each frame into the same coordinate system using a Helmert (seven-parameter) transformation. Experimental results demonstrate that the design of the data acquisition system and the data processing procedure can successfully generate dense and fully colored point clouds of indoor environments, even in featureless areas. The accuracy of the generated point clouds was evaluated by comparing the widths and heights of identified objects, as well as the coordinates of pre-set independent check points, against in situ measurements. Based on the generated point clouds, complete and accurate three-dimensional models of indoor environments can be constructed effectively.

  17. High-resolution cellular MRI: gadolinium and iron oxide nanoparticles for in-depth dual-cell imaging of engineered tissue constructs.

    PubMed

    Di Corato, Riccardo; Gazeau, Florence; Le Visage, Catherine; Fayol, Delphine; Levitz, Pierre; Lux, François; Letourneur, Didier; Luciani, Nathalie; Tillement, Olivier; Wilhelm, Claire

    2013-09-24

    Recent advances in cell therapy and tissue engineering opened new windows for regenerative medicine, but still necessitate innovative noninvasive imaging technologies. We demonstrate that high-resolution magnetic resonance imaging (MRI) allows combining cellular-scale resolution with the ability to detect two cell types simultaneously at any tissue depth. Two contrast agents, based on iron oxide and gadolinium oxide rigid nanoplatforms, were used to "tattoo" endothelial cells and stem cells, respectively, with no impact on cell functions, including their capacity for differentiation. The labeled cells' contrast properties were optimized for simultaneous MRI detection: endothelial cells and stem cells seeded together in a polysaccharide-based scaffold material for tissue engineering appeared respectively in black and white and could be tracked, at the cellular level, both in vitro and in vivo. In addition, endothelial cells labeled with iron oxide nanoparticles could be remotely manipulated by applying a magnetic field, allowing the creation of vessel substitutes with in-depth detection of individual cellular components.

  18. High-resolution cellular MRI: gadolinium and iron oxide nanoparticles for in-depth dual-cell imaging of engineered tissue constructs.

    PubMed

    Di Corato, Riccardo; Gazeau, Florence; Le Visage, Catherine; Fayol, Delphine; Levitz, Pierre; Lux, François; Letourneur, Didier; Luciani, Nathalie; Tillement, Olivier; Wilhelm, Claire

    2013-09-24

    Recent advances in cell therapy and tissue engineering opened new windows for regenerative medicine, but still necessitate innovative noninvasive imaging technologies. We demonstrate that high-resolution magnetic resonance imaging (MRI) allows combining cellular-scale resolution with the ability to detect two cell types simultaneously at any tissue depth. Two contrast agents, based on iron oxide and gadolinium oxide rigid nanoplatforms, were used to "tattoo" endothelial cells and stem cells, respectively, with no impact on cell functions, including their capacity for differentiation. The labeled cells' contrast properties were optimized for simultaneous MRI detection: endothelial cells and stem cells seeded together in a polysaccharide-based scaffold material for tissue engineering appeared respectively in black and white and could be tracked, at the cellular level, both in vitro and in vivo. In addition, endothelial cells labeled with iron oxide nanoparticles could be remotely manipulated by applying a magnetic field, allowing the creation of vessel substitutes with in-depth detection of individual cellular components. PMID:23924160

  19. Quantitative estimation of Secchi disk depth using the HJ-1B CCD image and in situ observations in Sishili Bay, China

    NASA Astrophysics Data System (ADS)

    Yu, Dingfeng; Zhou, Bin; Fan, Yanguo; Li, Tantan; Liang, Shouzhen; Sun, Xiaoling

    2014-11-01

    Secchi disk depth (SDD) is an important optical property of water related to water quality and primary production. The traditional sampling method is not only time-consuming and labor-intensive but also limited in terms of temporal and spatial coverage, while remote sensing technology can deal with these limitations. In this study, models estimating SDD have been proposed based on regression analysis between the HJ-1 satellite CCD image and synchronous in situ water quality measurements. The results illustrate that a band ratio model based on the CCD B3/B1 ratio could be used to estimate Secchi depth in this region, with a mean relative error (MRE) of 8.6% and a root mean square error (RMSE) of 0.1 m. This model has been applied to one HJ-1 satellite CCD image, generating a water transparency map for June 23, 2009, which will be of immense value for environmental monitoring. In addition, SDD was deeper in offshore waters than in inshore waters. River runoff, hydrodynamic environments, and marine aquaculture are the main factors influencing SDD in this area.
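    A band-ratio retrieval of this kind reduces to a small regression plus the two error metrics quoted above. The Python sketch below works through the steps on synthetic match-ups; the linear model form and every number are illustrative assumptions, not the study's coefficients.

      import numpy as np

      def fit_band_ratio_model(b3, b1, sdd):
          """Regress in situ Secchi depth against the B3/B1 band ratio."""
          slope, intercept = np.polyfit(b3 / b1, sdd, 1)
          return slope, intercept

      def evaluate(b3, b1, sdd, slope, intercept):
          pred = slope * (b3 / b1) + intercept
          mre = np.mean(np.abs(pred - sdd) / sdd) * 100.0     # mean relative error, %
          rmse = np.sqrt(np.mean((pred - sdd) ** 2))          # root mean square error, m
          return pred, mre, rmse

      # Synthetic "in situ" match-ups for illustration only.
      rng = np.random.default_rng(1)
      b1 = rng.uniform(0.02, 0.06, 30)             # band-1 reflectance
      b3 = b1 * rng.uniform(0.8, 1.6, 30)          # band-3 reflectance
      sdd = 2.0 * (b3 / b1) - 0.5 + rng.normal(0, 0.08, 30)
      slope, intercept = fit_band_ratio_model(b3, b1, sdd)
      _, mre, rmse = evaluate(b3, b1, sdd, slope, intercept)
      print(f"MRE = {mre:.1f}%, RMSE = {rmse:.2f} m")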

  20. Single particle quantum dot imaging achieves ultrasensitive detection capabilities for Western immunoblot analysis.

    PubMed

    Scholl, Benjamin; Liu, Hong Yan; Long, Brian R; McCarty, Owen J T; O'Hare, Thomas; Druker, Brian J; Vu, Tania Q

    2009-06-23

    Substantially improved detection methods are needed to detect fractionated protein samples present at trace concentrations in complex, heterogeneous tissue and biofluid samples. Here we describe a modification of traditional Western immunoblotting using a technique to count quantum-dot-tagged proteins on optically transparent PVDF membranes. Counts of quantum-dot-tagged proteins on immunoblots achieved optimal detection sensitivity of 0.2 pg and a sample size of 100 cells. This translates to a 10³-fold improvement in detection sensitivity and a 10²-fold reduction in required cell sample, compared to traditional Westerns processed using the same membrane immunoblots. Quantum dot fluorescent blinking analysis showed that detection of single QD-tagged proteins is possible and that detected points of fluorescence consist of one or a few (<9) QDs. The application of single nanoparticle detection capabilities to Western blotting technologies may provide a new solution to a broad range of applications currently limited by insufficient detection sensitivity and/or sample availability.

  1. Multiangle Imaging Spectroradiometer (MISR) Global Aerosol Optical Depth Validation Based on 2 Years of Coincident Aerosol Robotic Network (AERONET) Observations

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph A.; Gaitley, Barbara J.; Martonchik, John V.; Diner, David J.; Crean, Kathleen A.; Holben, Brent

    2005-01-01

    Performance of the Multiangle Imaging Spectroradiometer (MISR) early postlaunch aerosol optical thickness (AOT) retrieval algorithm is assessed quantitatively over land and ocean by comparison with a 2-year measurement record of globally distributed AERONET Sun photometers. There are sufficient coincident observations to stratify the data set by season and expected aerosol type. In addition to reporting uncertainty envelopes, we identify trends and outliers, and investigate their likely causes, with the aim of refining algorithm performance. Overall, about 2/3 of the MISR-retrieved AOT values fall within [0.05 or 20% × AOT] of the Aerosol Robotic Network (AERONET) values. More than a third are within [0.03 or 10% × AOT]. Correlation coefficients are highest for maritime stations (approx. 0.9) and lowest for dusty sites (more than approx. 0.7). Retrieved spectral slopes closely match Sun photometer values for biomass burning and continental aerosol types. Detailed comparisons suggest that adding to the algorithm climatology more absorbing spherical particles, more realistic dust analogs, and a richer selection of multimodal aerosol mixtures would reduce the remaining discrepancies for MISR retrievals over land; in addition, refining instrument low-light-level calibration could reduce or eliminate a small but systematic offset in maritime AOT values. On the basis of cases for which current particle models are representative, a second-generation MISR aerosol retrieval algorithm incorporating these improvements could provide AOT accuracy unprecedented for a spaceborne technique.
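    The agreement envelope quoted above is simple to apply. The Python sketch below computes the fraction of retrievals falling inside it, reading the bracket as the larger of the absolute and relative tolerances (a common convention, assumed here); the sample AOT values are made up.

      import numpy as np

      def fraction_within_envelope(aot_misr, aot_aeronet, abs_tol=0.05, rel_tol=0.20):
          """Fraction of retrievals with |MISR - AERONET| <= max(abs_tol, rel_tol * AERONET)."""
          envelope = np.maximum(abs_tol, rel_tol * aot_aeronet)
          return np.mean(np.abs(aot_misr - aot_aeronet) <= envelope)

      aeronet = np.array([0.04, 0.10, 0.22, 0.35, 0.60, 1.10])   # illustrative AOT values
      misr    = np.array([0.07, 0.12, 0.19, 0.42, 0.55, 1.35])
      print(fraction_within_envelope(misr, aeronet))              # 0.05 / 20% envelope
      print(fraction_within_envelope(misr, aeronet, 0.03, 0.10))  # tighter 0.03 / 10% envelope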

  2. Static stereo vision depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, D. B.; Von Sydow, M.

    1988-01-01

    A major problem in high-precision teleoperation is the high-resolution presentation of depth information. Stereo television has so far proved to be only a partial solution, due to an inherent trade-off among depth resolution, depth distortion and the alignment of the stereo image pair. Converged cameras can guarantee image alignment but suffer significant depth distortion when configured for high depth resolution. Moving the stereo camera rig to scan the work space further distorts depth. The 'dynamic' (camera-motion induced) depth distortion problem was solved by Diner and Von Sydow (1987), who have quantified the 'static' (camera-configuration induced) depth distortion. In this paper, a stereo image presentation technique which yields aligned images, high depth resolution and low depth distortion is demonstrated, thus solving the trade-off problem.

  3. Teaching image-processing concepts in junior high school: boys' and girls' achievements and attitudes towards technology

    NASA Astrophysics Data System (ADS)

    Barak, Moshe; Asad, Khaled

    2012-04-01

    Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these subjects to the children's world and to the digital culture characterizing society today. Sample: The participants were 60 junior high-school students (9th grade). Design and method: Data collection included observations in the classes, administering an attitude questionnaire before and after the course, giving an achievement exam and analyzing the students' final projects. Results and conclusions: The findings indicated that boys' and girls' achievements were similar throughout the course, and all managed to handle the mathematical knowledge without any particular difficulties. Learners' motivation to engage in the subject was high in the project-based learning part of the course in which they dealt, for instance, with editing their own pictures and experimenting with a facial recognition method. However, the students were less interested in learning the theory at the beginning of the course. The course increased the girls', more than the boys', interest in learning scientific-technological subjects in school, and the gender gap in this regard was bridged.

  4. Flexible depth of field photography.

    PubMed

    Kuthirummal, Sujit; Nagahara, Hajime; Zhou, Changyin; Nayar, Shree K

    2011-01-01

    The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which causes the DOF to reduce. Also, today's cameras have DOFs that correspond to a single slab that is perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured, both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate four applications of flexible DOF. First, we describe extended DOF where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Deconvolving a captured image with a single blur kernel gives an image with extended DOF and high SNR. Next, we show the capture of images with discontinuous DOFs. For instance, near and far objects can be imaged with sharpness, while objects in between are severely blurred. Third, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. Finally, we demonstrate how our camera can be used to realize nonplanar DOFs. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics. PMID:21088319
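
    The post-capture step described above can be sketched with a generic frequency-domain deconvolution; Wiener filtering is used here purely as an illustrative choice, since the abstract does not name the exact deconvolution method, and the kernel, SNR value, and toy blur are assumptions.

    import numpy as np

    def wiener_deconvolve(blurred, kernel, snr=100.0):
        """Deconvolve an image with a single, depth-independent blur kernel."""
        H = np.fft.fft2(kernel, s=blurred.shape)       # kernel transfer function
        G = np.fft.fft2(blurred)
        W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
        return np.real(np.fft.ifft2(W * G))

    # Synthetic check with a toy 7x7 box blur:
    # img = np.random.rand(256, 256)
    # k = np.zeros((256, 256)); k[:7, :7] = 1.0 / 49
    # blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k)))
    # restored = wiener_deconvolve(blurred, k)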

  5. An Examination of the Relationship between Gifted Students' Self-Image, Gifted Program Model, Years in the Program, and Academic Achievement

    ERIC Educational Resources Information Center

    Creasy, Lydia A.

    2012-01-01

    This study examined the correlations between gifted students' self-image, academic achievement, and number of years enrolled in the gifted programming. In addition, the study examined the relationships between gifted students' educational placement, race, and gender with self-image. Study participants were gifted students in third…

  6. Cost-effective instrumentation for quantitative depth measurement of optic nerve head using stereo fundus image pair and image cross correlation techniques

    NASA Astrophysics Data System (ADS)

    de Carvalho, Luis Alberto V.; Carvalho, Valeria

    2014-02-01

    One of the main problems with glaucoma throughout the world is that there are typically no symptoms in the early stages. Many people who have the disease do not know they have it, and by the time one finds out, the disease is usually in an advanced stage. Most retinal cameras available in the market today use sophisticated optics and have several other features/capabilities (wide-angle optics, red-free and angiography filters, etc.) that make them expensive for general practice or for screening purposes. Therefore, it is important to develop instrumentation that is fast, effective and economical, in order to reach the mass public in general eye-care centers. In this work, we have constructed the hardware and software of a cost-effective and non-mydriatic prototype device that allows fast capturing and plotting of high-resolution quantitative 3D images and videos of the optic disc head and neighboring region (30° field of view). The main application of this device is glaucoma screening, although it may also be useful for the diagnosis of other pathologies related to the optic nerve.
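
    A minimal sketch of the kind of computation such an instrument relies on is given below: the horizontal disparity of a patch is found by normalized cross-correlation between the stereo pair, and depth follows from the pinhole stereo relation. Patch size, search range, and the calibration constants are illustrative assumptions, not the device's actual parameters.

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized patches."""
        a = a - a.mean(); b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
        return float((a * b).sum() / denom)

    def disparity_at(left, right, y, x, patch=15, search=40):
        """Best horizontal shift of a left-image patch located in the right image.
        Assumes (y, x) lies well inside both images."""
        h = patch // 2
        ref = left[y - h:y + h + 1, x - h:x + h + 1]
        best_d, best_score = 0, -np.inf
        for d in range(search):
            x0 = x - d - h
            if x0 < 0:
                break
            score = ncc(ref, right[y - h:y + h + 1, x0:x0 + patch])
            if score > best_score:
                best_d, best_score = d, score
        return best_d

    def depth_from_disparity(d, focal_px, baseline_mm):
        """Pinhole stereo relation: depth = f * B / disparity."""
        return focal_px * baseline_mm / max(d, 1)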

  7. Predefined Redundant Dictionary for Effective Depth Maps Representation

    NASA Astrophysics Data System (ADS)

    Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi

    2016-01-01

    The multi-view video plus depth (MVD) video format consists of two components, texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires a compression process for storage and especially for transmission. Conventional codecs are efficient for texture image compression but are not suited to the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at the position of object boundaries. Preserving these characteristics is important to enable high-quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments confirm the effectiveness of the approach at producing sparse representations and its competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and to offer an advantage over the ongoing 3D High Efficiency Video Coding compression standard, particularly at medium and high bitrates.
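
    The greedy selection step mentioned above can be illustrated with a generic orthogonal matching pursuit (OMP) routine; the paper's particular mixed dictionary is not reproduced here, so D stands for any column-normalized dictionary and x for a vectorized depth patch.

    import numpy as np

    def omp(D, x, n_nonzero):
        """Greedy sparse coding: pick atoms by correlation, refit by least squares."""
        residual = x.copy()
        support = []
        for _ in range(n_nonzero):
            correlations = np.abs(D.T @ residual)
            correlations[support] = -np.inf          # do not reselect chosen atoms
            support.append(int(np.argmax(correlations)))
            coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coeffs
        code = np.zeros(D.shape[1])
        code[support] = coeffs
        return code

    # D = np.random.randn(64, 128); D /= np.linalg.norm(D, axis=0)
    # code = omp(D, np.random.randn(64), n_nonzero=8)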

  8. Climatology of the aerosol optical depth by components from the Multi-angle Imaging SpectroRadiometer (MISR) and chemistry transport models

    NASA Astrophysics Data System (ADS)

    Lee, Huikyo; Kalashnikova, Olga V.; Suzuki, Kentaroh; Braverman, Amy; Garay, Michael J.; Kahn, Ralph A.

    2016-06-01

    The Multi-angle Imaging SpectroRadiometer (MISR) Joint Aerosol (JOINT_AS) Level 3 product has provided a global, descriptive summary of MISR Level 2 aerosol optical depth (AOD) and aerosol type information for each month over 16+ years since March 2000. Using Version 1 of JOINT_AS, which is based on the operational (Version 22) MISR Level 2 aerosol product, this study analyzes, for the first time, characteristics of observed and simulated distributions of AOD for three broad classes of aerosols - spherical nonabsorbing, spherical absorbing, and nonspherical - near or downwind of their major source regions. The statistical moments (means, standard deviations, and skewnesses) and distributions of AOD by components derived from the JOINT_AS are compared with results from two chemistry transport models (CTMs), the Goddard Chemistry Aerosol Radiation and Transport (GOCART) and SPectral RadIatioN-TrAnSport (SPRINTARS) models. Overall, the AOD distributions retrieved from MISR and modeled by GOCART and SPRINTARS agree with each other in a qualitative sense. Marginal distributions of AOD for each aerosol type in both MISR and the models show considerable positive skewness, which indicates the importance of including extreme AOD events when comparing satellite retrievals with models. The MISR JOINT_AS product will greatly facilitate comparisons between satellite observations and model simulations of aerosols by type.

  9. Climatology of the aerosol optical depth by components from the Multiangle Imaging SpectroRadiometer (MISR) and a high-resolution chemistry transport model

    NASA Astrophysics Data System (ADS)

    Lee, H.; Kalashnikova, O. V.; Suzuki, K.; Braverman, A.; Garay, M. J.; Kahn, R. A.

    2015-12-01

    The Multi-angle Imaging SpectroRadiometer (MISR) Joint Aerosol (JOINT_AS) Level 3 product provides a global, descriptive summary of MISR Level 2 aerosol optical depth (AOD) and aerosol type information for each month between March 2000 and the present. Using Version 1 of JOINT_AS, which is based on the operational (Version 22) MISR Level 2 aerosol product, this study analyzes, for the first time, characteristics of observed and simulated distributions of AOD for three broad classes of aerosols - non-absorbing, absorbing, and non-spherical - near or downwind of their major source regions. The statistical moments (means, standard deviations, and skewnesses) and distributions of AOD by components derived from the JOINT_AS are compared with results from the SPectral RadIatioN-TrAnSport (SPRINTARS) model, a chemistry transport model (CTM) with very high spatial and temporal resolution. Overall, the AOD distributions of combined MISR aerosol types show good agreement with those from SPRINTARS. Marginal distributions of AOD for each aerosol type in both MISR and SPRINTARS show considerable positive skewness, which indicates the importance of including extreme AOD events when comparing satellite retrievals with models. The MISR JOINT_AS product will greatly facilitate comparisons between satellite observations and model simulations of aerosols by type.

  10. Evaluation of Choroidal Thickness and Volume during the Third Trimester of Pregnancy using Enhanced Depth Imaging Optical Coherence Tomography: A Pilot Study

    PubMed Central

    Meira, Dália M; Oliveira, Marisa A; Ribeiro, Lígia F; Fonseca, Sofia L

    2015-01-01

    Background: During pregnancy the maternal choroid is exposed to the multiple haemodynamic and hormonal alterations inherent to this physiological condition. These changes may influence choroidal anatomy. In this study a quantitative assessment of overall choroidal structure is performed by constructing a 3-dimensional topographic map of this vascular bed. Purpose: To compare the thickness and volume of the maternal choroid in the third trimester of pregnancy with that of an age-matched control group of women. Materials and Methods: Twenty-four eyes of 12 pregnant women in the last trimester and 12 age-matched healthy controls (24 eyes) were included. Optical coherence tomography in enhanced depth imaging mode was used to construct maps of the choroid of the macular area. Choroidal thickness and volume were automatically calculated for the 9 subfields defined by the Early Treatment Diabetic Retinopathy Study (ETDRS). A comparative analysis between the two groups was performed using the two-way ANOVA test. Results: The average thickness of the choroid for the entire ETDRS area was 295.15 ±42.40 μm in the pregnant group and 271.56 ±37.65 μm in the control group (p=0.051). The average choroidal volume was 8.05 ±1.12 mm3 and 7.46 ±1.03 mm3, respectively (p=0.067). Although the choroid of the pregnant group had larger thickness and volume in all subfields compared to the control group, this difference was statistically significant only in three regions - the central subfield, minimum foveal thickness and inferior inner macula (p<0.05). Conclusion: Our study suggests that in the third trimester of pregnancy the choroid may be subjected to physiological changes in structure. Whether these changes are a result of hormonal and/or haemodynamic adaptations of pregnancy remains to be studied. PMID:26435977

  11. High-dimensional camera shake removal with given depth map.

    PubMed

    Yue, Tao; Suo, Jinli; Dai, Qionghai

    2014-06-01

    Camera motion blur is drastically nonuniform for large depth-range scenes: the nonuniformity caused by camera translation is depth dependent, whereas that caused by camera rotation is not. To restore the blurry images of large-depth-range scenes deteriorated by arbitrary camera motion, we build an image blur model considering 6 degrees of freedom (DoF) of camera motion with a given scene depth map. To make this 6D depth-aware model tractable, we propose a novel parametrization strategy to reduce the number of variables and an effective method to estimate the high-dimensional camera motion as well. The number of variables is reduced by a temporal sampling motion function, which describes the 6-DoF camera motion by sampling the camera trajectory uniformly in the time domain. To effectively estimate the high-dimensional camera motion parameters, we construct a probabilistic motion density function (PMDF) to describe the probability distribution of camera poses during exposure, and apply it as a unified constraint to guide the convergence of the iterative deblurring algorithm. Specifically, the PMDF is computed through a back projection from 2D local blur kernels to the 6D camera motion parameter space and robust voting. We conduct a series of experiments on both synthetic and real captured data, and validate that our method achieves better performance than existing uniform and nonuniform methods on large-depth-range scenes.

  12. Event-related functional magnetic resonance imaging (efMRI) of depth-by-disparity perception: additional evidence for right-hemispheric lateralization.

    PubMed

    Baecke, Sebastian; Lützkendorf, Ralf; Tempelmann, Claus; Müller, Charles; Adolf, Daniela; Scholz, Michael; Bernarding, Johannes

    2009-07-01

    In natural environments, depth-related information has to be extracted very quickly from binocular disparity, even if cues are presented only briefly. However, few studies have used efMRI to study depth perception. We therefore analyzed the extent and localization of activation evoked by depth-by-disparity stimuli that were displayed for 1 s. As some clinical as well as neuroimaging studies had found a right-hemispheric lateralization of depth perception, the sample size was increased to 26 subjects to gain higher statistical power. All individuals reported a stable depth perception. In the random effects analysis, the maximum activation of the disparity versus no-disparity condition was highly significant and located in the extra-striate cortex, presumably in V3A (P < 0.05, family-wise error). The activation was more pronounced in the right hemisphere. However, in the single-subject analysis, depth-related right-hemispheric lateralization was observed in only 65% of the subjects. Lateralization of depth-by-disparity may therefore be obscured in smaller groups.

  13. Recent achievements in measurements of soot volume fraction and temperatures in a coflow, diffuse Ethylene-air flame by visible image processing

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-Chun; Lou, Chun; Lu, Jing

    2009-02-01

    In this review paper, recent achievements in measurements of soot volume fraction and temperature in a coflow, diffuse ethylene-air flame by visible image processing are briefly outlined. For the inverse analysis of the radiative properties and temperatures, different methods show different features. The least-squares method, a regularization method and a linear programming method are all suitable for this problem, with the linear programming method giving the most reasonable results. The red, green and blue flame images, which can be captured by a colour CCD camera, can be treated approximately as monochromatic images and used to reconstruct temperature and soot volume fraction, although true monochromatic images obtained with filters at selected wavelengths are more ideal. Finally, the optically thin assumption, which is widely adopted, causes large errors, about 100 K for temperature and 50% for soot volume fraction, because the absorption of the flame medium is neglected.

  14. Jupiter Clouds in Depth

    NASA Technical Reports Server (NTRS)

    2000-01-01

    [Figures removed for brevity, see original site: methane-band filter images at 619 nm, 727 nm and 890 nm.]

    Images from NASA's Cassini spacecraft using three different filters reveal cloud structures and movements at different depths in the atmosphere around Jupiter's south pole.

    Cassini's cameras come equipped with filters that sample three wavelengths where methane gas absorbs light. These are in the red at 619 nanometer (nm) wavelength and in the near-infrared at 727 nm and 890 nm. Absorption in the 619 nm filter is weak. It is stronger in the 727 nm band and very strong in the 890 nm band, where 90 percent of the light is absorbed by methane gas. Light in the weakest band can penetrate the deepest into Jupiter's atmosphere. It is sensitive to the amount of cloud and haze down to the pressure of the water cloud, which lies at a depth where the pressure is about 6 times the atmospheric pressure at sea level on Earth. Light in the strongest methane band is absorbed at high altitude and is sensitive only to the ammonia cloud level and higher (pressures less than about one-half of Earth's atmospheric pressure), and the middle methane band is sensitive to the ammonia and ammonium hydrosulfide cloud layers as deep as two times Earth's atmospheric pressure.

    The images shown here demonstrate the power of these filters in studies of cloud stratigraphy. The images cover latitudes from about 15 degrees north at the top down to the southern polar region at the bottom. The left and middle images are ratios, the image in the methane filter divided by the image at a nearby wavelength outside the methane band. Using ratios emphasizes where contrast is due to methane absorption and not to other factors, such as the absorptive properties of the cloud particles, which influence contrast at all wavelengths.

    The most prominent feature seen in all three filters is the polar stratospheric haze that makes Jupiter

  15. Enhanced up/down-conversion luminescence and heat: Simultaneously achieving in one single core-shell structure for multimodal imaging guided therapy.

    PubMed

    He, Fei; Feng, Lili; Yang, Piaoping; Liu, Bin; Gai, Shili; Yang, Guixin; Dai, Yunlu; Lin, Jun

    2016-10-01

    Upon near-infrared (NIR) light irradiation, the Nd3+ doping derived down-conversion luminescence (DCL) in the NIR region and the thermal effect are extremely attractive for bio-imaging and photothermal therapy (PTT). However, the opposite trends of these two properties induced by concentration quenching make it difficult to obtain the desired DCL and thermal effect together in a single particle. In this study, we first designed a unique NaGdF4:0.3%Nd@NaGdF4@NaGdF4:10%Yb/1%Er@NaGdF4:10%Yb@NaNdF4:10%Yb multiple core-shell structure. Here the two inert layers (NaGdF4 and NaGdF4:10%Yb) substantially eliminate the quenching effects, thus simultaneously achieving markedly enhanced NIR-to-NIR DCL, NIR-to-Vis up-conversion luminescence (UCL), and thermal effect under a single 808 nm excitation. The UCL excites the attached photosensitive drug (Au25 nanoclusters) to generate singlet oxygen (1O2) for photodynamic therapy (PDT), while the DCL with strong NIR emission serves as a probe for sensitive deep-tissue imaging. The in vitro and in vivo experimental results demonstrate the excellent cancer inhibition efficacy of this platform due to a synergistic effect arising from the combined PTT and PDT. Furthermore, multimodal imaging including fluorescence imaging (FI), photothermal imaging (PTI), and photoacoustic imaging (PAI) has been obtained, which is used to monitor the drug delivery process, the internal structure of the tumor and the photo-therapeutic process, thus achieving the goal of imaging-guided cancer therapy. PMID:27512942

  16. Dose reduction of up to 89% while maintaining image quality in cardiovascular CT achieved with prospective ECG gating

    NASA Astrophysics Data System (ADS)

    Londt, John H.; Shreter, Uri; Vass, Melissa; Hsieh, Jiang; Ge, Zhanyu; Adda, Olivier; Dowe, David A.; Sabllayrolles, Jean-Louis

    2007-03-01

    We present the results of dose and image quality performance evaluation of a novel, prospective ECG-gated Coronary CT Angiography acquisition mode (SnapShot Pulse, LightSpeed VCT-XT scanner, GE Healthcare, Waukesha, WI), and compare it to conventional retrospective ECG gated helical acquisition in clinical and phantom studies. Image quality phantoms were used to measure noise, slice sensitivity profile, in-plane resolution, low contrast detectability and dose, using the two acquisition modes. Clinical image quality and diagnostic confidence were evaluated in a study of 31 patients scanned with the two acquisition modes. Radiation dose reduction in clinical practice was evaluated by tracking 120 consecutive patients scanned with the prospectively gated scan mode. In the phantom measurements, the prospectively gated mode resulted in equivalent or better image quality measures at dose reductions of up to 89% compared to non-ECG modulated conventional helical scans. In the clinical study, image quality was rated excellent by expert radiologist reviewing the cases, with pathology being identical using the two acquisition modes. The average dose to patients in the clinical practice study was 5.6 mSv, representing 50% reduction compared to a similar patient population scanned with the conventional helical mode.

  17. Perceived depth from shading boundaries.

    PubMed

    Kim, Juno; Anstis, Stuart

    2016-01-01

    Shading is well known to provide information the visual system uses to recover the three-dimensional shape of objects. We examined conditions under which patterns in shading promote the experience of a change in depth at contour boundaries, rather than a change in reflectance. In Experiment 1, we used image manipulation to illuminate different regions of a smooth surface from different directions. This manipulation imposed local differences in shading direction across edge contours (delta shading). We found that increasing the angle of delta shading, from 0° to 180°, monotonically increased perceived depth across the edge. Experiment 2 found that the perceptual splitting of shading into separate foreground and background surfaces depended on an assumed light source from above prior. Image regions perceived as foreground structures in upright images appeared farther in depth when the same images were inverted. We also found that the experienced break in surface continuity could promote the experience of amodal completion of colored contours that were ambiguous as to their depth order (Experiment 3). These findings suggest that the visual system can identify occlusion relationships based on monocular variations in local shading direction, but interprets this information according to a light source from above prior of midlevel visual processing.

  18. Perceived depth from shading boundaries.

    PubMed

    Kim, Juno; Anstis, Stuart

    2016-01-01

    Shading is well known to provide information the visual system uses to recover the three-dimensional shape of objects. We examined conditions under which patterns in shading promote the experience of a change in depth at contour boundaries, rather than a change in reflectance. In Experiment 1, we used image manipulation to illuminate different regions of a smooth surface from different directions. This manipulation imposed local differences in shading direction across edge contours (delta shading). We found that increasing the angle of delta shading, from 0° to 180°, monotonically increased perceived depth across the edge. Experiment 2 found that the perceptual splitting of shading into separate foreground and background surfaces depended on an assumed light source from above prior. Image regions perceived as foreground structures in upright images appeared farther in depth when the same images were inverted. We also found that the experienced break in surface continuity could promote the experience of amodal completion of colored contours that were ambiguous as to their depth order (Experiment 3). These findings suggest that the visual system can identify occlusion relationships based on monocular variations in local shading direction, but interprets this information according to a light source from above prior of midlevel visual processing. PMID:27271807

  19. Teaching Image-Processing Concepts in Junior High School: Boys' and Girls' Achievements and Attitudes towards Technology

    ERIC Educational Resources Information Center

    Barak, Moshe; Asad, Khaled

    2012-01-01

    Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these…

  20. Academic Achievement and the Self-Image of Adolescents with Diabetes Mellitus Type-1 And Rheumatoid Arthritis.

    ERIC Educational Resources Information Center

    Erkolahti, Ritva; Ilonen, Tuula

    2005-01-01

    A total of 69 adolescents, 21 with diabetes mellitus type-1 (DM), 24 with rheumatoid arthritis (RA), and 24 controls matched for sex, age, social background, and living environment, were compared by means of their school grades and the Offer Self-Image Questionnaire. The ages of the children at the time of the diagnosis of the disease and its…

  1. Water depth estimation with ERTS-1 imagery

    NASA Technical Reports Server (NTRS)

    Ross, D. S.

    1973-01-01

    Contrast-enhanced 9.5-inch ERTS-1 images were produced for an investigation of ocean water color. Such images lend themselves to water depth estimation by photographic and electronic density contouring. MSS-4 and -5 images of the Great Bahama Bank were density sliced by both methods. Correlation was found between the MSS-4 image and a hydrographic chart at 1:467,000 scale in a number of areas corresponding to water depths of less than 2 meters, 5 to 10 meters, and 10 to about 20 meters. The MSS-5 image was restricted to depths of about 2 meters. Where a reflective bottom and clear water are found, ERTS-1 MSS-4 images can be used with density contouring by electronic or photographic methods to estimate depths to 5 meters within about one meter.
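
    Density slicing of the kind described above amounts to mapping brightness into a small number of depth classes; the sketch below does this with made-up thresholds, whereas the original work tied the classes to charted depths (<2 m, 5-10 m, and 10-20 m).

    import numpy as np

    def density_slice(band, thresholds=(60, 120, 180)):
        """Brightness classes: 0 = darkest (deepest water), 3 = brightest (shallowest)."""
        return np.digitize(np.asarray(band), thresholds)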

  2. SU-E-T-387: Achieving Optimal Patient Setup Imaging and Treatment Workflow Configurations in Multi-Room Proton Centers

    SciTech Connect

    Zhang, H; Prado, K; Langen, K; Yi, B; Mehta, M; Regine, W; D'Souza, W

    2014-06-01

    Purpose: To simulate patient flow in a proton treatment center under uncertainty and to explore the feasibility of treatment preparation rooms for improving patient throughput and cyclotron utilization. Methods: Three center layout scenarios were modeled: (S1: In-Tx room imaging) patient setup and imaging (planar/volumetric) performed in the treatment room; (S2: Patient setup in preparation room) each treatment room was assigned preparation room(s) equipped only with lasers for patient setup and gross patient alignment; and (S3: Patient setup and imaging in preparation room) preparation room(s) equipped with lasers and volumetric imaging for patient setup and gross and fine patient alignment, with a 'snap' imaging performed in the treatment room. For each scenario, the number of treatment rooms and the number of preparation rooms serving each treatment room were varied. We examined our results (averages over 100 16-hour (two-shift) working days) by evaluating patient throughput and cyclotron utilization. Results: As the number of treatment rooms increased from 1 to 5, daily patient throughput increased from 32 to 161, from 29 to 184, and from 27 to 184, and cyclotron utilization increased from 13% to 85%, from 12% to 98%, and from 11% to 98% for scenarios S1, S2 and S3, respectively. However, both measures plateaued after 4 rooms. With the preparation rooms, throughput and cyclotron utilization increased by 14% and 15%, respectively. Three preparation rooms were optimal to serve 1-3 treatment rooms and two preparation rooms were optimal to serve 4 or 5 treatment rooms. Conclusion: Patient preparation rooms for patient setup may increase throughput and decrease the need for additional treatment rooms (cost effective). The optimal number of preparation rooms serving each gantry room varies as a function of the number of treatment rooms and the patient setup scenario. A 5th treatment room may not be justified by throughput or utilization.
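
    A toy Monte Carlo of the shared-cyclotron bottleneck studied above is sketched below: each treatment room alternates between patient setup and beam delivery, and beam delivery requires exclusive use of the single cyclotron. All timing distributions and parameters are invented for illustration; this is not the authors' simulation model.

    import random

    def simulate_day(n_rooms, setup_min=20.0, beam_min=5.0, day_min=16 * 60, seed=0):
        rng = random.Random(seed)
        # each room begins by setting up its first patient
        room_ready = [rng.expovariate(1.0 / setup_min) for _ in range(n_rooms)]
        cyclotron_free = 0.0
        beam_busy = 0.0
        treated = 0
        while True:
            r = min(range(n_rooms), key=lambda i: room_ready[i])    # room that is ready first
            start = max(room_ready[r], cyclotron_free)              # wait for the shared cyclotron
            end = start + rng.expovariate(1.0 / beam_min)
            if end > day_min:
                break
            treated += 1
            beam_busy += end - start
            cyclotron_free = end
            room_ready[r] = end + rng.expovariate(1.0 / setup_min)  # set up the next patient
        return treated, beam_busy / day_min        # daily throughput, cyclotron utilization

    # for n in range(1, 6): print(n, simulate_day(n))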

  3. Depth Estimation Using a Sliding Camera.

    PubMed

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. Conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is expensive and inconvenient in many applications. Another popular choice is to use structure-from-motion methods for arbitrarily placed camera(s); however, due to the many degrees of freedom, the computational cost is heavy and the accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that works well with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which takes advantage of the continuously pose-changing imaging and also greatly reduces the time consumption. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data illustrate the effectiveness of the proposed algorithm. PMID:26685238

  4. Depth Estimation Using a Sliding Camera.

    PubMed

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. Conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is expensive and inconvenient in many applications. Another popular choice is to use structure-from-motion methods for arbitrarily placed camera(s); however, due to the many degrees of freedom, the computational cost is heavy and the accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that works well with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which takes advantage of the continuously pose-changing imaging and also greatly reduces the time consumption. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data illustrate the effectiveness of the proposed algorithm.

  5. Chemical analysis of solid materials by a LIMS instrument designed for space research: 2D elemental imaging, sub-nm depth profiling and molecular surface analysis

    NASA Astrophysics Data System (ADS)

    Moreno-García, Pavel; Grimaudo, Valentine; Riedo, Andreas; Neuland, Maike B.; Tulej, Marek; Broekmann, Peter; Wurz, Peter

    2016-04-01

    Direct quantitative chemical analysis of solid materials with high lateral and vertical resolution is of prime importance for the development of a wide variety of research fields, including, e.g., astrobiology, archeology, mineralogy and electronics, among many others. Nowadays, studies carried out by complementary state-of-the-art analytical techniques such as Auger Electron Spectroscopy (AES), X-ray Photoelectron Spectroscopy (XPS), Secondary Ion Mass Spectrometry (SIMS), Glow Discharge Time-of-Flight Mass Spectrometry (GD-TOF-MS) or Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) provide extensive insight into the chemical composition and allow for a deep understanding of processes that might have fashioned the outermost layers of an analyte due to its interaction with the surrounding environment. Nonetheless, these investigations typically employ equipment that is not suitable for implementation on spacecraft, where requirements concerning weight, size and power consumption are very strict. In recent years Laser Ablation/Ionization Mass Spectrometry (LIMS) has re-emerged as a powerful analytical technique suitable not only for laboratory but also for space applications.[1-3] Its improved performance and measurement capabilities result from the use of cutting-edge ultra-short femtosecond laser sources, improved vacuum technology and fast electronics. Because of its ultimate compactness, simplicity and robustness it has already proven to be a very suitable analytical tool for elemental and isotope investigations in space research.[4] In this contribution we demonstrate extended capabilities of our LMS instrument by means of three case studies: i) 2D chemical imaging performed on an Allende meteorite sample,[5] ii) depth profiling with unprecedented sub-nm vertical resolution on Cu electrodeposited interconnects[6,7] and iii) preliminary molecular desorption of polymers without assistance of matrix or functionalized substrates.[8] On the whole

  6. Dual-band Fourier domain optical coherence tomography with depth-related compensations

    PubMed Central

    Zhang, Miao; Ma, Lixin; Yu, Ping

    2013-01-01

    Dual-band Fourier domain optical coherence tomography (FD-OCT) provides depth-resolved spectroscopic imaging that enhances tissue contrast and reduces image speckle. However, previous dual-band FD-OCT systems could not correctly recover the tissue spectroscopic contrast because of depth-related discrepancies in the imaging method and attenuation in biological tissue samples. We designed a new dual-band full-range FD-OCT imaging system and developed an algorithm to compensate for depth-related fall-off and light attenuation. In our imaging system, the images from the two wavelength bands were intrinsically overlapped and their intensities were balanced. The processing time of dual-band OCT image reconstruction and depth-related compensation was minimized by using multiple threads executing in parallel. Using the newly developed system, we studied tissue phantoms as well as human cancer xenografts and muscle tissues dissected from severely compromised immune deficient mice. Improved spectroscopic contrast and sensitivity were achieved, benefiting from the depth-related compensations. PMID:24466485
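
    The two depth-related corrections named above can be sketched for a single A-scan as division by a sensitivity fall-off profile and by an assumed single-scattering Beer-Lambert attenuation; the functional forms, attenuation coefficient, and pixel spacing are illustrative, not the authors' calibration.

    import numpy as np

    def compensate_ascan(ascan, fall_off_profile, mu_per_mm=1.0, dz_mm=0.01):
        """Divide out system fall-off and round-trip tissue attenuation versus depth."""
        z = np.arange(len(ascan)) * dz_mm                 # depth axis in mm
        attenuation = np.exp(-2.0 * mu_per_mm * z)        # round-trip Beer-Lambert term
        return ascan / (fall_off_profile * attenuation + 1e-12)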

  7. Validation of MODIS Aerosol Optical Depth Retrieval Over Land

    NASA Technical Reports Server (NTRS)

    Chu, D. A.; Kaufman, Y. J.; Ichoku, C.; Remer, L. A.; Tanre, D.; Holben, B. N.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Aerosol optical depths are derived operationally for the first time over land in the visible wavelengths by MODIS (Moderate Resolution Imaging Spectroradiometer) onboard the EOS-Terra spacecraft. More than 300 Sun photometer data points from more than 30 AERONET (Aerosol Robotic Network) sites globally were used in validating the aerosol optical depths obtained during July - September 2000. Excellent agreement is found, with retrieval errors within Δτ = ±0.05 ± 0.20τ, as predicted, over (partially) vegetated surfaces, consistent with pre-launch theoretical analysis and aircraft field experiments. In coastal and semi-arid regions, larger errors are caused predominantly by the uncertainty in evaluating the surface reflectance. The excellent fit was achieved despite the ongoing improvements in instrument characterization and calibration. These results show that MODIS-derived aerosol optical depths can be used quantitatively in many applications, with caution regarding residual cloud, snow/ice, and water contamination.

  8. A high-stability scanning tunneling microscope achieved by an isolated tiny scanner with low voltage imaging capability

    SciTech Connect

    Wang, Qi; Wang, Junting; Lu, Qingyou; Hou, Yubin

    2013-11-15

    We present a novel homebuilt scanning tunneling microscope (STM) with high quality atomic resolution. It is equipped with a small but powerful GeckoDrive piezoelectric motor which drives a miniature and detachable scanning part to implement coarse approach. The scanning part is a tiny piezoelectric tube scanner (industry type: PZT-8, whose d31 coefficient is one of the lowest) housed in a slightly bigger polished sapphire tube, which is riding on and spring clamped against the knife edges of a tungsten slot. The STM so constructed shows low back-lashing and drifting and high repeatability and immunity to external vibrations. These are confirmed by its low imaging voltages, low distortions in the spiral scanned images, and high atomic resolution quality even when the STM is placed on the ground of the fifth floor without any external or internal vibration isolation devices.

  9. A high-stability scanning tunneling microscope achieved by an isolated tiny scanner with low voltage imaging capability

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Hou, Yubin; Wang, Junting; Lu, Qingyou

    2013-11-01

    We present a novel homebuilt scanning tunneling microscope (STM) with high quality atomic resolution. It is equipped with a small but powerful GeckoDrive piezoelectric motor which drives a miniature and detachable scanning part to implement coarse approach. The scanning part is a tiny piezoelectric tube scanner (industry type: PZT-8, whose d31 coefficient is one of the lowest) housed in a slightly bigger polished sapphire tube, which is riding on and spring clamped against the knife edges of a tungsten slot. The STM so constructed shows low back-lashing and drifting and high repeatability and immunity to external vibrations. These are confirmed by its low imaging voltages, low distortions in the spiral scanned images, and high atomic resolution quality even when the STM is placed on the ground of the fifth floor without any external or internal vibration isolation devices.

  10. True-Depth: a new type of true 3D volumetric display system suitable for CAD, medical imaging, and air-traffic control

    NASA Astrophysics Data System (ADS)

    Dolgoff, Eugene

    1998-04-01

    Floating Images, Inc. is developing a new type of volumetric monitor capable of producing a high-density set of points in 3D space. Since the points of light actually exist in space, the resulting image can be viewed with continuous parallax, both vertically and horizontally, with no headache or eyestrain. These 'real' points in space are always viewed with a perfect match between accommodation and convergence. All scanned points appear to the viewer simultaneously, making this display especially suitable for CAD, medical imaging, air-traffic control, and various military applications. This system has the potential to display imagery so accurately that a ruler could be placed within the aerial image to provide precise measurement in any direction. A special virtual imaging arrangement allows the user to superimpose 3D images on a solid object, making the object look transparent. This is particularly useful for minimally invasive surgery in which the internal structure of a patient is visible to a surgeon in 3D. Surgical procedures can be carried out through the smallest possible hole while the surgeon watches the procedure from outside the body as if the patient were transparent. Unlike other attempts to produce volumetric imaging, this system uses no massive rotating screen or any screen at all, eliminating down time due to breakage and possible danger due to potential mechanical failure. Additionally, it is also capable of displaying very large images.

  11. Depth perception in autostereograms: 1/f noise is best

    NASA Astrophysics Data System (ADS)

    Yankelevsky, Yael; Shvartz, Ishai; Avraham, Tamar; Bruckstein, Alfred M.

    2016-02-01

    An autostereogram is a single image that encodes depth information that pops out when looking at it. The trick is achieved by replicating a vertical strip that sets a basic two-dimensional pattern with disparity shifts that encode a three-dimensional scene. It is of interest to explore the dependency between the ease of perceiving depth in autostereograms and the choice of the basic pattern used for generating them. In this work we confirm a theory proposed by Bruckstein et al. to explain the process of autostereographic depth perception, providing a measure for the ease of "locking into" the depth profile, based on the spectral properties of the basic pattern used. We report the results of three sets of psychophysical experiments using autostereograms generated from two-dimensional random noise patterns having power spectra of the form 1/f^β. The experiments were designed to test the ability of human subjects to identify smooth, low resolution surfaces, as well as detail, in the form of higher resolution objects in the depth profile, and to determine limits in identifying small objects as a function of their size. In accordance with the theory, we discover a significant advantage of the 1/f noise pattern (pink noise) for fast depth lock-in and fine detail detection, showing that such patterns are optimal choices for autostereogram design. Validating the theoretical model predictions strengthens its underlying assumptions, and contributes to a better understanding of the visual system's binocular disparity mechanisms.
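
    For concreteness, the sketch below generates a 1/f^beta noise pattern and encodes a depth map into an autostereogram by the usual strip-replication rule; the strip width, shift scale, and normalization are illustrative choices, not the stimuli used in the experiments.

    import numpy as np

    def make_pink_noise(h, w, beta=1.0, seed=0):
        """2-D noise whose power spectrum falls off as 1/f^beta (beta=1: pink noise)."""
        rng = np.random.default_rng(seed)
        white = rng.standard_normal((h, w))
        fy = np.fft.fftfreq(h)[:, None]
        fx = np.fft.fftfreq(w)[None, :]
        f = np.sqrt(fy ** 2 + fx ** 2)
        f[0, 0] = 1.0                                      # avoid division by zero at DC
        spectrum = np.fft.fft2(white) / f ** (beta / 2.0)  # power ~ 1/f^beta
        pattern = np.real(np.fft.ifft2(spectrum))
        return (pattern - pattern.min()) / (np.ptp(pattern) + 1e-12)

    def autostereogram(depth, pattern, strip=100, max_shift=20):
        """Replicate a vertical strip with depth-dependent horizontal shifts (depth in [0, 1])."""
        h, w = depth.shape
        out = np.zeros((h, w))
        out[:, :strip] = pattern[:h, :strip]
        rows = np.arange(h)
        for x in range(strip, w):
            shift = (max_shift * depth[:, x]).astype(int)  # nearer surfaces -> larger shift
            out[rows, x] = out[rows, x - strip + shift]
        return out

    # depth = np.zeros((256, 512)); depth[96:160, 192:320] = 1.0   # a raised square
    # img = autostereogram(depth, make_pink_noise(256, 100))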

  12. Remote sensing of stream depths with hydraulically assisted bathymetry (HAB) models

    NASA Astrophysics Data System (ADS)

    Fonstad, Mark A.; Marcus, W. Andrew

    2005-12-01

    This article introduces a technique for using a combination of remote sensing imagery and open-channel flow principles to estimate depths for each pixel in an imaged river. This technique, which we term hydraulically assisted bathymetry (HAB), uses a combination of local stream gage information on discharge, image brightness data, and Manning-based estimates of stream resistance to calculate water depth. The HAB technique does not require ground-truth depth information at the time of flight. HAB can be accomplished with multispectral or hyperspectral data, and therefore can be applied over entire watersheds using standard high spatial resolution satellite or aerial images. HAB also has the potential to be applied retroactively to historic imagery, allowing researchers to map temporal changes in depth. We present two versions of the technique, HAB-1 and HAB-2. HAB-1 is based primarily on the geometry, discharge and velocity relationships of river channels. Manning's equation (assuming average depth approximates the hydraulic radius), the discharge equation, and the assumption that the frequency distribution of depths within a cross-section approximates that of a triangle are combined with discharge data from a local station, width measurements from imagery, and slope measurements from maps to estimate minimum, average and maximum depths at multiple cross-sections. These depths are assigned to pixels of maximum, average, and minimum brightness within the cross-sections to develop a brightness-depth relation used to estimate depths throughout the remainder of the river. HAB-2 is similar to HAB-1 in operation, but the assumption that the distribution of depths approximates that of a triangle is replaced by an optical Beer-Lambert law of light absorbance. In this case, the flow equations and the optical equations are used to iteratively scale the river pixel values until their depths produce a discharge that matches that of a nearby gage. R2 values for measured depths
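
    A schematic of the HAB-2 idea is sketched below: pixel brightness is converted to relative depth with a Beer-Lambert law, and a single scale factor is then found so that the Manning-based discharge through a cross-section matches the gaged discharge. Manning's n, slope, and pixel width are placeholder values, and the single-factor bisection is a simplification of the iterative scaling described above.

    import numpy as np

    def relative_depths(brightness):
        """Beer-Lambert: deeper water returns less light, so depth ~ -ln(R / Rmax)."""
        r = np.asarray(brightness, float)
        return -np.log(r / r.max())

    def manning_discharge(depths, pixel_width_m, slope, n_manning):
        """Q = V * A, with mean depth standing in for the hydraulic radius."""
        mean_depth = depths.mean()
        velocity = (1.0 / n_manning) * mean_depth ** (2.0 / 3.0) * np.sqrt(slope)
        return velocity * depths.sum() * pixel_width_m

    def scale_to_gage(rel_depths, q_gage, pixel_width_m=1.0, slope=0.002, n_manning=0.035):
        """Bisection on the scale factor c so that Q(c * rel_depths) matches q_gage."""
        lo, hi = 1e-3, 100.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if manning_discharge(mid * rel_depths, pixel_width_m, slope, n_manning) < q_gage:
                lo = mid
            else:
                hi = mid
        return mid * rel_depths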

  13. Depth perception of illusory surfaces.

    PubMed

    Kogo, Naoki; Drożdżewska, Anna; Zaenen, Peter; Alp, Nihan; Wagemans, Johan

    2014-03-01

    The perception of an illusory surface, a subjectively perceived surface that is not given in the image, is one of the most intriguing phenomena in vision. It strongly influences the perception of some fundamental properties, namely, depth, lightness and contours. Recently, we suggested (1) that the context-sensitive mechanism of depth computation plays a key role in creating the illusion, (2) that the illusory lightness perception can be explained by an influence of depth perception on the lightness computation, and (3) that the perception of variations of the Kanizsa figure can be well-reproduced by implementing these principles in a model (Kogo, Strecha, et al., 2010). However, depth perception, lightness perception, contour perception, and their interactions can be influenced by various factors. It is essential to measure the differences between the variation figures in these aspects separately to further understand the mechanisms. As a first step, we report here the results of a new experimental paradigm to compare the depth perception of the Kanizsa figure and its variations. One of the illusory figures was presented side-by-side with a non-illusory variation whose stereo disparities were varied. Participants had to decide in which of these two figures the central region appeared closer. The results indicate that the depth perception of the illusory surface was indeed different in the variation figures. Furthermore, there was a non-linear interaction between the occlusion cues and stereo disparity cues. Implications of the results for the neuro-computational mechanisms are discussed.

  14. Depth enhanced and content aware video stabilization

    NASA Astrophysics Data System (ADS)

    Lindner, A.; Atanassov, K.; Goma, S.

    2015-03-01

    We propose a system that uses depth information for video stabilization. The system uses 2D-homographies as frame pair transforms that are estimated with keypoints at the depth of interest. This makes the estimation more robust as the points lie on a plane. The depth of interest can be determined automatically from the depth histogram, inferred from user input such as tap-to-focus, or selected by the user; i.e., tap-to-stabilize. The proposed system can stabilize videos on the fly in a single pass and is especially suited for mobile phones with multiple cameras that can compute depth maps automatically during image acquisition.
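
    A hedged sketch of the frame-pair transform step is given below: matches are kept only if their depth lies near the depth of interest (here taken as the mode of the depth histogram), and a homography is fitted to the surviving matches. The matched point arrays, the per-point depths, and the tolerance are assumed inputs from upstream feature matching and the device depth map.

    import numpy as np
    import cv2

    def depth_of_interest(depth_map, bins=64):
        """Pick the dominant depth as the mode of the (valid) depth histogram."""
        hist, edges = np.histogram(depth_map[depth_map > 0], bins=bins)
        i = int(np.argmax(hist))
        return 0.5 * (edges[i] + edges[i + 1])

    def stabilizing_homography(pts_prev, pts_curr, depths_prev, d_interest, tol=0.15):
        """Fit H only to matches whose depth is within +/- tol (relative) of d_interest."""
        keep = np.abs(depths_prev - d_interest) < tol * d_interest
        if keep.sum() < 4:                                # a homography needs 4+ correspondences
            return np.eye(3)
        H, _ = cv2.findHomography(pts_prev[keep], pts_curr[keep], cv2.RANSAC, 3.0)
        return H if H is not None else np.eye(3)

    # stabilized = cv2.warpPerspective(frame, H, (frame.shape[1], frame.shape[0]))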

  15. Correlation Plenoptic Imaging.

    PubMed

    D'Angelo, Milena; Pepe, Francesco V; Garuccio, Augusto; Scarcelli, Giuliano

    2016-06-01

    Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging. PMID:27314718

  16. Correlation Plenoptic Imaging.

    PubMed

    D'Angelo, Milena; Pepe, Francesco V; Garuccio, Augusto; Scarcelli, Giuliano

    2016-06-01

    Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in standard plenoptic imaging systems, the maximum spatial and angular resolutions are fundamentally linked; thereby, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this Letter, we demonstrate that the correlation in both momentum and position of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.

  17. Imaging Live Cells at the Nanometer-Scale with Single-Molecule Microscopy: Obstacles and Achievements in Experiment Optimization for Microbiology

    PubMed Central

    Haas, Beth L.; Matson, Jyl S.; DiRita, Victor J.; Biteen, Julie S.

    2015-01-01

    Single-molecule fluorescence microscopy enables biological investigations inside living cells to achieve millisecond- and nanometer-scale resolution. Although single-molecule-based methods are becoming increasingly accessible to non-experts, optimizing new single-molecule experiments can be challenging, in particular when super-resolution imaging and tracking are applied to live cells. In this review, we summarize common obstacles to live-cell single-molecule microscopy and describe the methods we have developed and applied to overcome these challenges in live bacteria. We examine the choice of fluorophore and labeling scheme, approaches to achieving single-molecule levels of fluorescence, considerations for maintaining cell viability, and strategies for detecting single-molecule signals in the presence of noise and sample drift. We also discuss methods for analyzing single-molecule trajectories and the challenges presented by the finite size of a bacterial cell and the curvature of the bacterial membrane. PMID:25123183

  18. Superpixel-based 3D warping using view plus depth data from multiple viewpoints

    NASA Astrophysics Data System (ADS)

    Tezuka, Tomoyuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    This paper presents a method of virtual view synthesis using view plus depth data from multiple viewpoints. Intuitively, virtual view generation from such data can be achieved easily by simple 3D warping. However, the 3D points reconstructed from those data are isolated, i.e. not connected with each other. Consequently, the images generated by existing methods have many holes, which are very annoying and arise from occlusions and the limited sampling density. To tackle this problem, we propose a two-step algorithm. In the first step, the view plus depth data from each viewpoint are 3D warped to the virtual viewpoint. In this process, we determine which neighboring pixels should be connected or kept isolated. For this determination, we use depth differences among neighboring pixels and SLIC-based superpixel segmentation that considers both color and depth information. Pixel pairs that have small depth differences or reside in the same superpixel are connected, and the polygons enclosed by the connected pixels are inpainted, which greatly reduces the holes. This warping process is performed individually for each viewpoint from which view plus depth data are provided, resulting in several images at the virtual viewpoint that are warped from different viewpoints. In the second step, we merge those warped images to obtain the final result. Thanks to the data provided from different viewpoints, the final result has less noise and fewer holes than the result from single-viewpoint information. Experimental results using publicly available view plus depth data are reported to validate our method.

  19. Photon counting compressive depth mapping.

    PubMed

    Howland, Gregory A; Lum, Daniel J; Ware, Matthew R; Howell, John C

    2013-10-01

    We demonstrate a compressed sensing, photon counting lidar system based on the single-pixel camera. Our technique recovers both depth and intensity maps from a single under-sampled set of incoherent, linear projections of a scene of interest at ultra-low light levels around 0.5 picowatts. Only two-dimensional reconstructions are required to image a three-dimensional scene. We demonstrate intensity imaging and depth mapping at 256 × 256 pixel transverse resolution with acquisition times as short as 3 seconds. We also show novelty filtering, reconstructing only the difference between two instances of a scene. Finally, we acquire 32 × 32 pixel real-time video for three-dimensional object tracking at 14 frames-per-second. PMID:24104293

  20. 7.0-T magnetic resonance imaging characterization of acute blood-brain-barrier disruption achieved with intracranial irreversible electroporation.

    PubMed

    Garcia, Paulo A; Rossmeisl, John H; Robertson, John L; Olson, John D; Johnson, Annette J; Ellis, Thomas L; Davalos, Rafael V

    2012-01-01

    The blood-brain-barrier (BBB) presents a significant obstacle to the delivery of systemically administered chemotherapeutics for the treatment of brain cancer. Irreversible electroporation (IRE) is an emerging technology that uses pulsed electric fields for the non-thermal ablation of tumors. We hypothesized that there is a minimal electric field at which BBB disruption occurs surrounding an IRE-induced zone of ablation and that this transient response can be measured using gadolinium (Gd) uptake as a surrogate marker for BBB disruption. The study was performed in a Good Laboratory Practices (GLP) compliant facility and had Institutional Animal Care and Use Committee (IACUC) approval. IRE ablations were performed in vivo in normal rat brain (n = 21) with 1-mm electrodes (0.45 mm diameter) separated by an edge-to-edge distance of 4 mm. We used an ECM830 pulse generator to deliver ninety 50-μs pulse treatments (0, 200, 400, 600, 800, and 1000 V/cm) at 1 Hz. The effects of applied electric field and timing of Gd administration (-5, +5, +15, and +30 min) were assessed by systematically characterizing IRE-induced regions of cell death and BBB disruption with 7.0-T magnetic resonance imaging (MRI) and histopathologic evaluations. Statistical analysis of the effect of applied electric field and Gd timing was conducted via Fit of Least Squares with α = 0.05 and linear regression analysis. The focal nature of IRE treatment was confirmed with 3D MRI reconstructions with linear correlations between volume of ablation and electric field. Our results also demonstrated that IRE is an ablation technique that kills brain tissue in a focal manner depicted by MRI (n = 16) and transiently disrupts the BBB adjacent to the ablated area in a voltage-dependent manner, as seen with Evans blue (n = 5) and Gd administration. PMID:23226293

  1. 7.0-T Magnetic Resonance Imaging Characterization of Acute Blood-Brain-Barrier Disruption Achieved with Intracranial Irreversible Electroporation

    PubMed Central

    Garcia, Paulo A.; Rossmeisl, John H.; Robertson, John L.; Olson, John D.; Johnson, Annette J.; Ellis, Thomas L.; Davalos, Rafael V.

    2012-01-01

    The blood-brain-barrier (BBB) presents a significant obstacle to the delivery of systemically administered chemotherapeutics for the treatment of brain cancer. Irreversible electroporation (IRE) is an emerging technology that uses pulsed electric fields for the non-thermal ablation of tumors. We hypothesized that there is a minimal electric field at which BBB disruption occurs surrounding an IRE-induced zone of ablation and that this transient response can be measured using gadolinium (Gd) uptake as a surrogate marker for BBB disruption. The study was performed in a Good Laboratory Practices (GLP) compliant facility and had Institutional Animal Care and Use Committee (IACUC) approval. IRE ablations were performed in vivo in normal rat brain (n = 21) with 1-mm electrodes (0.45 mm diameter) separated by an edge-to-edge distance of 4 mm. We used an ECM830 pulse generator to deliver ninety 50-μs pulse treatments (0, 200, 400, 600, 800, and 1000 V/cm) at 1 Hz. The effects of applied electric fields and timing of Gd administration (−5, +5, +15, and +30 min) were assessed by systematically characterizing IRE-induced regions of cell death and BBB disruption with 7.0-T magnetic resonance imaging (MRI) and histopathologic evaluations. Statistical analysis on the effect of applied electric field and Gd timing was conducted via Fit of Least Squares with α = 0.05 and linear regression analysis. The focal nature of IRE treatment was confirmed with 3D MRI reconstructions with linear correlations between volume of ablation and electric field. Our results also demonstrated that IRE is an ablation technique that kills brain tissue in a focal manner depicted by MRI (n = 16) and transiently disrupts the BBB adjacent to the ablated area in a voltage-dependent manner as seen with Evans blue (n = 5) and Gd administration. PMID:23226293

  2. Automatic exposure control in multichannel CT with tube current modulation to achieve a constant level of image noise: Experimental assessment on pediatric phantoms

    SciTech Connect

    Brisse, Herve J.; Madec, Ludovic; Gaboriaud, Genevieve; Lemoine, Thomas; Savignoni, Alexia; Neuenschwander, Sylvia; Aubert, Bernard; Rosenwald, Jean-Claude

    2007-07-15

    Automatic exposure control (AEC) systems have been developed by computed tomography (CT) manufacturers to improve the consistency of image quality among patients and to control the absorbed dose. Since a multichannel helical CT scan may easily increase individual radiation doses, this technical improvement is of special interest in children, who are particularly sensitive to ionizing radiation, but little information is currently available regarding the precise performance of these systems on small patients. Our objective was to assess an AEC system on pediatric dose phantoms by studying the impact of phantom transmission and acquisition parameters on tube current modulation, on the resulting absorbed dose and on image quality. We used a four-channel CT scanner with a patient-size- and z-axis-based AEC system designed to achieve a constant noise within the reconstructed images by automatically adjusting the tube current during acquisition. The study was performed with six cylindrical poly(methylmethacrylate) (PMMA) phantoms of variable diameters (10-32 cm) and one pediatric anthropomorphic phantom equivalent to a 5-year-old child. After a single scan projection radiograph (SPR), helical acquisitions were performed and images were reconstructed with a standard convolution kernel. Tube current modulation was studied with variable SPR settings (tube angle, mA, kVp) and helical parameters (6-20 HU noise indices, 80-140 kVp tube potential, 0.8-4 s tube rotation time, 5-20 mm x-ray beam thickness, 0.75-1.5 pitch, 1.25-10 mm image thickness, variable acquisition and reconstruction fields of view). CT dose indices (CTDIvol) were measured, and the image quality criterion used was the standard deviation of the CT number measured in reconstructed images of PMMA material. Observed tube current levels were compared to the expected values from Brooks and Di Chiro's model [R. A. Brooks and G. Di Chiro, Med. Phys. 3, 237-240 (1976)] and calculated values (product of a reference value
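
    The constant-noise behaviour such AEC systems aim for can be illustrated with a simplified, monoenergetic photon-statistics argument. This is a sketch of the underlying scaling only, not the scanner's actual AEC algorithm, and the attenuation coefficient in the example is an assumption:

        import numpy as np

        def mas_for_constant_noise(mas_ref, mu_d_ref, mu_d):
            """Photon-statistics argument behind constant-noise AEC: image noise
            scales roughly as 1/sqrt(detected quanta), and detected quanta scale
            as mAs * exp(-mu * d), so holding noise constant requires the mAs to
            grow as exp(mu * d) relative to a reference attenuation path."""
            return mas_ref * np.exp(mu_d - mu_d_ref)

        # Example: water-equivalent paths of 10 cm vs 20 cm with mu ~ 0.2 /cm
        # give mu*d of 2.0 vs 4.0, i.e. about 7.4x the reference mAs is needed.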

  3. Depth perception in autostereograms: 1/f noise is best.

    PubMed

    Yankelevsky, Yael; Shvartz, Ishai; Avraham, Tamar; Bruckstein, Alfred M

    2016-02-01

    An autostereogram is a single image that encodes depth information that pops out when looking at it. The trick is achieved by setting a basic 2D pattern and continuously replicating the local pattern at each point in the image with a shift defined by the desired disparity. In this work, we explore the dependency between the ease of perceiving depth in autostereograms and the choice of the basic pattern used for generating them. We report the results of three sets of psychophysical experiments using autostereograms generated from 2D random noise patterns having power spectra of the form 1/f^β. The experiments were designed to test the ability of human subjects to identify smooth low-resolution surfaces, as well as detail, in the form of higher-resolution objects in the depth profile, and to determine limits in identifying small objects as a function of their size. In accordance with the theory, we discover a significant advantage of the 1/f noise pattern (pink noise) for fast depth lock-in and fine detail detection, showing that such patterns are optimal choices for autostereogram design.
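
    The generation procedure described above can be sketched as follows (Python/NumPy); the pattern width, maximum shift and the simple row-wise replication rule are illustrative assumptions, not the exact stimulus-generation code used in the experiments:

        import numpy as np

        def noise_1_over_f(h, w, beta=1.0, seed=0):
            """2D random noise whose power spectrum falls off as 1/f^beta
            (beta = 1 gives the 'pink noise' pattern favoured above)."""
            rng = np.random.default_rng(seed)
            fy = np.fft.fftfreq(h)[:, None]
            fx = np.fft.fftfreq(w)[None, :]
            f = np.hypot(fx, fy)
            f[0, 0] = 1.0                                   # avoid division by zero at DC
            amplitude = f ** (-beta / 2.0)                  # power ~ 1/f^beta
            amplitude[0, 0] = 0.0                           # zero-mean pattern
            phase = np.exp(2j * np.pi * rng.random((h, w)))
            img = np.real(np.fft.ifft2(amplitude * phase))
            return (img - img.min()) / (np.ptp(img) + 1e-12)

        def autostereogram(depth, pattern_width=80, max_shift=20, beta=1.0):
            """Replicate a noise strip across each row with a depth-dependent
            horizontal period (depth in [0, 1], 1 = nearest)."""
            h, w = depth.shape
            pattern = noise_1_over_f(h, pattern_width, beta)
            out = np.zeros((h, w))
            rows = np.arange(h)
            for x in range(w):
                period = pattern_width - (max_shift * depth[:, x]).astype(int)
                src = x - period                            # look back one local period
                out[:, x] = np.where(src < 0,
                                     pattern[:, x % pattern_width],
                                     out[rows, np.clip(src, 0, None)])
            return out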

  4. The Depths from Skin to the Major Organs at Chest Acupoints of Pediatric Patients

    PubMed Central

    Ma, Yi-Chun; Peng, Ching-Tien; Huang, Yu-Chuen; Lin, Hung-Yi; Lin, Jaung-Geng

    2015-01-01

    Background. Acupuncture is applied to treat numerous diseases in pediatric patients. Few reports have been published on the depth to which it is safe to insert needles at acupoints in pediatric patients. We evaluated the depths to which acupuncture needles can be inserted safely at chest acupoints in pediatric patients and the variations in safe depth according to sex, age, body weight, and body mass index (BMI). Methods. We retrospectively studied computed tomography (CT) images of pediatric patients aged 4 to 18 years who had undergone chest CT at China Medical University Hospital from December 2004 to May 2013. The safe depth of chest acupoints was directly measured from the CT images. The relationships between the safe depth of these acupoints and sex, age, body weight, and BMI were analyzed. Results. The results demonstrated significant differences in depth between boys and girls at KI25 (kidney meridian), ST16 (stomach meridian), ST18, SP17 (spleen meridian), SP19, SP20, PC1 (pericardium meridian), LU2 (lung meridian), and GB22 (gallbladder meridian). Safe depth significantly differed among the age groups (P < 0.001), weight groups (P < 0.05), and BMI groups (P < 0.05). Conclusion. Physicians should pay attention to the large variations in needle depth during acupuncture to achieve the optimal therapeutic effect and prevent complications. PMID:26457105

  5. ToF-SIMS depth profiling of cells: z-correction, 3D imaging, and sputter rate of individual NIH/3T3 fibroblasts.

    PubMed

    Robinson, Michael A; Graham, Daniel J; Castner, David G

    2012-06-01

    Proper display of three-dimensional time-of-flight secondary ion mass spectrometry (ToF-SIMS) imaging data of complex, nonflat samples requires a correction of the data in the z-direction. Inaccuracies in displaying three-dimensional ToF-SIMS data arise from projecting data from a nonflat surface onto a 2D image plane, as well as possible variations in the sputter rate of the sample being probed. The current study builds on previous studies by creating software written in Matlab, the ZCorrectorGUI (available at http://mvsa.nb.uw.edu/), to apply the z-correction to entire 3D data sets. Three-dimensional image data sets were acquired from NIH/3T3 fibroblasts by collecting ToF-SIMS images, using a dual beam approach (25 keV Bi₃⁺ for analysis cycles and 20 keV C₆₀²⁺ for sputter cycles). The entire data cube was then corrected by using the new ZCorrectorGUI software, producing accurate chemical information from single cells in 3D. For the first time, a three-dimensional corrected view of a lipid-rich subcellular region, possibly the nuclear membrane, is presented. Additionally, the key assumption of a constant sputter rate throughout the data acquisition was tested by using ToF-SIMS and atomic force microscopy (AFM) analysis of the same cells. For the dried NIH/3T3 fibroblasts examined in this study, the sputter rate was found to not change appreciably in x, y, or z, and the cellular material was sputtered at a rate of approximately 10 nm per 1.25 × 10¹³ C₆₀²⁺ ions/cm². PMID:22530745

  6. Noninvasive Optical Imaging and In Vivo Cell Tracking of Indocyanine Green Labeled Human Stem Cells Transplanted at Superficial or In-Depth Tissue of SCID Mice.

    PubMed

    Sabapathy, Vikram; Mentam, Jyothsna; Jacob, Paul Mazhuvanchary; Kumar, Sanjay

    2015-01-01

    Stem cell based therapies hold great promise for the treatment of human diseases; however, results from several recent clinical studies have not shown the level of efficacy required for their use as a first-line therapy, because the fate of the transplanted cells in these studies is often unknown. Thus, monitoring the real-time fate of transplanted cells in vivo is essential to validate the full potential of stem cell based therapy. Recent studies have shown how real-time in vivo molecular imaging has helped in identifying hurdles towards clinical translation and in designing potential strategies that may contribute to successful transplantation of stem cells and improved outcomes. At present, there are no cost-effective and efficient labeling techniques for tracking cells under in vivo conditions. Indocyanine green (ICG) is a safer, economical, and superior labeling technique for in vivo optical imaging. ICG is an FDA-approved agent, and decades of usage have clearly established its effectiveness for human clinical applications. In this study, we optimized ICG labeling conditions for noninvasive optical imaging and demonstrated that ICG-labeled cells can be successfully used for in vivo cell tracking in SCID mouse injury models.

  7. RGB-D depth-map restoration using smooth depth neighborhood supports

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Xue, Haoyang; Yu, Zhongjie; Wu, Qiang; Yang, Jie

    2015-05-01

    A method to restore the depth map of an RGB-D image using smooth depth neighborhood (SDN) supports is presented. The SDN supports are computed based on the corresponding color image of the depth map. Compared with the most widely used square supports, the proposed SDN supports capture the local structure of the object well. Only pixels with similar depth values are allowed to be included in the support. We combine our SDN supports with the joint bilateral filter (JBF) to form the SDN-JBF and use it to restore depth maps. Experimental results show that our SDN-JBF can not only rectify the misaligned depth pixels but also preserve sharp depth discontinuities.
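
    For reference, the plain joint bilateral filter that the SDN-JBF builds on can be sketched as below. The SDN support would additionally restrict each window to pixels lying in the same smooth-depth region, which is omitted here, and the window radius and sigmas are illustrative assumptions:

        import numpy as np

        def joint_bilateral_filter(depth, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
            """Plain joint bilateral filter: the depth map is smoothed with weights
            taken from spatial distance and from differences in the registered
            color (guide) image, given here as grayscale scaled to [0, 1]."""
            h, w = depth.shape
            out = np.zeros((h, w), dtype=float)
            norm = np.zeros((h, w), dtype=float)
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
            pad_d = np.pad(depth.astype(float), radius, mode="edge")
            pad_g = np.pad(guide.astype(float), radius, mode="edge")
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    nd = pad_d[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
                    ng = pad_g[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
                    wgt = spatial[dy + radius, dx + radius] * \
                          np.exp(-(ng - guide) ** 2 / (2 * sigma_r ** 2))
                    out += wgt * nd
                    norm += wgt
            return out / np.maximum(norm, 1e-12)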

  8. Terahertz interferometric synthetic aperture tomography for confocal imaging systems.

    PubMed

    Heimbeck, M S; Marks, D L; Brady, D; Everitt, H O

    2012-04-15

    Terahertz (THz) interferometric synthetic aperture tomography (TISAT) for confocal imaging within extended objects is demonstrated by combining attributes of synthetic aperture radar and optical coherence tomography. Algorithms recently devised for interferometric synthetic aperture microscopy are adapted to account for the diffraction- and defocusing-induced spatially varying THz beam width characteristic of narrow depth of focus, high-resolution confocal imaging. A frequency-swept two-dimensional TISAT confocal imaging instrument rapidly achieves in-focus, diffraction-limited resolution over a depth 12 times larger than the instrument's depth of focus in a manner that may be easily extended to three dimensions and greater depths.

  9. Achieving high-resolution soft-tissue imaging with cone-beam CT: a two-pronged approach for modulation of x-ray fluence and detector gain

    NASA Astrophysics Data System (ADS)

    Graham, S. A.; Siewerdsen, J. H.; Moseley, D. J.; Keller, H.; Shkumat, N. A.; Jaffray, D. A.

    2005-04-01

    Cone-beam computed tomography (CBCT) presents a highly promising and challenging advanced application of flat-panel detectors (FPDs). The great advantage of this adaptable technology is in the potential for sub-mm 3D spatial resolution in combination with soft-tissue detectability. While the former is achieved naturally by CBCT systems incorporating modern FPD designs (e.g., 200-400 µm pixel pitch), the latter presents a significant challenge due to limitations in FPD dynamic range, large field of view, and elevated levels of x-ray scatter in typical CBCT configurations. We are investigating a two-pronged strategy for maximizing soft-tissue detectability in CBCT: 1) front-end solutions, including novel beam modulation designs (viz., spatially varying compensators) that alleviate detector dynamic range requirements, reduce x-ray scatter, and better distribute imaging dose in a manner suited to soft-tissue visualization throughout the field of view; and 2) back-end solutions, including implementation of an advanced FPD design (Varian PaxScan 4030CB) that features dual-gain and dynamic gain switching that effectively extends detector dynamic range to 18 bits. These strategies are explored quantitatively on CBCT imaging platforms developed in our laboratory, including a dedicated CBCT bench and a mobile isocentric C-arm (Siemens PowerMobil). Pre-clinical evaluation of improved soft-tissue visibility was carried out in phantom and patient imaging with the C-arm device. Incorporation of these strategies begins to reveal the full potential of CBCT for soft-tissue visualization, an essential step in realizing broad utility of this adaptable technology for diagnostic and image-guided procedures.

  10. Combination of an optical parametric oscillator and quantum-dots 655 to improve imaging depth of vasculature by intravital multicolor two-photon microscopy.

    PubMed

    Ricard, Clément; Lamasse, Lisa; Jaouen, Alexandre; Rougon, Geneviève; Debarbieux, Franck

    2016-06-01

    Simultaneous imaging of different cell types and structures in the mouse central nervous system (CNS) by intravital two-photon microscopy requires the characterization of fluorophores and advances in approaches to visualize them. We describe the use of a two-photon infrared illumination generated by an optical parametric oscillator (OPO) on quantum-dots 655 (QD655) nanocrystals to improve resolution of the vasculature deeper in the mouse brain both in healthy and pathological conditions. Moreover, QD655 signal can be unmixed from the DsRed2, CFP, EGFP and EYFP fluorescent proteins, which enhances the panel of multi-parametric correlative investigations both in the cortex and the spinal cord.

  11. Assessment of imaging with extended depth-of-field by means of the light sword lens in terms of visual acuity scale

    PubMed Central

    Kakarenko, Karol; Ducin, Izabela; Grabowiecki, Krzysztof; Jaroszewicz, Zbigniew; Kolodziejczyk, Andrzej; Mira-Agudelo, Alejandro; Petelczyc, Krzysztof; Składowska, Aleksandra; Sypek, Maciej

    2015-01-01

    We present outcomes of an imaging experiment using the refractive light sword lens (LSL) as a contact lens in an optical system that serves as a simplified model of the presbyopic eye. The results show that the LSL produces significant improvements in visual acuity of the simplified presbyopic eye model over a wide range of defocus. Therefore, this element can be an interesting alternative for the multifocal contact and intraocular lenses currently used in ophthalmology. The second part of the article discusses possible modifications of the LSL profile in order to render it more suitable for fabrication and ophthalmological applications. PMID:26137376

  12. Assessment of imaging with extended depth-of-field by means of the light sword lens in terms of visual acuity scale.

    PubMed

    Kakarenko, Karol; Ducin, Izabela; Grabowiecki, Krzysztof; Jaroszewicz, Zbigniew; Kolodziejczyk, Andrzej; Mira-Agudelo, Alejandro; Petelczyc, Krzysztof; Składowska, Aleksandra; Sypek, Maciej

    2015-05-01

    We present outcomes of an imaging experiment using the refractive light sword lens (LSL) as a contact lens in an optical system that serves as a simplified model of the presbyopic eye. The results show that the LSL produces significant improvements in visual acuity of the simplified presbyopic eye model over a wide range of defocus. Therefore, this element can be an interesting alternative for the multifocal contact and intraocular lenses currently used in ophthalmology. The second part of the article discusses possible modifications of the LSL profile in order to render it more suitable for fabrication and ophthalmological applications.

  13. Structured light field 3D imaging.

    PubMed

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-01

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high-dynamic-range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. This multidirectional depth estimation can effectively achieve high-dynamic-range 3D imaging. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method in performing high-quality 3D imaging of highly and weakly reflective surfaces. PMID:27607639
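
    As background, the phase that encodes depth in such fringe-projection systems is usually recovered with standard N-step phase shifting; a minimal sketch follows. The ray-based calibration proposed in the paper is not reproduced here, and the linear phase-to-depth form in the closing comment is an illustrative assumption:

        import numpy as np

        def wrapped_phase(images):
            """Standard N-step phase shifting: images[k] is captured under a fringe
            pattern shifted by 2*pi*k/N; the returned wrapped phase encodes depth."""
            I = np.asarray(images, dtype=float)             # shape (N, H, W)
            n = I.shape[0]
            k = np.arange(n).reshape(-1, 1, 1)
            s = np.sum(I * np.sin(2 * np.pi * k / n), axis=0)
            c = np.sum(I * np.cos(2 * np.pi * k / n), axis=0)
            return np.arctan2(-s, c)                        # wrapped to (-pi, pi]

        # After unwrapping, depth would follow from a calibrated phase-to-depth
        # mapping, e.g. z(u, v) = a(u, v) + b(u, v) * phi(u, v) with per-pixel
        # (or, as in the paper, per-ray) coefficients from calibration targets.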

  14. Oxygen depth profiling with subnanometre depth resolution

    NASA Astrophysics Data System (ADS)

    Kosmata, Marcel; Munnik, Frans; Hanf, Daniel; Grötzschel, Rainer; Crocoll, Sonja; Möller, Wolfhard

    2014-10-01

    A high-depth-resolution Elastic Recoil Detection (HR-ERD) set-up using a magnetic spectrometer has been taken into operation at the Helmholtz-Zentrum Dresden-Rossendorf for the first time. This instrument allows the investigation of light elements in ultra-thin layers and their interfaces with a depth resolution of less than 1 nm near the surface. As the depth resolution is highly influenced by the experimental measurement parameters, sophisticated optimisation procedures have been implemented. Effects of surface roughness and sample damage caused by high fluences need to be quantified for each kind of material. Corrections are also essential for non-equilibrium charge-state distributions that exist very close to the surface. Using the example of a high-k multilayer SiO₂/Si₃N₄Oₓ/SiO₂/Si, it is demonstrated that oxygen in ultra-thin films of a few nanometres thickness can be investigated by HR-ERD.

  15. Combination of an optical parametric oscillator and quantum-dots 655 to improve imaging depth of vasculature by intravital multicolor two-photon microscopy

    PubMed Central

    Ricard, Clément; Lamasse, Lisa; Jaouen, Alexandre; Rougon, Geneviève; Debarbieux, Franck

    2016-01-01

    Simultaneous imaging of different cell types and structures in the mouse central nervous system (CNS) by intravital two-photon microscopy requires the characterization of fluorophores and advances in approaches to visualize them. We describe the use of a two-photon infrared illumination generated by an optical parametric oscillator (OPO) on quantum-dots 655 (QD655) nanocrystals to improve resolution of the vasculature deeper in the mouse brain both in healthy and pathological conditions. Moreover, QD655 signal can be unmixed from the DsRed2, CFP, EGFP and EYFP fluorescent proteins, which enhances the panel of multi-parametric correlative investigations both in the cortex and the spinal cord. PMID:27375951

  16. Sampling Depths, Depth Shifts, and Depth Resolutions for Biₙ⁺ Ion Analysis in Argon Gas Cluster Depth Profiles.

    PubMed

    Havelund, R; Seah, M P; Gilmore, I S

    2016-03-10

    Gas cluster sputter depth profiling is increasingly used for the spatially resolved chemical analysis and imaging of organic materials. Here, a study is reported of the sampling depth in secondary ion mass spectrometry depth profiling. It is shown that sampling-depth effects lead to apparent shifts in depth profiles of Irganox 3114 delta layers in Irganox 1010 sputtered, in the dual beam mode, using 5 keV Ar₂₀₀₀⁺ ions and analyzed with Bi(q+), Bi₃(q+) and Bi₅(q+) ions (q = 1 or 2) with energies between 13 and 50 keV. The profiles show sharp delta layers, broadened from their intrinsic 1 nm thickness to full widths at half-maxima (fwhm's) of 8-12 nm. For different secondary ions, the centroids of the measured delta layers are shifted deeper or shallower by up to 3 nm from the position measured for the large, 564.36 Da (C₃₃H₄₆N₃O₅⁻) characteristic ion for Irganox 3114 used to define a reference position. The shifts are linear with the Biₙ(q+) beam energy and are greatest for Bi₃(q+), slightly less for Bi₅(q+) with its wider or less deep craters, and significantly less for Bi(q+), where the sputtering yield is very low and the primary ion penetrates more deeply. The shifts increase the fwhm's of the delta layers in a manner consistent with a linearly falling generation and escape depth distribution function (GEDDF) for the emitted secondary ions, relevant for a paraboloid-shaped crater. The total depth of this GEDDF is 3.7 times the delta-layer shifts. The greatest effect is for the peaks with the greatest shifts, i.e. Bi₃(q+) at the highest energy, and for the smaller fragments. It is recommended that low energies be used for the analysis beam and that carefully selected, large secondary-ion fragments be used for measuring depth distributions, or that the analysis be made in the single beam mode using the sputtering Ar cluster ions also for analysis. PMID:26883085

  17. Real-time structured light depth extraction

    NASA Astrophysics Data System (ADS)

    Keller, Kurtis; Ackerman, Jeremy D.

    2000-03-01

    Gathering depth data using structured light is an established procedure for many different environments and uses. Many of these systems are used instead of laser line scanning because of their speed. However, for some applications, in our case laparoscopic surgery, depth extraction must run in real time. We have developed an apparatus that speeds up the raw image display and grabbing in structured light depth extraction from 30 frames per second to 60 and 180 frames per second. This results in depth and texture maps that are updated about 15 times per second versus about 3. This increased update rate allows real-time depth extraction for use in augmented medical/surgical applications. Our miniature, fist-sized projector uses an internal ferro-reflective LCD display that is illuminated with cold light from a flex light pipe. The miniature projector, attachable to a laparoscope, projects inverted pairs of structured light patterns into the body, where these images are viewed by a high-speed camera, set slightly off axis from the projector, that grabs images synchronously. The images from the camera are sent to a graphics-processing card, where six frames are processed simultaneously to extract depth and create mapped textures. This information is then sent to the host computer together with 3D coordinate information of the projector/camera and the associated textures. The surgeon is then able to view body images in real time from different locations without physically moving the laparoscope imager/projector, thereby reducing the trauma of moving laparoscopes in the patient.

  18. X-ray photoelectron spectroscopy, depth profiling, and elemental imaging of metal/polyimide interfaces of high density interconnect packages subjected to temperature and humidity

    SciTech Connect

    Jung, David R.; Ibidunni, Bola; Ashraf, Muhammad

    1998-11-24

    X-ray photoelectron spectroscopy (XPS) was used to analyze surfaces and buried interfaces of a tape ball grid array (TBGA) interconnect package that was exposed to temperature and humidity testing (the pressure cooker test or PCT). Two metallization structures, employing 3.5 and 7.5 nm Cr adhesion layers, respectively, showed dramatically different results in the PCT. For the metallization with 3.5 nm Cr, spontaneous failure occurred on the polymer side of the metal/polyimide interface. Copper and other metals were detected by XPS on and below this polymer surface. For the metallization with 7.5 nm Cr, which did not delaminate in the PCT, the metallization was manually peeled away and also showed failure at the polymer side of the interface. Conventional XPS taken from a 1 mm diameter area showed the presence of metals on and below this polymer surface. Detailed spatially-resolved analysis using small area XPS (0.1 mm diameter area) and imaging XPS (7 µm resolution) showed that this metal did not migrate through and below the metal/polymer interface, but around and outside of the metallized area.

  19. Stereoscopic depth constancy

    PubMed Central

    Guan, Phillip

    2016-01-01

    Depth constancy is the ability to perceive a fixed depth interval in the world as constant despite changes in viewing distance and the spatial scale of depth variation. It is well known that the spatial frequency of depth variation has a large effect on threshold. In the first experiment, we determined that the visual system compensates for this differential sensitivity when the change in disparity is suprathreshold, thereby attaining constancy similar to contrast constancy in the luminance domain. In a second experiment, we examined the ability to perceive constant depth when the spatial frequency and viewing distance both changed. To attain constancy in this situation, the visual system has to estimate distance. We investigated this ability when vergence, accommodation and vertical disparity are all presented accurately and therefore provided veridical information about viewing distance. We found that constancy is nearly complete across changes in viewing distance. Depth constancy is most complete when the scale of the depth relief is constant in the world rather than when it is constant in angular units at the retina. These results bear on the efficacy of algorithms for creating stereo content. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269596

  20. Stereoscopic depth constancy.

    PubMed

    Guan, Phillip; Banks, Martin S

    2016-06-19

    Depth constancy is the ability to perceive a fixed depth interval in the world as constant despite changes in viewing distance and the spatial scale of depth variation. It is well known that the spatial frequency of depth variation has a large effect on threshold. In the first experiment, we determined that the visual system compensates for this differential sensitivity when the change in disparity is suprathreshold, thereby attaining constancy similar to contrast constancy in the luminance domain. In a second experiment, we examined the ability to perceive constant depth when the spatial frequency and viewing distance both changed. To attain constancy in this situation, the visual system has to estimate distance. We investigated this ability when vergence, accommodation and vertical disparity are all presented accurately and therefore provided veridical information about viewing distance. We found that constancy is nearly complete across changes in viewing distance. Depth constancy is most complete when the scale of the depth relief is constant in the world rather than when it is constant in angular units at the retina. These results bear on the efficacy of algorithms for creating stereo content. This article is part of the themed issue 'Vision in our three-dimensional world'.

  1. Depth-encoded all-fiber swept source polarization sensitive OCT

    PubMed Central

    Wang, Zhao; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Lee, ByungKun; Choi, WooJhon; Potsaid, Benjamin; Liu, Jonathan; Jayaraman, Vijaysekhar; Cable, Alex; Kraus, Martin F.; Liang, Kaicheng; Hornegger, Joachim; Fujimoto, James G.

    2014-01-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of conventional OCT and can assess depth-resolved tissue birefringence in addition to intensity. Most existing PS-OCT systems are relatively complex and their clinical translation remains difficult. We present a simple and robust all-fiber PS-OCT system based on swept source technology and polarization depth-encoding. Polarization multiplexing was achieved using a polarization maintaining fiber. Polarization sensitive signals were detected using fiber based polarization beam splitters and polarization controllers were used to remove the polarization ambiguity. A simplified post-processing algorithm was proposed for speckle noise reduction relaxing the demand for phase stability. We demonstrated systems design for both ophthalmic and catheter-based PS-OCT. For ophthalmic imaging, we used an optical clock frequency doubling method to extend the imaging range of a commercially available short cavity light source to improve polarization depth-encoding. For catheter based imaging, we demonstrated 200 kHz PS-OCT imaging using a MEMS-tunable vertical cavity surface emitting laser (VCSEL) and a high speed micromotor imaging catheter. The system was demonstrated in human retina, finger and lip imaging, as well as ex vivo swine esophagus and cardiovascular imaging. The all-fiber PS-OCT is easier to implement and maintain compared to previous PS-OCT systems and can be more easily translated to clinical applications due to its robust design. PMID:25401008

  2. Motivation with Depth.

    ERIC Educational Resources Information Center

    DiSpezio, Michael A.

    2000-01-01

    Presents an illusional arena by offering experience in optical illusions in which students must apply critical analysis to their innate information gathering systems. Introduces different types of depth illusions for students to experience. (ASK)

  3. Depth Optimization Study

    DOE Data Explorer

    Kawase, Mitsuhiro

    2009-11-22

    The zipped file contains a directory of data and routines used in the NNMREC turbine depth optimization study (Kawase et al., 2011), and calculation results thereof. For further info, please contact Mitsuhiro Kawase at kawase@uw.edu. Reference: Mitsuhiro Kawase, Patricia Beba, and Brian Fabien (2011), Finding an Optimal Placement Depth for a Tidal In-Stream Conversion Device in an Energetic, Baroclinic Tidal Channel, NNMREC Technical Report.

  4. Measurement of PSF for the extended depth of field of microscope based on liquid lens

    NASA Astrophysics Data System (ADS)

    Xue, Yujia; Qu, Yufu; Zhu, Shenyu

    2015-08-01

    To obtain an accurate integral PSF of an extended depth of field (EDOF) microscope based on a liquid tunable lens and the volumetric sampling (VS) method, a method based on statistical averaging and inverse filtering, using a quantum-dot fluorescent nanosphere as a point source, is proposed in this paper. First, a number of raw quantum-dot images were captured separately, with the focal length of the liquid lens held fixed and with it swept over the exposure time. Second, the raw images were separately added and averaged to obtain two noise-free mean images. Third, the integral PSF was obtained by computing the inverse Fourier transform of the Fourier transform of the mean image captured with the focal length fixed divided by that of the mean image captured while the focal length was swept. Finally, experimental results show that the image restored using the measured integral PSF has good image quality and no artifacts.
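
    The Fourier-domain division in the third step reduces to a regularized spectral ratio of the two mean images; a minimal sketch is below. Which mean image goes in the numerator follows the paper's definition of the integral PSF, and the damping constant eps is an assumption added here to keep the division numerically stable:

        import numpy as np

        def fourier_ratio(numerator_img, denominator_img, eps=1e-3):
            """Inverse Fourier transform of the ratio of two image spectra, with a
            small Wiener-style damping term so that weak spectral components of the
            denominator do not amplify noise."""
            N = np.fft.fft2(numerator_img)
            D = np.fft.fft2(denominator_img)
            ratio = N * np.conj(D) / (np.abs(D) ** 2 + eps)
            return np.fft.fftshift(np.real(np.fft.ifft2(ratio)))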

  5. Evaluating methods for controlling depth perception in stereoscopic cinematography

    NASA Astrophysics Data System (ADS)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography
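
    The mapping approach compared above can be sketched, in its simplest linear form, as follows; the disparity budget values are illustrative, and practical algorithms additionally account for viewing distance, screen size and eye separation:

        import numpy as np

        def scene_depth_to_screen_disparity(depth, near_px=-20.0, far_px=20.0):
            """Linear depth mapping: squeeze the scene depth range into a chosen
            screen-disparity budget (pixels) so the perceived depth stays within a
            comfortable volume. A dynamic mapping recomputes d_min/d_max per shot
            (or per frame) instead of using fixed values for the whole sequence."""
            d_min, d_max = float(depth.min()), float(depth.max())
            t = (depth - d_min) / max(d_max - d_min, 1e-9)   # 0 = nearest, 1 = farthest
            return near_px + t * (far_px - near_px)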

  6. Capturing Motion and Depth Before Cinematography.

    PubMed

    Wade, Nicholas J

    2016-01-01

    Visual representations of biological states have traditionally faced two problems: they lacked motion and depth. Attempts were made to supply these wants over many centuries, but the major advances were made in the early-nineteenth century. Motion was synthesized by sequences of slightly different images presented in rapid succession and depth was added by presenting slightly different images to each eye. Apparent motion and depth were combined some years later, but they tended to be applied separately. The major figures in this early period were Wheatstone, Plateau, Horner, Duboscq, Claudet, and Purkinje. Others later in the century, like Marey and Muybridge, were stimulated to extend the uses to which apparent motion and photography could be applied to examining body movements. These developments occurred before the birth of cinematography, and significant insights were derived from attempts to combine motion and depth. PMID:26684420

  7. Capturing Motion and Depth Before Cinematography.

    PubMed

    Wade, Nicholas J

    2016-01-01

    Visual representations of biological states have traditionally faced two problems: they lacked motion and depth. Attempts were made to supply these wants over many centuries, but the major advances were made in the early-nineteenth century. Motion was synthesized by sequences of slightly different images presented in rapid succession and depth was added by presenting slightly different images to each eye. Apparent motion and depth were combined some years later, but they tended to be applied separately. The major figures in this early period were Wheatstone, Plateau, Horner, Duboscq, Claudet, and Purkinje. Others later in the century, like Marey and Muybridge, were stimulated to extend the uses to which apparent motion and photography could be applied to examining body movements. These developments occurred before the birth of cinematography, and significant insights were derived from attempts to combine motion and depth.

  8. Contour detection combined with depth information

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Cai, Chao

    2015-12-01

    Many challenging computer vision problems have been proven to benefit from the incorporation of depth information, to name a few, semantic labelling, pose estimation and even contour detection. In a single monocular image, different objects lie at different depths. Depth is coherent within one object, whereas depth may vary discontinuously across different objects. Meanwhile, there exists a broad non-classical receptive field (NCRF) outside the classical receptive field (CRF). The response of the central neuron is affected not only by the stimulus inside the CRF but is also modulated by the stimulus surrounding it. The contextual modulation is mediated by horizontal connections across the visual cortex. Based on these findings, a biologically inspired contour detection model combined with depth information is proposed in this paper.

  9. Histology validation of mapping depth-resolved cardiac fiber orientation in fresh mouse heart using optical polarization tractography

    PubMed Central

    Wang, Y.; Zhang, K.; Wasala, N. B.; Yao, X.; Duan, D.; Yao, G.

    2014-01-01

    Myofiber organization in cardiac muscle plays an important role in achieving normal mechanical and electrical heart functions. An imaging tool that can reveal microstructural details of myofiber organization is valuable for both basic research and clinical applications. A high-resolution optical polarization tractography (OPT) was recently developed based on Jones matrix optical coherence tomography (JMOCT). In this study, we validated the accuracy of using OPT for measuring depth-resolved fiber orientation in fresh heart samples by comparing directly with histology images. Systematic image processing algorithms were developed to register OPT with histology images. The pixel-wise differences between the two tractographic results were analyzed in detail. The results indicate that OPT can accurately image depth-resolved fiber orientation in fresh heart tissues and reveal microstructural details at the histological level. PMID:25136507

  10. Lunar Far Side Regolith Depth

    NASA Astrophysics Data System (ADS)

    Bart, G. D.; Melosh, H. J.

    2005-08-01

    The lunar far side contains the South Pole Aitken Basin, which is the largest known impact basin in the solar system, and is enhanced in titanium and iron compared to the rest of the lunar highlands. Although we have known of this enigmatic basin since the 60's, most lunar photography and science covered the equatorial near side where the Apollo spacecraft landed. With NASA's renewed interest in the Moon, the South Pole Aitken Basin is a likely target for future exploration. The regolith depth is a crucial measurement for understanding the source of the surface material in the Basin. On the southern far side of the Moon (20 S, 180 W), near the north edge of the Basin, we determined the regolith depth by examining 11 flat-floored craters about 200 m in diameter. We measured the ratio of the diameter of the flat floor to the diameter of the crater, and used it to calculate the regolith thickness using the method of Quaide and Oberbeck (1968). We used Apollo 15 panoramic images --- still the highest resolution images available for this region of the Moon. We found the regolith depth at that location to be about 40 m. This value is significantly greater than values for the lunar near side: 3 m (Oceanus Procellarum), 16 m (Hipparchus), and 1-10 m at the Surveyor landing sites. The thicker value obtained for the far side regolith is consistent with the older age of the far side. It also suggests that samples returned from the far side may have originated from deeper beneath the surface than their near side counterparts.

  11. Depth estimation using a lightfield camera

    NASA Astrophysics Data System (ADS)

    Roper, Carissa

    The latest innovation in camera design has come in the form of the lightfield, or plenoptic, camera, which captures 4-D radiance data via microlens arrays rather than just a 2-D scene image. With spatial and angular light-ray data now recorded on the camera sensor, it is feasible to construct algorithms that estimate the depth of different portions of a given scene. There are limits to the achievable precision due to the hardware structure and the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on the commercially available plenoptic camera.

  12. Improved Boundary Layer Depth Retrievals from MPLNET

    NASA Technical Reports Server (NTRS)

    Lewis, Jasper R.; Welton, Ellsworth J.; Molod, Andrea M.; Joseph, Everette

    2013-01-01

    Continuous lidar observations of the planetary boundary layer (PBL) depth have been made at the Micropulse Lidar Network (MPLNET) site in Greenbelt, MD since April 2001. However, because of issues with the operational PBL depth algorithm, the data is not reliable for determining seasonal and diurnal trends. Therefore, an improved PBL depth algorithm has been developed which uses a combination of the wavelet technique and image processing. The new algorithm is less susceptible to contamination by clouds and residual layers, and in general, produces lower PBL depths. A 2010 comparison shows the operational algorithm overestimates the daily mean PBL depth when compared to the improved algorithm (1.85 and 1.07 km, respectively). The improved MPLNET PBL depths are validated using radiosonde comparisons which suggests the algorithm performs well to determine the depth of a fully developed PBL. A comparison with the Goddard Earth Observing System-version 5 (GEOS-5) model suggests that the model may underestimate the maximum daytime PBL depth by 410 m during the spring and summer. The best agreement between MPLNET and GEOS-5 occurred during the fall and they differed the most in the winter.
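
    One ingredient mentioned above, the wavelet technique, is commonly implemented as a Haar covariance transform of each backscatter profile; a minimal sketch follows. The dilation value and the simple take-the-maximum rule are illustrative assumptions and do not reproduce the cloud and residual-layer screening of the improved algorithm:

        import numpy as np

        def haar_covariance_transform(backscatter, z, dilation):
            """Covariance of a lidar backscatter profile with a Haar step function of
            width `dilation` (same units as z, assumed uniformly spaced): a strong
            positive peak marks a sharp backscatter decrease, commonly taken as the
            PBL top."""
            out = np.full(z.shape, np.nan, dtype=float)
            dz = z[1] - z[0]
            for i, b in enumerate(z):
                inside = np.abs(z - b) <= dilation / 2.0
                haar = np.where(z[inside] <= b, 1.0, -1.0)   # +1 below b, -1 above
                out[i] = np.sum(backscatter[inside] * haar) * dz / dilation
            return out

        # PBL-top estimate for one profile:
        # zi = z[np.nanargmax(haar_covariance_transform(profile, z, 0.3))]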

  13. A Neural Representation of Depth from Motion Parallax in Macaque Visual Cortex

    PubMed Central

    Nadler, Jacob W.; Angelaki, Dora E.; DeAngelis, Gregory C.

    2008-01-01

    Perception of depth is a fundamental challenge for the visual system, particularly for observers moving through their environment. The brain makes use of multiple visual cues to reconstruct the three-dimensional structure of a scene. One potent cue, motion parallax, frequently arises during translation of the observer because the images of objects at different distances move across the retina with different velocities. Human psychophysical studies have demonstrated that motion parallax can be a powerful depth cue [1-5], and motion parallax appears to be heavily exploited by animal species that lack highly developed binocular vision [6-8]. However, little is known about the neural mechanisms that underlie this capacity. We used a virtual-reality system to translate macaque monkeys while they viewed motion parallax displays that simulated objects at different depths. We show that many neurons in the middle temporal (MT) area signal the sign of depth (i.e., near vs. far) from motion parallax in the absence of other depth cues. To achieve this, neurons must combine visual motion with extra-retinal (non-visual) signals related to the animal's movement. Our findings suggest a new neural substrate for depth perception, and demonstrate a robust interaction of visual and non-visual cues in area MT. Combined with previous studies that implicate area MT in depth perception based on binocular disparities [9-12], our results suggest that MT contains a more general representation of three dimensional space that leverages multiple cues. PMID:18344979

  14. The construction of landslide archives by using 1969 CORONA (KH-4B) imagery and aerial photos - A case study of the catchment of Te-chi reservoir

    NASA Astrophysics Data System (ADS)

    Jen, Chia-Hung; Dirk, Wenske; Lin, Jiun-Chuan; Böse, Margot

    2010-05-01

    Landslides are a common phenomenon in Taiwan owing to the extreme climate, intense tectonic movement and highly fractured bedrock. In the study of landslides, building a historical archive is critical for both long-term monitoring and landform evolution research. For the first three decades after the 1950s, only a few maps and written documents are available for the high mountain areas, so historical remote sensing data can be a viable way to obtain detailed information about human activities and the landscape's reaction in terms of increasing denudation. In this study, we use different kinds of data to identify landslides, including CORONA imagery of 1969, an ortho-rectified aerial photo map of 1980 and ortho-rectified aerial photos of 2004. The historical CORONA imagery can be orthorectified and georeferenced and therefore can be used as a source of data for landslide identification and landslide archive construction. The study area is in the upper catchment of the Ta-chia River. This area is the homeland of the Taiyal aboriginal tribe. The Ta-chia River is "Taiwan's TVA" in terms of its vast hydroelectric power potential. The rough terrain makes accessibility very difficult, isolating the upper Ta-chia basin from the rest of Taiwan's densely populated areas. The construction of the Central Cross-Island Highway officially started in July 1956 and was completed in May 1960. It connects the towns of Tong-shi in the west and Taroko in the east, across the upper Ta-chia basin. There are branches off to the town of Pu-li in the south and I-lan in the north, so the upper Ta-chia basin becomes the pivotal node for cross-island traffic in four directions. Apart from its military purposes, the Central Cross-Island Highway has had a substantial impact on the mountainous areas of the upper Ta-chia basin, the most important aspect being the increase of population and farming. The rough terrain makes human accessibility very low, so the upper Ta-chia basin is isolated from the rest of densely populated

  15. Depth-encoded synthetic aperture optical coherence tomography of biological tissues with extended focal depth.

    PubMed

    Mo, Jianhua; de Groot, Mattijs; de Boer, Johannes F

    2015-02-23

    Optical coherence tomography (OCT) has proven to be able to provide three-dimensional (3D) volumetric images of scattering biological tissues for in vivo medical diagnostics. Unlike conventional optical microscopy, its depth-resolving ability (axial resolution) is exclusively determined by the laser source and therefore invariant over the full imaging depth. In contrast, its transverse resolution is determined by the objective's numerical aperture and the wavelength which is only approximately maintained over twice the Rayleigh range. However, the prevailing laser sources for OCT allow image depths of more than 5 mm which is considerably longer than the Rayleigh range. This limits high transverse resolution imaging with OCT. Previously, we reported a novel method to extend the depth-of-focus (DOF) of OCT imaging in [Mo et al., Opt. Express 21, 10048 (2013)]. The approach is to create three different optical apertures via pupil segmentation with an annular phase plate. These three optical apertures produce three OCT images from the same sample, which are encoded to different depth positions in a single OCT B-scan. This allows for correcting the defocus-induced curvature of wave front in the pupil so as to improve the focus. As a consequence, the three images originating from those three optical apertures can be used to reconstruct a new image with an extended DOF. In this study, we successfully applied this method for the first time to both an artificial phantom and biological tissues over a four times larger depth range. The results demonstrate a significant DOF improvement, paving the way for 3D high resolution OCT imaging beyond the conventional Rayleigh range. PMID:25836528

  16. Optical coherence microscopy for deep tissue imaging of the cerebral cortex with intrinsic contrast

    NASA Astrophysics Data System (ADS)

    Srinivasan, Vivek J.; Radhakrishnan, Harsha; Jiang, James Y.; Barry, Scott; Cable, Alex E.

    2012-01-01

    We demonstrate Optical Coherence Microscopy (OCM) for in vivo imaging of the rat cerebral cortex. Imaging does not require addition of dyes or contrast agents, and is achieved through intrinsic scattering contrast and image processing alone. Furthermore, we demonstrate in vivo, quantitative measurements of optical properties and angiography in the rat cerebral cortex. Imaging depths greater than those achieved by conventional two-photon microscopy are demonstrated.

  17. Achieving Success in Small Business. A Self-Instruction Program for Small Business Owner-Managers. Creating an Effective Business Image.

    ERIC Educational Resources Information Center

    Virginia Polytechnic Inst. and State Univ., Blacksburg. Div. of Vocational-Technical Education.

    This self-instructional module on creating an effective business image is the fourth in a set of twelve modules designed for small business owner-managers. Competencies for this module are (1) identify the key factors which contribute to formation of a business image and (2) assess your current image and determine if it communicates the…

  18. Real-time viewpoint image synthesis using strips of multi-camera images

    NASA Astrophysics Data System (ADS)

    Date, Munekazu; Takada, Hideaki; Kojima, Akira

    2015-03-01

    A real-time viewpoint image generation method is achieved. Video communications with a high sense of reality are needed to make natural connections between users at different places. One of the key technologies to achieve a sense of high reality is image generation corresponding to an individual user's viewpoint. However, generating viewpoint images requires advanced image processing, which is usually too heavy to use for real-time and low-latency purposes. In this paper we propose a real-time viewpoint image generation method using simple blending of multiple camera images taken at equal horizontal intervals and convergence obtained by using approximate information of an object's depth. An image generated from the nearest camera images is visually perceived as an intermediate viewpoint image due to the visual effect of depth-fused 3D (DFD). If the viewpoint is not on the line of the camera array, a viewpoint image could be generated by region splitting. We made a prototype viewpoint image generation system and achieved real-time full-frame operation for stereo HD videos. The users can see their individual viewpoint image for left-and-right and back-and-forth movement toward the screen. Our algorithm is very simple and promising as a means for achieving video communication with high reality.
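
    The core blending step for a viewpoint lying on the camera baseline can be sketched as follows; the region-splitting case for off-axis viewpoints and the depth-based convergence correction are not shown, and the ideal-alignment assumptions are noted in the comments:

        import numpy as np

        def intermediate_view(images, camera_x, viewpoint_x):
            """Blend the two cameras that straddle viewpoint_x. All cameras are
            assumed to lie on one horizontal baseline with rectified, equally
            exposed images; the DFD effect lets the blend read as an intermediate
            viewpoint."""
            xs = np.asarray(camera_x, dtype=float)
            i = int(np.clip(np.searchsorted(xs, viewpoint_x) - 1, 0, len(xs) - 2))
            t = (viewpoint_x - xs[i]) / (xs[i + 1] - xs[i])   # 0 at camera i, 1 at i+1
            return (1.0 - t) * images[i] + t * images[i + 1]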

  19. Seismic depth conversion vs. structural validation

    NASA Astrophysics Data System (ADS)

    Totake, Yukitsugu; Butler, Rob; Bond, Clare

    2016-04-01

    Interpretation based on seismic reflection data is inherently uncertain, being built on imperfect datasets with limits in data resolution and spatial extent. This has prompted geologists to use structural validation techniques to verify their seismic interpretations for many years. Structural validation of seismic interpretations should ideally be completed on depth sections, which are converted from the time domain using velocities derived from well checkshot surveys, seismic velocity analysis, or even estimates. The choice of velocity model critically controls the final depth image and hence the structural geometry of the interpretations that are used as initial datasets for structural validation. However, depth conversion is never perfectly accurate because of the absence of depth constraints. Now, how robust are structural validation techniques to depth-conversion uncertainty? Here we explore how structural validation techniques respond to different versions of depth interpretations converted with different velocities. We use a seismic time-based image of a fold-thrust structure in the deepwater Niger Delta to interpret, and convert it to depth using three different velocity models: constant velocity (VM1); a single layer with initial velocity v0 at the layer top and vertical velocity gradient k (VM2); and three layers below the seabed, each with its own v0-k pair (VM3). Forward modelling, an automated trishear modelling algorithm called 'inverse trishear modelling', and Groshong's area-depth-strain (ADS) method are applied to test the structural geometry of the depth-converted interpretations. We find that forward modelling and inverse trishear modelling reasonably 'fit' all versions of the interpretation, regardless of the velocity model used for depth conversion, with multiple sets of model parameters. On the other hand, only velocity model VM3 'passes' the ADS validation method, with the detachment level interpreted concordant with the depth estimated from excess area analysis, based on interpreted
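
    The sensitivity to the velocity model can be made concrete with the standard v0-k (linear velocity gradient) conversion used for VM2/VM3-style layers; the numbers in the example below are illustrative assumptions, not values from the Niger Delta dataset:

        import numpy as np

        def time_to_depth(twt_s, v0=1800.0, k=0.6):
            """Depth below datum for a two-way travel time under a linear velocity
            model v(z) = v0 + k*z (v0 in m/s, k in 1/s): z = (v0/k) * (exp(k*t/2) - 1).
            With k -> 0 this reduces to the constant-velocity case z = v0 * t / 2."""
            twt_s = np.asarray(twt_s, dtype=float)
            if abs(k) < 1e-9:
                return v0 * twt_s / 2.0
            return (v0 / k) * np.expm1(k * twt_s / 2.0)

        # Example: a horizon picked at 2.0 s TWT converts to ~2470 m with
        # v0 = 1800 m/s and k = 0.6 1/s, versus 1800 m at a constant 1800 m/s;
        # the velocity-model choice directly shifts the geometry being validated.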

  20. Crack depth determination with inductive thermography

    NASA Astrophysics Data System (ADS)

    Oswald-Tranta, B.; Schmidt, R.

    2015-05-01

    Castings, forgings and other steel products are nowadays usually tested with magnetic particle inspection in order to detect surface cracks. An alternative method is active thermography with inductive heating, which is quicker, can be well automated and, as presented in this paper, even allows the depth of a crack to be estimated. The induced eddy current, due to its very small penetration depth in ferro-magnetic materials, flows around a surface crack, heating it selectively. The surface temperature is recorded during and after the short inductive heating pulse with an infrared camera. Using Fourier transformation, the whole IR image sequence is evaluated and the phase image is processed to detect surface cracks. The level and the local distribution of the phase around a crack correspond to its depth. Analytical calculations were used to model the signal distribution around cracks of different depths, and a relationship has been derived between the depth of a crack and its phase value. Additionally, the influence of the heating pulse duration has been investigated. Samples with artificial and with natural cracks have been tested. Results are presented comparing the calculated and measured phase values as a function of crack depth. Keywords: inductive heating, eddy current, infrared
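
    The phase-image evaluation described above amounts to a per-pixel Fourier transform of the recorded sequence; a minimal sketch follows. The choice of frequency bin and the calibration from phase to crack depth are not specified here and would come from the analytical model in the paper:

        import numpy as np

        def phase_image(frames, freq_bin=1):
            """Pulse-phase evaluation: FFT each pixel's temperature-vs-time signal
            (frames has shape [n_frames, H, W]) and return the phase at one analysis
            frequency bin; cracks show up as local phase anomalies whose level and
            spatial spread relate to crack depth."""
            T = np.asarray(frames, dtype=float)
            spectrum = np.fft.rfft(T - T.mean(axis=0), axis=0)
            return np.angle(spectrum[freq_bin])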

  1. Motion parallax thresholds for unambiguous depth perception.

    PubMed

    Holmin, Jessica; Nawrot, Mark

    2015-10-01

    The perception of unambiguous depth from motion parallax arises from the neural integration of retinal image motion and extra-retinal eye movement signals. It is only recently that these parameters have been articulated in the form of the motion/pursuit ratio. In the current study, we explored the lower limits of the parameter space in which observers could accurately perform near/far relative depth-sign discriminations for a translating random-dot stimulus. Stationary observers pursued a translating random dot stimulus containing relative image motion. Their task was to indicate the location of the peak in an approximate square-wave stimulus. We measured thresholds for depth from motion parallax, quantified as motion/pursuit ratios, as well as lower motion thresholds and pursuit accuracy. Depth thresholds were relatively stable at pursuit velocities 5-20 deg/s, and increased at lower and higher velocities. The pattern of results indicates that minimum motion/pursuit ratios are limited by motion and pursuit signals, both independently and in combination with each other. At low and high pursuit velocities, depth thresholds were limited by inaccurate pursuit signals. At moderate pursuit velocities, depth thresholds were limited by motion signals.
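
    For context, the motion/pursuit ratio referred to above relates relative depth to the ratio of retinal image velocity and pursuit eye velocity; in its approximate small-angle form, depth relative to fixation is about the fixation distance times that ratio. The sketch below is that approximation only, with illustrative numbers:

        def depth_from_motion_pursuit(retinal_deg_s, pursuit_deg_s, fixation_m):
            """Approximate (small-angle) form of the motion/pursuit law: depth of a
            point relative to the fixation plane is about the fixation distance times
            the ratio of retinal image velocity to pursuit eye velocity."""
            return fixation_m * (retinal_deg_s / pursuit_deg_s)

        # e.g. 0.5 deg/s of retinal motion during 10 deg/s pursuit while fixating at
        # 1 m corresponds to roughly 0.05 m of depth relative to the fixation plane.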

  2. Investigating the San Andreas Fault System in the Northern Salton Trough by a Combination of Seismic Tomography and Pre-stack Depth Migration: Results from the Salton Seismic Imaging Project (SSIP)

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Ryberg, T.; Fuis, G. S.; Goldman, M.; Catchings, R.; Rymer, M. J.; Hole, J. A.; Stock, J. M.

    2013-12-01

    The Salton Trough in southern California is a tectonically active pull-apart basin which was formed in migrating step-overs between strike-slip faults, of which the San Andreas fault (SAF) and the Imperial fault are current examples. It is located within the large-scale transition between the onshore SAF strike-slip system to the north and the marine rift system of the Gulf of California to the south. Crustal stretching and sinking formed the distinct topographic features and sedimentary successions of the Salton Trough. The active SAF and related fault systems can produce potentially large damaging earthquakes. The Salton Seismic Imaging Project (SSIP), funded by NSF and USGS, was undertaken to generate seismic data and images to improve the knowledge of fault geometry and seismic velocities within the sedimentary basins and underlying crystalline crust around the SAF in this key region. The results from these studies are required as input for modeling of earthquake scenarios and prediction of strong ground motion in the surrounding populated areas and cities. We present seismic data analysis and results from tomography and pre-stack depth migration for a number of seismic profiles (Lines 1, 4-7) covering mainly the northern Salton Trough. The controlled-source seismic data were acquired in 2011. The seismic lines have lengths ranging from 37 to 72 km. On each profile, 9-17 explosion sources with charges of 110-460 kg were recorded by 100-m spaced vertical component receivers. On Line 7, additional OBS data were acquired within the Salton Sea. Travel times of first arrivals were picked and inverted for initial 1D velocity models. Alternatively, the starting models were derived from the crustal-scale velocity models developed by the Southern California Earthquake Center. The final 2D velocity models were obtained using the algorithm of Hole (1992; JGR). We have also tested the tomography packages FAST and SIMUL2000, resulting in similar velocity structures. An

  3. Burn Depth Monitor

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Supra Medical Systems is successfully marketing a device that detects the depth of burn wounds in human skin. To develop the product, the company used technology developed by NASA Langley physicists looking for better ultrasonic detection of small air bubbles and cracks in metal. The device is being marketed to burn wound analysis and treatment centers. Through a Space Act agreement, NASA and the company are also working to further develop ultrasonic instruments for new medical applications.

  4. Burn Depth Monitor

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Supra Medical Systems is successfully marketing a device that detects the depth of burn wounds in human skin. To develop the product, the company used technology developed by NASA Langley physicists looking for better ultrasonic detection of small air bubbles and cracks in metal. The device is being marketed to burn wound analysis and treatment centers. Through a Space Act agreement, NASA and the company are also working to further develop ultrasonic instruments for new medical applications

  5. Burn Depth Monitor

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Supra Medical Systems is successfully marketing a device that detects the depth of burn wounds in human skin. To develop the product, the company used technology developed by NASA Langley physicists looking for better ultrasonic detection of small air bubbles and cracks in metal. The device is being marketed to burn wound analysis and treatment centers. Through a Space Act agreement, NASA and the company are also working to further develop ultrasonic instruments for new medical applications.

  6. Variable depth core sampler

    DOEpatents

    Bourgeois, P.M.; Reger, R.J.

    1996-02-20

    A variable depth core sampler apparatus is described comprising a first circular hole saw member, having longitudinal sections that collapse to form a point and capture a sample, and a second circular hole saw member residing inside said first hole saw member to support the longitudinal sections of said first hole saw member and prevent them from collapsing to form a point. The second hole saw member may be raised and lowered inside said first hole saw member. 7 figs.

  7. Variable depth core sampler

    DOEpatents

    Bourgeois, Peter M.; Reger, Robert J.

    1996-01-01

    A variable depth core sampler apparatus comprising a first circular hole saw member, having longitudinal sections that collapse to form a point and capture a sample, and a second circular hole saw member residing inside said first hole saw member to support the longitudinal sections of said first hole saw member and prevent them from collapsing to form a point. The second hole saw member may be raised and lowered inside said first hole saw member.

  8. Variable depth core sampler

    SciTech Connect

    Bourgeois, P.M.; Reger, R.J.

    1994-12-31

    This invention relates to a sampling means, more particularly to a device to sample hard surfaces at varying depths. Often it is desirable to take samples of a hard surface wherein the samples are of the same diameter but of varying depths. Current practice requires that a full top-to-bottom sample of the material be taken, using a hole saw, and boring a hole from one end of the material to the other. The sample thus taken is removed from the hole saw and the middle of said sample is then subjected to further investigation. This paper describes a variable depth core sampler comprising a circular hole saw member, having longitudinal sections that collapse to form a point and capture a sample, and a second hole saw member residing inside the first hole saw member to support the longitudinal sections of the first member and prevent them from collapsing to form a point. The second hole saw member may be raised and lowered inside the first hole saw member.

  9. Focus cues affect perceived depth

    PubMed Central

    Watt, Simon J.; Akeley, Kurt; Ernst, Marc O.; Banks, Martin S.

    2007-01-01

    Depth information from focus cues—accommodation and the gradient of retinal blur—is typically incorrect in three-dimensional (3-D) displays because the light comes from a planar display surface. If the visual system incorporates information from focus cues into its calculation of 3-D scene parameters, this could cause distortions in perceived depth even when the 2-D retinal images are geometrically correct. In Experiment 1 we measured the direct contribution of focus cues to perceived slant by varying independently the physical slant of the display surface and the slant of a simulated surface specified by binocular disparity (binocular viewing) or perspective/texture (monocular viewing). In the binocular condition, slant estimates were unaffected by display slant. In the monocular condition, display slant had a systematic effect on slant estimates. Estimates were consistent with a weighted average of slant from focus cues and slant from disparity/texture, where the cue weights are determined by the reliability of each cue. In Experiment 2, we examined whether focus cues also have an indirect effect on perceived slant via the distance estimate used in disparity scaling. We varied independently the simulated distance and the focal distance to a disparity-defined 3-D stimulus. Perceived slant was systematically affected by changes in focal distance. Accordingly, depth constancy (with respect to simulated distance) was significantly reduced when focal distance was held constant compared to when it varied appropriately with the simulated distance to the stimulus. The results of both experiments show that focus cues can contribute to estimates of 3-D scene parameters. Inappropriate focus cues in typical 3-D displays may therefore contribute to distortions in perceived space. PMID:16441189

  10. Design of an optical system with large depth of field using in the micro-assembly

    NASA Astrophysics Data System (ADS)

    Li, Rong; Chang, Jun; Zhang, Zhi-jing; Ye, Xin; Zheng, Hai-jing

    2013-08-01

    Micro systems are currently the mainstream of application and demand in the field of micro fabrication for civilian and national defense purposes. Compared with macro assembly, the requirements on location accuracy of a micro-assembly system are much higher. Usually the dimensions of the components in micro-assembly are between a few microns and several hundred microns, and the general assembly precision required is at the sub-micron level. Micro system assembly is currently the bottleneck of micro fabrication. The optical stereo microscopes used in the field of micro-assembly technology can achieve high-resolution imaging, but the depth of field of the optical imaging system is too small, which is not conducive to three-dimensional observation during micro-assembly. This paper first summarizes the development of micro system assembly at home and abroad. Based on the study of the core features of the technology, a scheme is proposed which uses wave front coding technology to increase the depth of field of the optical imaging system. In wave front coding technology, by creatively combining traditional optical design with digital image processing, the depth of field can be greatly increased; moreover, all defocus-related aberrations, such as spherical aberration, chromatic aberration, astigmatism, Petzval (field) curvature, distortion, and other defocus induced by assembly errors and temperature changes, can be corrected or minimized. In this paper, based on the study of the theory, a set of optical microscopy imaging systems is designed. The system is designed and optimized using the optical design software CODE V and ZEMAX. Finally, the imaging results of a traditional optical stereo microscope and an optical stereo microscope employing wave front coding technology are compared. The results show that the method is practically operable and that the optimized phase plate has a good effect on improving the imaging quality and increasing the

  11. Boundary Depth Information Using Hopfield Neural Network

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Wang, Ruisheng

    2016-06-01

    Depth information is widely used for representation, reconstruction and modeling of 3D scenes. Generally, two kinds of methods can obtain depth information. One is to use the distance cues from a depth camera, but the results heavily depend on the device, and the accuracy degrades greatly as the distance to the object increases. The other uses binocular cues from matching to obtain the depth information. Collecting the depth information of different scenes with stereo matching methods has become increasingly mature and convenient. In the objective function, the data term ensures that the difference between matched pixels is small, while the smoothness term smooths neighbors with different disparities. Nonetheless, the smoothness term blurs the boundary depth information of the object, which becomes the bottleneck of stereo matching. This paper proposes a novel energy function for the boundary to keep the discontinuities and uses a Hopfield neural network to solve the optimization. We first extract the regions of interest, which are the boundary pixels in the original images. Then, we develop the boundary energy function to calculate the matching cost. At last, we solve the optimization globally with the Hopfield neural network. The Middlebury stereo benchmark is used to test the proposed method, and results show that our boundary depth information is more accurate than that of other state-of-the-art methods and can be used to optimize the results of other stereo matching methods.
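
    To make the data/smoothness trade-off described above concrete, the sketch below evaluates a generic stereo matching energy of the form E(D) = sum |I_L(p) - I_R(p - D(p))| + lambda * sum |D(p) - D(q)| over a disparity map. This is a textbook formulation for illustration only, not the paper's boundary-preserving energy or its Hopfield formulation.

    ```python
    import numpy as np

    def stereo_energy(left, right, disparity, lam=0.1):
        """Generic stereo matching energy: data term + pairwise smoothness term.

        left, right: 2D grayscale images (float arrays of equal shape).
        disparity:   integer disparity map, same shape, shifting along columns."""
        h, w = left.shape
        cols = np.arange(w)[None, :] - disparity          # matched column per pixel
        cols = np.clip(cols, 0, w - 1)
        rows = np.arange(h)[:, None].repeat(w, axis=1)
        data = np.abs(left - right[rows, cols]).sum()     # photometric consistency

        # Smoothness: penalize disparity differences between neighboring pixels.
        smooth = (np.abs(np.diff(disparity, axis=0)).sum()
                  + np.abs(np.diff(disparity, axis=1)).sum())
        return data + lam * smooth
    ```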

  12. Depth profiling of gold nanoparticles and characterization of point spread functions in reconstructed and human skin using multiphoton microscopy.

    PubMed

    Labouta, Hagar I; Hampel, Martina; Thude, Sibylle; Reutlinger, Katharina; Kostka, Karl-Heinz; Schneider, Marc

    2012-01-01

    Multiphoton microscopy has become popular in studying dermal nanoparticle penetration. This necessitates studying the imaging parameters of multiphoton microscopy in skin as an imaging medium, in terms of achievable detection depths and the resolution limit. This would simulate real-case scenarios rather than depending on theoretical values determined under ideal conditions. This study focused on depth profiling of sub-resolution gold nanoparticles (AuNP) in reconstructed (fixed and unfixed) and human skin using multiphoton microscopy. Point spread functions (PSF) were determined for the water-immersion objective used (63×/NA = 1.2). Factors such as skin-tissue compactness and the presence of wrinkles were found to deteriorate the accuracy of depth profiling. A broad range of AuNP detectable depths (20-100 μm) in reconstructed skin was observed. AuNP could only be detected up to a depth of ∼14 μm in human skin. Lateral (0.5 ± 0.1 μm) and axial (1.0 ± 0.3 μm) PSF in reconstructed and human specimens were determined. Skin cells and intercellular components did not degrade the PSF with depth. In summary, the imaging parameters of multiphoton microscopy in skin and practical limitations encountered in tracking nanoparticle penetration using this approach were investigated.

  13. Extended depth of field system for long distance iris acquisition

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Lin; Hsieh, Sheng-Hsun; Hung, Kuo-En; Yang, Shi-Wen; Li, Yung-Hui; Tien, Chung-Hao

    2012-10-01

    Using biometric signatures for identity recognition has been practiced for centuries. Recently, iris recognition systems have attracted much attention due to their high accuracy and high stability. The texture features of the iris provide a signature that is unique for each subject. Currently, most commercial iris recognition systems acquire images at less than 50 cm, a serious constraint that needs to be overcome if the technology is to be used for airport access or entrances that require a high turn-over rate. In order to capture iris patterns from a distance, in this study we developed a telephoto imaging system combined with image processing techniques. By using a cubic phase mask positioned in front of the camera, the point spread function was kept constant over a wide range of defocus. With an adequate decoding filter, the blurred image was restored, and a working distance between the subject and the camera of over 3 m was achieved with a 500 mm focal length and an aperture of F/6.3. The simulation and experimental results validated the proposed scheme, where the depth of focus of the iris camera was extended threefold over that of traditional optics while keeping sufficient recognition accuracy.
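
    A minimal sketch of the wavefront-coding pipeline described above: a cubic phase term exp(j*alpha*(x^3 + y^3)) is added to the pupil, the defocused PSF is simulated, and a blurred image is restored with a Wiener-style decoding filter. The pupil sampling, alpha value and noise parameter are illustrative assumptions, not the system's actual design values.

    ```python
    import numpy as np

    N = 256
    x = np.linspace(-1, 1, N)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2) <= 1.0

    def psf(alpha, defocus_w20):
        """PSF of a pupil with a cubic phase mask (alpha) and defocus (W20, in waves)."""
        phase = alpha * (X**3 + Y**3) + 2 * np.pi * defocus_w20 * (X**2 + Y**2)
        pupil = aperture * np.exp(1j * phase)
        h = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
        return h / h.sum()

    def wiener_decode(blurred, h, k=1e-3):
        """Restore a blurred image with a Wiener-style decoding filter built from PSF h."""
        H = np.fft.fft2(np.fft.ifftshift(h))
        G = np.fft.fft2(blurred)
        F = G * np.conj(H) / (np.abs(H)**2 + k)
        return np.real(np.fft.ifft2(F))

    # With a sufficiently strong cubic term, psf(alpha, 0) and psf(alpha, 2) change
    # little in shape, so a single decoding filter restores images over an
    # extended range of defocus.
    ```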

  14. Binocular disparity magnitude affects perceived depth magnitude despite inversion of depth order.

    PubMed

    Matthews, Harold; Hill, Harold; Palmisano, Stephen

    2011-01-01

    The hollow-face illusion involves a misperception of depth order: our perception follows our top-down knowledge that faces are convex, even though bottom-up depth information reflects the actual concave surface structure. While pictorial cues can be ambiguous, stereopsis should unambiguously indicate the actual depth order. We used computer-generated stereo images to investigate how, if at all, the sign and magnitude of binocular disparities affect the perceived depth of the illusory convex face. In experiment 1 participants adjusted the disparity of a convex comparison face until it matched a reference face. The reference face was either convex or hollow and had binocular disparities consistent with an average face or had disparities exaggerated, consistent with a face stretched in depth. We observed that apparent depth increased with disparity magnitude, even when the hollow faces were seen as convex (ie when perceived depth order was inconsistent with disparity sign). As expected, concave faces appeared flatter than convex faces, suggesting that disparity sign also affects perceived depth. In experiment 2, participants were presented with pairs of real and illusory convex faces. In each case, their task was to judge which of the two stimuli appeared to have the greater depth. Hollow faces with exaggerated disparities were again perceived as deeper. PMID:22132512

  15. 3D astigmatic depth sensing camera

    NASA Astrophysics Data System (ADS)

    Birch, Gabriel C.; Tyo, J. Scott; Schwiegerling, Jim

    2011-10-01

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images inexpensively and without major modifications to current cameras is uncommon. Our goal is to create a modification to a common commercial camera that allows three-dimensional reconstruction. We desire such an imaging system to be inexpensive and easy to use. Furthermore, we require that any three-dimensional modification to a camera does not reduce its resolution. Here we present a possible solution to this problem. A commercial digital camera is used with a projector system with astigmatic focus to capture images of a scene. By using an astigmatically projected pattern we can create two different focus depths for the horizontal and vertical features of the projected pattern, thereby encoding depth. This projector could be integrated into the flash unit of the camera. By carefully choosing a pattern we are able to exploit this differential focus in image processing. Wavelet transforms are performed on the image to pick out the projected pattern. By taking ratios of certain wavelet coefficients we are able to correlate the contrast ratios with the distance of an object at a particular transverse position from the camera. We present information regarding the construction, calibration, and images produced by this system. The link between projected pattern design and image processing algorithms is also discussed.

  16. Photoacoustic molecular imaging

    NASA Astrophysics Data System (ADS)

    Kiser, William L., Jr.; Reinecke, Daniel; DeGrado, Timothy; Bhattacharyya, Sibaprasad; Kruger, Robert A.

    2007-02-01

    It is well documented that photoacoustic imaging has the capability to differentiate tissue based on the spectral characteristics of tissue in the optical regime. The imaging depth in tissue exceeds standard optical imaging techniques, and systems can be designed to achieve excellent spatial resolution. A natural extension of imaging the intrinsic optical contrast of tissue is to demonstrate the ability of photoacoustic imaging to detect contrast agents based on optically absorbing dyes that exhibit well-defined absorption peaks in the infrared. The ultimate goal of this project is to implement molecular imaging, in which Herceptin(TM), a monoclonal antibody used as a therapeutic agent in breast cancer patients who overexpress the HER2 gene, is labeled with an IR-absorbing dye, and the resulting in vivo bio-distribution is mapped using multi-spectral infrared stimulation and subsequent photoacoustic detection. To lay the groundwork for this goal and establish system sensitivity, images were collected in tissue-mimicking phantoms to determine the maximum detection depth and minimum detectable concentration of Indocyanine Green (ICG), a common IR-absorbing dye, for a single-angle photoacoustic acquisition. A breast-mimicking phantom was constructed, and spectra were also collected for hemoglobin and methanol. An imaging schema was developed that made it possible to separate the ICG from the other tissue-mimicking components in a multiple-component phantom. We present the results of these experiments and define the path forward for the detection of dye-labeled Herceptin(TM) in cell cultures and mouse models.

  17. The relation between Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol optical depth and PM2.5 over the United States: a geographical comparison by U.S. Environmental Protection Agency regions.

    PubMed

    Zhang, Hai; Hoff, Raymond M; Engel-Cox, Jill A

    2009-11-01

    Aerosol optical depth (AOD) acquired from satellite measurements demonstrates good correlation with particulate matter with diameters less than 2.5 microm (PM2.5) in some regions of the United States and has been used for monitoring and nowcasting air quality over the United States. This work investigates the relation between Moderate Resolution Imaging Spectroradiometer (MODIS) AOD and PM2.5 over the 10 U.S. Environmental Protection Agency (EPA)-defined geographic regions in the United States on the basis of a 2-yr (2005-2006) match-up dataset of MODIS AOD and hourly PM2.5 measurements. The AOD retrievals demonstrate a geographical and seasonal variation in their relation with PM2.5. Good correlations are mostly observed over the eastern United States in summer and fall. The southeastern United States has the highest correlation coefficients at more than 0.6. The southwestern United States has the lowest correlation coefficient of approximately 0.2. The seasonal regression relations derived for each region are used to estimate the PM2.5 from AOD retrievals, and it is shown that the estimation using this method is more accurate than that using a fixed ratio between PM2.5 and AOD. Two versions of AOD from Terra (v4.0.1 and v5.2.6) are also compared in terms of the inversion methods and screening algorithms. The v5.2.6 AOD retrievals demonstrate better correlation with PM2.5 than v4.0.1 retrievals, but they have much less coverage because of the differences in the cloud-screening algorithm.
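
    The regional, seasonal regression relation described above is, in essence, an ordinary least-squares fit of hourly PM2.5 against collocated MODIS AOD for each EPA region and season. The sketch below shows that step with a placeholder match-up table; the column names and the pandas/NumPy usage are assumptions for illustration, not the study's actual processing chain.

    ```python
    import numpy as np
    import pandas as pd

    def seasonal_aod_pm25_fits(matchups: pd.DataFrame):
        """Fit PM2.5 = a * AOD + b separately for each (EPA region, season).

        matchups: DataFrame with columns 'region', 'season', 'aod', 'pm25'
        (one row per collocated MODIS AOD / hourly PM2.5 pair)."""
        fits = {}
        for (region, season), grp in matchups.groupby(["region", "season"]):
            a, b = np.polyfit(grp["aod"], grp["pm25"], deg=1)   # least-squares line
            r = np.corrcoef(grp["aod"], grp["pm25"])[0, 1]      # correlation coefficient
            fits[(region, season)] = {"slope": a, "intercept": b, "r": r}
        return fits

    # Estimated PM2.5 for a new AOD retrieval then follows from the regional fit:
    # pm25_est = fits[(region, season)]["slope"] * aod + fits[(region, season)]["intercept"]
    ```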

  18. Bessel beam Grueneisen photoacoustic microscopy with extended depth of field

    NASA Astrophysics Data System (ADS)

    Shi, Junhui; Wang, Lidai; Noordam, Cedric; Wang, Lihong V.

    2016-03-01

    The short focal depth of a Gaussian beam limits the volumetric imaging speed of optical-resolution photoacoustic microscopy (OR-PAM). A Bessel beam, which is diffraction-free, provides a long focal depth, but its side lobes may deteriorate image quality when the Bessel beam is directly employed to excite photoacoustic signals in OR-PAM. Here, we present a nonlinear approach based on the Grueneisen relaxation effect to suppress the side-lobe artifacts in photoacoustic imaging. This method extends the focal depth of OR-PAM and speeds up volumetric imaging. We experimentally demonstrated a 1-mm focal depth with a 7-μm lateral resolution and volumetrically imaged carbon fiber and red blood cell samples.

  19. Molecular depth profiling by wedged crater beveling.

    PubMed

    Mao, Dan; Lu, Caiyan; Winograd, Nicholas; Wucher, Andreas

    2011-08-15

    Time-of-flight secondary ion mass spectrometry and atomic force microscopy are employed to characterize a wedge-shaped crater eroded by a 40-keV C(60)(+) cluster ion beam on an organic film of Irganox 1010 doped with Irganox 3114 delta layers. From an examination of the resulting surface, information about depth resolution, topography, and erosion rate can be obtained as a function of crater depth, for every depth, in a single experiment. It is shown that when measurements are performed at liquid nitrogen temperature, a constant erosion rate and reduced bombardment-induced surface roughness are observed. At room temperature, however, the erosion rate drops by ∼1/3 during the removal of the 400 nm Irganox film and the roughness gradually increases from 1 nm to ∼4 nm. From SIMS lateral images of the beveled crater and AFM topography results, depth resolution was further improved by employing glancing angles of incidence and lower primary ion beam energy. Sub-10 nm depth resolution was observed under the optimized conditions on a routine basis. In general, we show that wedge-crater beveling is an important tool for elucidating the factors that are important for molecular depth profiling experiments.

  20. Depth-resolved soft x-ray photoelectron emission microscopy in nanostructures via standing-wave excited photoemission

    SciTech Connect

    Kronast, F.; Ovsyannikov, R.; Kaiser, A.; Wiemann, C.; Yang, S.-H.; Locatelli, A.; Burgler, D.E.; Schreiber, R.; Salmassi, F.; Fischer, P.; Durr, H.A.; Schneider, C.M.; Eberhardt, W.; Fadley, C.S.

    2008-11-24

    We present an extension of conventional laterally resolved soft x-ray photoelectron emission microscopy. A depth resolution along the surface normal down to a few Å can be achieved by setting up standing x-ray wave fields in a multilayer substrate. The sample is an Ag/Co/Au trilayer, whose first layer has a wedge profile, grown on a Si/MoSi2 multilayer mirror. Tuning the incident x-rays to the mirror Bragg angle, we set up standing x-ray wave fields. We demonstrate the resulting depth resolution by imaging the standing wave fields as they move through the trilayer wedge structure.

  1. Effect of fundamental depth resolution and cardboard effect to perceived depth resolution on multi-view display.

    PubMed

    Jung, Jae-Hyun; Yeom, Jiwoon; Hong, Jisoo; Hong, Keehoon; Min, Sung-Wook; Lee, Byoungho

    2011-10-10

    In three-dimensional television (3D TV) broadcasting, the effects of the fundamental depth resolution and the cardboard effect on the perceived depth resolution of a multi-view display are important. The observer distance and the specification of the multi-view display quantize the expressible depth range, which affects the observer's perception of depth resolution. In addition, multi-view 3D TV needs a view synthesis process using depth image-based rendering, which induces the cardboard effect through the relation among the stereo pickup, the multi-view synthesis and the multi-view display. In this paper, we analyze the fundamental depth resolution and the cardboard effect arising from the synthesis process in multi-view 3D TV broadcasting. After the analysis, a numerical comparison and subjective tests with 20 participants are performed to find the effect of the fundamental depth resolution and the cardboard effect on the perceived depth resolution. PMID:21997055

  2. Natural fracturing, by depth

    NASA Astrophysics Data System (ADS)

    Hooker, John; Laubach, Stephen

    2013-04-01

    Natural opening-mode fractures commonly fall upon a spectrum whose end-members are veins, which have wide ranges of sizes and are mostly or thoroughly cemented, and joints, which have little opening displacement and little or no cement. The vein end-member is common in metamorphic rocks, whose high temperature and pressure of formation place them outside typical reservoir settings; conversely, many uncemented joints likely form near the surface and so too have limited relevance to subsurface exploration. Sampling of cores retrieved from tight-gas sandstone reservoirs suggest that it is intermediate fractures, not true joints or veins, that provide natural porosity and permeability. Such fractures have abundant pore space among fracture-bridging cements, which may hold fractures open despite varying states of stress through time. Thus the more sophisticated our understanding of the processes that form veins and joints, i.e., how natural fracturing varies by depth, the better our ability to predict intermediate fractures. Systematic differences between veins and joints, in terms of size-scaling and lateral and stratigraphic spatial arrangement, have been explained in the literature by the mechanical effects of sedimentary layering, which likely exert more control over fracture patterns at shallower depths. Thus stratabound joints commonly have narrow size ranges and regular spacing; non-stratabound veins have a wide range of sizes and spacings. However, new fieldwork and careful literature review suggest that the effects of mechanical layering are only half the story. Although atypical, veins may be highly stratabound and yet spatially clustered; non-stratabound fractures may nonetheless feature narrow size ranges. These anomalous fracture arrangements are better explained by the presence of precipitating cements during fracture opening than by mechanical layering. Cement is thought to be highly important for fracture permeability, but potential effects of

  3. Depth perception not found in human observers for static or dynamic anti-correlated random dot stereograms.

    PubMed

    Hibbard, Paul B; Scott-Brown, Kenneth C; Haigh, Emma C; Adrain, Melanie

    2014-01-01

    One of the greatest challenges in visual neuroscience is that of linking neural activity with perceptual experience. In the case of binocular depth perception, important insights have been achieved through comparing neural responses and the perception of depth, for carefully selected stimuli. One of the most important types of stimulus that has been used here is the anti-correlated random dot stereogram (ACRDS). In these stimuli, the contrast polarity of one half of a stereoscopic image is reversed. While neurons in cortical area V1 respond reliably to the binocular disparities in ACRDS, they do not create a sensation of depth. This discrepancy has been used to argue that depth perception must rely on neural activity elsewhere in the brain. Currently, the psychophysical results on which this argument rests are not clear-cut. While it is generally assumed that ACRDS do not support the perception of depth, some studies have reported that some people, some of the time, perceive depth in some types of these stimuli. Given the importance of these results for understanding the neural correlates of stereopsis, we studied depth perception in ACRDS using a large number of observers, in order to provide an unambiguous conclusion about the extent to which these stimuli support the perception of depth. We presented observers with random dot stereograms in which correlated dots were presented in a surrounding annulus and correlated or anti-correlated dots were presented in a central circular region. While observers could reliably report the depth of the central region for correlated stimuli, we found no evidence for depth perception in static or dynamic anti-correlated stimuli. Confidence ratings for stereoscopic perception were uniformly low for anti-correlated stimuli, but showed normal variation with disparity for correlated stimuli. These results establish that the inability of observers to perceive depth in ACRDS is a robust phenomenon.

  4. Stratistician and Other Special Education Delivery Models: Changes Over Time in Teacher Ratings, Self-Image, Perceived Classroom Climate and Academic Achievement Among Handicapped and Nonhandicapped Children.

    ERIC Educational Resources Information Center

    Buffmire, Judy Ann

    Examined with 343 handicapped and 202 nonhandicapped elementary grade children was the relationship between exposure to a stratistician-generalist program and scores on measures of teacher ratings, self-concept, student perception of classroom climate, academic achievement, as well as grade level, sex, and classification. The 17…

  5. THEMIS Observations of Atmospheric Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Smith, Michael D.; Bandfield, Joshua L.; Christensen, Philip R.; Richardson, Mark I.

    2003-01-01

    The Mars Odyssey spacecraft entered into Martian orbit in October 2001 and after successful aerobraking began mapping in February 2002 (approximately Ls=330 deg.). Images taken by the Thermal Emission Imaging System (THEMIS) on-board the Odyssey spacecraft allow the quantitative retrieval of atmospheric dust and water-ice aerosol optical depth. Atmospheric quantities retrieved from THEMIS build upon existing datasets returned by Mariner 9, Viking, and Mars Global Surveyor (MGS). Data from THEMIS complements the concurrent MGS Thermal Emission Spectrometer (TES) data by offering a later local time (approx. 2:00 for TES vs. approx. 4:00 - 5:30 for THEMIS) and much higher spatial resolution.

  6. Higher resolution stimulus facilitates depth perception: MT+ plays a significant role in monocular depth perception.

    PubMed

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Hiruma, Nobuyuki

    2014-01-01

    Today, we human beings are faced with a high-quality virtual world of a completely new nature. For example, we have digital displays with a resolution high enough that we cannot distinguish them from the real world. However, little is known about how such high-quality representations contribute to the sense of realness, especially to depth perception. What is the neural mechanism for processing such fine but virtual representations? Here, we psychophysically and physiologically examined the relationship between stimulus resolution and depth perception, using luminance contrast (shading) as a monocular depth cue. As a result, we found that a higher-resolution stimulus facilitates depth perception even when the stimulus resolution difference is undetectable. This finding runs against the traditional cognitive hierarchy of visual information processing, in which visual input is processed continuously in a bottom-up cascade of cortical regions that analyze increasingly complex information such as depth information. In addition, functional magnetic resonance imaging (fMRI) results reveal that the human middle temporal area (MT+) plays a significant role in monocular depth perception. These results might provide us not only with new insight into the neural mechanism of depth perception but also with a view of the future progress of our neural system accompanied by state-of-the-art technologies. PMID:25327168

  7. Higher resolution stimulus facilitates depth perception: MT+ plays a significant role in monocular depth perception.

    PubMed

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Hiruma, Nobuyuki

    2014-10-20

    Today, we human beings are faced with a high-quality virtual world of a completely new nature. For example, we have digital displays with a resolution high enough that we cannot distinguish them from the real world. However, little is known about how such high-quality representations contribute to the sense of realness, especially to depth perception. What is the neural mechanism for processing such fine but virtual representations? Here, we psychophysically and physiologically examined the relationship between stimulus resolution and depth perception, using luminance contrast (shading) as a monocular depth cue. As a result, we found that a higher-resolution stimulus facilitates depth perception even when the stimulus resolution difference is undetectable. This finding runs against the traditional cognitive hierarchy of visual information processing, in which visual input is processed continuously in a bottom-up cascade of cortical regions that analyze increasingly complex information such as depth information. In addition, functional magnetic resonance imaging (fMRI) results reveal that the human middle temporal area (MT+) plays a significant role in monocular depth perception. These results might provide us not only with new insight into the neural mechanism of depth perception but also with a view of the future progress of our neural system accompanied by state-of-the-art technologies.

  8. Biomedical photoacoustic imaging

    PubMed Central

    Beard, Paul

    2011-01-01

    Photoacoustic (PA) imaging, also called optoacoustic imaging, is a new biomedical imaging modality based on the use of laser-generated ultrasound that has emerged over the last decade. It is a hybrid modality, combining the high-contrast and spectroscopic-based specificity of optical imaging with the high spatial resolution of ultrasound imaging. In essence, a PA image can be regarded as an ultrasound image in which the contrast depends not on the mechanical and elastic properties of the tissue, but on its optical properties, specifically optical absorption. As a consequence, it offers greater specificity than conventional ultrasound imaging with the ability to detect haemoglobin, lipids, water and other light-absorbing chromophores, but with greater penetration depth than purely optical imaging modalities that rely on ballistic photons. As well as visualizing anatomical structures such as the microvasculature, it can also provide functional information in the form of blood oxygenation, blood flow and temperature. All of this can be achieved over a wide range of length scales from micrometres to centimetres with scalable spatial resolution. These attributes lend PA imaging to a wide variety of applications in clinical medicine, preclinical research and basic biology for studying cancer, cardiovascular disease, abnormalities of the microcirculation and other conditions. With the emergence of a variety of truly compelling in vivo images obtained by a number of groups around the world in the last 2–3 years, the technique has come of age and the promise of PA imaging is now beginning to be realized. Recent highlights include the demonstration of whole-body small-animal imaging, the first demonstrations of molecular imaging, the introduction of new microscopy modes and the first steps towards clinical breast imaging being taken as well as a myriad of in vivo preclinical imaging studies. In this article, the underlying physical principles of the technique, its practical

  9. Biomedical photoacoustic imaging.

    PubMed

    Beard, Paul

    2011-08-01

    Photoacoustic (PA) imaging, also called optoacoustic imaging, is a new biomedical imaging modality based on the use of laser-generated ultrasound that has emerged over the last decade. It is a hybrid modality, combining the high-contrast and spectroscopic-based specificity of optical imaging with the high spatial resolution of ultrasound imaging. In essence, a PA image can be regarded as an ultrasound image in which the contrast depends not on the mechanical and elastic properties of the tissue, but on its optical properties, specifically optical absorption. As a consequence, it offers greater specificity than conventional ultrasound imaging with the ability to detect haemoglobin, lipids, water and other light-absorbing chromophores, but with greater penetration depth than purely optical imaging modalities that rely on ballistic photons. As well as visualizing anatomical structures such as the microvasculature, it can also provide functional information in the form of blood oxygenation, blood flow and temperature. All of this can be achieved over a wide range of length scales from micrometres to centimetres with scalable spatial resolution. These attributes lend PA imaging to a wide variety of applications in clinical medicine, preclinical research and basic biology for studying cancer, cardiovascular disease, abnormalities of the microcirculation and other conditions. With the emergence of a variety of truly compelling in vivo images obtained by a number of groups around the world in the last 2-3 years, the technique has come of age and the promise of PA imaging is now beginning to be realized. Recent highlights include the demonstration of whole-body small-animal imaging, the first demonstrations of molecular imaging, the introduction of new microscopy modes and the first steps towards clinical breast imaging being taken as well as a myriad of in vivo preclinical imaging studies. In this article, the underlying physical principles of the technique, its practical

  10. Improving depth estimation from a plenoptic camera by patterned illumination

    NASA Astrophysics Data System (ADS)

    Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.

    2015-05-01

    Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device [1], and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, manipulate the aperture synthetically and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration, and in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.

  11. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    PubMed Central

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-01-01

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital. PMID:24991942

  12. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    PubMed

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
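
    The train-then-recognize loop described above (one Hidden Markov Model per activity, fitted on skeleton-joint feature sequences, then scored on new sequences) can be sketched as below. The use of hmmlearn, the feature dimensionality and the activity labels are assumptions for illustration, not the paper's exact implementation.

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_activity_models(sequences_by_activity, n_states=5):
        """Fit one Gaussian HMM per activity.

        sequences_by_activity: dict mapping activity name -> list of
        (n_frames, n_features) arrays of per-frame skeleton features."""
        models = {}
        for activity, seqs in sequences_by_activity.items():
            X = np.vstack(seqs)                      # stack all frames of all sequences
            lengths = [len(s) for s in seqs]         # per-sequence frame counts
            model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            model.fit(X, lengths)
            models[activity] = model
        return models

    def recognize(models, sequence):
        """Return the activity whose HMM gives the highest log-likelihood."""
        return max(models, key=lambda a: models[a].score(sequence))
    ```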

  13. Holographic coherent anti-Stokes Raman scattering bio-imaging

    PubMed Central

    Shi, Kebin; Edwards, Perry S.; Hu, Jing; Xu, Qian; Wang, Yanming; Psaltis, Demetri; Liu, Zhiwen

    2012-01-01

    CARS holography captures both the amplitude and the phase of a complex anti-Stokes field, and can perform three-dimensional imaging by digitally focusing onto different depths inside a specimen. The application of CARS holography for bio-imaging is demonstrated. It is shown that holographic CARS imaging of sub-cellular components in live HeLa cells can be achieved. PMID:22808443

  14. Noncontact depth-resolved micro-scale optical coherence elastography of the cornea

    PubMed Central

    Wang, Shang; Larin, Kirill V.

    2014-01-01

    High-resolution elastographic assessment of the cornea can greatly assist clinical diagnosis and treatment of various ocular diseases. Here, we report on the first noncontact depth-resolved micro-scale optical coherence elastography of the cornea, achieved using shear wave imaging optical coherence tomography (SWI-OCT) combined with spectral analysis of the corneal Lamb wave propagation. This imaging method relies on a focused air-puff device to load the cornea with a highly-localized, low-pressure, short-duration air stream and applies phase-resolved OCT detection to capture the low-amplitude deformation with nano-scale sensitivity. The SWI-OCT system is used here to image the corneal Lamb wave propagation at a frame rate equal to the OCT A-line acquisition speed. Based on the spectral analysis of the corneal temporal deformation profiles, the phase velocity of the Lamb wave is obtained at different depths for the major frequency components, which shows the depthwise distribution of the corneal stiffness related to its structural features. Our pilot experiments on ex vivo rabbit eyes demonstrate the feasibility of this method for depth-resolved micro-scale elastography of the cornea. The assessment of the Lamb wave dispersion is also presented, suggesting the potential for quantitative measurement of corneal viscoelasticity. PMID:25426312
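
    The depth-resolved stiffness estimate described above rests on extracting the Lamb wave phase velocity from the phase delay between temporal deformation profiles recorded at two lateral positions. A minimal version of that spectral step is sketched below, with the sampling rate and separation distance as placeholder values; it is a generic illustration, not the paper's processing chain.

    ```python
    import numpy as np

    def phase_velocity(profile_a, profile_b, dx_m, fs_hz):
        """Frequency-dependent phase velocity from two temporal deformation profiles.

        profile_a, profile_b: displacement vs. time at two positions separated by dx_m.
        Returns (frequencies in Hz, phase velocity in m/s)."""
        n = len(profile_a)
        A = np.fft.rfft(profile_a)
        B = np.fft.rfft(profile_b)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs_hz)
        # Phase accumulated over the propagation distance dx (sign depends on direction).
        dphi = np.abs(np.unwrap(np.angle(B) - np.angle(A)))
        with np.errstate(divide="ignore", invalid="ignore"):
            c_p = 2.0 * np.pi * freqs * dx_m / dphi    # c(f) = 2*pi*f*dx / delta_phi
        return freqs, c_p
    ```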

  15. Objective methods for achieving an early prediction of the effectiveness of regional block anesthesia using thermography and hyper-spectral imaging

    NASA Astrophysics Data System (ADS)

    Klaessens, John H. G. M.; Landman, Mattijs; de Roode, Rowland; Noordmans, Herke J.; Verdaasdonk, Rudolf M.

    2011-03-01

    An objective method to measure the effectiveness of regional anesthesia can reduce time and unintended pain inflicted on the patient. A prospective observational study was performed on 22 patients receiving a local anesthetic block before undergoing hand surgery. Two non-invasive techniques, thermal and oxygenation imaging, were applied to observe the region affected by the peripheral block, and the results were compared to the standard cold sensation test. The supraclavicular block was placed under ultrasound guidance around the brachial plexus by injecting 20 cc of Ropivacaine. The block causes a relaxation of the muscles around the blood vessels, resulting in dilatation and hence an increase of blood perfusion, skin temperature and skin oxygenation in the lower arm and hand. Temperatures were acquired with an IR thermal camera (FLIR ThermoCam SC640). The data were recorded and analyzed with the ThermaCam Researcher and Matlab software. Narrow-band spectral images were acquired at selected wavelengths with a CCD camera combined either with a Liquid Crystal Tunable Filter (420-730 nm) or with a tunable hyper-wavelength LED light source (450-880 nm). Concentration changes of oxygenated and deoxygenated hemoglobin in the dermis of the skin were calculated using the modified Lambert-Beer equation. Both imaging methods showed distinct oxygenation and temperature differences at the surface of the skin of the hand with a good correlation to the anesthetized areas. A temperature response was visible within 5 minutes, compared to the standard of 30 minutes. Both non-contact methods proved to be more objective and can provide an earlier prediction of the effectiveness of the anesthetic block.
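
    The concentration step described above is typically a least-squares inversion of the modified Lambert-Beer law, dA(lambda) = [eps_HbO2(lambda)*dC_HbO2 + eps_Hb(lambda)*dC_Hb] * pathlength, solved per pixel from attenuation changes at the selected wavelengths. The sketch below shows that inversion with placeholder extinction coefficients and pathlength; these numbers are illustrative assumptions, not the study's calibration values.

    ```python
    import numpy as np

    # Placeholder extinction coefficients [1/(mM*cm)] at two chosen wavelengths;
    # real values would come from published hemoglobin absorption tables.
    eps = np.array([[0.32, 3.20],    # [eps_HbO2, eps_Hb] at 660 nm (illustrative)
                    [0.39, 1.10]])   # [eps_HbO2, eps_Hb] at 730 nm (illustrative)
    pathlength_cm = 0.1              # assumed mean optical pathlength (d * DPF)

    def hemoglobin_changes(delta_attenuation):
        """Solve modified Lambert-Beer: dA(lambda) = eps @ dC * pathlength.

        delta_attenuation: attenuation change at each wavelength, shape (2,).
        Returns (dC_HbO2, dC_Hb) in mM."""
        dC, *_ = np.linalg.lstsq(eps * pathlength_cm, delta_attenuation, rcond=None)
        return dC

    print(hemoglobin_changes(np.array([0.02, 0.01])))
    ```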

  16. Timely Follow-Up of Abnormal Diagnostic Imaging Test Results in an Outpatient Setting: Are Electronic Medical Records Achieving Their Potential?

    PubMed Central

    Singh, Hardeep; Thomas, Eric J.; Mani, Shrinidi; Sittig, Dean; Arora, Harvinder; Espadas, Donna; Khan, Myrna M.; Petersen, Laura A.

    2010-01-01

    Background Given the fragmentation of outpatient care, timely follow-up of abnormal diagnostic test results remains a challenge. We hypothesized that an EMR that facilitates the transmission and availability of critical imaging results through either automated notification (alerting) or direct access to the primary report would eliminate this problem. Methods We studied critical imaging alert notifications in the outpatient setting of a tertiary care VA facility from November 2007 to June 2008. Tracking software determined whether the alert was acknowledged (i.e. provider opened the message for viewing) within two weeks of transmission; acknowledged alerts were considered read. We reviewed medical records and contacted providers to determine timely follow-up actions (e.g. ordering a follow-up test or consultation) within 4 weeks of transmission. Multivariable logistic regression models accounting for clustering effect by providers analyzed predictors for two outcomes; lack of acknowledgment and lack of timely follow-up. Results Of 123,638 studies (including X-rays, CT scans, ultrasounds, MRI and mammography), 1196 (0.97%) images generated alerts; 217 (18.1%) of these were unacknowledged. Alerts had a higher risk of being unacknowledged when ordering providers were trainees (OR, 5.58;95%CI, 2.86-10.89) and when dual (more than one provider alerted) as opposed to single communication was used (OR, 2.02;95%CI, 1.22-3.36). Timely follow-up was lacking in 92 (7.7% of all alerts) and was similar for acknowledged and unacknowledged alerts (7.3% vs. 9.7%;p=0.2). Risk for lack of timely follow-up was higher with dual communication (OR,1.99;95%CI, 1.06-3.48) but lower when additional verbal communication was used by the radiologist (OR, 0.12;95%CI: 0.04-0.38). Nearly all abnormal results lacking timely follow-up at 4 weeks were eventually found to have measurable clinical impact in terms of further diagnostic testing or treatment. Conclusions Critical imaging results may not

  17. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis.

    PubMed

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-06-28

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.
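
    A minimal sketch of the spectral-analysis step described above: the mean depth over a thorax region of interest is tracked frame by frame, and the dominant frequency of that signal within a physiological band is taken as the breathing rate. The frame rate, band limits and array shapes are illustrative assumptions, not the paper's processing parameters.

    ```python
    import numpy as np

    def breathing_rate_bpm(depth_frames, roi, fs_hz, band=(0.1, 0.7)):
        """Estimate breathing rate from a sequence of Kinect depth frames.

        depth_frames: array (n_frames, height, width) of depth values in mm.
        roi: (row_slice, col_slice) selecting the thorax region.
        band: plausible breathing frequencies in Hz (here 6-42 breaths/min)."""
        signal = depth_frames[:, roi[0], roi[1]].mean(axis=(1, 2))  # mean ROI depth per frame
        signal = signal - signal.mean()                             # remove DC component
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs_hz)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        peak_freq = freqs[in_band][np.argmax(spectrum[in_band])]    # dominant frequency
        return 60.0 * peak_freq                                     # breaths per minute

    # e.g. 30 s of synthetic data at 30 fps on a 424x512 depth map:
    frames = np.random.rand(900, 424, 512)
    rate = breathing_rate_bpm(frames, (slice(150, 250), slice(200, 320)), fs_hz=30.0)
    ```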

  18. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis.

    PubMed

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-01-01

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction. PMID:27367687

  19. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis

    PubMed Central

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-01-01

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687

  20. Extended depth of field in an intrinsically wavefront-encoded biometric iris camera

    NASA Astrophysics Data System (ADS)

    Bergkoetter, Matthew D.; Bentley, Julie L.

    2014-12-01

    This work describes a design process which greatly increases the depth of field of a simple three-element lens system intended for biometric iris recognition. The system is optimized to produce a point spread function which is insensitive to defocus, so that recorded images may be deconvolved without knowledge of the exact object distance. This is essentially a variation on the technique of wavefront encoding; however, the desired encoding effect is achieved by aberrations intrinsic to the lens system itself, without the need for a pupil phase mask.
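
    As a rough illustration of the decoding step described above, the sketch below applies Wiener deconvolution with a single, defocus-insensitive PSF to a recorded image. The PSF array, the noise-to-signal ratio, and the image source are assumptions for the example, not details of the published design.

        import numpy as np

        def wiener_deconvolve(image, psf, nsr=0.01):
            """Restore an image blurred by a (defocus-insensitive) PSF using a
            Wiener filter in the frequency domain.

            image : 2-D array, the recorded (encoded) image
            psf   : 2-D array, the assumed point spread function
            nsr   : assumed noise-to-signal power ratio (regularization term)
            """
            # Pad the PSF to the image size and center it at the origin.
            psf_padded = np.zeros_like(image, dtype=float)
            psf_padded[:psf.shape[0], :psf.shape[1]] = psf
            psf_padded = np.roll(psf_padded,
                                 (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                                 axis=(0, 1))

            H = np.fft.fft2(psf_padded)
            G = np.fft.fft2(image)
            # Wiener filter: conj(H) / (|H|^2 + NSR) applied to the image spectrum.
            F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
            return np.real(np.fft.ifft2(F_hat))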

  1. High-performance lossless and progressive image compression based on an improved integer lifting-scheme Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT. The lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, with a time efficiency improvement of 162%; the decoder is about 12.3 times faster than SPIHT's, with a time efficiency improvement of about 148%. Rather than requiring the largest number of wavelet transform levels, the algorithm achieves high coding efficiency when the number of wavelet transform levels is greater than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and supports progressive transmission for both encoding and decoding.
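
    For readers unfamiliar with the entropy coder referred to here, the sketch below shows plain Golomb-Rice encoding and decoding of a single non-negative integer with parameter k. It illustrates only the basic coder and omits the lifting-scheme and progressive-transmission refinements discussed in the abstract.

        def rice_encode(n, k):
            """Golomb-Rice codeword for a non-negative integer n (parameter k >= 1):
            a unary-coded quotient, a 0 terminator, then the k-bit remainder."""
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + format(r, "0{}b".format(k))

        def rice_decode(bits, k):
            """Decode a single Golomb-Rice codeword produced by rice_encode."""
            q = 0
            while bits[q] == "1":
                q += 1
            r = int(bits[q + 1:q + 1 + k], 2)
            return (q << k) | r

        # Example: n = 19, k = 3 -> quotient 2, remainder 3 -> "110" + "011".
        assert rice_encode(19, 3) == "110011"
        assert rice_decode("110011", 3) == 19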

  2. Aerosol optical properties derived from the DRAGON-NE Asia campaign, and implications for a single-channel algorithm to retrieve aerosol optical depth in spring from Meteorological Imager (MI) on-board the Communication, Ocean, and Meteorological Satellite (COMS)

    NASA Astrophysics Data System (ADS)

    Kim, M.; Kim, J.; Jeong, U.; Kim, W.; Hong, H.; Holben, B.; Eck, T. F.; Lim, J. H.; Song, C. K.; Lee, S.; Chung, C.-Y.

    2016-02-01

    An aerosol model optimized for northeast Asia is updated with the inversion data from the Distributed Regional Aerosol Gridded Observation Networks (DRAGON)-northeast (NE) Asia campaign, which was conducted during spring, from March to May 2012. This updated aerosol model was then applied to a single visible channel algorithm to retrieve aerosol optical depth (AOD) from the Meteorological Imager (MI) on board the geostationary Communication, Ocean, and Meteorological Satellite (COMS). The model plays an important role in retrieving accurate AOD from a single visible channel measurement. For the single-channel retrieval, sensitivity tests showed that perturbations of 4 % (0.926 ± 0.04) in the assumed single scattering albedo (SSA) can result in AOD retrieval errors of over 20 %. Since the measured reflectance at the top of the atmosphere depends on both AOD and SSA, an overestimation of the assumed SSA in the aerosol model leads to an underestimation of AOD. Based on the AErosol RObotic NETwork (AERONET) inversion data sets obtained over East Asia before 2011, seasonally analyzed aerosol optical properties (AOPs) were categorized by an SSA at 675 nm of 0.92 ± 0.035 for spring (March, April, and May). After the DRAGON-NE Asia campaign in 2012, the spring SSA showed a slight increase to 0.93 ± 0.035. In terms of the volume size distribution, the mode radius of coarse particles increased from 2.08 ± 0.40 to 2.14 ± 0.40. While the original aerosol model consists of volume size distributions and refractive indices obtained before 2011, the new model is constructed using the total data set available after the DRAGON-NE Asia campaign. The large volume of high-spatial-resolution data from this intensive campaign can be used to improve the representative aerosol model for East Asia. Accordingly, the new AOD data sets retrieved from the single-channel algorithm, which uses a precalculated look-up table (LUT) built with the new aerosol model, show an
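
    The single-channel retrieval described above amounts to inverting a precomputed relation between top-of-atmosphere reflectance and AOD for the assumed aerosol model. The sketch below interpolates such a look-up table; the node values are invented for illustration and do not come from the MI/COMS LUT.

        import numpy as np

        # Hypothetical LUT nodes for one geometry and one aerosol model:
        # simulated TOA reflectance as a monotonic function of AOD.
        lut_aod         = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 3.0])
        lut_reflectance = np.array([0.08, 0.11, 0.15, 0.21, 0.30, 0.36])

        def retrieve_aod(measured_reflectance):
            """Invert the (assumed monotonic) reflectance-vs-AOD LUT by linear
            interpolation; values outside the table are clipped to its ends."""
            return float(np.interp(measured_reflectance, lut_reflectance, lut_aod))

        # Example: a measured TOA reflectance of 0.18 falls between the 0.5 and
        # 1.0 AOD nodes and interpolates to an AOD of 0.75.
        print(retrieve_aod(0.18))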

  3. Method and apparatus to measure the depth of skin burns

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.

    2002-01-01

    A new device measures the depth of surface tissue burns based on the rate at which the skin temperature responds to a sudden differential temperature stimulus. The technique can be performed without physical contact with the burned tissue. In one implementation, time-dependent surface temperature data are taken from subsequent frames of a video signal from an infrared-sensitive video camera. When a thermal transient is created, e.g., by turning off a heat lamp directed at the skin surface, the ensuing time-dependent surface temperature data can be used to determine the skin burn depth. Imaging and non-imaging versions of the device can be implemented, enabling laboratory-quality skin burn depth imagers for hospitals as well as hand-held skin burn depth sensors the size of a small pocket flashlight for field use and triage.
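
    One way to turn the recorded temperature response into a depth estimate, under the simplifying assumption (not specified in the patent) that the burn depth maps to the thermal decay time constant after the stimulus is removed, is to fit an exponential decay to each pixel's temperature trace and convert the fitted time constant to depth via a calibration curve.

        import numpy as np

        def cooling_time_constant(temps, t, t_ambient):
            """Fit T(t) = T_ambient + A * exp(-t / tau) to one pixel's temperature
            trace by linearizing: log(T - T_ambient) = log(A) - t / tau.

            temps     : temperatures for successive video frames (above ambient)
            t         : frame timestamps in seconds
            t_ambient : assumed ambient/baseline temperature
            """
            y = np.log(np.asarray(temps, dtype=float) - t_ambient)
            slope, _ = np.polyfit(np.asarray(t, dtype=float), y, 1)
            return -1.0 / slope   # tau in seconds; larger tau -> slower response

        # A calibration curve (tau -> burn depth) would then be built from burns
        # of known depth; that mapping is an assumption, not part of this sketch.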

  4. Contribution of motion parallax to segmentation and depth perception.

    PubMed

    Yoonessi, Ahmad; Baker, Curtis L

    2011-08-24

    Relative image motion resulting from active movement of the observer could potentially serve as a powerful perceptual cue, both for segmentation of object boundaries and for depth perception. To examine the perceptual role of motion parallax from shearing motion, we measured human performance in three psychophysical tasks: segmentation, depth ordering, and depth magnitude estimation. Stimuli consisted of random dot textures that were synchronized to head movement with sine- or square-wave modulation patterns. Segmentation was assessed with a 2AFC orientation judgment of a motion-defined boundary. In the depth-ordering task, observers reported which modulation half-cycle appeared in front of the other. Perceived depth magnitude was matched to that of a 3D rendered image with multiple static cues. The results indicate that head movement might not be important for segmentation, even though it is crucial for obtaining depth from motion parallax--thus, concomitant depth perception does not appear to facilitate segmentation. Our findings suggest that segmentation works best for abrupt, sharply defined motion boundaries, whereas smooth gradients are more powerful for obtaining depth from motion parallax. Thus, motion parallax may contribute in a different manner to segmentation and to depth perception and suggests that their underlying mechanisms might be distinct.

  5. The neural mechanism of binocular depth discrimination

    PubMed Central

    Barlow, H. B.; Blakemore, C.; Pettigrew, J. D.

    1967-01-01

    1. Binocularly driven units were investigated in the cat's primary visual cortex. 2. It was found that a stimulus located correctly in the visual fields of both eyes was more effective in driving the units than a monocular stimulus, and much more effective than a binocular stimulus which was correctly positioned in only one eye: the response to the correctly located image in one eye is vetoed if the image is incorrectly located in the other eye. 3. The vertical and horizontal disparities of the paired retinal images that yielded the maximum response were measured in 87 units from seven cats: the range of horizontal disparities was 6·6°, of vertical disparities 2·2°. 4. With fixed convergence, different units will be optimally excited by objects lying at different distances. This may be the basic mechanism underlying depth discrimination in the cat. PMID:6065881

  6. Image

    SciTech Connect

    Marsh, Amber; Harsch, Tim; Pitt, Julie; Firpo, Mike; Lekin, April; Pardes, Elizabeth

    2007-08-31

    The computer side of the IMAGE project consists of a collection of Perl scripts that perform a variety of tasks: scripts are available to insert, update, and delete data in the underlying Oracle database, download data from NCBI's GenBank and other sources, and generate data files for download by interested parties. Web scripts make up the tracking interface and the various tools available on the project web site (image.llnl.gov) that provide a search interface to the database.

  7. Effect of Head Position on Facial Soft Tissue Depth Measurements Obtained Using Computed Tomography.

    PubMed

    Caple, Jodi M; Stephan, Carl N; Gregory, Laura S; MacGregor, Donna M

    2016-01-01

    Facial soft tissue depth (FSTD) studies employing clinical computed tomography (CT) data frequently rely on depth measurements from raw 2D orthoslices. However, the position of each patient's head was not standardized in this method, potentially decreasing measurement reliability and accuracy. This study measured FSTDs along the original orthoslice plane and compared these measurements to those standardized by the Frankfurt horizontal (FH). Subadult cranial CT scans (n = 115) were used to measure FSTDs at 18 landmarks. Significant differences were observed between the methods at eight of these landmarks (p < 0.05), demonstrating that high-quality data are not generated simply by employing modern imaging modalities such as CT. Proper technique is crucial to useful results, and maintaining control over head position during FSTD data collection is important. This is easily and most readily achieved in CT techniques by rotating the head to the FH plane after constructing a 3D rendering of the data.
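
    A minimal sketch of the standardization step described above: given 3-D landmark coordinates, one can build a rotation that brings the Frankfurt horizontal (defined here, as an assumption for the example, by the two porion points and left orbitale) into the horizontal plane and apply it to the scan before measuring depths.

        import numpy as np

        def frankfurt_rotation(porion_l, porion_r, orbitale_l):
            """Return a 3x3 rotation matrix mapping the normal of the plane
            through the three landmarks onto the +z axis, so the Frankfurt
            horizontal becomes a horizontal (z = const) plane."""
            p_l, p_r, orb = (np.asarray(v, dtype=float)
                             for v in (porion_l, porion_r, orbitale_l))
            normal = np.cross(p_r - p_l, orb - p_l)
            normal /= np.linalg.norm(normal)
            z = np.array([0.0, 0.0, 1.0])
            v = np.cross(normal, z)
            c = float(np.dot(normal, z))
            if np.allclose(v, 0):                  # already aligned (or opposite)
                return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
            vx = np.array([[0, -v[2], v[1]],
                           [v[2], 0, -v[0]],
                           [-v[1], v[0], 0]])
            # Rodrigues-style formula rotating `normal` onto the z axis.
            return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

        # Usage sketch: points_fh = (frankfurt_rotation(p_l, p_r, orb) @ points.T).T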

  8. Stereo depth distortions in teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Vonsydow, Marika

    1988-01-01

    In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2 cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gave high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions, but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).
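
    The resolution side of the trade-off reported above follows from standard stereo geometry. As a rough, idealized relation (parallel-camera approximation, not the paper's converged-camera analysis):

        % Z = camera-to-object distance, b = intercamera distance,
        % f = focal length, \delta d = disparity measurement error.
        \delta Z \approx \frac{Z^{2}}{f\, b}\, \delta d

    A larger intercamera distance b therefore yields a smaller depth error for a given disparity error, consistent with the higher resolution reported above, while the converged geometry additionally introduces the distortions the paper quantifies.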

  9. Perception of relative depth interval: systematic biases in perceived depth.

    PubMed

    Harris, Julie M; Chopin, Adrien; Zeiner, Katharina; Hibbard, Paul B

    2012-01-01

    Given an estimate of the binocular disparity between a pair of points and an estimate of the viewing distance, or knowledge of eye position, it should be possible to obtain an estimate of their depth separation. Here we show that, when points are arranged in different vertical geometric configurations across two intervals, many observers find this task difficult. Those who can do the task tend to perceive the depth interval in one configuration as very different from depth in the other configuration. We explore two plausible explanations for this effect. The first is the tilt of the empirical vertical horopter: Points perceived along an apparently vertical line correspond to a physical line of points tilted backwards in space. Second, the eyes can rotate in response to a particular stimulus. Without compensation for this rotation, biases in depth perception would result. We measured cyclovergence indirectly, using a standard psychophysical task, while observers viewed our depth configuration. Biases predicted from error due either to cyclovergence or to the tilted vertical horopter were not consistent with the depth configuration results. Our data suggest that, even for the simplest scenes, we do not have ready access to metric depth from binocular disparity.
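
    The opening claim can be made concrete with the usual small-angle relation between relative disparity and depth separation; the symbols below are standard conventions, not notation from the paper:

        % D = viewing distance, I = interocular separation,
        % \eta = relative binocular disparity (radians),
        % \Delta z = depth separation between the two points.
        \Delta z \approx \frac{D^{2}}{I}\, \eta
        \qquad (\text{valid for } \Delta z \ll D)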

  10. All-optical depth coloring based on directional gating.

    PubMed

    Lim, Sungjin; Kim, Mugeon; Hahn, Joonku

    2016-09-19

    Non-contact depth extraction faces several issues, such as accuracy and measurement speed. With regard to measurement speed, the computational cost of image processing is significant. We present an all-optical depth extraction method that colors objects according to their depth. Our system operates fully optically, with both the encoding and decoding processes performed optically. All-optical depth coloring therefore has the distinct advantage of extracting depth information in real time without any computational cost. We introduce a directional gating method to extract the points of an object that are positioned at the same distance. With this method, objects appear painted in different colors according to their distance when observed through our system. In this paper, we demonstrate the all-optical depth coloring system and verify the feasibility of our method. PMID:27661875

  11. High-resolution spectrometer: solution to the axial resolution and ranging depth trade-off of SD-OCT

    NASA Astrophysics Data System (ADS)

    Marvdashti, Tahereh; Lee, Hee Yoon; Ellerbee, Audrey K.

    2013-03-01

    We demonstrate a cross-dispersed spectrometer for Spectral Domain Optical Coherence Tomography (SD-OCT). The resolution of a conventional SD-OCT spectrometer is limited by the available sizes of the linear array detectors. The adverse consequence of this finite resolution is a trade-off between achieving a practical field of view (i.e., ranging depth) and maintaining high axial resolution. Inspired by spectrometer designs for astronomy, we take advantage of very high pixel-density 2D CCD arrays to map a single-shot 2D spectrum to an OCT A-scan. The basic system can be implemented using a high-resolution Echelle grating crossed with a prism in a direction orthogonal to the dispersion axis. In this geometry, the interferometric light returning from the OCT system is dispersed in two dimensions; the resulting spectrum spans more pixels than a traditional OCT spectrometer (which increases the ranging depth) and maintains impressive axial resolution because of the broad bandwidth of the detected OCT light. To the best of our knowledge, we present the first demonstration of OCT data using an Echelle-based cross-dispersed spectrometer. Potential applications for such a system include high-resolution imaging of the retina or the anterior segment of the eye over extended imaging depths and small animal imaging.
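
    The trade-off that the cross-dispersed spectrometer addresses can be summarized with the standard SD-OCT relations; these are textbook expressions, not equations quoted from the paper:

        % \lambda_0 = center wavelength, \delta\lambda = spectral sampling
        % interval per detector pixel, \Delta\lambda = detected bandwidth (FWHM).
        z_{\max} = \frac{\lambda_{0}^{2}}{4\,\delta\lambda},
        \qquad
        \Delta z_{\mathrm{axial}} = \frac{2\ln 2}{\pi}\,\frac{\lambda_{0}^{2}}{\Delta\lambda}

    With a fixed number of detector pixels, the sampling interval scales with the detected bandwidth, so extending the ranging depth by sampling more finely normally sacrifices bandwidth and hence axial resolution; increasing the total pixel count, as in the cross-dispersed design, relaxes this trade-off.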

  12. Single grating x-ray imaging for dynamic biological systems

    NASA Astrophysics Data System (ADS)

    Morgan, Kaye S.; Paganin, David M.; Parsons, David W.; Donnelley, Martin; Yagi, Naoto; Uesugi, Kentaro; Suzuki, Yoshio; Takeuchi, Akihisa; Siu, Karen K. W.

    2012-07-01

    Biomedical studies are already benefiting from the excellent contrast offered by phase contrast x-ray imaging, but live imaging work presents several challenges. Living samples make it particularly difficult to achieve high resolution, sensitive phase contrast images, as exposures must be short and cannot be repeated. We therefore present a single-exposure, high-flux method of differential phase contrast imaging [1, 2, 3] in the context of imaging live airways for Cystic Fibrosis (CF) treatment assessment [4]. The CF study seeks to non-invasively observe the liquid lining the airways, which should increase in depth in response to effective treatments. Both high spatial resolution and sensitivity are required in order to track micron size changes in a liquid that is not easily differentiated from the tissue on which it lies. Our imaging method achieves these goals by using a single attenuation grating or grid as a reference pattern, and analyzing how the sample deforms the pattern to quantitatively retrieve the phase depth of the sample. The deformations are mapped at each pixel in the image using local cross-correlations comparing each 'sample and pattern' image with a reference 'pattern only' image taken before the sample is introduced. This produces a differential phase image, which may be integrated to give the sample phase depth.
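
    As a rough sketch of the analysis step described above, one can estimate the local shift of the reference pattern in each small window by cross-correlation. The window size, pixel scale, and array names below are assumptions for illustration, and the conversion from shift to differential phase (and the final integration) depends on the experimental geometry.

        import numpy as np

        def local_shifts(sample_img, reference_img, win=16):
            """Estimate the (dy, dx) displacement of the reference pattern in each
            win x win window of the sample image via FFT cross-correlation.
            Returns two arrays of per-window shifts in pixels."""
            ny, nx = sample_img.shape[0] // win, sample_img.shape[1] // win
            dy = np.zeros((ny, nx))
            dx = np.zeros((ny, nx))
            for i in range(ny):
                for j in range(nx):
                    s = sample_img[i*win:(i+1)*win, j*win:(j+1)*win]
                    r = reference_img[i*win:(i+1)*win, j*win:(j+1)*win]
                    s = s - s.mean()
                    r = r - r.mean()
                    # Circular cross-correlation via the Fourier domain.
                    xc = np.fft.ifft2(np.fft.fft2(s) * np.conj(np.fft.fft2(r))).real
                    peak = np.unravel_index(np.argmax(xc), xc.shape)
                    # Wrap shifts larger than half the window to negative values.
                    dy[i, j] = peak[0] if peak[0] <= win // 2 else peak[0] - win
                    dx[i, j] = peak[1] if peak[1] <= win // 2 else peak[1] - win
            return dy, dx

        # The recovered shifts are proportional to the transverse gradient of the
        # phase; integrating them along the displacement direction yields the
        # sample's phase depth, as described in the abstract.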

  13. Image-guided high-dose-rate brachytherapy: preliminary outcomes and toxicity of a joint interventional radiology and radiation oncology technique for achieving local control in challenging cases

    PubMed Central

    Kishan, Amar U.; Lee, Edward W.; McWilliams, Justin; Lu, David; Genshaft, Scott; Motamedi, Kambiz; Demanes, D. Jeffrey; Park, Sang June; Hagio, Mary Ann; Wang, Pin-Chieh

    2015-01-01