Note: This page contains sample records for the topic 3-d retinal imaging from Science.gov.
While these samples are representative of the content of Science.gov,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results. Last update: November 12, 2013.
Blood vessels on the retina are generally used for medical image registration. Three-dimensional (3D) OCT is a new technique capable of providing the detailed 3D structure of the retina. Most algorithms for 3D OCT vessel segmentation need to use the result of retinal layer segmentation to enhance the vessel pattern. The proposed 3D boosting learning algorithm is an independent pixel (A-scan projection
Juan Xu; D. A. Tolliver; Hiroshi Ishikawa; Gadi Wollstein; Joel S. Schuman
Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)
We demonstrate in vivo velocity-resolved, volumetric bidirectional blood flow imaging in human retina using single-pass flow imaging spectral domain optical coherence tomography (SPFI-SDOCT). This technique uses previously described methods for separating moving and non-moving scatterers within a depth by using a modified Hilbert transform. Additionally, a moving spatial frequency window is applied, creating a stack of depth-resolved images of moving scatterers, each representing a finite velocity range. The resulting velocity reconstruction is validated against, and strongly correlated with, velocities measured with conventional Doppler OCT in flow phantoms. In vivo velocity-resolved flow mapping is acquired in healthy human retina and demonstrates the measurement of vessel size, peak velocity, and total foveal blood flow with OCT.
Tao, Yuankai K.; Kennedy, Kristen M.; Izatt, Joseph A.
Automated Explosive Detection Systems utilizing Computed Tomography perform a series of X-ray scans of passenger bags being checked in at the airport, and produce various 2-D projection images and 3-D volumetric images of the bag. The determination as to whether a passenger bag contains an explosive and needs to be searched manually is made by trained Transportation Security Administration screeners following an approved protocol. In order to keep the screeners vigilant with regard to screening quality, the Transportation Security Administration has mandated the use of Threat Image Projection on the 2-D projection X-ray screening equipment used at all US airports. These algorithms insert artificial visual threats into images of normal passenger bags in order to test the screeners' efficiency and quality at detecting threats. This technology for 2-D X-ray systems is proven and widespread among multiple manufacturers of X-ray projection systems. Until now, Threat Image Projection has not been successfully introduced into 3-D Automated Explosive Detection Systems, for numerous reasons. The failure of these prior attempts is mainly due to imaging cues that the screeners pick up on, which make it easy for them to discern the presence of the threat image, defeating the intended purpose. This paper presents a novel approach to 3-D Threat Image Projection for 3-D Automated Explosive Detection Systems. The method presented here is a projection-based approach in which both the threat object and the bag remain in projection (sinogram) space. Novel approaches have been developed for projection-based object segmentation, projection-based streak reduction for threat object isolation with scan-orientation independence, and projection-based streak generation for an overall realistic 3-D image.
The algorithms are prototyped in MATLAB and C++ and demonstrate non-discernible 3-D threat image insertion into various luggage, and non-discernible streak patterns for 3-D images when compared to actual scanned images.
Yildiz, Yesna O.; Abraham, Douglas Q.; Agaian, Sos; Panetta, Karen
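The claim that both the threat object and the bag can remain in sinogram space rests on the linearity of the forward projection: the sinogram of bag-plus-threat equals the sum of the two sinograms, so a threat can be inserted without leaving projection space. A minimal sketch of that property, using a crude parallel-beam projector built from `scipy.ndimage.rotate` (the function names and phantoms are illustrative, not the paper's implementation):

```python
import numpy as np
from scipy import ndimage

def project(img, angles_deg):
    """Crude parallel-beam forward projection: rotate the image, then
    integrate (sum) along one axis to get one sinogram row per angle."""
    return np.stack([
        ndimage.rotate(img, a, reshape=False, order=1).sum(axis=0)
        for a in angles_deg
    ])

# synthetic "bag" and "threat" density maps
rng = np.random.default_rng(0)
bag = rng.random((64, 64))
threat = np.zeros((64, 64))
threat[28:36, 28:36] = 2.0

angles = [0.0, 30.0, 60.0, 90.0]
# inserting the threat in sinogram space gives the same sinogram as
# scanning bag + threat, because the projection operator is linear
sino_inserted = project(bag, angles) + project(threat, angles)
sino_direct = project(bag + threat, angles)
```

Because rotation with spline interpolation is itself linear in the pixel values, the two sinograms agree to floating-point precision.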
As three-dimensional (3D) techniques continue to evolve from their humble beginnings in nineteenth-century stereo photographs and twentieth-century movies and holograms, the urgency for advancement in 3D display is escalating, as the need for widespread application in medical imaging, baggage scanning, gaming, television and movie display, and military strategizing increases. The most recent 3D developments center upon volumetric displays, which generate 3D images within actual 3D space. More specifically, the CSpace volumetric display generates a truly natural 3D image consisting of perceived width, height, and depth within the confines of physical space. Wireframe graphics give viewers a 360-degree display without the use of additional visual aids. In this paper, research detailing the selection and testing of several rare-earth, single-doped fluoride crystals, namely 1% Er:NYF4, 2% Er:NYF4, 3% Er:NYF4, 2% Er:KY3F10, and 2% Er:YLF, is introduced. These materials are the basis for CSpace display in a two-step, two-frequency up-conversion process. Significant determinants were tested and identified to aid in the selection of a suitable medium. Results show that 2% Er:NYF4 demonstrates good optical emitted power. Its superior level of brightness makes it the most suitable candidate for CSpace display. Testing also showed that the 2% Er:KY3F10 crystal might be a viable medium.
The application of stylization filters to photographs is common; Instagram is a popular recent example. These image manipulation applications work well for 2D images. However, stereoscopic 3D cameras are increasingly available to consumers (Nintendo 3DS, Fuji W3 3D, HTC Evo 3D). How will users apply these same stylizations to stereoscopic images?
This article gives an overview of methods for transitioning from a set of images to a 3D model. A direct method of creating a 3D model using 3D software is described. Creating photorealistic 3D models from a set of photographs is a challenging problem in computer vision, because the technology is still in its development stage while the demands for 3D
One of the main functions of vision is to estimate the 3D shape of objects in our environment. Many different visual cues, such as stereopsis, motion parallax, and shading, are thought to be involved. One important cue that remains poorly understood comes from surface texture markings. When a textured surface is slanted in 3D relative to the observer, the surface patterns appear compressed in the retinal image, providing potentially important information about 3D shape. What is not known, however, is how the brain actually measures this information from the retinal image. Here, we explain how the key information could be extracted by populations of cells tuned to different orientations and spatial frequencies, like those found in the primary visual cortex. To test this theory, we created stimuli that selectively stimulate such cell populations, by "smearing" (filtering) images of 2D random noise into specific oriented patterns. We find that the resulting patterns appear vividly 3D, and that increasing the strength of the orientation signals progressively increases the sense of 3D shape, even though the filtering we apply is physically inconsistent with what would occur with a real object. This finding suggests we have isolated key mechanisms used by the brain to estimate shape from texture. Crucially, we also find that adapting the visual system's orientation detectors to orthogonal patterns causes unoriented random noise to look like a specific 3D shape. Together these findings demonstrate a crucial role of orientation detectors in the perception of 3D shape. PMID:22147916
Fleming, Roland W; Holtmann-Rice, Daniel; Bülthoff, Heinrich H
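The "smearing" manipulation described above can be approximated by filtering 2D white noise with an anisotropic Gaussian in the frequency domain, so that spectral energy is compressed along one orientation. A hypothetical sketch (the parameter names and the specific Gaussian form are assumptions, not the authors' stimulus code):

```python
import numpy as np

def oriented_smear(noise, theta, sigma_long=8.0, sigma_short=1.0):
    """Filter 2-D noise so its spectrum is compressed along direction
    theta (radians), mimicking the foreshortening of surface texture."""
    h, w = noise.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # frequency coordinates rotated by theta
    u = np.cos(theta) * fx + np.sin(theta) * fy
    v = -np.sin(theta) * fx + np.cos(theta) * fy
    # anisotropic Gaussian: strong attenuation along u, weak along v
    G = np.exp(-2 * np.pi**2 * ((sigma_long * u)**2 + (sigma_short * v)**2))
    return np.real(np.fft.ifft2(np.fft.fft2(noise) * G))

rng = np.random.default_rng(1)
noise = rng.standard_normal((128, 128))
smeared = oriented_smear(noise, theta=0.0)  # smeared along the x axis
```

With `theta = 0` the result varies slowly along x but retains structure along y, so its orientation energy is concentrated as the abstract describes.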
Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, has, to the best of our knowledge, not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilizes a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right-eye scans and 15 left-eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 ± 2.5 voxels (0.10 ± 0.07 mm).
Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan
This paper presents a novel concept of Automatic Target Recognition (ATR) for 3D medical imaging. Such 3D imaging can be obtained from X-ray Computerized Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Ultrasonography (USG), functional MRI, and others. In the case of CT, such 3D imaging can be derived from 3D mapping of X-ray linear attenuation coefficients, related to the 3D Fourier transform of the Radon transform, starting from frame segmentation (or contour definition) into an object and background. Then, 3D template matching is performed, based on inertial tensor invariants adopted from rigid-body mechanics, by comparing the mammographic database with a real object of interest, such as a malignant breast tumor. The method is more general than CAD breast mammography.
Jannson, Tomasz; Kostrzewski, Andrew; Paki Amouzou, P.
Reliable 3D whole-body scanners which output digitized 3D images of a complete human body are now commercially available. This paper describes a software package, called 3DM, being developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool which allows a user to define
Roy P. Pargas; Nancy J. Staples; Brian F. Malloy; Ken Cantrell; Murtuza Chhatriwala
Without doubt, the greatest challenge of multidetector-row CT is dealing with ‘data explosion’. For our carotid/intracranial CT angiograms, we routinely have 375 images to review (300 mm coverage reconstructed every 0.8 mm); for aortic studies we have 450–500 images (~600 mm coverage reconstructed every 1.3 mm); and for a study of the lower extremity inflow and run-off, we may generate
A neural-net robotic optical circuit that generates a 3D visual sensation and is based on Wheatstone's stereoscope may be obtained by reverse engineering the neurophysiology (central connections) associated with the modalities of the retinal receptors. The optical circuit, called the Neuronal Correlate of a Modality (NCM) circuit, may be used to solve the inverse optics problem for visual
A beam of light reflected from a mirror attached to a contact lens, worn by the subject, produces a test object whose image does not move across the retina in response to eye movements. This is called the stabilised retinal image. In this paper, conditions for accurate stabilisation are discussed. It is shown that a small defect of stabilisation is important
Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this new technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.
There are many applications, such as rapid prototyping, simulations and presentations, where non-professional computer end-users could benefit from the ability to create simple 3D models. Existing tools are geared towards the creation of production quality 3D models by professional users with sufficient background, time and motivation to overcome steep learning curves. Inflatable Icons combine diffusion-based image extrusion with a number
Imaging multichannel seismic data for amplitude inversion is a challenging task. The process seeks an inverse for a matrix of very high order that relates the data to a reflectivity model. Due to the irregular coverage of 3D surveys, the matrix is ill-conditioned and its coefficients are badly scaled. In this dissertation, I present a new approach for imaging irregularly sampled 3D data. The strategy is to reduce the size of the full matrix by reducing the size of the 3D prestack data before imaging, and to balance the coefficients of the matrix by regularizing the coverage of 3D surveys. I tackle the case of Kirchhoff imaging operators because of their I/O flexibility and computational efficiency. However, after regularization, full-wave extrapolation techniques may become attractive and practical to implement on the regularly sampled prestack volume. For adequately sampled 3D data with varying surface coverage, I use an asymptotic approximate inverse to obtain a good image. I apply a new partial prestack operator named azimuth moveout (AMO) to reduce the size of the prestack data and regularize its coverage by partial stacking. The effects of irregular coverage and varying illumination at depth are reduced by applying a diagonal transformation to the Kirchhoff operator. Problems arise in 3D reflection seismology where fine sampling is not possible and the sparse geometry of 3D surveys results in spatial aliasing. I develop a new dealiasing technique which I refer to as inversion to common offset (ICO). Posing partial stacking as an optimization process, the inversion improves the stack when the data are spatially aliased. I present two formulations for ICO, namely data-space and model-space inversion, and design an efficient implementation of the algorithm in the log-stretch Fourier domain. To accelerate the convergence of the iterative solution, I present a new technique for preconditioning the inversion based on row and column scaling.
Results from field marine and land surveys are presented to demonstrate the application of AMO and ICO for regularizing the coverage of 3D surveys and reducing the costs of 3D prestack imaging. The images obtained by prestack migration after regularization are superior to those obtained by migrating the irregularly sampled data. Furthermore, ICO provides a promising approach for reducing the costs of 3D acquisition.
We have developed experimental 3DTV systems based on the integral method to obtain full-color, full-parallax spatial imaging in real time. The resolution factors for reconstructed 3D images, namely, diffraction of an elemental lens, number of elemental lenses, number of pixels of an elemental image, and viewing zone, are described. It is clarified that a huge number of pixels is required for
In this paper, we study speckle reduction technology for 3-D ultrasound images, and a 3-D anisotropic diffusion (AD) filter is developed. The 3-D anisotropic diffusion filter works directly in the 3-D image domain and can overcome the limitations of the 2-D anisotropic diffusion filter and the traditional 3-D anisotropic diffusion filter. The proposed algorithm uses the normalized gradient to replace the gradient in the computation of the diffusion coefficients, which can reduce the speckle effectively while preserving the edges. Experiments have been performed on real 3-D ultrasound images, and the experimental results show the effectiveness of the proposed 3-D anisotropic diffusion filter.
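The key modification, replacing the raw gradient by a normalized gradient inside the diffusion coefficients, can be sketched as a Perona-Malik-style iteration over the six neighbors of each voxel. The normalization used here (dividing the neighbor difference by local intensity, which makes the edge-stopping function contrast-invariant, as is common for speckle) is one plausible reading, not necessarily the paper's exact definition:

```python
import numpy as np

def aniso_diffuse_3d(vol, n_iter=10, kappa=0.3, lam=1.0 / 6.0):
    """Perona-Malik-style 3-D anisotropic diffusion with a normalized
    gradient in the edge-stopping function (illustrative variant)."""
    v = vol.astype(float).copy()
    for _ in range(n_iter):
        update = np.zeros_like(v)
        for ax in (0, 1, 2):
            for shift in (1, -1):
                g = np.roll(v, shift, axis=ax) - v   # neighbor difference
                gn = g / (np.abs(v) + 1e-8)          # normalized gradient
                c = np.exp(-(gn / kappa) ** 2)       # diffusion coefficient
                update += c * g
        v += lam * update                            # lam <= 1/6 for stability
    return v

# speckle-like multiplicative noise on a constant-intensity phantom
rng = np.random.default_rng(2)
phantom = np.full((16, 16, 16), 100.0)
noisy = phantom * (1.0 + 0.1 * rng.standard_normal(phantom.shape))
smoothed = aniso_diffuse_3d(noisy)
```

On this toy phantom the filter suppresses the speckle-like fluctuations while leaving the mean intensity essentially unchanged; on real data `kappa` controls how strong an edge must be to block diffusion.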
Two approaches to the characterization of three-dimensional (3-D) textures are presented: one based on gradient vectors and one on generalized co-occurrence matrices. They are investigated with the help of simulated data for their behavior in the presence of noise and for various values of the parameters they depend on. They are also applied to several medical volume images characterized by
Vassili A. Kovalev; Maria Petrou; Yaroslav S. Bondar
We propose a framework to model, analyze and design three-dimensional (3-D) imaging systems. A system engineering approach is adopted which relates 3-D images (real or synthesized) to 3-D objects (real or synthesized) using a novel representation of the optical data which we call
This work studies retinal image registration in the context of the National Institutes of Health (NIH) Early Treatment Diabetic Retinopathy Study (ETDRS) standard. The ETDRS imaging protocol specifies seven fields of each retina and presents three major challenges for the image registration task. First, small overlaps between adjacent fields lead to inadequate landmark points for feature-based methods. Second, the non-uniform contrast/intensity distributions due to imperfect data acquisition deteriorate the performance of area-based techniques. Third, high-resolution images contain large homogeneous nonvascular/textureless regions that weaken the capabilities of both feature-based and area-based techniques. In this work, we propose a hybrid retinal image registration approach for ETDRS images that effectively combines area-based and feature-based methods. Four major steps are involved. First, the vascular tree is extracted by using an efficient local entropy-based thresholding technique. Next, the zeroth-order translation is estimated by maximizing mutual information based on the binary image pair (area-based). Then image quality assessment regarding the ETDRS field definition is performed based on the translation model. If the image pair is accepted, higher-order transformations are involved. Specifically, we use two types of features, landmark points and sampling points, for affine/quadratic model estimation. Three empirical conditions are derived experimentally to control the algorithm's progress, so that we can achieve the lowest registration error and the highest success rate. Simulation results on 504 pairs of ETDRS images show the effectiveness and robustness of the proposed algorithm. PMID:16445258
Chanwimaluang, Thitiporn; Fan, Guoliang; Fransen, Stephen R
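The zeroth-order step described above, maximizing mutual information over translations of the binary vessel-mask pair, can be sketched as an exhaustive integer-shift search. The function names, the 2x2 histogram formulation, and the search range are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mutual_information(a, b):
    """Mutual information (nats) between two binary masks via their
    2x2 joint histogram."""
    joint = np.histogram2d(a.ravel(), b.ravel(), bins=2)[0]
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def estimate_translation(fixed, moving, max_shift=5):
    """Exhaustive search for the integer shift of `moving` that
    maximizes MI with `fixed` (the zeroth-order, area-based step)."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best_mi, best = mi, (dy, dx)
    return best

rng = np.random.default_rng(3)
fixed = (rng.random((48, 48)) > 0.7).astype(float)   # synthetic vessel mask
moving = np.roll(fixed, (2, -3), axis=(0, 1))        # known misalignment
```

Shifting `moving` back by the negated misalignment makes the two masks identical, so MI peaks there; a real implementation would search a coarse-to-fine pyramid instead of a brute-force window.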
A fully automated, robust vessel segmentation algorithm has been developed for choroidal OCT, employing multiscale 3D edge filtering and projection of "probability cones" to determine the vessel "core", even in tomograms with a low signal-to-noise ratio (SNR). Based on the ideal vessel response after registration and multiscale filtering, with computed depth-related SNR, the vessel core estimate is dilated to quantify the full vessel diameter. As a consequence, various statistics can be computed using the 3D choroidal vessel information, such as ratios of inner (smaller) to outer (larger) choroidal vessels or the absolute/relative volume of choroidal vessels. Choroidal vessel quantification can be displayed in various forms, focused and averaged within a special region of interest, or analyzed as a function of image depth. In this way, the proposed algorithm enables unique visualization of choroidal watershed zones, as well as of the vessel size reduction when investigating the choroid from the sclera towards the retinal pigment epithelium (RPE). To the best of our knowledge, this is the first time that an automatic choroidal vessel segmentation algorithm has been successfully applied to 1060 nm 3D OCT of healthy and diseased eyes. PMID:23304653
Kajić, Vedran; Esmaeelpour, Marieh; Glittenberg, Carl; Kraus, Martin F.; Honegger, Joachim; Othara, Richu; Binder, Susanne; Fujimoto, James G.; Drexler, Wolfgang
In this paper, we present the system architecture of a 360-degree-view 3D imaging system. The system consists of multiple 3D sensors synchronized to take 3D images around the object. Each 3D camera employs a single high-resolution digital camera and a color-coded light projector. The cameras are synchronized to rapidly capture the 3D and color information of a static object or a live person. The color-encoded structured lighting ensures the precise reconstruction of the depth of the object. A 3D imaging system architecture is presented. The architecture employs the displacement between the camera and the projector to triangulate the depth information. The 3D camera system has achieved a high depth resolution of 0.1 mm on a human-head-sized object and 360-degree imaging capability.
Lu, Thomas; Yin, Stuart; Zhang, Jianzhong; Li, Jiangan; Wu, Frank
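The triangulation from the camera-projector displacement mentioned above reduces, for a rectified camera/projector pair, to the classic relation Z = f·B/d (depth equals focal length times baseline over disparity). A minimal sketch with illustrative variable names (the actual system calibrates a full projective model, not just this idealized formula):

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Idealized rectified triangulation: Z = focal_length * baseline / disparity.
    Larger disparity (shift of the projected pattern) means a closer surface."""
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d

# e.g. 1000 px focal length, 20 cm camera-projector baseline,
# 100 px observed pattern shift -> surface 2 m away
z = depth_from_disparity(100.0, baseline_m=0.2, focal_px=1000.0)  # -> 2.0
```

The same relation also explains the depth-resolution claim: since dZ/dd = -Z²/(f·B), sub-pixel disparity estimation at short range yields sub-millimeter depth precision.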
Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information to localize most of the boundaries and relies on regional image texture. Consequently, the proposed method demonstrates robustness in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and another for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form a graph node. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest. The other two sections are eliminated from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map assessment and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normal). The mean unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 μm and 7.56 ± 2.95 μm for the 2D and 3D methods, respectively. PMID:23837966
Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D; Sonka, Milan
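The diffusion-map machinery behind the two-step grouping above follows a standard recipe: build a graph whose edge weights decay with distance (here, a toy Gaussian affinity standing in for the paper's combined geometric/intensity weights), row-normalize to a Markov matrix, and use its leading non-trivial eigenvectors as clustering coordinates. A toy sketch on point data (parameters and the weight function are simplified assumptions):

```python
import numpy as np

def diffusion_map(X, sigma=1.0, n_coords=1):
    """Embed samples X (n x d) using the leading non-trivial eigenvectors
    of the row-normalized affinity (Markov) matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))                 # Gaussian affinities
    P = W / W.sum(axis=1, keepdims=True)                 # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # skip order[0]: the trivial constant eigenvector with eigenvalue 1
    idx = order[1:n_coords + 1]
    return vecs.real[:, idx] * vals.real[idx]

# two well-separated 1-D clusters: the first diffusion coordinate splits them
X = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
coord = diffusion_map(X, sigma=3.0)[:, 0]
```

In the paper the graph nodes are blocks of pixels/voxels rather than points, but the partitioning principle (thresholding or clustering the leading diffusion coordinates) is the same.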
We propose a multi-view image coding system in 3-D space based on an improved volumetric 3-D reconstruction. Unlike existing multi-view image coding schemes, in which the 3-D scene information is represented by a mesh model as well as the texture data, we use a 3-D voxel model to represent the 3-D scene information of the images to be encoded.
This paper gives an overview of work done in our group on 3D and appearance modeling of objects from images. The backbone of our approach is to use what we consider the principled optimization criterion for this problem: to maximize photoconsistency between input images and images rendered from the estimated surface geometry and appearance. In initial works, we derived a general solution for this, showing how to write the gradient for this cost function (a non-trivial undertaking). In subsequent works, we applied this solution to various scenarios: recovery of textured or uniform, Lambertian or non-Lambertian surfaces, under static or varying illumination and with static or varying viewpoint. Our approach can be applied to these different cases, which is possible since it naturally merges cues that are often considered separately: stereo information, shading, silhouettes. This merge naturally happens as a result of the cost function used: when rendering estimated geometry and appearance (given known lighting conditions), the resulting images automatically contain these cues, and their comparison with the input images thus implicitly uses these cues simultaneously.
Oblique section reconstruction can produce a 3-D image from electron micrographs of a sectioned crystal when the orientation of the section plane is not aligned with the principal planes of the unit cell. We describe here the reconstruction protocol and the specialized computer software for a Fourier space method that can extract 3-D information from 2-D projection images of oblique
Integral imaging is employed as part of a three-dimensional imaging system, allowing the display of full colour images with continuous parallax within a wide viewing zone. A significant quantity of data is required to represent a captured integral 3D image with high resolution. A lossy compression scheme has been developed based on the use of a 3D-DCT, which makes possible
The use of computed tomography for dental imaging procedures has increased recently. Use of CT for even seemingly routine diagnosis and treatment procedures suggests that the desire for 3-D imaging is more than a current trend but rather a shift toward a future of dimensional volume imaging. Recognizing this shift, several imaging manufacturers recently have developed 3-D imaging devices
Describes a non-contact optical sensing technology called C3D that is based on speckle texture projection photogrammetry. C3D has been applied to capturing all-round 3D models of the human body with high dimensional accuracy and photorealistic appearance. The essential strengths and limitations of the C3D approach are presented and the basic principles of this stereo-imaging approach are outlined, from image capture
We have proposed a glasses-free three-dimensional (3D) display for displaying 3D images on a large screen using multi-projectors and an optical screen consisting of a special diffuser film with a large condenser lens. To achieve high-presence communication with natural large-screen 3D images, we numerically analyze the factors responsible for degrading image quality as the image size increases. A major factor that determines the 3D image quality is the arrangement of component units, such as the projector array and condenser lens, as well as the diffuser film characteristics. We design and fabricate a prototype 200-inch glasses-free 3D display system on the basis of the numerical results. We select a suitable diffuser film, and we combine it with an optimally designed condenser lens. We use 57 high-definition projector units to obtain a viewing angle of 13.5°. The prototype system can display glasses-free 3D images of a life-size car using natural parallax images.
Kawakita, M.; Iwasawa, S.; Sakai, M.; Haino, Y.; Sato, M.; Inoue, N.
Many current 3D displays suffer from the fact that their spatial resolution is lower than that of their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye views leads to blurring or ghosting, and therefore to a decrease in perceived
The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D repository. While far from perfect, the presented results demonstrate that on-line repositories of 3D content can be used for effective 2D-to-3D image conversion. With the continuously increasing amount of 3D data on-line and with the rapidly growing computing power in the cloud, the proposed framework seems a promising alternative to operator-assisted 2D-to-3D conversion.
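The median-based disparity fusion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the integer-disparity backward warp, and the omission of occlusion handling are all assumptions.

```python
import numpy as np

def fuse_disparities(disparity_fields):
    """Pixel-wise median over disparity fields extracted from the
    retrieved stereopairs; the median is robust to outlier matches."""
    return np.median(np.stack(disparity_fields), axis=0)

def render_right_view(left, disparity):
    """Synthesize a right view by sampling the left image shifted by the
    (integer) disparity; newly-exposed areas are left as zeros here,
    whereas the paper fills occlusions and exposed areas properly."""
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xs = x + int(disparity[y, x])  # left pixel visible at right (y, x)
            if 0 <= xs < w:
                right[y, x] = left[y, xs]
    return right
```

In practice the warp would be done with sub-pixel interpolation, but the median fusion step is exactly as simple as shown.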
Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.
3D seismic imaging was carried out in the 3D seismic volume situated in the middle of the Banat region in Serbia. The 3D area is about 300 km². The aim of the 3D investigation was to define geological structures and tectonics, especially in the Mesozoic complex. The investigation objects are located at depths from 2000 to 3000 m. There are a number of wells in this area, but they are not deep enough to help in the interpretation, so it was necessary to obtain a better seismic image of the deeper area. Acquisition parameters were satisfactory (good quality of input parameters, record length of 5 s, fold of up to 4000%), and the preprocessed data were of adequate quality. GeoDepth is an integrated system for 3D velocity model building and for 3D seismic imaging. Input data for 3D seismic imaging consist of preprocessed data sorted into CMP gathers and RMS stacking velocity functions. Other types of input data are geological information derived from well data, time-migrated images and time-migrated maps. The workflow for this job was: loading and quality control of the input data (CMP gathers and velocities), creating the initial RMS velocity volume, PSTM, updating the RMS velocity volume, PSTM, building the initial interval velocity model, PSDM, updating the interval velocity model, PSDM. In the first stage the aim is to derive an initial velocity model that is as simple as possible; the higher-frequency velocity changes are obtained in the updating stage. The next step, after running PSTM, is time-to-depth conversion. After the model is built, we generate a 3D interval velocity volume and run 3D pre-stack depth migration. The main method for updating velocities is 3D tomography. The criteria used in velocity model determination are based on the flatness of pre-stack migrated gathers and the quality of the stacked image. The standard processing ended with poststack 3D time migration.
Prestack depth migration is one of the most powerful tools available to the interpreter for developing an accurate velocity model and obtaining a good seismic image. A comparison of time- and depth-migrated sections highlights the improvements in imaging quality. On the depth-migrated section, imaging and fault resolution are improved, and it is easier to derive a more complex and realistic geological model.
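The RMS-to-interval velocity step in the workflow above is classically done with the Dix equation. The sketch below is a simplified illustration of that one step only, not GeoDepth's actual model builder; the function name is an assumption.

```python
import math

def dix_interval_velocities(t, v_rms):
    """Dix (1955) conversion of RMS stacking velocities to interval
    velocities.  t: two-way travel times in s (increasing),
    v_rms: RMS velocities in m/s, one per time sample."""
    v_int = []
    for i in range(1, len(t)):
        # V_int^2 = (V_rms,i^2 * t_i - V_rms,i-1^2 * t_i-1) / (t_i - t_i-1)
        num = v_rms[i] ** 2 * t[i] - v_rms[i - 1] ** 2 * t[i - 1]
        v_int.append(math.sqrt(num / (t[i] - t[i - 1])))
    return v_int
```

Real model building smooths these interval velocities and then refines them with tomography, as the abstract describes.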
The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.
Three-dimensional (3-D) imaging techniques have the potential to establish a future mass-market in the fields of entertainment and communications. Integral imaging (InI), which can capture and display true 3-D color images, has been seen as the right technology for 3-D viewing for audiences of more than one person. Due to the advanced degree of its development, InI technology could be
RaÚl Martinez-Cuenca; Genaro Saavedra; Manuel Martinez-Corral; Bahram Javidi
This work describes a method for filling holes in a 3D mesh based on 2D image restoration algorithms. Since these algorithms need an image as input, the first stage of the method concerns a 3D to 2D transformation. By projecting the 3D surface onto a squared plane, a 2D image is generated in such a way that the depth
Santiago Salamanca; Pilar Merchan; Emiliano Perez; Antonio Adan; Carlos Cerrada
High-resolution microscopic imaging of biological samples often produces multiple 3D image tiles to cover a large field of view of the specimen. Usually each tile has a large size, in the range of hundreds of megabytes to several gigabytes. For many of our image data sets, existing software tools are often unable to stitch those 3D tiles into a panoramic view,
This research opens a new window in the field of image processing through 3D volume representation of tumors using Magnetic Resonance Imaging and an integrated software tool called 3D Slicer. The techniques used in this work are label map segmentation and surface model making, which are used to segment the brain tumor from 2D images
Ayesha Amir Siddiqi; Attaullah Khawaja; Mashal Tariq
This paper describes a method to automatically generate the mapping between a completely labeled reference image and the 3D medical image of a patient. To achieve this, we combined three techniques: the extraction of 3D feature lines, their matching using 3D deformable line models, and the extension of the deformation to the whole image space using warping techniques. We present experimental results for the segmentation
Jérôme Declerck; Gérard Subsol; Jean-philippe Thirion; Nicholas Ayache
We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.
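The intensity-based thresholding and volume measurement described above can be sketched as follows. This is a minimal illustration under assumed names and parameters; the fuzzy-connectedness machinery used for the fully-automated mode is not reproduced.

```python
import numpy as np

def segment_and_volume(volume, threshold, voxel_size_mm3=1.0):
    """Intensity-based thresholding: voxels with intensity above
    `threshold` are assigned to the organ; the physical volume is the
    voxel count times the per-voxel volume (assumed to be known from
    the SPECT acquisition geometry)."""
    mask = volume > threshold
    return mask, float(mask.sum()) * voxel_size_mm3
```

In the semi-automated workflow the threshold would be chosen interactively per study; the volume calculation itself stays the same.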
Conventional optical imaging systems perform both information sensing and image formation functions. The optical system is generally designed to implement processing for image formation with the goal of optimizing analog image quality measures. Digital imaging involves a fundamental paradigm shift in which the “image” is no longer synonymous with the focal plane field distribution. A digital system may be designed
Three-dimensional (3D) image processing provides a useful tool for machine vision applications. Typically a 3D vision system is divided into data acquisition, low-level processing, object representation and matching. In this paper, a 3D object pose estimation method is developed for an automated manufacturing assembly process. The experimental results show that the 3D pose estimation method produces accurate geometrical information for
Experiments were conducted to understand the basic characteristics of pointing to three-dimensional images generated by stereoscopic images. The experimental examination found that it was difficult to point to the correct position of the image and that the depth of the image was underestimated, so that the object perceived by the subject was distorted. Subjects who were familiar with viewing 3-D images made fewer errors in position and size perception than did subjects having little experience in viewing 3-D images. These results imply that the performance of 3-D pointing in stereoscopic 3-D images will improve with experience, although it is difficult to perform 3-D pointing precisely. PMID:8974886
There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has been active experimental research on reconstructing complex refractive index data using this approach recently. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From the Fourier optics and information transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography, by scanning the illumination in one direction only, takes on a form that we might call a 'peanut,' compared to the case of object rotation, where a diabolo is formed, the peanut exhibiting significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions, the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case.
This time, we obtain a similar peanut, but without the line singularity.
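The support of these transfer functions follows from the Fourier diffraction theorem. Under the first Born approximation (the notation here is assumed, not taken from the abstract), a measurement with illumination direction s₀ and scattering direction s samples object spatial frequencies on a cap of the Ewald sphere:

```latex
% First Born approximation; k = 2\pi/\lambda is the wavenumber,
% \mathbf{s}_0 the illumination direction, \mathbf{s} the scattering direction.
\mathbf{K} = k\,(\mathbf{s} - \mathbf{s}_0), \qquad |\mathbf{s}| = |\mathbf{s}_0| = 1 .
```

Sweeping s₀ (illumination rotation) or rotating the object sweeps this spherical cap through frequency space, which is what produces the differently shaped 'peanut' and diabolo supports compared in the abstract.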
Three-dimensional (3-D) chemical mapping using angle scan tomography in a soft X-ray scanning transmission X-ray microscope (STXM) has been applied to quantitative chemical mapping in three dimensions of calcium carbonate biomineralization by planktonic freshwater cyanobacteria of the strain Synechococcus leopoliensis PCC 7942. Aspects of making and mounting grid sections, acquiring multi-energy, multi-angle sequences, and data analysis are discussed.
In a general framework for exploiting the automation capabilities in modern SEM, two main branches have been investigated: web-based remote microscopy and 3D metrology by Photometric Stereo (PS). Focusing on this second topic, in a previous paper an Automatic Alignment Procedure for a 4-Source Photometric Stereo (PS) technique was presented for metrically reconstructing the third dimension
Using PhotoShop®, the downloaded images are converted to a series of model sections by color coding the section periphery and the corpus callosum of each image blue and red, respectively. The colored regions are extracted from the original image and printed at a 1:1 scale on paper. A digital camera is used to record an optical image from each of the
Anna C. Crecelius; D. Shannon Cornett; Betsy Williams; Bobby Bodenheimer; Benoit Dawant; Richard M. Caprioli
The modeling of heads from image sequences is of great interest in the context of virtual reality, telecommunication and computerized animation systems. In this contribution a passive 3-D scanning system is described that automatically computes a complete 3-D surface model of a human head and shoulder part from a stereoscopic image sequence of a person rotating freely in front of
The automatic segmentation and labelling of anatomical structures in 3D medical images is a challenging task of practical importance. We describe a model-based approach which allows robust and accurate interpretation using explicit anatomical knowledge. Our method is based on the extension to 3D of Point Distribution Models (PDMs) and associated image search algorithms. A combination of global, Genetic Algorithm
Spectral-domain optical coherence tomography (SD-OCT) provides volumetric images of retinal structures with unprecedented detail. Accurate segmentation algorithms and feature quantification in these images, however, are needed to realize the full potential of SD-OCT. The fully automated segmentation algorithm, FloatingCanvas, serves this purpose and performs a volumetric segmentation of retinal tissue layers in three-dimensional image volume acquired around the optic nerve head without requiring any pre-processing. The reconstructed layers are analyzed to extract features such as blood vessels and retinal nerve fibre layer thickness. Findings from images obtained with the RTVue-100 SD-OCT (Optovue, Fremont, CA, USA) indicate that FloatingCanvas is computationally efficient and is robust to the noise and low contrast in the images. The FloatingCanvas segmentation demonstrated good agreement with the human manual grading. The retinal nerve fibre layer thickness maps obtained with this method are clinically realistic and highly reproducible compared with time-domain StratusOCT(TM). PMID:21164806
Zhu, Haogang; Crabb, David P; Schlottmann, Patricio G; Ho, Tuan; Garway-Heath, David F
Spectral-domain optical coherence tomography (SD-OCT) is a 3-D imaging technique, allowing direct visualization of retinal morphology and architecture. The various layers of the retina may be affected differentially by various diseases. In this study, an automated graph-based multilayer approach was developed to sequentially segment eleven retinal surfaces including the inner retinal bands to the outer retinal bands in normal SD-OCT volume scans at three different stages. For stage 1, the four most detectable and/or distinct surfaces were identified in the four-times-downsampled images and were used as a priori positional information to limit the graph search for other surfaces at stage 2. Eleven surfaces were then detected in the two-times-downsampled images at stage 2, and refined in the original image space at stage 3 using the graph search integrating the estimated morphological shape models. Twenty macular SD-OCT (Heidelberg Spectralis) volume scans from 20 normal subjects (one eye per subject) were used in this study. The overall mean and absolute mean differences in border positions between the automated and manual segmentation for all 11 segmented surfaces were -0.20 +/- 0.53 voxels (-0.76 +/- 2.06 µm) and 0.82 +/- 0.64 voxels (3.19 +/- 2.46 µm). Intensity and thickness properties in the resultant retinal layers were investigated. This investigation in normal subjects may provide a comparative reference for subsequent investigations in eyes with disease.
Hu, Zhihong; Wu, Xiaodong; Hariri, Amirhossein; Sadda, SriniVas R.
We present a technique for mapping the complete 3D spatial intensity profile of a laser beam from its fluorescence in an atomic vapour. We propagate shaped light through a rubidium vapour cell and record the resonant scattering from the side. From a single measurement we obtain a camera-limited resolution of 200 × 200 transverse points and 659 longitudinal points. In contrast to invasive methods in which the camera is placed in the beam path, our method is capable of measuring patterns formed by counterpropagating laser beams. It has high resolution in all 3 dimensions, is fast and can be completely automated. The technique has applications in areas which require complex beam shapes, such as optical tweezers, atom trapping and pattern formation. PMID:24104113
The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition (HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs, respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line. Measurement of the modulation transfer function revealed a limiting resolution (at the 10% level) of 1.0 mm^-1 in the axial dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the field of view. The present study has evaluated the performance characteristics of the O-arm, and demonstrates feasibility for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer. Further improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.
Petrov, Ivailo E.; Nikolov, Hristo N.; Holdsworth, David W.; Drangova, Maria
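The "10% of MTF" limiting-resolution figure quoted above is typically read off a measured MTF curve by interpolation between the bracketing samples. Below is a minimal sketch of that readout; the function name and the sample data in the usage note are assumptions, not values from the study.

```python
def limiting_resolution(freqs, mtf, level=0.10):
    """Return the spatial frequency at which a (monotonically sampled)
    MTF curve first drops below `level`, using linear interpolation
    between the two bracketing samples; None if it never drops."""
    for i in range(1, len(freqs)):
        if mtf[i] < level <= mtf[i - 1]:
            frac = (mtf[i - 1] - level) / (mtf[i - 1] - mtf[i])
            return freqs[i - 1] + frac * (freqs[i] - freqs[i - 1])
    return None
```

For example, a curve sampled at 0, 0.5, 1.0, 1.5 mm^-1 with MTF values 1.0, 0.5, 0.12, 0.05 crosses the 10% level a little above 1.1 mm^-1.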
Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
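The matching stage above (strongest degree of match over all object orientations, computed at each location) can be sketched with normalized cross-correlation against a bank of pre-rotated templates. This is a hedged illustration: the function names, the NCC score, and the pre-rotated template bank are assumptions, not the paper's actual model-matching machinery.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches;
    1.0 for a perfect match, -1.0 for a perfect inverse."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def degree_of_match(patch, rotated_templates):
    """Strongest degree of match over all orientations: score the patch
    against each pre-rotated rendering of the 3D object model and keep
    the maximum, as in the matching stage described above."""
    return max(ncc(patch, t) for t in rotated_templates)
```

The cueing stage would then rank thumbnails by a figure-of-merit derived from the unambiguous local maxima of this score over the image.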
We present novel methodologies for compounding large numbers of 3D echocardiography volumes. Our aim is to investigate the effect of using an increased number of images, and to compare the performance of different compounding methods on image quality. Three sets of 3D echocardiography images were acquired from three volunteers. Each set of data (containing 10+ images) was registered using external tracking followed by state-of-the-art image registration. Four compounding methods were investigated: mean, maximum, and two methods derived from phase-based compounding. The compounded images were compared by calculating signal-to-noise ratios and contrast at manually identified anatomical positions within the images, and by visual inspection by experienced echocardiographers. Our results indicate that signal-to-noise ratio and contrast can be improved using an increased number of images, and that a coherent compounded image can be produced using large (10+) numbers of 3D volumes.
Yao, Cheng; Simpson, John M.; Jansen, Christian H. P.; King, Andrew P.; Penney, Graeme P.
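The mean and maximum compounding rules and the SNR measure referred to above can be sketched as follows. This is a minimal illustration of the two simple rules only; the phase-based methods are not reproduced, and the function names are assumptions.

```python
import numpy as np

def compound(volumes, method="mean"):
    """Voxel-wise compounding of co-registered volumes by mean or
    maximum; assumes the volumes are already aligned on a common grid."""
    stack = np.stack(volumes)
    return stack.mean(axis=0) if method == "mean" else stack.max(axis=0)

def snr(region):
    """Signal-to-noise ratio of a nominally homogeneous region,
    taken here as mean intensity divided by its standard deviation."""
    return float(region.mean() / region.std())
```

Averaging suppresses uncorrelated speckle (raising SNR roughly with the square root of the number of volumes), while the maximum rule favors the strongest echo at each voxel.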
This paper describes a set of methods that make it possible to estimate the position of a feature inside a three-dimensional (3D) space by starting from a sequence of two-dimensional (2D) acoustic images of the seafloor acquired with a sonar system. Typical sonar imaging systems are able to generate just 2D images, and the acquisition of 3D information involves sharp
The present study introduces a new three-dimensional (3D) surface image analysis technique in which white light illumination from different incident angles is used to create 3D surfaces with a photometric approach. The three-dimensional features of the surface images created are then used in the characterization of particle size distributions of granules. This surface image analysis method is compared to sieve
Ira Soppela; Sari Airaksinen; Juha Hatara; Heikki Räikkönen; Osmo Antikainen; Jouko Yliruusi; Niklas Sandler
The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are
Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and hemoglobin status - oxygen saturation and hemoglobin concentration - based on in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolutions and different mathematical models of oxygen delivery. Results: Sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and dependencies on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.
Integral imaging is employed as part of a three-dimensional imaging system, allowing the display of full colour images with continuous parallax within a wide viewing zone. A significant quantity of data is required to represent a captured integral 3D image with high resolution. In this paper, a novel approach to the problem of compressing the significant quantity of data required
We review image processing techniques for manipulating 2-D representations of facial images and show their potential for extension to 3-D. Techniques for the manipulation of 2-D images include those used for: (i) averaging images of different faces to form facial 'prototypes', (ii) automated caricature exaggeration of the way an individual face differs in shape and colour from a prototype and
Stereoscopic 3-D images are used with many virtual reality systems because of their sense of reality. At the same time, in training that heightens levels of self-control of autonomic responses with images, the reality of the software is thought to be important. Therefore, the authors investigated the use of 3-D images as a technical tool to heighten the reality of software. In this study, a laboratory was transformed into a relaxation room using 3-D images with fragrances, and an experiment was carried out to examine its psychological effect. From the results of this study, the authors reported on the effects of fragrances on psychological responses when viewing 3-D images and the possibilities of producing these effects for relaxation. PMID:8888646
We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.
The next generation of automated sorting machines for seedlings demands 3D models of the plants to be made at high speed and with high accuracy. In our system the 3D plant model is created based on the information of 24 RGB cameras. Our contribution is an image acquisition technique based on volumetric intersection which is capable of the required order
N. J. J. P. Koenderink; M. L. I. Wigham; F. B. T. F. Golbach; G. W. Otten; R. J. H. Gerlich; Zedde van de H. J
Advances in microscopy now enable researchers to easily acquire multi-channel three-dimensional (3D) images and 3D time series (4D). However, processing, analyzing, and displaying this data can often be difficult and time-consuming. We discuss some of the software tools and techniques that are available to accomplish these tasks.
Jeffrey L. Clendenon; Jason M. Byars; Deborah P. Hyink
We present novel intelligent tools for mining 3D medical images. We focus on detecting discriminative Regions of Interest (ROIs) and mining associations between their spatial distribution and other clinical assessment. To identify these highly informative regions, we propose utilizing statistical tests to selectively partition the 3D space into a number of hyper-rectangles. We apply quantitative characterization techniques to extract
The vast majority of information about cell and cell-organelle structure was obtained by means of transmission electron microscopy investigation of serial thin sections of cells. However, it is often very difficult to derive information about the 3D structure of specimens from such electron micrographs. A new program that restores a 3D image of cells from serial thin-section micrographs has been developed
Uryi P. Volkov; Nikolai P. Konnov; Olga V. Novikova; Roman A. Yakimenko
Computing power at low cost, highly accurate 2D, 3D data acquisition systems, advanced methods of 2D, 3D signal processing, image synthesis, artificial intelligence and huge database management utilities are the basic tools that can be used for development of dedicated systems that can help the work of art historians and archaeologists. In this paper we describe the application of
Dragomir Milojevic; Cedric Laugerotte; P. Dunham; N. Warzee
Visualization of 3D data from ultrasound images is a challenging task due to the noisy and fuzzy nature of ultrasound images and the large amount of computation time required. This paper presents an efficient volume rendering technique for visualization of 3D ultrasound images using noise-reduction filtering and extraction of the boundary surface with good image quality. A truncated-median filter in the 2D ultrasound image is proposed for reducing speckle noise within an acceptable processing time. In order to adapt to the fuzzy nature of the boundary surface of the ultrasound image, an adaptive thresholding is also proposed. The decision of the threshold is based on the idea that the effective boundary surface is estimated by the gray level above an adequate noise threshold and the width along the pixel ray. The proposed rendering method is simulated with a 3D fetus ultrasound image of 148 × 194 × 112 voxels. Several preprocessing methods were tested and compared with respect to computation time and subjective image quality. According to the comparison study, the proposed volume rendering method shows good performance for volume visualization of 3D ultrasound images.
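A truncated-median filter of the kind referred to above can be sketched as follows. The `keep` fraction and the exact truncation rule are assumptions, since the paper's variant is not specified here; the idea is that re-taking the median after discarding the samples farthest from it approximates the local mode, which suppresses speckle better than a plain median.

```python
import numpy as np

def truncated_median(window, keep=0.8):
    """One step of a truncated-median filter on a flat window of
    samples: discard the samples farthest from the median, then take
    the median of the remainder (a cheap approximation to the mode)."""
    w = np.sort(np.ravel(window)).astype(float)
    m = np.median(w)
    order = np.argsort(np.abs(w - m))          # nearest-to-median first
    kept = w[np.sort(order[: max(1, int(keep * w.size))])]
    return float(np.median(kept))
```

Applied over a sliding window of the 2D ultrasound frame, this suppresses speckle outliers before the volume is rendered.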
Data processing for three dimensional mass spectrometry (3D-MS) imaging was investigated, starting with a consideration of the challenges in its practical implementation using a series of sections of a tissue volume. The technical issues related to data reduction, 2D imaging data alignment, 3D visualization, and statistical data analysis were identified. Software solutions for these tasks were developed using functions in MATLAB. Peak detection and peak alignment were applied to reduce the data size, while retaining the mass accuracy. The main morphologic features of tissue sections were extracted using a classification method for data alignment. Data insertion was performed to construct a 3D data set with spectral information that can be used for generating 3D views and for data analysis. The imaging data previously obtained for a mouse brain using desorption electrospray ionization mass spectrometry (DESI-MS) imaging have been used to test and demonstrate the new methodology. PMID:22392622
This grant supported the acquisition of equipment towards the development of what has been termed a Postprocessing and 3D Graphical Imaging Facility. The primary function of the facility is in the analysis of numerical and experimental data, perhaps creat...
Motivation: Modern anatomical and developmental studies often require high-resolution imaging of large specimens in three dimensions (3D). Confocal microscopy produces high-resolution 3D images, but is limited by a relatively small field of view compared to the size of large biological specimens. Therefore, motorized stages that move the sample are used to create a tiled scan of the whole specimen. The
Stephan Preibisch; Stephan Saalfeld; Pavel Tomancak
In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of the reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps the image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a
This feature issue of Applied Optics on Digital Holography and 3D Imaging is the sixth of an approximately annual series. Forty-seven papers are presented, covering a wide range of topics in phase-shifting methods, low coherence methods, particle analysis, biomedical imaging, computer-generated holograms, integral imaging, and many others. PMID:23292430
Kim, Myung K; Hayasaki, Yoshio; Picart, Pascal; Rosen, Joseph
In this paper, we introduce the hardware/software technology used for implementing a 3D stereo image capturing system built from two OV3640 CMOS camera modules and camera interface hardware implemented in an S3C6410 MCP. We also propose a multi-segmented method of capturing an image for a better 3D depth feeling. An image is composed of 9 segmented sub-images, each of which is captured using two degrees of freedom in a DC servo for each of the left and right CMOS camera modules, to improve the focusing problem in each segmented sub-image. First, we analyze the whole image. We expect that this new method will improve the comfortable 3D depth feeling even though its synthesizing method is somewhat complicated.
Ham, Woonchul; Badarch, Luubaatar; Tumenjargal, Enkhbaatar; Kwon, Hyeokjae
The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. The algorithm applies a watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. In this paper, a 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data is embedded into the host image. The watermark extraction process is the inverse of embedding. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data is badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding. Thus our method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.
Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low-spatial-resolution imagery and the presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment the dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.
Karakaya, Mahmut [ORNL; Kerekes, Ryan A [ORNL; Gleason, Shaun Scott [ORNL; Martins, Rodrigo [St. Jude Children's Research Hospital; Dyer, Michael [St. Jude Children's Research Hospital
Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as the technique employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.
Registration of retinal images taken at different times is frequently required to measure changes caused by disease or to document the retinal location of visual stimuli. Cross-correlation has been used previously for such registration, but it is computationally intensive. We have modified a faster algorithm, sequential similarity detection (SSD), to use only the portion of the template that contains retinal vessels.
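The vessel-restricted SSD idea can be sketched as follows, assuming small integer images as nested lists; `masked_ssd` and `register` are hypothetical names, and the exhaustive shift search stands in for the sequential (early-abandoning) strategy of the full algorithm:

```python
def masked_ssd(ref, img, mask, dx, dy):
    """Sum of squared differences using only the mask==1 (vessel) pixels
    of the template; out-of-bounds comparisons are heavily penalized."""
    total = 0
    for y in range(len(ref)):
        for x in range(len(ref[0])):
            if mask[y][x]:
                yy, xx = y + dy, x + dx
                if 0 <= yy < len(img) and 0 <= xx < len(img[0]):
                    d = ref[y][x] - img[yy][xx]
                    total += d * d
                else:
                    total += 255 * 255
    return total

def register(ref, img, mask, search=2):
    """Exhaustive search for the (dx, dy) shift minimizing the masked SSD."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            s = masked_ssd(ref, img, mask, dx, dy)
            if best is None or s < best[0]:
                best = (s, dx, dy)
    return best[1], best[2]
```

Restricting the sum to vessel pixels cuts the per-shift cost roughly in proportion to the vessel fraction of the template.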
We obtained stereoscopic 3D-CT images in maxillofacial bone fracture patients. These images are made at two different angles. One is equivalent to the view obtained by a subciliary incision during surgery. Another is equivalent to the view obtained by oral incision during surgery. A stereoscopic image is created with a pair of images that differ from each other by a 6 degree shift of the z axis. PMID:10096337
Seno, H; Mizunuma, M; Nishida, M; Inoue, M; Yanai, A; Irimoto, M
3D registration of ultrasound images is an important and fast-growing research area with various medical applications, such as image-guided radiotherapy and surgery. However, this registration process remains extremely challenging due to the deformation of soft tissue and the existence of speckles in these images. This paper presents a technique for intra-subject, intra-modality elastic registration of 3D ultrasound images. Using the
In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.
This MAP project is investigating and realising an alternative technology to provide 3-D bone images at very high (µm) resolutions, without ionising radiation, and potentially much more cost-effectively than µCT. This new method uses time-resolved or coherence-gating imaging schemes with near-IR light sources, preferably compact solid-state lasers, cost-effective semiconductor lasers or light-emitting diodes. This system has the potential to resolve single osteoclast resorption pits and to allow 3-D biomedical imaging in significantly reduced time compared with µCT.
Background: This paper presents an automated method for the identification of thin membrane tubes in 3D fluorescence images. These tubes, referred to as tunneling nanotubes (TNTs), are newly discovered intercellular structures that connect living cells through a membrane continuity. TNTs are 50-200 nm in diameter, crossing from one cell to another at their nearest distance. In microscopic images,
This paper presents a robust approach for 3D point reconstruction based on a set of images taken from a static scene with known, but not necessarily exact or regular, camera parameters. The points to be reconstructed are chosen from the contours of images, and a world-based formulation of the reconstruction problem and associated epipolar geometry is used. The result is
Generic singularities can provide position-independent information about the qualitative shape of surfaces. The authors determine the singularities of the principal direction fields of a surface (its umbilic points) from a computation of the index of the fields. The authors present examples both for 3-D synthetic images to which noise has been added and for clinical magnetic resonance images
With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets,
The computation of sensor motion from sets of displacement vectors obtained from consecutive pairs of images is discussed. The problem is investigated with emphasis on its application to autonomous robots and land vehicles. The effects of 3D camera rotation and translation upon the observed image are discussed, particularly the concept of the focus of expansion (FOE). It is shown that
The acquisition of 3D information is currently achieved by stereophotography, line and grid projection techniques, or by laser scanning in combination with a fast distance-measuring device. This paper describes a new principle using a single CCD camera with an optical demodulator in front of it. The scene is illuminated by a high-frequency intensity-modulated light source. Demodulating the backscattered light with a gateable image intensifier yields a grey-level image which directly corresponds to the object's form. Intensity variations within the image due to inhomogeneous object reflectivity or illumination intensity are overcome by a phase-shift technique. Possible applications for such a 3D camera include industrial automation, medical and industrial endoscopic analyses, robotics, and 3D digitization.
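The phase-shift demodulation can be illustrated with the standard four-step formula, assuming four intensity samples taken at 0°, 90°, 180° and 270° modulation phase; the conversion of phase to range via the modulation frequency is a textbook relation, not taken from this paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_range(i1, i2, i3, i4, f_mod):
    """Recover the modulation phase from four 90-degree-shifted intensity
    samples (standard four-step formula) and convert it to distance.
    Removes reflectivity/illumination variations, since the common
    amplitude cancels in the ratio."""
    phi = math.atan2(i4 - i2, i1 - i3)
    if phi < 0:
        phi += 2 * math.pi
    return C * phi / (4 * math.pi * f_mod)
```

The ambiguity interval is C / (2 f_mod), so the modulation frequency trades unambiguous range against depth resolution.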
Hoefler, Heinrich; Jetter, Volker; Wagner, Elmar E.
A protocol for determining structural resolution using a potentially traceable reference material is proposed. Where possible, terminology was selected to conform to that published in the ISO JCGM 200:2008 (VIM) and ASTM E 2544-08 documents. The concepts of resolvability and edge width are introduced to more completely describe the ability of an optical non-contact 3D imaging system to resolve small features. A distinction is made between 3D range cameras, which obtain spatial data from the total field of view at once, and 3D range scanners, which accumulate spatial data for the total field of view over time. The protocol is presented through the evaluation of a 3D laser line range scanner.
MacKinnon, David; Beraldin, J.-Angelo; Cournoyer, Luc; Carrier, Benjamin; Blais, François
A method is proposed for quantitative description of blood-vessel trees, which can be used for tree classification and/or indirect monitoring of physical parameters. The method is based on texture analysis of 3D images of the trees. Several types of trees were defined, with distinct tree parameters (number of terminal branches, blood viscosity, input and output flow). A number of trees were computer-simulated for each type. A 3D image was computed for each tree and its texture features were calculated. The best discriminating features were found and applied to a 1-NN nearest neighbor classifier. It was demonstrated that (i) tree images can be correctly classified for realistic signal-to-noise ratios, (ii) some texture features are monotonically related to tree parameters, and (iii) 2D texture analysis is not sufficient to represent the trees in the discussed sense. Moreover, the applicability of the texture model to quantitative description of vascularity images was also supported by unsupervised exploratory analysis. Finally, experimental confirmation was performed using confocal microscopy images of rat brain vasculature. Several classes of brain tissue were clearly distinguished based on 3D texture numerical parameters, including control and different kinds of tumours treated with NG2 proteoglycan to promote angiogenesis-dependent growth of the abnormal tissue. The method, applied to, e.g., magnetic resonance images of real neovasculature or to retinal images, can be used to support noninvasive medical diagnosis of vascular system diseases. PMID:21803438
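The feature-extraction and 1-NN steps can be sketched as follows; as a simplification, only two first-order features (mean and standard deviation of voxel intensities) stand in for the paper's texture features, and the function names are hypothetical:

```python
import math

def texture_features(voxels):
    """Two simple first-order texture features of a voxel intensity list:
    mean and standard deviation."""
    n = len(voxels)
    mean = sum(voxels) / n
    var = sum((v - mean) ** 2 for v in voxels) / n
    return (mean, math.sqrt(var))

def classify_1nn(sample, labelled):
    """1-NN classification: return the label of the training feature
    vector closest (Euclidean distance) to the sample's features.
    `labelled` is a list of (feature_tuple, label) pairs."""
    feats = texture_features(sample)
    return min(labelled, key=lambda lf: math.dist(feats, lf[0]))[1]
```

Real texture descriptors (e.g. co-occurrence or run-length features) would replace the two first-order statistics without changing the classifier.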
3D imaging systems are currently being developed using liquid lens technology for use in medical devices as well as in consumer electronics. Liquid lenses operate on the principle of electrowetting to control the curvature of a buried surface, allowing for a voltage-controlled change in focal length. Imaging systems which utilize a liquid lens allow extraction of depth information from the object field through a controlled introduction of defocus into the system. The design of such a system must be carefully considered in order to simultaneously deliver good image quality and meet the depth-of-field requirements for image processing. In this work a corrective model has been designed for use with the Varioptic Arctic 316 liquid lens. The design can be optimized for depth of field while minimizing aberrations for a 3D imaging application. The modeled performance is compared to the measured performance of the corrected system over a large range of focal lengths.
Bower, Andrew J.; Bunch, Robert M.; Leisher, Paul O.; Li, Weixu; Christopher, Lauren A.
This paper discusses the design and development of a miniature, high-resolution 3-D imaging sonar. The design utilizes frequency-steered phased array (FSPA) technology. FSPAs present a small, low-power solution to the problem of underwater imaging sonars. The technology provides a method to build sonars with a large number of beams without the proportional power, circuitry, and processing complexity. The design differs from previous methods in that the array elements are manufactured from a monolithic material. With this technique the arrays are flat, and considerably smaller element dimensions are achievable, which allows for higher frequency ranges and smaller array sizes. In the current frequency range, the demonstrated array has ultra-high image resolution (1″ range × 1° azimuth × 1° elevation) and small size (<3″ × 3″). The design of the FSPA utilizes the phasing-induced frequency-dependent directionality of a linear phased array to produce multiple beams in a forward sector. The FSPA requires only two hardware channels per array and can be arranged in single- and multiple-array configurations that deliver wide-sector 2-D images. 3-D images can be obtained by scanning the array in a direction perpendicular to the 2-D image field and applying suitable image processing to the multiple scanned 2-D images. This paper introduces the 3-D FSPA concept, theory, and design methodology. Finally, results from a prototype array are presented and discussed. PMID:21112066
Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display. Modeling 3D objects rapidly and effectively is a challenge. A 3D model can be extracted from multiple images. The system only requires a sequence of images taken by cameras whose parameters are unknown, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing, and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are acquired by a camera moving freely around the object. Second, scene depth is obtained by a non-local stereo matching algorithm. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm. An initial matching is then made for the first two images of the sequence. For each subsequent image, processed together with the previous one, the points of interest corresponding to those in previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and external parameters of the camera are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is acquired using a non-local cost aggregation method for stereo matching. A point cloud sequence is then derived from the scene depths, and a point cloud model is assembled from the sequence using the external camera parameters. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3D display. According to the experimental results, we can reconstruct a 3D point cloud model more quickly and efficiently than other methods.
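The depth-map-to-point-cloud step rests on two textbook relations, sketched below under the assumption of rectified stereo with known focal length (in pixels), baseline, and principal point; the function names are illustrative, not from the paper:

```python
def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) into a depth map (metres) using
    the rectified-stereo relation Z = f * B / d; zero disparity -> None."""
    return [[focal_px * baseline_m / d if d > 0 else None for d in row]
            for row in disparity]

def backproject(u, v, z, focal_px, cx, cy):
    """Back-project pixel (u, v) with depth z into a 3D camera-frame
    point via the pinhole model: X = (u - cx) Z / f, Y = (v - cy) Z / f."""
    return ((u - cx) * z / focal_px, (v - cy) * z / focal_px, z)
```

Applying `backproject` to every valid pixel of a depth map yields one point cloud per frame; the external camera parameters then place each cloud in a common coordinate system for merging.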
Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. This described method provides much better data coverage and accuracy in feature areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.
Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David
In this research, novel architectures based on different design approaches and arithmetic techniques such as direct mapping implementation, dynamic partial reconfiguration (DPR) mechanism, distributed arithmetic (DA) and systolic array (SA) will be developed for three dimensional (3D) medical image compression system. Moreover, solutions for processing large medical volumes will be investigated and power modelling of the architectures developed will be
This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear.
This survey paper discusses the 3D image processing challenges posed by present and future immersive telecommunications, especially immersive video conferencing and television. We introduce the concepts of presence, immersion and co-presence, and discuss their relation with virtual collaborative environments in the context of communications. Several examples are used to illustrate the current state of the art. We highlight the crucial
Francesco Isgrò; Emanuele Trucco; Peter Kauff; Oliver Schreer
Phased array ultrasound transducers have been fabricated in our laboratories at Duke University since 1970. In 1986, we began the development of 2-D arrays with a 20 × 20 element Mills cross array including 64 active channels operating at 1 MHz which produced the first real time 3-D ultrasound images. In our more recent arrays we have progressed to 108
Stephen W. Smith; Warren Lee; Edward D. Light; Jesse T. Yen; Patrick Wolf; Salim Idriss
Given a 3D range image of a scene containing multiple arbitrarily shaped objects, the authors segment the scene into homogeneous surface patches. A novel modular framework for the segmentation task is proposed. In the first module, over-segmentation is achieved using zeroth and first order local surface properties. The segmentation is then refined in the second module using high order surface
In this protocol, we describe how to make and analyze four dimensional (4D) movies of retinal lineage in the zebrafish embryo in vivo. 4D consists of three spatial dimensions (3D) reconstructed from stacks of confocal planes plus one time dimension. Our imaging is performed on transgenic cells that express fluorescent proteins under the control of cell-specific promoters or on cells that transiently express such reporters in specific retinal cell progenitors. An important aspect of lineage tracing is the ability to follow individual cells as they undergo multiple cell divisions, final migration, and differentiation. This may mean many hours of 4D imaging, requiring that cells be kept healthy and maintained under conditions suitable for normal development. The longest movies we have made are ~50 h. By analyzing these movies, we can see when a specific cell was born and who its sister was, allowing us to reconstruct its retinal lineages in vivo. PMID:23457345
The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990s, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.
Modern imaging techniques are able to generate high-resolution multimodal angiographic scans. The analysis of vasculature using numerous 2D tomographic images is time consuming and tedious, while 3D modeling and visualization enable presentation of the vasculature in a more convenient and intuitive way. This calls for development of interactive tools facilitating processing of angiographic scans and enabling creation, editing, and manipulation of 3D vascular models. Our objective is to develop a vascular editor (VE) which provides a suitable environment for experts to create and manipulate 3D vascular models correlated with surrounding anatomy. The architecture, functionality, and user interface of the VE are presented. The VE includes numerous interactive tools for building a vascular model from multimodal angiographic scans, editing, labeling, and manipulation of the resulting 3D model. It also provides comprehensive tools for vessel visualization, correlation of 2D and 3D representations, and tracing of small vessels of subpixel size. Education, research, and clinical applications of the VE are discussed, including the atlas of cerebral vasculature. To our best knowledge, there are no other systems offering similar functionality as the VE does. PMID:19350326
Marchenko, Yevgen; Volkau, Ihar; Nowinski, Wieslaw L
We present a novel approach to still image denoising based on effective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a
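The block-matching step can be sketched as an exhaustive SSD search within a local window, assuming a grayscale image as nested lists; `best_match` is a hypothetical simplification of the grouping step (it returns a single best match rather than a stack of similar blocks):

```python
def best_match(image, ref_y, ref_x, size, search):
    """Find the block most similar (smallest SSD) to the reference block
    at (ref_y, ref_x) within a +/-search window; returns its (y, x)."""
    def block(y, x):
        return [image[y + i][x + j] for i in range(size) for j in range(size)]
    ref = block(ref_y, ref_x)
    h, w = len(image), len(image[0])
    best = None
    for y in range(max(0, ref_y - search), min(h - size, ref_y + search) + 1):
        for x in range(max(0, ref_x - search), min(w - size, ref_x + search) + 1):
            if (y, x) == (ref_y, ref_x):
                continue  # skip the reference block itself
            ssd = sum((a - b) ** 2 for a, b in zip(ref, block(y, x)))
            if best is None or ssd < best[0]:
                best = (ssd, y, x)
    return best[1], best[2]
```

In the full method, all blocks below an SSD threshold would be stacked and filtered jointly in the 3D transform domain rather than just the single best match.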
Kostadin Dabov; Alessandro Foi; Vladimir Katkovnik; Karen Egiazarian
Background: We compared 2D, 3D high-dose (HD) and 3D low-dose (LD) gated myocardial Rb-82 PET imaging in 16 normal human studies. The main goal of the paper is to evaluate whether the images obtained in 3D LD studies are still of clinical quality comparable to the images obtained with the 2D HD or 3D HD studies. Methods: All 2D and 3D HD studies were performed with 2220 MBq of Rb-82. The 3D LD studies were performed with 740 MBq of Rb-82. A GE Advance PET system was used for acquisition. Polar maps were created and used to calculate noise among (NAS) and within (NWS) the segments in the noise analysis. In addition, the contrast between the left ventricular (LV) wall and the LV cavity was also analysed. For 13 subjects, the ejection fraction (EF) in 2D and 3D studies was calculated using the QGS program. Results: For the H20 reconstruction filter, the mean contrast in the mid-ventricular short-axis slice was 0.33 ± 0.06 for 2D studies. The same contrast was 0.38 ± 0.07 for the 3D HD studies and 0.34 ± 0.08 for 3D LD. For the 6 volunteers where 3D HD was used, NAS was 3.64×10^-4 and NWS was 1.79×10^-2 for 2D studies, and NAS was 3.70×10^-4 and NWS was 1.85×10^-2 for 3D HD studies, respectively. For the other 10 volunteers where 3D LD was used, NAS was 3.85×10^-4 and NWS was 1.82×10^-2 for the 2D studies, and NAS was 5.58×10^-4 and NWS was 1.91×10^-2 for the 3D LD studies, respectively. For the sharper H13 filter, the data followed the same pattern, with slightly higher values of contrast and noise. EF values in 2D and 3D were close; the Pearson correlation coefficient was 0.90, and the average difference over the 13 subjects was 8.3%. Conclusion: 2D and 3D HD gated Rb-82 PET cardiac studies have similar contrast, ejection fractions and noise levels. 3D LD gated imaging gave results comparable in contrast, EF and noise to either 2D or 3D HD gated PET imaging. 3D LD PET gated imaging can make Rb-82 PET cardiac imaging more affordable with significantly less radiation exposure to patients.
A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly according to the position and orientation of the camera. In case of a cooperating suspect, a 3D image may be taken using e.g. a laser scanning device. By projecting the 3D image onto a 2D image with the suspect's head in the same pose as that of the perpetrator, using the same focal length and pixel aspect ratio, numerical comparison of (ratios of) distances between fixed points becomes feasible. An experiment was performed in which, starting from two 3D scans and one 2D image of two colleagues, male and female, and using seven fixed anatomical locations in the face, comparisons were made for the matching and non-matching case. Using this method, the non-matching pair cannot be distinguished from the matching pair of faces. Facial expression and resolution of images were all more or less optimal, and the results of the study are not encouraging for the use of anthropometric arguments in the identification process. More research needs to be done though on larger sets of facial comparisons. PMID:16337353
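The projection underlying the comparison can be sketched with a simple pinhole model; assuming camera-frame landmark coordinates, a focal length in pixels, and a pixel aspect ratio, ratios of projected inter-landmark distances can then be compared numerically (function names are illustrative):

```python
import math

def project(point, focal, aspect=1.0):
    """Pinhole projection of a camera-frame 3D point (x, y, z) onto the
    image plane; `aspect` scales the vertical pixel pitch."""
    x, y, z = point
    return (focal * x / z, aspect * focal * y / z)

def distance_ratio(p, q, r, s):
    """Ratio of two 2D inter-landmark distances, the pose-dependent
    quantity compared between perpetrator and suspect images."""
    return math.dist(p, q) / math.dist(r, s)
```

Because the projection divides by depth, landmarks at different depths change their projected distance ratios with head pose, which is why the 3D scan must be rendered in the perpetrator's pose before measuring.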
Goos, Mirelle I M; Alberink, Ivo B; Ruifrok, Arnout C C
Breast tomosynthesis is an emerging state-of-the-art three-dimensional (3D) imaging technology that shows significant early promise in screening and diagnosing breast cancer. However, such images have significant out-of-plane artifacts due to the limited-angle nature of the tomography, which degrade image quality and can interfere with interpretation. In this paper, we develop a robust deblurring method to remove or suppress blurry artifacts by applying a three-dimensional (3D) nonlinear anisotropic diffusion filtering method. The differential equation of 3D anisotropic diffusion filtering is discretized using explicit and implicit numerical methods, respectively, combined with first (fixed grey value) and second (adiabatic) boundary conditions under a ten-nearest-neighbor grid configuration of the finite difference scheme. The discretized diffusion equation is applied to the breast volume reconstructed from the entire set of tomosynthetic images of the breast. The proposed diffusion filtering method is evaluated qualitatively and quantitatively on clinical tomosynthesis images. Results indicate that the proposed diffusion filtering method is very powerful in suppressing the blurry artifacts, and that the implicit numerical algorithm with the fixed-value boundary condition performs better at enhancing the contrast of the tomosynthesis image, demonstrating the effectiveness of the proposed filtering method in deblurring the out-of-plane artifacts.
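The explicit discretization can be sketched with a minimal Perona-Malik-style update, assuming a 6-neighbour stencil (rather than the ten-nearest-neighbor configuration used in the paper) and the adiabatic boundary condition implemented by index clamping; names and parameter values are illustrative:

```python
import math

def diffusion_step(vol, lam=0.1, kappa=15.0):
    """One explicit nonlinear anisotropic diffusion update on a 3D volume
    (nested lists); 6-neighbour stencil, adiabatic (zero-flux) boundaries
    via index clamping. lam is the time step, kappa the edge threshold."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])

    def g(d):  # edge-stopping function: small diffusivity across edges
        return math.exp(-(d / kappa) ** 2)

    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                c = vol[z][y][x]
                flux = 0.0
                for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    zz = min(max(z + dz, 0), nz - 1)
                    yy = min(max(y + dy, 0), ny - 1)
                    xx = min(max(x + dx, 0), nx - 1)
                    d = vol[zz][yy][xx] - c
                    flux += g(abs(d)) * d
                out[z][y][x] = c + lam * flux
    return out
```

With lam = 0.1 the explicit step satisfies the usual stability bound for a 6-neighbour stencil, and the zero-flux boundary conserves total intensity.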
The authors unintentionally omitted to mention that parallel work on 3D ECT was developed by Warsito and Fan (2003), but using artificial neural networks for 3D image reconstruction (whereas our group concentrated on algebraic methods), and that the rectangular shape of electrodes was also used by Warsito and Fan (2003), but in a twin-plane sensor (whereas our group used three or more planes). Reference: Warsito W and Fan L S 2003 Development of three-dimensional electrical capacitance tomography Proc. 3rd World Congress on Industrial Process Tomography (Banff, Canada, 2-5 September 2003) pp 391-6
Wajman, R.; Banasiak, R.; Mazurkiewicz, L.; Dyakowski, T.; Sankowski, D.
Laser imaging offers potential for observation, 3D terrain mapping and classification, and target identification, including behind vegetation, camouflage or glass windows, by day and night, and under all-weather conditions. First-generation systems deliver 3D point clouds. Threshold detection is strongly affected by the local opto-geometric characteristics of the objects, leading to inaccuracies in the measured distances, and by partial occlusion, leading to multiple echoes. Second-generation systems circumvent these limitations by recording the temporal waveforms received by the system, so that data processing can improve the telemetry and make the point cloud better match reality. Future algorithms may exploit the full potential of the 4D full-waveform data. Hence, being able to simulate point-cloud (3D) and full-waveform (4D) laser imaging is key. We have developed a numerical model for predicting the output data of 3D or 4D laser imagers. The model accounts for the temporal and transverse characteristics of the laser pulse (i.e. of the "laser bullet") emitted by the system, its propagation through a turbulent and scattering atmosphere, its interaction with the objects present in the field of view, and the characteristics of the optoelectronic reception path of the system.
Wireless video capsules can now carry out gastroenterological examinations. The images make it possible to analyze some diseases after the examination, but the gastroenterologist could make a direct diagnosis if the video capsule integrated vision algorithms. The first step toward in situ diagnosis is the implementation of 3-D imaging techniques in the video capsule. By transmitting only the diagnosis instead of the images, the video capsule's autonomy is increased. This paper focuses on the Cyclope project, an embedded active vision system that is able to provide 3-D and texture data in real time. The challenge is to realize this integrated sensor under the constraints on size, power consumption, and processing that are inherent limitations of the video capsule. We present the hardware and software development of a wireless multispectral vision sensor which enables the transmission of the 3-D reconstruction of a scene in real time. An FPGA-based prototype has been designed as a proof of concept. Experiments in the laboratory, in vitro, and in vivo on a pig have been performed to determine the performance of the 3-D vision system. A roadmap toward the integrated system is set out. PMID:23853370
In this paper, we present approaches toward an interactive visualization of real-time input, applied to 3-D visualizations of 2-D ultrasound echography data. The first, a 3 degrees-of-freedom (DOF) incremental system, visualizes a 3-D volume acquired as a stream of 2-D slices with location and orientation with 3 DOF. As each slice arrives, the system reconstructs a regular 3-D volume and renders it. Rendering is done by an incremental image-order ray-casting algorithm which stores and reuses the results of expensive resampling along the rays for speed. The second is our first experiment toward real-time 6 DOF acquisition and visualization. Two-dimensional slices with 6 DOF are reconstructed off-line, and visualized at an interactive rate using a parallel volume rendering code running on the graphics multicomputer Pixel-Planes 5.
Three-dimensional (3D) shape reconstruction remains an unresolved and active research topic at the intersection of computer vision and digital photogrammetry. In this paper, we focus on 3D object shape reconstruction from uncalibrated images and put forward a hybrid method. Recovering the 3D shape comprises two steps: first, calculate a homography transformation to obtain the outlines; second, calculate the reconstructed object height from a reference height using the vanishing point and vanishing line. This hybrid method requires no camera calibration or estimation of the fundamental matrix; hence, it reduces the computational complexity by eliminating the requirement for abundant conjugate points. Experiments show that the method is valid and yields useful results.
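The first step, transferring outlines via a homography, is commonly estimated with the normalized DLT algorithm from four or more point correspondences. The following is a generic sketch of that standard algorithm, not the authors' implementation; all names are illustrative:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (each Nx2, N >= 4)
    with the normalized DLT algorithm."""
    def normalize(pts):
        # Shift to centroid and scale so the mean distance is sqrt(2).
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    sp, Ts = normalize(np.asarray(src, float))
    dp, Td = normalize(np.asarray(dst, float))
    rows = []
    for (x, y, w), (u, v, _) in zip(sp, dp):
        # Two linear constraints per correspondence on the 9 entries of H.
        rows.append([0, 0, 0, -x, -y, -w, v * x, v * y, v * w])
        rows.append([x, y, w, 0, 0, 0, -u * x, -u * y, -u * w])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)          # null vector of the constraint matrix
    H = np.linalg.inv(Td) @ H @ Ts    # undo the normalizing transforms
    return H / H[2, 2]
```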
Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in many applications, such as the cinema industry or automotive active-safety systems. Depending on the application, systems present different features, for example color sensitivity, bi-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time of flight (iToF), starting from phase-delay measurements of sinusoidally modulated light. It acquires live movies at frame rates up to 50 frames/s over distances from 10 cm to 7.5 m.
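The phase-delay ranging principle can be sketched with the standard four-bucket estimator; the sampling convention and names below are assumptions, not details from the paper. At a modulation frequency of 20 MHz the unambiguous range c/(2f) is about 7.5 m, consistent with the maximum distance quoted above:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def itof_distance(samples, f_mod):
    """Indirect time-of-flight: recover distance from four samples of the
    sinusoidally modulated return, taken 90 degrees apart (4-bucket method)."""
    a0, a1, a2, a3 = samples
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)  # phase -> one-way distance

def unambiguous_range(f_mod):
    """Maximum distance before the measured phase wraps past 2*pi."""
    return C / (2 * f_mod)
```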
The limitations of conventional dosimeters restrict the comprehensiveness of verification that can be performed for advanced radiation treatments, presenting an immediate and substantial problem for clinics attempting to implement these techniques. In essence, the rapid advances in the technology of radiation delivery have not been paralleled by corresponding advances in the ability to verify these treatments. Optical-CT gel dosimetry is a relatively new technique with the potential to address this imbalance by providing high-resolution 3D dose maps in polymer and radiochromic gel dosimeters. We have constructed a first-generation optical-CT scanner capable of high-resolution 3D dosimetry and applied it to a number of simple and increasingly complex dose distributions, including intensity-modulated radiation therapy (IMRT). Prior to application to IMRT, the robustness of optical-CT gel dosimetry was investigated on geometry and variable-attenuation phantoms. Physical techniques and image processing methods were developed to minimize the deleterious effects of refraction, reflection, and scattered laser light. Here we present results of investigations into achieving accurate high-resolution 3D dosimetry with optical-CT, and show clinical examples of 3D IMRT dosimetry verification. In conclusion, optical-CT gel dosimetry can provide high-resolution 3D dose maps that greatly facilitate comprehensive verification of complex 3D radiation treatments. Good agreement was observed at high dose levels (>50%) between planned and measured dose distributions. Some systematic discrepancies were observed, however (rms discrepancy of 3% at high dose levels), indicating that further work is required to eliminate the confounding factors presently compromising the accuracy of optical-CT 3D gel dosimetry.
This paper describes a set of methods that make it possible to estimate the position of a feature inside a three-dimensional (3D) space by starting from a sequence of two-dimensional (2D) acoustic images of the seafloor acquired with a sonar system. Typical sonar imaging systems are able to generate just 2D images, and the acquisition of 3D information involves sharp increases in complexity and costs. The front-scan sonar proposed in this paper is a new piece of equipment devoted to acquiring a 2D image of the seafloor to be sailed over, and allows one to collect a sequence of images showing a specific feature during the approach of the ship. This makes it possible to recover the 3D position of a feature by comparing the feature positions along the sequence of images acquired from different (known) ship positions. This opportunity is investigated in the paper, where it is shown that encouraging results have been obtained by a processing chain composed of some blocks devoted to low-level processing, feature extraction and analysis, a Kalman filter for robust feature tracking, and some ad hoc equations for depth estimation and averaging. A statistical error analysis demonstrated the great potential of the proposed system even when some inaccuracies affect the sonar measurements and the knowledge of the ship's position. This was also confirmed by several tests performed on both simulated and real sequences, obtaining satisfactory results on both the feature tracking and, above all, the estimation of the 3D position. PMID:18238218
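The underlying geometric idea, recovering a feature's 3D position from sightings made at several known ship positions, amounts to a least-squares intersection of rays. The following minimal sketch stands in for the paper's ad hoc depth-estimation equations; names are illustrative:

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares intersection point of 3D rays x = o + t*d
    (one ray per known ship position). Needs at least two
    non-parallel rays, otherwise the system is singular."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    # Minimizes the sum of squared distances to all rays.
    return np.linalg.solve(A, b)
```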
Adaptive optics imaging of cone photoreceptors has provided unique insight into the structure and function of the human visual system and has become an important tool for both basic scientists and clinicians. Recent advances in adaptive optics retinal imaging instrumentation and methodology have allowed us to expand beyond cone imaging. Multi-wavelength and fluorescence imaging methods with adaptive optics have allowed multiple retinal cell types to be imaged simultaneously. These new methods have recently revealed rod photoreceptors, retinal pigment epithelium (RPE) cells, and the smallest retinal blood vessels. Fluorescence imaging coupled with adaptive optics has been used to examine ganglion cells in living primates. Two-photon imaging combined with adaptive optics can evaluate photoreceptor function non-invasively in the living primate retina.
Rossi, E A; Chung, M; Dubra, A; Hunter, J J; Merigan, W H; Williams, D R
A method of computing the three-dimensional (3-D) velocity field from 3-D cine computed tomography (CT) images of a beating heart is proposed. Using continuum theory, the authors develop two constraints on the 3-D velocity field generated by a beating heart. With these constraints, the computation of the 3-D velocity field is formulated as an optimization problem and a solution to the
We introduce a new 3D curved tubular intensity model in conjunction with a model fitting scheme for accurate segmentation and quantification of thin vessels in 3D tomographic images. The curved tubular model is formulated based on principles of the image formation process, and we have derived an analytic solution for the model function. In contrast to previous straight models, the new model makes it possible to accurately represent curved tubular structures, to directly estimate the local curvature by model fitting, and to more accurately estimate the shape and other parameters of tubular structures. We have successfully applied our approach to 3D synthetic images as well as 3D MRA and 3D CTA vascular images of humans, and achieved more accurate segmentation results in comparison to using a straight model.
Wörz, Stefan; von Tengg-Kobligk, Hendrik; Rohr, Karl
Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique which uses advanced imaging features and custom Windows-based software built on the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, flexible, depth-map-altered textured surfaces and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in, and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.
This paper describes the use of microwaves to accurately image objects behind dielectric walls. The data are first simulated using a finite-difference time-domain code. A large model of a room with walls and objects inside is used as a test case. Since the model and associated volume are large compared to the wavelength, the code is run on a parallel supercomputer. A fixed 2-D receiver array captures all the return data simultaneously. A time-domain backprojection algorithm with a correction for the time delay and refraction caused by the front wall then reconstructs high-fidelity 3-D images. A rigorous refraction correction using Snell's law and a simpler but faster linear correction are compared in both 2-D and 3-D. It is shown that imaging in 3-D and viewing an image in the plane parallel to the receiver array is necessary to identify objects by shape. It is also shown that a simple linear correction for the wall is sufficient.
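A bare-bones delay-and-sum backprojection conveys the reconstruction idea: each image point accumulates the receiver traces sampled at the round-trip delay to that point. The `extra_delay` argument stands in for the simple linear wall correction mentioned above (the rigorous version would trace refracted Snell paths per pixel); all names are illustrative assumptions:

```python
import numpy as np

def backproject(traces, t, rx_pos, grid_pts, c=299_792_458.0, extra_delay=0.0):
    """Time-domain delay-and-sum backprojection.

    traces: (n_rx, n_t) received signals; t: (n_t,) time axis;
    rx_pos: (n_rx, 3) receiver positions; grid_pts: (n_pts, 3) image points.
    extra_delay: constant two-way delay added per trace (linear wall model)."""
    img = np.zeros(len(grid_pts))
    dt = t[1] - t[0]
    for trace, rp in zip(traces, rx_pos):
        d = np.linalg.norm(grid_pts - rp, axis=1)
        tau = 2 * d / c + extra_delay          # monostatic round trip
        idx = np.clip(np.round((tau - t[0]) / dt).astype(int), 0, len(t) - 1)
        img += trace[idx]                       # coherent sum over receivers
    return img
```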
The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high-precision and well-structured measurements in (industrial) photogrammetry to fully automated, non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As the state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems, and object scale. The paper discusses the above-mentioned parameters and offers strategies for obtaining the highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements.
In addition, standards for accuracy verifications are presented and demonstrated by practical examples and tests.
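The relative figures above translate directly into object-space numbers; for example, at a relative precision of 1:100000, a 2 m object can be measured to about 20 µm. As a trivial illustration (the helper name is an assumption):

```python
def object_space_precision(diameter_m, relative=1e-5):
    """Expected object-space precision for a multi-view photogrammetric
    network, given the largest object diameter and a relative precision
    such as 1:100000 (relative=1e-5)."""
    return diameter_m * relative
```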
Stereotactic neurosurgery planning, an intrinsically three-dimensional procedure, is generally performed on the basis of two-dimensional tomographic or projection images. We present extensions to these conventional approaches that use stereoscopic digital subtraction angiography, three-dimensional volume rendered computed tomography or magnetic resonance images, or a combination of these modalities. The stereoscopic DSA images are analysed interactively on a 3-D workstation. This system employs a liquid-crystal polarizing shutter to display alternate left- and right-eye views to a user wearing polarized glasses. Quantitative planning operations may be performed on the basis of the angiograms alone, or in conjunction with tomographic images of the anatomy. We also describe the procedures used to produce volume-rendered three-dimensional images from MR and CT data-sets, as well as the methodology for combining the stereoscopic angiograms with the volumetric anatomical images. PMID:2285371
Peters, T M; Henri, C; Collins, L; Pike, B; Olivier, A
All over the world, 20% of men are expected to develop prostate cancer at some time in their lives. In addition to surgery, the traditional treatment for cancer, radiation treatment is becoming more popular. The most interesting radiation treatment for prostate cancer is the brachytherapy procedure. For the safe delivery of that therapy, imaging is critically important. In cases where a CT device is available, a combination of the information provided by CT and 3D ultrasound (U/S) images offers advantages in recognizing the borders of the lesion and delineating the region of treatment. For these applications the CT and U/S scans must be registered and fused into a multi-modal dataset. The purpose of the present development is a registration tool (registration, fusion and validation) for available CT volumes and 3D U/S images of the same anatomical region, i.e. the prostate. The combination of these two imaging modalities interlinks the advantages of high-resolution CT imaging and low-cost real-time U/S imaging, and offers a multi-modality imaging environment for further target and anatomy delineation. This tool has been integrated into the visualization software "InViVo", which has been developed over several years at Fraunhofer IGD in Darmstadt.
Firle, Evelyn A.; Wesarg, Stefan; Karangelis, Grigoris; Dold, Christian
Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two- and three-dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process, such as data noise and incorrect a priori assumptions about the imaged model, map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image's sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic crosswell EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
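For the direct-inversion case, both appraisal matrices can be formed explicitly. The following is a minimal sketch for a damped linearized least-squares step, with simple Tikhonov damping standing in for the a priori model term (an assumption; the paper's actual regularization may differ):

```python
import numpy as np

def resolution_and_covariance(G, lam, sigma_d=1.0):
    """Model resolution matrix R and posterior model covariance C for the
    damped least-squares estimate m = (G^T G + lam*I)^-1 G^T d.

    G: (n_data, n_model) linearized sensitivity matrix;
    lam: damping weight; sigma_d: data noise standard deviation."""
    GtG = G.T @ G
    Ginv = np.linalg.inv(GtG + lam * np.eye(G.shape[1])) @ G.T  # generalized inverse
    R = Ginv @ G                        # m_est = R @ m_true in the noise-free case
    C = sigma_d ** 2 * (Ginv @ Ginv.T)  # maps data noise into parameter error
    return R, C
```

Columns of R show how a point perturbation in the model spreads spatially (the resolution images described above), and the square root of diag(C) gives the per-parameter error estimate.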
The 3-D shape of each part of the heart is reconstructed in a voxel space (32×32×32) by using 11 inner and/or outer boundary curves on 7 transverse, 2 coronal, and 2 sagittal images. Such 3-D shapes can be obtained at 23 cardiac phases in a cardiac cycle. Some quantitative cardiac parameters are calculated from the 3-D data and displayed as
A quantitative on-line analysis of electrical activity in the pallidum of Parkinsonian patients has been developed to determine the focal point of lesioning. A 3D volume image system has been developed to display basal ganglia anatomy and coregister the electrophysiological data within the globus pallidus. Thirty patients undergoing 41 pallidotomies are presented. Neuronal activity from the pallidum is recorded using a semi-microelectrode. Based on this activity, lesioning is performed. Postlesion recordings are made to determine the necessity of additional lesioning. A stereoscopic 3D volume MR image system has been developed that along with on-line signal processing allows visualization of high neural activity in the pallidum and postlesion residual activity. PMID:10853076
Lehman, R M; Micheli-Tzanakou, E; Zheng, J; Hamilton, J L
With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. PMID:21602004
A 3D dynamic computed tomography (CT) scanner was developed for imaging objects undergoing periodic motion. The scanner system has high spatial and sufficient temporal resolution to produce quantitative tomographic/volume images of objects such as excised arterial samples perfused under physiological pressure conditions, and enables measurements of the local dynamic elastic modulus (Edyn) of the arteries in the axial and longitudinal directions. The system comprised a high-resolution modified x-ray image intensifier (XRII) based computed tomographic system and a computer-controlled cardiac flow simulator. A standard NTSC CCD camera with a macro lens was coupled to the electro-optically zoomed XRII to acquire dynamic volumetric images. Through prospective cardiac gating and computer-synchronized control, a time-resolved sequence of 20 mm thick high-resolution volume images of porcine aortic specimens during one simulated cardiac cycle was obtained. Performance evaluation of the scanner showed that tomographic images can be obtained with resolution as high as 3.2 mm-1, with only a 9% decrease in resolution for objects moving at velocities of 1 cm/s in 2D mode, and a static spatial resolution of 3.55 mm-1, with only a 14% decrease in resolution in 3D mode for objects moving at a velocity of 10 cm/s. Application of the system to imaging of intact excised arterial specimens under simulated physiological flow/pressure conditions enabled measurements of the Edyn of the arteries with a precision of +/- kPa for the 3D scanner. Evaluation of the Edyn in the axial and longitudinal directions produced values of 428 +/- 35 kPa and 728 +/- 71 kPa, demonstrating the isotropic and homogeneous viscoelastic nature of the vascular specimens. These values obtained from the dynamic CT system were not statistically different (p < 0.05) from the values obtained by standard uniaxial tensile testing and volumetric measurements.
Lee, Mark K.; Holdsworth, David W.; Fenster, Aaron
Today's markets and economies are becoming increasingly volatile and unpredictable; they are changing radically, and even the speed of innovation is accelerating. Manufacturing and production technology and systems must keep pace with this trend. The impact of novel, innovative 3D imaging technology in countering these radical changes is shown by the example of the robot paint process. One has to keep in mind that
A. Pichler; H. Bauer; C. Eberst; C. Heindl; J. Minichberger
The new MPE near infrared imaging spectrometer 3D represents a new generation of astronomical instrumentation. It is based on a 256^2 NICMOS-3 Rockwell array and can simultaneously obtain 256 H- or K-band spectra at R=1100 or 2100 from a square 16x16 pixel field on the sky. Typical pixel scales are 0.3"/pixel or 0.5"/pixel. 3D is a combination of a novel image slicer and a liquid nitrogen cooled long slit spectrometer. It includes high definition on-axis lens optics, a high efficiency directly ruled KRS-5 grism as well as a cold closed-loop piezo-driven tilt mirror allowing full spectral sampling. The instrument efficiency including detector is 15%. Combining the advantages of imaging and spectroscopy increases the observing efficiency on key astronomical objects (e.g. galactic nuclei) by such a large factor over existing grating or Fabry-Perot spectrometers that subarcsecond near-IR spectroscopy of faint Seyferts, starbursts, quasars, or distant galaxy clusters becomes feasible for the first time with 4m-class telescopes. As a portable instrument 3D has already been successfully deployed on several 2 and 4m-class telescopes.
Weitzel, L.; Krabbe, A.; Kroker, H.; Thatte, N.; Tacconi-Garman, L. E.; Cameron, M.; Genzel, R.
The recent development of real-time 3D ultrasound enables intracardiac beating-heart procedures, but the distorted appearance of surgical instruments is a major challenge to surgeons. In addition, tissue and instruments have similar gray levels in US images, and the interface between instruments and tissue is poorly defined. We present an algorithm that automatically estimates instrument location in intracardiac procedures. Expert-segmented images are used to initialize the statistical distributions of blood, tissue and instruments. Voxels are labeled through an iterative expectation-maximization algorithm that uses information from the neighboring voxels through a smoothing kernel. Once the three classes of voxels are separated, additional neighborhood information is used to provide spatial information based on the shape of instruments in order to correct misclassifications. We analyze the major axis of the segmented data through its principal components and refine the results by a watershed transform, which corrects the results at the contact between instrument and tissue. We present results on 3D in-vitro data from a tank trial, and 3D in-vivo data from a cardiac intervention on a porcine beating heart. Comparison of the algorithm's results to expert-annotated images shows the correct segmentation and position of the instrument shaft.
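The labeling step can be sketched as a plain 1-D Gaussian-mixture EM over voxel intensities, with initial class statistics (blood / tissue / instrument) taken from the expert segmentations. This generic sketch omits the neighborhood smoothing kernel and the shape-based corrections described above; all names are illustrative:

```python
import numpy as np

def em_gmm_1d(x, means, variances, weights, n_iter=50):
    """Expectation-maximization for a 1-D Gaussian mixture over voxel
    intensities. Returns hard labels plus the fitted class parameters."""
    x = np.asarray(x, float)
    mu = np.asarray(means, float).copy()
    var = np.asarray(variances, float).copy()
    w = np.asarray(weights, float).copy()
    for _ in range(n_iter):
        # E-step: responsibility of each class for each voxel.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update class means, variances and mixing weights.
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
        w = n / len(x)
    return r.argmax(axis=1), mu, var, w
```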
We propose a method, and present applications of it, that converts a diffraction pattern into an elemental image set in order to display it on an integral imaging based display setup. We generate elemental images based on diffraction calculations as an alternative to the commonly used ray tracing methods, which do not accommodate the interference and diffraction phenomena. Our proposed method enables us to obtain elemental images from a holographic recording of a 3D object/scene. The diffraction pattern can be either numerically generated data or digitally acquired optical data. The method shows the connection between a hologram (diffraction pattern) and an elemental image set of the same 3D object. We show three examples, one of which is the digitally captured optical diffraction tomography data of an epithelium cell. We obtained optical reconstructions with our integral imaging display setup, in which we used a digital lenslet array. We also obtained numerical reconstructions, again using the diffraction calculations, for comparison. The digital and optical reconstruction results are in good agreement. PMID:23187181
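The diffraction calculations referred to above are scalar propagation computations; the angular-spectrum method is one standard choice. The following is a generic sketch (not the authors' code) that propagates a sampled complex field over a distance z; the function name and parameters are assumptions:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a square, uniformly sampled complex field over a distance z
    using the angular-spectrum method; evanescent components are cut off."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)            # spatial frequencies, cycles/m
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # transfer function of free space
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating back over -z inverts the operation for all non-evanescent components, which is a convenient sanity check.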
Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface when it carries a moving vehicle. After the calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm pixel-1 at 1.4 m camera height from the ground. The scanning rate of the camera can be set to its maximum at 5000 lines s-1, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents the field tests on the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.
We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen. PMID:23938645
The most common intraocular vascular tumours are choroidal haemangiomas, vasoproliferative tumours, and retinal haemangioblastomas. Rarer conditions include cavernous retinal angioma and arteriovenous malformations. Options for ablating the tumour include photodynamic therapy, argon laser photocoagulation, trans-scleral diathermy, cryotherapy, anti-angiogenic agents, plaque radiotherapy, and proton beam radiotherapy. Secondary effects are common and include retinal exudates, macular oedema, epiretinal membranes, retinal fibrosis, as well as serous and tractional retinal detachment, which are treated using standard methods (ie, intravitreal anti-angiogenic agents or steroids as well as vitreoretinal procedures, such as epiretinal membrane peeling and release of retinal traction). The detection, diagnosis, and monitoring of vascular tumours and their complications have improved considerably thanks to advances in imaging. These include spectral domain and enhanced depth imaging optical coherence tomography (SD-OCT and EDI-OCT, respectively), wide-angle photography and angiography as well as wide-angle fundus autofluorescence. Such novel imaging has provided new diagnostic clues and has profoundly influenced therapeutic strategies so that vascular tumours and secondary effects are now treated concurrently instead of sequentially, enhancing any opportunities for conserving vision and the eye. In this review, we describe how SD-OCT, EDI-OCT, autofluorescence, wide-angle photography and wide-angle angiography have facilitated the evaluation of eyes with the more common vascular tumours, that is, choroidal haemangioma, retinal vasoproliferative tumours, and retinal haemangioblastoma. PMID:23196648
Retinal clinicians and researchers make extensive use of images, and the current emphasis is on digital imaging of the retinal fundus. The goal of this paper is to introduce a system, known as the retinal image vessel extraction and registration system, which provides the community of retinal clinicians, researchers, and study directors an integrated suite of advanced digital retinal image analysis tools over the Internet. The capabilities include vasculature tracing and morphometry, joint (simultaneous) montaging of multiple retinal fields, cross-modality registration (color/red-free fundus photographs and fluorescein angiograms), and generation of flicker animations for visualization of changes from longitudinal image sequences. Each capability has been carefully validated in our previous research work. The integrated Internet-based system can enable significant advances in retina-related clinical diagnosis, visualization of the complete fundus at full resolution from multiple low-angle views, analysis of longitudinal changes, research on the retinal vasculature, and objective, quantitative computer-assisted scoring of clinical trials imagery. It could pave the way for future screening services from optometry facilities. PMID:18632328
Tsai, Chia-Ling; Madore, Benjamin; Leotta, Matthew J; Sofka, Michal; Yang, Gehua; Majerovics, Anna; Tanenbaum, Howard L; Stewart, Charles V; Roysam, Badrinath
A hyperspectral imaging system has been set up and used to capture hyperspectral image cubes from various samples in the 400-1000 nm spectral region. The system consists of an imaging spectrometer attached to a CCD camera with fiber optic light source as the illuminator. The significance of this system lies in its capability to capture 3D spectral and spatial data that can then be analyzed to extract information about the underlying samples, monitor the variations in their response to perturbation or changing environmental conditions, and compare optical properties. In this paper preliminary results are presented that analyze the 3D spatial and spectral data in reflection mode to extract features to differentiate among different classes of interest using biological and metallic samples. Studied biological samples possess homogenous as well as non-homogenous properties. Metals are analyzed for their response to different surface treatments, including polishing. Similarities and differences in the feature extraction process and results are presented. The mathematical approach taken is discussed. The hyperspectral imaging system offers a unique imaging modality that captures both spatial and spectral information that can then be correlated for future sample predictions.
3-D reconstruction from medical images is an important application of computer graphics and biomedical image processing. Image segmentation is a crucial step in 3-D reconstruction. In this paper, an improved image segmentation method which is suitable for 3-D reconstruction is put forward. A 3-D reconstruction algorithm is used to reconstruct the 3-D model from images. First, rough edge is extracted
Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components including a diffuser, bandpass filter, registration mount, and fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE(TM), then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF >0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution.
Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of readout. Noise was low at ~2% for 2 mm reconstructions. The DLOS/PRESAGE(TM) benchmark tests show consistently excellent performance, with very good agreement to simple known distributions. The telecentric design was critical to enabling fast (~15 min) imaging with minimal stray light artifacts. The system produces accurate isotropic 2 mm³ dose data over clinical volumes (e.g. 16 cm diameter phantoms, 12 cm height), and represents a uniquely useful and versatile new tool for commissioning complex radiotherapy techniques. The system also has wide versatility, and has successfully been used in preliminary tests with protons and with kV irradiations. Biology. Attenuation corrections for optical-emission-CT were done by modeling physical parameters in the imaging setup within the framework of an ordered subset expectation maximization (OSEM) iterative reconstruction algorithm. This process has a well-documented history in single photon emission computed tomography (SPECT), but is inherently simpler due to the lack of excitation photons to account for. The excitation source strength distribution and the excitation and emission attenuation were modeled. The accuracy of the correction was investigated by imaging phantoms containing known distributions of attenuation and fluorophores. The correction was validated on a manufactured phantom designed to give uniform emission in a central cuboidal region and later applied to a cleared mouse brain with GFP (green fluorescent protein) labeled vasculature and a cleared 4T1 xenograft flank tumor with constitutive RFP (red fluorescent protein). Reconstructions were compared to corresponding slices imaged with a fluorescent dissection microscope.
Significant optical-ECT attenuation artifacts were observed in the uncorrected phantom images, which appeared up to 80% less intense than the verification image in the central region. The corrected phantom images showed excellent agreement with the verification image with only slight variations. The corrected tissue sample reconstructions showed general agreement with the verification images. Comp
One of the main challenges in 3D integral imaging (InI) is to overcome the limited depth of field of displayed 3D images. Although this limitation can be due to many factors, the phenomenon that produces the strongest deterioration of out-of-focus images is the facet braiding. In fact, the facet braiding is an essential problem, since InI 3D monitors are not
In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information of the 3D and 4D MRA image sequences. Initially,
Nils Daniel Forkert; Dennis Säring; Jens Fiehler; Till Illies; Dietmar Möller; Heinz Handels
We have developed a non-invasive photoacoustic ophthalmoscopy (PAOM) for in vivo retinal imaging. PAOM detects the photoacoustic signal induced by pulsed laser light shined onto the retina. By using a stationary ultrasonic transducer in contact with the eyelids and scanning only the laser light across the retina, PAOM provides volumetric imaging of the retinal micro-vasculature and retinal pigment epithelium at a high speed. For B-scan frames containing 256 A-lines, the current PAOM has a frame rate of 93 Hz, which is comparable with state-of-the-art commercial spectral-domain optical coherence tomography (SD-OCT). By integrating PAOM with SD-OCT, we further achieved OCT-guided PAOM, which can provide multi-modal retinal imaging simultaneously. The capabilities of this novel technology were demonstrated by imaging both the microanatomy and microvasculature of the rat retina in vivo.
Jiao, Shuliang; Jiang, Minshan; Hu, Jianming; Fawzi, Amani; Zhou, Qifa; Shung, K. Kirk; Puliafito, Carmen A.; Zhang, Hao F.
A novel technique for 3D gamma-ray imaging is presented. This method combines the positron annihilation Compton scattering imaging technique with a supplementary position-sensitive detector, which registers gamma-rays scattered in the object at angles of about 90°. The 3D coordinates of the scattering location can be determined rather accurately by applying the Compton principle. This method requires access to the object from two orthogonal sides and allows one to achieve a position resolution of a few mm in all three space coordinates. A feasibility study for a 3D camera is presented based on Monte Carlo calculations.
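The Compton principle referred to above ties the scattering angle to the energy of the scattered photon, which is how the scattering location can be constrained. A minimal sketch of that standard relation (textbook physics, not the authors' reconstruction code):

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV

def scattered_energy(e_kev, theta_rad):
    """Compton formula: energy of a photon of initial energy e_kev
    after scattering through angle theta_rad."""
    return e_kev / (1.0 + (e_kev / M_E_C2) * (1.0 - math.cos(theta_rad)))
```

For the 511 keV annihilation photons used here, scattering at 90° leaves exactly half the energy (255.5 keV), which is what a supplementary detector at right angles would register.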
This paper addresses the problem of 3D surface scan refinement, which is desirable because noise, outliers, and missing measurements are present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large-scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) handles non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated based on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large-scale shape is preserved. Fine surface details, which were previously not contained in the surface scans, are incorporated using image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data.
The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a Photometric Stereo framework.
Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian
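Of the three error components above, the integrability term is the easiest to illustrate: for gradients p = dz/dx and q = dz/dy of a genuine surface z(x, y), the mixed derivatives must agree, so dp/dy must equal dq/dx. A minimal sketch of that deviation measure (illustrative only, not the authors' variational implementation):

```python
import numpy as np

def integrability_deviation(p, q):
    """Integrability component of the error term: for a consistent normal
    field derived from a surface z(x, y) with p = dz/dx, q = dz/dy,
    dp/dy == dq/dx, so the summed squared curl measures the deviation."""
    p_y = np.gradient(p, axis=0)  # derivative of p along y (rows)
    q_x = np.gradient(q, axis=1)  # derivative of q along x (columns)
    return float(((p_y - q_x) ** 2).sum())
```

A photometric-stereo normal field corrupted by noise or interreflections violates this constraint, which is what the penalty term suppresses during minimization.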
The ability to image complex geologies such as salt domes in the Gulf of Mexico and thrusts in mountainous regions is a key to reducing the risk and cost associated with oil and gas exploration. Imaging these structures, however, is computationally expensive. Datasets can be terabytes in size, and the processing time required for the multiple iterations needed to produce a velocity model can take months, even with the massively parallel computers available today. Some algorithms, such as 3D, finite-difference, prestack, depth migration remain beyond the capacity of production seismic processing. Massively parallel processors (MPPs) and algorithms research are the tools that will enable this project to provide new seismic processing capabilities to the oil and gas industry. The goals of this work are to (1) develop finite-difference algorithms for 3D, prestack, depth migration; (2) develop efficient computational approaches for seismic imaging and for processing terabyte datasets on massively parallel computers; and (3) develop a modular, portable, seismic imaging code.
Three-dimensional (3D) microwave imaging reflectometry has been developed in the large helical device to visualize fluctuating reflection surface which is caused by the density fluctuations. The plasma is illuminated by the probe wave with four frequencies, which correspond to four radial positions. The imaging optics makes the image of cut-off surface onto the 2D (7 × 7 channels) horn antenna mixer arrays. Multi-channel receivers have been also developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), which is an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO. PMID:23126965
Laser-induced retinal lesions are used to treat a variety of eye diseases such as diabetic retinopathy and retinal detachment. An instrumentation system has been developed to track a specific lesion coordinate on the retinal surface and provide corrective signals to maintain laser position on the coordinate. High resolution retinal images are acquired via a CCD camera coupled to a fundus camera and video frame grabber. Optical filtering and histogram modification are used to enhance the retinal vessel network against the lighter retinal background. Six distinct retinal landmarks are tracked on the high contrast image obtained from the frame grabber using two-dimensional blood vessel templates. The frame grabber is hosted on a 486 PC. The PC performs correction signal calculations using an exhaustive search on selected image portions. X and Y laser correction signals are derived from the landmark tracking information and provided to a pair of galvanometer-steered mirrors via a data acquisition and control subsystem. This subsystem also responds to patient inputs and to the system monitoring lesion growth. This paper begins with an overview of the robotic laser system design, followed by implementation and testing of a development system for proof of concept. The paper concludes with specifications for a real-time system.
Barrett, Steven F.; Jerath, Maya R.; Rylander, Henry G.; Welch, Ashley J.
MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It allows the application of MRI mammography to breasts with dense tissue, postoperative scarring, and silicone implants. However, due to the vast quantity of images and the subtlety of differences among MR sequences, there is a need for reliable computer diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adaptive using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented under the IDL environment. The breast and glandular tissue rendering, slicing and animation were displayed.
Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.
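The sliding-window adaptive thresholding described for glandular tissue segmentation can be sketched as follows. This is a generic local-mean illustration (the paper's exact window size and threshold rule are not given here); an integral image keeps each window sum O(1):

```python
import numpy as np

def adaptive_threshold(img, win=15, offset=0.0):
    """Binarize an image by comparing each pixel against the mean of its
    win x win neighborhood plus an offset; edges use replicated padding."""
    img = np.asarray(img, float)
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    # Zero-padded integral image: box sums become four lookups.
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    out = np.empty((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            s = (ii[y + win, x + win] - ii[y, x + win]
                 - ii[y + win, x] + ii[y, x])
            out[y, x] = img[y, x] > s / (win * win) + offset
    return out
```

Because the threshold follows the local mean, slowly varying background intensity (common in MR images) does not swamp the segmentation the way a single global threshold would.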
A review of recent progress in colour holography is provided with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue, mainly Denisyuk colour holograms, and digitally-printed colour holograms are described, together with their recent improvements. An alternative to silver-halide materials are the panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, which are also covered. The light sources used to illuminate the recorded holograms are very important to obtain ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described. They show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering is highly dependent on the correct recording technique using the optimal recording laser wavelengths, the availability of improved panchromatic recording materials, and new display light sources.
We present a new class of approaches for rigid-body registration and their evaluation in studying Multiple Sclerosis via multi-protocol MRI. Two pairs of rigid-body registration algorithms were implemented, using cross-correlation and mutual information, operating on original gray-level images and on the intermediate images resulting from our new scale-based method. In the scale image, every voxel has the local scale value assigned to it, defined as the radius of the largest sphere centered at the voxel with homogeneous intensities. 3D data of the head were acquired from 10 MS patients using 6 MRI protocols. Images in some of the protocols have been acquired in registration. The co-registered pairs were used as ground truth. Accuracy and consistency of the 4 registration methods were measured within and between protocols for known amounts of misregistration. Our analysis indicates that there is no single best method: for medium and large misregistrations, methods using mutual information give the best results, while for small misregistrations and for the consistency tests, correlation methods using the original gray-level images do. We have previously demonstrated the use of local scale information in fuzzy connectedness segmentation and image filtering. Scale may also have considerable potential for image registration, as suggested by this work.
Nyul, Laszlo G.; Udupa, Jayaram K.; Saha, Punam K.
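Mutual information, one of the two similarity measures compared above, can be computed from the joint intensity histogram of the two images: it is high when knowing one image's intensity predicts the other's. A minimal sketch (not the authors' implementation):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equally-shaped images, estimated from
    their joint histogram; higher values indicate better alignment."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()              # joint probability
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0                         # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A registration algorithm then searches over rigid-body transforms for the pose that maximizes this quantity.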
Some of the important features of how pulsatile flow generates artifacts in three-dimensional magnetic resonance imaging are analyzed and demonstrated. Time variations in the magnetic resonance signal during the heart cycle lead to more complex patterns of artifacts in 3D imaging than in 2D imaging. The appearance and location of these artifacts within the image volume are shown to be describable as displacements along a line in a plane parallel to that defined by the phase and volume encode directions. The angle of the line in the plane depends solely upon the imaging parameters while the ghost displacement along the line is proportional to the signal modulation frequency. Aliasing of these ghosts leads to a variety of artifact patterns which are sensitive to the pulsation period and repetition time of the pulse sequence. Numerical simulations of these effects were found to be in good agreement with experimental images of an elastic model of a human carotid artery under simulated physiological conditions and with images of two human subjects. PMID:8412600
In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models developed for preclinical or other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic and ultrawide-band laser ultrasound tomographies to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).
Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.
This paper describes the design and implementation of the 3D Haar wavelet transform (HWT) with transpose-based computation and dynamic partial reconfiguration (DPR). As a result of the separability property of the multi-dimensional HWT, the proposed architecture has been implemented using a cascade of three N-point 1D HWTs and two transpose memories for a 3D volume of N × N × N
Afandi Ahmad; Benjamin Krill; Abbes Amira; Hassan Rabah
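The separability the architecture exploits means the 3D HWT factors into three 1D transforms, one per axis, with transposes in between (here expressed as axis moves). A minimal single-level software sketch, not the hardware design itself:

```python
import numpy as np

def haar_1d(x):
    """Single-level 1D Haar transform along the last axis (even length):
    averages (approximation) followed by differences (detail), scaled so
    the transform is orthonormal."""
    a = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)
    d = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)
    return np.concatenate([a, d], axis=-1)

def haar_3d(vol):
    """Separable 3D HWT: apply the 1D transform along each axis in turn,
    moving the target axis to the end -- the 'transpose-based' scheme."""
    for ax in range(3):
        vol = np.moveaxis(haar_1d(np.moveaxis(vol, ax, -1)), -1, ax)
    return vol
```

Because each 1D stage is orthonormal, the cascade preserves the total signal energy, a convenient sanity check for any implementation.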
In this paper, we will analyze the depth factor in a general monocular video sequence. We can distinguish foreground from background without additional information and then create the binocular image by shifting foreground pixels. By applying the CID method, we will get the strength of the sharpness and contrast from the image by evaluating the farness of the region based on property of
The availability of reliable ultrafast laser systems and their unique properties for material processing are the basis for new lithographic methods in the sector of micro- and nanofabrication processes such as two-photon 3D-lithography. Beside its flexibility, one of the most powerful features of this technology is the true 3D structuring capability, which allows fabrication with higher efficiency and with higher resolution compared to a sequential layer-by-layer structuring and build-up technique. Up to now, the two-photon method was mainly used for writing 3D structures quasi anywhere inside a bulk volume. In combination with a sophisticated and versatile machine vision support, the two-photon 3D-lithography is now targeting for micro- and nano-optical applications and the integration of optical and photonic components into optical microsystems. We report on a disruptive improvement of this lithographic method by means of an optical detection system for optical components (e.g. laser diode chips / LEDs and photo diodes) that are already assembled on an optical micropackage. The detection system determines the position coordinates of features of the optical microsystem in all three dimensions with micrometer resolution, combining digital image processing and evaluation of back reflected laser light from the surface of the system. This information is subsequently processed for controlling the fabrication of directly laser written optical and photonic structures inside and around such an optical microsystem. The strong advantage of this approach lies in its adaptation of laser written structures to existing features and structures, which also permits to compensate for misalignments and imperfections of preconfigured packages.
3D holoscopic imaging is employed as part of a three-dimensional imaging system, allowing the display of full colour images with continuous parallax within a wide viewing zone. A review of the 3D holoscopic imaging technology from the point of view of optical systems and 3D image processing, including 3D image coding, depth map computation and computer generated graphics, is discussed.
There is currently a considerable interest in methods of invariant 3D image recognition. Indeed, very often information about 3D objects can be obtained by computer tomographic reconstruction, 3D magnetic resonance imaging, passive 3D sensors or active range finders. Consequently, algorithms for the systematic derivation of 3D moment invariants should be developed for 3D color object recognition. In this work
Valeri Labounets; Ekaterina V. Labunets-Rundblad; Jaakko T. Astola
Granulomatous prostatitis is a benign inflammatory condition of the prostate which can be mistaken for prostatic carcinoma both clinically and on imaging findings. In MR imaging, granulomatous prostatitis has been shown to cause signal intensity decrease in the prostatic peripheral zone similar to carcinoma on T2-weighted sequences. The recent introduction of 3D MR spectroscopy (3D MRS) into clinical practice adds
This paper explores a 3-D computer artist’s approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation.
The reconstruction of histology sections into a 3-D volume receives increased attention due to its various applications in modern medical image analysis. To guarantee a geometrically coherent reconstruction, we propose a new way to register histological sections simultaneously to previously acquired reference images and to neighboring slices in the stack. To this end, we formulate two potential functions and associate them to the same Markov random field through which we can efficiently find an optimal solution. Due to our simultaneous formulation and the absence of any segmentation step during the reconstruction we can dramatically reduce error propagation effects. This is illustrated by experiments on carefully created synthetic as well as real data sets. PMID:21995076
Feuerstein, Marco; Heibel, Hauke; Gardiazabal, José; Navab, Nassir; Groher, Martin
A fast and flexible technique for 3D modeling of buildings based on image sequences is proposed in this paper. It first describes the importance of study in this area, then gives a detailed analysis of each step of the whole reconstruction process, including homonymous point selection, determination of key points' relationships, the method for splicing points appearing on different stereo images, texture mapping, and so on. Finally, real data have been used to validate the proposed technique, using VC++6.0 and OpenGL to realize interactive visualization of the buildings, and satisfactory results have been obtained, demonstrating the effectiveness and flexibility of the approach.
An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking for the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking of five points (four at the eye corners and one at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll and pitch. Analytical and experimental results are reported.
Horprasert, Thanarat; Yacoob, Yaser; Davis, Larry S.
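The projective invariance the approach exploits is easy to demonstrate: the cross-ratio of four collinear points is unchanged by any projective transformation of the line, which is why eye-corner cross-ratios survive the camera projection. A minimal sketch:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC)/(AD/BD) of four collinear points given by scalar
    coordinates; invariant under projective maps x -> (px + q)/(rx + s)."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))
```

Applying an arbitrary projective map to all four points leaves the value untouched, so the quantity measured in the image equals the quantity defined by the 3D face geometry.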
This paper presents a method for extracting lymph node regions from 3-D abdominal CT images using a 3-D minimum directional difference filter. In the case of surgery for colonic cancer, resection of metastasis lesions is performed with resection of the primary lesion. Lymph nodes are the main route of metastasis and are quite important for deciding the resection area. Diagnosis of enlarged lymph
SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band) frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth extrapolation/interpolation methods such as Least-Squares SuperResolution (LSSR) and Least-Squares CLEAN (LSCLEAN), one can achieve resolutions that are commensurate with the carrier frequency (λ/4) rather than the bandwidth (c/2B). Furthermore, extreme angle diversity affords complete coverage of a target's backscatter, and a correspondingly more literal image. To realize these benefits, however, one must image the scene in 3-D; otherwise layover-induced misregistration compromises the coherent summation that yields improved resolution. Practically, one is limited to very sparse elevation apertures, i.e. a small number of circular passes. Here we demonstrate that both LSSR and LSCLEAN can reduce considerably the sidelobe and alias artifacts caused by these sparse elevation apertures. Further, we illustrate how a hypothetical multi-static geometry consisting of six vertical real-aperture receive apertures, combined with a single circular transmit aperture provide effective, though sparse and unusual, 3-D k-space support. Forward scattering captured by this geometry reveals horizontal scattering surfaces that are missed in monostatic backscattering geometries. This paper illustrates results based on LucernHammer UHF and L-band mono- and multi-static simulations of a backhoe.
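The contrast drawn above between bandwidth-limited range resolution c/2B and the λ/4 limit reachable with wide-angle k-space coverage is simple arithmetic; a sketch with illustrative L-band numbers (not the paper's exact system parameters):

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Conventional bandwidth-limited range resolution, c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def diversity_resolution(f_center_hz):
    """Resolution limit lambda/4 achievable with hemispheric
    k-space (angle) coverage at carrier frequency f_center_hz."""
    return C / (4.0 * f_center_hz)
```

With, say, a 1.3 GHz carrier and 100 MHz of bandwidth, the λ/4 limit (~6 cm) is roughly 25 times finer than the c/2B limit (~1.5 m), which is the payoff the extreme spatial diversity is buying.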
In this paper we present a procedure to extract the bronchus area from 3D CT images of the lung taken by a helical CT scanner and to visualize it as a 3D shaded image. The extraction procedure consists of 3D region growing with the parameters adjusted automatically, and is performed fast by using a 3D painting algorithm. The result is visualized by computer graphics
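A 3D region-growing step of the kind described can be sketched as a 6-connected flood fill; the fixed intensity interval below stands in for the paper's automatically adjusted parameters:

```python
import numpy as np
from collections import deque

def region_grow_3d(vol, seed, lo, hi):
    """Breadth-first 3D region growing: starting from a seed voxel, accept
    6-connected neighbors whose intensity lies within [lo, hi]."""
    mask = np.zeros(vol.shape, dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < vol.shape[i] for i in range(3))
                    and not mask[n] and lo <= vol[n] <= hi):
                mask[n] = True
                q.append(n)
    return mask
```

For airway extraction the seed would sit in the trachea and the interval around the low CT numbers of air; the resulting mask is what a surface-shading renderer would then display.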
A 3-D augmented reality navigation system using autostereoscopic images was developed for MRI-guided surgery. The 3-D images are created by employing an animated autostereoscopic image, integral videography (IV), which provides geometrically accurate 3-D spatial images and reproduces motion parallax without using any supplementary eyeglasses or tracking devices. The spatially projected 3-D images are superimposed onto the surgical area and viewed
The site selection for a deep ice core is critical because of the high cost of drilling, extracting, and analyzing the ice cores. CReSIS has developed several multichannel radar systems which provide information at a much higher level of detail than was possible from previous radar surveys for ice core sites. Among the inputs used in site selection, depth sounding radars provide information about the internal stratigraphy, bed topography, and basal conditions. The internal stratigraphy and bed topography are ideally smooth and flat-lying - an indication that there are no ice flow disturbances. The chronological order must be preserved in the stratigraphy and lack of flow disturbances helps ensure that. Also, internal layers that can be traced to an existing ice core to be dated allow paleo-accumulation rates to be estimated when coupled with an ice flow model. Determining the basal conditions, specifically whether or not the bed is wet or dry, helps determine if the bottom layers (i.e. the oldest ice) are melting. In 2005 and 2008, CReSIS conducted two ground based radar surveys covering the GRIP, GISP2, and NEEM ice core sites. Unlike traditional depth sounders, these radar systems are multichannel making 3D imaging possible. In 2006 an airborne version of the ground based system was fielded for the first time and 3D tomographic images have been produced with that system as well. This work will present results from these ground and airborne surveys and how the information provided from these data can be used to enable optimal site selections in the future. Fig 1 shows an example of how 3D imaging resolves the englacial features that indicate the flow disturbances discovered by the GRIP and GISP2 ice core analysis. In Fig 1a, the bed is the bright mass of targets from 0-4 km along-track at the bottom of the image. Note the distinct change in texture of the englacial scatterers, from specular layers to point targets, around 2750 m and below.
Fig 1b shows the cross-track position of the dominant englacial scatterers. The scattering centers for the flat internal layers above 2750 m are located directly beneath the platform while the disturbed layers below 2750 m are spread out. Similar englacial targets are seen at the GISP2 site. a) Radar profile with GRIP core highlighted by vertical line. b) Cross-track position of the englacial scatterers.
Paden, J. D.; Blake, W.; Gogineni, P. S.; Leuschen, C.; Allen, C.; Dahl-Jensen, D.
In this paper, a new solution is introduced for the efficient compression of 3D video based on color and depth maps. While standard video codecs are designed for coding monoscopic videos, their application to depth maps is found to be suboptimal. With regard to the special properties of the depth maps, we propose an extension to conventional video coding in
B. Kamolrat; W. A. C. Fernando; M. Mrak; A. Kondoz
This paper and its companion are concerned with the problems of 3-D object recognition and shape estimation from image curves using a 3-D object curve model that is invariant to affine transformation onto the image space, and a binocular stereo imaging system. The objects of interest here are the ones that have markings (e.g., characters, letters, special drawings and symbols,
A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated to realize low resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables capturing of a full HD depth image with mm-scale depth accuracy, the largest depth-image resolution among state-of-the-art devices, which have been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth image and its capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan
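The TOF principle underlying the shutter can be illustrated with a generic four-phase continuous-wave demodulation scheme (an illustrative assumption; the paper's actual gated shutter drive is not reproduced here): the depth follows from the phase delay of the 20 MHz modulation.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a90, a180, a270, f_mod=20e6):
    """Depth from four phase-shifted intensity samples of a
    continuous-wave TOF demodulator (generic 4-phase scheme).
    A 20 MHz modulation gives an unambiguous range of
    c / (2 f_mod) ~ 7.5 m."""
    phase = np.arctan2(a270 - a90, a0 - a180) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)
```

For a target at depth d, the round-trip phase is 4π·f_mod·d/c, so inverting the arctangent of the sample differences recovers d modulo the unambiguous range.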
Creation of 3D images through remote sensing is a topic of interest in many applications such as terrain/building modeling and automatic target recognition (ATR). Several photogrammetry-based methods have been proposed that derive 3D information from digital images from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and lack of proper convergence in the merging process. This paper presents a method to create 3D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3D points are fused at the sensor level, more accurate 3D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods.
Dynamic friction experiments in granitoid or gabbroic rocks that achieve earthquake slip velocities reveal significant weakening by melt-lubrication of the sliding surfaces. Extrapolation of these experimental results to seismic source depths (> 7 km) suggests that the slip weakening distance (Dw) over which this transition occurs is < 10 cm. The physics of this lubrication in the presence of a fluid (melt) is controlled by surface micro-topography. In order to characterize fault surface microroughness and its evolution during dynamic slip events on natural faults, we have undertaken an analysis of three-dimensional (3D) fault surface microtopography and its causes on a suite of pseudotachylyte-bearing fault strands from the Gole Larghe fault zone, Italy. The solidification of frictional melt soon after seismic slip ceases "freezes in" earthquake source geometries; however, it also precludes the development of the extensive fault surface exposures that have enabled direct studies of fault surface roughness. We have overcome this difficulty by imaging the intact 3D geometry of the fault using high-resolution X-ray computed tomography (CT). We collected a suite of 2-3.5 cm diameter cores (2-8 cm long) from individual faults within the Gole Larghe fault zone with a range of orientations (±45 degrees from average strike) and slip magnitudes (0-1 m). Samples were scanned at the University of Texas High Resolution X-ray CT Facility, using an Xradia MicroCT scanner with a 70 kV X-ray source. Individual voxels (3D pixels) are ~36 µm across. Fault geometry is thus imaged over ~4 orders of magnitude, from the micron scale up to ~Dw. Pseudotachylyte-bearing fault zones are imaged as tabular bodies of intermediate X-ray attenuation crosscutting high attenuation biotite and low attenuation quartz and feldspar of the surrounding tonalite.
We extract the fault surfaces (contact between the pseudotachylyte bearing fault zone and the wall rock) using integrated manual mapping, automated edge detection, and statistical evaluation. This approach results in a digital elevation model for each side of the fault zone that we use to quantify melt thickness and volume as well as surface microroughness and explore the relationship between these properties and the geometry, slip magnitude, and wall rock mineralogy of the fault.
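Once a digital elevation model of each fault surface is extracted, a first-order microroughness measure is the RMS height after detrending. The sketch below uses a best-fit-plane detrend in plain numpy; this is an illustrative simplification, not the study's full scale-dependent roughness analysis.

```python
import numpy as np

def rms_roughness(dem):
    """RMS roughness of a fault-surface DEM (2-D array of heights)
    after removing the best-fit plane by least squares."""
    ny, nx = dem.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    # design matrix for z = a*x + b*y + c
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(dem.size)])
    coef, *_ = np.linalg.lstsq(A, dem.ravel(), rcond=None)
    residual = dem.ravel() - A @ coef
    return np.sqrt(np.mean(residual**2))
```

A tilted but perfectly planar surface thus reports zero roughness; a sinusoidal corrugation of amplitude a reports roughly a/√2.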
Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (most probably calcium carbonate from the medium, whose distribution and localization in the cell STXM makes visible) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.
Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J. [Canadian Light Source Inc., University of Saskatchewan, Saskatoon, SK S7N 0X4 (Canada); Hitchcock, A. P. [BIMR, McMaster University, Hamilton, ON L8S 4M1 (Canada); Prange, A. [Microbiology and Food Hygiene, Niederrhein University of Applied Sciences, Moenchengladbach (Germany); Institute for Microbiology and Virology, University of Witten/Herdecke, Witten (Germany); Center for Advanced Microstructures and Devices (CAMD), Louisiana State University, Baton Rouge, LA (United States); Franz, B. [Microbiology and Food Hygiene, Niederrhein University of Applied Sciences, Moenchengladbach (Germany); Harkness, T. [College of Medicine, University of Saskatchewan, Saskatoon, SK S7N 5E5 (Canada); Obst, M. [Center for Applied Geoscience, Tuebingen University, Tuebingen (Germany)
Stress echocardiography is a routinely used clinical procedure to diagnose cardiac dysfunction by comparing wall motion information in prestress and poststress ultrasound images. Incomplete data, complicated imaging protocols and misaligned prestress and poststress views, however, are known limitations of conventional stress echocardiography. We discuss how the first two limitations are overcome via the use of real-time three-dimensional (3-D) ultrasound imaging,
Raj Shekhar; Vladimir Zagrodsky; Mario J. Garcia; James D. Thomas
The aim of the study was to create an anatomically correct 3D rapid prototyping (RPT) model for patients with complex heart disease and altered geometry of the atria or ventricles to facilitate planning and execution of the surgical procedure. Based on computed tomography (CT) and magnetic resonance imaging (MRI) images, regions of interest were segmented using the Mimics 9.0 software (Materialise, Leuven, Belgium). The segmented regions were the target volume and structures at risk. After generating an STL file (stereolithography file) from the patient's data set, the 3D printer Z™ 510 (4D Concepts, Gross-Gerau, Germany) created a 3D plaster model. The patient-individual 3D-printed RPT models were used to plan the resection of a left ventricular aneurysm and a right ventricular tumor. The surgeon was able to identify risk structures, assess the ideal resection lines and determine the residual shape after a reconstructive procedure (LV remodelling, infiltrating tumor resection). Using a 3D print of the LV aneurysm, reshaping of the left ventricle while ensuring sufficient LV volume was easily accomplished. The use of the RPT model during resection of ventricular aneurysms and malignant cardiac tumors may facilitate the surgical procedure due to better planning and improved orientation. PMID:17925319
Jacobs, Stephan; Grunert, Ronny; Mohr, Friedrich W; Falk, Volkmar
Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications.
To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer.
As digital imaging and computing power increasingly develop, so too does the potential to use these technologies in ophthalmology. Image processing, analysis and computer vision techniques are increasing in prominence in all fields of medical science, and are especially pertinent to modern ophthalmology, as it is heavily dependent on visually oriented signs. The retinal microvasculature is unique in that it
Niall Patton; Tariq M. Aslam; Thomas MacGillivray; Ian J. Deary; Baljean Dhillon; Robert H. Eikelboom; Kanagasingam Yogesan; Ian J. Constable
In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions at up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands, with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects, which is not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.
Digital breast tomosynthesis (DBT) is a new volumetric breast cancer screening modality. It is based on the principles of computed tomography (CT) and shows promise for improving sensitivity and specificity compared to digital mammography, which is the current standard protocol. A barrier to critically evaluating any new modality, including DBT, is the lack of patient data from which statistically significant conclusions can be drawn; such studies require large numbers of images from both diseased and healthy patients. Since the number of detected lesions is low in relation to the entire breast cancer screening population, there is a particular need to acquire or otherwise create diseased patient data. To meet this challenge, we propose a method to insert 3D lesions in the DBT images of healthy patients, such that the resulting images appear qualitatively faithful to the modality and could be used in future clinical trials or virtual clinical trials (VCTs). The method facilitates direct control of lesion placement and lesion-to-background contrast and is agnostic to the DBT reconstruction algorithm employed.
Vaz, Michael S.; Besnehard, Quentin; Marchessoux, Cédric
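The insertion step can be sketched as a local additive modification of reconstructed voxels, with the lesion-to-background contrast as a free parameter. This is a hypothetical simplification for illustration: the binary mask, coordinate convention, and additive model are assumptions, not the authors' algorithm.

```python
import numpy as np

def insert_lesion(volume, lesion_mask, center, contrast):
    """Insert a binary 3-D lesion mask into a reconstructed volume by
    raising the covered voxels relative to their mean background.
    Assumes the shifted mask lies entirely inside the volume."""
    out = volume.copy()
    zs, ys, xs = np.nonzero(lesion_mask)
    # shift mask coordinates so the mask is centred at `center`
    offset = np.array(center) - np.array(lesion_mask.shape) // 2
    zi, yi, xi = zs + offset[0], ys + offset[1], xs + offset[2]
    background = out[zi, yi, xi].mean()
    out[zi, yi, xi] += contrast * background
    return out
```

Direct control of placement (`center`) and contrast mirrors the two degrees of freedom the abstract highlights, and the scheme is independent of the reconstruction algorithm because it operates on the reconstructed voxels.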
Optical coherence tomography (OCT), being a noninvasive imaging modality, has begun to find vast use in the diagnosis and management of ocular diseases such as glaucoma, where the retinal nerve fiber layer (RNFL) has been known to thin. Furthermore, the recent availability of considerably larger volumetric data with spectral-domain OCT has increased the need for new processing techniques. In this paper, we present an automated 3-D graph-theoretic approach for the segmentation of 7 surfaces (6 layers) of the retina from 3-D spectral-domain OCT images centered on the optic nerve head (ONH). The multiple surfaces are detected simultaneously through the computation of a minimum-cost closed set in a vertex-weighted graph constructed using edge/regional information, and subject to a priori determined varying surface interaction and smoothness constraints. The method also addresses the challenges posed by the presence of large blood vessels and the optic disc. The algorithm was compared to the average manual tracings of two observers on a total of 15 volumetric scans, and the border positioning error was found to be 7.25 ± 1.08 µm and 8.94 ± 3.76 µm for the normal and glaucomatous eyes, respectively. The RNFL thickness was also computed for 26 normal and 70 glaucomatous scans, where the glaucomatous eyes showed significant thinning (p < 0.01, mean thickness 73.7 ± 32.7 µm in normal eyes versus 60.4 ± 25.2 µm in glaucomatous eyes).
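The minimum-cost closed-set formulation detects several interacting surfaces at once; a much simpler single-surface analogue, useful for intuition, is a per-column dynamic program with a hard smoothness constraint. This stand-in is not the paper's method, only an illustration of surface-as-path segmentation on a cost image.

```python
import numpy as np

def segment_surface(cost, max_jump=1):
    """Trace one surface through a 2-D cost image (rows = depth,
    columns = A-scans) as the minimum-cost left-to-right path whose
    row index changes by at most `max_jump` between columns."""
    nrows, ncols = cost.shape
    acc = cost.astype(float).copy()          # accumulated path cost
    back = np.zeros((nrows, ncols), dtype=int)
    for c in range(1, ncols):
        for r in range(nrows):
            lo, hi = max(0, r - max_jump), min(nrows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # trace back the minimum-cost path
    surface = np.empty(ncols, dtype=int)
    surface[-1] = int(np.argmin(acc[:, -1]))
    for c in range(ncols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface
```

The cost image would typically come from edge or regional terms, as in the paper's vertex weights; the hard jump constraint plays the role of the smoothness constraints.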
The basic concept of the Triangular SPECT System for 3-D organ volume imaging was reported last year. Further stringent imaging tests with the initial experimental system revealed that the composite SPECT images from three detectors remove small artifacts present in the individual detector SPECT images by their cross-cancellation. Since then we have developed a clinical prototype with dynamic SPECT capability for
C. B. Lim; R. Walker; C. Pinkstaff; K. I. Kim; J. Anderson; S. King; J. Janzso; D. Shand; F. Valentino; K. Coulman
Theoretical and experimental investigations on the real-time transmission of 3-D images formed by parallax panoramagrams are described. At the image-taking station, a 3-D image is optically decomposed into line images, which can be directly handled by a high resolution TV set. By registering a lenticular sheet to the reproduced line images on a monitor screen, the reconstructed 3-D image is
Time-of-flight laser range finding, deep-space communications and scanning video imaging are three applications requiring very low-noise optical receivers to achieve detection of fast and weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 µm pitch for 3D-LADAR was designed as a gated optical receiver. The ROIC works at 77 K and includes the unit cell circuit, column-level circuit, timing control, bias circuit and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit and timing control module. In particular, the preamplifier uses the capacitor-feedback transimpedance amplifier (CTIA) structure, which has two capacitors to offer switchable capacitance for passive/active dual-mode imaging. The main element of the column-level circuit is a precision multiply-by-two circuit implemented as a switched-capacitor circuit; switched-capacitor circuits are well suited to the signal processing of a ROIC due to their working characteristics. The output driver uses a simple unity-gain buffer. Because the signal is amplified in the column-level circuit, the amplifier in the unity-gain buffer is a rail-to-rail amplifier. In active imaging mode, the integration time is 80 ns; for integration currents from 200 nA to 4 µA, the circuit shows a nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; integration currents from 1 nA to 20 nA likewise show a nonlinearity of less than 1%.
The site selection for a deep ice core is critical because of the high cost of drilling, extracting, and analyzing the ice cores. CReSIS has developed several multichannel radar systems which provide information at a much higher level of detail than was possible from previous radar surveys for ice-core sites. Among the inputs used in site selection, depth sounding radars provide information about the internal stratigraphy, bed topography, and basal conditions. The internal stratigraphy and bed topography are ideally smooth and flat-lying, an indication that there are no ice flow disturbances. The chronological order must be preserved in the stratigraphy, and lack of flow disturbances helps ensure that. Also, internal layers that can be traced to an existing ice core to be dated allow paleo-accumulation rates to be estimated when coupled with an ice flow model. Determining the basal conditions, specifically whether the bed is wet or dry, helps determine if the bottom layers (i.e. the oldest ice) are melting. CReSIS has conducted several ground and airborne radar surveys around the GRIP, GISP2, NGRIP, and NEEM ice core sites. Unlike traditional depth sounders, the radar systems are multichannel, making 3D imaging possible. This work will present results from these ground and airborne surveys and how the information provided from these data can be used to enable optimal site selections in the future. Fig 1 shows an example of how 3D imaging resolves the englacial features that indicate the flow disturbances discovered by the GRIP and GISP2 ice core analysis. In Fig 1a, the bed is the bright mass of targets from 0-4 km along-track at the bottom of the image. Note the distinct change in texture of the englacial scatterers, from specular layers to point targets, around 2750 m and below. Fig 1b shows the cross-track position of the dominant englacial scatterers.
The scattering centers for the flat internal layers above 2750 m are located directly beneath the platform while the disturbed layers below 2750 m are spread out. Similar englacial targets are seen at the GISP2 site.
Paden, J. D.; Li, J.; Gogineni, P. S.; Leuschen, C.; Dahl-Jensen, D.
Holography is an ideal 3D display technique, which shows us beautiful 3D images. It is hoped that 3D holo TV, a 3D television system based on the holographic principle, will be put to practical use. However, there are some difficulties in realizing it. One of them is a capturing method under natural light, because it requires a laser light
M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures – each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry to image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their ability to support segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.
Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z.; Fridman, Yonatan; Fritsch, Daniel S.; Gash, Graham; Glotzer, John M.; Jiroutek, Michael R.; Lu, Conglin; Muller, Keith E.; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L.
We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate shown from the confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.
Gradient-echo MRI has revealed anisotropic magnetic susceptibility in the brain white matter. This magnetic susceptibility anisotropy can be measured and characterized with susceptibility tensor imaging (STI). In this study, a method of fiber tractography based on STI is proposed and demonstrated in the mouse brain. STI experiments of perfusion-fixed mouse brains were conducted at 7.0 T. The magnetic susceptibility tensor was calculated for each voxel with regularization and decomposed into its eigensystem. The major eigenvector is found to be aligned with the underlying fiber orientation. Following the orientation of the major eigenvector, we are able to map distinctive fiber pathways in 3D. As a comparison, diffusion tensor imaging (DTI) and DTI fiber tractography were also conducted on the same specimens. The relationship between STI and DTI fiber tracts was explored with similarities and differences identified. It is anticipated that the proposed method of STI tractography may provide a new way to study white matter fiber architecture. As STI tractography is based on physical principles that are fundamentally different from DTI, it may also be valuable for the ongoing validation of DTI tractography. PMID:21867759
Liu, Chunlei; Li, Wei; Wu, Bing; Jiang, Yi; Johnson, G Allan
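The tracking step described above, following the major eigenvector of the susceptibility tensor from voxel to voxel, can be sketched as simple Euler streamline integration. The nearest-neighbour field lookup and the absence of a stopping mask are simplifications of what an STI/DTI tractography pipeline would actually use.

```python
import numpy as np

def track_fiber(eigvec, seed, step=0.5, n_steps=100):
    """Track a fiber by Euler integration along a per-voxel major
    eigenvector field of shape (nz, ny, nx, 3), with sign correction
    so consecutive steps do not flip direction (eigenvectors are
    defined only up to sign). Stops when leaving the volume."""
    pos = np.asarray(seed, dtype=float)
    prev = None
    path = [pos.copy()]
    shape = eigvec.shape[:3]
    for _ in range(n_steps):
        idx = tuple(np.round(pos).astype(int))    # nearest voxel
        if any(i < 0 or i >= s for i, s in zip(idx, shape)):
            break                                 # left the volume
        v = eigvec[idx]
        if prev is not None and np.dot(v, prev) < 0:
            v = -v                                # keep heading forward
        pos = pos + step * v
        prev = v
        path.append(pos.copy())
    return np.array(path)
```

In a uniform field the streamline is a straight line through the volume; in real data the sign correction is what lets the tracker pass smoothly through regions where the eigendecomposition flips orientation.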
We present a new design for a reflective afocal AO-OCT retinal imaging system. The optical performance of this instrument is compared to our previous multimodal AO-OCT/AO-SLO retinal imaging system. The feasibility of new instrumentation for improved visualization of microscopic retinal structures will be discussed. Examples of images acquired with this new AO-OCT instrument will be presented.
Lee, Sang Hyuck; Werner, John S.; Zawadzki, Robert J.
Retinal image registration is commonly required in order to combine the complementary information in different retinal modalities. In this paper, a new automatic scheme to register retinal images is presented and is currently tested in a clinical environment. The scheme considers the suitability and efficiency of different image transformation models and function optimization techniques, following an initial preprocessing stage. Three
George K. Matsopoulos; Nicolaos A. Mouravliansky; Konstantinos K. Delibasis; Konstantina S. Nikita
The paper approaches an educational application of 3D dynamic interactive images of medical devices. Manuals containing 3D images are considered more efficient than classic manuals. Combination of 3D interactive images with World Wide Web technologies makes educational activities, and especially e-learning, more attractive. Some Web technologies used in creation of 3D interactivity were compared in order to choose
In this paper a three-dimensional (3D) image processing expert system called 3D-IMPRESS is presented. This system can automatically construct a 3D image processing procedure by using pairs of an original input image and a desired output figure, called a sample figure, given by a user. This paper describes the outline of 3D-IMPRESS and presents a method of procedure consolidation for
In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information of the 3D and 4D MRA image sequences. Initially, in the 3D MRA dataset the vessel system is segmented and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fitting of the temporal intensity curves to a patient individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets the extracted hemodynamic information is transferred to the surface model where the time points of inflow can be visualized color coded dynamically over time. The dynamic visualizations computed using the curve fitting method for the estimation of the bolus arrival times were rated superior compared to those computed using conventional approaches for bolus arrival time estimation. In summary the procedure suggested allows a dynamic visualization of the individual hemodynamic situation and better understanding during the visual evaluation of cerebral vascular diseases.
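The voxelwise curve fitting for bolus arrival times can be illustrated with a gamma-variate model, a common choice for bolus-passage curves. Note the paper fits against a patient-individual reference curve instead; the model, initial guess, and parameter bounds below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, t0, A, alpha, beta):
    """Classic gamma-variate bolus model; identically zero before
    the arrival time t0."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

def bolus_arrival_time(t, signal, p0=(1.0, 1.0, 2.0, 1.0)):
    """Estimate the bolus arrival time of one voxel's intensity-time
    curve by least-squares fitting of the gamma-variate model.
    Parameter bounds keep the fit in a physically plausible regime."""
    bounds = ([0.0, 0.0, 0.1, 0.1], [20.0, 50.0, 10.0, 10.0])
    popt, _ = curve_fit(gamma_variate, t, signal,
                        p0=p0, bounds=bounds, maxfev=20000)
    return popt[0]  # fitted t0
```

Mapping the fitted t0 per voxel onto the segmented 3D vessel surface, and color-coding it, gives the kind of dynamic inflow visualization the abstract describes.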
One of the main advantages of nonlinear microscopy is that it provides 3D imaging capability. Second harmonic generation is widely used to image the 3D structure of collagen fibers, and several works have highlighted the modification of the collagen fiber fabric in important diseases. By using an ellipsoid-specific fitting technique on the Fourier-transformed image, we show, using both synthetic images and SHG images from cartilage, that the 3D direction of the collagen fibers can be robustly determined.
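The Fourier-domain idea admits a moment-based variant: fibers elongated along a direction v concentrate spectral energy in the plane perpendicular to v, so v can be read off as the eigenvector of the spectrum's second-moment tensor with the smallest eigenvalue. This is a simplification of the ellipsoid-specific fit referred to above, not the authors' estimator.

```python
import numpy as np

def fiber_direction(volume):
    """Estimate the dominant 3-D fiber direction of an image volume
    from the second-moment tensor of its power spectrum."""
    spec = np.abs(np.fft.fftn(volume)) ** 2
    spec[0, 0, 0] = 0.0                      # drop the DC (mean) term
    freqs = [np.fft.fftfreq(n) for n in volume.shape]
    fz, fy, fx = np.meshgrid(*freqs, indexing="ij")
    coords = np.stack([fz.ravel(), fy.ravel(), fx.ravel()])
    w = spec.ravel()
    m = (coords * w) @ coords.T / w.sum()    # 3x3 second-moment tensor
    vals, vecs = np.linalg.eigh(m)           # eigenvalues ascending
    return vecs[:, 0]  # direction of least spectral spread = fiber axis
```

Because only power-spectrum moments are used, the estimate is insensitive to the phase of the texture, i.e. to where the fibers sit in the field of view.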
It is often necessary to register partial objects in medical imaging. Due to a limited field of view (FOV), the entirety of an object cannot always be imaged. This study presents a novel application of an existing registration algorithm to this problem. The spin-image algorithm creates pose-invariant representations of global shape with respect to individual mesh vertices. These 'spin-images' are then compared for two different poses of the same object to establish correspondences and subsequently determine the relative orientation of the poses. In this study, the spin-image algorithm is applied to 4DCT-derived capitate bone surfaces to assess the relative accuracy of registration with various amounts of geometry excluded. The limited longitudinal coverage of the 4DCT technique (38.4 mm) results in partial views of the capitate when imaging wrist motions. This study assesses the ability of the spin-image algorithm to register partial bone surfaces by artificially restricting the capitate geometry available for registration. Under IRB approval, standard static CT and 4DCT scans were obtained on a patient. The capitate was segmented from the static CT and from one phase of the 4DCT in which the whole bone was available. Spin-image registration was performed between the static CT and 4DCT. Distal portions of the 4DCT capitate (10-70%) were then progressively removed and registration was repeated. Registration accuracy was evaluated by angular errors and the percentage of sub-resolution fitting. It was determined that 60% of the distal capitate could be omitted without appreciable effect on registration accuracy using the spin-image algorithm (angular error < 1.5 degrees, sub-resolution fitting < 98.4%).
Breighner, Ryan; Holmes, David R.; Leng, Shuai; An, Kai-Nan; McCollough, Cynthia; Zhao, Kristin
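The spin-image construction referenced above can be sketched compactly: for an oriented basis point (p, n), every surface point is mapped to its radial distance from the normal's axis (alpha) and its signed height along the normal (beta), and the pairs are binned into a 2-D histogram. The bin count and support radius below are arbitrary illustrative choices:

```python
import numpy as np

def spin_image(points, p, n, bins=8, radius=2.0):
    """Pose-invariant 2-D histogram around an oriented point (p, n):
    beta = signed distance along the normal, alpha = radial distance
    from the normal's axis.  Both are preserved by rigid motions."""
    d = points - p
    beta = d @ n
    alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta ** 2, 0.0))
    hist, _, _ = np.histogram2d(alpha, beta, bins=bins,
                                range=[[0.0, radius], [-radius, radius]])
    return hist

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
p, n = pts[0], np.array([0.0, 0.0, 1.0])

# Rotating the whole scene leaves the spin-image unchanged.
c, s = np.cos(0.7), np.sin(0.7)
R = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])  # about x-axis
h0 = spin_image(pts, p, n)
h1 = spin_image(pts @ R.T, R @ p, R @ n)
print(np.allclose(h0, h1))  # -> True
```

The pose invariance shown in the last line is precisely what lets spin-images of two different poses be compared directly to establish correspondences.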
The V3D system provides three-dimensional (3D) visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid
Zongcai Ruan; Fuhui Long; Julie H Simpson; Eugene W Myers; Hanchuan Peng
The cognitive strength of crosstalk in stereoscopic 3-D displays is investigated, and new quantitative analysis methods based on color difference and grayscale levels are developed. Unlike results using existing metrics, results by the new methods agree well with the perceived crosstalk strength in achromatic images with various levels of grayscale. The crosstalk in color images, which has not been studied before, exhibits interesting results in that the crosstalk metric based on the lightness difference expresses the best fit with the perceptual crosstalk when the intended image is black and the chroma value of the counterpart image is large, but the metric using the color difference works better when the intended image is not black. The new metrics reveal that the difference between active and passive 3-D displays is not as large as suggested by conventional crosstalk metrics, and the crosstalk in color images cannot be simply estimated by averaging the crosstalk of red, green, and blue subpixels. The new metrics will be useful in the development of new image processing technology and display technology for better image quality. PMID:22389147
Nowadays, the techniques for 3D computer graphics are developing dynamically. They find application not only in computer games, but often in three-dimensional data visualization systems. Applying these techniques makes graphical operations more effective and, consequently, such systems more efficient. The paper presents a system for 3D seafloor visualization using multibeam sonar data. In the presented system three
KRZYSZTOF BIKONIS; MAREK MOSZYNSKI; ANDRZEJ CHYBICKI; PIOTR KOCINSKI
In recent years 3D information has become more easily available. Users' needs are constantly increasing, adapting to this reality, and 3D maps are in greater demand. 3D models of the terrain in CAD or other environments have long been common practice; however, one is bound to the computer screen. This is why contemporary digital methods have been developed to produce portable and, hence, handier 3D maps of various forms. This paper deals with the implementation of the procedures necessary to produce holographic 3D maps and three-dimensionally printed maps. The main objective is the production of three-dimensional maps from high-resolution aerial and/or satellite imagery using both holography and 3D printing methods. The island of Antiparos was chosen as the study area, as suitable data were readily available: two stereo pairs of Geoeye-1 imagery and a high-resolution DTM of the island. Firstly the theoretical bases of holography and 3D printing are described, and the two methods are analyzed and their implementation is explained. In practice, an x-axis-parallax holographic map and a full-parallax (x-axis and y-axis) holographic map of Antiparos Island are created and printed using the holographic method. Moreover, a three-dimensionally printed map of the study area has been created using the 3dp (3D printing) method. The results are evaluated for their usefulness and efficiency.
Retinal images are routinely acquired and assessed to provide diagnostic evidence for many important diseases, e.g. diabetes or hypertension. Because of the acquisition process, very often these images are non-uniformly illuminated and exhibit local luminosity and contrast variability. This problem may seriously affect the diagnostic process and its outcome, especially if an automatic computer-based procedure is used to derive diagnostic
IMAGE-BASED DIGITIZING: 3D rendering tasks are now made easier thanks to the numerous 3D graphics cards available on the market, but automatic digitizing of 3D objects is the key challenge for today's 3D computer graphics development. The goal of this paper is to present research results and new avenues towards direct 3D digitizing with multi-view cameras, in order to design
The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions, based on standard X-ray images, is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to the X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results, with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53+/-0.30 mm distance error.
Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.
SPECT systems based on 2-D detectors for projection data collection and filtered back-projection image reconstruction have the potential for true 3-D imaging, providing contiguous slice images in any orientation. Anger camera-based SPECT systems have the natural advantage of supporting planar imaging clinical procedures. However, current systems suffer from two drawbacks: poor utilization of emitted photons, and inadequate system design for SPECT.
C. B. Lim; S. Gottschalk; R. Walker; R. Schreiner; F. Valentino; C. Pinkstaff; J. Janzo; J. Covic; A. Perusek; J. Anderson; K. I. Kim; D. Shand; K. Coulman; S. King; D. Styblo
Conventional 3D-TV codecs processing one down-compatible (either left, or right) channel may optionally include the extraction of the disparity field associated with the stereo-pairs to support the coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo-pairs to reduce the redundancies in both channels, and according
Armando Chiari; Bruno Ciciani; Milton Romero; Riccardo Rossi
The utilisation of 3D computer graphics technologies in the domain of pottery analysis can enhance archaeological research in terms of data management, indexing and shape matching. In this paper, we attempt to reduce the dimensions of the 3D vessel shape matching problem in order to create Web-enabled compact shape descriptors applicable for content-based retrieval systems. This is achieved by exploiting
Aims: To demonstrate ultrahigh-resolution, three-dimensional optical coherence tomography (3D-OCT) and projection OCT fundus imaging for enhanced visualisation of outer retinal pathology in non-exudative age-related macular degeneration (AMD). Methods: A high-speed, 3.5 µm resolution OCT prototype instrument was developed for the ophthalmic clinic. Eighty-three patients with non-exudative AMD were imaged. Projection OCT fundus images were generated from 3D-OCT data by selectively summing different retinal depth levels. Results were compared with standard ophthalmic examination, including fundus photography and fluorescein angiography, when indicated. Results: Projection OCT fundus imaging enhanced the visualisation of outer retinal pathology in non-exudative AMD. Different types of drusen exhibited distinct features in projection OCT images. Photoreceptor disruption was indicated by loss of the photoreceptor inner/outer segment (IS/OS) boundary and external limiting membrane (ELM). RPE atrophy can be assessed using choroid-level projection OCT images. Conclusions: Projection OCT fundus imaging facilitates rapid interpretation of large 3D-OCT data sets. Projection OCT enhances contrast and visualises outer retinal pathology not visible with standard fundus imaging or OCT fundus imaging. Projection OCT fundus images enable registration with standard ophthalmic diagnostics and cross-sectional OCT images. Outer retinal alterations can be assessed, and drusen morphology, photoreceptor impairment and pigmentary abnormalities identified.
Gorczynska, I; Srinivasan, V J; Vuong, L N; Chen, R W S; Liu, J J; Reichel, E; Wojtkowski, M; Schuman, J S; Duker, J S; Fujimoto, J G
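Projection OCT fundus images as described above are obtained by summing selected depth levels of the 3D-OCT volume. A minimal sketch (the axis convention and function name are assumptions):

```python
import numpy as np

def projection_fundus(volume, z_start, z_stop):
    """Collapse a 3-D OCT volume (axis 0 = depth) into an en-face
    fundus image by summing a chosen range of depth levels, e.g. an
    outer-retinal or choroid-level slab."""
    return volume[z_start:z_stop].sum(axis=0)

# Synthetic volume: a bright 'lesion' confined to depths 10..19.
vol = np.zeros((32, 64, 64))
vol[10:20, 30:40, 30:40] = 1.0
enface = projection_fundus(vol, 10, 20)
print(enface.max())  # -> 10.0
```

Choosing different (z_start, z_stop) slabs is what yields the photoreceptor-level or choroid-level views mentioned in the abstract.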
A method for the correction of non-homogeneous illumination based on optimization of the parameters of a B-spline shading model with respect to Shannon's entropy is presented. The evaluation of Shannon's entropy is based on the Parzen windowing method (Mangin, 2000) with the spline-based shading model. This allows us to express the derivatives of the entropy criterion analytically, which enables efficient use of gradient-based optimization algorithms. Seven different gradient- and non-gradient-based optimization algorithms were initially tested on a set of 40 simulated retinal images, generated by a model of the respective image acquisition system. Among the tested optimizers, the gradient-based optimizer with varying step size showed the fastest convergence while providing the best precision. The final algorithm proved able to suppress approximately 70% of the artificially introduced non-homogeneous illumination. To assess the practical utility of the method, it was qualitatively tested on a set of 336 real retinal images; it proved able to substantially eliminate the illumination inhomogeneity in most cases. The application field of this method is especially the preprocessing of retinal images, in preparation for reliable segmentation or registration.
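The entropy-optimised B-spline model itself is beyond a short sketch, but the underlying idea of estimating and dividing out a smooth shading field can be illustrated with a simpler least-squares polynomial surrogate (the polynomial degree and function name are illustrative assumptions, not the paper's method):

```python
import numpy as np

def correct_illumination(img, degree=2):
    """Fit a low-order 2-D polynomial to the image intensities by
    least squares and divide it out -- a simplified stand-in for an
    optimised spline-based shading model."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w
    y = yy.ravel() / h
    cols = [x ** i * y ** j
            for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    shading = (A @ coef).reshape(h, w)
    return img / np.maximum(shading, 1e-6)

# A flat scene under a linear illumination gradient is restored to
# (approximately) constant intensity.
h, w = 64, 64
gradient = 1.0 + np.linspace(0.0, 1.0, w)[None, :] * np.ones((h, 1))
corrected = correct_illumination(gradient * 5.0)
print(corrected.std() < 1e-6)  # -> True
```

Real retinal images of course contain structure, which is why the paper optimises an information-theoretic criterion rather than fitting the raw intensities directly.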
Objective: To segment 3D ultrasound image data, comprising extraction of the surfaces of interest and smoothing of the segmented image, in order to estimate the surface area and volume of segmented 3D objects (e.g. a fetus). Method: a) Seeded region growing (SRG) together with connectivity and marching cubes algorithms is used to segment the three-dimensional (3D) ultrasound image data (1); b) Using
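The seeded region growing step mentioned in the method can be sketched as a breadth-first flood fill over 6-connected voxels (the intensity-tolerance acceptance criterion is an illustrative simplification):

```python
import numpy as np
from collections import deque

def seeded_region_growing(vol, seed, tol):
    """Grow a 3-D region from a seed voxel, accepting 6-connected
    neighbours whose intensity is within `tol` of the seed value."""
    grown = np.zeros(vol.shape, dtype=bool)
    grown[seed] = True
    ref = vol[seed]
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                    and not grown[n] and abs(vol[n] - ref) <= tol:
                grown[n] = True
                queue.append(n)
    return grown

# A bright cube in a dark volume is recovered exactly.
vol = np.zeros((20, 20, 20))
vol[5:10, 5:10, 5:10] = 1.0
mask = seeded_region_growing(vol, (7, 7, 7), tol=0.5)
print(mask.sum())  # -> 125
```

The resulting binary mask is what a marching cubes step would then turn into a triangulated surface for area and volume estimation.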
A comprehensive series of routines has been developed to extract structural and topological information from 3D images of porous media. The main application aims at feeding a pore-network approach to simulate unsaturated hydraulic properties from soil core images. Beyond the application example, the successive algorithms presented in the paper allow, from any 3D object image, the extraction of the
It is difficult to directly coregister the 3-D fluorescence molecular tomography (FMT) image of a small tumor in a mouse, whose maximal diameter is only a few millimeters, with a larger CT image of the entire animal that spans about 10 cm. This paper proposes a new method that registers the 2-D flat image and the 3-D CT image first to facilitate the
Zheng Xia; Xishi Huang; Xiaobo Zhou; Youxian Sun; V. Ntziachristos; Stephen Wong
Three-dimensional (3D) acoustic imaging is a highly developed technology that has produced detailed images of the subsurface at over 30 hazardous waste sites. 3D imaging has been used to provide the density of data necessary to analyze the pathways for fluid transport, whether in free phase or as a dissolved plume. This information has then been used to optimally
3D imaging has been widely used within various fields of dentistry to aid diagnosis, in treatment planning and appliance construction. Whereas traditionally this has involved the use of impression materials together with plaster or stone models, modern techniques are continually evolving which use virtual 3D images. These electronic virtual images are created using either contact or non-contact optical scanning techniques,
C. McNamara; M. J. Clover; K. House; N. Wenger; M. E. Barbour; K. Alemzadeh; L. Zhang; J. R. Sandy; A. J. Ireland
In this paper we present a system which automatically generates a 3D face model from a single frontal image of a face with the help of a generic 3D model. Our system consists of three components. The first component detects features such as the eyes, mouth, eyebrows and contour of the face. After detecting the features, the second component automatically adapts the generic 3D
We present a technique for automatic intensity-based image-to-physical registration of a 3-D segmentation for image-guided interventions. The registration aligns the segmentation with tracked and calibrated 3-D ultrasound (US) images of the target region. The technique uses a probabilistic framework and explicitly incorporates a model of the US image acquisition process. The rigid-body registration parameters are varied to maximise the
Andrew P. King; Ying-liang Ma; Cheng Yao; Christian Jansen; Reza Razavi; Kawal S. Rhode; Graeme P. Penney
In this paper, we propose an improved AT-3D SPIHT algorithm for lossy-to-lossless compression of hyperspectral images. Based on the characteristics of Xiong's 3D integer wavelet packet transform and AT-3D SPIHT's zerotree structure, we construct a more effective asymmetric 3D zerotree structure which not only has longer wavelet zerotrees but also clusters wavelet zero coefficients more efficiently. Experimental results show
The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10-100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. References: Bust, G. S., and G. Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007), Four-Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE), presented at CEDAR, Santa Fe, New Mexico, July 1.
Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.
We present two statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes
Maxime Berar; Michel Desvignes; Gérard Bailly; Yohan Payan
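The abstract does not detail the statistical model construction; a common choice for statistical shape models over registered meshes is PCA on the stacked vertex coordinates, sketched below on synthetic data (function names and the toy one-mode dataset are illustrative assumptions):

```python
import numpy as np

def build_shape_model(shapes):
    """PCA statistical shape model over corresponding mesh vertices.
    `shapes` is (n_samples, n_vertices*3); returns the mean shape,
    the modes of variation (rows of vt) and their singular values."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centred data matrix gives the principal modes.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt, s

rng = np.random.default_rng(1)
base = rng.normal(size=300)          # a 100-vertex "mean" shape
mode = rng.normal(size=300)          # one true mode of variation
weights = rng.normal(size=(10, 1))
shapes = base + weights * mode       # 10 training shapes

mean, modes, s = build_shape_model(shapes)
# One dominant mode explains essentially all variance.
print(s[0] ** 2 / (s ** 2).sum() > 0.999)  # -> True
```

Truncating to the leading modes is what yields reconstructions "together with statistical precision": each retained singular value quantifies how much anatomical variability that mode carries.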
A compact, high-speed 3D human body measurement system is proposed. Whole-body data cannot be successfully acquired by previously developed systems with only a few viewpoints, due to occlusion. A method is proposed that obtains complete data by correctly allocating multiple rangefinders. Four compact rangefinders are installed on a pole. Those
M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to the modeling of anatomic objects, producing models that can be used to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, defined at coarse scale by a hierarchy of figures
Stephen M. Pizer; P. Thomas Fletcher; Yonatan Fridman; Daniel S. Fritsch; A. Graham Gash; John M. Glotzer; Sarang Joshi; Andrew Thall; Gregg Tracton; Paul Yushkevich; Edward L. Chaney
The optics of the eye form an image on a surface at the back of the eyeball called the retina. The retina contains the photoreceptors that sample the image and convert it into a neural signal. The spacing of the photoreceptors in the retina is not uniform...
BACKGROUND: The morphological changes of the retinal blood vessels in retinal images are important indicators for diseases like diabetes, hypertension and glaucoma. Thus the accurate segmentation of blood vessels is of diagnostic value. METHODS: In this paper, we present a novel method to segment retinal blood vessels that overcomes the variations in contrast of large and thin vessels. This method
We developed a cryomicrotome/imaging system that provides high-resolution, high-sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 µm thickness and acquired high-resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 µm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier to interpret image data. The combination of field of view, depth of field, ultra-high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases like blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities.
In this paper, we present methods to improve resolution, viewing angle, and image depth in integral imaging (II). In II, three-dimensional (3-D) images are formed by integrating the rays coming from two-dimensional (2-D) elemental images using a lenslet (or pinhole) array. In II, one of the most fundamental factors that limits the resolution of the reconstructed 3-D image is the
The technology for 3D model design of real-world scenes and their photorealistic rendering is a current topic of investigation. Development of such technology is attractive for a wide variety of applications: military mission planning, crew training, civil engineering, architecture, and virtual reality entertainment, to name just a few. 3D photorealistic models of urban areas are now often discussed as an upgrade to existing 2D geographic information systems. The possibility of site model generation with small details depends on two main factors: the available source dataset and computing resources. In this paper a PC-based technology is presented, so that scenes of middle resolution (scale of 1:1000) can be constructed. The datasets are gray-level aerial stereo pairs of photographs (scale of 1:14000) and true-color ground photographs of buildings (scale ca. 1:1000). True-color terrestrial photographs are also necessary for photorealistic rendering, which greatly improves human perception of the scene.
Zheltov, Sergei Y.; Blokhinov, Yuri B.; Stepanov, Alexander A.; Skryabin, Sergei V.; Sibiryakov, Alexander V.
M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures—each figure generally a
Stephen M. Pizer; P. Thomas Fletcher; Sarang C. Joshi; Andrew Thall; James Z. Chen; Yonatan Fridman; Daniel S. Fritsch; A. Graham Gash; John M. Glotzer; Michael R. Jiroutek; Conglin Lu; Keith E. Muller; Gregg Tracton; Paul A. Yushkevich; Edward L. Chaney
Hyperspectral images are generated by collecting hundreds of narrow and contiguously spaced spectral bands of data, producing a highly correlated long sequence of images. Application-specific data compression techniques may be applied advantageously before we process, store or transmit hyperspectral images. This paper applies asymmetric tree 3D SPIHT (AT-3D SPIHT) to hyperspectral image compression; it also investigates and compares the performance
We developed an action game based on the interaction between a real-time 3D avatar of the player and the virtual game characters. The 3D avatar is reconstructed from multi-view images of the player with image-based modeling and rendering techniques. The 3D avatar can be dynamically reconstructed and rendered in real time by using the hardware-accelerated visual hull (HAVH) method. The
The purpose of this paper is to develop an advanced 3-D profile sensor system able to accurately measure 3-D free-form machined metal surfaces. The proposed sensor system has many advantages compared with conventional measuring systems. First, a new detection system for optical ring images, using a rotating image detector, is developed to measure 3-D profiles
Saburo Okada; Masaaki Imade; H. Miyauchi; K. Isomoto; T. Miyoshi; Tetsuhiro Sumimoto; Hideki Yamamoto
This paper highlights recent advances in image compression aided by 3-D geometry information. As two examples, we present a model-aided video coder for efficient compression of head-and-shoulder scenes and a geometry-aided coder for 4-D light fields for image-based rendering. Both examples illustrate that an explicit representation of 3-D geometry is advantageous if many views of the same 3-D
Bernd Girod; Peter Eisert; Marcus A. Magnor; Eckehard G. Steinbach; Thomas Wiegand
In this paper, we propose a three-dimensional (3D) video player system that provides haptic interaction with objects in a video scene based on a depth image-based representation. In order to represent dynamic 3D scenes, 3D video media combining general color video and synchronized depth video (depth image sequences) containing per-pixel depth information were exploited. With the proposed system, viewers can
Aim: To identify retinal exudates automatically from colour retinal images. Methods: The colour retinal images were segmented using fuzzy C-means clustering following some key preprocessing steps. To classify the segmented regions into exudates and non-exudates, an artificial neural network classifier was investigated. Results: The proposed system can achieve a diagnostic accuracy with 95.0% sensitivity and 88.9% specificity for the identification of images
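The fuzzy C-means clustering step can be sketched directly from its standard update equations (the fuzziness parameter m, iteration count and the 1-D toy data are illustrative assumptions):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy C-means: returns soft memberships u (n_samples x c) and
    cluster centres, minimising sum_ij u_ij^m ||x_i - v_j||^2."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m                                   # fuzzified memberships
        centres = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))            # inverse-distance weights
        u /= u.sum(axis=1, keepdims=True)
    return u, centres

# Two well-separated intensity clusters ("exudate" vs background).
X = np.concatenate([np.full((50, 1), 0.1), np.full((50, 1), 0.9)])
u, centres = fuzzy_c_means(X)
labels = u.argmax(axis=1)
print(labels[0] != labels[-1])  # -> True
```

On retinal images the feature vectors would be per-pixel colour/intensity values rather than this toy 1-D data, and the soft memberships feed the downstream neural network classifier.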
Due to its convenience and non-invasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the noisy and blurry nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted structures usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. Besides, in order to accelerate rendering, a thin shell is defined to separate the observed organ from unrelated structures based on the detected contours. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.
This paper describes a model-based vision system in which a commercial 3-D computer graphics system has been used for object modeling and visual clue generation. Given the computer-generated model image (i.e., color, depth, ...), a conventional CCD camera image, and the corresponding scanned 3-D dense range map of the real scene, the object can be located in it.
A Monte Carlo simulation has been developed to simulate and correct for the effect of Compton scatter in 3-D acquired PET brain scans. The method utilizes the 3-D reconstructed image volume as the source intensity distribution for a photon-tracking Monte Carlo simulation. It is assumed that the number of events in each pixel of the image represents the isotope concentration
Automatic and accurate segmentation of the pulmonary vessels in 3D computed tomographic angiographic images (CTPA) is an essential step for computerized detection of pulmonary embolism (PE) because PEs only occur inside the pulmonary arteries. We are developing an automated method to segment the pulmonary vessels in 3D CTPA images. The lung region is first extracted using thresholding and morphological operations.
Chuan Zhou; Heang-Ping Chan; Lubomir M. Hadjiiski; Smita Patel; Philip N. Cascade; Berkman Sahiner; Jun Wei; Jun Ge; Ella A. Kazerooni
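The thresholding and morphological operations used above for lung-region extraction can be illustrated with a minimal numpy-only 3-D opening (real pipelines would use larger structuring elements and handle image borders; np.roll's wrap-around is acceptable here only because the test object is interior):

```python
import numpy as np

def binary_erosion_3d(mask):
    """6-connected binary erosion implemented with array shifts."""
    out = mask.copy()
    for axis in range(3):
        for shift in (1, -1):
            out &= np.roll(mask, shift, axis=axis)
    return out

def binary_dilation_3d(mask):
    """6-connected binary dilation implemented with array shifts."""
    out = mask.copy()
    for axis in range(3):
        for shift in (1, -1):
            out |= np.roll(mask, shift, axis=axis)
    return out

def extract_region(vol, threshold):
    """Threshold, then one opening (erosion followed by dilation) to
    remove speckle smaller than the structuring element."""
    mask = vol > threshold
    return binary_dilation_3d(binary_erosion_3d(mask))

vol = np.zeros((16, 16, 16))
vol[4:12, 4:12, 4:12] = 1.0        # the "lung region"
vol[1, 1, 1] = 1.0                 # an isolated noise voxel
mask = extract_region(vol, 0.5)
print(mask[1, 1, 1], mask[8, 8, 8])  # -> False True
```

The opening discards the isolated voxel while preserving the large connected region, which is the purpose these operations serve in the CTPA pre-segmentation.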
Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from
This paper provides a framework for generating high-resolution time sequences of 3D images that show the dynamics of cerebral blood flow. These sequences have the potential to allow image feedback during medical procedures that facilitates the detection and observation of pathological abnormalities such as stenoses, aneurysms, and blood clots. The 3D time series is constructed by fusing a single
Andrew D. Copeland; Rami S. Mangoubi; Mukund N. Desai; Sanjoy K. Mitter; Adel M. Malek
We have evaluated eight different similarity measures used for rigid-body registration of serial magnetic resonance (MR) brain scans. To assess their accuracy we used 33 clinical three-dimensional (3-D) serial MR images, with deformable extradural tissue excluded by manual segmentation, and simulated 3-D MR images with added intensity distortion. For each measure we determined the consistency of registration
Mark Holden; Derek L. G. Hill; Erika R. E. Denton; Jo M. Jarosz; Tim C. S. Cox; Torsten Rohlfing; Joanne Goodey; David J. Hawkes
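One widely used voxel-similarity measure of the kind evaluated above is normalised mutual information, computed from a joint intensity histogram. A minimal sketch (the bin count is an arbitrary choice, and this is not claimed to match the paper's exact formulation of its eight measures):

```python
import numpy as np

def normalised_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A,B), estimated from a joint intensity
    histogram of two images of identical shape."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
# An image is maximally similar to itself; unrelated noise scores lower.
print(normalised_mutual_information(img, img) >
      normalised_mutual_information(img, noise))  # -> True
```

A registration optimiser varies the six rigid-body parameters and resamples one image, seeking the pose that maximises this score.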
A large number of 3D deep seismic surveys in the Faroe-Shetland Channel gives continuous coverage over most of the region. These surveys were designed primarily to image depths in excess of 4 km, use low frequency sources and are recorded at low temporal sample rates. However, commercial 3D data can generate highly detailed images of the seabed due to the
This paper describes an efficient algorithm for the segmentation of echo clusters within a dynamic 3-D sonar image. The sensor-centered image is an echo management framework grouping sonar returns into spherical cells, and allowing real-time organisation of 3-D range data using inexpensive equipment. Each cell acts as a spatial key to the features related to this location. The spherical
In this paper, we propose an algorithm for lossy adaptive encoding of digital three-dimensional (3D) images based on singular value decomposition (SVD). This encoding allows us to design algorithms for progressive transmission and reconstruction of the 3D image, for one or several selected regions of interest (ROI) avoiding redundancy in data transmission. The main characteristic of the proposed algorithms is
Ismael Baeza; José-Antonio Verdoy; Rafael-Jacinto Villanueva; Javier Villanueva-Oller
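The SVD-based progressive scheme described in the record above can be illustrated with a minimal sketch: a generic rank-truncation codec in NumPy where each additional singular component refines the reconstruction. The ranks, test image, and function names are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def svd_progressive(image, ranks):
    """Yield successively better reconstructions of `image` by truncating
    its SVD at increasing ranks (generic progressive-SVD sketch)."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    for r in ranks:
        # Rank-r approximation: keep only the r largest singular values.
        yield U[:, :r] * s[:r] @ Vt[:r, :]

rng = np.random.default_rng(0)
img = rng.random((32, 32))
errors = [np.linalg.norm(img - rec) for rec in svd_progressive(img, [1, 4, 16, 32])]
# Error shrinks monotonically as more singular components "arrive".
assert errors == sorted(errors, reverse=True)
```

In a ROI-aware variant, the higher ranks would be transmitted only for the selected regions, which is what avoids redundancy in the scheme the record outlines.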
Ultrasound Current Source Density Imaging (UCSDI) potentially improves 3-D mapping of bioelectric sources in the body at high spatial resolution, which is especially important for diagnosing and guiding treatment for cardiac and neurologic disorders, including arrhythmia and epilepsy. In this study, we report 4-D imaging of a time varying electric dipole in saline. A 3-D dipole field was produced in
Zhaohui Wang; Ragnar Olafsson; Pier Ingram; Qian Li; Russell S. Witte
Three-dimensional (3D) microscopic imaging techniques such as confocal microscopy have become a common tool in measuring cellular structures. While computer volume visualization has advanced into a sophisticated level in medical applications, much fewer studies have been made on data acquired by the 3D microscopic imaging techniques. To optimize the visualization of such data, it is important to consider the data
In this paper, we propose an active vision strategy for the construction of a 3D map in a robot brain from its stereo eye images. We show that, by the direct combination of its action and the image change caused by the action, the robot can acquire an accurate 3D map in its brain. If the robot stereo cameras and its
Registration of pre- and intra-interventional data is one of the key technologies for image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy, and interventional radiology. In this paper, we survey those 3D/2D data registration methods that utilize 3D computer tomography or magnetic resonance images as the pre-interventional data and 2D X-ray projection images as the intra-interventional data. The 3D/2D registration methods are reviewed with respect to image modality, image dimensionality, registration basis, geometric transformation, user interaction, optimization procedure, subject, and object of registration. PMID:20452269
This is a self-directed learning module to introduce students to basic concepts of imaging technology as well as to give students practice going between 2D and 3D imaging using everyday objects.
Purpose. To report an imaging technique for measurement of oxygen tension (PO2) in retinal tissue and establish its feasibility for measuring retinal PO2 variations in rat eyes by adjusting the fraction of inspired oxygen (FiO2). Methods. A narrow laser line was projected at an angle on the retina, and phosphorescence emission was imaged after intravitreal injection of an oxygen-sensitive molecular probe. A frequency-domain approach was used for phosphorescence lifetime measurements. Retinal PO2 maps were computed from phosphorescence lifetime images, and oxygen profiles through the retinal depth were derived in rats in conditions of 10%, 21%, and 50% FiO2. Results. Retinal PO2 measurements were repeatable, and variations in outer and inner retina PO2 at different locations along the image were not significant (P ≥ 0.3). Maximum outer retinal PO2 obtained in 10%, 21%, and 50% FiO2 were significantly different (P < 0.0001). Maximum outer retinal PO2 correlated with systemic arterial PO2 (R = 0.70; P < 0.0001). The slope of the outer retina PO2 profile correlated with maximum outer retinal PO2 (R = 0.84; P < 0.0001). Mean inner retina PO2 correlated with maximum outer retinal PO2 (R = 0.88; P < 0.0001). Conclusions. A technique has been developed for quantitative mapping of retinal tissue oxygen tension with the potential to enable sequential monitoring of retinal oxygenation in health and disease.
Wanek, Justin; Blair, Norman P.; Little, Deborah M.; Wu, Tingting
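The frequency-domain lifetime measurement in the record above rests on two standard relations: tan(phase) = ωτ for the phosphorescence phase shift, and the Stern-Volmer law 1/τ = 1/τ0 + kq·PO2 for oxygen quenching. A small numeric sketch, with probe constants that are purely hypothetical placeholders (not values from the paper):

```python
import math

# Hypothetical probe constants for illustration only:
TAU0 = 6e-4   # unquenched phosphorescence lifetime, s
KQ = 300.0    # Stern-Volmer quenching constant, 1/(mmHg*s)

def lifetime_from_phase(phase_rad, mod_freq_hz):
    # Frequency-domain relation: tan(phase) = omega * tau
    omega = 2.0 * math.pi * mod_freq_hz
    return math.tan(phase_rad) / omega

def po2_from_lifetime(tau):
    # Stern-Volmer: 1/tau = 1/tau0 + KQ * PO2
    return (1.0 / tau - 1.0 / TAU0) / KQ

tau = lifetime_from_phase(math.radians(20.0), mod_freq_hz=1000.0)
po2 = po2_from_lifetime(tau)  # a physiologically plausible tension, mmHg
```

A PO2 map is then just this scalar computation applied per pixel of the phase image.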
Mining discriminative spatial patterns in image data is an emerging subject of interest in medical imaging, meteorology, engineering, biology, and other fields. In this paper, we propose a novel approach for detecting spatial regions that are highly discriminative among different classes of three dimensional (3D) image data. The main idea of our approach is to treat the initial 3D image
Vasileios Megalooikonomou; Despina Kontos; Dragoljub Pokrajac; Aleksandar Lazarevic; Zoran Obradovic
In this paper, we present a novel method for registering a 3D MR image to 2D X-ray images with the final goal to estimate the position and orientation of the patient during surgery or external beam radiotherapy. By using 3D MR preoperative images instead of CT images, we increase soft tissue contrast and reduce the dose delivered to the
We discuss a real-time coherence gated three-dimensional (3-D) imaging system, based on photorefractive holography with ultrashort pulses, which has been applied to imaging through turbid media with a view to developing biomedical instrumentation. Sub-100-µm depth-resolved images of 3-D objects embedded in a scattering medium have been obtained. Using a long integration time in rhodium-doped barium titanate (Rh:BaTiO3), an image of
R. Jones; N. P. Barry; Sam C. W. Hyde; Mary Tziraki; J. C. Dainty; Paul M. W. French; D. D. Nolte; K. M. Kwolek; Michael R. Melloch
Most of the existing approaches for landmark image classification utilize either holistic features or interest of points in the whole image to train the classification model, which may lead to unsatisfactory result due to involvement of much information non-located on the landmark in the training process. In this paper, we propose a novel approach to improve landmark image classification result
Conventional volume transmission holograms of a 3D scene were recorded on dichromated poly(acrylic acid) (DCPAA) films under 488 nm light. The holographic characterization and quality of reconstruction have been studied by varying the influencing parameters such as concentration of dichromate and electron donor, and the molecular weight of the polymer matrix. Ammonium and potassium dichromate have been employed to sensitize the poly(acrylic) matrix. The recorded hologram can be efficiently reconstructed either with red light or with low energy in the blue region without any post thermal or chemical processing.
Lemelin, Guylain; Jourdain, Anne; Manivannan, Gurusamy; Lessard, Roger A.
Current OCT devices provide three-dimensional (3D) in-vivo images of the human retina. The resulting very large data sets are difficult to manually assess. Automated segmentation is required to automatically process the data and produce images that are clinically useful and easy to interpret. In this paper, we present a method to segment the retinal layers in these images. Instead of using complex heuristics to define each layer, simple features are defined and machine learning classifiers are trained based on manually labeled examples. When applied to new data, these classifiers produce labels for every pixel. After regularization of the 3D labeled volume to produce a surface, this results in consistent, three-dimensionally segmented layers that match known retinal morphology. Six labels were defined, corresponding to the following layers: Vitreous, retinal nerve fiber layer (RNFL), ganglion cell layer & inner plexiform layer, inner nuclear layer & outer plexiform layer, photoreceptors & retinal pigment epithelium and choroid. For both normal and glaucomatous eyes that were imaged with a Spectralis (Heidelberg Engineering) OCT system, the five resulting interfaces were compared between automatic and manual segmentation. RMS errors for the top and bottom of the retina were between 4 and 6 µm, while the errors for intra-retinal interfaces were between 6 and 15 µm. The resulting total retinal thickness maps corresponded with known retinal morphology. RNFL thickness maps were compared to GDx (Carl Zeiss Meditec) thickness maps. Both maps were mostly consistent but local defects were better visualized in OCT-derived thickness maps. PMID:21698034
Vermeer, K A; van der Schoot, J; Lemij, H G; de Boer, J F
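The two-stage idea in the record above (a per-pixel classifier trained on labeled examples, followed by regularization into layer surfaces) can be sketched in miniature. A nearest-class-mean rule stands in for the paper's unspecified classifier; the synthetic "layers" and feature values are assumptions for illustration.

```python
import numpy as np

def train(features, labels):
    # One mean feature vector per class, learned from labeled pixels.
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(model, features):
    # Assign each pixel the label of the nearest class mean.
    classes = np.array(sorted(model))
    means = np.stack([model[c] for c in classes])
    d = np.linalg.norm(features[:, None, :] - means, axis=2)
    return classes[d.argmin(axis=1)]

# Synthetic training data: two "layers" with different mean intensity.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0.2, 0.05, (200, 1)),
                    rng.normal(0.8, 0.05, (200, 1))])
y = np.array([0] * 200 + [1] * 200)
model = train(X, y)
pred = classify(model, np.array([[0.25], [0.75]]))  # two unseen pixels
```

In the full method, a smoothing step over the 3D label volume would then extract one consistent interface per layer boundary.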
A fully automated segmentation of the endocardial surface was developed by integrating spatio-temporal information of 3D ultrasound image sequences. 3D echocardiographic image sequences of the left ventricle of five healthy children were obtained in transthoracic short/long axis view. 2D and 3D (adaptive) filtering was used to reduce speckle noise and optimize the distinction between blood and myocardium, while preserving sharpness
M. M. Nillesen; R. G. P. Lopata; I. H. Gerrits; L. Kapusta; H. J. Huisman; J. M. Thijssen; C. L. de Korte
In this paper, as an approach for a wide 3D real image display system without special glasses, a 100" Fresnel lens-based 3D real-projection display system is implemented and its physical size is designed to be 2800×2800×1600 mm in length, width and depth, respectively. In this display system, the conventional 2D video image is projected into the air through some projection optics and a pair of Fresnel lenses and as a result, it can form a floating video image having a real depth. From some experiments with the test video images, the floated 3D video images with some depth have been realistically viewed, in which the forward depth of the floated 3D image from the display screen is found to be 35~47 inches and the viewing angle to be 60 degrees, respectively. This feasibility test for the prototype of 100" Fresnel lens-based 3D real image rear-projection display systems suggests a possibility of its practical applications to 3D advertisements, 3D animations, 3D games and so on.
Jang, Sun-Joo; Kim, Seung-Chul; Koo, Jung-Sik; Park, Jung-Il; Kim, Eun-Soo
Time-reverse imaging has become an efficient tool to detect the origin of passively acquired seismic tremor signals. Practical experience has mainly been developed for 2D applications. Three component signals are reduced to two components and reverse propagated on the plane vertically below a station line. The data used for time-reverse imaging are the vertical and the horizontal particle displacement parallel to the line. Dropping the horizontal component perpendicular to the line causes partial loss of information on particle motion and directivity of the recorded waves. We present a comparison of 2D and 3D time-reverse imaging for a specific site with small cross-line gradients and investigate how closely 2D imaging approximates 3D imaging. Our large-scale synthetic survey with different S/N-ratios demonstrates how a subsurface source of tremor-like signals is imaged in different vertical planes. An imaging condition based on the energy density gives best results. We observe higher sensitivity to noise and stronger out-of-plane focusing for 2D than for 3D imaging. We suggest normalized visualization of multiple planes from 2D imaging in one 3D display as an approach to reliably locate sources. Comparison with examples of full 3D time-reverse imaging shows that normalized visualization of multiple 2D planes with a proper imaging condition can adequately approximate the result from full 3D imaging for the particular model considered in this study.
Spatial registration and fusion of ultrasound (US) images with other modalities may aid clinical interpretation. We implemented and evaluated on patient data an automated retrospective registration of magnetic resonance angiography (MRA) carotid bifurcation images with 3-D power Doppler ultrasound (PD US) and indirectly with 3-D B-mode US. Volumes were initially thresholded to reduce the uncorrelated noise signals. The registration algorithm
Piotr J. Slomka; Jonathan Mandel; Donal Downey; Aaron Fenster
Recently, the computer generated hologram (CGH) calculated from real existing objects is more actively investigated to support holographic video and TV applications. In this paper, we propose a method of generating a hologram of the natural 3-D scene from multi-view images in order to provide motion parallax viewing with a suitable navigation range. After a unified 3-D point source set describing the captured 3-D scene is obtained from multi-view images, a hologram pattern supporting motion-parallax is calculated from the set using a point-based CGH method. We confirmed that 3-D scenes are faithfully reconstructed using numerical reconstruction.
Chang, Eun-Young; Kang, Yun-Suk; Moon, KyungAe; Ho, Yo-Sung; Kim, Jinwoong
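The point-based CGH calculation mentioned in the record above sums a spherical wave exp(ikr)/r from each 3D point at every hologram pixel; the interference pattern is the hologram. A small sketch, where the wavelength, pixel pitch, grid size, and point positions are all illustrative assumptions:

```python
import numpy as np

WAVELENGTH = 532e-9            # assumed green laser, m
K = 2 * np.pi / WAVELENGTH     # wavenumber
PITCH = 8e-6                   # assumed hologram pixel pitch, m

def hologram(points, n=64):
    """Complex hologram field from a 3D point-source set (point-based CGH)."""
    ys, xs = np.mgrid[:n, :n] * PITCH
    field = np.zeros((n, n), dtype=complex)
    for px, py, pz in points:  # pz: point distance from the hologram plane
        r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
        field += np.exp(1j * K * r) / r  # spherical wave contribution
    return field

# Two hypothetical scene points in front of the hologram plane.
H = hologram([(2e-4, 2e-4, 0.05), (3e-4, 3e-4, 0.06)])
```

In the pipeline the record describes, the point set would come from multi-view depth estimation, and numerical back-propagation of `H` would reconstruct the scene.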
In this paper, we propose photon counting three-dimensional (3D) passive sensing and object recognition using integral imaging. The application of this approach to 3D automatic target recognition (ATR) is investigated using both linear and nonlinear matched filters. We find there is significant potential of the proposed system for 3D sensing and recognition with a low number of photons. The discrimination capability of the proposed system is quantified in terms of discrimination ratio, Fisher ratio, and receiver operating characteristic (ROC) curves. To the best of our knowledge, this is the first report on photon counting 3D passive sensing and ATR with integral imaging.
SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.
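The stereoscopic height idea in the record above can be reduced to a toy model: in a ground-plane image, a scatterer at height h lays over toward the radar by roughly d = h·tan(depression angle), so two passes at different depression angles yield two displacements from which h falls out. This ignores squint and target bearing, which the record notes also matter.

```python
import math

def height_from_layover(d1, d2, dep1_deg, dep2_deg):
    """Recover target height from layover displacements measured in two
    passes with different depression angles (simplified broadside model)."""
    t1 = math.tan(math.radians(dep1_deg))
    t2 = math.tan(math.radians(dep2_deg))
    return (d1 - d2) / (t1 - t2)

# Synthetic check: a 10 m target observed at 45 and 30 degrees depression.
h_true = 10.0
d1 = h_true * math.tan(math.radians(45.0))
d2 = h_true * math.tan(math.radians(30.0))
h_est = height_from_layover(d1, d2, 45.0, 30.0)  # recovers 10.0 m
```

The larger the difference between the two geometries, the better conditioned the division, which is why "suitably different" passes are required.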
Real-time three-dimensional ultrasound imaging (4D US) was utilized to monitor the treatment site during high-intensity focused ultrasound (HIFU) treatment. To obtain real-time monitoring during HIFU sonication, a 4D US imaging system and HIFU were synchronized and interference on the US image adjusted so that the region of interest was visible during treatment. The system was tested using tissue mimicking phantom
Several trends in biomedical computing are converging in a way that will require new approaches to telehealth image display. Image viewing is becoming an "anytime, anywhere" activity. In addition, organizations are beginning to recognize that healthcare providers are highly mobile and optimal care requires providing information wherever the provider and patient are. Thin-client computing is one way to support image viewing in this complex environment. However, little is known about the behavior of thin-client systems in supporting image transfer in modern heterogeneous networks. Our results show that thin clients can deliver acceptable performance over conditions commonly seen in wireless networks if newer protocols optimized for these conditions are used.
Iconic environments for querying pictorial databases allow one to express spatial relationships between the objects in the images through appropriate icon positioning. In this paper, a visual environment is presented for image retrieval from a database of real world pictures. The environment differs from the other experiences reported in the literature, since the visual query is performed taking into account
In this paper, we address the problem of estimating three-dimensional motions of a stratified atmosphere from satellite image sequences. The analysis of three-dimensional atmospheric fluid flows associated with incomplete observation of atmospheric layers due to the sparsity of cloud systems is very difficult. This makes the estimation of dense atmospheric motion fields from satellite image sequences
A three dimensional sonar imaging system is under development for use by Navy divers for mine-field reconnaissance and mine-hunting systems. These divers require a small, low power, lightweight acoustic imaging system with high resolution for examining an...
A. M. Chiang; J. Impagliazzo; S. Kay; S. R. Broadstone
Panoramic images are efficiently used for documenting archaeological sites and objects. In our paper we present a new approach in developing the use of panoramic images for archaeological survey. The work is part of the Finnish Jabal Haroun Project, in Petra, Jordan. The primary motivation has been in developing a procedure for field inventory, in which photogrammetric documentation could be
Henrik Haggrén; Hanne Junnilainen; Jaakko Järvinen; Terhi Nuutinen; Mika Lavento; Mika Huotari
We have developed a small three dimensional image capture system based on the Thin Observation Module by Bound Optics (TOMBO). As the micro lens array, we used a GRIN lens array, which decreases the lens-to-lens crosstalk occurring on the image sensor. This module uses a micro-lens array to form multiple images, which are captured on a photo-detector array. Digital processing of the captured multiple images is used to extract the surface profile. Preliminary experiments were executed on an evaluation system to verify the principles of the system. In this paper, we propose an ultra-thin three dimensional capture system. A compound-eye imaging system and post-processing are employed. Experimental results verify the principle of the proposed method and show the potential capability of the proposed system architecture.
This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on image raw brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach.
This paper presents a new high-resolution computational integral imaging system employing a pickup with the axial movement of a lenslet array and a computation reconstruction algorithm with pixel-to-pixel mapping. In the proposed method, a lenslet array and its image sensor are moved together along the z-axis direction (or axial direction) and a series of elemental image arrays are obtained while moving. The elemental image arrays are then applied to pixel-to-pixel mapping without interpolation for the reconstruction of 3D slice images. Also, an analysis of the proposed reconstruction method is provided. To show the usefulness of the proposed method, experiments are conducted. The results indicate that the proposed method is superior to the existing method such as MALT in terms of image quality. PMID:23571977
In this paper an approach to recover the 3D human body pose from static images is proposed. We adopt a discriminative learning technique to directly infer the 3D pose from appearance-based local image features. We use simplified Gradient Location and Orientation histogram (GLOH) as our image feature representation. We then employ the gradient tree-boost regression to train a discriminative model
Suman Sedai; Farid Flitti; Mohammed Bennamoun; Du Huynh
A novel 3D Digital Particle Image Thermometry and Velocimetry (3DDPITV) system has been designed and fabricated. By combining 3D Digital Particle Image Velocimetry (3DDPIV) and Digital Particle Image Thermometry (DPIT) into one system, this technique provides simultaneous temperature and velocity data in a volume of ~1×1×0.5 in³ using temperature sensitive liquid crystal particles as flow sensors. Two high-intensity xenon flashlamps
We investigated cine-mode portal imaging on a Varian Trilogy accelerator and found that the linearity and other dosimetric properties are sufficient for 3D dose reconstruction as used in patient-specific quality assurance for VMAT (RapidArc) treatments. We also evaluated the gantry angle label in the portal image file header as a surrogate for the true imaged angle. The precision is only just adequate for the 3D evaluation method chosen, as discrepancies of 2° were observed.
To compare the optical properties of the human retina, 3-D volumetric images of the same eye are acquired with two nearly identical optical coherence tomography (OCT) systems at center wavelengths of 845 and 1060 nm using optical frequency domain imaging (OFDI). To characterize the contrast of individual tissue layers in the retina at these two wavelengths, the 3-D volumetric data sets are carefully spatially matched. The relative scattering intensities from different layers such as the nerve fiber, photoreceptor, pigment epithelium, and choroid are measured and a quantitative comparison is presented. OCT retinal imaging at 1060 nm is found to have a significantly better depth penetration but a reduced contrast between the retinal nerve fiber, the ganglion cell, and the inner plexiform layers compared to the OCT retinal imaging at 845 nm.
Chen, Yueli; Burnes, Daina L.; de Bruin, Martijn; Mujat, Mircea; de Boer, Johannes F.
This paper presents a feature-based face recognition system based on both 3D range data as well as 2D gray-level facial images. Feature points are described by Gabor filter responses in the 2D domain and Point Signature in the 3D domain. Extracted shape features from 3D feature points and texture features from 2D feature points are first projected into their own
Three-dimensional image quality assessment causes new challenges for a wide set of applications and particularly for emerging 3-D watermarking schemes. First, new metrics have to be drawn for the distortion measurement from an original 3-D surface to its deformed version: this metric is necessary to address distortions that are acceptable and to which a 3-D watermarking algorithm should resist. In
Patrice Rondao Alface; Mathieu De Craene; Benoit B. Macq
Given a set of 2D images, we propose a novel approach for the reconstruction of straight 3D line segments that represent the underlying geometry of static 3D objects in the scene. Such an algorithm is especially useful for the automatic 3D reconstruction of man-made environments. The main contribution of our approach is the generation of an improved reconstruction by imposing
Arjun Jain; Christian Kurz; Thorsten Thormählen; Hans-Peter Seidel
A geometric scheme for detecting, representing, and measuring 3D medical data is presented. The technique is based on deforming 3D surfaces, represented via level sets, towards the medical objects, according to intrinsic geometric measures of the data. The 3D medical object is represented as a (weighted) minimal surface in a Riemannian space whose metric is induced from the image. This minimal surface
R. Malladi; R. Kimmel; D. Adalsteinsson; G. Sapiro; V. Caselles; J. A. Sethian
The basics of three-dimensional (3-D) and spectral imaging techniques that are based on the detection of coherence functions and other related techniques are reviewed. The principle of the 3-D source retrieval is based on understanding the propagation law of optical random field through the free space. The 3-D and spectral information are retrieved from the cross-spectral density function of optical
The study of many biological processes requires the analysis of three-dimensional (3D) structures that change over time. Optical sectioning techniques can provide 3D data from living specimens; however, when 3D data are collected over a period of time, the quantity of image information produced leads to difficulties in interpretation. A computer-based system is described that permits the analysis and archiving
In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed. PMID:17269632
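The Lambertian diffuse model named in the record above relates the observed side-scan intensity to the local geometry as I = k·cos(θ), with θ the angle between the sonar beam and the seabed normal; inverting it per pixel gives the incidence angle from which elevation is built up. A minimal sketch, with the calibration constant k an assumed input rather than the paper's jointly estimated beam pattern:

```python
import math

def incidence_angle(intensity, k=1.0):
    """Invert the Lambertian model I = k*cos(theta) for the local
    incidence angle (radians); intensity is clamped into [−k, k]."""
    return math.acos(max(-1.0, min(1.0, intensity / k)))

# A brighter return means the beam strikes the seabed more squarely,
# i.e. a smaller incidence angle.
bright = incidence_angle(1.0)   # beam normal to the seabed
dim = incidence_angle(0.5)      # oblique strike
```

The multiresolution optimization the record describes alternates between this per-pixel inversion and re-estimating k and the beam pattern, in the spirit of expectation-maximization.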
SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band) frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth extrapolation/interpolation methods such as
A currently available 2-D high-resolution, optical molecular imaging system was modified by the addition of a structured illumination source, Optigrid™, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the Optigrid™ and a lens in the path between the light source and the image plane, as well as control and signal
L. Antos; P. Emord; B. Luquette; B. McGee; D. Nguyen; A. Phipps; D. Phillips; M. Helguera
A computerised system is presented for the automatic quantification of blood vessel topography in retinal images. This system utilises digital image processing techniques to provide more reliable and comprehensive information for the retinal vascular network. It applies strategies and algorithms for measuring vascular trees and includes methods for locating the centre of a bifurcation, detecting vessel branches, estimating vessel diameter,
Xiaohong W. Gao; Anil A. Bharath; Alice V. Stanton; Alun D. Hughes; Neil Chapman; Simon A. Thom
The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.
A novel hardware implementation of an omnidirectional image sensor is presented which is capable of acquiring and processing 3D image sequences in real time. The system consists of a hemispherical arrangement of a large number of CMOS imagers, connected to a layered arrangement of a high-end FPGA platform that is responsible for data framing and image processing. The hardware platform in
Hossein Afshari; Laurent Jacques; Luigi Bagnato; Alexandre Schmid; P. Vandergheynst; Y. Leblebici
Any image matching scheme that is based on landmarks requires a coordinate transformation that maps landmark locations in one image to corresponding locations in a second image. The development of an approach to this coordinate transformation, called the elastic body spline (EBS), is outlined. The spline is used to match 3D magnetic resonance images (MRIs) of the breast that are
Malcolm H. Davis; Alireza Khotanzad; Duane P. Flamig; Steven E. Harms
3D microscopy images contain an astronomical amount of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To solve these problems, many people crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level, e.g., the selected ROI strongly depends on the user and there is a loss in original image information. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides efficient and various automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images. Users can select the algorithm to be applied. Further, the image processing tool provides visualization of segmented volume data and can set the scale, translation, etc. using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to obtain information for biologists. To analyze 3D microscopic images, we need quantitative data of the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object. This information can be used as classification features. A user can select the object to be analyzed. Our tool allows the selected object to be displayed on a new window, and hence, more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee
We introduce a new model-based approach for the segmentation and quantification of the aortic arch morphology in 3D CTA images for endovascular aortic repair (EVAR). The approach is based on a 3D analytic intensity model for thick vessels, which is directly fitted to the image. Based on the fitting results we compute the (local) 3D vessel curvature and torsion as well as the relevant lengths not only along the 3D centerline but particularly along the inner and outer contour. These measurements are important for pre-operative planning in EVAR applications. We have successfully applied our approach using ten 3D CTA images and have compared the results with ground truth obtained by a radiologist. It turned out that our approach yields accurate estimation results. We have also performed a comparison with a commercial vascular analysis software.
Wörz, S.; von Tengg-Kobligk, H.; Henninger, V.; Böckler, D.; Kauczor, H.-U.; Rohr, K.
The warped poplar panel and the technique developed by Leonardo to paint the Mona Lisa present a unique research and engineering challenge for the design of a complete optical 3D imaging system. This paper discusses the solution developed to precisely measure in 3D the world's most famous painting despite its highly contrasted paint surface and reflective varnish. The discussion focuses on the opto-mechanical design and the complete portable 3D imaging system used for this unique occasion. The challenges associated with obtaining 3D color images at a resolution of 0.05 mm and a depth precision of 0.01 mm are illustrated by exploring the virtual 3D model of the Mona Lisa.
Blais, Francois; Cournoyer, Luc; Beraldin, J.-Angelo; Picard, Michel
Retinal function can be objectively evaluated by measuring the light reflectance changes of the ocular fundus following light stimulation. Two independent methods using either infrared light or visible light for illumination will be presented: the former is called intrinsic signal imaging and the latter is retinal densitometry.
In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation.
Nigel M. John; Mansur R. Kabuka; Mohamed O. Ibrahim
We describe a simple device for creating true 3D views of image pairs obtained at 3D CT reconstruction. The device presents the images in a slightly different angle of view for the left and the right eyes. This true 3D viewing technique was applied experimentally in the evaluation of complex acetabular fractures. Experiments were also made to determine the optimal angle between the images for each eye. The angle varied between 1 degree and 7 degrees for different observers and also depended on the display field of view used.
Haveri, M; Suramo, I; Junila, J; Lähde, S; Karhula, V
In this paper, we present a knowledge-based texture segmentation system for the identification of 3D structures in biomedical images. The segmentation is guided by an Iterative Octree Expansion and (leaf node) Linking control algorithm and is performed directly in 3D space, contrary to common approaches in which 3D structures are reconstructed from the results of 2D segmentation of a sequence of consecutive cross-sectional images. Test results of a prototype of this system on real confocal scanning fluorescence microscopy images of a developing chick embryo heart are reported.
Recently, owing to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desirable. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low-attenuation areas (LAAs) as emphysematous lesion candidates. Applying the algorithm to 100 thoracic 3-D CT images and then to follow-up 3-D CT images, we demonstrate its potential to assist radiologists and physicians in quantitatively evaluating the distribution of emphysematous lesions and their evolution over time.
Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.
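The low-attenuation-area extraction step can be sketched as a simple Hounsfield-unit threshold inside a lung mask. The -950 HU cutoff and the synthetic data below are illustrative assumptions, not values taken from the abstract.

```python
import numpy as np

def laa_percentage(ct_hu, lung_mask, threshold_hu=-950):
    """Percentage of lung voxels below the low-attenuation threshold (LAA%)."""
    lung_voxels = ct_hu[lung_mask]
    if lung_voxels.size == 0:
        return 0.0
    return 100.0 * np.count_nonzero(lung_voxels < threshold_hu) / lung_voxels.size

# Synthetic slice: normal lung around -850 HU with an emphysematous pocket near -970 HU.
ct = np.full((64, 64), -850.0)
ct[20:30, 20:30] = -970.0
mask = np.ones_like(ct, dtype=bool)
laa = laa_percentage(ct, mask)
print(round(laa, 2))  # 2.44 (100 of 4096 voxels)
```

Tracking LAA% across baseline and follow-up scans gives the kind of interval-change measure the abstract describes.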
In this work, we demonstrate 3-D photoacoustic imaging of optically absorbing targets embedded as deep as 5 cm inside a highly scattering background medium using a 2-D capacitive micromachined ultrasonic transducer (CMUT) array with a center frequency of 5.5 MHz. 3-D volumetric images and 2-D maximum intensity projection images are presented to show the objects imaged at different depths. Due to the close proximity of the CMUT to the integrated frontend circuits, the CMUT array imaging system has a low noise floor. This makes the CMUT a promising technology for deep tissue photoacoustic imaging.
Ma, Te-Jen; Kothapalli, Sri Rajasekhar; Vaithilingam, Srikant; Oralkan, Omer; Kamaya, Aya; Wygant, Ira O; Zhuang, Xuefeng; Gambhir, Sanjiv S; Jeffrey, R Brooke; Khuri-Yakub, Butrus T
Extraction of man-made objects from stereoscopic aerial images is a problem that is not entirely solved. In this paper we propose an interactive approach to extract such objects. The disparity image is used to roughly delineate buildings; the buildings are then extracted according to their shape. An operator clicks inside the roof of the building of interest and chooses between two different algorithms, and the rest of the process is entirely automatic. A very robust algorithm based on the Hough transform is used to extract rectangular buildings, whereas a large-spectrum algorithm based on snakes is used to extract more complex shapes. Our method gives good results in extracting buildings from IGN aerial images already rectified in epipolar geometry.
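A minimal NumPy sketch of the Hough-transform voting step for straight roof edges follows; the toy image and voting resolution are illustrative, and the paper's actual rectangular-building detector is more elaborate.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate votes in (rho, theta) space for a binary edge image."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))       # 0..179 degrees
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for theta_idx, theta in enumerate(thetas):
        # rho = x cos(theta) + y sin(theta) for every edge pixel.
        r = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc, (r + diag, theta_idx), 1)
    return acc, rhos, thetas

# A horizontal roof edge at y = 10 should peak at theta = 90 deg, rho = 10.
img = np.zeros((40, 40), dtype=bool)
img[10, 5:35] = True
acc, rhos, thetas = hough_lines(img)
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(rhos[r_idx], np.rad2deg(thetas[t_idx]))  # 10 90.0
```

Peaks in the accumulator give candidate line parameters; a rectangle detector would then look for two orthogonal pairs of peaks.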
Dynamic Reconstruction and Rendering (DRR) is a fast and flexible tomosynthesis image reconstruction and display implementation. By leveraging the computational efficiency gains afforded by off-the-shelf GPU hardware, tomosynthesis reconstruction can be performed on demand at real-time, user-interactive frame rates. Dynamic multiplanar reconstructions allow the user to adjust reconstruction and display parameters interactively, including axial sampling, slice location, plane tilt, magnification, and filter selection. Reconstruction on-demand allows tomosynthesis images to be viewed as true three-dimensional data rather than just a stack of two-dimensional images. The speed and dynamic rendering capabilities of DRR can improve diagnostic accuracy and lead to more efficient clinical workflows.
Kuo, Johnny; Ringer, Peter A.; Fallows, Steven G.; Bakic, Predrag R.; Maidment, Andrew D. A.; Ng, Susan
Empirical and theoretical modeling of 3-D structures in the solar corona is confronted by the tremendous amount of data needed to represent phenomena with a large dynamic range both in size and magnitude, and with a rapid temporal evolution. Octree representation of the 3-D coronal electron distribution offers the right compromise between resolution and size, allowing computation of synthetic images
D. Vibert; A. Llebaria; T. Netter; L. Balard; P. Lamy
Neuron axon analysis is a fundamental means to investigate the nervous system in neurobiology and often requires collecting a great amount of statistical information. Automated extraction of axon axes in 3D microscopic images poses a key problem in the field of neuron axon analysis. To address tortuous axons with several touchings in 3D volumes, a novel simulation-based approach with regard
An approach for the 3D segmentation and reconstruction of human left coronary arteries using angio-CT images is presented in this paper. Each voxel in the 3D dataset is assumed to belong to one of the three homogeneous regions: blood, myocardium, and lung...
In this paper, we first review the approaches to recover 3D shape and related movements of a human and then we present an easy and reliable approach to recover a 3D model using just one image or monocular video sequence. A simplification of the perspective camera model is required, due to the absence of stereo view. The human figure is
The rapid diagnosis of invisible internal injury in an austere and hostile front-line operational environment remains a challenge for (Canadian Forces) medical and search and rescue personnel. The availability of a portable 4D-ultrasound imaging system wi...
The paper presents an advanced solution for capturing the height of an object in addition to the 2D image as it is frequently desired in machine vision applications. Based upon the active fringe projection methodology, the system takes advantage of a series of patterns projected onto the object surface and observed by a camera to provide reliable, accurate and highly
Accurate localization of the prostate and its surrounding tissue is essential in the treatment of prostate cancer. This paper presents a novel approach to fully automatically segment the prostate, including its seminal vesicles, within a few minutes of a magnetic resonance (MR) scan acquired without an endorectal coil. Such MR images are important in external beam radiation therapy, where using an endorectal coil is highly undesirable. The segmentation is obtained using a deformable model that is trained on-the-fly so that it is specific to the patient's scan. This case-specific deformable model consists of a patient-specific initialized triangulated surface and an image feature model that are trained during its initialization. The image feature model is used to deform the initialized surface by template matching image features (via normalized cross-correlation) to the features of the scan. The resulting deformations are regularized over the surface via well-established simple surface smoothing algorithms, and the result is then made anatomically valid via an optimized shape model. Mean and median Dice's similarity coefficients (DSCs) of 0.85 and 0.87 were achieved when segmenting 3T MR clinical scans of 50 patients. The median DSC result was equal to the inter-rater DSC and had a mean absolute surface error of 1.85 mm. The approach is shown to perform well near the apex and seminal vesicles of the prostate.
Chandra, Shekhar S; Dowling, Jason A; Shen, Kai-Kai; Raniga, Parnesh; Pluim, Josien P W; Greer, Peter B; Salvado, Olivier; Fripp, Jurgen
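The template-matching step above relies on normalized cross-correlation; a minimal NumPy sketch follows, with patch sizes and data chosen purely for illustration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized image patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

rng = np.random.default_rng(1)
patch = rng.normal(size=(9, 9))
s1 = ncc(patch, patch)             # identical patches correlate perfectly
s2 = ncc(patch, 2.5 * patch + 7.0) # invariant to intensity gain and offset
print(round(s1, 3), round(s2, 3))  # 1.0 1.0
```

The gain/offset invariance is what makes NCC attractive for matching features across scans with different intensity scaling.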
The deployment of vehicle airbags for maximum protection requires information about the occupant's position, movement, weight, size etc. Specifically, it is desirable to discriminate between adults, children, front or rear-facing child seats, objects put on the seat or simply empty seats. 2D images lack depth information about the object and are very sensitive to illumination conditions. Occupant position classification techniques
Pandu R. Rao Devarakota; Bruno Mirbach; Marta Castillo-Franco; Bjorn Ottersten
Three-dimensional medical imaging methodologies are surveyed with respect to hardware versus software, stand-alone versus on-the-scanner, speed, interaction, rendering methodology, fidelity, ease of use, cost, and quantitative capability. The question of volume versus surface rendering is considered in more detail. Research results are cited to illustrate the capabilities discussed
In this paper, we address the problem of the recovery of a realistic textured model of a scene from a sequence of images, without any prior knowledge either about the parameters of the cameras or about their motion. We do not require any knowledge of the absolute coordinates of some control points in the scene to achieve this goal. First,
In these days of the rapid development of diagnostic equipment with increasingly sophisticated technology it is necessary to put more emphasis on implementing processes for the computer support of medical diagnostics, which are more and more often used to automate diagnostic procedures carried out in healthcare. Current research shows that a significant part of diagnostic imaging, including e.g. of coronary
Mirosław Trzupek; Marek R. Ogiela; Ryszard Tadeusiewicz
Purpose: Generation of graspable three-dimensional objects applied for surgical planning, prosthetics and related applications using 3D printing or rapid prototyping is summarized and evaluated. Materials and methods: Graspable 3D objects overcome the limitations of 3D visualizations, which can only be displayed on flat screens. 3D objects can be produced based on CT or MRI volumetric medical images. Using dedicated post-processing algorithms, a
F. Rengier; A. Mehndiratta; H. von Tengg-Kobligk; C. M. Zechmann; R. Unterhinninghofen; H.-U. Kauczor; F. L. Giesel
3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential owing to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data-analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why integrating and correlating the MALDI imaging data with other imaging modalities makes it possible to enhance the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different dimensional images representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements. Scanning the individual sections yields low-resolution images, which define the base coordinate system for the whole pipeline.
The scanned images combine the information from the spatial (MRI) and mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object, performed by image registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology-driven, i.e. a high-resolution digital scan of the histologically stained slices. After fusion of the reconstructed scan images and the MRI, the slice-related coordinates of the mass spectra can be propagated into 3D space. After image registration of the scan images and the histologically stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. The described pipeline thus yields a set of three-dimensional images representing the same anatomies: the reconstructed slice scans, the spectral images with corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI, providing anatomical details, improves the interpretation of 3D MALDI images. The ability to relate mass-spectrometry-derived molecular information to in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era.
Ear is a new class of relatively stable biometrics that is not affected by facial expressions, cosmetics and eye glasses. To use ear biometrics for human identification, ear detection is the first part of an ear recognition system. In this chapter we propose two approaches for locating human ears in side face range images: (a) template matching based ear detection
Humans have an amazing ability to perceive depth from a single still image; however, it remains a challenging problem for current computer vision systems. In this paper, we present algorithms for estimating depth from a single still image. There are numerous monocular cues, such as texture variations and gradients, defocus, and color/haze, that can be used for depth perception.
The authors use partial-differential-equation-based filtering as a preprocessing and post processing strategy for computer-aided cytology. They wish to accurately extract and classify the shapes of nuclei from confocal microscopy images, which is a prerequisite to an accurate quantitative intranuclear (genotypic and phenotypic) and internuclear (tissue structure) analysis of tissue and cultured specimens. First, the authors study the use of a
Alessandro Sarti; Carlos Ortiz de Solórzano; Stephen Lockett; Ravikanth Malladi
In this paper, we present methods for 3D visualization and quantitative measurements of retinal blood flow in rats by the use of optical microangiography imaging technique (OMAG). We use ultrahigh sensitive OMAG to provide high-quality 3D RBF perfusion maps in the rat eye, from which the Doppler angle, as well as the diameters of blood vessels, are evaluated. Estimation of flow velocity (i.e. axial flow velocity) is achieved by the use of Doppler OMAG, which has its origins in phase-resolved Doppler optical coherence tomography. The measurements of the Doppler angle, vessel size, and the axial velocity lead to the quantitative assessment of the absolute flow velocity and the blood flow rate in selected retinal vessels. We demonstrate the feasibility of OMAG to provide 3D microangiograms and quantitative assessment of retinal blood flow in a rat model subjected to raised intra-ocular pressure (IOP). We show that OMAG is capable of monitoring the longitudinal response of absolute blood velocity and flow rate of retinal blood vessels to increased IOP in the rat, demonstrating its usefulness for ophthalmological research.
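The conversion from measured axial velocity and Doppler angle to absolute velocity and flow rate can be sketched as below; the numeric values are hypothetical, not measurements from the study.

```python
import numpy as np

def absolute_velocity(v_axial_mm_s, doppler_angle_deg):
    """Convert the measured axial velocity to absolute velocity along the vessel."""
    return v_axial_mm_s / np.cos(np.deg2rad(doppler_angle_deg))

def flow_rate_ul_min(v_mean_mm_s, diameter_um):
    """Volumetric flow rate through a circular vessel cross-section."""
    r_mm = diameter_um / 2000.0                  # um -> mm
    q_mm3_s = v_mean_mm_s * np.pi * r_mm ** 2    # mm^3/s == uL/s
    return q_mm3_s * 60.0                        # uL/min

# Steep Doppler angles are typical when imaging retinal vessels.
v = absolute_velocity(10.0, 80.0)
q = flow_rate_ul_min(v, 60.0)
print(round(v, 2))  # 57.59 mm/s
```

Note the strong sensitivity to the Doppler angle near 90 degrees, which is why accurate angle estimation from the 3D vessel geometry matters.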
Range-gated imaging is a remote sensing technique that involves the emission of a laser pulse and an intensified camera to gate the reflected pulse. Range accuracy has always been an issue, especially when a highly accurate reconstructed model is expected as the final outcome. The reflected pulse profile and pulse instability are among the issues that affect range accuracy when a general solution such as a constant offset is not applicable. In this paper, a more accurate model for the reflected pulse profile has been estimated through experiments. A t location-scale model is proposed to replace the Gaussian model as the general assumption for the range-gated image formation model. The improvement in range accuracy, around 0.3%, has been verified through simulation based on the acquired samples. The series of range-gated images can be reconstructed into a three-dimensional depth map through range calculation, which can be used in subsequent range reconstruction work.
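A sketch of the model comparison using SciPy's t location-scale fit; the simulated pulse samples below are hypothetical stand-ins for the acquired data, not the paper's measurements.

```python
import numpy as np
from scipy import stats

# Simulated reflected-pulse samples with heavier tails than a Gaussian.
rng = np.random.default_rng(42)
samples = stats.t.rvs(df=3, loc=100.0, scale=2.0, size=5000, random_state=rng)

# Fit both candidate profiles and compare by log-likelihood.
df_hat, loc_t, scale_t = stats.t.fit(samples)
loc_g, scale_g = stats.norm.fit(samples)
ll_t = stats.t.logpdf(samples, df_hat, loc_t, scale_t).sum()
ll_g = stats.norm.logpdf(samples, loc_g, scale_g).sum()
print(ll_t > ll_g)  # True: the t location-scale profile fits the heavy tails better
```

The extra degrees-of-freedom parameter lets the t model absorb outlier returns that would otherwise bias a Gaussian centroid estimate of the pulse position.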
Purpose: To investigate the clinical importance and feasibility of a 3-D portal image analysis method in comparison with a standard 2-D portal image analysis method for pelvic irradiation techniques. Methods and Materials: In this study, images of 30 patients who were treated for prostate cancer were used. A total of 837 imaged fields were analyzed by a single technologist, using automatic
Peter Remeijer; Erik Geerlof; Lennert Ploeger; Kenneth Gilhuijs; Marcel van Herk; Joos V Lebesque
Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the 'hand-eye' calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795).
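Normalized mutual information, one of the similarity metrics found most accurate for these registrations, can be sketched as follows; the histogram binning and synthetic volumes are illustrative assumptions.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A,B) for two co-registered image volumes."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)   # marginal of A
    py = pxy.sum(axis=0)   # marginal of B

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(7)
vol = rng.normal(size=(16, 16, 16))
nmi_self = normalized_mutual_information(vol, vol)            # identical volumes: 2.0
nmi_rand = normalized_mutual_information(vol, rng.normal(size=(16, 16, 16)))
print(nmi_self > nmi_rand)  # True
```

NMI peaks when the two volumes are in alignment, which is what the registration optimizer exploits; it ranges from about 1 (independent) to 2 (identical).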
We propose a novel and fast way to perform 2D-3D registration between available intra-operative 2D images and pre-operative 3D images in order to provide better image guidance. The current work is a feature-based registration algorithm that allows the similarity to be evaluated much more efficiently than in intensity-based approaches. The approach is focused on solving the problem for neuro-interventional applications, and we therefore use blood vessels, specifically their centerlines, as the features for registration. The blood vessels are segmented from the 3D datasets and their centerlines extracted using a sequential topological thinning algorithm. Segmentation of the 3D datasets is straightforward because of the injection of contrast agents. For the 2D image, segmentation of the blood vessels is performed by subtracting the image with no contrast (native) from the one with a contrast injection (fill). Following this, we compute a modified version of the 2D distance transform, such that the distance is zero on the centerline and increases as we move away from it. This gives us a smooth metric that is minimal at the centerline and large away from the vessel. It is a one-time computation and need not be reevaluated during the iterations. Also, we simply sum over all points rather than evaluating distances over all point pairs, as would be done in comparable Iterative Closest Point (ICP) based approaches. We estimate the three rotational and three translational parameters by minimizing this cost over all points in the 3D centerline. The speed improvement allows us to perform the registration in under a second on current workstations and therefore provides interactive registration for the interventionalist.
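The distance-transform cost described above can be sketched with SciPy's Euclidean distance transform; the toy centerline and projected points below are illustrative, not data from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Binary 2D centerline image: True on the vessel centerline, False elsewhere.
centerline_2d = np.zeros((64, 64), dtype=bool)
centerline_2d[32, 10:50] = True

# Distance map: zero on the centerline, growing away from it.
dist_map = distance_transform_edt(~centerline_2d)

def registration_cost(projected_points, dist_map):
    """Sum of distance-map values at the projected 3D centerline points."""
    rows = np.clip(np.round(projected_points[:, 0]).astype(int), 0, dist_map.shape[0] - 1)
    cols = np.clip(np.round(projected_points[:, 1]).astype(int), 0, dist_map.shape[1] - 1)
    return dist_map[rows, cols].sum()

on_line = np.column_stack([np.full(20, 32.0), np.linspace(12, 45, 20)])
off_line = on_line + np.array([5.0, 0.0])    # shifted 5 pixels off the centerline
cost_on = registration_cost(on_line, dist_map)
cost_off = registration_cost(off_line, dist_map)
print(cost_on)            # 0.0
print(cost_off > cost_on) # True
```

Because the map is precomputed once, each pose evaluation is just a lookup and a sum, which is the source of the sub-second speed claim.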
There are many visual inspection and sensing applications where both a high-resolution image and a depth map of the imaged object are desirable at high speed. Presently available methods to capture 3D data (stereo cameras and structured illumination) are limited in speed, complexity, and transverse resolution. Additionally, these techniques rely on a separated baseline for triangulation, precluding use in confined spaces. Typically, off-the-shelf lenses are implemented whose performance in resolution, field of view, and depth of field is sacrificed to achieve a useful balance. Here we present a novel lens system with high resolution and wide field of view for rapid 3D image capture. The design achieves this using a single lens with no moving parts. A depth-from-defocus algorithm is implemented to reconstruct 3D object point clouds, which are matched with a fused image to create a 3D rendered view.
A comparison of six similarity measures for use in intensity-based two-dimensional-three-dimensional (2-D-3-D) image registration is presented. The accuracy of the similarity measures is compared to a
Graeme P. Penney; Jürgen Weese; John A. Little; Paul Desmedt; Derek L. G. Hill; David J. Hawkes
We present a new approach for accelerated computation of hologram patterns of a three-dimensional (3-D) image by taking into account its interline redundant data. Interline redundant data of a 3-D image are extracted with the differential pulse code modulation (DPCM) algorithm, and the CGH patterns for these compressed line images are then generated with the novel lookup table (N-LUT) technique. To confirm the feasibility of the proposed method, experiments with four kinds of 3-D test objects are carried out, and the results are compared with conventional methods in terms of the number of object points and the computation time. Experimental results show that the number of calculated object points and the computation time per object point are reduced by 73.3% and 83.9%, on average, for the four 3-D test images in the proposed method employing a top-down scanning method, compared to the conventional method.
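The interline-redundancy extraction via DPCM can be sketched as follows; the random test image is illustrative, and the hologram-specific N-LUT step is not reproduced here.

```python
import numpy as np

def dpcm_encode(image):
    """Interline DPCM: keep the first line, then store line-to-line differences."""
    diff = np.diff(image.astype(np.int32), axis=0)
    return image[0].copy(), diff

def dpcm_decode(first_line, diff):
    """Reconstruct the image by cumulatively summing the line differences."""
    lines = np.concatenate([first_line[None, :].astype(np.int32), diff], axis=0)
    return np.cumsum(lines, axis=0)

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(64, 64))
first, diff = dpcm_encode(img)
restored = dpcm_decode(first, diff)
print(np.array_equal(restored, img))  # True: the transform is lossless
```

For smooth scenes most difference lines are near zero, which is the redundancy the N-LUT hologram computation exploits to skip object points.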
Although computer processing power and network bandwidth are rapidly increasing, the average desktop is still not able to rapidly process large datasets such as 3-D medical image volumes. We have therefore developed a server-side approach to this problem, in which a high-performance graphics server accepts commands from web clients to load, process and render 3-D image volumes and models. The renderings are saved as 2-D snapshots on the server, from which they are sent to and displayed on the client. User interactions with the graphic interface on the client side are translated into additional commands to manipulate the 3-D scene, after which the server re-renders the scene and sends a new image to the client. Example forms-based and Java-based clients are described for a brain mapping application, but the techniques should be applicable to multiple domains where 3-D medical image visualization is of interest.
Poliakov, A. V.; Albright, E.; Corina, D.; Ojemann, G.; Martin, R. F.; Brinkley, J. F.
One of the most important technical challenges in image-guided intervention is to obtain a precise transformation between the intrainterventional patient's anatomy and the corresponding preinterventional 3-D image on which the intervention was planned. This goal can be achieved by acquiring intrainterventional 2-D images and matching them to the preinterventional 3-D image via 3-D/2-D image registration. A novel 3-D/2-D registration method is
Primoz Markelj; Dejan Tomazevic; Franjo Pernus; Bostjan Likar
To capture the whole shape of an object, many partial 3D images must be captured from multiple views and aligned into a common 3D coordinate system. That usually involves both a complex software process and an expensive hardware system. In this paper, a shortcut approach is proposed to align 3D images captured from multiple views. Employing only a calibrated turntable, a single-view 3D camera can capture a sequence of 3D images of an object from different view angles one by one, then align them quickly and automatically. The alignment does not need any help from the operator, and it achieves high accuracy, robustness, rapid capture, and low cost. The turntable calibration can be easily implemented by the single-view 3D camera: fixed with respect to the turntable, the camera can calibrate the revolving axis of the turntable simply by measuring the positions of a small calibration ball revolving with the turntable at several angles. The system then obtains the coordinate transformation formula between views at different revolving angles by an LMS algorithm. The formulae for calibration and alignment are given with a precision analysis. Experiments were performed and showed effective results in recovering 3D objects.
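The view-to-view alignment reduces to rotating each captured point set about the calibrated revolving axis; a Rodrigues-formula sketch follows, with the axis, angle, and points chosen hypothetically for illustration.

```python
import numpy as np

def rotate_about_axis(points, axis_point, axis_dir, angle_rad):
    """Rotate 3D points about an arbitrary axis (the turntable revolving axis)."""
    k = np.asarray(axis_dir, float)
    k /= np.linalg.norm(k)
    p = points - axis_point   # move the axis to the origin
    # Rodrigues' rotation formula.
    rotated = (p * np.cos(angle_rad)
               + np.cross(k, p) * np.sin(angle_rad)
               + k * (p @ k)[:, None] * (1 - np.cos(angle_rad)))
    return rotated + axis_point

# A view captured after a 90-degree turntable rotation is mapped back into the
# reference frame by rotating it -90 degrees about the calibrated axis.
axis_point = np.array([10.0, 0.0, 5.0])
axis_dir = np.array([0.0, 0.0, 1.0])
view = np.array([[11.0, 0.0, 7.0]])
aligned = rotate_about_axis(view, axis_point, axis_dir, -np.pi / 2)
print(np.allclose(aligned, [10.0, -1.0, 7.0]))  # True
```

Calibration supplies `axis_point` and `axis_dir` (e.g. fitted from the calibration-ball positions), after which every turntable angle gives a known rigid transform.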
Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.
Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.
Optical coherence tomography with adaptive optics (AO-OCT) is a highly sensitive, noninvasive method for 3D imaging of the microscopic retina. The purpose of this study is to advance AO-OCT technology by enabling repeated imaging of cone photoreceptors over extended periods of time (days). This sort of longitudinal imaging permits monitoring of 3D cone dynamics in both normal and diseased eyes, in particular the physiological processes of disc renewal and phagocytosis, which are disrupted by retinal diseases such as age-related macular degeneration and retinitis pigmentosa. For this study, the existing AO-OCT system at Indiana underwent several major hardware and software improvements to optimize system performance for 4D cone imaging. First, ultrahigh-speed imaging was realized using a Basler Sprint camera. Second, a light source with adjustable spectrum was realized by integration of an Integral laser (Femto Lasers, λc=800 nm, Δλ=160 nm) and spectral filters in the source arm. For cone imaging, we used a bandpass filter with λc=809 nm and Δλ=81 nm (2.6 µm nominal axial resolution in tissue, and 167 kHz A-line rate using 1,408 px), which reduced the impact of eye motion compared to previous AO-OCT implementations. Third, eye motion artifacts were further reduced by custom ImageJ plugins that registered (axially and laterally) the volume videos. In two subjects, cone photoreceptors were imaged and tracked over a ten day period and their reflectance and outer segment (OS) lengths measured. High-speed imaging and image registration/dewarping were found to reduce eye motion to a fraction of a cone width (1 µm root mean square). The pattern of reflections in the cones was found to change dramatically and occurred on a spatial scale well below the resolution of clinical instruments.
Normalized reflectance of connecting cilia (CC) and OS posterior tip (PT) of an exemplary cone was 54+/-4, 47+/-4, 48+/-6, 50+/-5, 56+/-1% and 46+/-4, 53+/-4, 52+/-6, 50+/-5, 44+/-1% for days #1,3,6,8,10 respectively. OS length of the same cone was 28.9, 26.4, 26.4, 30.6, and 28.1 µm for days #1,3,6,8,10 respectively. It is plausible these changes are an optical correlate of the natural process of OS renewal and shedding.
Kocaoglu, Omer P.; Lee, Sangyeol; Jonnal, Ravi S.; Wang, Qiang; Herde, Ashley E.; Besecker, Jason; Gao, Weihua; Miller, Donald T.
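The 2.6 µm figure quoted above follows from the standard coherence-length formula for a Gaussian source spectrum, δz = (2 ln 2/π)·λc²/Δλ, divided by the tissue refractive index (n ≈ 1.38 is assumed here; the abstract does not state it):

```python
import math

def oct_axial_resolution_um(center_wl_nm, bandwidth_nm, n_tissue=1.38):
    """Coherence-limited axial resolution (FWHM, Gaussian spectrum), in microns."""
    dz_air_nm = (2 * math.log(2) / math.pi) * center_wl_nm ** 2 / bandwidth_nm
    return dz_air_nm / n_tissue / 1000.0  # nm -> um, scaled into tissue

# The filtered source quoted above: 809 nm center, 81 nm bandwidth
print(round(oct_axial_resolution_um(809, 81), 1))  # ~2.6 um in tissue
```

The result reproduces the abstract's nominal in-tissue resolution, which is a useful sanity check on the quoted source parameters.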
We present the latest results obtained with our Imaging Topological Radar (ITR), a high-resolution laser scanner aimed at reconstructing 3D digital models of real targets, either single objects or complex scenes. The system, based on the amplitude-modulation ranging technique, makes it possible to simultaneously obtain a shade-free, high-resolution, photographic-like picture and accurate range data in the form of a range image, with resolution depending mainly on the laser modulation frequency (current best performance is ~100 µm). The complete target surface is reconstructed from the sampled points using specifically developed software tools. The system has been successfully applied to scan different types of real surfaces (stone, wood, alloy, bones) and is suitable for relevant applications in different fields, ranging from industrial machining to medical diagnostics. We present some relevant examples of 3D reconstruction in the heritage field. These results were obtained during recent campaigns carried out in situ at various Italian historical and archaeological sites (S. Maria Antiqua in the Roman Forum; "Grotta dei cervi", Porto Badisco, Lecce, South Italy). The presented 3D models will be used by cultural heritage conservation authorities for restoration purposes and will be available on the Internet for remote inspection.
Poggi, Claudio; Guarneri, Massimiliano; Fornetti, Giorgio; Ferri de Collibus, Mario; De Dominicis, Luigi; Paglia, Emiliano; Ricci, Roberto
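Amplitude-modulation ranging recovers distance from the phase shift accumulated by the modulation envelope over the round trip, d = c·Δφ/(4π·f_mod). A hedged sketch of the basic relations (the ITR's actual signal chain is more involved, and these helper names are illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def am_range_m(phase_rad, mod_freq_hz):
    """Range from the measured phase shift of the amplitude-modulation envelope."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

def range_resolution_m(phase_res_rad, mod_freq_hz):
    """Range resolution for a given phase-measurement resolution: higher
    modulation frequency means finer range resolution (shorter ambiguity range)."""
    return C * phase_res_rad / (4 * math.pi * mod_freq_hz)
```

This makes the abstract's point explicit: range resolution scales inversely with the laser modulation frequency, which is why that frequency dominates the ~100 µm performance figure.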
In capsule endoscopy (CE), there is research to develop hardware that enables "real" three-dimensional (3-D) video. However, it should not be forgotten that "true" 3-D requires dual video images, and the inclusion of two cameras within the shell of a capsule endoscope might be unwieldy at present. Therefore, in an attempt to approximate a 3-D reconstruction of the digestive tract surface, software that recovers shape information from monocular two-dimensional CE images using gradual variation of shading has been proposed. Light reflections on the surface of the digestive tract are still a significant problem. Therefore, a phantom model and simulator have been constructed in an attempt to check the validity of a highlight suppression algorithm. Our results confirm that the 3-D representation software performs better with simultaneous application of a highlight reduction algorithm. Furthermore, the 3-D representation gives a good approximation of the real distance to the lumen surface. PMID:24044049
Interpolation is a necessary processing step in 3-D reconstruction because of non-uniform resolution. Conventional interpolation methods simply use two adjacent slices to estimate the missing slices between them; when a key slice is missing, those methods may fail to recover it using only local information. Moreover, the surface of a 3D object, especially for medical tissues, may be highly complicated, so a single interpolation curve can hardly produce a high-quality 3D image. We propose a novel binary 3D image interpolation algorithm that takes advantage of global information. It adaptively chooses the best curve from many candidate curves based on the complexity of the 3D object's surface. The results of this algorithm are compared with other interpolation methods on artificial objects and a real breast cancer tumor to demonstrate its excellent performance.
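A common baseline for the slice interpolation criticized here is shape-based interpolation, which blends signed distance maps of two binary slices and re-thresholds. A small numpy sketch (brute-force distance maps, suitable only for demo-sized images; both masks must contain inside and outside pixels):

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed Euclidean distance map: positive inside the object,
    negative outside. Assumes the mask has both True and False pixels."""
    ys, xs = np.indices(mask.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    inside_pts = pts[mask.ravel()]
    outside_pts = pts[~mask.ravel()]
    def dist_to(set_pts):
        d = np.linalg.norm(pts[:, None, :] - set_pts[None, :, :], axis=2)
        return d.min(axis=1).reshape(mask.shape)
    return np.where(mask, dist_to(outside_pts), -dist_to(inside_pts))

def interpolate_slice(mask_a, mask_b, t=0.5):
    """Shape-based interpolation: blend signed distance maps, then threshold."""
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d >= 0
```

This is the purely local, two-slice scheme the abstract argues against; the proposed algorithm instead selects interpolation curves using global surface information.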
This paper presents a novel method for reconstructing a 3D human body pose using depth information based on top-down learning. The human body pose is represented by a linear combination of prototypes of 2D depth images and their corresponding 3D body models in terms of the position of a predetermined set of joints. In a 2D depth image, the optimal
This paper proposes a human body posture estimation method based on back projection of human silhouette images extracted from multi-camera images. To achieve real-time 3D human body posture estimation, a server-client system is introduced into the multi-camera system, and improvements to the background subtraction and back projection are investigated. To evaluate the feasibility of the proposed method, 3D estimation experiments of
Three-dimensional (3D) object reconstruction using just bi-dimensional (2D) images has been a major research topic in Computer Vision. However, it is still a hard problem to address when automation, speed and precision are required and/or the objects have complex shapes or image properties. In this paper, we compare two Active Computer Vision methods frequently used for the 3D reconstruction of
Teresa C. S. Azevedo; João Manuel R. S. Tavares; Mário A. P. Vaz
Computed tomography has become a major area in biomedical imaging for reconstructing 3D images. Several exact CT reconstruction algorithms, such as the generalized filtered back projection (FBP) and back projection-filtration (BPF) methods and cone-beam reconstruction algorithms, have been developed to solve the long-object problem. Although the well-known 3D Shepp-Logan phantom (SLP) is often used to validate these algorithms,
Nitin Kothari; Yogesh K. Bhateshvar; Abhishek Katariya; Shilpa Kothari
The purpose of this paper is to develop a non-contact profile sensor system able to accurately measure 3D free-form machined metal surfaces. The proposed sensor system has many advantages compared with conventional measuring systems. First, a new detection system based on optical ring images captured by a rotating image detector is developed to measure 3D profiles in the
Two-dimensional (2-D) approaches to microwave imaging have dominated the research landscape primarily due to the moderate levels of measurement data, data-acquisition time, and computational costs required. Three-dimensional (3-D) approaches have been investigated in simulation, phantom, and animal experiments. While 3-D approaches are certainly important in terms of the potential to improve image quality, their associated costs are significant at this
Paul M. Meaney; Keith D. Paulsen; Shireen D. Geimer; Shah A. Haider; Margaret W. Fanning
A 3-D imaging method is proposed for a scanning laser ophthalmoscope. It employs two defocused images obtained in a confocal optical arrangement with two detection apertures. Evaluation of the signal-to-noise ratio for the detected signals indicates that profile measurements can be achieved with an accuracy better than 50 µm. With this method, the 3-D shape Z(x, y) of the eye fundus is observed in vivo in real time using a standard television system.
Kobayashi, K.; Matsui, H.; Nakano, H.; Asakura, T.
Motivated by the need for methods to aid the deformable registration of brain tumor images, we present a three-dimensional (3D) mechanical model for simulating large non-linear deformations induced by tumors to the surrounding encephalic tissues. The model is initialized with 3D radiological images and is implemented using the finite element (FE) method. To simulate the widely varying behavior of brain
In this paper, we propose an active vision strategy for the construction of a 3D map in a robot brain from its stereo eye images. We show that, by directly combining its action and the image change caused by the action, the robot can acquire an accurate 3D map in its brain. If the robot stereo cameras and its
We demonstrate three-dimensional (3D) super-resolution live-cell imaging through thick specimens (50-150 µm), by coupling far-field individual molecule localization with selective plane illumination microscopy (SPIM). The improved signal-to-noise ratio of selective plane illumination allows nanometric localization of single molecules in thick scattering specimens without activating or exciting molecules outside the focal plane. We report 3D super-resolution imaging of cellular spheroids. PMID:21983925
Cella Zanacchi, Francesca; Lavagnino, Zeno; Perrone Donnorso, Michela; Del Bue, Alessio; Furia, Laura; Faretta, Mario; Diaspro, Alberto
Purpose: The goal of this study was to evaluate the use of 3D ultrasound (3DUS) breast IGRT for electron and photon lumpectomy site boost treatments. Materials and methods: 20 patients with a prescribed photon or electron boost were enrolled in this study. 3DUS images were acquired both at time of simulation, to form a coregistered CT/3DUS dataset, and at the time of daily treatment delivery. Motion between treatment and simulation 3DUS datasets was calculated to determine IGRT shifts. Photon shifts were evaluated isocentrically, while electron shifts were evaluated in the beam's-eye-view. Volume differences between simulation and first boost fraction were calculated. Further, to control for the effect of change in seroma/cavity volume due to the time lapse between the two sets of images, interfraction IGRT shifts using the first boost fraction as the reference for all subsequent treatment fractions were also calculated. Results: For photon boosts, IGRT shifts were 1.1 ± 0.5 cm and 50% of fractions required a shift >1.0 cm. Volume change between simulation and boost was 49 ± 31%. Shifts when using the first boost fraction as reference were 0.8 ± 0.4 cm and 24% required a shift >1.0 cm. For electron boosts, shifts were 1.0 ± 0.5 cm and 52% fell outside the dosimetric penumbra. Interfraction analysis relative to the first fraction found shifts of 0.8 ± 0.4 cm, with 36% falling outside the penumbra. Conclusion: The lumpectomy cavity can shift significantly during fractionated radiation therapy. 3DUS can be used to image the cavity and correct for interfractional motion. Further studies to better define the protocol for clinical application of IGRT in breast cancer are needed.
A 3-D augmented reality navigation system using autostereoscopic images was developed for MRI-guided surgery. The 3-D images are created by employing an animated autostereoscopic image technique, integral videography (IV), which provides geometrically accurate 3-D spatial images and reproduces motion parallax without any supplementary eyeglasses or tracking devices. The spatially projected 3-D images are superimposed onto the surgical area and viewed via a half-silvered mirror. A fast and accurate spatial image registration method was developed for intraoperative IV image-guided therapy. Preliminary experiments showed that the total system error in patient-to-image registration was 0.90 +/- 0.21 mm, and the procedure time for guiding a needle toward a target was shortened by 75%. An animal experiment was also conducted to evaluate the performance of the system. The feasibility studies showed that augmented reality with the image overlay system could increase surgical instrument placement accuracy and reduce procedure time as a result of intuitive 3-D viewing. PMID:20172791
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television without adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation are focused on depth extraction from captured integral 3D images. The depth calculation method from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD is proposed and verified, together with a further improvement in its precision.
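The colour SSD matching mentioned above can be sketched as block matching that sums squared differences over all colour channels and picks the disparity with the lowest cost. The names and the integer-disparity search below are illustrative simplifications of the paper's method:

```python
import numpy as np

def colour_ssd(patch_a, patch_b):
    """Sum of squared differences accumulated over all colour channels."""
    d = patch_a.astype(float) - patch_b.astype(float)
    return float((d * d).sum())

def best_disparity(left, right, x, y, window, max_disp):
    """Pick the disparity minimising colour SSD for the patch at (x, y) in `left`.
    Assumes rectified images and x - max_disp - window//2 >= 0."""
    h = window // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    scores = [colour_ssd(ref, right[y - h:y + h + 1, x - d - h:x - d + h + 1])
              for d in range(max_disp + 1)]
    return int(np.argmin(scores))
```

The multiple-baseline refinement in the paper accumulates such costs over several camera baselines before taking the minimum, which suppresses ambiguous matches.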
A new 3-D grain size distribution analysis method for coarse aggregates using image analysis is presented. The method is designed for a laboratory environment and requires no sieving, only imaging of the aggregate. A luminous background material eliminates unwanted shadow effects. Particles are placed so that they are not touching; thus the images of the aggregates are of good quality, allowing
To date, 3D ultrasound imaging has been hampered by fractionated, job-specific computer procedures and the need for significant operator interaction. This paper presents our data processing algorithm for cardiac structure visualization from serial transesophageal echocardiographic (TEE) images. Major steps in the algorithm are: 1) image registration, 2) histogram operations for contrast enhancement, 3) noise and speckle filtering, 4) segmentation
Marek Belohlavek; David A. Foley; James B. Seward; James F. Greenleaf
We propose a 3D image display system which can present real scenes with realistic motion parallax. In the sensing system, a scene is observed using a camera matrix. An excellent stereo algorithm, SEA, which utilizes a 3×3 image matrix, recovers the depth information of the scene with the density and sharpness required for high-quality image generation. In the
Anaglyphs are an interesting way of generating stereoscopic images, especially in a cost-efficient and technically simple way. An anaglyph is generated by combining a stereo pair of images for the left and right views with an appropriate offset with respect to each other, where each image is shown using a different color in order to produce the 3D effect for users who
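The simplest red/cyan anaglyph combination takes the red channel from the left view and the green and blue channels from the right view. A minimal numpy sketch (assumes channel-last RGB arrays; production anaglyph pipelines add horizontal offset tuning and colour correction, which are omitted here):

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph: red channel from the left view, green+blue from the right.
    Viewed through red/cyan glasses, each eye sees only its own view."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out
```

Because each eye's filter passes only one source image, the brain fuses the pair into a single scene with depth, which is the 3D effect the abstract describes.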
We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques
Raffaella Fontana; Maria Chiara Gambino; Marinella Greco; Luciano Marras; Enrico M. Pampaloni; Anna Pelagotti; Luca Pezzati; Pasquale Poggi
Many image matching schemes are based on mapping coordinate locations, such as the locations of landmarks, in one image to corresponding locations in a second image. A new approach to this mapping (coordinate transformation), called the elastic body spline (EBS), is described. The spline is based on a physical model of a homogeneous, isotropic three-dimensional (3-D) elastic body. The model
Malcolm H. Davis; Alireza Khotanzad; Duane P. Flamig; Steven E. Harms
Computer graphics is important in developing fractal images visualizing the Mandelbrot and Julia sets of a complex function. Computer rendering is a central tool for obtaining attractive fractal images. We render 3D objects with the height of each complex point of a fractal image determined by the diverging speed of its orbit. A potential function helps approximate this speed. We propose
Young Bong Kim; Hyoung Seok Kim; Hong Oh Kim; Sung Yong Shin
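The height-from-divergence-speed idea can be sketched with the escape-time potential log|z_n|/2^n, a common approximation of the potential function for the Mandelbrot set (parameters and function names here are illustrative, not from the paper):

```python
import numpy as np

def mandelbrot_height(width=200, height=150, max_iter=60,
                      re=(-2.2, 0.8), im=(-1.2, 1.2)):
    """Height field for 3D rendering of the Mandelbrot set: points that stay
    bounded get height 0; escaping points get a smooth potential-based height."""
    x = np.linspace(re[0], re[1], width)
    y = np.linspace(im[0], im[1], height)
    c = x[None, :] + 1j * y[:, None]
    z = np.zeros_like(c)
    h = np.zeros(c.shape)
    alive = np.ones(c.shape, bool)
    for n in range(1, max_iter + 1):
        z[alive] = z[alive] ** 2 + c[alive]
        escaped = alive & (np.abs(z) > 2.0)
        # potential ~ log|z| / 2^n approximates how fast the orbit diverges
        h[escaped] = np.log(np.abs(z[escaped])) / 2.0 ** n
        alive &= ~escaped
    return h
```

Feeding the resulting height field to any surface renderer gives the kind of relief-style fractal image the abstract describes, with fast-escaping points standing higher.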
Range-gated laser imaging technology was proposed in 1966 by LF Gillespie of the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect targets effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Since the beginning of this century, as the hardware technology has matured, the technology has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometric structure of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice. But to invert the information of 3-D space, we need to know the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, comprising the analysis of the camera's internal parameters and external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of a zoom lens system.
After reviewing camera calibration techniques comprehensively, a classic line-based calibration method is selected. A one-to-one correspondence between the visual field and the focal length of the system is obtained, offering effective field-of-view information for matching the imaging field and the illumination field in range-gated 3-D imaging. On the basis of the experimental results, combined with depth-of-field theory, the application of camera calibration in range-gated 3-D imaging is further studied.
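The two quantities the calibration links, gate delay to slice range and focal length to field of view, reduce to simple relations under an idealized pinhole model. A hedged sketch (the helper names and the sensor-width parameterization are assumptions for illustration):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gate_delay_to_range_m(delay_s):
    """Round-trip gating: distance of the imaged slice for a laser-to-gate delay."""
    return C * delay_s / 2.0

def fov_rad(sensor_width_m, focal_length_m):
    """Full horizontal field of view of an ideal pinhole camera."""
    return 2.0 * math.atan(sensor_width_m / (2.0 * focal_length_m))

def pixel_footprint_m(range_m, fov, n_pixels):
    """Lateral size of one pixel's patch on a slice at the given range."""
    return 2.0 * range_m * math.tan(fov / 2.0) / n_pixels
```

Together these show why the zoom-lens focal length must be calibrated: the same gate delay fixes the slice distance, but the lateral extent each pixel covers at that distance depends entirely on the field of view.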
Pulmonary nodules and ground glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearance of pulmonary nodules and ground glass opacities shows a relationship with different lung diseases. According to the corresponding characteristics of a lesion, pertinent segmentation methods and quantitative analysis are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired by thin-slice HRCT and has better quantitative precision for clinical diagnosis. This work presents a computer-aided diagnosis component to segment 3D disease areas of nodules and ground glass opacities in lung CT images, and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurement information, which may provide more features and information to radiologists in clinical diagnosis.
Mutual information has been used for matching and registering 3D models to 2D images. However, in Viola's original framework (1), surface albedo variance is assumed to be minimal when measuring similarity between 3D models and 2D image data using mutual information. In reality, most objects have textured surfaces with different albedo values across their surfaces, and direct
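Mutual information between a rendered model view and an image is commonly estimated from a joint intensity histogram. A compact numpy sketch of that estimator (this is the histogram variant, not Viola's stochastic-sampling estimator, and the bin count is an illustrative choice):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI between two images from their joint intensity histogram, in nats.
    High MI means one image's intensities predict the other's."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of img_a
    py = p.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

Registration then searches over pose parameters for the transform that maximizes this score; the abstract's point is that untextured-albedo assumptions make the score less reliable for real textured objects.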
Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography, obstetrics and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.
Castro-Pareja, Carlos R.; Dandekar, Omkar S.; Shekhar, Raj
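A software reference point for the FPGA pipeline is classic Perona-Malik diffusion extended to three dimensions. A minimal numpy version (the exponential edge-stopping function is one of several standard choices, and the wrap-around boundaries from np.roll are a demo simplification):

```python
import numpy as np

def anisotropic_diffusion_3d(vol, n_iter=10, kappa=0.1, step=1.0 / 6.0):
    """Perona-Malik diffusion on a 3D volume: smooths speckle in flat regions
    while the edge-stopping function g = exp(-(|grad|/kappa)^2) suppresses
    flux across strong boundaries, preserving them."""
    v = vol.astype(float).copy()
    for _ in range(n_iter):
        total = np.zeros_like(v)
        for axis in range(3):
            for shift in (1, -1):
                diff = np.roll(v, shift, axis=axis) - v  # neighbor difference
                g = np.exp(-(diff / kappa) ** 2)         # edge-stopping weight
                total += g * diff
        v += step * total
    return v
```

Each iteration touches every voxel six times, which is exactly the arithmetic load the abstract cites as the reason real-time software filtering of full 3D ultrasound volumes is impractical.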
Digital speckle correlation techniques have already been proven to be an accurate displacement analysis tool for a wide range of applications. With the use of two cameras, three-dimensional measurements of contours and displacements can be carried out, and the simple setup opens a wide range of applications. Rapid developments in the fields of digital imaging and computer technology extend these measurement methods to high-speed deformation and strain analysis, e.g. in material testing, fracture mechanics, advanced materials and component testing. The high resolution of the deformation measurements in space and time also makes the approach attractive for vibration analysis of objects. Since the system determines the absolute position and displacements of the object in space, it is capable of measuring high amplitudes and even objects with rigid-body movements. The absolute resolution depends on the field of view and is scalable. Calibration of the optical setup is a crucial point, which is discussed in detail. Examples of the analysis of harmonic vibrations and transient events from material research and industrial applications are presented; the results show typical features of the system.
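At its core, digital speckle correlation tracks a small reference subset by maximizing zero-normalised cross-correlation over candidate displacements. An integer-pixel sketch (real DIC systems add subpixel interpolation and subset shape functions; names here are illustrative):

```python
import numpy as np

def track_subset(ref_img, def_img, y, x, half, search):
    """Locate the reference subset centered at (y, x) in the deformed image by
    maximising zero-normalised cross-correlation over integer displacements."""
    f = ref_img[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    f = (f - f.mean()) / f.std()
    best, best_uv = -np.inf, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            g = def_img[y + v - half:y + v + half + 1,
                        x + u - half:x + u + half + 1].astype(float)
            g = (g - g.mean()) / g.std()
            score = (f * g).mean()  # ZNCC in [-1, 1]; 1 means a perfect match
            if score > best:
                best, best_uv = score, (u, v)
    return best_uv  # integer (u, v) displacement; subpixel needs interpolation
```

Repeating this for a grid of subsets yields the dense displacement field from which strains, contours and vibration amplitudes are derived; the stereo (two-camera) case triangulates the matched positions into 3D.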
Purpose: To develop a non-invasive method for quantification of blood and pigment distributions across the posterior pole of the fundus from multispectral images using a computer-generated reflectance model of the fundus. Methods: A computer model was developed to simulate light interaction with the fundus at different wavelengths. The distribution of macular pigment (MP) and retinal haemoglobins in the fundus was obtained by comparing
A Calcagni; J M Gibson; I B Styles; E Claridge; F Orihuela-Espina
The use of 3D surface imaging technology is becoming increasingly common in craniofacial clinics and research centers. Due to fast capture speeds and ease of use, 3D digital stereophotogrammetry is quickly becoming the preferred facial surface imaging modality. These systems can serve as an unparalleled tool for craniofacial surgeons, providing an objective digital archive of the patient's face without exposure to radiation. Acquiring consistent high-quality 3D facial captures requires planning and knowledge of the limitations of these devices. Currently, there are few resources available to help new users of this technology with the challenges they will inevitably confront. To address this deficit, this report highlights a number of common issues that can interfere with the 3D capture process and offers practical solutions to optimize image quality.
In this paper, we propose a 3D sensing and visualization of micro-objects using an axially distributed image capture system. In the proposed method, the micro-object is optically magnified and the axial images of magnified micro-object are recorded using axially distributed image capture. The recorded images are used to visualize the 3D scene using the computational reconstruction algorithm based on ray back-projection. To show the usefulness of the proposed method, we carry out preliminary experiments and present the results.
Independent of overall bone density, 3D trabecular bone (TB) architecture has been shown to play an important role in conferring strength to the skeleton. Advances in imaging technologies such as micro-computed tomography (CT) and micro-magnetic resonance (MR) now permit in vivo imaging of the 3D trabecular network in the distal extremities. However, various experimental factors preclude a straightforward analysis of the 3D trabecular structure on the basis of these in vivo images. For MRI, these factors include blurring due to patient motion, partial volume effects, and measurement noise. While a variety of techniques have been developed to deal with the problem of patient motion, the second and third issues are inherent limitations of the modality. To address these issues, we have developed a series of robust processing steps to be applied to a 3D MR image and leading to a 3D skeleton that accurately represents the trabecular bone structure. Here we describe the algorithm, provide illustrations of its use with both specimen and in vivo micro-MR images, and discuss the accuracy and quantify the relationship between the original bone structure and the resulting 3D skeleton volume.
Compared to retrieval using global image features, features extracted from regions of interest (ROIs) that reflect distribution patterns of abnormalities are more beneficial for content-based medical image retrieval (CBMIR) systems. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve retrieval accuracy, we propose a retrieval method using 3D features, including geometric features such as Shape Index (SI) and Curvedness (CV) and texture features derived from the 3D Gray Level Co-occurrence Matrix, extracted from 3D ROIs and built on our previous 2D medical image retrieval system. The system was evaluated with 20 volume CT datasets for colon polyp detection. Preliminary experiments indicated that integrating morphological features with texture features could greatly improve retrieval performance. The retrieval results using features extracted from 3D ROIs accorded better with diagnoses from optical colonoscopy than those based on features from 2D ROIs. With the test database of images, the average accuracy rate of the 3D retrieval method was 76.6%, indicating its potential value in clinical application.
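The Shape Index and Curvedness named above are Koenderink's surface descriptors, computed pointwise from the principal curvatures of the colon wall. A direct transcription (the sign convention depends on the chosen surface-normal orientation):

```python
import math

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1]; k1 >= k2 are the principal curvatures.
    -1 ~ spherical cup, 0 ~ saddle, +1 ~ spherical cap (polyp-like bump)."""
    if k1 == k2:  # umbilic point: pure cap or cup (or flat)
        return math.copysign(1.0, k1) if k1 != 0 else 0.0
    return (2.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

def curvedness(k1, k2):
    """Koenderink curvedness: overall magnitude of surface bending."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)
```

Because polyps present as cap-like protrusions, voxels with shape index near +1 and moderate curvedness are exactly the geometric signature such a 3D ROI feature set is designed to capture.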
One of the main issues in the application of multiple-point statistics (MPS) to the simulation of three-dimensional (3D) blocks is the lack of a suitable 3D training image. In this work, we compare three methods of overcoming this issue using information coming from bidimensional (2D) training images. One approach is based on the aggregation of probabilities. The other approaches are novel. One relies on merging the lists obtained using the impala algorithm from diverse 2D training images, creating a list of compatible data events that is then used for the MPS simulation. The other (s2Dcd) is based on sequential simulations of 2D slices constrained by the conditioning data computed at the previous simulation steps. These three methods are tested on the reproduction of two 3D images that are used as references, and on a real case study where two training images of sedimentary structures are considered. The tests show that it is possible to obtain 3D MPS simulations with at least two 2D training images. The simulations obtained, in particular those obtained with the s2Dcd method, are close to the references, according to a number of comparison criteria. The CPU time required to simulate with the method s2Dcd is from two to four orders of magnitude smaller than the one required by a MPS simulation performed using a 3D training image, while the results obtained are comparable. This computational efficiency and the possibility of using MPS for 3D simulation without the need for a 3D training image facilitates the inclusion of MPS in Monte Carlo, uncertainty evaluation, and stochastic inverse problems frameworks.
Summary: Deconvolution techniques have been widely used for restoring the 3-D quantitative information of an unknown specimen observed using a wide-field fluorescence microscope. Deconv, an open-source deconvolution software package, was developed for 3-D quantitative fluorescence microscopy imaging and was released under the GNU Public License. Deconv provides numerical routines for simulation of a 3-D point spread function and deconvolution routines implementing three constrained iterative deconvolution algorithms: one based on a Poisson noise model and two based on a Gaussian noise model. These algorithms are presented and evaluated using synthetic images and experimentally obtained microscope images, and the use of the library is explained. Deconv allows users to assess the utility of these deconvolution algorithms and to determine which are suited to a particular imaging application. The design of Deconv makes it easy for deconvolution capabilities to be incorporated into existing imaging applications.
SUN, Y.; DAVIS, P.; KOSMACEK, E. A.; IANZINI, F.; MACKEY, M. A.
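The abstract does not name its Poisson-model algorithm, but the classical constrained iterative choice under a Poisson noise model is Richardson-Lucy. A compact FFT-based sketch (assumes the PSF is centered and the same shape as the data; Deconv's own routines are more elaborate, so this is a reference implementation of the general technique, not of the package):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=30):
    """Richardson-Lucy deconvolution (Poisson noise model) via FFT convolution.
    `psf` must be centered and the same shape as `blurred`."""
    psf = psf / psf.sum()
    otf = np.fft.rfftn(np.fft.ifftshift(psf), s=blurred.shape)
    def conv(a, flip=False):
        o = np.conj(otf) if flip else otf  # conj(otf) = correlation (adjoint)
        return np.fft.irfftn(np.fft.rfftn(a) * o, s=blurred.shape)
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(n_iter):
        ratio = blurred / np.maximum(conv(est), 1e-12)
        est *= conv(ratio, flip=True)  # multiplicative update keeps est >= 0
    return est
```

The multiplicative update is what makes the iteration "constrained": starting from a positive estimate, it preserves non-negativity and approximately conserves total flux at every step.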
We developed a glasses-free 3D stereoscopic display using an LCD display panel and a special grating film for stereoscopic viewing. The display screen is divided in half so that the left and right regions provide the stereoscopic images for the left and right eyes. Because the two stereoscopic images are not in the same position, it is difficult for the observer to fuse the 3D image by free stereoviewing. The grating film solves this problem because it shifts both the left and right images to the same position. Moreover, the grating film enables glasses-free 3D viewing because of its view-control effect. As a result, the observer can watch the overlapped stereoscopic images for the left and right eyes without special glasses such as polarized glasses.
Full-field optical coherence tomography (FF-OCT) is an emerging non-invasive, label-free, interferometric technique for 3D imaging of biomedical objects with micron-scale resolution. The conventional phase-shifting technique in FF-OCT involves mechanically moving a mirror to change the optical path difference for obtaining en face OCT images, but with the use of a broadband source in FF-OCT, the phase shifts of different spectral components are not the same, resulting in ambiguities in 3-D image reconstruction. In this study, we propose to utilize a ferroelectric liquid crystal (FLC) device-controlled geometric phase-shifting technique to realize achromatic phase shifting for rapid 3-D imaging. We demonstrate this FLC-controlled FF-OCT technique by imaging biological samples (e.g., onion tissue).
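With four frames phase-stepped by π/2, the en face amplitude and phase follow from the standard four-step formulas. An idealized monochromatic sketch (the wavelength-dependent phase errors discussed above are exactly what the achromatic geometric phase shifter avoids; function names are illustrative):

```python
import numpy as np

def en_face_amplitude(i1, i2, i3, i4):
    """Tomographic amplitude from four interferograms phase-stepped by pi/2:
    I_k = I0 + A*cos(phi + k*pi/2), so A = sqrt((I1-I3)^2 + (I2-I4)^2)/2."""
    return 0.5 * np.sqrt((i1 - i3) ** 2 + (i2 - i4) ** 2)

def en_face_phase(i1, i2, i3, i4):
    """Interferometric phase from the same four frames."""
    return np.arctan2(i4 - i2, i1 - i3)
```

Applied pixelwise to four camera frames, `en_face_amplitude` rejects the incoherent background I0 and keeps only the interference term, which is the en face OCT section.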
A new architecture for a 3-D multiview camera and projector is presented. The camera optical system consists of a single wide-aperture objective, a secondary (small) objective, a field lens, and a scanner. The projector additionally includes a rear-projection pupil-forming screen. The system is intended for sequential acquisition and projection of 2-D perspective images while the small working aperture slides across the opening of the large objective lens or spherical mirror. Both horizontal and full-parallax imaging are possible. The system can transmit 3-D images in real time through fiber bundles, free space, and video transmission lines, and can also be used for real-time conversion of infrared 3-D images. With this system, clear multiview stereoscopic images of real scenes can be displayed with a 30-degree viewing-zone angle.
Shestak, Sergey A.; Son, Jung-Young; Jeon, Hyung-Wook; Komar, Victor G.
This paper presents the implementation of a 3-D architecture for a biomedical-imaging system based on a multilayered system-on-system structure. The architecture consists of a complementary metal-oxide semiconductor image sensor layer, memory, 3-D discrete wavelet transform (3D-DWT), 3-D Advanced Encryption Standard (3D-AES), and an RF transmitter as an add-on layer. Multilayer silicon (Si) stacking permits fabrication and optimization of individual layers
Three dimensional mass spectral imaging (3D MSI) is an exciting field that grants the ability to study a broad mass range of molecular species ranging from small molecules to large proteins by creating lateral and vertical distribution maps of select compounds. Although the general premise behind 3D MSI is simple, factors such as choice of ionization method, sample handling, software considerations and many others must be taken into account for the successful design of a 3D MSI experiment. This review provides a brief overview of ionization methods, sample preparation, software types and technological advancements driving 3D MSI research of a wide range of low- to high-mass analytes. Future perspectives in this field are also provided to conclude that the outlook for 3D MSI is positive and promises ever-growing applications in the biomedical field with continuous developments of this powerful analytical tool. PMID:21320052
We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and provides continuous motion parallax. We have applied our technology to 15.4-inch displays, realizing a horizontal resolution of 480 with 12 parallaxes through a mosaic pixel arrangement of the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking to reproduce natural 3-D images on the flatbed display, we developed proprietary software; fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for viewers is very important, so we measured their effects on visual function and evaluated their biological effects. For example, accommodation and convergence were measured simultaneously, and various biological measures were taken before and after the task of watching 3-D images. We found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.
We describe an approach for external beam radiotherapy of breast cancer that utilizes the three-dimensional (3D) surface information of the breast. The surface data of the breast are obtained from a 3D optical camera rigidly mounted on the ceiling of the treatment vault. This 3D camera utilizes light in the visible range and therefore introduces no ionizing radiation to the patient. In addition to the surface topographical information of the treated area, the camera also captures gray-scale information that is overlaid on the 3D surface image. This allows us to visualize the skin markers and automatically determine the isocenter position and the beam angles in the breast tangential fields. The field sizes and shapes of the tangential, supraclavicular, and internal mammary gland fields can all be determined from the 3D surface image of the target. A least-squares method is first introduced for the tangential-field setup that is useful for compensating for changes in the target shape. The entire process of capturing the 3D surface data and subsequently calculating the beam parameters typically requires less than 1 min. Our tests on phantom experiments and patient images have achieved an accuracy of 1 mm in shift and 0.5 degrees in rotation. Importantly, the target shape and position changes in each treatment session can both be corrected through this real-time image-guided system. PMID:15719956
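The abstract does not give the least-squares formulation; one standard way to compute a rigid setup correction from two 3D surface point sets is the SVD-based Kabsch solution, sketched below. This is an illustrative stand-in, not necessarily the authors' exact method.

```python
import numpy as np

def kabsch(source, target):
    """Least-squares rigid alignment (rotation R and translation t) so
    that R @ source + t best matches target, via SVD of the
    cross-covariance matrix."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy check: rotate a surface patch by 0.5 degrees and shift it 1 mm,
# then recover the transform exactly from the point correspondences.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3)) * 10
theta = np.deg2rad(0.5)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.0, 0.0])
moved = pts @ R_true.T + t_true
R, t = kabsch(pts, moved)
```

With noiseless correspondences the recovered rotation and translation match the applied transform to machine precision, which is why this kind of closed-form fit is attractive for fast per-session setup correction.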
This paper addresses dynamic visual image modeling of 3D synthetic scenes using dynamic multichannel binocular visual images over a mobile self-organizing network. Technologies for 3D modeling of synthetic scenes are widely used across industries. The main purpose of this paper is to use multiple networks of dynamic visual monitors and sensors to observe an unattended area, exploiting the advantages of mobile networks in rural areas to improve existing mobile information services and provide personalized information services. The goal of the display is a faithful representation of the synthetic scene. Using low-power dynamic visual monitors and temperature/humidity or GPS sensors installed in the node equipment, monitoring data are sent at scheduled times. Then, through the mobile self-organizing network, the 3D model is rebuilt by synthesizing the returned images. On this basis, we formalize a novel algorithm for multichannel binocular visual 3D images based on fast 3D modeling. Taking advantage of these low-priced mobile devices, mobile self-organizing networks can gather large amounts of video from locations unsuitable for or unreachable by human observers and accurately synthesize the 3D scene. This application will play a great role in promoting 3D modeling in agriculture.
Gao, Li; Yan, Juntao; Li, Xiaobo; Ji, Yatai; Li, Xin
An exciting application of using Selective Laser Sintering (SLS) to produce solid models of microscopic specimens imaged with a laser scanning confocal microscope is presented. The SLS model, fabricated by sintering together successive layers of a fine powder with a computer-controlled scanning laser, is a true 3-D magnification of the microscopic sample. The 3-D models produced are accurate representations of the data and can be handled and manipulated to evaluate surface details and morphology. The models provide an extremely powerful method for 3-D data visualization and tactilization. Three-dimensional digital image processing is performed on the microscopic images to prepare the data for the selective laser sintering process. The image processing techniques for preparing 3-D images of both translucent and opaque specimens for laser sintering are presented. For translucent specimens, the image processing removes noise and "fills in" inclusions and cavities within the specimen data in order to produce a structurally sound model. When imaging an opaque specimen, an image of the upper surface alone is acquired; image processing is used to remove noise and to fill the volume under the specimen surface. Sample models of a dandelion (Taraxicum officinale Compositae) pollen grain and the surface of a U.S. penny are presented. These models provide examples of both the translucent and opaque data types. PMID:8329596
Bartels, K A; Bovik, A C; Crawford, R C; Diller, K R; Aggarwal, S J
We have developed a 3D image processing and display technique that includes image resampling, modification of MIP, volume rendering, and fusion of the MIP image with the volume-rendered image. This technique facilitates visualization of the 3D spatial relationship between vasculature and surrounding organs by overlaying the MIP image on the volume-rendered image of the organ. We applied this technique to MR brain image data to produce an MR angiogram overlaid on a 3D volume-rendered image of the brain. The MIP technique was used to visualize the vasculature of the brain, and volume rendering was used to visualize the other brain structures. The two images are fused, after adjusting the contrast and brightness levels of each, so that both the vasculature and brain structure are well visualized, either by selecting the maximum value of each image or by assigning a different color table to each image. The resulting image visualizes both the brain structure and vasculature simultaneously, allowing physicians to inspect their relationship more easily. The presented technique will be useful for neurosurgical planning.
Kim, Jong H.; Yeon, Kyoung M.; Han, Man C.; Lee, Dong Hyuk; Cho, Han I.
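The max-value fusion rule described above can be sketched as follows; the toy data and function names are mine, not from the paper.

```python
import numpy as np

def norm01(img):
    """Scale an image to [0, 1] (the contrast/brightness adjustment step)."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

def fuse_mip(volume, rendered):
    """Fuse the MIP of a volume (vasculature) with a pre-rendered 2D view
    of the organ by taking the per-pixel maximum -- one of the two fusion
    rules the abstract mentions (the other assigns separate color tables)."""
    mip = volume.max(axis=0)            # maximum intensity projection along view axis
    return np.maximum(norm01(mip), norm01(rendered))

# Toy data: one bright "vessel" voxel over a rendered view with one bright pixel.
vol = np.zeros((4, 8, 8)); vol[2, 3, 5] = 10.0
rend = np.zeros((8, 8)); rend[0, 0] = 1.0
fused = fuse_mip(vol, rend)
```

Both the vessel signal and the rendered structure survive in the fused image because each pixel keeps whichever source is brighter after normalization.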
In this paper, new techniques for determining the focus position of a disease in a gamma knife operation are presented. In these techniques, a transparent 3D color image of the human body organ is reconstructed using a new three-dimensional reconstruction method, and then the position, area, and volume of the disease focus, such as a cancer or tumor, are calculated for use in the gamma knife operation. The CT pictures are input into a digital image processing system, the useful information is extracted, and the original data are obtained. The transparent 3D color image is then reconstructed from these original data. Using this transparent 3D color image, the positions of the organ and the disease focus are determined in a coordinate system. While the 3D image is reconstructed, the area and volume of the organ and disease focus can be calculated at the same time. Practical application shows that the positions of the organ and the disease focus can be determined exactly using the transparent 3D color image, which is very useful in gamma knife and other surgical operations. The techniques presented in this paper have great application value.
Confocal STEM is a new electron microscopy imaging mode. In a microscope with spherical aberration-corrected electron optics, it can produce three-dimensional (3D) images by optical sectioning. We have adapted the linear imaging theory of light confocal microscopy to confocal STEM and use it to suggest optimum imaging conditions for a confocal STEM limited by fifth-order spherical aberration. We predict that
Abstract: We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labelling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette
A robotic three-dimensional (3D) scanning superconducting quantum interference device (SQUID) imaging system was developed for practical nondestructive evaluation (NDE) applications. The major feature of this SQUID NDE system is that the SQUID sensor itself scans in 3D by traveling over the surface of an object during testing without the need for magnetic shielding. This imaging system consists of (i) DC-SQUID gradiometer for effective movement of the sensor, (ii) SQUID sensor manipulator utilizing an articulated-type robot used in industry, (iii) laser charge-coupled-device (CCD) displacement sensor to measure the 3D coordinates of points on the surface of the object, and (iv) computer-aided numerical interpolation scheme for 3D surface reconstruction of the object. The applicability of this system for NDE was demonstrated by successfully detecting artificial damage of cylindrical-shaped steel tubes.
Isawa, K.; Nakayama, S.; Ikeda, M.; Takagi, S.; Tosaka, S.; Kasai, N.
Postmortem investigations are increasingly assisted by three-dimensional multi-slice computed tomography (3D-MSCT), which has become more available to forensic pathologists over the past 20 years. In cases of ballistic wounds, 3D-MSCT can provide an accurate description of the bullet location and bone fractures and, more interestingly, a clear visualization of the intracorporeal trajectory (bullet track). These forensic medical examinations can be combined with three-dimensional bullet trajectory reconstructions created by forensic ballistics experts. These case reports present the implementation of three-dimensional methods and the results of 3D crime scene reconstruction in two cases. The authors highlight the value of collaboration between police forensic experts and forensic medicine institutes through the incorporation of 3D-MSCT data into a crime scene reconstruction, which is of great interest in forensic science as a clear visual communication tool between experts and the court. PMID:23931960
This work is a contribution to the problem of localizing key cerebral structures in 3D MRIs and its quantitative evaluation. In pursuing it, the cooperation between an image-based segmentation method and a hierarchical deformable registration approach has been considered. The segmentation relies on two main processes: homotopy modification and contour decision. The first is achieved by a marker extraction stage in which homogeneous 3D regions of an image, I(s), from the data set are identified. These regions, M(I), are obtained by combining information from a deformable atlas, achieved by warping eight previously labeled maps onto I(s). The goal of the decision stage is then to precisely locate the contours of the 3D regions set by the markers. This contour decision is performed by a 3D extension of the watershed transform. The anatomical structures taken into consideration and embedded into the atlas are the brain, ventricles, corpus callosum, cerebellum, right and left hippocampus, medulla, and midbrain. The hybrid method operates fully automatically and in 3D, successfully providing segmented brain structures. The quality of the segmentation has been studied in terms of the detected volume ratio using the kappa statistic and ROC analysis. Results of the method are shown and validated on a 3D MRI phantom. This study forms part of ongoing long-term research aiming at the creation of a 3D probabilistic multi-purpose anatomical brain atlas.
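The contour-decision step, a watershed flood from extracted markers, can be illustrated with a 2D toy version. The paper's transform is 3D; this priority-flood sketch is a generic illustration, not the authors' code.

```python
import heapq
import numpy as np

def marker_watershed(image, markers):
    """Marker-based watershed by priority flooding: unlabeled pixels are
    absorbed into the marked basins in order of increasing intensity, so
    basin boundaries settle on intensity ridges."""
    labels = markers.copy()
    rows, cols = image.shape
    heap = [(int(image[r, c]), r, c) for r, c in zip(*np.nonzero(markers))]
    heapq.heapify(heap)
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and labels[rr, cc] == 0:
                labels[rr, cc] = labels[r, c]        # inherit the basin label
                heapq.heappush(heap, (int(image[rr, cc]), rr, cc))
    return labels

# Two markers separated by an intensity ridge: each side becomes one basin.
img = np.zeros((7, 7), dtype=np.uint8); img[:, 3] = 255
markers = np.zeros((7, 7), dtype=np.int32)
markers[3, 0], markers[3, 6] = 1, 2
labels = marker_watershed(img, markers)
```

Because the ridge pixels have the highest cost, they are flooded last, so the two marked regions stop growing exactly at the ridge — the same mechanism the homotopy-modified 3D watershed exploits.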
A 3D laser imaging test facility was developed using a modular design approach. The test facility provides a complete, controllable, and repeatable experimental environment, and supports research on and simulation of 3D imaging LiDAR systems. It consists of five major parts: an open 3D-imaging LiDAR, a target simulator, a far-field emulator, a background-light environment simulation system, and a large-FOV stereo vision system. The facility supports three working modes: unit-module analysis; analysis of the accuracy and imaging mechanism of the LiDAR system; and moving-target detection and environmental modeling. The open test facility continually evolves to meet the expanding role of 3D laser imaging applications.
In this paper, lossy compression of 3D MR images of the human brain is combined with a segmentation algorithm, in the context of an interactive brain sulci delineation application. The influence of compression losses is analyzed according to the segmentation results. Lossy compression is performed by subband coding, leading to a multiresolution representation of the image; the wavelets are adapted to the statistics of medical images. The decompressed images are segmented by the directional watershed transform (DWST), providing an accurate 3D segmentation of the brain. The impact of losses on the quality of the segmentation is estimated both by a 3D Chamfer distance function and by visual inspection. In this article, we show that lossy compression can be combined with such applications, providing a high compression ratio without significantly altering the results of the application.
Piscaglia, Patrick; Thiran, Jean-Philippe; Macq, Benoit M.; Michel, Christian
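Subband coding of this kind can be illustrated with a one-level 2D Haar decomposition — a simple stand-in for the adapted wavelets the paper uses. The lossy step zeroes small detail coefficients before reconstruction; without thresholding, the transform reconstructs perfectly.

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar analysis: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2] + x[1::2]) / 2          # row-pair averages
    d = (x[0::2] - x[1::2]) / 2          # row-pair differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

# Lossy compression sketch: zero detail coefficients below a threshold.
rng = np.random.default_rng(1)
img = rng.normal(size=(8, 8))
ll, lh, hl, hh = haar2d(img)
thr = 0.5
lossy = ihaar2d(ll,
                np.where(np.abs(lh) < thr, 0.0, lh),
                np.where(np.abs(hl) < thr, 0.0, hl),
                np.where(np.abs(hh) < thr, 0.0, hh))
```

Zeroed detail coefficients are what the entropy coder compresses away; the reconstruction error per pixel is bounded by the threshold times the number of zeroed bands, which is why moderate thresholds barely perturb a downstream segmentation.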
3-D reconstruction of the electron scattering intensity of a virus from cryo electron microscopy is essentially a 3-D tomography problem in which the orientation of the 2-D projections is unknown. Many biological problems concern mixtures of different types of virus particles or mixtures of different maturation states of the same type of virus particle. For a variety of reasons, especially low SNR, it can be very challenging to label the type or state shown in an individual image. Algorithms capable of computing multiple reconstructions, one for each type or state, based on images which are not labeled according to type or state, are described and demonstrated on experimental images.
Lee, Junghoon; Zheng, Yili; Doerschuk, Peter C.; Tang, Jinghua; Johnson, John E.
Photoacoustic imaging is exquisitely sensitive to blood and can infer blood oxygenation from multispectral images. In this work we present multispectral real-time 3D photoacoustic imaging of blood phantoms. We used a custom-built 128-channel hemispherical transducer array coupled to two Nd:YAG-pumped OPO laser systems synchronized to provide double-pulse excitation at 680 nm and 1064 nm wavelengths, all during a triggered series of ultrasound pressure measurements lasting less than 300 µs. The results demonstrated that 3D PAI is capable of differentiating between oxygenated and deoxygenated blood at high speed with mm-level resolution.
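Inferring oxygenation from the two wavelengths amounts to linear spectral unmixing of the absorption coefficients. The sketch below is illustrative: the extinction values are rough stand-ins, not calibrated tabulated data, and a real pipeline would fit fluence-corrected absorption.

```python
import numpy as np

def unmix_so2(mu_a, extinction):
    """Estimate oxygen saturation sO2 from multiwavelength photoacoustic
    absorption by solving extinction @ [C_HbO2, C_Hb] = mu_a in the
    least-squares sense. `extinction` rows are wavelengths, columns are
    [HbO2, Hb]; real values must come from published tables."""
    conc, *_ = np.linalg.lstsq(extinction, mu_a, rcond=None)
    c_hbo2, c_hb = conc
    return c_hbo2 / (c_hbo2 + c_hb)

# Rough, illustrative extinction coefficients at 680 nm and 1064 nm:
# Hb dominates at 680 nm, HbO2 at 1064 nm.
E = np.array([[ 274.0, 2407.0],    # 680 nm:  [eps_HbO2, eps_Hb]
              [1058.0,  347.0]])   # 1064 nm: [eps_HbO2, eps_Hb]

# Forward-simulate a voxel that is 75% oxygenated, then recover sO2.
c_true = np.array([0.75, 0.25])
mu_a = E @ c_true
so2 = unmix_so2(mu_a, E)
```

With two wavelengths the system is square and the recovery is exact for noiseless data; additional wavelengths overdetermine the system and improve robustness to noise.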
We have applied image analysis methods to the assessment of human kidney perfusion based on 3D dynamic contrast-enhanced (DCE) MRI data. This approach consists of 3D non-rigid image registration of the kidneys and fuzzy C-means classification of kidney tissues. The proposed registration method reduced motion artifacts in the dynamic images and improved the analysis of kidney compartments (cortex, medulla, and cavities). The dynamic intensity curves show the successive transition of the contrast agent through the kidney compartments. The proposed method for motion correction and kidney compartment classification may be used to improve the validity and usefulness of further model-based pharmacokinetic analysis of kidney function.
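The fuzzy C-means step can be sketched on 1D intensity samples; this is a generic textbook implementation, not the authors' code, and the deterministic percentile initialization is my choice.

```python
import numpy as np

def fuzzy_cmeans_1d(x, n_clusters=2, m=2.0, iters=50):
    """Plain fuzzy C-means on 1D intensity samples. Each sample gets a
    soft membership in every cluster; centers are membership-weighted
    means, iterated to convergence."""
    centers = np.percentile(x, np.linspace(10, 90, n_clusters))
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12    # sample-center distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)             # memberships sum to 1
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return centers, u

# Two well-separated intensity populations: centers converge near 0 and 10,
# mimicking two tissue classes with distinct enhancement levels.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.1, 100), rng.normal(10.0, 0.1, 100)])
centers, u = fuzzy_cmeans_1d(x, 2)
```

The soft memberships `u` are what make FCM attractive for kidney compartments: voxels at cortex/medulla boundaries keep partial membership in both classes instead of a hard label.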
Industrial robots are commonly used for physically stressful jobs in complex environments, where collisions with heavy, highly dynamic machines must be prevented. For this reason the operational range has to be monitored precisely and reliably. The advantage of the SwissRanger® SR-3000 is that it delivers intensity images and 3D information of the same scene simultaneously, which conveniently allows 3D monitoring. This makes automatic real-time collision prevention within the robot's working space possible by working with 3D coordinates.
This paper proposes a new real-time method of estimating human postures in 3D from trinocular images. The proposed method extracts feature points of the human body by applying a type of function analysis to contours of human silhouettes. To overcome self-occlusion problems, dynamic compensation is carried out using the Kalman filter and all feature points are tracked. The 3D coordinates
We propose a novel method for extracting natural hand parameters from monocular image sequences. The purpose is to improve a vision-based sign language recognition system by providing detail information about the finger constellation and the 3D hand posture. Therefore, the hand is modelled by a set of 2D appearance models, each representing a limited variation range of 3D hand shape
(This talk is given on behalf of the EISCAT Scientific Association and the EISCAT_3D Design Team) EISCAT_3D is a new kind of three-dimensional imaging radar for high-latitude atmosphere and geospace studies, located in northern Scandinavia. The facility will consist of multiple large phased-array antenna transmitters/receivers in three countries, comprising some 100 000 individual antenna elements. The new radars will measure
To investigate the relationship between NEC and image quality in 2D and 3D PET, while simultaneously optimizing the 3D low energy threshold (LET), we performed a series of phantom measurements. The phantom consisted of 46 fillable 1 cm hollow spheres on a random grid inside a water-filled oval cylinder, 21 cm tall, 36 cm wide, and 40 cm long. The
John W. Wilson; Timothy G. Turkington; Josh M. Wilson; James G. Colsher; Steven G. Ross
There are many applications, such as rapid prototyping, simulations and presentations, where non-professional computer end-users could benefit from the ability to create simple 3D models. Existing tools are geared towards the creation of production quality 3D models by professional users with sufficient background, time and motivation to overcome steep learning curves. Inflatable Icons combine diffusion-based image extrusion with a number
Positron Emission Tomography (PET) images can be reconstructed using Fourier transform methods. This paper describes the performance of a fully 3-D Backprojection-Then-Filter (BPF) algorithm on the Cray T3E machine and on a cluster of workstations. PET reconstruction of small animals is a class of problems characterized by poor counting statistics. The low-count nature of these studies necessitates 3-D reconstruction in
A three-dimensional (3D) spatial and crystallographic reconstruction of an austenitic steel microstructure was generated using optical microscopy, serial sectioning, and electron backscatter diffraction, and was incorporated into an image-based finite element model to simulate the mesoscale mechanical response of the real microstructure. The effects of crystallographic orientation and the interactions between applied loads, constraints, and microstructure are discussed. Advanced 3D
Using near-infrared femtosecond pulses we move single gold nanoparticles (AuNPs) along biological fibers such as collagen and actin filaments. While the AuNP is sliding on the fiber, its trajectory is measured in 3D with nanometer resolution providing a high resolution image of the fiber. Here, we systematically moved a single AuNP along nm-size collagen fibers and actin filament inside CHO K1 living cells mapping their 3D topography with high fidelity.
By implicit camera calibration, we mean the process of calibrating cameras without explicitly computing their physical parameters. We introduce a new implicit model based on a generalized mapping between an image plane and multiple parallel calibration planes (usually between four and seven planes). This paper presents a method of computing a relationship between a point on a three-dimensional (3D) object and its corresponding two-dimensional (2D) coordinate in a camera image. This relationship is expanded to form a mapping of points in 3D space to points in image (camera) space and vice versa that requires only matrix multiplication operations. This paper presents the rationale behind the selection of the forms of four matrices and the algorithms to calculate their parameters. Two of the matrices are used to map 3D points in object space to 2D points on the CCD camera image plane. The other two matrices are used to map 2D points on the image plane to points on user-defined planes in 3D object space. The mappings include compensation for lens distortion and measurement errors. The number of parameters used can be increased, in a straightforward fashion, to calculate and use as many parameters as needed to obtain a user-desired accuracy. Previous methods of camera calibration use a fixed number of parameters, which can limit the obtainable accuracy, and most require the solution of nonlinear equations. The procedure presented can be used to calibrate a single camera to make 2D measurements or to calibrate stereo cameras to make 3D measurements. Positional accuracy of better than 3 parts in 10,000 has been achieved. The algorithms in this paper were developed and implemented in MATLAB® (a registered trademark of The MathWorks, Inc.). We have developed a system to analyze the path of optical fiber during high-speed payout (unwinding) of optical fiber off a bobbin.
This requires recording and analyzing high-speed (5-microsecond exposure time), synchronous, stereo images of the optical fiber during payout. A 3D equation for the fiber at an instant in time is calculated from the corresponding pair of stereo images as follows. In each image, about 20 points along the 2D projection of the fiber are located. Each of these 'fiber points' in one image is mapped to its projection line in 3D space. Each projection line is mapped into another line in the second image. The intersection of each mapped projection line and a curve fitted to the fiber points of the second image (the fiber projection in the second image) is calculated. Each intersection point is mapped back to 3D space. A 3D fiber coordinate is formed from the intersection, in 3D space, of a mapped intersection point with its corresponding projection line. The 3D equation for the fiber is computed from this ordered list of 3D coordinates. This process requires a method of accurately mapping 2D (image space) to 3D (object space) and vice versa.
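The matrix-multiplication mapping can be illustrated with a stripped-down variant: fitting a single affine camera matrix by least squares. Unlike the four-matrix system described above, this sketch has no lens-distortion compensation, and the function name and toy camera are mine.

```python
import numpy as np

def fit_affine_camera(pts3d, pts2d):
    """Fit a 2x4 affine camera matrix P mapping homogeneous 3D points
    [X Y Z 1] to 2D image points by linear least squares."""
    X = np.hstack([pts3d, np.ones((len(pts3d), 1))])   # homogeneous coordinates
    P, *_ = np.linalg.lstsq(X, pts2d, rcond=None)      # solves X @ P.T = pts2d
    return P.T                                          # shape (2, 4)

# Synthetic check: project points with a known affine camera, then refit it.
rng = np.random.default_rng(0)
pts3d = rng.normal(size=(20, 3))
P_true = np.array([[800.0, 0.0, 2.0, 320.0],
                   [0.0, 800.0, 1.0, 240.0]])
pts2d = np.hstack([pts3d, np.ones((20, 1))]) @ P_true.T
P = fit_affine_camera(pts3d, pts2d)
```

Once fitted, the 3D-to-2D mapping is a single matrix multiplication, which is the computational property the implicit-calibration approach exploits; adding distortion terms simply enlarges the parameter matrices.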
Objectives: This study was performed to reveal biometric peculiarities of New Zealand rabbit kidneys by means of three- dimensional (3D) reconstruction of multidetector computed tomography (MDCT) images. Methods: Under general anaesthesia, the kidneys of eight rabbits of both sexes were scanned by high resolution imaging using a general diagnostic MDCT. The thoracic and lumbar vertebrae of the rabbit were used
A new interactive approach is presented and implemented for constructing 3D city models from Google Earth and ground images. Using the roof size provided by Google Earth and the image coordinates of the four corner points of the building rectangular facade, without any prior knowledge about the parameters of the camera, we show a method for obtaining the building height
This study aimed at developing a new automatic segmentation algorithm for human knee cartilage volume quantification from MRI. Imaging was performed using a 3T scanner and a knee coil, and the exam consisted of a double echo steady state (DESS) sequence, which contrasts cartilage and soft tissues including the synovial fluid. The algorithm was developed on MRI 3-D images in
Pierre Dodin; Jean-Pierre Pelletier; Johanne Martel-Pelletier; Francois Abram
This paper proposes a new real-time method for estimating human postures in 3D from trinocular images. In this method, an upper body orientation detection and a heuristic contour analysis are performed on the human silhouettes extracted from the trinocular images so that representative points such as the top of the head can be located. The major joint positions are estimated
Three-dimensional (3D) reconstruction of multiple two-dimensional images, registered in space and time, is increasingly being employed. These can be visualized on computer screens in full stereographic perspective using software and stereographic glasses now available. The authors have applied stereographic viewing to over 40 reconstructions of outlines of the endo- and epicardium of ultrasound-imaged hearts. They found that standard
R. W. Martin; M. Legget; J. McDonald; X.-N. Li; D. Leotta; E. Bolson; G. Bashein; C. M. Otto; F. H. Sheehan
Current approaches to 3D imaging at subcellular resolution using confocal microscopy and electron tomography, while powerful, are limited to relatively thin and transparent specimens. Here we report on the use of a new generation of dual-beam electron microscopes capable of site-specific imaging of the interior of cellular and tissue specimens at spatial resolutions about an order of magnitude better
Jurgen A. W. Heymann; Mike Hayles; Ingo Gestmann; Lucille A. Giannuzzi; Ben Lich; Sriram Subramaniam
We present a new technique for tracking 3D objects from 2D image sequences through the integration of qualitative and quantitative techniques. The deformable models are initialized based on a previously developed part-based qualitative shape segmentation system. Using a physics-based quantitative approach, objects are subsequently tracked without feature correspondence based on generalized forces computed from the stereo images. The automatic prediction of possible edge occlusion and...
Michael Chan; Dimitris N. Metaxas; Sven J. Dickinson
An automated system has been developed for visually inspecting the solder joints of SMDs (Surface Mounted Devices). The system is capable of inspecting fine-pitch components down to 0.3 mm pitch QFPs (Quad Flat Packages). A unique image detection method was also developed to obtain precise 3-D images of solder joints. The principle of a confocal microscope is employed but
Yukio Matsuyama; Toshifumi Honda; Hisae Yamamura; Hideaki Sasazawa; M. Nomoto; T. Ninomiya; A. Schick; L. Listl; P. Kollensperger; D. Spriegel; P. Mengel; R. Schneider
In this paper, we propose an active vision strategy for the construction of a 3D map in a robot brain from its stereo eye images. We show that a combination of the robot action and the image change caused by the action will improve the accuracy of the vision system parameters. If the robot stereo cameras have been accurately calibrated,
In this work, 3D ultrasonic images of the internal region of the palm of the human hand are presented and analyzed in order to evaluate the ultrasonic technique for biometric recognition purposes. A commercial ultrasound imaging machine provided with a high-frequency (12 MHz) linear array has been employed. The probe is moved in the direction orthogonal to the array
Quantitative vascular analysis is useful for treatment planning and evaluation of atherosclerosis, but it requires accurate and reliable determination of the 3D vascular structures from biplane images. To facilitate vascular analysis, we have developed a technique for reliable estimation of the biplane imaging geometry as well as the 3D vascular structures without using a calibration phantom. The centerlines of the vessels were tracked, and bifurcation points and their hierarchy were then determined automatically. The corresponding bifurcation points in the biplane images were used to obtain an estimate of the imaging geometry with the enhanced Metz-Fencil technique, starting with an initial estimate based on gantry information. This initial estimate was iteratively refined by means of non-linear optimization techniques that align the projections of the reconstructed 3D bifurcation points with their respective image points. Methods have also been developed for assessing the accuracy and reliability of the calculated 3D vascular centerlines. Accuracy was evaluated by comparing distances within a physical phantom with those in the reconstructed phantom. The reliability of the calculated geometries and 3D positions was evaluated using data from multiple projections and observers.
Sen, Anindya; Esthappan, Jacqueline; Lan, Li; Chua, Kok-Gee; Doi, Kunio; Hoffmann, Kenneth R.
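Reconstructing a 3D bifurcation point from its two projections is a linear triangulation once the imaging geometry is known. The sketch below uses affine cameras to keep the algebra linear; the actual system estimates a projective geometry with the Metz-Fencil technique, so this is an illustration rather than the paper's method.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of a 3D point (e.g. a vessel bifurcation)
    from its projections x1, x2 in two views with 2x4 affine camera
    matrices P1, P2: stack the projection equations and solve by
    least squares."""
    A = np.vstack([P1[:, :3], P2[:, :3]])
    b = np.concatenate([x1 - P1[:, 3], x2 - P2[:, 3]])
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X

# Two orthogonal toy views (e.g. frontal and lateral biplane gantry).
P1 = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])   # images (X, Y)
P2 = np.array([[0.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 0.0]])   # images (Z, Y)
X_true = np.array([1.0, 2.0, 3.0])
x1 = P1[:, :3] @ X_true + P1[:, 3]
x2 = P2[:, :3] @ X_true + P2[:, 3]
X = triangulate(P1, P2, x1, x2)
```

With corresponding bifurcation points identified in both views, every point on the tracked centerlines can be lifted to 3D this way, and the residual of the least-squares solve doubles as a consistency check on the estimated geometry.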
In this study we analyze the feasibility of three dimensional (3D) electromagnetic (EM) imaging from a single borehole. The proposed logging tool consists of three mutually orthogonal magnetic dipole sources and multiple three component magnetic field receivers. A sensitivity analysis indicates that the most important sensor configuration for providing 3D geological information about the borehole consists of a transmitter with moment aligned parallel to the axis of the borehole, and receivers aligned perpendicular to the axis. The standard coaxial logging configuration provides the greatest depth of sensitivity compared to other configurations, but offers no information regarding 3D structure. Two other tool configurations in which both the source and receiver are aligned perpendicular to the borehole axis provide some directional information and therefore better image resolution, but not true 3D information. A 3D inversion algorithm has been employed to demonstrate the plausibility of 3D inversion using data collected with the proposed logging tool. This study demonstrates that an increase in image resolution results when three orthogonal sources are incorporated into the logging tool rather than a single axially aligned source.
Wide field of view (FOV) retinal imaging with high resolution has been demonstrated for quantitative analysis of retinal microstructures. An adaptive optics scanning laser ophthalmoscope (AO-SLO) that was built in our laboratory was improved by a customized scanning protocol for scanning a wide region. A post-processing program was developed for generating wide-FOV retinal images. The high-resolution retinal image with
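Generating a wide-FOV montage from overlapping AO-SLO tiles requires registering neighboring frames. The abstract does not describe the actual post-processing program, so the sketch below shows one common registration building block, phase correlation, purely for illustration.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the cyclic (dy, dx) shift such that b == np.roll(a, (dy, dx)).

    Uses the normalized cross-power spectrum; its inverse FFT peaks at the shift.
    """
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap to signed shifts so a small negative offset is not reported
    # as an almost-full-frame positive one.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

In a montage pipeline the estimated shifts between adjacent tiles would be chained (or globally optimized) to place every tile in the wide-field mosaic.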
An apparent 3-D image can be perceived from only two 2-D images displayed at different depths, when an observer views them from the direction in which they are overlapped. The two 2-D images are created from an original 2-D image by dividing its luminance according to independently obtained depth information. Subjective test results show that (1) an apparent 3-D image is perceived and (2) the perceived depth continuously varies according to the change in luminance ratio between the two 2-D images. PMID:14967205
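The luminance-division scheme can be sketched directly: each pixel's luminance is split between the near and far display planes in proportion to a depth map, so the two planes always sum back to the original image. The [0, 1] depth convention below is an assumption, not the paper's stated encoding.

```python
import numpy as np

def split_by_depth(image, depth):
    """Divide each pixel's luminance between a near and a far display plane.

    depth is assumed to lie in [0, 1]: 0 puts all luminance on the far
    plane, 1 puts it all on the near plane. Intermediate values split the
    luminance, which is what shifts the perceived depth continuously.
    """
    near = image * depth
    far = image * (1.0 - depth)
    return near, far
```

Because `near + far` reproduces the original image exactly, total displayed luminance is preserved regardless of the depth map.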
Common-image gathers indexed by opening angle and azimuth at imaging points in 3D situations are the key inputs for amplitude-variation-with-angle and velocity analysis by tomography. Gaussian beam depth migration, which propagates each ray as a Gaussian beam and sums the contributions from all the individual beams to produce the wavefield, can overcome the multipath problem, image steep reflectors and, even more importantly, provide a convenient and efficient strategy for extracting azimuth-opening angle domain common-image gathers (ADCIGs) in 3D seismic imaging. We present a method for computing the azimuth and opening angle at imaging points to output 3D ADCIGs by computing the source and receiver wavefield direction vectors, which are restricted to the effective region of the corresponding Gaussian beams. In this paper, the basic principle of Gaussian beam migration (GBM) is briefly introduced, and the technology and strategy for yielding ADCIGs by GBM are analyzed. Numerical tests and field data application demonstrate that the azimuth-opening angle domain imaging method in 3D Gaussian beam depth migration is effective.
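A hedged sketch of extracting the two gather indices from ray direction vectors at an image point: the opening angle is half the angle between the source and receiver ray directions, and the azimuth is taken here from the horizontal projection of their difference. This is one common convention, not necessarily the paper's; the restriction of the direction vectors to the effective region of each Gaussian beam is omitted.

```python
import numpy as np

def opening_angle_azimuth(p_s, p_r):
    """Opening angle (deg) and azimuth (deg) from source/receiver ray directions.

    p_s, p_r: direction vectors (x, y, z) of the source and receiver
    wavefields at the image point; they need not be unit length.
    """
    p_s = np.asarray(p_s, float) / np.linalg.norm(p_s)
    p_r = np.asarray(p_r, float) / np.linalg.norm(p_r)
    # Half the angle between the incident and scattered ray directions.
    opening = 0.5 * np.degrees(np.arccos(np.clip(np.dot(p_s, p_r), -1.0, 1.0)))
    # Azimuth from the horizontal projection of the direction difference.
    d = p_r - p_s
    azimuth = np.degrees(np.arctan2(d[1], d[0])) % 360.0
    return opening, azimuth
```

Binning image contributions by these two values at every image point is what assembles the azimuth-opening angle domain common-image gathers.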
The two major aspects of camera misalignment that cause visual discomfort when viewing images on a 3D display are vertical and torsional disparities. While vertical disparities are uniform throughout the image, torsional rotations introduce a range of disparities that depend on the location in the image. The goal of this study was to determine the discomfort ranges for the kinds of natural images that people are likely to take with 3D cameras rather than the artificial line and dot stimuli typically used for laboratory studies. We therefore assessed visual discomfort on a five-point scale from 'none' to 'severe' for artificial misalignment disparities applied to a set of full-resolution images of indoor scenes. For viewing times of 2 s, discomfort ratings for vertical disparity in both 2D and 3D images rose rapidly toward the discomfort level of 4 ('severe') by about 60 arcmin of vertical disparity. Discomfort ratings for torsional disparity in the same image rose only gradually, reaching only the discomfort level of 3 ('strong') by about 50 deg of torsional disparity. These data were modeled with a second-order hyperbolic compression function incorporating a term for the basic discomfort of the 3D display in the absence of any misalignments through a Minkowski norm. These fits showed that, at a criterion discomfort level of 2 ('moderate'), acceptable levels of vertical disparity were about 15 arcmin. The corresponding values for the torsional disparity were about 30 deg of relative orientation.
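The fitted model can be sketched as a Minkowski combination of the two misalignment terms fed through a second-order hyperbolic compression toward the top of the rating scale, with a baseline term for the display's intrinsic discomfort. All parameter values below are placeholders, not the paper's fitted coefficients.

```python
def discomfort(vertical_arcmin, torsion_deg,
               v_half=15.0, t_half=30.0, base=1.0, d_max=5.0, m=2.0):
    """Illustrative discomfort model on a 1-5 rating scale.

    The two disparity terms, scaled by placeholder half-saturation
    constants, are combined with a Minkowski norm of order m and passed
    through a second-order hyperbolic compression that saturates at d_max.
    """
    e = (abs(vertical_arcmin / v_half) ** m +
         abs(torsion_deg / t_half) ** m) ** (1.0 / m)
    return base + (d_max - base) * e**2 / (e**2 + 1.0)
```

The hyperbolic form captures the paper's qualitative finding: ratings rise steeply at first with misalignment and then compress as they approach the top of the scale.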